Solar water disinfection (SODIS) is a type of portable water purification that uses solar energy to make biologically contaminated water (e.g., by bacteria, viruses, protozoa, and worms) safe to drink. Water contaminated with non-biological agents such as toxic chemicals or heavy metals requires additional steps beyond solar disinfection to make it safe to drink.
Solar water disinfection can be accomplished in three ways: with electricity produced by solar photovoltaic panels, with heat (solar thermal), and with solar ultraviolet light.
Solar photovoltaic panels can generate electricity to drive electrolytic processes that produce disinfecting agents in the water (electrolytic disinfection). A second approach uses stored solar electricity from a battery, operating at night or at low light levels, to power an ultraviolet lamp that performs ultraviolet water disinfection.
Solar thermal water disinfection uses heat from the Sun to raise water to 70-100 °C for a short period of time. A number of approaches exist here. Solar heat collectors can have lenses in front of them, or use reflectors. They can also use varying levels of insulation or glazing. In addition, some solar thermal water disinfection processes are batch-based, while others (through-flow solar thermal disinfection) operate almost continuously while the sun shines. Water heated to temperatures below 100 °C is generally referred to as pasteurized water.
The ultraviolet part of sunlight can also kill pathogens in water. The SODIS method uses a combination of UV light and elevated temperature (solar thermal) to disinfect water using only sunlight and PET plastic bottles. SODIS is a free and effective method for decentralized water treatment, and is generally recommended by the World Health Organization as a viable method for household water treatment and safe storage. SODIS is already applied in many developing countries. Educational pamphlets on the method are available in many languages, each equivalent to the English-language version.
Exposure to sunlight has been shown to inactivate diarrhea-causing organisms in polluted drinking water. The inactivation of pathogenic organisms is attributed to the UV-A part of sunlight (wavelength 320-400 nm), which reacts with oxygen dissolved in the water to produce highly reactive forms of oxygen (oxygen free radicals and hydrogen peroxide) that damage pathogens, interfere with their metabolism, and destroy bacterial cell structures; meanwhile, the full band of solar energy (from infrared to UV) heats the water.
At a water temperature of about 30 °C (86 °F), a solar irradiance of at least 500 W/m² (all spectral light) is required for about 5 hours for SODIS to be efficient. This dose contains an energy of 555 Wh/m² in the range of UV-A and violet light (350-450 nm), corresponding to about 6 hours of mid-latitude (European) midday summer sunshine.
At water temperatures higher than 45 °C (113 °F), synergistic effects of UV radiation and temperature further enhance disinfection efficiency. Above 50 °C (122 °F), the bacterial count drops three times faster.
Process for household application
Guides for the household use of SODIS describe the process.
Colorless, transparent PET water or soda pop bottles of 2-liter or smaller size, with few surface scratches, are selected for use. Glass bottles are also suitable. Any labels are removed and the bottles are washed before first use. Water from possibly contaminated sources is poured into the bottles, using the clearest water available. Where the turbidity is higher than 30 NTU, it is necessary to filter or precipitate out particulates prior to exposure to the sunlight. Filters can be made locally by stretching fabric over inverted bottles with the bottoms cut off. To improve oxygen saturation, the guides recommend that bottles be filled three-quarters full, shaken for 20 seconds (with the cap on), then filled completely, recapped, and checked for clarity.
The filled bottles are then exposed to the fullest possible sunlight, ideally placed on a sloped, Sun-facing reflective metal surface; a corrugated metal roof works well, as it heats the bottles faster than a thatched roof would. Overhanging structures or plants that shade the bottles must be avoided, as they reduce both illumination and heating. After sufficient exposure, the treated water can be consumed directly from the bottle or poured into clean drinking cups. The risk of re-contamination is minimized if the water is stored in the bottles; refilling and storage in other containers increases the risk of contamination.
|Weather conditions|Minimum treatment duration|
|---|---|
|Sunny (less than 50% cloud cover)|6 hours|
|Cloudy (50-100% cloud, little to no rain)|2 days|
|Continuous rainfall|Unsatisfactory performance; use rainwater harvesting instead|
The most favorable regions for the SODIS method are located between 15 °N and 35 °N, and also 15 °S and 35 °S. These regions have high levels of solar radiation, with over 90% of sunlight reaching the earth's surface as direct radiation. The second most favorable region lies between latitudes 15 °N and 15 °S. These regions have high levels of scattered radiation, with about 2500 hours of sunshine annually.
Local education in the use of SODIS is important to avoid confusion between PET and other bottle materials. SODIS should not be promoted without a proper assessment of existing hygienic practices and diarrhea incidence, and community trainers must themselves be trained first.
SODIS is an effective method for treating water where fuel or cookers are unavailable or prohibitively expensive. Even where fuel is available, SODIS is a more economical and environmentally friendly option. The application of SODIS is limited where suitable bottles are not available, or where the water is highly turbid; in the latter case SODIS cannot be used alone, and additional filtering is necessary.
A basic field test to determine whether the water is too turbid for the SODIS method to work properly is the newspaper test: the user places a filled bottle on top of newspaper print and looks down through the water. If the letters are readable, the turbidity does not exceed 30 NTU and the water can be used for the SODIS method. If the letters are not readable, the water must be pretreated.
In theory, the method could be used in disaster relief or refugee camps. However, supplying bottles can be more difficult than providing equivalent disinfecting tablets containing chlorine, bromine, or iodine. In addition, in some circumstances, it may be difficult to guarantee that the water will be left in the sun for the necessary time.
Other methods for household water treatment exist, such as chlorination, various filtration procedures, and flocculation/disinfection. The selection of the appropriate method should be based on effectiveness, the co-occurrence of other types of pollution (turbidity, chemical pollutants), treatment costs, labor input and convenience, and user preference.
When the water is highly turbid, SODIS cannot be used alone; additional filtering or flocculation is then necessary to clarify the water prior to SODIS treatment. Recent work has shown that common table salt (NaCl) is an effective flocculation agent for decreasing turbidity caused by some types of soil, which could extend SODIS to more turbid waters at low cost.
SODIS may alternatively be implemented using plastic bags. SODIS bags have been found to be more effective than SODIS bottles: with a shallow water layer of 1 cm to 6 cm, they heat up faster than bottles and inactivate Vibrio cholerae more effectively, presumably because of their higher surface-area-to-volume ratio. In remote regions where bottles are not widely available, bags can be packed more densely than bottles and shipped at lower cost. The disadvantages of plastic bags are that they are more fragile, are more difficult to handle when filled with water, and typically require that the water be transferred to a second container for drinking.
Another important benefit of SODIS is that it is a point-of-use water treatment: the water is treated in the same container from which it is consumed, decreasing the risk of secondary water contamination.
If the water bottles are not left in the Sun for the proper length of time, the water may not be safe to drink and could cause illness. Since sunlight is required, the method also depends on good weather.
The following issues should also be considered:
- Bottle material
- Some glass or PVC materials may prevent ultraviolet light from reaching the water. Commercially available bottles made of PET are recommended; their handling is also much more convenient than that of glass bottles. Polycarbonate (resin identification code 7) blocks all UVA and UVB rays, and therefore should not be used. Bottles should be clear; colored bottles, such as green lemon/lime soda pop bottles, reduce the UV reaching the water.
- Aging of plastic bottles
- SODIS efficiency depends on the physical condition of the plastic bottles; scratches and other signs of wear reduce the efficiency of SODIS. Heavily scratched or old, cloudy bottles should be replaced.
- Shape of containers
- The intensity of the UV radiation decreases with increasing water depth. At a water depth of 10 cm (4 inches) and moderate turbidity of 26 NTU, UV-A radiation is reduced to 50%. PET soft drink bottles are therefore the most practical containers for the SODIS application.
- Oxygen
- Sunlight produces highly reactive forms of oxygen (oxygen free radicals and hydrogen peroxides) in the water. These reactive molecules contribute to the destruction process of microorganisms. Under normal conditions (rivers, creeks, wells, ponds, taps) water contains sufficient oxygen (more than 3 mg/L) and does not have to be aerated before the application of SODIS.
- Leaching of bottle material
- There has been some concern over whether plastic bottles can leach chemicals into the water they hold. The Swiss Federal Laboratories for Materials Testing and Research have examined the diffusion of adipates and phthalates (DEHA and DEHP) from new and reused PET bottles into water during solar exposure. The concentrations found in the water after 17 hours at 60 °C (140 °F) were far below WHO guidelines for drinking water, and of the same magnitude as the concentrations of phthalate and adipate found in high-quality tap water. Concerns about the general use of PET bottles were also expressed by researchers from the University of Heidelberg regarding the release of antimony from PET. However, the antimony concentrations found were orders of magnitude below WHO and national guidelines for antimony in drinking water.
- Regrowth of bacteria
- Once removed from sunlight, any remaining bacteria may reproduce in the dark. A 2010 study showed that adding just 10 parts per million of hydrogen peroxide is effective in preventing the regrowth of wild Salmonella.
- Toxic chemicals
- Solar water disinfection does not remove toxic chemicals that may be present in the water, such as factory waste.
Health impact, diarrhea reduction
According to the World Health Organization, more than two million people die each year from preventable water-borne diseases, and around one billion people lack access to safe drinking water.
It has been shown that the SODIS method (like other methods of household water treatment) can effectively remove pathogenic contamination from water. However, infectious diseases are also transmitted through other pathways, i.e., due to a general lack of sanitation and hygiene. Studies of diarrhea reduction among SODIS users show reduction values of 30-80%.
The effectiveness of the SODIS method was first discovered by Aftim Acra at the American University of Beirut in the early 1980s. Follow-up research was conducted by the groups of Martin Wegelin at the Swiss Federal Institute of Aquatic Science and Technology (Eawag) and Kevin McGuigan at the Royal College of Surgeons in Ireland. Clinical control trials were pioneered by Ronan Conroy of the RCSI team in collaboration with Michael Elmore-Meegan of ICROSS.
A joint research project on SODIS was implemented by the following institutions:
- Royal College of Surgeons in Ireland (RCSI), Ireland (coordination)
- University of Ulster (UU), United Kingdom
- CSIR Environmentek, South Africa
- The Institute of Water and Sanitation Development ( IWSD ), Zimbabwe
- Plataforma Solar de Almería (CIEMAT-PSA), Spain
- University of Leicester (UL), United Kingdom
- The International Commission for the Relief of Suffering and Starvation ( ICROSS ), Kenya
- University of Santiago de Compostela (USC), Spain
- Swiss Federal Institute of Aquatic Sciences and Technology (Eawag), Switzerland
The project is a multi-country study in Zimbabwe , South Africa and Kenya .
Other developments include a continuous-flow disinfection unit and solar disinfection using titanium dioxide film over glass cylinders, which prevents the regrowth of coliform bacteria after SODIS.
Research has shown that a number of low-cost additives can accelerate SODIS, and that such additives might make it more efficient and acceptable to users. A 2008 study showed that powdered seeds of five natural legumes – Vigna unguiculata (cowpea), Phaseolus mung (black lentil), Glycine max (soybean), Pisum sativum (green pea), and Arachis hypogaea (peanut) – when evaluated as natural flocculants for the removal of turbidity, were as effective as alum and superior for clarification, in that the optimum dosage was low (1 g/L), flocculation was rapid (7-25 minutes, depending on the seed used), and the water hardness and pH were essentially unaltered. Later studies have used chestnuts, oak acorns, and Moringa oleifera (drumstick tree) seeds for the same purpose.
Other research has explored additives that increase the production of oxygen radicals under solar UV-A. Recently, researchers at the National Center for Sensor Research and the Biomedical Diagnostics Institute at Dublin City University developed an inexpensive printable UV dosimeter for SODIS applications that can be read using a mobile phone: the phone's camera acquires an image of the sensor, and custom software running on the phone analyzes the sensor's color to provide a quantitative measurement of UV dose.
In isolated regions, smoke from wood fires used to boil water contributes to lung disease, and research groups have found that boiling water is often neglected because of the difficulty of gathering wood, which is scarce in many areas. When presented with basic household water treatment options, users have shown a preference for the SODIS method and other simple water treatment methods.
The Swiss Federal Institute of Aquatic Science and Technology (EAWAG) coordinates SODIS promotion projects in 33 countries including Bhutan, Bolivia, Burkina Faso, Cambodia, Cameroon, DR Congo, Ecuador, El Salvador, Ethiopia, Ghana, Guatemala, Guinea, Honduras, India, Indonesia, Kenya, Laos, Malawi, Mozambique, Nepal, Nicaragua, Pakistan, Peru, Philippines, Senegal, Sierra Leone, Sri Lanka, Togo, Uganda, Uzbekistan, Vietnam, Zambia, and Zimbabwe.
SODIS projects are funded by, among others, the SOLAQUA Foundation , several Lions Clubs , Rotary Clubs, Migros , and the Michel Comte Water Foundation.
SODIS has also been applied in several communities in Brazil, one of them being Prainha do Canto Verde in Beberibe, west of Fortaleza. The method has seen a good deal of success there, since temperatures often exceed 40 °C (104 °F) and shade is limited.
For public health workers reaching out to communities in need of affordable, effective, and sustainable water treatment, one of the most important tasks is teaching the importance of water quality in the context of health promotion and disease prevention, while educating about the methods themselves. Although these treatment methods may be demanding to adopt, they can be necessary where water-borne disease is prevalent.
- Appropriate technology
- Ultraviolet Germicidal Irradiation
- Water Pasteurization Indicator
These days, we’re often warned about the dangers of hacking and data theft, or reminded of the need to protect our vital documents, sensitive information, and credentials by encryption, when they’re transmitted over the internet or an unprotected network. But what does that mean, exactly?
Well, in simple terms, encryption is the process of taking a recognizable item (such as a written message, a list of figures, or an image) and scrambling it so that it becomes unrecognizable to everyone except the person or entity intended to recognize it – who of course must have a key for unscrambling what's been done to make it unrecognizable.
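As a toy illustration of scrambling and unscrambling with a shared key – and nothing like a real cipher – here is a one-function XOR scheme in Python; the message, key, and function name are all invented for the example:

```python
# Toy scrambler (NOT secure): XOR each byte of the message with a
# repeating key. XOR is its own inverse, so running the same function
# again with the same key recovers the original message.
def xor_scramble(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"meet me at noon"
key = b"secret"

ciphertext = xor_scramble(message, key)    # unrecognizable without the key
recovered = xor_scramble(ciphertext, key)  # unscrambled with the same key

assert ciphertext != message
assert recovered == message
```

The essential point is the symmetry: anyone holding the key can reverse the scrambling; anyone without it sees only noise.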
Though the results may look random to the casual observer, encryption isn’t a haphazard process. There are established mechanisms and systematic techniques. And yes – there’s mathematics involved.
Math and Matrices in Encryption
What’s commonly referred to as an encryption algorithm is really just a collection of operations performed on the elements (alphabet letters, numbers, bits of digital data, etc.) making up an object to be scrambled or encrypted. Typically, these operations will manifest as some kind of mathematical or geometric formula, governing how the component parts of an original object are dispersed to make up its encrypted counterpart.
As encryption is all about making the original source material indecipherable or unrecognizable, these mathematical functions typically involve moving its constituent elements around (shifting them to different positions), or replacing them with something else (substitution). Matrix functions are ideal for this, which is why they’re involved in many of the more advanced encryption techniques.
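To make the matrix idea concrete, here is a tiny hypothetical example (a 3-element message, not any real cipher) of a permutation matrix moving the elements of a message around:

```python
# A 3x3 permutation matrix: each row contains a single 1, whose column
# index says which input element lands in that output position.
P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def permute(P, v):
    # output position i takes the input element selected by the 1 in row i
    return [v[row.index(1)] for row in P]

v = ["A", "B", "C"]
print(permute(P, v))  # ['B', 'C', 'A'] -- a cyclic shift of the original
```

Scaling the same idea up to larger matrices, and combining it with substitution steps, is exactly what the advanced techniques described later do.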
Doing the Rounds
The more sophisticated or advanced an encryption technique becomes, the more complex its encryption algorithm must be. You’d logically expect this to mean that the algorithm would have to consist of many operations of great complexity – and to a certain extent, this is true.
This brings us to the concept of a “round”. In cryptography, a round is made up of a number of algorithmic building blocks (mathematical functions, matrix transformations, etc.) strung together to create a function that’s run multiple times on source material to encrypt it according to a specific cipher or encryption algorithm.
The number of rounds performed on the source equates to how many times the information passes through the algorithm before it’s considered to have been sufficiently encrypted.
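A minimal sketch of that structure follows; the round function here (a bit rotation plus a key XOR) is invented purely for illustration and is not any real cipher's round:

```python
# One "round": an invertible mixing step -- rotate the 16-bit state
# left by 3 bits, then XOR in a round key. The "cipher" is just this
# round applied once per round key.
MASK = 0xFFFF

def round_fn(state, round_key):
    state = ((state << 3) | (state >> 13)) & MASK  # 16-bit rotate left by 3
    return state ^ round_key

def encrypt(block, round_keys):
    for rk in round_keys:              # one pass through the round per key
        block = round_fn(block, rk)
    return block

def decrypt(block, round_keys):
    for rk in reversed(round_keys):    # undo the rounds in reverse order
        block ^= rk
        block = ((block >> 3) | (block << 13)) & MASK  # rotate right by 3
    return block

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
c = encrypt(0xBEEF, keys)
assert decrypt(c, keys) == 0xBEEF
```

Each extra round key adds one more pass of mixing, which is how real ciphers trade a modest per-round function for overall strength.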
When encrypting data for digital transmission, we’re dealing with bits or bytes of information. A block cipher is an encryption algorithm which acts on a fixed-length group of bits, which is referred to as a block. This is why encryption algorithms are said to be 128-bit, 192-bit, 256-bit, and so on.
A block cipher’s transformations are specified by a symmetric encryption key, and many are performed by specifying a round which is then run multiple times.
The Feistel Cipher is a design model which formed the basis of many different block ciphers. Cryptographic systems based on Feistel use the same algorithm for encrypting and decrypting data.
Typically, the encryption process for a Feistel Cipher imposes multiple rounds of processing onto the plain text of the source. Each round involves a substitution step, followed by a permutation step.
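The rounds described above can be sketched as a minimal Feistel network. The round function F and the keys below are invented stand-ins (real ciphers such as DES use far more elaborate F functions); the point is the structure, and note that decryption runs the same steps with the round keys in reverse order and never needs to invert F:

```python
# Feistel network on a 32-bit block split into two 16-bit halves.
# Each round: new L = old R, new R = old L XOR F(old R, key).
def F(half, key):
    return ((half * 31) ^ key) & 0xFFFF  # arbitrary stand-in round function

def feistel_encrypt(block, keys):
    L, R = block >> 16, block & 0xFFFF
    for k in keys:
        L, R = R, L ^ F(R, k)
    return (L << 16) | R

def feistel_decrypt(block, keys):
    L, R = block >> 16, block & 0xFFFF
    for k in reversed(keys):           # same structure, reversed key order
        L, R = R ^ F(L, k), L
    return (L << 16) | R

keys = [7, 13, 42, 99]
c = feistel_encrypt(0xDEADBEEF, keys)
assert feistel_decrypt(c, keys) == 0xDEADBEEF
```

That F never needs an inverse is the elegant trick of the Feistel design: the XOR structure guarantees reversibility regardless of what F does.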
DES and Triple DES
The Data Encryption Standard (DES) is a symmetric-key block cipher derived from the Feistel model. It was published by the National Institute of Standards and Technology (NIST), and uses a 16-round Feistel structure operating on a block size of 64 bits.
The key length for DES is 64 bits, but this is effectively reduced to 56 bits, as 8 of the 64 bits in the key aren’t used by the encryption algorithm, acting instead as check bits.
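A small sketch of that accounting, assuming the common convention that the low-order bit of each of the 8 key bytes is the check bit:

```python
# A DES key is 8 bytes (64 bits); one bit of each byte is a check
# (parity) bit, so only 7 bits per byte -- 56 in total -- act as key.
def effective_key(key: bytes) -> int:
    assert len(key) == 8
    value = 0
    for byte in key:
        value = (value << 7) | (byte >> 1)  # keep the top 7 bits only
    return value

key = bytes([0x13, 0x34, 0x57, 0x79, 0x9B, 0xBC, 0xDF, 0xF1])
print(effective_key(key).bit_length())  # at most 56 bits of actual key
```

This is why brute-forcing DES means searching a 2^56 space rather than 2^64 – a distinction that mattered greatly once hardware caught up.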
DES has some great strengths as a cipher. Any small changes made in the original plain text result in huge changes in the cipher text, once the algorithm is run. This is known as the “Avalanche Effect”. Each bit of cipher text also depends on many bits of plain text, making it more difficult to crack.
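The avalanche effect can be observed in any well-mixed cryptographic function. DES itself is not in the Python standard library, so this sketch uses SHA-256 as a stand-in to count how many output bits flip when a single word of the input changes:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # number of bit positions where the two byte strings differ
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"attack at dawn").digest()
d2 = hashlib.sha256(b"attack at dusk").digest()  # tiny input change

# roughly half of the 256 output bits flip
print(bit_diff(d1, d2), "of", len(d1) * 8, "bits differ")
```

A function without this property would leak structure: similar plaintexts would yield similar ciphertexts, giving an attacker a foothold.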
Difficult, but not impossible – and an improved variant known as Triple DES or 3-DES was adopted to address the vulnerability of DES to brute force attacks stemming from its relatively small key size. However, Triple DES was found to be too slow for practical applications.
The Advanced Encryption Standard (AES)
Hoping for an improvement on the performance of DES and Triple DES, the National Institute of Standards and Technology (NIST) started development of an Advanced Encryption Standard (AES) in 1997. This was to be the symmetric block cipher of choice for the US government in protecting classified information – and one that could be deployed in software and hardware across the world, for encrypting sensitive data.
NIST specified that the algorithm chosen for the AES should be a block cipher capable of handling 128 bit blocks, using keys sized at 128, 192, and 256 bits. It should also be resistant to attack, low-cost in terms of computational power and memory usage (the algorithm itself being released on a global, nonexclusive and royalty-free basis), and able to be deployed on a range of software and hardware platforms.
Fifteen symmetric key algorithm schemes were presented for analysis, from which five finalists were chosen:
- MARS, from an IBM Research team
- RC6, from RSA Security
- Rijndael, submitted by Joan Daemen and Vincent Rijmen, two Belgian cryptographers
- Serpent, submitted by Ross Anderson, Eli Biham and Lars Knudsen
- Twofish, submitted by researchers from Counterpane Internet Security
AES uses an iterative model, rather than the Feistel structure. Its basis is a “substitution–permutation network” consisting of a set of linked operations, some replacing inputs with specific outputs (substitution), and others shifting components of the plain text source around (permutation).
AES computations are performed on bytes, rather than bits, with 128 bits of a plain text block being treated as 16 bytes. These 16 bytes may be arranged in four columns and four rows, for processing as a matrix.
The Shift Row Transformation
The matrix function crucial to an AES cipher is known as a shift row transformation. As its name suggests, the function shifts the bytes in each row of a matrix by a certain offset, determined by the encryption algorithm.
For AES, the first row of the matrix is left unchanged. Each byte in the second row is shifted one position to the left. Bytes in the third and fourth rows are shifted by offsets of two and three, respectively. The shifting pattern for blocks of 128 bits and 192 bits is the same, with each row n being shifted left circular by n-1 bytes.
So for a 128-bit block (16 bytes under AES, a four by four matrix), the shift row transformation looks like this:
|1 5 9 13||1 5 9 13|
|2 6 10 14||6 10 14 2|
|3 7 11 15||11 15 3 7|
|4 8 12 16||16 4 8 12|
For 192 bits, the transformation takes this form:
|1 5 9 13 17 21||1 5 9 13 17 21|
|2 6 10 14 18 22||6 10 14 18 22 2|
|3 7 11 15 19 23||11 15 19 23 3 7|
|4 8 12 16 20 24||16 20 24 4 8 12|
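Both tables above follow the single rule stated earlier – row n is rotated left circularly by n−1 bytes – which a few lines of Python can reproduce (the state is laid out as in the article, bytes numbered down the columns):

```python
# Shift-row transformation for 128- and 192-bit blocks: row n
# (1-indexed) is rotated left circularly by n-1 bytes.
def shift_rows(matrix):
    return [row[i:] + row[:i] for i, row in enumerate(matrix)]

# 128-bit block: 16 bytes, numbered column by column -> 4 rows of 4
state = [[1, 5, 9, 13],
         [2, 6, 10, 14],
         [3, 7, 11, 15],
         [4, 8, 12, 16]]

for row in shift_rows(state):
    print(row)
# [1, 5, 9, 13]
# [6, 10, 14, 2]
# [11, 15, 3, 7]
# [16, 4, 8, 12]
```

For 256-bit blocks, Rijndael instead uses offsets of 0, 1, 3, and 4, so this simple n−1 rule does not carry over to that case.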
The Rijndael Cipher
It was the Rijndael Cipher (whose name derives from the surnames of its creators, Rijmen and Daemen) that was ultimately selected as the basis for the new Advanced Encryption Standard (AES).
The cipher consists of a variable number of rounds: 9 if both the block and key are 128 bits long, and 11 if either the block or the key is 192 bits long, and neither one is longer than that. This doesn’t include an extra round performed at the end of the encryption, with one step omitted.
Rijndael also allows for encryption with 256-bit keys, in which case the shift row transformation looks like this:
|1 5 9 13 17 21 25 29||1 5 9 13 17 21 25 29|
|2 6 10 14 18 22 26 30||6 10 14 18 22 26 30 2|
|3 7 11 15 19 23 27 31||15 19 23 27 31 3 7 11|
|4 8 12 16 20 24 28 32||20 24 28 32 4 8 12 16|
Each regular round involves four steps:
- A Byte Substitution
- The Shift Row transformation
- A Mix Column step, where matrix multiplication is performed
- An Add Round Key, where a logical operation known as XOR is performed
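The last of these steps is easy to show concretely. Here is a sketch of Add Round Key, with the state and round key values invented for the example:

```python
# Add Round Key: XOR the 16-byte state with the 16-byte round key,
# byte for byte. Because XOR is self-inverse, applying the same round
# key again during decryption undoes the step exactly.
def add_round_key(state: bytes, round_key: bytes) -> bytes:
    return bytes(s ^ k for s, k in zip(state, round_key))

state = bytes(range(16))            # example 4x4 state, flattened
round_key = bytes([0xA5] * 16)      # example round key

mixed = add_round_key(state, round_key)
assert add_round_key(mixed, round_key) == state  # XOR undoes itself
```

The other three steps (byte substitution via an S-box, the row shift, and the column mix over GF(2^8)) are each invertible too, which is what lets the full round sequence be run backwards for decryption.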
AES, as based on the Rijndael Cipher, was adopted as a US federal government standard in 2002, and has since proven resistant to most forms of attack targeting it. The exceptions have been "side-channel" attacks focusing on weaknesses in the implementation or key management of specific encryption products based on AES.
The Australian Senate is the upper house of the bicameral Parliament of Australia, the lower house being the House of Representatives. The composition and powers of the Senate are established in Chapter I, Part II of the Australian Constitution. There are a total of 76 senators: 12 senators are elected from each of the six states (regardless of population) and two from each of the two autonomous internal territories (the Australian Capital Territory and the Northern Territory). Senators are popularly elected under the single transferable vote system of proportional representation.
Unlike upper houses in other Westminster parliamentary systems, the Senate is vested with significant power, including the capacity to reject all bills, including budget and appropriation bills, initiated by the government in the House of Representatives, making it a distinctive hybrid of British Westminster bicameralism and US-style bicameralism. As a result of proportional representation, the chamber features a multitude of parties vying for power. The governing party or coalition, which has to maintain the confidence of the lower house, has not held a majority in the Senate since the 2005–2008 period (and before that, not since 1981) and usually needs to negotiate with other parties and independents to get legislation passed.
Senators normally serve fixed six-year terms (from 1 July to 30 June). At most federal elections, the seats of 40 of the 76 senators (half of the 72 senators from the six States and all four of the senators from the Territories) are contested, along with the entire House of Representatives; such an election is sometimes known as a half-Senate election. The seats of senators elected at a half-Senate election are not contested at the next election, provided it too is a half-Senate election. However, under some circumstances the entire Senate is dissolved early, in what is known as a double dissolution. Following a double dissolution, half the senators representing States serve terms ending on the third 30 June following the election (i.e. slightly less than three years) and the rest serve a six-year term. The term of senators representing a Territory expires at the same time as there is an election for the House of Representatives. While there is no constitutional requirement for the election of senators to take place at the same time as those for members of the House of Representatives, the government usually synchronises the dates of elections for the Senate and House of Representatives.
Origins and role
The Commonwealth of Australia Constitution Act (Imp.) of 1900 established the Senate as part of the new system of dominion government in newly federated Australia. From a comparative governmental perspective, the Australian Senate exhibits distinctive characteristics. Unlike upper Houses in other Westminster system governments, the Senate is not a vestigial body with limited legislative power. Rather it was intended to play – and does play – an active role in legislation. Rather than being modelled solely after the House of Lords, as the Canadian Senate was, the Australian Senate was in part modelled after the United States Senate, by giving equal representation to each state and equal powers. The Constitution intended to give less populous states added voice in a Federal legislature, while also providing for the revising role of an upper house in the Westminster system.
Although the Prime Minister and Treasurer, by convention, are members of the House of Representatives (after John Gorton was appointed prime minister in 1968, he resigned from the Senate and was elected to the House), other members of the Cabinet may come from either house, and the two Houses have almost equal legislative power. As with most upper chambers in bicameral parliaments, the Senate cannot introduce or amend appropriation bills (bills that authorise government expenditure of public revenue) or bills that impose taxation, that role being reserved for the lower house; it can only approve, reject or defer them. That degree of equality between the Senate and House of Representatives reflects the desire of the Constitution's authors to address smaller states' desire for strong powers for the Senate as a way of ensuring that the interests of more populous states as represented in the House of Representatives did not totally dominate the government. This situation was also partly due to the age of the Australian constitution – it was enacted before the confrontation in 1909 in Britain between the House of Commons and the House of Lords, which ultimately resulted in the restrictions placed on the powers of the House of Lords by the Parliament Acts 1911 and 1949.
In practice, however, most legislation (except for private member's bills) in the Australian Parliament is initiated by the Government, which has control over the lower house. It is then passed to the Senate, which has the opportunity to amend the bill, pass or reject it. In the majority of cases, voting takes place along party lines, although there are occasional conscience votes.
The system for electing senators has changed several times since Federation. The original arrangement involved a first-past-the-post block voting or "winner takes all" system, on a state-by-state basis. This was replaced in 1919 by preferential block voting. Block voting tended to produce landslide majorities and even "wipe-outs". For instance, from 1920 to 1923 the Nationalist Party held all but one of the 36 seats, and from 1947 to 1950, the Labor Party held all but three.
In 1948, single transferable vote proportional representation on a state-by-state basis became the method for electing Senators. This had the effect of limiting the government's ability to control the chamber, and has helped the rise of Australian minor parties. From the 1984 election, group ticket voting was introduced, in order to reduce a high rate of informal voting that arose from the requirement that each candidate be given a preference, and to allow small parties and independent candidates a reasonable chance of winning a seat. This allowed voters to select a single party "Above the Line" to distribute their preferences on their behalf, but voters were still able to vote directly for individual candidates and distribute their own preferences if they wished "Below the Line" by numbering every box.
In 2016 group tickets were abolished to avoid undue influence of preference deals amongst parties that were seen as distorting election results and a form of optional preferential voting was introduced. As a result of the changes, voters may assign their preferences for parties above the line (numbering as many boxes as they wish), or individual candidates below the line, and are not required to fill all of the boxes. Both above and below the line voting now use optional preferential voting. For above the line, voters are instructed to number at least their first six preferences; however, a "savings provision" is in place to ensure that ballots will still be counted if less than six are given. For below the line, voters are required to number at least their first 12 preferences. Voters are free to continue numbering as many preferences as they like beyond the minimum number specified. Another savings provision allows ballot papers with at least 6 below the line preferences to be formal. The voting changes make it more difficult for new small parties and independent candidates to be elected to the Senate.
The changes were challenged in the High Court by sitting South Australian Senator Bob Day of the Family First Party, who argued that they meant senators would not be "directly chosen by the people" as required by the constitution. The High Court held that both above the line and below the line voting were valid methods for the people to choose their senators.
The Australian Senate voting paper under the single transferable vote proportional representation system resembles the following example (shown in two parts), which shows the candidates for Victorian senate representation in the 2016 federal election.
To vote correctly, electors must either:
- Vote for at least six parties above the thick black line, by writing the numbers 1-6 in party boxes. Votes with fewer than six boxes numbered are still admitted to the count through savings provisions.
- Vote for at least twelve candidates below the thick black line, by writing the numbers 1-12 in the individual candidates' boxes. Votes with between six and twelve boxes numbered are still admitted to the count through savings provisions.
Because each state elects six senators at each half-Senate election, the quota for election is only one-seventh or 14.3% (one third or 33.3% for territories, where only two senators are elected). Once a candidate has been elected with votes reaching the quota amount, any votes they receive in addition to this may be distributed to other candidates as preferences.
With an odd number of seats contested in a state at a half-Senate election (3 or 5), 50.1% of the vote is enough to win a majority of them (2 of 3, or 3 of 5).
With an even number of seats contested (6), 57.1% of the vote is needed to win a majority of seats (4 of 6).
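These percentages follow from the Droop quota: the formal vote divided by one more than the number of vacancies, rounded down, plus one. A minimal sketch of the arithmetic (the function name is illustrative, not taken from any electoral library):

```python
# Illustrative Droop quota arithmetic for Senate counts.
# quota = floor(formal_votes / (vacancies + 1)) + 1

def droop_quota(formal_votes, vacancies):
    return formal_votes // (vacancies + 1) + 1

# Fraction of the vote one quota represents in typical contests:
for vacancies in (2, 6, 12):
    print(f"{vacancies} vacancies: one quota is ~{1 / (vacancies + 1):.1%}")
# 2 vacancies (territory): ~33.3%; 6 (half-Senate): ~14.3%;
# 12 (double dissolution): ~7.7%.
```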
The ungrouped candidates in the far right column do not have a box above the line. Therefore, they can only get a primary (number 1) vote from electors who vote below the line. For this reason, some independents register as a group, either with other independents or by themselves, such as group B in the above example.
Names of parties can be shown only if the parties are registered, which requires, among other things, a minimum of 500 members.
Order of parties
The order of parties on the ballot papers and the order of ungrouped candidates are determined by a ballot conducted by the Electoral Commission.
Candidates, parties and groups pay a deposit of $2000 per candidate, which is forfeited if they fail to achieve 4% of the primary vote.
Candidates, parties and groups earn a public subsidy if they gain at least 4% of the primary vote. At the 2013 federal election, funding was $2.488 per formal first preference vote.
Under sections 7 and 8 of the Australian Constitution:
- The Senate must comprise an equal number of senators from each original state,
- each original state shall have at least six senators, and
- the Senate must be elected in a way that is not discriminatory among the states.
These conditions have periodically been the source of debate, and within these conditions, the composition and rules of the Senate have varied significantly since federation.
Size and nexus
Under Section 24 of the Constitution, the number of members of the House of Representatives has to be "as nearly as practicable" double the number of senators. The reasons for the nexus are twofold: to maintain a constant influence for the smaller states, and to maintain a constant balance of the two Houses in case of a joint sitting after a double dissolution. A referendum held in 1967 to eliminate the nexus failed to pass.
The size of the Senate has changed over the years. The Constitution originally provided for 6 senators for each state, resulting in a total of 36 senators. The Constitution permits the Parliament to increase the number of senators, provided that equal numbers of senators from each original state are maintained. Accordingly, in 1948, Senate representation was increased to 10 senators for each state, increasing the total to 60.
In 1975, the two territories, the Northern Territory and the Australian Capital Territory, were given an entitlement to elect two senators each for the first time, bringing the number to 64. The senators from the Northern Territory also represent constituents from Australia's Indian Ocean Territories (Christmas Island and the Cocos (Keeling) Islands), while the senators from the Australian Capital Territory also represent voters from the Jervis Bay Territory and since 1 July 2016, Norfolk Island.
The latest expansion in Senate numbers took place in 1984, when the number of senators from each state was increased to 12, resulting in a total of 76 senators.
Normally, elections for senators take place at the same time as those for members of the House of Representatives. However, because their terms do not coincide, the incoming Parliament will for some time comprise a new House of Representatives and an old Senate.
Section 13 of the Constitution requires that in half-Senate elections the election of State senators shall take place within one year before the places become vacant. The actual election date is determined by the Governor of each State, who acts on the advice of the State Premier. The Governors almost always act on the recommendation of the Governor-General, with the last independent Senate election writ being issued by the Governor of Queensland during the Gair Affair in 1974.
Slightly more than half of the Senate is contested at each general election (half of the 72 state senators, and all four of the territory senators), along with the entire House of Representatives. Except in the case of a double dissolution, senators are normally elected for fixed terms of six years, commencing on 1 July following the election, and ceasing on 30 June six years later.
The term of the four senators from the territories is not fixed, but is defined by the dates of the general elections for the House of Representatives, the period between which can vary greatly, to a maximum of three years and three months. Territory senators commence their terms on the day that they are elected. Their terms expire the day prior to the following general election day.
Following a double dissolution, all 76 senators face re-election. If there is an early House election outside the 12-month period in which Senate elections can occur, the synchronisation of the election will be disrupted, and there can be half-Senate elections without a concurrent House election. The last time this occurred was on 21 November 1970.
Issues with equal representation
Each state elects the same number of senators, meaning there is equal representation for each of the Australian states, regardless of population, so the Senate, like many upper Houses, does not adhere to the principle of "one vote one value". Tasmania, with a population of around 500,000, elects the same number of senators as New South Wales, which has a population of over 7 million. Because of this imbalance, governments favoured by the more populous states are occasionally frustrated by the extra power the smaller states have in the Senate, to the degree that former Prime Minister Paul Keating famously referred to the Senate's members as "unrepresentative swill". The proportional election system within each state ensures that the Senate incorporates more political diversity than the lower house, which is basically a two party body. The elected membership of the Senate more closely reflects the first voting preference of the electorate as a whole than does the composition of the House of Representatives, despite the large discrepancies from state to state in the ratio of voters to senators. This often means that the composition of the Senate is different from that of the House of Representatives, contributing to the Senate's function as a house of review.
With proportional representation, the small majorities in the Senate compared to the generally larger majorities in the House of Representatives, and the requirement that the number of members of the House be "as nearly as practicable" twice that of the Senate, a joint sitting after a double dissolution is more likely than not to lead to a victory for the House over the Senate. When a state has an odd number of senators retiring at an election (3 or 5), 51% of the vote wins a clear majority of them (2 of 3, or 3 of 5). With an even number retiring (6), it takes 57% of the vote to win 4 of the 6 seats, which may be insurmountable. This gives the House an unintended extra advantage in joint sittings, but not in ordinary elections, where the Senate may be too evenly balanced to get House legislation through.
The Government does not need the support of the Senate to stay in office; however, the Senate can block or defer supply, an action that precipitated a constitutional crisis in 1975. However, if the governing party does not have a majority in the Senate, it can often find its agenda frustrated in the upper house. This can be the case even when the government has a large majority in the House.
The overwhelming majority of senators have always been elected as representatives of political parties. Parties which currently have representation in the Senate are:
- The Coalition – Liberal Party of Australia, Liberal National Party of Queensland, National Party of Australia and Country Liberal Party
- Australian Labor Party
- Australian Greens
- Pauline Hanson's One Nation
- Nick Xenophon Team
- Derryn Hinch's Justice Party
- Liberal Democratic Party
- Australian Conservatives
Other parties that have achieved Senate representation in the past include the Jacqui Lambie Network, Family First Party, Australian Democrats, Palmer United Party, Australian Motoring Enthusiast Party, Nuclear Disarmament Party, Liberal Movement, the Democratic Labour Party and the related but separate Democratic Labor Party.
Due to the need to obtain votes statewide, independent candidates have difficulty getting elected. The exceptions in recent times have come from the less populous states: the former Tasmanian Senator Brian Harradine and the former South Australian Senator Nick Xenophon. It is more common for a senator initially elected to represent a party to later become an independent, most recently in the cases of Senator Lucy Gichuhi resigning from Family First, Senators Rod Culleton and Fraser Anning resigning from One Nation, and Senator Steve Martin being expelled from the Jacqui Lambie Network.
The Australian Senate serves as a model for some politicians in Canada, particularly in the Western provinces, who wish to reform the Canadian Senate so that it takes a more active legislative role.
There are also small factions in the United Kingdom (from both the right and the left) who wish to see the House of Lords take on a structure similar to that of the Australian Senate.
Section 15 of the Constitution provides that a casual vacancy of a State senator shall be filled by the State Parliament. If the previous senator was a member of a particular political party the replacement must come from the same party, but the State Parliament may choose not to fill the vacancy, in which case Section 11 requires the Senate to proceed regardless. If the State Parliament happens to be in recess when the vacancy occurs, the Constitution provides that the State Governor can appoint someone to fill the place until fourteen days after the State Parliament resumes sitting.
The Australian Senate typically sits for 50 to 60 days a year. Most of those days are grouped into 'sitting fortnights' of two four-day weeks. These are in turn arranged in three periods: the autumn sittings, from February to April; the winter sittings, which commence with the delivery of the budget in the House of Representatives on the first sitting day of May and run through to June or July; and the spring sittings, which commence around August and continue until December, and which typically contain the largest number of the year's sitting days.
The Senate has a regular schedule that structures its typical working week.
Dealing with legislation
All bills must be passed by a majority in both the House of Representatives and the Senate before they become law. Most bills originate in the House of Representatives, and the great majority are introduced by the government.
The usual procedure is for notice to be given by a government minister the day before the bill is introduced into the Senate. Once introduced the bill goes through several stages of consideration. It is given a first reading, which represents the bill's formal introduction into the chamber.
The first reading is followed by debate on the principle or policy of the bill (the second reading debate). Agreement to the bill in principle is indicated by a second reading, after which the detailed provisions of the bill are considered by one of a number of methods (see below). Bills may also be referred by either House to their specialised standing or select committees. Agreement to the policy and the details is confirmed by a third and final reading. These processes ensure that a bill is systematically considered before being agreed to.
The Senate has detailed rules in its standing orders that govern how a bill is considered at each stage. This process of consideration can vary greatly in the amount of time taken. Consideration of some bills is completed in a single day, while complex or controversial legislation may take months to pass through all stages of Senate scrutiny. The Constitution provides that if the Senate vote is equal, the question shall pass in the negative.
In addition to the work of the main chamber, the Senate also has a large number of committees which deal with matters referred to them by the Senate. These committees also conduct hearings three times a year in which the government's budget and operations are examined. These are known as estimates hearings. Traditionally dominated by scrutiny of government activities by non-government senators, they provide the opportunity for all senators to ask questions of ministers and public officials. This may occasionally include government senators examining activities of independent publicly funded bodies, or pursuing issues arising from previous governments' terms of office. There is however a convention that senators do not have access to the files and records of previous governments when there has been an election resulting in a change in the party in government. Once a particular inquiry is completed the members of the committee can then produce a report, to be tabled in Parliament, outlining what they have discovered as well as any recommendations that they have produced for the Government to consider.
The ability of the Houses of Parliament to establish committees is referenced in Section 49 of the Constitution, which states that, "The powers, privileges, and immunities of the Senate and of the House of Representatives, and of the members and the committees of each House, shall be such as are declared by the Parliament, and until declared shall be those of the Commons House of Parliament of the United Kingdom, and of its members and committees, at the establishment of the Commonwealth."
Parliamentary committees can be given a wide range of powers. One of the most significant powers is the ability to summon people to attend hearings in order to give evidence and submit documents. Anyone who attempts to hinder the work of a Parliamentary committee may be found to be in contempt of Parliament. There are a number of ways that witnesses can be found in contempt, these include; refusing to appear before a committee when summoned, refusing to answer a question during a hearing or to produce a document, or later being found to have lied to or misled a committee. Anyone who attempts to influence a witness may also be found in contempt. Other powers include the ability to meet throughout Australia, to establish subcommittees and to take evidence in both public and private hearings.
Proceedings of committees are considered to have the same legal standing as proceedings of Parliament. They are recorded by Hansard, except for private hearings, and also operate under Parliamentary privilege. Every participant, including committee members and witnesses giving evidence, is protected from being prosecuted under any civil or criminal action for anything they may say during a hearing. Written evidence and documents received by a committee are also protected.
Holding governments to account
One of the functions of the Senate, both directly and through its committees, is to scrutinise government activity. The vigour of this scrutiny has been fuelled for many years by the fact that the party in government has seldom had a majority in the Senate. Whereas in the House of Representatives the government's majority has sometimes limited that chamber's capacity to implement executive scrutiny, the opposition and minor parties have been able to use their Senate numbers as a basis for conducting inquiries into government operations. When the Howard government won control of the Senate in 2005, it sparked a debate about the effectiveness of the Senate in holding the government of the day accountable for its actions. Government members argued that the Senate continued to be a forum of vigorous debate, and its committees continued to be active. The Opposition leader in the Senate suggested that the government had attenuated the scrutinising activities of the Senate. The Australian Democrats, a minor party which frequently played mediating and negotiating roles in the Senate, expressed concern about a diminished role for the Senate's committees.
Senators are called upon to vote on matters before the Senate. These votes are called divisions in the case of Senate business, or ballots where the vote is to choose a senator to fill an office of the Senate (such as President of the Australian Senate).
Party discipline in Australian politics is extremely tight, so divisions almost always are decided on party lines. Nevertheless, the existence of minor parties holding the balance of power in the Senate has made divisions in that chamber more important and occasionally more dramatic than in the House of Representatives.
When a division is to be held, bells ring throughout the parliament building for four minutes, during which time senators must go to the chamber. At the end of that period the doors are locked and a vote is taken, by identifying and counting senators according to the side of the chamber on which they sit (ayes to the right of the chair, noes to the left). The whole procedure takes around eight minutes. Senators with commitments that keep them from the chamber may make arrangements in advance to be 'paired' with a senator of the opposite political party, so that their absence does not affect the outcome of the vote.
The Senate contains an even number of senators, so a tied vote is a real prospect (which regularly occurs when the party numbers in the chamber are finely balanced). Section 23 of the Constitution requires that in the event of a tied division, the question is resolved in the negative. The system is however different for ballots for offices such as the President. If such a ballot is tied, the Clerk of the Senate decides the outcome by the drawing of lots. In reality, conventions govern most ballots, so this situation does not arise.
Political parties and voting outcomes
The extent to which party discipline determines the outcome of parliamentary votes is highlighted by the rarity with which members of the same political party will find themselves on opposing sides of a vote. The exceptions are where a conscience vote is allowed by one or more of the political parties; and occasions where a member of a political party crosses the floor of the chamber to vote against the instructions of their party whip. Crossing the floor very rarely occurs, but is more likely in the Senate than in the House of Representatives.
One feature of the government having a majority in both chambers between 1 July 2005 and the 2007 elections was the potential for an increased emphasis on internal differences between members of the government coalition parties. This period saw the first instances of crossing the floor by senators since the conservative government took office in 1996: Gary Humphries on civil unions in the Australian Capital Territory, and Barnaby Joyce on voluntary student unionism. A more significant potential instance of floor crossing was averted when the government withdrew its Migration Amendment (Designated Unauthorised Arrivals) Bill, of which several government senators had been critical, and which would have been defeated had it proceeded to the vote. The controversy that surrounded these examples demonstrated both the importance of backbenchers in party policy deliberations and the limitations to their power to influence outcomes in the Senate chamber.
Where the Houses disagree
If the Senate rejects or fails to pass a proposed law, or passes it with amendments to which the House of Representatives will not agree, and if after an interval of three months the Senate refuses to pass the same piece of legislation, the government may either abandon the bill or continue to revise it, or, in certain circumstances outlined in section 57 of the Constitution, the Prime Minister can advise the Governor-General to dissolve the entire parliament in a double dissolution. In such an event, the entirety of the Senate faces re-election, as does the House of Representatives, rather than only about half the chamber as is normally the case. After a double dissolution election, if the bills in question are reintroduced, and if they again fail to pass the Senate, the Governor-General may agree to a joint sitting of the two Houses in an attempt to pass the bills. Such a sitting has only occurred once, in 1974.
The double dissolution mechanism is not available for bills that originate in the Senate and are blocked in the lower house.
After a double dissolution election, section 13 of the Constitution requires the Senate to divide the senators into two classes, with the first class having a three-year "short term", and the second class a six-year "long term". The Senate may adopt any approach it wants to determine how to allocate the long and short terms, however two methods are currently 'on the table':
- "elected-order" method, where the Senators elected first attain a six-year term. This approach tends to favour minor party candidates as it gives greater weight to their first preference votes; or
- re-count method, where the long terms are allocated to those Senators who would have been elected first if the election had been a standard half-Senate election. This method is likely to be preferred by the major parties in the Senate where it would deliver more six-year terms to their members.
The Senate applied the "elected-order" method following the 1987 double dissolution election. Since that time the Senate has passed resolutions on several occasions indicating its intention to use the re-count method to allocate seats at any future double dissolution, which Green describes as a fairer approach but notes could be ignored if a majority of Senators opted for the "elected-order" method instead. In both double dissolution elections since 1987, the "elected order" method was used.
On 8 October 2003, the then Prime Minister John Howard initiated public discussion of whether the mechanism for the resolution of deadlocks between the Houses should be reformed. High levels of support for the existing mechanism, and a very low level of public interest in that discussion, resulted in the abandonment of these proposals.
Sir John Kerr's statement of reasons for dismissing the Whitlam government put the position this way:
Because of the federal nature of our Constitution and because of its provisions the Senate undoubtedly has constitutional power to refuse or defer supply to the Government. Because of the principles of responsible government a Prime Minister who cannot obtain supply, including money for carrying on the ordinary services of government, must either advise a general election or resign. If he refuses to do this I have the authority and indeed the duty under the Constitution to withdraw his Commission as Prime Minister. The position in Australia is quite different from a position in the United Kingdom. Here the confidence of both Houses on supply is necessary to ensure its provision. In United Kingdom the confidence of the House of Commons alone is necessary. But both here and in the United Kingdom the duty of the Prime Minister is the same in a most important aspect – if he cannot get supply he must resign or advise an election.
The constitutional text denies the Senate the power to originate or amend appropriation bills, in deference to the conventions of the classical Westminster system. Under a traditional Westminster system, the executive government is responsible for its use of public funds to the lower house, which has the power to bring down a government by blocking its access to supply – i.e. revenue appropriated through taxation. The arrangement as expressed in the Australian Constitution, however, still leaves the Senate with the power to reject supply bills or defer their passage – undoubtedly one of the Senate's most powerful abilities.
The ability to block supply was exercised in the 1975 Australian constitutional crisis. The Opposition used its numbers in the Senate to defer supply bills, refusing to deal with them until an election was called for both Houses of Parliament, an election which it hoped to win. The Prime Minister of the day, Gough Whitlam, contested the legitimacy of the blocking and refused to resign. The crisis brought to a head two Westminster conventions that, under the Australian constitutional system, were in conflict – firstly, that a government may continue to govern for as long as it has the support of the lower house, and secondly, that a government that no longer has access to supply must either resign or be dismissed. The crisis was resolved in November 1975 when Governor-General Sir John Kerr dismissed Whitlam's government and appointed a caretaker government on condition that elections for both Houses of parliament be held. This action in itself was a source of controversy and debate at that time on the proper usage of the Senate's ability to block supply.
The blocking of supply alone cannot force a double dissolution. There must be legislation repeatedly blocked by the Senate which the government can then choose to use as a trigger for a double dissolution.
The 2 July 2016 double dissolution election Senate result was announced on 4 August: Liberal/National Coalition 30 seats (−3), Labor 26 seats (+1), Greens 9 seats (−1), One Nation 4 seats (+4) and Nick Xenophon Team 3 seats (+2). Derryn Hinch won a seat, while Liberal Democrat David Leyonhjelm, Family First's Bob Day, and Jacqui Lambie retained their seats. The number of crossbenchers increased by two to a record 20. The Liberal/National Coalition required at least nine additional votes to reach a Senate majority, an increase of three. The Liberal/National Coalition and Labor parties agreed that the first elected six of twelve Senators in each state would serve a six-year term, while the last six elected in each state would serve a three-year term, despite two previous bipartisan senate resolutions to use an alternative method to allocate long and short term seats. By doing this, Labor and the Coalition each gained one Senate seat from 2019.
Bob Day, of the Family First Party, resigned from the Senate on 1 November 2016 following the collapse of his business. His eligibility to have stood in the 2016 election was referred by the Senate to the High Court, sitting as the Court of Disputed Returns. In April 2017 the court found that Day was not validly elected at the 2016 election and ordered that a special recount of South Australian ballot papers be held in order to determine his replacement. The court announced that Lucy Gichuhi was elected in his place on 19 April 2017. On 26 April 2017, Family First merged with the Australian Conservatives; however, Gichuhi declined to join the new party, announcing she would sit as an independent.
Rodney Culleton, who had left Pauline Hanson's One Nation Party on 19 December 2016 to become an independent, had his eligibility to stand in the 2016 election challenged on two constitutional grounds. Among the grounds of ineligibility provided in Constitution section 44, a person cannot sit in either house of the Parliament if they are bankrupt or have been convicted of a criminal offence carrying a potential prison sentence of one year or more.
Culleton was declared bankrupt by the Federal Court on 23 December 2016. On 11 January 2017, after receiving an official copy of the judgment, the President of the Senate declared Culleton's seat vacant. Culleton's appeal against that judgment was dismissed by a full court of the Federal Court on 3 February 2017.
This judgment was followed later on the same day by the High Court's decision that Culleton was ineligible owing to conviction for a criminal offence carrying a potential prison sentence of one year or more. This was a decision of the Court of Disputed Returns following a reference by the Senate at the same time as with Day. It was decided that, since Culleton's liability to a two-year sentence for larceny had been in place at the time of the 2016 election, he had been ineligible for election and that this was not affected by the subsequent annulment of that conviction; the Court also held that the resulting vacancy should be filled by a recount of the ballot, in a manner to be determined by a single Justice of the Court. Following that recount, on 10 March 2017 the High Court named Peter Georgiou as his replacement, returning One Nation to 4 seats.
In July 2017 a co-deputy leader of the Greens, Senator Scott Ludlam, resigned from the Senate on discovering that he was a dual citizen (born in New Zealand) and therefore, under Section 44 of the Constitution, had been ineligible to sit in the Parliament. The revelation prompted Ludlam's fellow co-deputy leader of the Greens, Senator Larissa Waters, to examine her citizenship status and, on discovering that she too was a dual citizen (born in Canada), she also resigned. It is expected that both seats will be filled by a recount of the 2016 election, respectively in Western Australia and in Queensland, resulting in the seats being filled by the candidates who came next in each State.
On 2 February 2018, South Australian Senator Lucy Gichuhi joined the Liberal Party, ceasing to be an independent and strengthening the position of the government.
Composition changes since the last election
In the time elapsed between the 2016 election and the following federal election, many parliamentarians resigned from their seats, while some were disqualified by the High Court of Australia. The parliamentary eligibility crisis involving dual citizenship was responsible for a significant portion of these departures. Some individual parliamentarians also made an impact by changing their party membership or independent status.
Historical party composition of the Senate
The Senate has included representatives from a range of political parties, including several parties that have seldom or never had representation in the House of Representatives, but which have consistently secured a small but significant level of electoral support, as the table shows.
Results represent the composition of the Senate after the elections. The full Senate has been contested on eight occasions; the inaugural election and seven double dissolutions. These are underlined and highlighted in puce.
|2nd||1903||8||12[k]||14||1||1||Revenue Tariff||36||Plurality-at-large voting|
|8th||1919||1||35||36||Preferential block voting|
|9th||1922||12||24||36||Preferential block voting|
|10th||1925||8||25||3||36||Preferential block voting|
|11th||1928||7||24||5||36||Preferential block voting|
|12th||1931||10||21||5||36||Preferential block voting|
|13th||1934||3||26||7||36||Preferential block voting|
|14th||1937||16||16||4||36||Preferential block voting|
|15th||1940||17||15||4||36||Preferential block voting|
|16th||1943||22||12||2||36||Preferential block voting|
|17th||1946||33||2||1||36||Preferential block voting|
|18th||1949||34||21||5||60||Single transferable vote|
|19th||1951||28||26||6||60||Single transferable vote|
|20th||1953||29||26||5||60||Single transferable vote|
|21st||1955||28||24||6||2||60||Single transferable vote|
|22nd||1958||26||25||7||2||60||Single transferable vote|
|23rd||1961||28||24||6||1||1||60||Single transferable vote|
|24th||1964||27||23||7||2||1||60||Single transferable vote|
|25th||1967||27||21||7||4||1||60||Single transferable vote|
|26th||1970||26||21||5||5||3||60||Single transferable vote|
|27th||1974||29||23||6||1||1||Liberal Movement||60||Single transferable vote|
|28th||1975||27||26||6||1||1||1||Liberal Movement||64||Single transferable vote|
|29th||1977||27||27||6||2||1||1||64||Single transferable vote|
|30th||1980||27||28||3||5||1||1||64||Single transferable vote|
|31st||1983||30||23||4||5||1||1||64||Single transferable vote|
|32nd||1984||34||27||5||7||1||1||1||Nuclear Disarmament||76||Single transferable vote (Group voting ticket)|
|33rd||1987||32||26||7||7||1||2||1||Nuclear Disarmament||76||Single transferable vote (Group voting ticket)|
|34th||1990||32||28||5||8||1||1||1||Greens (WA)||76||Single transferable vote (Group voting ticket)|
|35th||1993||30||29||6||7||1||1||2||Greens (WA) (2)||76||Single transferable vote (Group voting ticket)|
|36th||1996||29||31||5||7||1||1||2||Greens (WA), Greens (Tas)||76||Single transferable vote (Group voting ticket)|
|37th||1998||29||31||3||9||1||1||1||1||One Nation||76||Single transferable vote (Group voting ticket)|
|38th||2001||28||31||3||8||2||1||2||1||One Nation||76||Single transferable vote (Group voting ticket)|
|39th||2004||28||33||5||4||4||1||1||Family First||76||Single transferable vote (Group voting ticket)|
|40th||2007||32||32||4||5||1||1||1||Family First||76||Single transferable vote (Group voting ticket)|
|41st||2010||31||28 + (3 LNP)||2||1||9||1||1||76||Single transferable vote (Group voting ticket)|
|42nd||2013||25||23 + (5 LNP)||3 + (1 LNP)||1||10||1||1||6||Family First, Palmer United (3)||76||Single transferable vote (Group voting ticket)|
|43rd||2016||26||21 + (3 LNP)||3 + (2 LNP)||9||1||11||Family First, Nick Xenophon Team (3), One Nation (4)||76||Single transferable vote (Optional preferential voting)|
- Next Australian federal election
- Members of the Australian Senate, 2016–2019
- President of the Australian Senate
- Double dissolution
- Women in the Australian Senate
- Clerk of the Australian Senate
- Members of the Australian Parliament who have served for at least 30 years
- Father of the Australian Senate
- List of Australian Senate appointments
- Canberra Press Gallery
- 2 LNP Senators sit in the Liberal party room and 2 in the National party room
- Sits in National party room
- Cory Bernardi resigned from the Liberal Party on 7 February 2017 and founded the Australian Conservatives.
- Fraser Anning (Queensland) was declared elected at a recount to replace Malcolm Roberts as a Senator for One Nation, but left the party within an hour of being sworn in on 13 November 2017.
- Steve Martin (Tasmania) was declared elected at a recount to replace Jacqui Lambie as a Senator for the Jacqui Lambie Network, but was expelled from the party two days before his election was declared on 7 February 2018.
- Tim Storer (South Australia) was declared elected at a recount to replace Skye Kakoschke-Moore as a Senator for the Nick Xenophon Team, but resigned from the party in November 2017, after an unsuccessful attempt to fill the casual vacancy left by party leader Nick Xenophon's Senate resignation.
- LNP Senator George Brandis resigned on 8 February 2018.
- Figures are available for each year on the Senate StatsNet.
- Includes results for the Free Trade Party for 1901 and 1903, the Anti-Socialist Party for 1906, the Commonwealth Liberal Party for 1910—1914, the Nationalist Party for 1917—1929, and the United Australia Party for 1931—1943.
- Includes results for the Country Party for 1919—1974 and the National Country Party for 1975—1980.
- Protectionist Party
- Williams, George; Brennan, Sean; Lynch, Andrew (2014). Blackshield and Williams Australian constitutional law and theory : commentary and materials (6th ed.). Annandale, NSW: Federation Press. p. 415. ISBN 9781862879188.
- "Part V - Powers of the Parliament". Retrieved 13 May 2017.
- "No. 14 - Ministers in the Senate". Senate Briefs. Parliament of Australia. December 2016.
- Day v Australian Electoral Officer for the State of South Australia HCA 20
- "Chapter 4, Odgers' Australian Senate Practice". Aph.gov.au. 2 February 2010. Archived from the original on 21 March 2011. Retrieved 17 July 2010.
- "Senate (Representation of Territories) Act 1973. No. 39, 1974". Austlii.edu.au. Retrieved 22 March 2017.
- "Norfolk Island Electors". Australian Electoral Commission. 2016. Retrieved 6 August 2016.
- Department of the Senate, Senate Brief No. 1, 'Electing Australia's Senators' Archived 29 August 2007 at the Wayback Machine. Retrieved August 2007.
- Section 6 of the Senate (Representation of Territories) Act 1973. Retrieved August 2010.
- Question without Notice: Loan Council Arrangements House Hansard,
- Lijphart, Arend (1 November 1999). "Australian Democracy: Modifying Majoritarianism?". Australian Journal of Political Science. 34 (3): 313–326. doi:10.1080/10361149950254. ISSN 1036-1146.
- Sawer, Marian (1999). Marian Sawer and Sarah Miskin, eds. Overview: Institutional Design and the Role of the Senate (PDF). Representation and Institutional Change: 50 Years of Proportional Representation in the Senate. 34. pp. 1–12. Archived from the original (PDF) on 17 January 2011.
- Ted Morton, 'Senate Envy: Why Western Canada Wants What Australia Has' Archived 14 May 2013 at the Wayback Machine., Senate Envy and Other Lectures in the Senate Occasional Lecture Series, 2001–2002, Department of the Senate, Canberra.
- "Senate weekly routine of business". Australian Senate. 7 November 2011. Archived from the original on 26 January 2012.
- Australian Senate, 'The Senate and Legislation' Archived 24 September 2008 at the Wayback Machine., Senate Brief, No. 8, 2008, Department of the Senate, Canberra.
- Australian Senate, 'Consideration of legislation' Archived 26 September 2008 at the Wayback Machine., Brief Guides to Senate Procedure, No. 9, Department of the Senate, Canberra.
- "Odgers' Australian Senate Practice Fourteenth Edition Chapter 16 - Committees". 2017. Retrieved 19 March 2017.
- Constitution of Australia, section 49.
- "Infosheet 4 - Committees". aph.gov.au. Retrieved 22 February 2017.
- "Media Release 43/2006 – Senate remains robust under Government majority". 30 June 2006. Archived from the original on 27 September 2007.
- "Senator Chris Evans, The tyranny of the majority (speech)". 10 November 2005. Archived from the original on 12 November 2009.
Labor has accused the Government of 'ramming' bills through the Senate – but Labor "guillotined" Parliamentary debate more than twice the number of times in their 13 years in Government than the Coalition has over the last decade. In the last six months, the Government has not sought to guillotine any bill through the Senate.
- "Senator Andrew Murray: Australian Democrats Accountability Spokesperson Senate Statistics 1 July 2005 – 30 June 2006" (PDF). 4 July 2006. Archived from the original (PDF) on 5 August 2006.
- Senate Standing Orders, numbers 7, 10, 98–105, 163
- Deirdre McKeown, Rob Lundie and Greg Baker, 'Crossing the floor in the Federal Parliament 1950 – August 2004' Archived 3 October 2008 at the Wayback Machine., Research Note, No. 11, 2005–06, Department of Parliamentary Services, Canberra.
- Uhr, John (June 2005). "How Democratic is Parliament? A case study in auditing the performance of Parliaments" (PDF). Democratic Audit of Australia, Discussion Paper. Archived from the original (PDF) on 14 May 2013.
- Peter Veness, 'Crossing floor 'courageous, futile', news.com.au, 15 June 2006. Retrieved January 2008.
- Neither of these instances resulted in the defeat of a government proposal, as in both cases Senator Steve Fielding voted with the government.
- Prime Minister's press conference, 14 August 2006 "Archived copy". Archived from the original on 21 August 2006. Retrieved 21 August 2006.
- "Nationals won't toe Libs' line: Joyce – SMH 18/9/2008". News.smh.com.au. 18 September 2008. Retrieved 17 July 2010.
- Uma Patel (6 July 2016). "Election 2016: How do we decide which senators are in for three years and which are in for six?". Australian Broadcasting Corporation.
- Antony Green (25 April 2016). "How Long and Short Senate Terms are Allocated After a Double Dissolution". Australian Broadcasting Corporation.
- Consultative Group on Constitutional Change (March 2004). "Resolving Deadlocks: The Public Response" (PDF). p. 8.
- Kerr, John. "Statement from John Kerr (dated 11 November 1975) explaining his decisions". WhitlamDismissal.com. Retrieved 11 January 2017.
- Green, Antony. "An Early Double Dissolution? Don't Hold Your Breath!". Antony Green's Election Blog. ABC. Retrieved 1 August 2016.
- AEC (21 February 1984). "AEC". Twitter. Retrieved 22 March 2017.
- "Federal Election 2016: Senate Results". Australia Votes. Australian Broadcasting Corporation. 3 July 2016. Retrieved 4 July 2016.
- "Senate photo finishes". Blogs.crikey.com.au. 12 July 2016. Retrieved 30 July 2016.
- "Cormann raises 'first elected' plan to halve Senate terms for crossbenchers". The Australian. 12 August 2016. Retrieved 18 March 2017.
- Hutchens, Gareth (12 August 2016). "Senate terms: Derryn Hinch and Greens' Lee Rhiannon given three years". Retrieved 3 February 2017 – via The Guardian.
- "LP-LNP deal to force senators back to poll in three years". The Australian. 13 August 2016. Retrieved 18 March 2017.
- Hunter, Fergus (12 August 2016). "Coalition and Labor team up to clear out crossbench senators in 2019". smh.com.au. Retrieved 3 February 2017.
- "Court finds witnesses not enough to prove Bob Day breached constitution". abc.net.au. 27 January 2017. Retrieved 3 February 2017.
- "Family First ex-senator Bob Day's election ruled invalid by High Court". ABC News. 5 April 2017.
- Doran, Matthew; Belot, Henry; Crothers, Joanna (19 April 2017). "Family First senator Lucy Gichuhi survives ALP challenge over citizenship concerns". ABC News. Retrieved 19 April 2017.
- Karp, Paul (20 April 2017). "Court rebuffs Labor challenge to Family First senator Lucy Gichuhi". The Guardian. Retrieved 20 April 2017.
- Belot, Henry (26 April 2017). "Cory Bernardi unwilling to wait for Lucy Gichuhi to 'get her head around' things". ABC News. Retrieved 26 April 2017.
- "Rod Culleton: Former One Nation senator loses appeal against court bankruptcy verdict". abc.net.au. 4 February 2017. Retrieved 10 February 2017.
- Re Culleton [No 2] HCA 4 (3 February 2017).
- "One Nation: Rod Culleton's brother-in-law Peter Georgiou confirmed as replacement". abc.net.au. Retrieved 10 March 2017.
- Uhlmann, Chris; Norman, Jane (7 February 2017). "Cory Bernardi to split with Coalition to form Australian Conservatives party". ABC News Australia. Australian Broadcasting Corporation. Archived from the original on 7 February 2017. Retrieved 7 February 2017.
- Strutt, J; Kagi, J (14 July 2017). "Greens senator Scott Ludlam resigns over failure to renounce dual citizenship". ABC News. Australia.
- Belot, Henry (18 July 2017). "Larissa Waters, deputy Greens leader, quits in latest citizenship bungle". abc.net.au.
- "A database of elections, governments, parties and representation for Australian state and federal parliaments since 1890". University of Western Australia. Retrieved 15 February 2009.
- Bach, Stanley (2003). Platypus and Parliament: The Australian Senate in Theory and Practice. Department of the Senate. ISBN 0-642-71291-3.
- Harry Evans, Australian Senate Practice, A detailed reference work on all aspects of the Senate's powers, procedures and practices.
- John Halligan, Robin Miller and John Power, Parliament in the Twenty-first Century: Institutional Reform and Emerging Roles, Melbourne University Publishing, 2007.
- Wilfried Swenden, Federalism and Second Chambers: Regional Representation in Parliamentary Federations: the Australian Senate and German Bundesrat Compared, P.I.E. Peter Lang, 2004.
- Sawer, Marian & Miskin, Sarah (1999). Papers on Parliament No. 34 Representation and Institutional Change: 50 Years of Proportional Representation in the Senate. Department of the Senate. ISBN 0-642-71061-9.
What really caused France's humiliating loss to the Viet Minh in the First Indochina War? To understand, we must focus on logistics. Charles Shrader's A War of Logistics: Parachutes and Porters in Indochina, 1945–1954 reveals that the staggering failures of the French were, at root, the result of poor logistics.
On the surface, it may not make sense: a Western power falling to an agrarian band of guerrilla fighters? No author had previously examined Viet Minh and French military logistics in such detail, and the result is an impressive study.
Shrader has taught at West Point, the Command & General Staff College at Fort Leavenworth, and the Army War College, and is a former executive director of the Society for Military History. His metrics and well-written history document the French military pillars that collapsed, triggering France's retreat not only from Indochina but from the world stage.
Many respected books point to Dien Bien Phu as the surprise battlefield loss that sealed French defeat in the war. Shrader documents how that battle was instead the culmination of a series of shocking logistical failures that plagued the French effort against the Viet Minh.
The shift benefiting the Viet Minh developed after the Korean War, when China began delivering overwhelming logistical resources to the Viet Minh. Although French and CIA intelligence intercepted communications confirming numerous deliveries, France did not adjust to this threat.
In retrospect, the logistical failure to support the French effort should have sent strong signals to American military advisors that success against this communist enemy would be a long and difficult task.
Writing today, in the age of analytics, Shrader offers a unique view of a war fought seventy years ago, revealing that the military infrastructure across Indochina at the onset of the conflict was, in fact, an uneven stage favoring the Viet Minh.
On the surface, following World War II the obvious advantage in this war lay with France. The Viet Minh were a band of guerrilla fighters confronting a colonial empire that had fought and suffered through the greatest conflicts on the world stage between 1914 and 1945.
France's seemingly greatest advantage was air power. The Viet Minh had no air force, while the French had aircraft armed with napalm that horrified the Viet Minh. Again, on the surface, how could France be defeated by an enemy lacking any air force?
Yet France could not overcome the country's physical environment, as its logistical effort remained locked in a European battlefield mentality.
Despite their modern armaments, the French rarely swayed the outcome of battles, most notably at Dien Bien Phu, Navarre's plan to draw his Viet Minh enemy into a final confrontation.
Paris was forever trapped by colonial-era demands upon its French Union forces. It was a war that post-World War II France was not willing to fight:
In August 1949, the French National Assembly made continued support for the Indochina war contingent on a pledge that no draftees would be sent to Indochina, thus further limiting the already small pool of manpower available for assignment to the theater.
As I have noted in earlier blog posts about Dien Bien Phu, Paris would not send French boys to bleed across Indochina. Its colonial armies would take that role, and they sadly filled it well. During the war, France's communist parties did their best to slow legislation supporting the military effort.
This lack of true military commitment and logistical support doomed France's war plans and, ultimately, its place on the global stage as a world power.
Even the most mobile French Union forces were heavily laden with weapons, ammunition, and a plethora of other equipment, and the French discovered, much to their sorrow, that their mechanized mobility was no match for the foot mobility of the Viet Minh. In their after-action reports, French commanders freely acknowledged that their units, organized for warfare in Europe, proved to be "ill-suited to the task of carrying on a struggle against rebel forces in an Asiatic theater of operations."
As late as the end of 1953, the commandant en chef still found it necessary to issue a bulletin that admonished: "Commanders at all echelons still suffer from a 'motor complex.' They are used to moving with vehicles which restrict them to roads and certain trails. They forget that our enemy is completely independent of motor transport and can rapidly assemble and move large forces in difficult areas where it is impossible for us to follow and give battle unless we give up our motorized transport."
Not only were the motorized GMs tied to the limited road networks of Indochina, they were also notorious consumers of logistical support, particularly petroleum products, ammunition, and maintenance. The artillery and the headquarters elements of the typical GM were 100 percent motorized, and the infantry battalions were usually about one-third motorized. In part, the high consumption of fuel and repair parts associated with the GMs stemmed from the fact that the older trucks, scout cars, half-tracks, and light tanks utilized by them, although adequate for route security and escort duties, were not specially adapted to the climate and terrain of Indochina.
The role of the emerging Fourth Republic was forever tied to the failed Third Republic and the divided Vichy government that came stumbling out of World War II. While not directly addressed by Shrader, France was simply unable to fight this war of colonial restoration. Paris understood this, yet ordered its French Union troops into war and sacrificed a generation of officers from St. Cyr:
Throughout the First Indochina War, the leaders of the French Union forces struggled to maintain the necessary troop levels and to organize effective forces to deal with the Viet Minh threat. The debilitation of French resources in World War II, commitments elsewhere, and political resistance at home meant that sufficient resources of men, money, and materiel were not forthcoming, even after 1950, when military and economic aid from the United States became more available.
The lack of well-trained staff officers created many weaknesses in the functioning of the Viet Minh General Staff, the weakest functional area being that of logistics. However, by 1953 many Viet Minh officers had been trained in Chinese Communist military schools, and the influence of Chinese advisors became evident with the reorganization of the General Staff to bring it more into line with Chinese organizational concepts. In January 1951, the head of the Chinese Military Advisory Group (CMAG) …
By 1953, the battlefield in Indochina looked much more like the European battlefield for which much of the French Army was organized and equipped, but, nevertheless, Indochina remained a unique theater of war. The harsh climate, difficult terrain, and poor transportation networks were coupled with a distant overseas supply base controlled by a constantly changing, unenthusiastic, and generally parsimonious government.
A major contributing factor to the problems that afflicted the French supply services in Indochina was the unfavorable political and economic situation in France. Even by 1954, France’s political morale as well as its physical capital had not yet recovered from the shock of the Second World War. The instability occasioned by some fourteen governments in the ten years from 1945 to 1954 did little to ensure adequate systematic planning and execution of the war in Indochina. Ideological divisions within the French government as well as doubts over whether France’s colonial empire ought to be retained at all hampered the adequate support of its forces in Indochina.
The impact of China upon the Viet Minh effort was never directly confronted by France or the United States.
As the war progressed, so too did the organization of the Viet Minh combat units, which grew larger, better equipped, and capable of sustained combat operations against the French Union forces. Although the Viet Minh developed division-size units, including a heavy division, they did not permit the addition of artillery and engineer forces to hamper their operations by restricting their mobility. In the end, the Viet Minh were far more successful than the French in adapting their combat organizations to the physical and operational environment. They thus secured a significant advantage over their opponent, one that led ultimately to victory.
What price did France pay in logistics to maintain its colonial armies fighting for Paris? Plenty:
At the beginning of the war French Union forces ate fresh meat from local livestock, but the situation in Indochina soon required a shift to the use of imported boneless frozen meat. This in turn required the establishment of a system of cold storage facilities and the use of refrigerated trucks and containers for distribution of frozen meat to the field. The resulting system of cold storage depots, completed in 1951, consisted of large-capacity cold storage facilities for long-term storage at the major ports and depots and a number of smaller, short-term cold storage facilities located at the less important depots or near the troops. The cold storage facilities available in 1952 amounted to 92,660 cubic feet, the bulk of which was located in Saigon and Haiphong. Another 6,178 cubic feet of space was under construction, and plans for 1953 called for the construction of an additional 102,370 cubic feet of cold storage.
American Thomas J. H. Trapnell, a former chief of MAAG-Indochina, noted in May 1954:
The French Expeditionary Corps is composed of Foreign Legion, Moroccans, Algerians, Tunisians, Senegalese and a small percentage of metropolitan French volunteers. These units are diluted nearly 59 percent by native Indochinese. The Associated States Forces are composed of varieties of native Vietnamese, Laotians and Cambodians. The whole effect is that of a heterogeneous force among whom even basic communication is difficult. Troops require a variety of clothes sizes and diets. They have different religious customs, folk-ways and mores. They vary in their capacity for different tasks and terrain. Logistically, a great problem exists in the support of such troops.
And arming the French Union troops placed equally heavy demands upon France's strained infrastructure:
Until 1950, the French Union forces in Indochina suffered chronic shortages of equipment and were plagued by the age and diverse types of most of the weapons, vehicles, and other equipment available. In 1947, for example, only 210 vehicles were received from France out of 3,682 requested; of 9,148 motors requested only 250 were received; and out of 76,639 tires requested only 10,843 arrived, of which only 6,517 were from France and half of them were used. The weapons and vehicles used by the French Union forces in Indochina were drawn from the stocks of at least five countries (France, the United States, Britain, Germany, and Japan) and represented a large number of makes and models, many of which were obsolete and for which spare parts were no longer available.
The First Indochina War was fought with arms and equipment designed for a war in Europe rather than in the tropical climate and terrain of southeast Asia. The effect of high temperatures and high humidity on packaging, textiles, and radios and other electronic equipment significantly reduced the performance and life span of some equipment and increased the demand for those items.
Overall, the French authorities, both at home and in Indochina, demonstrated an inability (or perhaps an unwillingness) to come to grips with the logistical problems of supporting a modern army engaged in heavy fighting against a determined and increasingly sophisticated enemy halfway around the world. For a while, peacetime regulations and a blasé contempt for the ability of the Viet Minh inhibited the search for viable solutions to the logistical problems inherent in the Indochina situation. However, French military leaders at lower levels in Indochina could not blame their misfortunes entirely on the parsimonious government at home and the lack of wisdom of the higher commanders. Almost every observer of the First Indochina War reported the lack of commitment, lackadaisical attitude, and sloppy performance of many of the officers and soldiers at the lowest levels. Of course, dedication and even heroism were to be found frequently, but on the whole the technical skill and massive amounts of modern war equipment available to the French Union forces could not compensate for the lack of enthusiasm and discipline, qualities that were so prominently displayed by their Viet Minh opponents. Once it was recognized that the Viet Minh were indeed capable of achieving their objective of driving out the French colonial regime, it was almost too late to devise effective means of countering them.
Schrader at times shows the French effort as almost a comedy of errors:
French air capabilities were further limited by inadequate maintenance stemming from poor procedures, the lack of qualified personnel, and a general lack of interest in improving the situation.
As might be expected, the weather limited the air transport of men and supplies as well as the tactical employment of airborne forces in Indochina, but the principal constraints were the limited number of suitable transport aircraft and crews and the perpetual shortage of parachutes and related equipment.
Between November 20, 1953, and May 7, 1954, 20,860 tons of cargo were delivered to Dien Bien Phu, some 6,584 tons of which were airlanded before the loss of the airfields. The other 14,276 tons were airdropped or parachuted over the course of the entire 169 days of the operation, and amounted to about 100 kilograms per minute, or about 124 tons per day, and required almost 80,000 parachutes, plus airdrop rigging.
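A quick arithmetic check (the division is mine, not Schrader's) suggests the quoted daily rate averages the full 20,860 tons over the entire 169-day operation, and that the per-minute figure is a rounded restatement of it:

```latex
\frac{20{,}860\ \text{tons}}{169\ \text{days}} \approx 123.4\ \text{tons/day}
\approx 124\ \text{tons/day},
\qquad
\frac{124{,}000\ \text{kg/day}}{1{,}440\ \text{min/day}} \approx 86\ \text{kg/min}.
```

The airdropped share alone (14,276 tons over 169 days) works out to only about 84 tons per day, so the 124-ton figure evidently includes the airlanded tonnage as well, and "about 100 kilograms per minute" is a rough round number rather than an exact rate.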
The number and quality of the transport aircraft available grew steadily throughout the First Indochina War but were never sufficient to meet the ever-increasing air transport and airdrop needs of the French Union ground forces. In November 1947, for example, the three available transport groups could muster only seventeen C-47s and thirty-five ancient Amiot AAC 1 Toucans (a French-built version of the German Junkers Ju 52), of which twenty-seven were inoperative.
Losses, particularly over Dien Bien Phu in the first months of 1954, were heavy, and the Viet Minh conducted several daring raids on French air transport bases in Tonkin that resulted in heavy losses.
The perpetual shortage of transport aircraft required the French authorities to rely heavily on the temporary augmentation provided by the civil air transport firms operating in Indochina. By the end of the war, civilian aircraft and pilots were used even for the most dangerous missions, such as the resupply of commando units and the support of Dien Bien Phu. The civilian pilots, already thoroughly familiar with flying conditions in Indochina, became very experienced at military formation flying and air delivery techniques and were a valuable supplement to the limited military air transport. However, as late as November 1953, the French authorities still had not instituted effective procedures for the control of the available airlift, military or civilian, and there were no definitive procedures to regulate the flow of cargo and establish priorities.
A few British Bristol 170 Freighters belonging to commercial airlines in Indochina were also employed and proved particularly useful for the air transport of heavy equipment. The most notable achievement of the Bristol Freighters was the delivery of ten M24 Chaffee light tanks to Dien Bien Phu in what was called Operation RONDELLE II. The French command in Indochina established the requirement for a cargo plane capable of landing or taking off with a two-ton load of personnel or equipment on short (less than 150 yards) unimproved landing strips, and the French aeronautical designer Louis Breguet actually designed such an airplane, but the French Air Force was not interested.
One influence carried over from Korean War logistics was the helicopter, and here the lessons failed to transfer. On the surface, the American deployment of helicopters in Korea for mobile surgical hospitals was a success; across Indochina the results were less than stellar:
The helicopter, which proved so characteristic an element of the Second Indochina War, was still something of a novelty during the First Indochina War. Although helicopters held the promise of overcoming many of the obstacles to the movement of men and supplies in Indochina, they were employed by the French in very small numbers and almost entirely for medical evacuation purposes. The first two Hiller H-23 ambulance helicopters were delivered to Saigon in April 1950. The delivery of additional medical evacuation helicopters was delayed due to the priority given to forces in Korea, but by 1952 ten were available. On December 31, 1953, the French forces in Indochina had eighteen helicopters on hand (six Hiller H-23As; five Hiller H-23Bs; three Westland-Sikorsky WS-51s; and four Sikorsky S-55s), which had already accumulated a total of 4,821 flying hours, evacuated 4,728 casualties, and rescued nineteen pilots and three observers who had been shot down.
However, the French were already in the process of shipping the Westland-Sikorsky WS-51 helicopters back to France due to insurmountable maintenance support problems. The army organized a helicopter training command in early 1954, built a heliport in Saigon, and made plans to acquire one hundred helicopters by the end of the year. The plan was to activate GT 65 with a twenty-five-machine light helicopter squadron, a twenty-five-machine medium helicopter squadron, and a maintenance squadron. However, only twenty-eight helicopters had arrived by the end of 1954, and American military aid personnel, who were supplying the helicopters, advocated “a more modest approach to the helicopter force build up for the French Land Forces.” The small number of available machines and pilots as well as the lack of a well-developed understanding of helicopter operations limited the use of the helicopter in Indochina to medical evacuation and rescue work. As a result, they played an insignificant role in the logistics of the First Indochina War.
A basic lack of infrastructure stretching across the French Union would paralyze French battle plans:
Throughout much of the First Indochina War, parachutes and airdrop equipment were in short supply. Basically, every man and every one hundred kilograms of cargo dropped required one parachute. Given the heavy use of paradrops to support isolated garrisons and combat forces in the field, the perpetual shortage of parachutes demanded a maximum effort on the part of airborne forces to recover parachutes after an operation. After every jump, one-fourth to one-third of the paratroopers spent up to half a day just recovering the parachutes. This imposed a significant burden on the parachute units and supporting logistical personnel. The shortage of parachutes also increased the importance of free-drop techniques. As early as 1950, about 40 percent of the air-delivered tonnage was free-dropped, and the use of free-drop techniques made possible a gain of up to 12 percent in the useful tonnage delivered by air. Increased French production and American aid deliveries of parachutes and airdrop equipment provided some relief by the end of the war.
Not only were the parachutes and other airdrop equipment used in Indochina expensive and generally in short supply, but the great variety of such materiel greatly complicated the work of the French aerial resupply units. French industry was unable to supply parachute releases suitable for the 118-mile-per-hour speed of the C-47, so about 80 percent of the parachutes used in Indochina were supplied by the United States under the MDAP. Although many of the technical problems associated with the design and manufacture of parachutes, particularly the heavy-drop parachutes required for equipment and supplies, were resolved during the First Indochina War, some problems were never satisfactorily overcome. For example, the increasing strength of Viet Minh antiaircraft defenses later in the war made delayed-opening drops from higher altitudes a necessity, but a truly effective delayed-opening device capable of ensuring reasonable accuracy of the drop and reliable opening of the parachute was not perfected before the cease-fire. The lack of reliable delay fuzes resulted in as much as 50 percent of some drops failing to hit the designated drop zone. Parachute loads with malfunctioning delay fuzes often were destroyed on impact, and occasionally the results were even more tragic, as when the defective parachutes and their loads fell indiscriminately, destroying friendly bunkers and killing friendly personnel. The ground lighting of drop zones and temporary landing fields was yet another problem not resolved satisfactorily before the end of the war.
The French were highly successful in overcoming many of the obstacles to effective and efficient aerial support of their forces in Indochina. Effective staff and operating organizations were developed to plan and execute air transport, and techniques were developed to minimize the impact of climate, terrain, and an aggressive enemy. However, persistent shortages of trained personnel and specialized equipment, particularly aircraft and parachutes, limited what could be accomplished. Excited by the advantages inherent in air transport unopposed by enemy counterair operations, the French came to rely too heavily on what was in reality a very thin and fragile rope, and in the final analysis air transport and aerial resupply, which many French military leaders saw as the key to victory in Indochina, turned out to be a major factor in their ultimate defeat.
The Viet Minh spent the first half of the war destroying roads, bridges, railways, and other transportation facilities, but with the advent of Chinese Communist aid in 1950 they found it necessary to initiate a program for the repair and improvement of existing routes and the construction of new routes in the areas under their control as well as those leading to the areas in which they intended to conduct operations.
The Viet Minh skillfully utilized all of the modes of transport at their disposal, including porters, animal transport, trucks, coastal and inland water transport, and railroads. Although there is no firm evidence to suggest that the Viet Minh had access to air transport, some French officials claimed that small amounts of cargo were flown in to the Viet Minh from Communist China. In the early days of the war the Viet Minh armed forces requisitioned civilian laborers locally as required to meet their needs. But beginning in November 1949, the Viet Minh leadership implemented a program of obligatory military service and tried to mobilize the entire civilian population to support what was rapidly becoming a conventional modern war.
The porter system was placed on a more regular basis in 1951 when the Viet Minh government decreed that all able-bodied peasants, male and female, must contribute three months of labor per year to the Viet Minh logistical effort.
Operationally, the supporting Viet Minh porters were organized in “convoys” protected by armed escorts. The escorts provided security for the porters and sometimes created diversions to distract French Union forces while the porters slipped by an observation post. The porters marched mainly at night over routes offering good cover and concealment. The established routes were cleared, smoothed out, and maintained. They usually were divided by relay posts called trams.
The night’s march usually ended at a tram, which often was equipped with crude facilities for cooking and shelter and at which the porter convoy hid and rested during daylight hours.
In the eight major battles between the battle for RC-4 in 1950 and Dien Bien Phu in 1954, the Viet Minh employed some 1,541,381 “transport porters” who worked a total of 47.8 million man-days in all.
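Taken at face value (the division is mine, not the source's), those totals imply an average commitment of roughly a month of labor per porter:

```latex
\frac{47.8 \times 10^{6}\ \text{man-days}}{1{,}541{,}381\ \text{porters}}
\approx 31\ \text{man-days per porter}
```

That figure sits comfortably within the three-months-per-year labor obligation decreed by the Viet Minh government in 1951.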
As one French authority noted: “The arrival of Chinese automotive equipment revolutionized the enemy’s military transportation system. It resulted in the country’s roads being put into good condition and removed the terrible strain off the country insofar as the levying of coolies was concerned for forming up the troop combat trains.”
The Viet Minh truck fleet began with fifty–sixty trucks abandoned by the French during their evacuation of Cao Bang and Lang Son in 1950. In 1951, the Viet Minh still had less than one hundred trucks, mostly taken from the French, but by 1953 the number had risen to nearly one thousand. Most of the motor vehicles used by the Viet Minh after 1951 were supplied by Communist China, and they included large numbers of American trucks captured in Korea and subsequently refurbished by the Chinese.
The heavy reliance of Viet Minh logisticians on porters for the movement of supplies was considered by many Western observers to be a weakness, but in reality the use of porters was perfectly adapted to the terrain of Indochina and capitalized on the large but untrained manpower pool available to the Viet Minh. Porters could go where trucks could not, and they proved generally invulnerable to French air and ground interdiction. More significantly, although difficult to manage, after a few initial failures the porter system proved more than adequate to meet Viet Minh needs. The adaptability of the Viet Minh logistical system was further demonstrated by the effective use of motor transport once it became available in sufficient quantities.
While the Viet Minh porters slipped through the jungles and mountains with relative ease to support their combat forces in the areas of operations, the French Union forces struggled to support their wide-spread garrisons and operational units by land, water, and air. The terrain, climate, and poor transportation infrastructure restricted movements and often meant that isolated units went days without resupply. The wear and tear on both equipment and personnel were heavy, and in the end the results proved unsatisfactory for the French, who, unlike the Viet Minh, failed to adapt adequately to the existing physical and operational environment.
In 1945, the French Union forces seemed to have an enormous advantage over the Viet Minh with respect to the acquisition of war materiel. France was an industrial nation with direct access to the production of the other major Western industrial powers. Moreover, France controlled the principal resources of Indochina itself as well as the facilities necessary to process them for use. But to some degree the French advantage was illusory because neither the French nor the Viet Minh would have been able to pursue the war in Indochina without outside assistance. Although possessed of enormous potential resources, including those of its colonies in Africa and Asia, France had suffered heavily in World War II and the French economy was in shambles. Hard pressed to restore the economy of metropolitan France, the French were clearly unable to sustain a major war effort in Indochina without help. That aid became available after 1950 in the form of massive American financial and material support, including aircraft, watercraft, arms, ammunition, and a vast array of other war supplies.
In the immediate post–World War II period, the French authorities in Indochina attempted to purchase some of the enormous amounts of military equipment declared surplus or simply abandoned by the U.S. and British forces in Asia. Purchasing missions were established in Manila, Singapore, New Delhi, and Calcutta, and a significant amount of war surplus was purchased before anticolonialist opposition cut off such sources of supply. The Americans were generally reluctant to approve such purchases, but the British were more accommodating.
The delays in deliveries from metropolitan France and North Africa to Indochina were due not only to the long distances involved and the delays in budgeting and procurement. Active opposition to the war in Indochina was promoted by the French Communist Party and by other left-wing groups that were able to slow down movements of cargo to the ports and the loading of that cargo aboard ships bound for Indochina.
From mid-1950 to the end of the war in July 1954, the United States was the principal source of military equipment and supplies for the French Union forces. In May 1954, one former chief of MAAG-Indochina estimated that “indigenous production is practically negligible,” and that only about 30 percent of the hard items needed by the French Union forces was provided by French procurement agencies, the remainder being provided through U.S. military aid….Although entirely dependent on American support to continue the war, the French were rude and ungrateful recipients of American largesse….U.S. leaders were uncomfortable with the idea of supporting a failed colonial regime and with providing millions of dollars’ worth of equipment and supplies to a client who refused to consider seriously any American suggestion. In fact, the goals of the two countries in Indochina were very different. France sought to retain control over her colonies, while the United States was instead focused on containing the spread of Communism.
The Pentagon Papers show that the American relationship with Indochina began under President Roosevelt and continued through Presidents Truman and Eisenhower. Eisenhower, the president most engaged with Indochina and the Chinese threat, had been in office for more than a year before Dien Bien Phu and saw the need for a strong defense of the French position. Yet relations between the US and French governments remained strained by earlier American support for a free Indochina following World War II:
In his several works on Indochina, Bernard B. Fall, reflecting a French perspective, portrayed the American opposition to the French as strong and premeditated. More recently, Ronald H. Spector has taken a contrary view, noting that “the view that the United States deliberately limited and delayed its help to the French during the Japanese takeover is incorrect,” and that, although opposed to the restoration of French colonial rule in Indochina, President Franklin D. Roosevelt did permit limited support to the French. In any event, the French perception that the United States deliberately abandoned them to the Japanese and then worked with the Viet Minh to prevent the restoration of French control in Indochina did much to sour relationships between France and the United States in the postwar period.
From the beginning of the Second World War until 1950, American policy toward the French in Indochina might indeed be described as thoroughly antipathetic. President Roosevelt himself led the anti-Vichy, even anti-French, opposition and limited assistance to the French regime in Indochina before and during World War II. Roosevelt’s distaste for French colonial rule in Indochina seems to have been largely personal, but it was translated into policies that inhibited French resistance to the Japanese. In a memorandum to Secretary of State Cordell Hull on October 13, 1944, President Roosevelt stated, “We should do nothing in regard to resistance groups or in any other way in relation to Indochina,” and less than a month later, on November 3, he instructed American field commanders in Asia to refuse “American approval . . . to any French military mission being accredited to the South-East Asia Command.”
French efforts to obtain aircraft, weapons, and other equipment from the United States or elsewhere before the Japanese moved into French Indochina on September 22, 1940, were also stymied. For example, the efforts of the French commander in Indochina, General Georges Catroux, to strengthen his position against Japanese demands by obtaining the 120 modern fighter aircraft and the antiaircraft artillery already bought and paid for by the French government were brought to naught when the U.S. government prohibited shipment of the equipment to Indochina.
Once the French had taken up arms against the Japanese, President Roosevelt refused to sanction low-level French participation in U.S. intelligence and commando operations in Indochina, and the few joint Franco-American operations that did take place were mostly unsuccessful since the Indochinese were wary of the French members of the teams and refused to help. According to Bernard Fall, President Roosevelt directed his military commanders in China to deny support to the scattered and starving French forces even when they were overrun by the Japanese in March 1945 and were fighting for their very existence against the common enemy.
The British were somewhat more sympathetic and provided the French forces recently returned to Indochina with some eight hundred U.S. Lend-Lease jeeps and trucks as well as other materiel. President Truman approved the transfer only because repatriation of the vehicles to the United States would have been impractical, but in general the U.S. government continued to oppose such aid. For example, until 1950 American-built propellers installed on British aircraft had to be removed when such aircraft were sent to the French in Indochina.
The brief flirtation of the United States with the Viet Minh in 1945 and 1946 also created a very negative impression on the French that has even yet to be dispelled. The desire to defeat the Japanese and American anticolonialist sentiment combined to produce a degree of American cooperation with Ho Chi Minh and his nationalist movement. Although the degree of cooperation and the amount of arms and equipment provided to the Viet Minh by the American Office of Strategic Services (OSS) were small, the public approbation of the Viet Minh greatly offended the French, who had hoped for more from their old ally….The French complained bitterly that the Viet Minh had been able to seize control of large parts of Indochina in 1945 only because they had been supplied by the OSS with arms and ammunition, but Ronald Spector notes that the effect was mainly psychological and that “arms received during World War II accounted for only about 12 percent of the estimated 36,000 small arms in Viet Minh hands in March 1946 and only about 5 percent of the weapons available to them at the start of the war against the French in December 1946.” On the other hand, the humiliating treatment of French prisoners of war and the public encouragement of the Viet Minh by American officers in the immediate postwar period provided more than sufficient grounds for French suspicion and distrust of American motives….At best, American attitudes toward the French in Indochina were ambivalent until the late 1940s. Even after the United States abandoned Uncle Ho, little effort was made to aid the French in retaining their colonial empire in Asia. However, as the Cold War with the Soviet Union began to take shape, the U.S. State Department and Joint Chiefs of Staff (JCS) recognized that Indochina was an area of vital strategic interest to the United States, and France came to be viewed as a lynchpin of the NATO alliance facing the Soviets in Europe.
The situation began to change dramatically in 1949 with the successful Soviet testing of an atomic bomb and, more importantly, the victory of Mao Tse-tung’s People’s Liberation Army over the Chinese Nationalist troops of Chiang Kai-shek at the end of the year. The outbreak of the war in Korea in June 1950 and the subsequent intervention of the Chinese Communists in that conflict in October 1950 completed the transformation. Thereafter, the United States acted forcefully to assist the French Union forces against the Viet Minh as part of an overall effort to stop the Communist tide in Asia. As Lieutenant General Henri Navarre, one of the last French commandants en chef in Indochina, later wrote: “The Americans finally realized the danger of Communism in Southeast Asia, which led them to modify their point of view on the war in Indochina. In place of an impious ‘colonial war,’ they promised a holy war against Communism.”
On February 4, 1950, the French government announced formal ratification of the Elysée Agreements granting independence within the French Union to the so-called Associated States, and on the following day the United States recognized the governments of Viet Nam, Cambodia, and Laos. Although reluctant to call upon the United States for assistance, following an interarmy conference at Paris in February 1950, the French drew up initial lists of equipment needed in Indochina, and on March 16, 1950, those lists, which included arms and equipment worth some $94 million, were presented by the French government to the U.S. embassy in Paris as a formal request for American aid.
Meanwhile, on March 1, 1950, the JCS recommended the allocation of $15 million in Section 303 funds to Indochina, and President Harry Truman approved that recommendation on March 10. The same day, President Truman asked the JCS to study the situation in Indochina and forward its recommendations. The JCS responded in a memorandum for the Secretary of Defense dated April 10, 1950, and recommended “early implementation of military aid programs for Indochina, Indonesia, Thailand, the Philippines, and Burma.”
Given the recent debacle in China, where enormous amounts of U.S. military aid had fallen into Communist hands, the JCS urged that the following conditions be applied to aid to Indochina:
a. That United States military aid not be granted unconditionally; rather, that it be carefully controlled and that the aid program be integrated with political and economic programs; and
b. That requests for military equipment be screened first by an officer designated by the Department of Defense and on duty in the recipient state. These requests should be subject to his determination as to the feasibility and satisfactory coordination of specific military operations. It should be understood that military aid will only be considered in connection with such coordinated operational plans as are approved by the representative of the Department of Defense on duty in the recipient country. Further, in conformity with current procedures, the final approval of all programs for military materiel will be subject to the concurrence of the Joint Chiefs of Staff.
The JCS also recommended the immediate formation of “a small United States military aid group in Indochina” to fulfill the requirements set forth in paragraph 9b of the memorandum. The bottom line was that the JCS recommended “the provision of military aid to Indochina at the earliest practicable date under a program to implement the President’s action approving the allocation of 15 million dollars [of MDAP aid] for Indochina and that corresponding increments of political and economic aid be programmed on an interim basis without prejudice to the pattern of the policy for additional military, political and economic aid that may be developed later.”
Following the Communist Chinese capture of Hainan Island at the beginning of May 1950, President Truman approved the allocation of $10 million to pay for the shipment of urgently needed military supplies to Indochina.
President Truman acknowledged his decision publicly two days after North Korean forces attacked the Republic of Korea, when he issued a press release on June 27, 1950, condemning the Communist action and outlining the measures that the United States would take to aid South Korea and prevent further Communist aggression in Asia. As part of those actions, President Truman “directed acceleration in the furnishing of military assistance to the forces of France and the Associated States in Indo China and the dispatch of a military mission to provide close working relations with those forces.”
Three days later, on June 30, the same day U.S. ground forces were committed to combat in Korea, the first shipments of American aid arrived in Saigon aboard eight old C-47 transports loaded with spare parts, and by July 30, equipment sufficient for twelve infantry battalions was en route by ship to Indochina.
Schrader also details the role of US military aid to France. American materiel shipments to Indochina began to increase in 1950, after China entered the war in Korea. For about a year, however, shipments lagged because America could not expand production facilities and tooling quickly enough; at the end of 1951 just 444 of the 968 promised jeeps were in service with the French. The French, meanwhile, used the slow-arriving American aid as an excuse for their continued failures in Indochina.
Yet, despite the slowness of American deliveries, the French were unable to keep up with the distribution of the material within Indochina, although many observers credited the influx of American equipment with contributing to the French victories in the first half of 1951.
Following the successful visit of General de Lattre to the United States in September–October 1951, U.S. military aid deliveries were speeded up—as U.S. Army Chief of Staff General J. Lawton Collins had personally assured de Lattre they would be. From November 1951, deliveries were quite steady, delivery time was reduced, and the number of items in critical short supply in Indochina declined. Between October 1951 and February 1952, a total of 130,000 tons of equipment, including 53 million rounds of ammunition, 8,000 vehicles, 650 combat vehicles, 200 aircraft, 3,500 radios, and 14,000 automatic weapons were received by the French from American sources. Overall, deliveries in 1951 from the United States and from U.S. stocks in Japan totaled some 95,000 tons and then rose to 110,000 tons in 1952, and by February 1953 some 137,200 long tons—the equivalent of 224 shiploads—of American equipment had reached Indochina. That materiel included 900 tracked combat vehicles, 15,000 wheeled vehicles, nearly 2,500 artillery pieces, 24,000 automatic weapons, 75,000 small arms, and almost 9,000 radios, as well as 160 F6F and F8F fighters, 41 B-26 light bombers, 28 much-needed C-47 transports, 155 aircraft engines, and 93,000 bombs for the French air forces in Indochina.
In his May 3, 1954, debriefing, the former commander of MAAG-Indochina, Major General Thomas J. H. Trapnell, noted:
The U.S. has greatly contributed to the success of the French in holding Indochina from the beginning. In January 1951, material was rushed from the docks of Haiphong to the battlefield of Vinh Yen, then being fought under the personal direction of Marshall De Lattre himself. Since then, delivery of aid has kept pace with changing French needs, often on a crash basis, down to the present heroic defense of Dien Bien Phu. U.S. aid has consisted of budgetary support, furnishing of end items, military hardware, and of technical training teams. The magnitude and range of this contribution is shown by the following very few examples. All of these figures are as of 31 March this year:
a. 785 million dollars has been allocated for the budgetary support of the French Expeditionary Force and the Vietnamese Army. This will assist in meeting budgetary requirements for pay, food, and allowances for these troops.
b. Under MDA Programs, a total of more than 784 millions of dollars has been programmed for the years 1950–54. Of this, more than 440 million dollars’ worth of military end items have been received.
c. To date, 31 March 1954, 441 ships have delivered a total of 478 thousands of long tons of MDA equipment to Indochina.
With the inactivation of the CEFEO on April 28, 1956, the U.S. military assistance program was terminated, and all the remaining MAP-provided equipment was supposed to revert to the U.S. government, but the French kept the best of it and left the rest for the armed forces of the Republic of Viet Nam.
Between 1950 and 1954, the United States provided the French Union forces in Indochina with an astounding amount of arms and equipment, in all more than 1.5 million measurement tons, not including aircraft and naval vessels that arrived under their own power. As the authors of the U.S. Joint Chiefs of Staff history of the period stated: “When the United States entered the picture in 1950 French Union forces were indifferently armed with largely obsolescent World War II equipment.”
Included in the total of equipment and supplies provided by the United States to the French in Indochina between 1950 and 1954 were 1,880 tanks and combat vehicles, 30,887 motor vehicles, 361,522 small arms and machine guns, 5,045 artillery pieces, over 500 million rounds of small arms ammunition, and over 10 million artillery shells.
Article 17 of the Geneva Accords, which ended the First Indochina War on July 20, 1954, severely restricted the supply of arms and equipment to the former belligerents by outside parties. The shipment to Indochina of new types of arms, ammunition, and equipment was forbidden, and worn-out or defective materiel could be replaced only on a one-for-one basis and then only through designated control points.
The US Military Assistance Advisory Group (MAAG) Indochina, as the point team for the flow of US financial aid to France, was never well received by Paris:
The French gave the American plan a chilly reception; they wanted American arms with no strings attached. Their views indicated a desire that the United States simply fill French orders for equipment without attempting to influence types or quantities of material or how it was employed. General Marcel Carpentier, French Commander-in-Chief in Indochina, said that he “would welcome” a United States military mission but wished it to be as small as possible and part of the attaché group at the American legation in Saigon. Although he “would welcome” representatives of the Associated States in the receiving and distributing apparatus, only the French High Command “would be equipped [to] receive and stock American materiel for Indochina.”
Despite French misgivings, the first elements of the U.S. Military Assistance Advisory Group–Indochina (MAAG-Indochina) arrived in Saigon on August 3, 1950. The MAAG was formally organized on September 17, and assembled in the Saigon-Cholon area on November 20. The main function of MAAG-Indochina was “to make sure that equipment supplied by the United States reached its prescribed destination and that it was properly maintained by French Union forces.” The allocation of aid to the Associated States had to be made through the French, and the French prohibited the MAAG from controlling the dispensing of supplies once they were in Indochina. At least one French commander in chief in Indochina, General Henri Navarre, considered “any function of MAAG in Saigon beyond bookkeeping to be an intrusion upon internal French affairs.”
Although the French had the final say on the use of the materiel provided, MAAG-Indochina was charged with providing advice and with conducting inspections in the field to observe how the American-supplied weapons and other equipment were being maintained and utilized. However, MAAG-Indochina proved unable to perform even the minimum functions assigned to it inasmuch as the French, never eager for U.S. advice, limited the MAAG to “order-taking in the commercial sense.” Accordingly, Brigadier General Brink was directed not to assume any training or advisory responsibilities toward the armies of the Associated States, and “from the outset, the French rigorously limited end-use inspections of MAAG to a small number of carefully prescribed visits.”
Despite the restrictions imposed by the French on American observation, examination, and advice-giving, the members of the MAAG did their best to aid the ungrateful and obstinate French.
Unfortunately, few of the officers and men assigned to MAAG-Indochina spoke French, and the French military authorities in Indochina actively obstructed their efforts. French pique at their dependence on American aid was manifested in a number of petty ways. For example, MAAG-Indochina personnel received very little assistance in either their living arrangements or in the conduct of their duties, which after all did involve the coordination of U.S. aid to the French in Indochina. Of greater consequence, however, was that: “MAAG officers were not given the necessary freedom to develop intelligence information on the course of the war; information supplied by the French was limited, and often unreliable or deliberately misleading.”
What MAAG-Indochina personnel did see of French logistical operations was not pleasing, and the officers of the MAAG frequently complained of the waste and sloppy supply accounting of the French. U.S. Air Force and Navy MAAG officers, who had somewhat freer access to French air and naval bases, also complained of the lack of safety precautions and the poor quality of French maintenance efforts.
Shrader reveals the arrogance of the French military leadership, which approached its enemy with a European battlefield mentality. That error marred the French conduct of the war from its beginning in 1946 to its culmination in the defeat at Dien Bien Phu, with the Geneva Accords waiting in the wings and a new role for China that would also alter how the US approached South Vietnam, Bao Dai, and ultimately Ngo Dinh Diem.
It cannot be overstated that France was never fully committed to winning against the Viet Minh. The French, even with American financial and military support, could not turn the tide of the war after the Korean armistice, which freed China to flood the Viet Minh with materiel, training, and armaments.
It would be Schadenfreude to exclaim that the French got what they deserved. But US efforts to stop the spread of communism in Asia only delayed our own national nightmare, and the US repeated some of the same logistical errors fighting the Viet Cong and the NVA throughout the 1960s and 1970s.
Shrader offers us a larger picture of the Indochina War by examining the infrastructure of both opponents purely from a numbers view. Western historians were initially surprised by the French defeat; by digesting his book, we come to understand the uneven battlefields, armies, and supply lines that tipped the scale in favor of the Viet Minh.
To make sure that students can easily come to grips with the lessons of this war, I would recommend starting with this book. It gives an eye-opening account of how both opponents pursued a logistical advantage from 1946 to 1954, and it also shows early CIA reporting on the failed French efforts, French hubris, and the will of the nationalist movement across Indochina.
Published in 2015, Shrader's study benefits from the passage of time, as recently declassified resources provide deeper insight into the French effort.
Because of its closed approach to its own history, Vietnam has not released counts of its men, materiel, and losses. Until the impact of that tremendous sacrifice can be truly measured, this book can serve as a base from which future authors may explore the war's logistics from multiple angles.
Shrader echoes notable authors in addressing Paris, which gave little support to the Far East Expeditionary Corps campaign. Reeling from the devastating tolls of two world wars, France experienced the rise and fall of multiple governments. With less than total support from their government, French Union troops were destined to fail.
THE EARLY YEARS
At the end of the Mexican War in 1848, the U.S. Army had only three mounted regiments, the 1st Dragoons, the 2nd Dragoons, and the Regiment of Mounted Riflemen, to protect settlers moving westward. By 1855, Congress, realizing that the number of mounted soldiers was not enough, authorized the raising of two more regiments, the 1st Cavalry and the 2nd Cavalry.
The 1st Cavalry Regiment was constituted on 3 March 1855 and organized at Jefferson Barracks, Missouri on 26 March 1855 under the command of Colonel Edwin Vose Sumner. The military aptitude of the twenty-eight officers selected for the 1st Cavalry was conclusively proven in the Civil War, when twenty-two of them became general officers in either the Union or Confederate armies. Among them were Captain George B. McClellan (Major General, Commander, Army of the Potomac and the inventor of the famed McClellan saddle) and 2nd Lieutenant James E.B. (Jeb) Stuart (Major General, CSA, Commander of the Confederate Cavalry Corps).
Upon completion of the organization of the regiment in August 1855, the 1st Cavalry was assigned to Fort Leavenworth, Kansas. Its mission was two-fold; to maintain law and order in the Kansas Territory between pro and anti-slavery factions and to protect the settlers from attacks by the Cheyenne Indians. In 1857 the regiment was split with half taking up new quarters at Fort Riley, Kansas and the rest maintaining small garrisons scattered throughout the state. On 3 March 1861, Colonel Robert E. Lee assumed command of the 1st Cavalry only to resign his commission a month later to lead the Confederate States Army in the Civil War.
THE CIVIL WAR
With so many units being sent east for the war, the 1st Cavalry was initially kept on the frontier until militia-type units were raised to protect against Indian raids. On June 22, 1861, George McClellan, now a Major General, requested Company A and Company E to serve as his personal escort. The two companies saw action in the Bull Run, Peninsula, Antietam and Fredericksburg campaigns, not rejoining the Regiment until 1864. The rest of the 1st Cavalry was committed to action in Mississippi and Missouri.
Since 1854 it had been advocated to redesignate all mounted regiments as cavalry and to renumber them in order of seniority. This was done on 3 August 1861. As the 1st Cavalry was the fourth oldest mounted regiment it was redesignated as the 4th Cavalry Regiment.
During the early years of the Civil War, Union commanders scattered their cavalry regiments throughout the army, conducting company, squadron (two-company) and battalion (four-company) operations. The 4th Cavalry was no exception, with its companies scattered from the Mississippi River to the Atlantic coast carrying out traditional cavalry missions of reconnaissance, screening and raiding.
In the first phases of the war in the west, companies of the Regiment saw action in the Missouri, Mississippi and Kentucky campaigns, the seizure of Forts Henry and Donelson and the Battle of Shiloh. On 31 December 1862 a two-company squadron of the 4th Cavalry attacked and routed a Confederate cavalry brigade near Murfreesboro, Tennessee. In 1863-64 companies of the 4th saw further action in Tennessee, Georgia and Mississippi. On 30 June 1863 another squadron of the Regiment charged a six-gun battery of Confederate artillery near Shelbyville, Tennessee, capturing the entire battery and three hundred prisoners.
By the spring of 1864, the success of the large Confederate cavalry corps of Jeb Stuart had convinced the Union leadership to form their own cavalry corps under General Phillip Sheridan. The 4th Cavalry was ordered to unite as a regiment and on 14 December 1864 joined in the attack on Nashville, Tennessee as part of the cavalry corps commanded by General James Wilson. In the battle the 4th helped turn the Confederate flank, sending them into retreat. As the Confederate forces attempted a delaying action at West Harpeth, Tennessee, an element of the 4th Cavalry led by Lt. Joseph Hedges charged and captured a Confederate artillery battery. For his bravery, Lt. Hedges received the Medal of Honor, the first to be bestowed on a member of the 4th Cavalry.
In March 1865, General Wilson was ordered to take his cavalry on a drive through Alabama to capture the Confederate supply depot at Selma. General Wilson had devoted much effort to preparing his cavalry for the mission. It was a superbly trained and disciplined force that left Tennessee led by the 4th Cavalry. It was more than a traditional cavalry raid; rather, it was an invasion by a cavalry army, a preview of the blitzkrieg of World War II. As the column moved south into Alabama it encountered the famed Confederate cavalry leader Nathan Bedford Forrest. The Union force was too strong and defeated the Confederate cavalry, allowing the Union forces to arrive at Selma the next day.
On 2 April 1865, the attack on Selma commenced, led by the 4th Cavalry in a mounted charge. A railroad cut and fence line halted the mounted attack. Dismounting, the Regiment pressed the attack and stormed the town. Selma's rich store of munitions and supplies was destroyed along with the foundries and arsenals.
General Wilson next turned east to link up with General Sherman. His force took Montgomery, Alabama, and Columbus, Georgia, and had arrived in Macon, Georgia, when word came of the end of the war. The Regiment remained in Macon as occupation troops.
THE INDIAN WARS
The end of the Civil War brought a new surge of westward migration. Indian nations were determined to hold on to the lands they had taken back during the Civil War. In Texas the situation was acute with the Cheyenne and Arapahoe roaming at will in the north and the Comanche, Kiowa and Mescalero Apache controlling western Texas and eastern New Mexico. The 4th Cavalry was ordered into Texas to confront these formidable foes. The Regiment was filled with skilled Civil War veterans from both armies and outfitted with the latest and best equipment. On War Department records of that day the 4th Cavalry was rated the best cavalry regiment in the U.S. Army.
By November 1865 the Regiment had transferred to Fort Sam Houston, Texas. From here the 4th pacified the San Antonio area and conducted campaigns against Indians along the Mexican border. On 15 December 1870 twenty-nine-year-old Colonel Ranald Slidell Mackenzie, U.S. Cavalry, assumed command of the Regiment. A brilliant leader, he had commanded a Union cavalry corps at the age of twenty-four. He would command the 4th Cavalry for twelve years, leading it on some of its most famous campaigns.
On 1 April 1873 the Regiment moved to Fort Clark, Texas, close to the Mexican border. To stop the cross-border raiding by the Apaches coming out of Mexico, Mackenzie was ordered by President Grant to ignore Mexican sovereignty and strike at the Apache/Kickapoo village at Remolino, Mexico, some fifty-five miles south of the border. With utmost secrecy Mackenzie began training and preparations for the operation. On 17 May 1873 six companies of the 4th (A, B, C, E, I, M) crossed the Rio Grande under cover of darkness and headed to Remolino. It was a difficult night march over unfamiliar terrain, but by dawn they were in position, and on Mackenzie's signal the 4th charged the camp. There was some scattered resistance, but most of the warriors fled, leaving their horses and families behind. The families and horse herd were rounded up and the 4th began a grueling march back to the Rio Grande, reaching Texas at dawn on 19 May. During this operation the 4th Cavalry covered 160 miles in thirty-two hours, fought an engagement, and destroyed a hostile camp. Without their horses, and with their families in captivity, the Indian warriors returned to their reservations in Texas.
The Texas legislature voted "the grateful thanks of the people of Texas for the gallant conduct of Colonel Mackenzie and the 4th U.S. Cavalry". President Grant also sent his congratulations. In the early 1950s John Ford made a film called "Rio Grande" starring John Wayne based on the raid. In 1958, ZIV television produced a 52-week series based on the raid and other 4th Cavalry exploits entitled "Mackenzie's Raiders". (The 3rd Squadron, 4th Cavalry used "Mackenzie's Raiders" as their unofficial nickname before and during the Vietnam War.)
In August 1874, with the border pacified, the 4th began a major campaign against the Comanche nation in northern Texas. On 27 September 1874 the Regiment located the Comanche in the Palo Duro Canyon of the Red River. Two companies drove off the large pony herd of 1,200 while other companies attacked the camp, driving off the warriors and then burning it. The Comanches made their way on foot to Fort Sill to surrender.
Successfully accomplishing their pacification mission in Texas, the Regiment was stationed in what is now the state of Oklahoma when it received orders to march with General Crook north to avenge the massacre of General George Custer and five companies of the 7th Cavalry. On 24 November 1876, the 4th Cavalry located Chief Dull Knife and his northern Cheyenne band. The Regiment rode all night to reach the Indian camp. At dawn the 4th Cavalry charged the village killing many of the Indian warriors, destroying their lodges and capturing 500 horses. The survivors soon surrendered. In 1880 and 1881 the Regiment was busy relocating Indian tribes in Utah and Colorado.
In 1883 the War Department redesignated all cavalry companies as troops. The designation squadron was given to a group of four troops and the cavalry no longer used the designation battalion. Since 1862 the U.S. Cavalry had used guidons similar in appearance to the United States flag to better distinguish Union from Confederate cavalry. On 4 February 1885 the War Department ordered a return to the traditional red and white cavalry guidon used before the Civil War with one specific change. On the upper red half instead of displaying U.S. in white the regimental numeral would be displayed and as before the troop letter would be displayed in red on the white lower half.
In 1884 the 4th Cavalry was ordered to Arizona to combat the Apache. By May 1884 the Regimental headquarters was located at Fort Huachuca along with Troops B, D and I. The rest of the Regiment was stationed at army posts throughout the eastern half of Arizona. In May 1885, 150 Apaches led by Geronimo left the reservation and cut a wide swath of murder and robbery throughout southern Arizona as they headed for Mexico.
After unsuccessful efforts to bring Geronimo back to the reservation, General Nelson A. Miles, commander of the Department of Arizona, ordered Captain Henry W. Lawton with B Troop, 4th Cavalry, in pursuit. Several engagements with 4th and 10th Cavalry elements took a toll on Geronimo's band, but he managed to escape back to Mexico. In July Lawton resumed the pursuit. Geronimo sent word he was willing to surrender. Moving into Mexico, Lawton, accompanied by Lieutenant Charles Gatewood, 6th Cavalry, whom Geronimo respected and trusted, met with Geronimo on 24 August. Geronimo agreed to cross back into Arizona and surrender to General Miles. Captain Lawton and Lieutenant Gatewood brought Geronimo to Skeleton Canyon, some twenty miles north of the Mexican border, where he formally surrendered to General Miles on 3 September 1886.
General Miles and Captain Lawton escorted Geronimo and his band to Fort Bowie. They were immediately put on a train and sent to Florida accompanied by B Troop, 4th Cavalry. After delivering Geronimo to the authorities in Florida, B Troop was ordered to Fort Myer, Virginia, to serve as an honor guard. With the end of the Geronimo Campaign the 4th Cavalry was transferred to Fort Walla Walla, Washington, in May 1890. For the next eight years it performed routine garrison duties.
THE PHILIPPINE INSURRECTION
After the seizure of Manila by Admiral Dewey during the War with Spain, the call was made for American ground forces to defend the Philippines. The first regiment to be sent was the 4th Cavalry. Six troops were initially sent to Manila in August 1898, where they were immediately deployed to defend the city from dissident elements of the Philippine army that resented the American takeover of their islands. Fighting broke out when Filipino forces fired on U.S. forces. The Americans drove the Filipinos from the city and began a campaign to capture the insurgent capital of Malolos. Because of a mix-up, the 4th Cavalry's horses had been unloaded in Hawaii. Troops E, I and K were mounted on Filipino ponies and participated in the Malolos campaign. The dismounted squadron, consisting of Troops C and L, participated in the capture of Santa Cruz led by Major General Lawton. (He had served in the 4th Cavalry as a 1st Lieutenant and Captain from 1871 to 1888 and had commanded B Troop during the Geronimo Campaign.)
By August 1899 the rest of the Regiment had arrived in the Philippines. In the fall of 1899 the 4th Cavalry moved north under General Lawton to capture the insurgent President Aguinaldo. Severe fighting took place, and in the small town of San Mateo General Lawton was killed in action.
In January 1901 the Regiment was assigned pacification duties in the southern part of Luzon. On 30 September 1901 the tour of duty in the Philippines ended for the Regiment. The 4th Cavalry had participated in 119 skirmishes and battles. The Regiment's three squadrons were reassigned to Fort Leavenworth and Fort Riley, Kansas, and Jefferson Barracks, Missouri, the birthplace of the regiment. In 1905 the 4th returned once again to the Philippines and participated in the Jolo campaign on the island of Mindanao.
THE QUIET YEARS
In 1907 the 4th was reassigned back to the United States, to be stationed at Fort Meade, South Dakota, less the 3rd Squadron, stationed at Fort Snelling, Minnesota. In 1911 the 4th was sent to the Mexican border and two years later departed for Schofield Barracks, Hawaii, where it served throughout World War I. In 1919 the Regiment returned to the Mexican border and then to Fort Meade, South Dakota, in 1925. Regular duties were performed, with practice marches and annual maneuvers held in Wyoming. In 1926 the March King, John Philip Sousa, impressed with the reputation of the 4th Cavalry, wrote an official march for the regiment entitled "Riders For the Flag." The 4th Cavalry Band and the Black Horse Drill Team of Troop F participated in many civic functions throughout the Midwest.
WORLD WAR II
As war swept Europe in 1940 the 4th Cavalry Regiment was reorganized as a Horse-Mechanized Corps Reconnaissance Regiment. The 1st Squadron retained their horses and the 2nd Squadron was mechanized. By 1942 the Army decided that the corps reconnaissance regiments should be completely mechanized. The 1st Squadron turned in its horses at Fort Robinson, Nebraska in the spring of 1942 and was issued M-5 light tanks. In January 1943 the Regiment left Fort Meade for the last time for the Mohave Desert to prepare for the North African campaign. But the Regiment's orders were changed and the 4th arrived in England in December 1943 to serve as the reconnaissance regiment of the VII Corps. Immediately upon arrival the 4th Cavalry Regiment was redesignated and reorganized as the 4th Cavalry Group Mechanized. The 1st Squadron was redesignated the 4th Cavalry Squadron, Mechanized and the 2nd Squadron redesignated as the 24th Cavalry Squadron, Mechanized.
In preparation for the Normandy invasion the 4th Cavalry was assigned a critical role in the amphibious assault of the VII Corps onto Utah Beach. Aerial reconnaissance showed German fortifications on the St. Marcouf Islands 6000 yards off of Utah Beach. These fortifications could pose a serious threat to the Utah Beach landings. The 4th Cavalry was assigned the mission of neutralizing them prior to the landing. The 4th also had the mission of getting two troops ashore on D-Day to link up with the 82nd and 101st Airborne Divisions to give them armor support.
At 0430 Hours 6 June 1944, elements of Troop A, 4th Squadron and B Troop, 24th Squadron landed on the St. Marcoufs. Corporal Harvey S. Olsen and Private Thomas C. Killeran of Troop A, with Sergeant John S. Zanders and Corporal Melvin F. Kinzie of B Troop, each armed only with a knife, swam ashore to mark the beaches for the landing craft. They became the first seaborne American soldiers to land on French soil on D-Day. As the troops dashed from their landing craft they were met with silence. The Germans had evacuated the islands, but they did leave them heavily mined. Meanwhile, one platoon of B Troop, 4th Squadron got ashore at Utah Beach and linked up with the 82nd Airborne. On 7 June the platoon surprised a German column and in a mechanized cavalry charge hit the column, routing it with a loss of some 200 casualties. Heavy seas prevented Troop C from linking up with the 101st until 8 June.
As the American forces swung into the Cherbourg peninsula the 4th Cavalry performed screening missions. To prevent the Germans from escaping from the Cap de la Hague area, the 4th Squadron dismounted and seized all of their objectives in five days of bloody fighting, capturing over 600 prisoners. For its gallant conduct at Cap de la Hague the 4th Squadron, less B Troop, received the French Croix de Guerre with Silver Star.
In the dash across France the 4th Cavalry assumed traditional cavalry missions of flank screening and protection of line of communication for the VII Corps. By 3 September the 4th crossed into Belgium and by 15 September the 4th had reached Germany and the Siegfried Line.
On the 19th, 20th and 21st of December 1944 while the attention of the world was on the Battle of the Bulge some of the fiercest fighting of the war continued on the edge of the Hurtgen Forest along the approaches to the Roer River. The 4th Cavalry was given the mission to seize the heavily defended town of Bogheim and the high ground to its southeast. On the 19th under a ground fog two troops of the 4th got into the town undetected and engaged the Germans. Two other troops coming up in support were caught in the open as the fog lifted and took heavy casualties. The two troops already in the town successfully drove out the Germans by the afternoon. All four troop commanders had either been killed or wounded and over one fourth of the enlisted personnel had also become casualties.
The next morning the 4th Squadron charged dismounted across two hundred yards of open fields to seize the high ground overlooking the town. In the battle for Bogheim the 4th Squadron destroyed two battle groups of the 947th German Infantry and a company of the 6th Parachute Regiment. For its magnificent bravery at Bogheim the 4th Squadron was awarded the Presidential Unit Citation.
On 25 March 1945 the 4th crossed the Rhine River and swept further into Germany brushing aside light resistance and capturing hundreds of prisoners. The war ended with the 4th Cavalry in the Harz Mountains.
For occupation duties in Germany and Austria the Army organized the U.S. Constabulary. The 4th Cavalry Group was redesignated the 4th Constabulary Regiment with the 4th and 24th Constabulary Squadrons. The Regiment was stationed in Salzburg, Austria. On 1 May 1949 the 4th Constabulary Regiment was inactivated. The 4th Squadron underwent several designation changes to become the 4th Armored Cavalry Reconnaissance Battalion. It was inactivated on 1 July 1955. The 24th Squadron was transferred to Germany in 1949 and inactivated on 15 December 1952. To perpetuate some small remnant of the 4th Cavalry on the active rolls of the Army, Headquarters Company of the 4th Reconnaissance Battalion was redesignated as Headquarters Company, 4th Armor Group and activated in Germany on 1 July 1955.
THE REBIRTH OF THE 4TH CAVALRY
In the short span of twelve years the 4th Cavalry Regiment had been redesignated five times, and all that was left of one of the U.S. Army's finest regiments was its regimental numeral on an armor group headquarters company. With the decision to also do away with most tactical regiments, the Army realized it must preserve the valuable honors, traditions and history of famous regiments. In 1957 the Army set up the Combat Arms Regimental System (CARS). Under CARS the regiment would be a group of tactical units bearing the regimental name. Over one hundred and fifty historic regiments of cavalry, armor, infantry and artillery were preserved. The original line companies/batteries/troops of a regiment would be activated as the headquarters company/battery/troop of a newly constituted battle group/battalion/squadron to preserve the lineal ties with the old regiment. Should a separate company-sized element be required, the original company/battery/troop would be activated.
On 15 February 1957 five elements of the 4th Cavalry were activated. The 1st Squadron, descending from Troop A, was activated in the 1st Infantry Division at Fort Riley, Kansas. The 2nd Battle Group (infantry), descending from B Troop, was activated in the 1st Cavalry Division in Korea. The 3rd Squadron, descending from Troop C, joined the 25th Infantry Division at Schofield Barracks, Hawaii. The 4th Squadron, descending from Troop D, was activated in the Army Reserve 102nd Infantry Division at Kansas City, Missouri, and the 5th Squadron, descending from Troop E, was activated with the Army Reserve 103rd Infantry Division at Ottumwa, Iowa.
During the 1960s Army requirements led to changes in the active elements of the 4th Cavalry. On 1 August 1963 the 2nd Battle Group was reorganized and redesignated as the 2nd Squadron and assigned to the 4th Armored Division. On 15 March 1963 the 5th Squadron was inactivated. Its predecessor Troop E was activated on 3 December 1963 and assigned to the Army Reserve 205th Infantry Brigade at Madison, Wisconsin. On 31 December 1965 the 4th Squadron was inactivated.
Elements of the 4th Cavalry Regiment saw extensive combat during the Vietnam War. The 1st Squadron, 4th Cavalry was assigned to the 1st Infantry Division as the division reconnaissance squadron based at Di An. The 1st Squadron participated in eleven campaigns of the Vietnam War from 20 October 1965 to 5 February 1970. The 1st Squadron was awarded the Presidential Unit Citation for its heroism in Binh Long Province as well as a Valorous Unit Award for Binh Duong Province. Troop A, 1st Squadron received a Valorous Unit Award for its actions at the battle of Ap Bau Bang.
The 3rd Squadron, 4th Cavalry served as the reconnaissance squadron for the 25th Infantry Division and was based at Cu Chi near Saigon. Troop C was the first 3rd Squadron element to arrive in Vietnam, in December 1965 with the 3rd Brigade, 25th Division. Initially operating in the Vietnamese Central Highlands against North Vietnamese forces, Troop C later saw action against Viet Cong main force units in Quang Tri Province, receiving a Valorous Unit Award. On 1 August 1967 Troop C rejoined the 3rd Squadron in Cu Chi.
The 3rd Squadron participated in twelve campaigns from 24 March 1966 to 8 December 1970. The 3rd Squadron received the Presidential Unit Citation for its magnificent defense of Tan Son Nhut air base outside of Saigon during the 1968 Tet counteroffensive and two Valorous Unit Awards for battles along the Cambodian border and in Binh Duong Province. In addition, Troop D, 3rd Squadron received a Presidential Unit Citation for gallantry in Tay Ninh Province and Troop A, 3rd Squadron received a Valorous Unit Award for the Cu Chi District.
Troop F, 4th Cavalry was activated on 10 February 1971 in Vietnam and assigned to the 25th Division as a separate air cavalry troop. After the 25th Division left Vietnam, Troop F remained assigned to the 25th while serving with the 11th and 12th Aviation Groups. It was one of the last Army units to leave Vietnam, on 26 February 1973.
In the mid-1980s the Army decided to move to a unit replacement system whereby soldiers would spend the majority of their careers rotating between the elements of a regiment located in the United States and overseas. To set up the proper alignment of like units, old and historic long-term assignments of regiments to certain divisions were terminated. The 3rd Squadron, 4th Cavalry, which had served with the 25th Division since 1957, was inactivated on 16 March 1987 because under the unit replacement system 4th Cavalry elements would be assigned only to heavy divisions, and the 25th had been reorganized as a light division. The 4th Squadron was reactivated in 1986 and assigned to the 3rd Infantry Division (Mechanized) in Germany. Loud complaints over the inactivation of the 3rd Squadron from senior Army leaders who had served with it led the Army to inactivate the 4th Squadron and replace it with the 3rd Squadron in 1989. One of the missions of both squadrons was patrolling the inner-German border until the collapse of East Germany in 1990.
Meanwhile, the 2nd Squadron, which had been inactivated in Germany in 1972 after serving in both the 4th Armored Division and then the 1st Armored Division, was reactivated with the 24th Infantry Division (Mechanized) at Fort Stewart, Georgia in January 1987.
THE GULF WAR
Three 4th Cavalry elements participated in the Gulf War. The 1st Squadron, 4th Cavalry continued to serve as the reconnaissance squadron for the 1st Infantry Division (Mechanized) assigned to the VII Corps. The 2nd Squadron, 4th Cavalry was the reconnaissance squadron for the 24th Infantry Division (Mechanized) assigned to the XVIII Airborne Corps. Troop D, 4th Cavalry, the reconnaissance troop of the 197th Infantry Brigade (which was attached to the 24th Division) was placed under operational control of the 2nd Squadron.
The ground attack of Desert Storm was launched shortly after midnight on 24 February 1991. The attack began in the XVIII Airborne Corps sector on the extreme left flank of the Coalition forces. The 24th Division had the critical mission of blocking the Euphrates River valley to cut off the escape of Iraqi forces in Kuwait and then attacking east with VII Corps to destroy the Republican Guard divisions. The 2nd Squadron, 4th Cavalry crossed the border six hours ahead of the main attack and scouted north along the two axes of advance. The 2nd Squadron found little evidence of the enemy and the division made rapid progress. With the 4th Cavalry screening 5 to 10 miles in front of the attacking brigades, the 24th continued north until around midnight, when the division was halted 75 miles inside Iraq. By 27 February, the fourth day of combat, the 24th Division had destroyed all Iraqi units it had encountered, securing the Euphrates River valley, and had trapped most of the Republican Guard divisions for the two corps to destroy.
On the first day of the ground attack the VII Corps ordered the 1st Infantry Division to breach the main enemy lines. The Big Red One soon destroyed some ten miles of enemy defenses and created a breach in the Iraqi lines for the VII Corps to pour through. Swinging east, the corps, with the 1st Division on the south, passed through the cavalry screen and attacked the Iraqi forces. By 27 February the 1st Division had destroyed two armored divisions. The 1st Squadron, 4th Cavalry then set up blocking positions on the Al Basrah-Kuwait City highway, preventing Iraqi forces from escaping from Kuwait. The squadron received a Valorous Unit Award for its actions during Desert Storm.
A cease-fire was declared at 0800 on 28 February 1991, ending the quickest and most overpowering victory in U.S. Army history. The 4th Cavalry elements that participated in Desert Storm (the 1st Squadron, the 2nd Squadron, and Troop D) all performed their missions with courage and outstanding professionalism, adding to the reputation of the 4th Cavalry as one of the Army's finest regiments.
THE 4TH CAVALRY TODAY
The deep drawdown of the Army beginning in the mid-1980s and continuing after Desert Storm, combined with burgeoning peacekeeping commitments, led to the decision to halt the implementation of the unit replacement system. Unfortunately, by the time the decision was made the Army had completed a massive reassignment of regiments, which had often terminated long-standing historical associations between regiments and divisions. The inactivation of the 3rd Squadron, 4th Cavalry after serving with the 25th Division for thirty years is a case in point. By 1996 the Army, recognizing the damage such moves had done to esprit de corps, reassigned many units back to their traditional parent organizations. Thus the 3rd Squadron, which had served with the 3rd Infantry Division since 1989, including a tour in Bosnia, was reassigned back to the 25th Division.
The post-Desert Storm drawdown did not leave the 4th Cavalry unscathed. The Army inactivated the 24th Infantry Division (Mechanized) in February 1996, with the concurrent inactivation of the 2nd Squadron, 4th Cavalry. With the earlier inactivation of the 197th Infantry Brigade, Troop D, 4th Cavalry was also inactivated. Troop E, 4th Cavalry was inactivated on 5 June 1994 when the decision was made to remove combat units from the Army Reserve.
Currently the 4th Cavalry has five elements on active duty. The 1st Squadron, 4th Cavalry is the reconnaissance squadron assigned to the 1st Infantry Division at Conn Barracks in Schweinfurt, Germany. The 1st Squadron's combat elements consist of three armored cavalry troops and two air cavalry troops. Conn Barracks is considered to be the 4th Cavalry regimental home base as it is where the 4th Cavalry regimental colors are located with the 1st Squadron. The 3rd Squadron, 4th Cavalry is the reconnaissance squadron of the 25th Infantry Division (Light) and is stationed at Wheeler Army Air Field, Hawaii. The 3rd Squadron's combat elements consist of two air cavalry troops and a heavily armed ground cavalry troop mounted on high mobility multipurpose wheeled vehicles (HMMWV).
On 16 January 1999, Troop E, 4th Cavalry was reactivated as the reconnaissance troop for the 2nd Brigade, 1st Infantry Division stationed in Schweinfurt, Germany. Troop F, 4th Cavalry was also reactivated on 16 January 1999 as the reconnaissance troop for the 3rd Brigade, 1st Infantry Division in Vilseck, Germany. Troop D, 4th Cavalry was reactivated 25 February 2000 as the reconnaissance troop for the 1st Brigade, 1st Infantry Division stationed at Fort Riley, Kansas.
Additionally, the U.S. Army sponsors and maintains B Troop, 4th U.S. Cavalry (Memorial) at Fort Huachuca, Arizona. Organized in 1973, B Troop appears at military and civilian ceremonies and functions throughout the Southwest to promote the heritage and traditions of the U.S. Army during the Indian Wars. The memorial troop is equipped and mounted identically to B Troop, 4th Cavalry in 1886, when it participated in the Geronimo Campaign under the command of Captain Henry W. Lawton. Active duty soldiers and Department of the Army civilians wear authentic 1886 cavalry uniforms and are armed with the cavalry weapons of that era, and the horses are saddled and bridled with equally authentic equipment.
Soldiers who have served in the 4th Cavalry can take great pride in having contributed to the record of one of the finest regiments in the U.S. Army. Today's active duty 4th Cavalrymen and the volunteers of B Troop (Memorial) continue to add to and perpetuate the magnificent history of the 4th Cavalry Regiment.
Sixteenth Amendment to the United States Constitution
The Sixteenth Amendment (Amendment XVI) to the United States Constitution allows the Congress to levy an income tax without apportioning it among the states or basing it on the United States Census. This amendment exempted income taxes from the constitutional requirements regarding direct taxes, after income taxes on rents, dividends, and interest were ruled to be direct taxes in the court case of Pollock v. Farmers' Loan & Trust Co. (1895). The amendment was adopted on February 3, 1913.
The Congress shall have power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several States, and without regard to any census or enumeration.
Other Constitutional provisions regarding taxes
Article I, Section 2, Clause 3:

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.
Article I, Section 8, Clause 1:
The Congress shall have Power to lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States.
Article I, Section 9, Clause 4:
No Capitation, or other direct, Tax shall be laid, unless in proportion to the Census or Enumeration herein before directed to be taken.
This clause essentially refers to a tax on property, such as a tax based on the value of land, as well as to a capitation (a head tax).
Article I, Section 9, Clause 5:
No Tax or Duty shall be laid on Articles exported from any State.
Income taxes before the Pollock case
Until 1913, customs duties (tariffs) and excise taxes were the primary sources of federal revenue. During the War of 1812, Secretary of the Treasury Alexander J. Dallas made the first public proposal for an income tax, but it was never implemented. The Congress did introduce an income tax to fund the Civil War through the Revenue Act of 1861. It levied a flat tax of three percent on annual income above $800. This act was replaced the following year with the Revenue Act of 1862, which levied a graduated tax of three to five percent on income above $600 and specified a termination of income taxation in 1866. The Civil War income taxes, which expired in 1872, proved to be highly lucrative, drawing mostly from the more industrialized states, with New York, Pennsylvania, and Massachusetts generating about 60 percent of the total revenue collected. During the two decades following the expiration of the Civil War income tax, the Greenback movement, the Labor Reform Party, the Populist Party, the Democratic Party and many others called for a graduated income tax.
The Socialist Labor Party advocated a graduated income tax in 1887. The Populist Party "demand[ed] a graduated income tax" in its 1892 platform. The Democratic Party, led by William Jennings Bryan, advocated the income tax law passed in 1894, and proposed an income tax in its 1908 platform.
Before Pollock v. Farmers' Loan & Trust Co., all income taxes had been considered to be indirect taxes imposed without respect to geography, unlike direct taxes, that have to be apportioned among the states according to population.
The Pollock case
In 1894, an amendment was attached to the Wilson–Gorman Tariff Act that attempted to impose a federal tax of two percent on incomes over $4,000 (equal to $109,000 in 2017). The federal income tax was strongly favored in the South, and it was moderately supported in the eastern North Central states, but it was strongly opposed in the Far West and the Northeastern States (with the exception of New Jersey). The tax was derided as "un-Democratic, inquisitorial, and wrong in principle."
In Pollock v. Farmers' Loan & Trust Co., the U.S. Supreme Court declared certain taxes on incomes — such as those on property under the 1894 Act — to be unconstitutionally unapportioned direct taxes. The Court reasoned that a tax on income from property should be treated as a tax on "property by reason of its ownership" and so should be required to be apportioned. The reasoning was that taxes on the rents from land, the dividends from stocks, and so forth, burdened the property generating the income in the same way that a tax on "property by reason of its ownership" burdened that property.
After Pollock, while income taxes on wages (as indirect taxes) were still not required to be apportioned by population, taxes on interest, dividends, and rental income were required to be apportioned by population. The Pollock ruling made the source of the income (e.g., property versus labor, etc.) relevant in determining whether the tax imposed on that income was deemed to be "direct" (and thus required to be apportioned among the states according to population) or, alternatively, "indirect" (and thus required only to be imposed with geographical uniformity).
Dissenting in Pollock, Justice John Marshall Harlan stated:
When, therefore, this court adjudges, as it does now adjudge, that Congress cannot impose a duty or tax upon personal property, or upon income arising either from rents of real estate or from personal property, including invested personal property, bonds, stocks, and investments of all kinds, except by apportioning the sum to be so raised among the States according to population, it practically decides that, without an amendment of the Constitution — two-thirds of both Houses of Congress and three-fourths of the States concurring — such property and incomes can never be made to contribute to the support of the national government.
Members of Congress responded to Pollock by expressing widespread concern that many of the wealthiest Americans had consolidated too much economic power.
On June 16, 1909, President William Howard Taft, in an address to the Sixty-first Congress, proposed a two percent federal income tax on corporations by way of an excise tax and a constitutional amendment to allow the previously enacted income tax.
An income tax amendment to the Constitution was first proposed by Senator Norris Brown of Nebraska. He submitted two proposals, Senate Resolutions Nos. 25 and 39. The amendment proposal finally accepted was Senate Joint Resolution No. 40, introduced by Senator Nelson W. Aldrich of Rhode Island, the Senate majority leader and Finance Committee Chairman.
On July 12, 1909, the resolution proposing the Sixteenth Amendment was passed by the Congress and was submitted to the state legislatures. Support for the income tax was strongest in the western and southern states and opposition was strongest in the northeastern states. Supporters of the income tax believed that it would be a much better method of gathering revenue than tariffs, which were the primary source of revenue at the time. From well before 1894, Democrats, Progressives, Populists and other left-oriented parties argued that tariffs disproportionately affected the poor, interfered with prices, were unpredictable, and were an intrinsically limited source of revenue. The South and the West tended to support income taxes because their residents were generally less prosperous, more agricultural and more sensitive to fluctuations in commodity prices. A sharp rise in the cost of living between 1897 and 1913 greatly increased support for the idea of income taxes, including in the urban Northeast. A growing number of Republicans also began supporting the idea, notably Theodore Roosevelt and the "Insurgent" Republicans (who would go on to form the Progressive Party). These Republicans were driven mainly by a fear of the increasingly large and sophisticated military forces of Japan, Britain and the European powers, by their own imperial ambitions, and by the perceived need to defend American merchant ships. Moreover, these progressive Republicans were, as the name suggests, convinced that central governments could play a positive role in national economies. A bigger government and a bigger military, of course, required a correspondingly larger and steadier source of revenue to support them.
Opposition to the Sixteenth Amendment was led by establishment Republicans because of their close ties to wealthy industrialists, although not even they were uniformly opposed to the general idea of a permanent income tax. In 1910, New York Governor Charles Evans Hughes, shortly before becoming a Supreme Court Justice, spoke out against the income tax amendment. While he supported the idea of a federal income tax, Hughes believed the words "from whatever source derived" in the proposed amendment implied that the federal government would have the power to tax state and municipal bonds. He believed this would excessively centralize governmental power and "would make it impossible for the state to keep any property".
Between 1909 and 1913, several conditions favored passage of the Sixteenth Amendment. Inflation was high and many blamed federal tariffs for the rising prices. The Republican Party was divided and weakened by the loss of Roosevelt and the Insurgents who joined the Progressive party, a problem that blunted opposition even in the Northeast. The Democrats won both houses and the Presidency in 1912 and the country was generally in a left-leaning mood, with the Socialist Party winning a seat in the House in 1910 and polling six percent of the popular presidential vote in 1912.
Three advocates for a federal income tax ran in the presidential election of 1912. On February 25, 1913, Secretary of State Philander Knox proclaimed that the amendment had been ratified by three-fourths of the states and so had become part of the Constitution. The Revenue Act of 1913 was enacted shortly thereafter. The following thirty-six states ratified the amendment:
- Alabama (August 10, 1909)
- Kentucky (February 8, 1910)
- South Carolina (February 19, 1910)
- Illinois (March 1, 1910)
- Mississippi (March 7, 1910)
- Oklahoma (March 10, 1910)
- Maryland (April 8, 1910)
- Georgia (August 3, 1910)
- Texas (August 16, 1910)
- Ohio (January 19, 1911)
- Idaho (January 20, 1911)
- Oregon (January 23, 1911)
- Washington (January 26, 1911)
- Montana (January 27, 1911)
- Indiana (January 30, 1911)
- California (January 31, 1911)
- Nevada (January 31, 1911)
- South Dakota (February 1, 1911)
- Nebraska (February 9, 1911)
- North Carolina (February 11, 1911)
- Colorado (February 15, 1911)
- North Dakota (February 17, 1911)
- Michigan (February 23, 1911)
- Iowa (February 24, 1911)
- Kansas (March 2, 1911)
- Missouri (March 16, 1911)
- Maine (March 31, 1911)
- Tennessee (April 7, 1911)
- Arkansas (April 22, 1911), after having previously rejected the amendment
- Wisconsin (May 16, 1911)
- New York (July 12, 1911)
- Arizona (April 3, 1912)
- Minnesota (June 11, 1912)
- Louisiana (June 28, 1912)
- West Virginia (January 31, 1913)
- Delaware (February 3, 1913)
Ratification (by the requisite 36 states) was completed on February 3, 1913 with the ratification by Delaware. The amendment was subsequently ratified by the following states, bringing the total number of ratifying states to forty-two of the forty-eight then existing:
- 37. New Mexico (February 3, 1913)
- 38. Wyoming (February 3, 1913)
- 39. New Jersey (February 4, 1913)
- 40. Vermont (February 19, 1913)
- 41. Massachusetts (March 4, 1913)
- 42. New Hampshire (March 7, 1913), after rejecting the amendment on March 2, 1911
The legislatures of the following states rejected the amendment without ever subsequently ratifying it:
- Connecticut
- Rhode Island
- Utah
The legislatures of the following states never considered the proposed amendment:
- Florida
- Pennsylvania
- Virginia
Professor Sheldon D. Pollack at the University of Delaware has written:
- On February 25, 1913, in the closing days of the Taft administration, Secretary of State Philander C. Knox, a former Republican senator from Pennsylvania and attorney general under McKinley and Roosevelt, certified that the amendment had been properly ratified by the requisite number of state legislatures. Three more states ratified the amendment soon after, and eventually the total reached 42. The remaining six states either rejected the amendment or took no action at all. Notwithstanding the many frivolous claims repeatedly advanced by so-called tax protestors, the Sixteenth Amendment to the Constitution was duly ratified as of February 3, 1913. With that, the Pollock decision was overturned, restoring the status quo ante. Congress once again had the “power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several States, and without regard to any census or enumeration.”
From William D. Andrews, Professor of Law, Harvard Law School:
- In 1913 the Sixteenth Amendment to the Constitution was adopted, overruling Pollock, and the Congress then levied an income tax on both corporate and individual incomes.
From Professor Boris Bittker, who was a tax law professor at Yale Law School:
- As construed by the Supreme Court in the Brushaber case, the power of Congress to tax income derives from Article I, Section 8, Clause 1, of the original Constitution rather than from the Sixteenth Amendment; the latter simply eliminated the requirement that an income tax, to the extent that it is a direct tax, must be apportioned among the states. A corollary of this conclusion is that any direct tax that is not imposed on "income" remains subject to the rule of apportionment. Because the Sixteenth Amendment does not purport to define the term "direct tax," the scope of that constitutional phrase remains as debatable as it was before 1913; but the practical significance of the issue was greatly reduced once income taxes, even if direct, were relieved from the requirement of apportionment.
Professor Erik Jensen at Case Western Reserve University Law School has written:
- It [the Sixteenth Amendment] was a response to the Income Tax Cases (Pollock v. Farmers' Loan & Trust Co.), and it exempts only "taxes on incomes" from the apportionment rule that otherwise applies to direct taxes.
Professor Calvin H. Johnson, a tax professor at the University of Texas School of Law, has written:
- The Sixteenth Amendment to the Constitution, ratified in 1913, was written to allow Congress to tax income without the hobbling apportionment requirement.
- [ . . . ]
- Pollock was itself overturned by the Sixteenth Amendment as to apportionment of income....
From Gale Ann Norton:
- Courts have essentially abandoned the permissive interpretation created in Pollock. Subsequent cases have viewed the Sixteenth Amendment as a rejection of Pollock's definition of "direct tax". The apportionment requirement again applies only to real estate and capitation taxes. Even if the Sixteenth Amendment is not viewed as narrowing the definition of direct taxes, it at least introduces an additional consideration to analysis under the Apportionment Clause. For the Court to strike an unapportioned tax, plaintiffs must establish not only that a tax is a direct tax, but also that it is not in the subset of direct taxes known as an income tax.
From Alan O. Dixler:
- In Brushaber, the Supreme Court validated the first post-16th Amendment income tax. Chief Justice White, who as an associate justice had dissented articulately in Pollock, wrote for a unanimous Court. Upholding the income tax provisions of the tariff act of October 3, 1913, Chief Justice White observed that the 16th Amendment did not give Congress any new power to lay and collect an income tax; rather, the 16th Amendment permitted Congress to do so without apportionment ....
Congress may impose taxes on income from any source without having to apportion the total dollar amount of tax collected from each state according to each state's population in relation to the total national population.
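The apportionment requirement the amendment removed is purely arithmetical: a direct tax of a fixed total would have to be divided among the states in proportion to their populations, regardless of where the taxed income or property was actually located. A minimal sketch of that division (the state labels and population figures below are purely illustrative, not historical data):

```python
def apportion(total_tax, populations):
    """Divide a direct tax among states in proportion to population,
    as the apportionment rule would require for a direct tax."""
    national_pop = sum(populations.values())
    return {state: total_tax * pop / national_pop
            for state, pop in populations.items()}

# Illustrative figures only: a $1,000,000 direct tax over three states.
shares = apportion(1_000_000, {"A": 2_000_000, "B": 1_000_000, "C": 1_000_000})
print(shares)  # {'A': 500000.0, 'B': 250000.0, 'C': 250000.0}
```

Under Pollock, a tax on income from property would have had to follow this population-based division however unevenly that income was actually distributed among the states; the Sixteenth Amendment removed the requirement for taxes on incomes.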
In Wikoff v. Commissioner, the United States Tax Court said:
[I]t is immaterial, with respect to Federal income taxes, whether the tax is a direct or an indirect tax. Mr. Wikoff [the taxpayer] relied on the Supreme Court's decision in Pollock v. Farmers' Loan & Trust Co. [ . . . ] but the effect of that decision has been nullified by the enactment of the 16th Amendment.
In Abrams v. Commissioner, the Tax Court said:
Since the ratification of the Sixteenth Amendment, it is immaterial with respect to income taxes, whether the tax is a direct or indirect tax. The whole purpose of the Sixteenth Amendment was to relieve all income taxes when imposed from [the requirement of] apportionment and from [the requirement of] a consideration of the source whence the income was derived.
The federal courts' interpretations of the Sixteenth Amendment have changed considerably over time and there have been many disputes about the applicability of the amendment.
The Brushaber case
In Brushaber v. Union Pacific Railroad, 240 U.S. 1 (1916), the Supreme Court ruled that (1) the Sixteenth Amendment removes the Pollock requirement that certain income taxes (such as taxes on income "derived from real property" that were the subject of the Pollock decision), be apportioned among the states according to population; (2) the federal income tax statute does not violate the Fifth Amendment's prohibition against the government taking property without due process of law; (3) the federal income tax statute does not violate the Article I, Section 8, Clause 1 requirement that excises, also known as indirect taxes, be imposed with geographical uniformity.
The Kerbaugh-Empire Co. case
In Bowers v. Kerbaugh-Empire Co. (1926), the Supreme Court stated:

It was not the purpose or the effect of that amendment to bring any new subject within the taxing power. Congress already had the power to tax all incomes. But taxes on incomes from some sources had been held to be "direct taxes" within the meaning of the constitutional requirement as to apportionment. [citations omitted] The Amendment relieved from that requirement and obliterated the distinction in that respect between taxes on income that are direct taxes and those that are not, and so put on the same basis all incomes "from whatever source derived". [citations omitted] "Income" has been taken to mean the same thing as used in the Corporation Excise Tax of 1909 (36 Stat. 112), in the Sixteenth Amendment, and in the various revenue acts subsequently passed. [citations omitted] After full consideration, this court declared that income may be defined as gain derived from capital, from labor, or from both combined, including profit gained through sale or conversion of capital.
The Glenshaw Glass case
In Commissioner v. Glenshaw Glass Co., 348 U.S. 426 (1955), the Supreme Court laid out what has become the modern understanding of what constitutes "gross income" to which the Sixteenth Amendment applies, declaring that income taxes could be levied on "accessions to wealth, clearly realized, and over which the taxpayers have complete dominion." Under this definition, any increase in wealth (whether through wages, benefits, bonuses, sale of stock or other property at a profit, bets won, lucky finds, or awards of punitive damages in a lawsuit, including qui tam actions) is within the definition of income, unless the Congress makes a specific exemption, as it has for items such as life insurance proceeds received by reason of the death of the insured party, gifts, bequests, devises and inheritances, and certain scholarships.
Income taxation of wages, etc.
Federal courts have ruled that the Sixteenth Amendment allows a direct tax on "wages, salaries, commissions, etc. without apportionment."
The Penn Mutual case
Although the Sixteenth Amendment is often cited as the "source" of the congressional power to tax incomes, at least one court has reiterated the point made in Brushaber and other cases that the Sixteenth Amendment itself did not grant the Congress the power to tax incomes, a power the Congress had since 1789, but only removed the possible requirement that any income tax be apportioned among the states according to their respective populations. In Penn Mutual Indemnity, the United States Tax Court stated:
In dealing with the scope of the taxing power the question has sometimes been framed in terms of whether something can be taxed as income under the Sixteenth Amendment. This is an inaccurate formulation... and has led to much loose thinking on the subject. The source of the taxing power is not the Sixteenth Amendment; it is Article I, Section 8, of the Constitution.
The United States Court of Appeals for the Third Circuit agreed with the Tax Court, stating:
It did not take a constitutional amendment to entitle the United States to impose an income tax. Pollock v. Farmers' Loan & Trust Co., 157 U. S. 429, 158 U. S. 601 (1895), only held that a tax on the income derived from real or personal property was so close to a tax on that property that it could not be imposed without apportionment. The Sixteenth Amendment removed that barrier. Indeed, the requirement for apportionment is pretty strictly limited to taxes on real and personal property and capitation taxes.
It is not necessary to uphold the validity of the tax imposed by the United States that the tax itself bear an accurate label. Indeed, the tax upon the distillation of spirits, imposed very early by federal authority, now reads and has read in terms of a tax upon the spirits themselves, yet the validity of this imposition has been upheld for a very great many years.
It could well be argued that the tax involved here [an income tax] is an "excise tax" based upon the receipt of money by the taxpayer. It certainly is not a tax on property and it certainly is not a capitation tax; therefore, it need not be apportioned. We do not think it profitable, however, to make the label as precise as that required under the Food and Drug Act. Congress has the power to impose taxes generally, and if the particular imposition does not run afoul of any constitutional restrictions then the tax is lawful, call it what you will.
The Murphy case
On December 22, 2006, a three-judge panel of the United States Court of Appeals for the District of Columbia Circuit vacated its unanimous decision (of August 2006) in Murphy v. Internal Revenue Service and United States. In an unrelated matter, the court had also granted the government's motion to dismiss Murphy's suit against the "Internal Revenue Service." Under federal sovereign immunity, a taxpayer may sue the federal government, but not a government agency, officer, or employee (with some exceptions). The Court ruled:
Insofar as Congress has waived sovereign immunity with respect to suits for tax refunds, that provision specifically contemplates only actions against the "United States". Therefore, we hold the IRS, unlike the United States, may not be sued eo nomine in this case.
An exception to federal sovereign immunity is in the United States Tax Court, in which a taxpayer may sue the Commissioner of Internal Revenue. The original three-judge panel then agreed to rehear the case itself. In its original decision, the Court had ruled that the statute was unconstitutional under the Sixteenth Amendment to the extent that it purported to tax, as income, a recovery for a nonphysical personal injury for mental distress and loss of reputation not received in lieu of taxable income such as lost wages or earnings.
Because the August 2006 opinion was vacated, the Court of Appeals did not hear the case en banc.
On July 3, 2007, the Court (through the original three-judge panel) ruled (1) that the taxpayer's compensation was received on account of a nonphysical injury or sickness; (2) that gross income under section 61 of the Internal Revenue Code does include compensatory damages for nonphysical injuries, even if the award is not an "accession to wealth," (3) that the income tax imposed on an award for nonphysical injuries is an indirect tax, regardless of whether the recovery is restoration of "human capital," and therefore the tax does not violate the constitutional requirement of Article I, Section 9, Clause 4, that capitations or other direct taxes must be laid among the states only in proportion to the population; (4) that the income tax imposed on an award for nonphysical injuries does not violate the constitutional requirement of Article I, Section 8, Clause 1, that all duties, imposts and excises be uniform throughout the United States; (5) that under the doctrine of sovereign immunity, the Internal Revenue Service may not be sued in its own name.
The Court stated that "[a]lthough the 'Congress cannot make a thing income which is not so in fact,' [ . . . ] it can label a thing income and tax it, so long as it acts within its constitutional authority, which includes not only the Sixteenth Amendment but also Article I, Sections 8 and 9." The court ruled that Ms. Murphy was not entitled to the tax refund she claimed, and that the personal injury award she received was "within the reach of the Congressional power to tax under Article I, Section 8 of the Constitution" – even if the award was "not income within the meaning of the Sixteenth Amendment". See also the Penn Mutual case cited above.
On April 21, 2008, the U.S. Supreme Court declined to review the decision by the Court of Appeals.
- Knowlton v. Moore 178 U.S. 41 (1900) and Flint v. Stone Tracy Co. 220 U.S. 107 (1911)
- Hylton v. United States 3 U.S. 171 (1796)
- Buenker, John D. 1981. "The Ratification of the Sixteenth Amendment." The Cato Journal. 1:1. PDF
- Baack, Bennet T. and Edward John Ray. 1985. "Special Interests and the Adoption of the Income Tax in the United States." The Journal of Economic History V. 45, No. 3. pp. 607-625.
- "On This Day: Congress Passes Act Creating First Income Tax". Findingdulcinea.com. Retrieved 2012-03-26.
- Baack and Ray, p. 608.
- "Socialist Labor Party Platform" (PDF). Retrieved 2012-03-26.
- "Populist Party Platform, 1892". Historymatters.gmu.edu. Retrieved 2012-03-26.
- Speeches of William Jennings Bryan, pp. 159-179. Books.google.com. 1909. Retrieved 2012-03-26.
- 1908 Democratic party platform Archived January 13, 2008 at the Wayback Machine
- Commentary, James W. Ely, Jr., on the case of Springer v. United States, in answers.com, at
- "Again the situation is aptly illustrated by the various acts taxing incomes derived from property of every kind and nature which were enacted beginning in 1861, and lasting during what may be termed the Civil War period. It is not disputable that these latter tax laws were classed under the head of excises, duties, and imposts because it was assumed that they were of that character, although putting a tax burden on income of every kind, including that derived from property real or personal, since they were not taxes directly on property because of its ownership." Brushaber v. Union Pac. Railroad, 240 U.S. 1 (1916), at 15
- Consumer Price Index (estimate) 1800–. Federal Reserve Bank of Minneapolis. Retrieved November 10, 2015.
- Baack and Ray, p. 610
- "Mr. Cockran's Final Effort" (PDF). New York Times. 1894-01-31.
- Read a description of the decision at the Tax History Museum
- "Justice Harlan's dissenting opinion in ''Pollock''". Law.cornell.edu. Retrieved 2012-03-26.
- See the quotes from Theodore Roosevelt at the Tax History Museum
- "Taft Address of June 16, 1909 (American Presidency Project)". Presidency.ucsb.edu. 1909-06-16. Retrieved 2012-03-26.
- President Taft Presidential addresses. Books.google.com. 1910. Retrieved 2012-03-26.
- Volume 36, Statutes at Large, 61st Congress Session I, Senate Joint Resolution No. 40, p. 184, approved July 31, 1909
- Senate Joint Resolution 40, 36 Stat. 184.
- "The Ratification of the Federal Income Tax Amendment, John D. Buenker" (PDF). Retrieved 2012-03-26.
- Buenker, p. 186.
- Buenker, p. 189
- Baack and Jay, p. 613-614
- Buenker, p. 184
- "Arthur A. Ekirch, Jr., "The Sixteenth Amendment: The Historical Background," p. 175, ''Cato Journal'', Vol. 1, No. 1, Spring 1981." (PDF). Retrieved 2012-03-26.
- Buenker, pp. 219-221
- Adam Young, "The Origin of the Income Tax", Ludwig von Mises Institute, Sept. 7, 2004
- "FindLaw: U.S. Constitution: Amendments". FindLaw. Retrieved 2012-03-26.
- "Ratification of Constitutional Amendments". U.S. Constitution Online. Retrieved April 20, 2012.
- See Senate Document # 108-17, 108th Congress, Second Session, The Constitution of the United States of America: Analysis and Interpretation: Analysis of Cases Decided by the Supreme Court of the United States to June 28, 2002, at pp. 33-34, footnote 8, Congressional Research Service, Library of Congress, U.S. Gov't Printing Office (2004).
- "Virginia House Opposes Federal Clause by 54 to 37", The Washington Post, March 8, 1910
- Boris Bittker, "Constitutional Limits on the Taxing Power of the Federal Government," The Tax Lawyer, Fall 1987, Vol. 41, No. 1, p. 3 (American Bar Association) (Pollock case "was in effect reversed by the sixteenth amendment")
- "The Sixteenth Amendment to the Constitution overruled Pollock [ . . . ]" Graf v. Commissioner, 44 T.C.M. (CCH) 66, TC Memo. 1982-317, CCH Dec. 39,080(M) (1982).
- Sheldon D. Pollack, "Origins of the Modern Income Tax, 1894-1913," 66 Tax Lawyer 295, 323-324, Winter 2013 (Amer. Bar Ass'n) (footnotes omitted; italics in original).
- William D. Andrews, Basic Federal Income Taxation, p. 2, Little, Brown and Company (3d ed. 1985).
- Boris I. Bittker, Martin J. McMahon, Jr. and Lawrence A. Zelenak, Federal Income Taxation of Individuals (2d ed. 2006) (emphasis added).
- Erik M. Jensen, "The Taxing Power, The Sixteenth Amendment, And the Meaning of 'Incomes'", Oct. 4, 2002, Tax Analysts (footnotes not reproduced).
- Calvin H. Johnson, "Purging Out Pollock: The Constitutionality of Federal Wealth or Sales Tax", Dec. 27, 2002, Tax Analysts.
- Gale Ann Norton, "The Limitless Federal Taxing Power," Vol. 8 Harvard Journal of Law and Public Policy 591 (Summer, 1985) (footnotes not reproduced).
- Alan O. Dixler, "Direct Taxes Under the Constitution: A Review of the Precedents," Nov. 20, 2006, Tax Analysts.
- "Findlaw: Sixteenth Amendment, History and Purpose of the Amendment". Caselaw.lp.findlaw.com. Retrieved 2012-03-26.
- Wikoff v. Commissioner, 37 T.C.M. (CCH) 1539, T.C. Memo. 1978-372 (1978).
- 82 T.C. 403, CCH Dec. 41,031 (1984)
- "As construed by the Supreme Court in the Brushaber case, the power of Congress to tax income derives from Article I, Section 8, Clause 1 of the Constitution, rather than from the Sixteenth Amendment; the latter simply eliminated the requirement that an income tax, to the extent that it is a direct tax, must be apportioned among the states." Boris I. Bittker, Martin J. McMahon, Jr. & Lawrence A. Zelenak, Federal Income Taxation of Individuals, ch. 1, paragr. 1.01[a], Research Institute of America (2d ed. 2005), as retrieved from 2002 WL 1454829 (W. G. & L.).
- 26 U.S.C. § 101.
- 26 U.S.C. § 102.
- 26 U.S.C. § 117.
- Parker v. Commissioner, 724 F.2d 469, 84-1 U.S. Tax Cas. (CCH) paragr. 9209 (5th Cir. 1984) (closing parenthesis in original has been omitted). For other court decisions upholding the taxability of wages, salaries, etc. see United States v. Connor, 898 F.2d 942, 90-1 U.S. Tax Cas. (CCH) paragr. 50,166 (3d Cir. 1990); Perkins v. Commissioner, 746 F.2d 1187, 84-2 U.S. Tax Cas. (CCH) paragr. 9898 (6th Cir. 1984); White v. United States, 2005-1 U.S. Tax Cas. (CCH) paragr. 50,289 (6th Cir. 2004), cert. denied, ____ U.S. ____ (2005); Granzow v. Commissioner, 739 F.2d 265, 84-2 U.S. Tax Cas. (CCH) paragr. 9660 (7th Cir. 1984); Waters v. Commissioner, 764 F.2d 1389, 85-2 U.S. Tax Cas. (CCH) paragr. 9512 (11th Cir. 1985); United States v. Buras, 633 F.2d 1356, 81-1 U.S. Tax Cas. (CCH) paragr. 9126 (9th Cir. 1980).
- Penn Mutual Indemnity Co. v. Commissioner, 32 T.C. 653 at 659 (1959), aff'd, 277 F.2d 16, 60-1 U.S. Tax Cas. (CCH) paragr. 9389 (3d Cir. 1960).
- Penn Mutual Indemnity Co. v. Commissioner, 277 F.2d 16, 60-1 U.S. Tax Cas. (CCH) paragr. 9389 (3d Cir. 1960) (footnotes omitted).
- Order, Dec. 22, 2006, the ruling of Murphy v. Internal Revenue Service and United States, U.S. Court of Appeals for the District of Columbia Circuit.
- 460 F.3d 79, 2006-2 U.S. Tax Cas. (CCH) paragr. 50,476, 2006 WL 2411372 (D.C. Cir. August 22, 2006).
- (Murphy v. United States)
- 26 U.S.C. § 61 (Murphy v United States, on rehearing)
- Opinion on rehearing, July 3, 2007, Murphy v. Internal Revenue Service and United States, case no. 05-5139, U.S. Court of Appeals for the District of Columbia Circuit, 2007-2 U.S. Tax Cas. (CCH) paragr. 50,531 (D.C. Cir. 2007)
- Opinion on rehearing, July 3, 2007, p. 16, Murphy v. Internal Revenue Service and United States, case no. 05-5139, U.S. Court of Appeals for the District of Columbia Circuit, 2007-2 U.S. Tax Cas. (CCH) paragr. 50,531 (D.C. Cir. 2007).
- Opinion on rehearing, July 3, 2007, p. 5-6, Murphy v. Internal Revenue Service and United States, case no. 05-5139, U.S. Court of Appeals for the District of Columbia Circuit, 2007-2 U.S. Tax Cas. (CCH) paragr. 50,531 (D.C. Cir. 2007).
- Denniston, Lyle (April 21, 2008). "Court to hear anti-dumping, sentencing cases". SCOTUSblog. Retrieved 21 April 2008.
- National Archives: Sixteenth Amendment
- Sixteenth Amendment and 1913 tax return form Images of original documents
- CRS Annotated Constitution: Sixteenth Amendment
- Pollock Decision The decision nullified by the Sixteenth Amendment
- Brushaber Decision Supreme Court opinion on the apportionment clause of the Constitution.
- Stanton Decision - no new power of taxation (affirming constitutionality of income tax after Sixteenth Amendment)
- History of the U.S. Tax System - Almanac of Policy Issues; annotated as "US Department of the Treasury Undated.".
History of terrorism
The history of terrorism is a history of well-known and historically significant individuals, entities, and incidents associated, whether rightly or wrongly, with terrorism. Scholars agree that terrorism is a disputed term, and very few of those labeled terrorists describe themselves as such. It is common for opponents in a violent conflict to describe the other side as terrorists or as practicing terrorism.
Depending on how broadly the term is defined, the roots and practice of terrorism can be traced at least to the 1st-century AD Sicarii Zealots, though some dispute whether the group, which assassinated collaborators with Roman rule in the province of Judea, was in fact terrorist. The first use in English of the term 'terrorism' occurred during the French Revolution's Reign of Terror, when the Jacobins, who ruled the revolutionary state, employed violence, including mass executions by guillotine, to compel obedience to the state and intimidate regime enemies. The association of the term only with state violence and intimidation lasted until the mid-19th century, when it began to be associated with non-governmental groups. Anarchism, often in league with rising nationalism and anti-monarchism, was the most prominent ideology linked with terrorism. Near the end of the 19th century, anarchist groups or individuals committed assassinations of a Russian Tsar and a U.S. President.
In the 20th century, terrorism continued to be associated with a vast array of anarchist, socialist, fascist and nationalist groups, many of them engaged in 'third world' anti-colonial struggles. Some scholars also labeled as terrorist the systematic internal violence and intimidation practiced by states such as the Stalinist Soviet Union and Nazi Germany.
Though many have been proposed, there is no consensus definition of the term "terrorism." This in part derives from the fact that the term is politically and emotionally charged, "a word with intrinsically negative connotations that is generally applied to one's enemies and opponents."
The term terrorist is believed to have originated during the Reign of Terror (September 5, 1793 – July 28, 1794) in France. It was a period of eleven months during the French Revolution when the ruling Jacobins employed violence, including mass executions by guillotine, in order to intimidate the regime's enemies and compel obedience to the state. The Jacobins, most famously Robespierre, sometimes referred to themselves as "terrorists". Some modern scholars, however, do not consider the Reign of Terror a form of terrorism, in part because it was carried out by the French state.
Scholars dispute whether the roots of terrorism date back to the 1st century and the Sicarii Zealots, to the 11th century and the Al-Hashshashin, to the 19th century and the Fenian Brotherhood and Narodnaya Volya, or to other eras. The Sicarii and the Hashshashin are described below, while the Fenian Brotherhood and Narodnaya Volya are discussed in the 19th Century sub-section. Other pre-Reign of Terror historical events sometimes associated with terrorism include the Gunpowder Plot, an attempt to destroy the English Parliament in 1605.
During the 1st century CE, the Jewish Zealots in Judaea Province rebelled, killing prominent collaborators with Roman rule. In 6 CE, according to contemporary historian Josephus, Judas of Galilee formed a small and more extreme offshoot of the Zealots, the Sicarii ("dagger men"). Their efforts were also directed against Jewish "collaborators," including temple priests, Sadducees, Herodians, and other wealthy elites. According to Josephus, the Sicarii would hide short daggers under their cloaks, mingle with crowds at large festivals, murder their victims, and then disappear into the panicked crowds. Their most successful assassination was that of Jonathan, the High Priest of Israel.
In the late 11th century, the Hashshashin (a.k.a. the Assassins) arose, an offshoot of the Isma'ili sect of Shia Muslims. Led by Hassan-i Sabbah and opposed to Fatimid rule, the Hashshashin militia seized Alamut and other fortress strongholds across Persia. Hashshashin forces were too small to challenge enemies militarily, so they assassinated city governors and military commanders in order to create alliances with militarily powerful neighbors. For example, they killed Janah al-Dawla, ruler of Homs, to please Ridwan of Aleppo, and assassinated Mawdud, Seljuk emir of Mosul, as a favor to the regent of Damascus. The Hashshashin also carried out assassinations as retribution. Under some definitions of terrorism, such assassinations do not qualify as terrorism, since killing a political leader does not intimidate political enemies or inspire revolt.
The Sons of Liberty was a clandestine group formed in Boston and New York City in the 1770s with a political agenda of independence for Britain's American colonies. The group engaged in several acts that could be considered terroristic and used the deeds for propaganda purposes.
On November 5, 1605, a group of conspirators led by Robert Catesby attempted to destroy the English Parliament at its State Opening by King James I. They planned in secret to detonate a large quantity of gunpowder placed beneath the Palace of Westminster. The gunpowder was procured and placed by Guy Fawkes. The group intended to carry out a coup by killing King James I and the members of both houses of Parliament, installing one of the king's children as a puppet monarch, and then restoring the Catholic faith to England. The conspirators had leased a coal cellar beneath the House of Lords and begun stockpiling gunpowder in 1604. Beyond its primary targets, the explosion would have killed hundreds, if not thousands, of Londoners, making it the most devastating act of terrorism in Britain's history and plunging the nation into a religious war. English spymasters uncovered the plot and caught Guy Fawkes with the gunpowder beneath Parliament. The other conspirators fled to Holbeach in Staffordshire, where a shootout with the authorities on November 8 led to the deaths of Robert Catesby, Thomas Percy and the brothers Christopher and John Wright; the rest were captured. Fawkes and seven others were tried and executed in January 1606. The planned attack became known as the Gunpowder Plot and is commemorated in Britain every November 5 with fireworks displays and large bonfires on which effigies of Guy Fawkes and the Pope are often burned. Comparisons are often drawn between the Gunpowder Plot and modern religious terrorism, such as the September 11, 2001 attacks in the United States.
Emergence of modern terrorism
Terrorism was associated with state terror and the Reign of Terror in France, until the mid-19th century when the term also began to be associated with non-governmental groups. Anarchism, often in league with rising nationalism, was the most prominent ideology linked with terrorism. Attacks by various anarchist groups led to the assassination of a Russian Tsar and a U.S. President.
In the 19th century, powerful, stable, and affordable explosives were developed, global integration reached unprecedented levels and often radical political movements became widely influential. The use of dynamite, in particular, inspired anarchists and was central to their strategic thinking.
One of the earliest groups to utilize modern terrorist techniques was arguably the Fenian Brotherhood and its offshoot the Irish Republican Brotherhood. They were both founded in 1858 as revolutionary, militant nationalist and Catholic groups, both in Ireland and amongst the emigre community in the United States.
After centuries of continued British rule, and influenced most recently by the devastating effects of the 1840s Irish potato famine, these revolutionary fraternal organisations were founded with the aim of establishing an independent republic in Ireland, and they began carrying out frequent acts of violence in metropolitan Britain to achieve their aims through intimidation.
In 1867, members of the movement's leadership were arrested and convicted for organizing an armed uprising. While they were being transferred to prison, the police van in which they were being transported was intercepted and a police sergeant was shot in the rescue. A bolder attempt to rescue another Irish radical, incarcerated in Clerkenwell Prison, was made the same year: an explosion intended to demolish the prison wall killed 12 people and caused many injuries. The bombing enraged the British public, causing a panic over the Fenian threat.
Although the Irish Republican Brotherhood condemned the Clerkenwell Outrage as a "dreadful and deplorable event", the organisation returned to bombings in Britain from 1881 to 1885 with the Fenian dynamite campaign, one of the first modern terror campaigns. Instead of earlier forms of terrorism based on political assassination, this campaign used modern, timed explosives with the express aim of sowing fear in the very heart of metropolitan Britain in order to achieve political gains (Prime Minister William Ewart Gladstone had been partly influenced by the Clerkenwell bombing to disestablish the Anglican Church in Ireland as a conciliatory gesture). The campaign also took advantage of the greater global integration of the times, and it was largely funded and organised by the Fenian Brotherhood in the United States.
The first police unit to combat terrorism was established in 1883 by the Metropolitan Police, initially as a small section of the Criminal Investigation Department. Known as the Special Irish Branch, it was trained in counter-terrorism techniques to combat the Irish Republican Brotherhood. The unit's name was changed to Special Branch as its remit steadily widened over the years.
Anarchism and "propaganda of the deed"
The concept of "propaganda of the deed" (or "propaganda by the deed", from the French propagande par le fait) advocated physical violence or other provocative public acts against political enemies in order to inspire mass rebellion or revolution. One of the first individuals associated with this concept, the Italian revolutionary Carlo Pisacane (1818–1857), wrote in his "Political Testament" (1857) that "ideas spring from deeds and not the other way around". Anarchist Mikhail Bakunin (1814–1876), in his "Letters to a Frenchman on the Present Crisis" (1870), stated that "we must spread our principles, not with words but with deeds, for this is the most popular, the most potent, and the most irresistible form of propaganda". The French anarchist Paul Brousse (1844–1912) popularized the phrase "propaganda of the deed"; in 1877 he cited as examples the 1871 Paris Commune and a workers' demonstration in Berne provocatively using the socialist red flag. By the 1880s, the slogan had begun to be used to refer to bombings, regicides and tyrannicides. Reflecting this new understanding of the term, in 1895 the Italian anarchist Errico Malatesta described "propaganda by the deed" (which he opposed the use of) as violent communal insurrections meant to ignite an imminent revolution.
Founded in Russia in 1878, Narodnaya Volya (Народная Воля in Russian; People's Will in English) was a revolutionary anarchist group inspired by Sergei Nechayev and by "propaganda by the deed" theorist Pisacane. The group developed ideas—such as targeted killing of the "leaders of oppression"—that would become the hallmark of subsequent violence by small non-state groups, and they were convinced that the developing technologies of the age—such as the invention of dynamite, which they were the first anarchist group to make widespread use of—enabled them to strike directly and with discrimination. Attempting to spark a popular revolt against Russian Tsardom, the group killed prominent political figures by gun and bomb, and on March 13, 1881, assassinated Russia's Tsar Alexander II. The assassination, by a bomb that also killed the Tsar's attacker, Ignacy Hryniewiecki, failed to spark the expected revolution, and an ensuing crackdown brought the group to an end.
Individual Europeans also engaged in politically motivated violence. For example, in 1893, Auguste Vaillant, a French anarchist, threw a bomb in the French Chamber of Deputies in which one person was injured. In reaction to Vaillant's bombing and other bombings and assassination attempts, the French government restricted freedom of the press by passing a set of laws that became pejoratively known as the lois scélérates ("villainous laws"). In the years 1894 to 1896, anarchists killed President of France Marie François Sadi Carnot, Prime Minister of Spain Antonio Cánovas del Castillo, and the Empress of Austria-Hungary, Elisabeth of Bavaria.
The United States
Prior to the American Civil War, abolitionist John Brown (1800–1859) advocated and practiced armed opposition to slavery, leading several attacks between 1856 and 1859; the most famous was launched in 1859 against the federal armory at Harpers Ferry. Local forces soon recaptured the fort and Brown was tried and executed for treason. A biographer of Brown has written that Brown's purpose was "to force the nation into a new political pattern by creating terror." In 2009, the 150th anniversary of Brown's death, prominent news publications debated whether or not Brown should be considered a terrorist.
After the Civil War, on December 24, 1865, six Confederate veterans created the Ku Klux Klan (KKK). The KKK used violence, lynching, murder and acts of intimidation such as cross burning to oppress African Americans in particular, and the dramatic nature of its masked forays created a sensation.
The group's politics were white supremacist, anti-Semitic, racist, anti-Catholic, and nativist. A KKK founder boasted that it was a nationwide organization of 550,000 men and that it could muster 40,000 Klansmen within five days' notice, but as a secret or "invisible" group with no membership rosters, it was difficult to judge the Klan's actual size. The KKK has at times been politically powerful, and at various times it controlled the governments of Tennessee, Oklahoma, Indiana and South Carolina, as well as several legislatures in the South.
The Ottoman Empire
Several nationalist groups used violence against an Ottoman Empire in apparent decline. One was the Armenian Revolutionary Federation (in Armenian Dashnaktsuthium, or "The Federation"), a revolutionary movement founded in Tiflis (Russian Transcaucasia) in 1890 by Christapor Mikaelian. Many members had been part of Narodnaya Volya or the Hunchakian Revolutionary Party. The group published newsletters, smuggled arms, and seized buildings as it sought to bring about European intervention that would force the Ottoman Empire to surrender control of its Armenian territories. On August 24, 1896, 17-year-old Babken Suni led twenty-six members in capturing the Imperial Ottoman Bank in Constantinople. The group demanded European intervention to stop the Hamidian massacres and to create an Armenian state, but backed down on a threat to blow up the bank. An ensuing security crackdown destroyed the group.
Also inspired by Narodnaya Volya, the Internal Macedonian Revolutionary Organization (IMRO) was a revolutionary movement founded in 1893 by Hristo Tatarchev in the Ottoman-controlled Macedonian territories. Through assassinations and by provoking uprisings, the group sought to coerce the Ottoman government into creating a Macedonian nation. On July 20, 1903, the group incited the Ilinden uprising in the Ottoman vilayet of Monastir. The IMRO declared the independence of the town it held and sent demands to the European Powers that all of Macedonia be freed. The demands were ignored and Turkish troops crushed the 27,000 rebels two months later.
Early 20th century
Revolutionary nationalism continued to motivate political violence in the 20th century, much of it directed against western colonial powers. The Irish Republican Army campaigned against the British in the 1910s and inspired the Zionist groups Haganah, Irgun and Lehi to fight the British throughout the 1930s in the Palestine mandate. Like the IRA and the Zionist groups, the Muslim Brotherhood in Egypt used bombings and assassinations to try to free territory from British control.
The women's suffrage movement in the UK also committed terrorist attacks prior to the First World War. There were three phases of WSPU militancy, beginning in 1905, 1908 and 1913, including civil disobedience, destruction of public property, arson and bombings. Most notably, the WSPU burned down the house of David Lloyd George, a Government Minister and future Prime Minister, despite his support for women's suffrage.
Political assassinations continued, including those of King Umberto I of Italy, killed in July 1900, and US President William McKinley, killed in September 1901. Political violence became especially widespread in Imperial Russia, and several ministers were killed in the opening years of the 20th century. The highest-ranking was Prime Minister Pyotr Stolypin, killed in 1911 by Dmitry Bogrov, a spy for the secret police in several anarchist, socialist and other revolutionary groups.
On June 28, 1914, Gavrilo Princip, one of a group of six assassins, shot and killed Archduke Franz Ferdinand of Austria, heir to the Austro-Hungarian throne, and his wife, Sophie, Duchess of Hohenberg, in Sarajevo, the capital of the Condominium of Bosnia and Herzegovina. The assassinations produced widespread shock across Europe, setting in motion a series of events that led to World War I.
In the 1930s, the Nazi regime in Germany and Stalin's rule in the Soviet Union practiced state terror systematically and on a massive and unprecedented scale. Meanwhile, the Stalin regime branded its opponents with the label "terrorist".
In an action called the Easter Rising or Easter Rebellion, on April 24, 1916, members of the Irish Volunteers and the Irish Citizen Army seized the Dublin General Post Office and several other buildings, proclaiming an independent Irish Republic. The rebellion failed militarily but was a success for physical-force Irish republicanism, the leaders of the uprising becoming Irish heroes after their eventual execution by the British government.
Shortly after the rebellion, Michael Collins and others founded the Irish Republican Army (IRA), which from 1916 to 1923 carried out numerous attacks against symbols of British power. For example, it attacked over 300 police stations simultaneously just before Easter 1920, and in November 1920 it publicly killed a dozen British intelligence agents in Dublin, an action that became known as Bloody Sunday, and burned down warehouses at the Liverpool docks.
After years of warfare, London agreed to the 1921 Anglo-Irish treaty creating a free Irish state encompassing 26 of the island's 32 counties. IRA tactics were an inspiration to other groups, including the Palestine Mandate's Zionists, and to British special operations during World War II.
The IRA are considered by some to be the innovators of modern terrorism, as the British would replicate and build upon the tactics used against them by the IRA in World War II. In The Irish War: The Hidden Conflict Between the IRA and British Intelligence, Tony Geraghty quotes M. R. D. Foot:
The Irish [thanks to the example set by Collins and followed by the SOE] can thus claim that their resistance provided the originating impulse for resistance to tyrannies worse than any they had to endure themselves. And the Irish resistance, as Collins led it, showed the rest of the world an economical way to fight wars, the only sane way they can be fought in the age of the nuclear bomb.— M. R. D. Foot, who wrote several official histories of SOE
From January 1939 to March 1940, the Irish Republican Army (IRA) carried out a campaign of bombing and sabotage against the civil, economic, and military infrastructure of Britain. It was known as the S-Plan or Sabotage Campaign. During the campaign, the IRA carried out almost 300 attacks and acts of sabotage in Britain, killing seven people and injuring 96. Most of the casualties occurred in the Coventry bombing on 25 August 1939.
Following the 1929 Hebron massacre of 67 Jews in the British Mandate of Palestine, the Zionist militia Haganah transformed itself into a paramilitary force. In 1931, however, the more militant Irgun broke away from Haganah, objecting to Haganah's policy of restraint. Founded by Avraham Tehomi, Irgun sought to aggressively defend Jews from Arab attacks. Its tactic of attacking Arab communities, including the bombing of a crowded Arab market, is considered among the first examples of terrorism directed against civilians. After the British, in the White Paper of 1939, placed severe restrictions on Jewish immigration into Palestine and set forth a vision of a single state with an Arab majority, the Irgun began a campaign against British rule by assassinating police, capturing British government buildings and arms, and sabotaging British railways. Irgun's best-known attack targeted the King David Hotel in Jerusalem, parts of which housed the headquarters of the British civil and military administrations. The bombing, in 1946, killed ninety-one people and injured forty-six, making it the deadliest attack of the Mandate era. The attack was sharply condemned by the organized leadership of the Yishuv and further widened the gulf between David Ben-Gurion's Haganah and Begin's Irgun. Following the bombing, Ben-Gurion called Irgun an "enemy of the Jewish people". After the founding of the state of Israel in 1948, Menachem Begin (Irgun leader from 1943 to 1948) transformed the group into the political party Herut, which later became part of Likud in an alliance with the center-right Gahal, Liberal Party, Free Centre, National List, and Movement for Greater Israel. On the 60th anniversary of the bombing, a plaque was unveiled at the hotel.
Operating in the British Mandate of Palestine in the 1930s, Izz ad-Din al-Qassam (1882–1935) organized and established the Black Hand, a Palestinian nationalist militia. He recruited and arranged military training for peasants, and by 1935 had enlisted between 200 and 800 men. Al-Qassam obtained a fatwa from Shaykh Badr al-Din al-Taji al-Hasani, the Mufti of Damascus, authorizing armed resistance against the British and against the Jews of Palestine. Black Hand cells were equipped with bombs and firearms, which they used to kill Jews. Although al-Qassam's revolt was unsuccessful in his lifetime, many organizations gained inspiration from his example. He became a popular hero and an inspiration to subsequent Arab militants, who in the 1936–39 Arab revolt called themselves Qassamiyun, followers of al-Qassam. The Izz ad-Din al-Qassam Brigades, the military wing of Hamas, as well as the rockets they developed, are named after al-Qassam.
Lehi (Lohamei Herut Yisrael, "Fighters for the Freedom of Israel", a.k.a. the Stern Gang) was a revisionist Zionist group that splintered off from Irgun in 1940. Abraham Stern formed Lehi from disaffected Irgun members after Irgun agreed to a truce with Britain in 1940. Lehi adopted the assassination of prominent politicians as a strategy: on November 6, 1944, it assassinated Lord Moyne, the British Minister of State for the Middle East. The act was controversial among Zionist groups; the Haganah sympathized with the British and launched a massive manhunt against members of Lehi and Irgun. After Israel's 1948 founding, Lehi formally dissolved and its members were integrated into the Israel Defense Forces.
Resistance during WWII
Some of the tactics of the guerrilla, partisan, and resistance movements organised and supplied by the Allies during World War II, according to historian M. R. D. Foot, can be considered terrorist. Colin Gubbins, a key leader within the British Special Operations Executive (SOE), made sure the organization drew much of its inspiration from the IRA.
On the eve of D-Day, the SOE organised with the French Resistance the complete destruction of the rail and communication infrastructure of western France, the largest coordinated attack of its kind in history. Allied supreme commander Dwight Eisenhower later wrote that "the disruption of enemy rail communications, the harassing of German road moves and the continual and increasing strain placed on German security services throughout occupied Europe by the organised forces of Resistance, played a very considerable part in our complete and final victory".
The SOE also conducted operations in Africa, the Middle East and the Far East.
The work of the SOE received recognition in 2009 with a memorial in London; however, there are differing views on the morality of the SOE's actions. The British military historian John Keegan wrote:
We must recognise that our response to the scourge of terrorism is compromised by what we did through SOE. The justification ... that we had no other means of striking back at the enemy ... is exactly the argument used by the Red Brigades, the Baader-Meinhof gang, the PFLP, the IRA and every other half-articulate terrorist organisation on Earth. Futile to argue that we were a democracy and Hitler a tyrant. Means besmirch ends. SOE besmirched Britain.
Anti-colonial struggles (Cold War)
After World War II, largely successful anti-colonial campaigns were launched against the collapsing European empires, as many World War II resistance groups became militantly anti-colonial. The Viet Minh, for example, which had fought against the Japanese, now fought against the returning French colonists. In the Middle East, the Muslim Brotherhood used bombings and assassinations against British rule in Egypt. Also during the 1950s, the National Liberation Front (FLN) in French-controlled Algeria and the EOKA in British-controlled Cyprus waged guerrilla and open war against colonial powers.
In the 1960s, inspired by Mao's Chinese revolution of 1949 and Castro's Cuban revolution of 1959, national independence movements often fused nationalist and socialist impulses. This was the case with Spain's ETA, the Front de libération du Québec, and the Palestine Liberation Organization.
In the late 1960s and 1970s, violent left-wing and revolutionary groups were on the rise, sympathizing with Third World guerrilla movements and seeking to spark anti-capitalist revolts. Such groups included the PKK in Turkey, Armenia's ASALA, the Japanese Red Army, the German Red Army Faction, the Italian Red Brigades, and, in the United States, the Weather Underground. Nationalist groups such as the Provisional IRA and the Tamil Tigers also began operations at this time.
Throughout the Cold War, both the United States and the Soviet Union made extensive use of violent nationalist organizations to carry on a war by proxy. For example, Soviet and Chinese military advisers provided training and support to the Viet Cong during the Vietnam War. The Soviet Union also provided military support to the PLO during the Israeli–Palestinian conflict, and to Fidel Castro during the Cuban Revolution. The United States funded groups such as the Contras in Nicaragua. Many violent Islamic militants of the late 20th and early 21st century had been funded in the 1980s by the United States and the UK because they were fighting the USSR in Afghanistan.
Founded in 1928 as a nationalist social-welfare and political movement in British-controlled Egypt, the Muslim Brotherhood began to attack British soldiers and police stations in the late 1940s. Founded and led by Hassan al-Banna, it also assassinated politicians seen as collaborating with British rule, most prominently Egyptian Prime Minister Nuqrashi in 1948. In 1952 a military coup overthrew the Egyptian monarchy, and shortly thereafter the Muslim Brotherhood went underground in the face of a massive crackdown. Though sometimes banned or otherwise oppressed, the group continues to exist in present-day Egypt.
The National Liberation Front (FLN) was a nationalist group founded in French-controlled Algeria in 1954. The group became a large-scale resistance movement against French rule, with terrorism only part of its operations. The FLN leadership took inspiration from the Viet Minh rebels who had made French troops withdraw from Vietnam. The FLN was one of the first anti-colonial groups to use large-scale compliance violence. The FLN would establish control over a rural village and coerce its peasants to execute any French loyalists among them. On the night of October 31, 1954, in a coordinated wave of seventy bombings and shootings known as the Toussaint attacks, the FLN attacked French military installations and the homes of Algerian loyalists. In the following year, the group gained significant support for an uprising against loyalists in Philippeville. This uprising, and the heavy-handed response by the French, convinced many Algerians to support the FLN and the independence movement. The FLN eventually secured Algerian independence from France in 1962, and transformed itself into Algeria's ruling party.
Fatah was organized as a Palestinian nationalist group in 1959, and exists today as a political party in Palestine. In 1967 it joined the Palestine Liberation Organization (PLO), an umbrella organization for secular Palestinian nationalist groups formed in 1964. The PLO began its own armed operations in 1965. The PLO's membership comprises separate and sometimes contending paramilitary and political factions, the largest of which include Fatah, the Popular Front for the Liberation of Palestine (PFLP), and the Democratic Front for the Liberation of Palestine (DFLP). Factions of the PLO have advocated or carried out acts of terrorism. Abu Iyad organized the Fatah splinter group Black September in 1970; the group is arguably best known for seizing eleven Israeli athletes as hostages at the September 1972 Summer Olympics in Munich. All the athletes and five Black September operatives died during a gun battle with the West German police in what later became known as the Munich massacre. The PFLP, founded in 1967 by George Habash, hijacked three international passenger planes on September 6, 1970, landing two of them in Jordan and blowing up the third. Fatah leader and PLO chairman Yasser Arafat publicly renounced terrorism in December 1988 on behalf of the PLO, but Israel has stated that it has proof that Arafat continued to sponsor terrorism until his death in 2004.
In the 1974 Ma'alot massacre, 22 Israeli high-school students from Safed, aged 14 to 16, were killed by three members of the Democratic Front for the Liberation of Palestine. Before reaching the school, the trio had shot and killed two Arab women, a Jewish man, his pregnant wife, and their 4-year-old son, and wounded several others.
The People's Mujahedin of Iran (PMOI), or Mujahedin-e Khalq (founded in 1965), is a socialist Islamic group that has fought Iran's government since the Khomeini revolution. The group originated to oppose capitalism and what it perceived as Western exploitation of Iran under the Shah. It went on to play an important role in the Shah's overthrow but was unable to capitalize on this in the following power vacuum. The group is estimated to have a membership of between 10,000 and 30,000. It renounced violence in 2001 but remains a proscribed terrorist organization in Iran and in the United States. The EU, however, has removed the group from its terror list. The PMOI has been accused of supporting other groups such as Jundallah.
In 1975 Hagop Tarakchian and Hagop Hagopian, with the help of sympathetic Palestinians, founded the Armenian Secret Army for the Liberation of Armenia (ASALA) in Beirut during the Lebanese Civil War. At the time Turkey was in political turmoil, and Hagopian believed that the time was right to avenge the Armenians who died during the Armenian Genocide and to force the Turkish government to cede the territory of Wilsonian Armenia, in order to establish a nation state that would also incorporate the Armenian SSR. In the Esenboğa airport attack of 7 August 1982, two ASALA gunmen opened fire on civilians in a waiting room at Esenboğa International Airport in Ankara; nine people died and 82 were injured. By 1986, the ASALA had virtually ceased all attacks.
The "Partiya Karkerên Kurdistan" (Kurdistan Workers Party or PKK) was established in Turkey in 1978 as a Kurdish nationalist party. Founder Abdullah Ocalan was inspired by the Maoist theory of people's war, and like Algeria's FLN he advocated the use of compliance terror. The group seeks to create an independent Kurdish state consisting of parts of south-eastern Turkey, north-eastern Iraq, north-eastern Syria and north-western Iran. In 1984 the PKK transformed itself into a paramilitary organisation and launched conventional attacks as well as bombings against Turkish governmental installations. In 1999 Turkish authorities captured Öcalan. He was tried in Turkey and sentenced to life imprisonment. The PKK has since gone through a series of name changes.
Founded in 1959 and still active, Euskadi Ta Askatasuna (ETA, Basque for "Basque Homeland and Freedom", pronounced [ˈeta]) is an armed Basque nationalist separatist organization. Formed in response to General Francisco Franco's suppression of the Basque language and culture, ETA evolved from an advocacy group for traditional Basque culture into an armed Marxist group demanding Basque independence. Many ETA victims have been government officials; the group's first known victim was a police chief killed in 1968. In 1973 ETA operatives killed Franco's apparent successor, Admiral Luis Carrero Blanco, by detonating a bomb planted beneath the street along his habitual route past a Madrid church. In 1995, an ETA car bomb nearly killed José María Aznar, then the leader of the conservative Popular Party, and the same year investigators disrupted a plot to assassinate King Juan Carlos. Efforts by Spanish governments to negotiate with ETA have failed, and in 2003 the Spanish Supreme Court banned the Batasuna political party, which was determined to be the political arm of ETA.
The Provisional Irish Republican Army (IRA) was an Irish nationalist movement founded in December 1969 when several militants including Seán Mac Stíofáin broke off from the Official IRA and formed a new organization. Led by Mac Stíofáin in the early 1970s and by a group around Gerry Adams since the late 1970s, the Provisional IRA sought to create an all-island Irish state. Between 1969 and 1997, during a period known as the Troubles, the group conducted an armed campaign, including bombings, gun attacks, assassinations and even a mortar attack on 10 Downing Street. On July 21, 1972, in an attack later known as Bloody Friday, the group set off twenty-two bombs, killing nine and injuring 130. On July 28, 2005, the Provisional IRA Army Council announced an end to its armed campaign. The IRA is believed to have exported arms and provided military training to groups such as the FARC in Colombia and the PLO. In the case of the latter there has been a long-held solidarity movement, evident in the many murals around Belfast.
The Red Army Faction (RAF) was a New Left group founded in 1970 by Andreas Baader and Ulrike Meinhof in West Germany. Inspired by Che Guevara, Maoist socialism, and the Viet Cong, the group sought to raise awareness of the Vietnamese and Palestinian independence movements through kidnappings, embassy sieges, bank robberies, assassinations, bombings, and attacks on U.S. air bases. The group is best known for 1977's "German Autumn". The buildup to the German Autumn began on April 7, when the RAF shot Federal Prosecutor Siegfried Buback. On July 30, it shot Jürgen Ponto, then head of the Dresdner Bank, in a failed kidnapping attempt; on September 5, the group kidnapped Hanns Martin Schleyer (a former SS officer and an important West German industrialist), executing him on October 19. The hijacking of the Lufthansa jetliner "Landshut" by the PFLP, a Palestinian group, is also considered part of the German Autumn.
The Red Brigades were a New Left group founded by Renato Curcio and Alberto Franceschini in 1970 that sought to create a revolutionary state. The group carried out a series of bombings and kidnappings until Curcio and Franceschini were arrested in the mid-1970s. Their successor as leader, Mario Moretti, led the group toward more militarized and violent actions, including the kidnapping of former Prime Minister Aldo Moro on March 16, 1978. Moro was killed 56 days later. This led to an all-out assault on the group by Italian law enforcement and security forces and condemnation from Italian left-wing radicals and even imprisoned ex-leaders of the Brigades. The group lost most of its social support and public opinion turned strongly against it. In 1984, the group split, the majority faction becoming the Communist Combatant Party (Red Brigades-PCC) and the minority faction reconstituting itself as the Union of Combatant Communists (Red Brigades-UCC). Members of these groups carried out a handful of assassinations before almost all were arrested in 1989.
The Front de libération du Québec (FLQ) was a Marxist nationalist group that sought to create an independent, socialist Quebec. Georges Schoeters founded the group in 1963, inspired by Che Guevara and Algeria's FLN. The group was accused of bombings, kidnappings, and assassinations of politicians, soldiers, and civilians. On October 5, 1970, the FLQ kidnapped James Richard Cross, the British Trade Commissioner, and on October 10, the Minister of Labour and Vice-Premier of Quebec, Pierre Laporte. Laporte was killed a week later. After these events, support for violence as a means of attaining Quebec's independence declined, and support increased for the Parti Québécois, which took power in Quebec in 1976.
In Colombia, several paramilitary and guerrilla groups formed during the 1960s and afterwards. In 1983, President Fernando Belaúnde Terry of Peru described armed attacks on his nation's anti-narcotics police as "narcoterrorism", meaning "violence waged by drug producers to extract political concessions from the government." Pablo Escobar's ruthless violence in his dealings with the Colombian and Peruvian governments is probably among the best-known and best-documented examples of narcoterrorism. Paramilitary groups associated with narcoterrorism include the Ejército de Liberación Nacional (ELN), the Fuerzas Armadas Revolucionarias de Colombia (FARC), and the Autodefensas Unidas de Colombia (AUC). While the ELN and FARC were originally left-wing revolutionary groups and the AUC was originally a right-wing paramilitary, all have conducted numerous attacks on civilians and civilian infrastructure and engaged in the drug trade. The U.S. and some European governments consider them terrorist organizations.
The Jewish Defense League (JDL) was founded in 1969 by Rabbi Meir Kahane in New York City, with its declared purpose being the protection of Jews from harassment and antisemitism. Federal Bureau of Investigation statistics state that, from 1980 to 1985, 15 attacks which the FBI classified as acts of terrorism were attempted in the U.S. by members of the JDL. The National Consortium for the Study of Terrorism and Responses to Terrorism states that, during the JDL's first two decades of activity, it was an "active terrorist organization". Kahane later founded the far-right Israeli political party Kach, which was banned from elections in Israel on the grounds of racism. The JDL's present-day website condemns all forms of terrorism.
The Fuerzas Armadas de Liberación Nacional (FALN, "Armed Forces of National Liberation") is a nationalist group founded in Puerto Rico in 1974. Over the decade that followed the group used bombings and targeted killings of civilians and police in pursuit of an independent Puerto Rico. The FALN in 1975 took responsibility for four nearly simultaneous bombings in New York City. The United States Federal Bureau of Investigation (FBI) has classified the FALN as a terrorist organization.
The Weather Underground (a.k.a. the Weathermen) began as a militant faction of the leftist Students for a Democratic Society (SDS) organization, and in 1969 took over the organization. Weathermen leaders, inspired by China's Maoists, the Black Panthers, and the 1968 student revolts in France, sought to raise awareness of its revolutionary anti-capitalist and anti-Vietnam War platform by destroying symbols of government power. From 1969 to 1974 the Weathermen bombed corporate offices, police stations, and Washington government sites such as the Pentagon. After the end of the Vietnam War in 1975, most of the group disbanded.
The Japanese Red Army was founded by Fusako Shigenobu in Japan in 1971 and attempted to overthrow the Japanese government and start a world revolution. Allied with the Popular Front for the Liberation of Palestine (PFLP), the group committed assassinations, hijacked a commercial Japanese aircraft, and sabotaged a Shell oil refinery in Singapore. On May 30, 1972, Kōzō Okamoto and other group members launched a machine gun and grenade attack at Israel's Lod Airport in Tel Aviv, killing 26 people and injuring 80 others. Two of the three attackers then killed themselves with grenades.
Founded in 1976, the Liberation Tigers of Tamil Eelam (also called the LTTE or Tamil Tigers) was a militant Tamil nationalist political and paramilitary organization based in northern Sri Lanka. From its founding by Velupillai Prabhakaran, it waged a secessionist resistance campaign that sought to create an independent Tamil state in the northern and eastern regions of Sri Lanka. The conflict originated in measures the majority Sinhalese took that were perceived as attempts to marginalize the Tamil minority. The resistance campaign evolved into the Sri Lankan Civil War, one of the longest-running armed conflicts in Asia. The group carried out many bombings, including an April 21, 1987, car bomb attack at a Colombo bus terminal that killed 110 people. In 2009 the Sri Lankan military launched a major military offensive against the secessionist movement and claimed that it had effectively destroyed the LTTE.
In Kenya, because of the seeming ongoing failure of the Kenya African Union (KAU) to obtain political reforms from the British through peaceful means, radical activists within the KAU set up a splinter group and organised a more militant kind of nationalism. By 1952 the Mau Mau consisted of Kikuyu fighters, along with some Embu and Meru recruits. The Mau Mau attacked political opponents and loyalist villages, raided white settler farms, and destroyed livestock. The British colonial administration declared a state of emergency and British forces were sent to Kenya. The majority of the fighting was between loyalist and Mau Mau Kikuyu, so many scholars now consider it a Kikuyu civil war. The Kenyan government considers the Mau Mau Uprising a key step towards Kenya's independence from British imperial rule. The British were accused of using torture and mass executions as part of their efforts to suppress the Mau Mau, though British forces officially had orders not to mistreat captured fighters.
Founded in 1961, Umkhonto we Sizwe (MK) was the military wing of the African National Congress; it waged a guerrilla campaign against the South African apartheid regime and was responsible for many bombings. MK launched its first guerrilla attacks against government installations on 16 December 1961. The South African government subsequently banned the group after classifying it as a terrorist organization. MK's first leader was Nelson Mandela, who was tried and imprisoned for the group's acts. With the end of apartheid in South Africa, Umkhonto we Sizwe was incorporated into the South African armed forces.
Late 20th century
In the 1980s and 1990s, Islamic militancy in pursuit of religious and political goals increased, with many militants drawing inspiration from Iran's 1979 Islamic Revolution. Well-known violent acts targeting civilians in the 1990s included the World Trade Center bombing by Islamist terrorists on February 26, 1993, the sarin gas attack on the Tokyo subway by Aum Shinrikyo on March 20, 1995, and the bombing of Oklahoma City's Murrah Federal Building by Timothy McVeigh a month later that same year. This period also saw the rise of what is sometimes categorized as single-issue terrorism: if terrorism is the extension of domestic politics by other means, just as war is for diplomacy, then this represents the extension of pressure groups into violent action. Notable examples that grew in this period are anti-abortion terrorism and environmental terrorism.
The Contras were a counter-revolutionary militia formed in 1979 to oppose Nicaragua's Sandinista government. The Catholic Institute for International Relations asserted the following about Contra operating procedures in 1987: "The record of the contras in the field... is one of consistent and bloody abuse of human rights, of murder, torture, mutilation, rape, arson, destruction and kidnapping." Americas Watch—subsequently folded into Human Rights Watch—accused the Contras of targeting health care clinics and health care workers for assassination; kidnapping civilians; torturing civilians; executing civilians, including children, who were captured in combat; raping women; indiscriminately attacking civilians and civilian houses; seizing civilian property; and burning civilian houses in captured towns. The Contras disbanded after the election of Violeta Chamorro in 1990.
The April 19, 1995, Oklahoma City bombing was directed at the U.S. government, according to the prosecutor at the murder trial of Timothy McVeigh, who was convicted of carrying out the crime. The bombing of the Alfred P. Murrah Federal Building in downtown Oklahoma City claimed 168 lives and left over 800 people injured. McVeigh, who was convicted of first degree murder and executed, said his motivation was revenge for U.S. government actions at Waco and Ruby Ridge.
Between 1982 and 1986, 659 people died in Lebanon in 36 suicide attacks directed against American, French and Israeli forces, carried out by 41 individuals with predominantly leftist political beliefs who were adherents of both the Christian and Muslim religions. The 1983 Beirut barracks bombing (by the Islamic Jihad Organization), which killed 241 U.S. and 58 French peacekeepers and six civilians at the peacekeeping barracks in Beirut, was particularly deadly. Hezbollah ("Party of God") is an Islamist movement and political party officially founded in Lebanon in 1985, ten years after the outbreak of that country's civil war. Inspired by Ayatollah Ruhollah Khomeini and the Iranian revolution, the group originally sought an Islamic revolution in Lebanon and has long fought for the withdrawal of Israeli forces from Lebanon. Led by Sheikh Sayyed Hassan Nasrallah since 1992, the group has captured Israeli soldiers and carried out missile attacks and suicide bombings against Israeli targets.
Al-Gama'a al-Islamiyya ("the Islamic Group") is a militant Egyptian Islamist movement dedicated to the establishment of an Islamic state in Egypt. The group was formed in 1980 as an umbrella organization for militant student groups which emerged after the leadership of the Muslim Brotherhood renounced violence. Its spiritual leader, Omar Abdel-Rahman, has been accused of participation in the 1993 World Trade Center bombing. In 1981, members of the allied Egyptian Islamic Jihad assassinated Egyptian president Anwar Sadat. On November 17, 1997, in what became known as the Luxor massacre, the group attacked tourists at the Temple of Hatshepsut (Deir el-Bahri); six men dressed as police officers machine-gunned 58 Japanese and European vacationers and four Egyptians.
On December 21, 1988, Pan Am Flight 103, a Pan American World Airways flight from London's Heathrow Airport to New York City's John F. Kennedy International Airport, was destroyed mid-flight over the Scottish town of Lockerbie, killing 270 people, including 11 on the ground. On January 31, 2001, the Libyan Abdelbaset al-Megrahi was convicted by a panel of three Scottish judges of bombing the flight and was sentenced to life imprisonment, with a minimum term of 27 years. In 2002, Libya offered financial compensation to victims' families in exchange for the lifting of UN and U.S. sanctions. In 2007 Megrahi was granted leave to appeal against his conviction, and in August 2009 he was released on compassionate grounds by the Scottish executive due to his terminal cancer.
The first Palestinian suicide attack took place in 1989, when a member of the Palestinian Islamic Jihad seized the wheel of a Tel Aviv–Jerusalem bus and forced it into a ravine, killing 16 people. In the early 1990s another group, Hamas, also became well known for suicide bombings. Sheikh Ahmed Yassin, Abdel Aziz al-Rantissi and Mohammad Taha of the Palestinian wing of Egypt's Muslim Brotherhood had created Hamas in 1987, at the beginning of the First Intifada, an uprising against Israeli rule in the Palestinian Territories which mostly consisted of civil disobedience but sometimes escalated into violence. Hamas's militia, the Izz ad-Din al-Qassam Brigades, began its own suicide bombings against Israel in 1993, eventually accounting for about 40% of them. Palestinian militant organizations have been responsible for rocket attacks on Israel, IED attacks, shootings, and stabbings. After winning legislative elections, Hamas since June 2007 has governed the Gaza portion of the Palestinian Territories. Hamas is designated as a terrorist organization by the European Union, Canada, Israel, Japan, and the United States. Australia and the United Kingdom have designated the military wing of Hamas, the Izz ad-Din al-Qassam Brigades, as a terrorist organization. The organization is banned in Jordan. It is not regarded as a terrorist organization by Iran, Russia, Norway, Switzerland, Brazil, Turkey, China, and Qatar. As well as Hamas, the Popular Front for the Liberation of Palestine, Palestinian Islamic Jihad, the Palestine Liberation Front, the PFLP-General Command, and the Al-Aqsa Martyrs Brigade were all listed as terrorist organizations by the US State Department in the 1990s.
On February 25, 1994, Baruch Goldstein, an American-born Israeli physician, perpetrated the Cave of the Patriarchs massacre in the city of Hebron, shooting and killing between 30 and 54 Muslim worshippers inside the Ibrahimi Mosque (within the Cave of the Patriarchs) and wounding another 125 to 150. Goldstein, who was lynched and killed in the mosque, was a supporter of Kach, an Israeli political party founded by Rabbi Meir Kahane that advocated the expulsion of Arabs from Israel and the Palestinian Territories. In the aftermath of the Goldstein attack and Kach statements praising it, Kach was outlawed in Israel. Today, Kach and a breakaway group, Kahane Chai, are considered terrorist organisations by Israel, Canada, the European Union, and the United States. The far-right anti-miscegenation group Lehava, headed by former Kach member Bentzi Gopstein, is politically active inside Israel and its occupied territories.
Aum Shinrikyo, now known as Aleph, was a Japanese religious group founded by Shoko Asahara in 1984 as a yogic meditation group. Later, in 1990, Asahara and 24 other members campaigned for election to the House of Representatives under the banner of Shinri-tō (Supreme Truth Party). None were voted in, and the group began to militarize. Between 1990 and 1995, the group attempted several apparently unsuccessful biological attacks using botulinum toxin and anthrax spores. On June 28, 1994, Aum Shinrikyo members released sarin gas from several sites in the Kaichi Heights neighborhood of Matsumoto, Japan, killing eight and injuring 200 in what became known as the Matsumoto incident. Seven months later, on March 20, 1995, Aum Shinrikyo members released sarin gas in a coordinated attack on five trains in the Tokyo subway system, killing 12 commuters and damaging the health of about 5,000 others in what became known as the subway sarin incident (地下鉄サリン事件, chikatetsu sarin jiken). In May 1995, Asahara and other senior leaders were arrested and the group's membership rapidly decreased.
In 1985, Air India Flight 182, flying from Canada, was blown up by a bomb in Irish airspace, killing 329 people, including 280 Canadian citizens, mostly of Indian birth or descent, and 22 Indians. The incident was the deadliest act of air terrorism before 9/11 and the first bombing of a 747 jumbo jet, setting a pattern for future air-terrorism plots. The crash occurred within an hour of the fatal Narita Airport bombing, which also originated from Canada; in both cases the bomb-laden bags had been checked in without an accompanying passenger. Evidence from the explosions, witnesses, and wiretaps of militants pointed to an attempt to blow up two airliners simultaneously by members of Babbar Khalsa, a Canada-based militant group of the Khalistan movement, to punish India for attacking the Golden Temple.
The Iranian Embassy siege took place in 1980, when a group of six armed men stormed the Iranian embassy in South Kensington, London. The government ordered the Special Air Service (SAS), a special forces regiment of the British Army, to conduct an assault—Operation Nimrod—to rescue the remaining hostages. This response set the tone for how Western governments would respond to terrorism, replacing an era of negotiation with one of military intervention.
Chechen separatists, led by Shamil Basayev, carried out several attacks on Russian targets between 1994 and 2006. In the June 1995 Budyonnovsk hospital hostage crisis, Basayev-led separatists took more than 1,000 civilians hostage in a hospital in the southern Russian city of Budyonnovsk. When Russian special forces attempted to free the hostages, 105 civilians and 25 Russian troops were killed.
Major events after the September 11 attacks in 2001 include the Moscow Theatre Siege, the 2003 Istanbul bombings, the Madrid train bombings, the Beslan school hostage crisis, the 2005 London bombings, the October 2005 New Delhi bombings, the 2008 Mumbai Hotel Siege, and the 2011 Norway attacks.
The Moscow theatre hostage crisis was the seizure of a crowded Moscow theatre on 23 October 2002 by some 40 to 50 armed Chechens who claimed allegiance to the Islamist militant separatist movement in Chechnya. They took 850 hostages and demanded the withdrawal of Russian forces from Chechnya and an end to the Second Chechen War. The siege was officially led by Movsar Barayev. After a two-and-a-half-day siege, Russian Spetsnaz forces pumped an unknown chemical agent (thought to be fentanyl or 3-methylfentanyl) into the building's ventilation system and raided it. Officially, 39 of the attackers were killed by Russian forces, along with at least 129 and possibly many more of the hostages (including nine foreigners). All but a few of the hostages who died were killed by the gas pumped into the theatre, and many condemned its use as heavy-handed. Roughly 170 people died in all.
On September 1, 2004, in what became known as the Beslan school hostage crisis, 32 Chechen separatists took 1,300 children and adults hostage at Beslan's School Number One. When Russian authorities did not comply with the rebels' demands that Russian forces withdraw from Chechnya, 20 adult male hostages were shot. After two days of stalled negotiations, Russian special forces stormed the building. In the ensuing melee, over 300 hostages died, along with 19 Russian servicemen and all but perhaps one of the rebels. Basayev is believed to have participated in organizing the attack.
The 2004 Madrid train bombings (also known in Spain as 11-M) were nearly simultaneous, coordinated bombings against the Cercanías commuter train system of Madrid, Spain, on the morning of 11 March 2004, three days before Spain's general elections and two and a half years after the September 11 attacks in the United States. The explosions killed 191 people and wounded 1,800. Investigators concluded that the bombs were carried onto the trains hidden in backpacks; while most detonated, three were later found that had not. ETA and al-Qaeda were the original suspects cited by the Spanish government, but the official investigation by the Spanish judiciary found that the attacks were directed by an al-Qaeda-inspired terrorist cell.
The 7 July 2005 London bombings (often referred to as 7/7) were a series of coordinated suicide bomb attacks in central London which targeted civilians using the public transport system during the morning rush hour. On the morning of Thursday, 7 July 2005, four Islamist extremists separately detonated three bombs in quick succession aboard London Underground trains across the city and, later, a fourth on a double-decker bus in Tavistock Square. Fifty-two civilians were killed and over 700 more were injured in the attacks. A number of unexploded devices were later found in a car linked to the bombers in north London. The four bombers were identified as Mohammad Sidique Khan, Shehzad Tanweer, Germaine Lindsay, and Hasib Hussain; investigators linked the plot to al-Qaeda, and Rashid Rauf was later alleged to have helped plan the bombings.
On 22 July 2011, right-wing extremist Anders Behring Breivik carried out two sequential lone-wolf terrorist attacks in Norway against the government, the civilian population, and a Workers' Youth League (AUF)-run summer camp. The attacks claimed a total of 77 lives. The first was a van bomb in Oslo, placed in front of the office block housing the office of the Prime Minister and other government buildings. The explosion killed eight people and injured at least 209, twelve of them seriously. Breivik then impersonated a police officer to access the island on which the AUF summer camp was being held and went on a shooting spree that killed 69 people.
In 2013 the British government branded the killing of a serviceman in a Woolwich street a terrorist attack. One of the attackers, with blood still on his hands, made political statements that were later broadcast. The two men responsible remained at the scene until incapacitated by armed police. They were later tried and found guilty of murder.
From 7 to 9 January 2015, a series of five terrorist attacks occurred across the Île-de-France region, particularly in Paris. The attacks killed a total of 17 people, in addition to three of the perpetrators, and wounded 22 others; a fifth shooting attack caused no fatalities. Numerous smaller attacks on mosques were also reported, but have not been directly linked to the main attacks. Al-Qaeda in the Arabian Peninsula claimed responsibility, stating that the attack had been planned for years.
On 7 January 2015, two Islamist gunmen forced their way into the Paris headquarters of the satirical newspaper Charlie Hebdo and opened fire, killing twelve: staff cartoonists Charb, Cabu, Honoré, Tignous and Wolinski, economist Bernard Maris, editors Elsa Cayat and Mustapha Ourrad, guest Michel Renaud, maintenance worker Frédéric Boisseau, and police officers Brinsolaro and Merabet. Eleven others were wounded, four of them seriously.
During the attack, the gunmen shouted "Allahu akbar" ("God is great" in Arabic) and also "the Prophet is avenged". President François Hollande described it as a "terrorist attack of the most extreme barbarity". The two gunmen were identified as Saïd Kouachi and Chérif Kouachi, French Muslim brothers of Algerian descent.
On 9 January, police tracked the assailants to an industrial estate in Dammartin-en-Goële, where they took a hostage. Another gunman also shot a police officer on 8 January and took hostages the next day, at a kosher supermarket near the Porte de Vincennes. GIGN (a special operations unit of the French Armed Forces), combined with RAID and BRI (special operations units of the French Police), conducted simultaneous raids in Dammartin and at Porte de Vincennes. Three terrorists were killed, along with four hostages who died in the Vincennes supermarket before the intervention; some other hostages were injured.
On 13 November 2015, 28 hours after a bombing in Beirut, three groups of ISIS terrorists carried out mass killings at various places in Paris's 10th and 11th arrondissements, killing a total of more than 130 people. Hostages were held in the Bataclan concert hall for three hours, and ninety people were killed there before special police units entered. The president immediately declared a state of emergency, the first to apply to the entire French territory since the Algerian War.
On March 22, 2016, yet another terrorist attack took place in Europe. Three coordinated nail-bomb attacks struck Brussels, Belgium: two bombs were detonated at Brussels Airport in Zaventem approximately 40 seconds apart, and a third exploded at Maalbeek metro station, also in Brussels, about an hour later. The suicide bombings killed more than 30 people and injured around 300; the attackers were claimed as members of the Islamic State in Iraq and Syria (ISIS). The airport bombers were the brothers Ibrahim El Bakraoui and Najim Laachraoui, who was believed to be the bomb-maker; both died in the blasts, while Khalid El Bakraoui, Ibrahim's brother, carried out the metro bombing. A fourth attacker fled after his bomb failed to detonate; it was safely deactivated, and he had not yet been identified at the time.
Brussels responded to the attack with a level 4 alert, its highest since the Paris attacks mentioned above. World leaders responded by unifying, offering aid and condolences for the tragedy of March 22, 2016. The 28 heads of state and government of the European Union agreed to intensify the fight against terrorism to better protect the Union. The attacks came without forewarning, and officials had not thought an attack on that scale could be perpetrated there.
Osama bin Laden, closely advised by Egyptian Islamic Jihad leader Ayman al-Zawahiri, in 1988 founded Al-Qaeda (Arabic: القاعدة, meaning "The Base"), an Islamic jihadist movement to replace Western-controlled or dominated Muslim countries with Islamic fundamentalist regimes. In pursuit of that goal, bin Laden issued a 1996 manifesto that vowed violent jihad against U.S. military forces based in Saudi Arabia. On August 7, 1998, individuals associated with Al Qaeda and Egyptian Islamic Jihad carried out simultaneous bombings of two U.S. embassies in Africa which resulted in 224 deaths. On October 12, 2000, Al-Qaeda carried out the USS Cole bombing, a suicide bombing of the U.S. Navy destroyer USS Cole harbored in the Yemeni port of Aden. The bombing killed seventeen U.S. sailors.
On September 11, 2001, nineteen men affiliated with al-Qaeda hijacked four commercial passenger jets all bound for California, crashing two of them into the World Trade Center in New York City, the third into the Pentagon in Arlington County, Virginia, and the fourth (originally intended to target Washington, D.C., either the White House or the U.S. Capitol) into an open field near Shanksville, Pennsylvania, after a revolt by the plane's passengers. As a result of the attacks, 2,996 people (including the 19 hijackers) perished and more than 6,000 others were injured.
The United States responded to the attacks by launching the War on Terror. Specifically, on October 7, 2001, it invaded Afghanistan to depose the Taliban, which had harbored al-Qaeda terrorists. On October 26, 2001, the U.S. enacted the Patriot Act, which expanded the powers of U.S. law enforcement and intelligence agencies; many countries followed with similar legislation. Under the Obama administration, the U.S. changed tactics, moving away from ground combat with large numbers of troops toward the use of drones and special forces. This campaign eliminated many of al-Qaeda's most senior members, including Osama bin Laden, who was killed in a 2011 raid by SEAL Team Six.
On Israel's northern border, after its unilateral withdrawal from southern Lebanon in May 2000, Hezbollah launched numerous Katyusha rocket attacks against non-civilian and civilian areas within northern Israel. Within Israel, the 2000–2005 Second Intifada involved in part a series of suicide bombings against civilian and non-civilian targets. About 1,100 Israelis were killed in the Second Intifada, the majority of them civilians. A 2007 study of Palestinian suicide bombings from September 2000 through August 2005 found that 40 percent were carried out by Hamas's Izz ad-Din al-Qassam Brigades, and roughly 26 percent by the Palestinian Islamic Jihad (PIJ) and Fatah militias. In addition, between 2001 and January 2009, over 8,600 rockets were launched from the Gaza Strip into civilian and non-civilian areas inside Israel, causing deaths, injuries, and psychological trauma.

Formed in 2003, Jundallah is a Sunni insurgent group from the Baloch region of Iran and neighboring Pakistan. It has committed numerous attacks within Iran, stating that it is fighting for the rights of the Sunni minority there. In 2005 the group attempted to assassinate Iran's president, Mahmoud Ahmadinejad, and it has taken credit for other bombings, including the 2007 Zahedan bombings. Iran and other sources accuse the group of being a front for, or supported by, other nations, in particular the U.S. and Pakistan.
As the Islamic State of Iraq and Syria grew in size and power, its attacks reached all parts of the world, including its own backyard of Turkey. In Istanbul, a suicide bomber detonated explosives, killing four people and injuring 31. No extremist group took responsibility, but the attacker, Mehmet Ozturk, was found to have ties to ISIS. This came just days after a car bomb attack in Turkey's capital, Ankara, killed 37 people. The UN Security Council condemned the repeated terror attacks on Turkey, warning that such killings of innocent people would only strengthen the War on Terror. Since the attacks, Israel has advised its citizens not to travel to Turkey unless necessary.
On December 27, 2007, twice-elected Pakistani Prime Minister Benazir Bhutto was assassinated at a gathering with her supporters. A suicide bomber detonated a bomb while other extremists opened fire, killing the former prime minister and 14 other people; she was rushed to hospital and pronounced dead. She is believed to have been targeted because she had been warning Pakistan, and the world, of the rising power of jihadist and extremist groups. Many held then-President Pervez Musharraf, the former military chief, responsible: she had repeatedly asked him to increase her security in response to the growing number of death threats against her, and he had denied her requests. Although al-Qaeda claimed responsibility for her death, much of the public blamed Musharraf for not taking her concerns seriously. During his trial, however, he denied that any conversation between him and Bhutto about her security had taken place.
The 2008 Mumbai attacks were more than ten coordinated shooting and bombing attacks across Mumbai, India's largest city, by Lashkar-e-Taiba, a Pakistani Islamic terrorist organization with ties to the ISI, Pakistan's intelligence agency. The six main targets were:
- Chhatrapati Shivaji Terminus – formerly known as Victoria Terminus
- The Taj Mahal Palace and Tower Hotel – six explosions were reported in the hotel, where two attackers held hostages; 200 hostages were rescued from the burning building. A group of European Parliament committee members staying at the hotel at the time were unharmed.
- Leopold Café – a popular café and bar on the Causeway, one of the first places attacked; 10 people were killed there
- The Trident-Oberoi Hotel – one explosion was heard here, where the President of the Madrid regional government was dining; she was not injured
- Nariman House, a Jewish community center – two attackers held hostages there until NSG commandos, inserted by air onto the building, stormed it and eventually killed both attackers
- Cama Hospital
The attacks were carried out by ten gunmen who arrived by speedboat from Pakistan, then split up and moved from building to building, taking hostages, planting bombs, and shooting victims. Nine of the ten gunmen were killed. Pakistan initially denied that the men were its nationals, but eventually released documents acknowledging that three of them were from Pakistan and that cases would be opened against them.
The attacks, which drew widespread condemnation across the world, began on 26 November 2008 and lasted until 29 November, killing at least 173 people and wounding at least 308.
On January 14, 2016, a series of terrorist attacks in Jakarta, Indonesia, left eight people dead. ISIS claimed responsibility. Counter-terrorism analysts have called this type of attack a "marauding terrorist firearms attack" because of the fast reaction needed by local police to stop the gunfire. The attack on Jakarta is linked to a broader pattern of terror in Indonesia, which houses seven Islamist extremist groups, among the largest regional terror groups. This has raised concern that ISIS is seeking to establish a satellite presence in Indonesia, the country with the world's largest Muslim population. Although ISIS has not yet reached Southeast Asia in large numbers, there is fear that it is only a matter of time before Indonesia's small extremist groups grow once direct contact with ISIS is made; local terror groups could then quickly mobilize to carry out the tasks ISIS asks of them. Analysts suggest ISIS will turn to Southeast Asia as it loses control in the Middle East.
2001 also saw the second acknowledged act of bioterrorism with the 2001 anthrax attacks (the first being intentional food poisoning conducted in The Dalles, Oregon by Rajneeshee followers in 1984), when letters carrying anthrax spores were posted to several major American media outlets and two Democratic Party politicians. This resulted in several of the first fatalities attributed to a bioterror attack.
More recent terrorist attacks in the United States have included the 2015 San Bernardino attack, the Boston Marathon bombing by Islamist terrorists, the shooting of police officers in sniper ambushes, and, by right-wing extremists and white supremacists, the shooting of multiple black parishioners at a church and the car attack on anti-fascist protesters in Charlottesville.
List of non-state groups accused of terrorism
- Richardson, John. Paradise Poisoned: Learning About Conflict, Terrorism and Development from Sri Lanka's Civil Wars. International Center for Ethnic Studies, 2005. p.29
- Hoffman, p.139
- Globalisation, Democracy and Terror, Eric Hobsbawm
- Chaliand, p.353
- "Sri Lanka - Living With Terror". Frontline. PBS. May 2002. Retrieved 2009-02-09.
- "MAU MAU TERRORISM IN KENYA". millbanksystems.com. Retrieved 27 November 2015.
- "Mau Mau uprising: Bloody history of Kenya conflict". BBC News. Retrieved 27 November 2015.
- "The British must not rewrite the history of the Mau Mau revolt". Telegraph.co.uk. 6 June 2013. Retrieved 27 November 2015.
- "Archived copy". Archived from the original on 2015-12-08. Retrieved 2015-10-31. "Archived copy". Archived from the original on 2015-12-08. Retrieved 2015-10-31.
- "Fighting the Mau Mau". google.co.kr. Retrieved 27 November 2015.
- "Manifesto of Umkhonto we Sizwe". African National Congress. 16 December 1961. Archived from the original on 2006-12-17. Retrieved 2006-12-30.
- Statement of Nelson Mandela at Rivonia trial Archived 2009-02-21 at the Wayback Machine. "Archived copy". Archived from the original on 2015-03-20. Retrieved 2015-01-21.
- Jonathan Fine. "Contrasting Secular and Religious Terrorism". Middle East Forum. Retrieved 27 November 2015.
- The Catholic Institute for International Relations (1987). "Right to Survive: Human Rights in Nicaragua" (print). The Catholic Institute for International Relations.
- "NICARAGUA". hrw.org. Retrieved 27 November 2015.
- Uhlig, Mark A. (February 27, 1990). "Turnover in Nicaragua; NICARAGUAN OPPOSITION ROUTS SANDINISTAS; U.S. PLEDGES AID, TIED TO ORDERLY TURNOVER". New York Times. Retrieved May 4, 2010.
- Douglas O. Linder. "Opening statement of prosecutor Joseph Hartzler in the Timothy McVeigh trial". umkc.edu. Archived from the original on 25 November 2010. Retrieved 27 November 2015.
- The Oklahoma City Bombing Archived 2013-05-22 at the Wayback Machine., 2004-8-9
- "McVeigh Remorseless About Bombing", The Associated Press, March 29, 2001
- "... eight were Islamic fundamentalists. Twenty-seven were Communists and Socialists. Three were Christians http://www.theamericanconservative.com/articles/the-logic-of-suicide-terrorism/. The American Conservative, July 18, 2005. Verified 22 June 2008.
- Hezbollah Archived 2006-09-27 at the Wayback Machine. The US Council on Foreign Relations, 2006-07-17
- Sites, Kevin (Scripps Howard News Services). "Hezbollah denies terrorist ties, increases role in government Archived 2008-06-04 at the Wayback Machine. " 2006-01-15
- "Frontline: Target America: Terrorist attacks on Americans, 1979-1988", PBS News, 2001. Accessed 4 February 2007
- "Lebanon.com Newswire - Local News March 20 2003". lebanon.com. Retrieved 27 November 2015.
- Jamail, Dahr (2006-07-20). "Hezbollah's transformation". Asia Times. Retrieved 2007-10-23.
- Wright, Lawrence, Looming Tower, Knopf, 2006, p. 123
- "Lockerbie bomber freed from jail". BBC News. August 20, 2009. Retrieved May 4, 2010.
- Moshe Elad, Why were we surprised?, Ynet News 07-02-2008
- Chaliand, p.356
- Levitt, Matthew Hamas: Politics, Charity, and Terrorism in the Service of Jihad. Yale University Press, 2007.
- John Pike. "HAMAS (Islamic Resistance Movement)". globalsecurity.org. Retrieved 27 November 2015.
- See also: Hamas#International designation of Hamas
- "Currently listed entities". Department of Public Safety and Emergency Preparedness. November 22, 2012. Archived from the original on February 9, 2009.http://www.publicsafety.gc.ca/cnt/ntnl-scrt/cntr-trrrsm/lstd-ntts/crrnt-lstd-ntts-eng.aspx
- Israel At 'War to the Bitter End,' Strikes Key Hamas Sites December 29, 2008, Fox News
- "Profile: Hamas Palestinian movement". BBC News. Retrieved 27 November 2015.
- 問10.ハマスとは何ですか。Ministry of Foreign Affairs of Japan.' 日本は、ハマスを、国連安保理決議1373に基づいて、外国為替及び外国貿易法(外為法)に基づく資産凍結措置の対象としています。'On the basis of United Nations Security Council Resolution 1373, Japan applies to Hamas the frozen assets measures in accordance with its Foreign Exchange and Foreign Trade Law (Foreign Exchange and Foreign Trade Control Law).'
- "テロ資金対策". 外務省. Retrieved 27 November 2015.
- According to Michael Penn, (Japan and the War on Terror: Military Force and Political Pressure in the US-Japanese Alliance, I.B. Taurus 2014 pp.205-206), Japan initially welcomed the democratic character of the elections that brought Hamas to power, and only set conditions on its aid to Palestine, after intense pressure was exerted by the Bush Administration on Japan to alter its policy.
- ""Country reports on terrorism 2005"" (PDF). Retrieved 10 January 2018.
- 'Hamas's Izz al-Din al-Qassam Brigades,' Australian National Security:'Like its parent, Hamas is a multifaceted, well organised and relatively moderate organisation renowned for its extensive social service networks in the Palestinian Territories.'
- "Proscribed Terrorist Organisations". UK Home Office. Archived from the original (PDF) on 30 June 2006. Retrieved 31 July 2014.
- King Abdullah Says No To Hamas. September 17, 2013. Khaled Abu Toameh.
- "How to Confront Russia's Anti-American Foreign Policy" The Heritage Foundation. June 27, 2007
- Richard Boudreaux, 'Palestinian parliament OKs coalition government / Norway announces recognition, will restore ties cut in '06 ,' San Francisco Chronicle 18 March 2007
- Daniel Möckli, 'Switzerland’s Controversial Middle East Policy,' Center for Security Studies, Zurich Vol.3, No. 35, June 2008
- Juliana Barbassa, 'Brazil Terrorism Laws: No One Is A Terrorist,' Huffington Post 3 September 2015.
- "Gaza flotilla: Turkey threat to Israel ties over raid". BBC News. June 4, 2010. Archived from the original on January 26, 2011. Retrieved January 26, 2011.
- "Bank of China may have helped Hamas kill Jews". Free Zionism. Retrieved 30 March 2014.
- Abha Shankar (September 19, 2013). "Bank of China Terror Financing Case Moves Forward". Investigative Project on Terrorism. Retrieved 30 March 2014.
- Joshua Davidovich (December 18, 2013). "The China bank is not the issue here, dude". The Times of Israel. Retrieved 30 March 2014.
- Zambelis, Chris. "China's Palestine Policy". Jamestown.org. Retrieved 2014-08-02.
- Mirren Gidda,'Hamas Still Has Some Friends Left,' Time 25 July 2014.
- "Foreign Terrorist Organizations". U.S. Department of State. Retrieved 27 November 2015.
- 1994: Jewish settler kills 30 at holy site BBC On This Day
- In the Spotlight: Kach and Kahane Chai Archived 2006-11-22 at the Wayback Machine. Center for Defense Information October 1, 2002
- Terror Label No Hindrance To Anti-Arab Jewish Group New York Times, 19 December 2000
- Kahane Chai (KACH) Public Safety Canada Archived March 6, 2007, at the Wayback Machine.
- Council Decision of 21 December 2005 implementing Article 2(3) of Regulation (EC) No 2580/2001 on specific restrictive measures directed against certain persons and entities with a view to combating terrorism and repealing Decision 2005/848/EC Archived 7 January 2006 at the Wayback Machine. Official Journal of the European Union, 23 December 2005
- Foreign Terrorist Organizations (FTOs) Archived 2007-12-12 at the Wayback Machine. http://www.state.gov/j/ct/rls/other/des/123085.htm U.S. Department of State, 11 October 2005
- Alona Ferber (June 14, 2016). "How Israel Must Fight Violent Jewish Extremists". Haaretz. Retrieved 10 July 2016.
- CDC website, Centers for Disease Control and Prevention, Aum Shinrikyo: Once and Future Threat?, Kyle B. Olson, Research Planning, Inc., Arlington, Virginia
- "Sarin attack remembered in Tokyo". BBC News. March 20, 2005. Retrieved May 4, 2010.
- In Depth: Air India http://www.cbc.ca/news/canada/memorial-for-air-india-victims-unveiled-1.681526 – The Victims, CBC News Online, 16 March 2005
- McNee, p. 146.
- 6 Days, Director: Toa Fraser, Writer: Glenn Standring, 2017
- Hoffman, p.154
- Smith, Sebastian. Allah's Mountains: The Battle for Chechnya. Tauris, 2005. p.200
- Modest Silin, Hostage, Nord-Ost siege, 2002 Archived 2008-06-26 at the Wayback Machine., Russia Today, 27 October 2007
- Gas "killed Moscow hostages", BBC News, 27 October 2002.
- "Moscow court begins siege claims", BBC News, 24 December 2002
- "Moscow siege gas 'not illegal'". bbc.co.uk. Retrieved 27 November 2015.
- Jonathan Steele (July 11, 2006). "Shamil Basayev – Chechen politician seeking independence through terrorism". Obituary. London: Guardian Unlimited.
one-time guerrilla commander who turned into a mastermind of spectacular and brutal terrorist actions ... served for several months as prime minister
- "Terrorists bomb trains in Madrid - Mar 11, 2004 - HISTORY.com". Retrieved 10 January 2018.
- CNN Library (4 November 2013). "Spain Train Bombings Fast Facts". CNN. Retrieved 27 November 2015.
- Library, CNN. "July 7 2005 London Bombings Fast Facts". Retrieved 10 January 2018.
- "Norway honors victims of terrorist attacks". cnn.com. Retrieved 27 November 2015.
- "Exclusive video: Man with bloodied hands speaks at Woolwich scene". ITV News. Retrieved 27 November 2015.
- "French security forces kill gunmen, end terror rampage". 9 January 2015. Archived from the original on 13 January 2015. Retrieved 15 January 2015.
- "French security forces kill gunmen to end terror rampage; 20 dead in 3 days of violence". 9 January 2015. Retrieved 15 January 2015."Archived copy". Archived from the original on 2015-11-19. Retrieved 2015-11-27.
- "Al Qaeda branch claims Charlie Hebdo attack was years in the making". 15 January 2015. Retrieved 15 January 2015.
- Bremner, Charles (7 January 2015). "Islamists kill 12 in attack on French satirical magazine Charlie Hebdo". The Times.
- "Attentat contre " Charlie Hebdo " : Charb, Cabu, Wolinski et les autres, assassinés dans leur rédaction". Le Monde (in French). Retrieved 2015-11-27.
- "Deadly attack on office of French magazine Charlie Hebdo". BBC News. Retrieved 2015-11-27.
- "Charlie Hebdo attack: What we know so far", BBC News, 8 January 2015.
- "EN DIRECT. Massacre chez "Charlie Hebdo" : 12 morts, dont Charb et Cabu". Le Point.fr (in French). Retrieved 2015-11-27.
- "Les dessinateurs Charb et Cabu seraient morts". L'Essentiel (in French). France. 7 January 2015. Retrieved 7 January 2015.
- Conal Urquhart. "Paris Police Say 12 Dead After Shooting at Charlie Hebdo". Time.
Witnesses said that the gunmen had called out the names of individual from the magazine. French media report that Charb, the Charlie Hebdo cartoonist who was on al-Qaeda's most wanted list in 2013, was seriously injured.
- Victoria Ward. "Murdered Charlie Hebdo cartoonist was on al Qaeda wanted list". The Telegraph. Retrieved 2015-11-27.
- "The Globe in Paris: Police identify three suspects". The Globe and Mail.https://www.theglobeandmail.com/news/news-video/video-french-police-identify-suspects-in-deadly-attack/article22352019/
- Adam Withnall, John Lichfield, "Charlie Hebdo shooting: At least 12 killed as shots fired at satirical magazine's Paris office", The Independent, 7 January 2015.
- Higgins, Andrew; De La Baume, Maia (8 January 2015). "Two Brothers Suspected in Killings Were Known to French Intelligence Services". The New York Times. Retrieved 8 January 2015.
- "Paris shooting: Female police officer dead following assault rifle attack morning after Charlie Hebdo killings". The Independent. Retrieved 9 January 2015.
- "Un commando organisé". Libération. Retrieved 8 January 2015.
- "Paris Attack Suspect Dead, Two in Custody, U.S. Officials Say". NBC News. Retrieved 8 January 2015.
- "France, Islam, terrorism and the challenges of integration: Research roundup". JournalistsResource.org, retrieved Jan. 23, 2015.
- "EN DIRECT. Porte de Vincennes: 5 personnes retenues en otage dans une épicerie casher". Le Parisien. 9 January 2015.
- "EN DIRECT – Les frères Kouachi et le tireur de Montrouge abattus simultanément". Le Figaro. Retrieved 2015-11-27.
- "Quatre otages tués à Paris dans une supérette casher". Libération. 9 January 2015.
- Matthew Weaver. "Charlie Hebdo attack: French officials establish link between gunmen in both attacks". the Guardian. Retrieved 10 January 2015.
- http://www.telegraph.co.uk/news/worldnews/europe/france/11999723/Paris-terror-attacks-La-Belle-Equipe-survivor-so-traumatised-she-cant-speak.html
- "One 35-pound bomb in Brussels attack failed to go off; suicide note found". Retrieved 10 January 2018.
- "What To Know About the Brussels Terrorist Attacks". Time. Retrieved 10 January 2018.
- "Backgrounder: al-Qaeda (a.k.a. al-Qaida, al-Qa'ida)" http://www.cfr.org/terrorist-organizations-and-networks/al-qaeda-k-al-qaida-al-qaida/p9126 Jayshree Bajoria & Greg Bruno. Council on Foreign Relations, Updated: December 30, 2009
- terror: the legal response to the financing of global terrorism Jimmy Gurulé, 2009, p. 63
- The U.S. Embassy Bombings Trial - A Summary PBS, Oriana Zill
- United States District Court, Southern District of New York (February 6, 2001). "Testimony of Jamal Ahmad Al-Fadl". United States v. Usama bin Laden et al., defendants. James Martin Center for Nonproliferation Studies. Archived from the original on November 10, 2001. Retrieved 2008-09-03.
- "Bin Laden claims responsibility for 9/11". CBC News. October 29, 2004.
- "Terrorists Hijack 4 Airliners, Destroy World Trade Center, Hit Pentagon; Hundreds Dead". washingtonpost.com. Retrieved 27 November 2015.
- Hezbollah Attacks Since May 2000 Archived 2009-01-25 at the Wayback Machine. http://www.aijac.org.au/news/article/no-mercy-in-this-religious-war Mitchell Bard, the Jewish AIJAC, 2006-07-24
- "The Middle East Today". google.com. Retrieved 27 November 2015.
- Harel, Amos; Avi Isacharoff (2004). The Seventh War. Tel-Aviv: Yedioth Aharonoth Books and Chemed Books. pp. 274–75. ISBN 9789655117677.
- Human Capital and the Productivity of Suicide Bombers pdf Archived January 27, 2013, at the Wayback Machine. http://scholar.harvard.edu/files/benmelech/files/jep_0807.pdf Journal of Economic Perspectives Volume 21, Number 3, Summer 2007. pp. 223–38
- Q&A: Gaza conflict, BBC News 18-01-2009
- Gaza's rocket threat to Israel, BBC 21-01-2008
- Martin Patience, Playing cat and mouse with Gaza rockets, BBC News 28-02-2008
- "Iran's Enemy Is Not America's Friend" Jamsheed K. Choksy. Foreign Policy, October 10, 2009.
- "Preparing the Battlefield" Seymour Hersh. New Yorker, July 7, 2008.
- "The Secret War Against Iran" Brian Ross. ABC News, April 3, 2007.
- CNN, Gul Tuysuz, Faith Karimi and Greg Botelho,. "Istanbul bomber had ISIS links, minister says". Retrieved 10 January 2018.
- "Benazir Bhutto assassinated - CNN.com". www.cnn.com. Retrieved 10 January 2018.
- "Benazir Bhutto Assassination Case: Musharraf Responsible For Pakistan Prime Minister's Death, Witness Siegel Claims". 17 October 2015. Retrieved 10 January 2018.
- "Mumbai Massacre - Background Information - Secrets of the Dead - PBS". 24 November 2009. Retrieved 10 January 2018.
- Friedman, Thomas (2009-02-17). "No Way, No How, Not Here". The New York Times. Retrieved 2010-05-17.
- Indian Muslims hailed for not burying 26/11 attackers, Sify News, 2009-02-19
- Schifrin, Nick (2009-11-25). "Mumbai Terror Attacks: 7 Pakistanis Charged – Action Comes a Year After India's Worst Terrorist Attacks; 166 Die". ABC News. Retrieved 2010-05-17.
- "HM announces measures to enhance security" (Press release). Press Information Bureau (Government of India). 2008-12-11. Retrieved 2008-12-14.
- "A year after attacks, Mumbai is just as vulnerable; at vigils, many call for police reform" (Press release). Chicago Tribune. 2009-11-26. Retrieved 2009-11-26.http://www.philstar.com/breaking-news/526944/mumbai-commemorates-anniversary-attacks
- Black, Ian (2008-11-28). "Attacks draw worldwide condemnation". London: The Guardian. Retrieved 2008-12-05.
- "Jakarta terror attacks: Will parts of Southeast Asia become ISIS' satellite cities?". Retrieved 10 January 2018.
- CNN, Pat St. Claire, Greg Botelho and Ralph Ellis,. "Tashfeen Malik, the San Bernardino shooter: Who was she? - CNN". Retrieved 10 January 2018.
- TOTAL DESTRUCTION OF THE TAMIL TIGERS: The Rare Victory of Sri Lanka's Long War, Paul Moorcraft
- Hoffman, Bruce (1998). Inside Terrorism. New York: Columbia University Press.
1922 Encyclopædia Britannica/Savings Movement
SAVINGS MOVEMENT. The origin and development of what became known in England as the “War Savings Movement” provides the subject-matter of one of the most interesting chapters in the economic history of the World War. In the United States, to which reference is made in a subsequent section, the Savings and Economy movement was no less remarkable.
Institutions for the normal encouragement of thrift on the part of the people of the United Kingdom were making steady progress up to the date of the outbreak of the war in 1914. From that date onwards the pace of their advance was materially accelerated. The amount due to depositors in the Post Office Savings Bank increased 28% in the decade 1903-13, while during the five years 1913-8 it increased by 42%. The amount due to depositors in the Trustee Savings Banks increased 3.2% in the decade 1903-13, while during the years 1913-20 it increased by 12.5%. These figures give a general indication of the growth of the savings of the people during the war period, but they do not tell the whole story. In the atmosphere created by the War Savings movement, and in the circumstances which for a time materially improved the financial position of the wage-earning classes, not only did existing savings institutions develop rapidly, but a new national thrift machinery was brought into being and its operations met with remarkable success.
Cost of the World War. — Within six months of the outbreak of hostilities in Aug. 1914, it became evident to those who were more closely in touch with realities that the World War would be a prolonged struggle, in which it would be necessary for the combatant nations to marshal their entire resources of production. Modern warfare was seen to demand not only that there should be a high percentage of the population in the fighting forces, but also large numbers of civilians producing on a huge scale military equipment of the most varied character. The enormous volume of goods and services which had to be requisitioned is best expressed in terms of the national expenditure. The largest amount spent by Great Britain in war in a single year before 1914 was £71,000,000. The Revolutionary and Napoleonic wars cost in the aggregate £831,000,000 spread over 20 years, an average annual expenditure of £42,000,000; the Crimean War cost £675,000,000 in three financial years, or an average annual expenditure of £225,000,000; while the S. African War of 1899-1902 cost £211,000,000 spread over four years. In the face of the expenditure during the World War of 1914-9, these figures are insignificant. The money spent by the Government of Great Britain during the five financial years cannot be placed at less than £8,000,000,000 to £9,000,000,000. At one period the average daily expenditure rose to the enormous figure of nearly £7,000,000 sterling.
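The annual averages quoted for the earlier wars follow directly from the totals given. As a quick arithmetical check (a sketch only, using the figures in the text; the dictionary layout is illustrative):

```python
# Totals and durations as quoted in the article, in pounds sterling.
wars = {
    "Revolutionary and Napoleonic wars": (831_000_000, 20),
    "Crimean War": (675_000_000, 3),
    "South African War": (211_000_000, 4),
}

for name, (total_cost, years) in wars.items():
    # Integer division: average annual expenditure implied by the total.
    print(f"{name}: about £{total_cost // years:,} a year")
```

The Crimean figure reproduces the article's £225,000,000 exactly, and the Napoleonic figure of £41,550,000 rounds to the stated £42,000,000.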
It was not, however, till the year 1915 was well advanced that the full meaning of the cost of the World War in terms of goods and services began to be appreciated, even by those in high places. During the first few months of the war the inevitable dislocation of industry caused by the calling-up of men of military age and the interference with the normal markets led to a considerable amount of unemployment, and steps were taken by the Government and by the public for the relief of distress. This period of unemployment lasted but a short time and far less distress was caused than had been anticipated. The increased demand for men for the fighting forces and the rapid organization of special war work in many directions quickly absorbed the unemployed. Women were drafted into industry in ever-increasing numbers. In the meantime, the normal production of goods was reduced and stocks diminished. Prices rose rapidly owing to the excess of demand over supply and wages were raised sympathetically. Gradually a large amount of overtime became general, and, in many instances, owing to several members of the same family being in receipt of good wages and on account of overtime, the incomes of working-class families reached very substantial figures. By the summer of 1915 the purchasing power of the people of the country had been very considerably increased.
With the increased demand for goods which followed this rise of the purchasing power of the masses, prices mounted still higher; and with the growth of credit, which was required to cover the increased payments of wages, a dangerous situation was created. By the middle of 1915 it was obvious that it was of paramount importance that the personal expenditure of the people of the country should be checked, and that, in fact, the stopping of individual expenditure was quite as important as the raising of money for the war. It was seen that while from the financial point of view it was desirable that the expenditure on the war should be covered as far as possible by monies raised by taxation, and next by loans from money saved by the people of the country, it was equally important that the mass of the people should reduce their personal expenditure in order to release the resources of the country in capital and labour for the production of the essentials of war. The military advisers of the nation were calling for still larger numbers of men for the fighting forces. Recruiting became more and more urgent. The war factories were crying out for tens of thousands of hands for the production of the vast stores needed on all the fighting fronts. At the same time, the demand of the people through their daily expenditure, stimulated by high wages and big incomes, was automatically retaining labour in the production of things which were not only not necessary for their subsistence, but were often mere luxuries. Again, the expenditure of individuals tended to increase the purchase of imported goods, necessitating either increased exports demanding labour for their production, or adversely affecting the Exchanges and necessitating the export of gold or the sale of foreign securities. The real difficulty of the situation was seen to be the scarcity of human labour to produce the necessaries of war rather than the finding of money to pay for them. 
Thus the exigencies of the recruiting agencies and national factories led directly to the “goods and services” point of view and to an imperative demand for personal saving.
The dangers of the situation were emphasized by Mr. Lloyd George in May 1915, and it soon became evident that drastic steps would have to be taken to enforce economy throughout all ranks of the community and particularly among the wage-earners, whose aggregate purchasing power had reached dimensions which made their personal expenditure the largest factor in the situation.
Early Efforts for Saving. — During the autumn of 1915, a vigorous mission was undertaken by a voluntary body known as the United Workers, who by the holding of lectures and meetings throughout the country did much to explain the facts to the people and prepared the ground for more concentrated effort later. About the same date a Parliamentary War Savings Committee was established, and through its efforts local war thrift committees were set up in a number of the larger towns of the country. All these efforts were, however, to a large extent ineffective, owing to the absence of any form of investment security specially adapted for persons of small means. The machinery of the Post Office Savings Bank and the Trustee Savings Banks, allowing for deposits at low interest, was inadequate to cope with the situation.
The history of the Post Office Savings Bank during the first year of the war fairly accurately indicates the trend of events. The outbreak of the war saw a sharp run on the Post Office Savings Bank deposits, a run accentuated by the actual shortage of coinage which persisted even up to the end of August. Withdrawals from the Post Office Savings Bank from the declaration of war to the end of Aug. exceeded deposits by £2,500,000. After Aug. 1914 confidence was quickly restored and deposits began to come in freely. Before the end of Sept. they had exceeded the withdrawals, and so completely did the tide turn that the deposits for the three months ended April 30 1915 exceeded the withdrawals by £4,400,000, or were £3,000,000 in excess of deposits in the corresponding quarter of 1914. For the five months from Jan. 1 1915 to May 31 1915, the balance due to depositors increased by over £6,500,000 as compared with an increase of £1,700,000 during the corresponding period of 1914.
Good as these results were in themselves, it became, however, increasingly clear that the Post Office Savings Bank alone, with the rate of interest on deposits at 2½%, was not sufficient to stimulate saving in the country to the extent that was necessary. Several times pressure was brought to bear on the Government with a view to getting the interest on the savings bank deposits increased, but this pressure was resisted. Other small attempts were made to attract saving. During the issue of the 4½% War Loan in June 1915, scrip vouchers of 5s. and 10s. and scrip certificates of £1 and £5 were issued by the Post Office. The scrip vouchers, when they amounted to £5 or a multiple of £5, and the scrip certificates could be exchanged at any money order post-office during the first fortnight of Dec. 1915, the owner being duly registered as a holder of a corresponding amount of War Loan and being given a stock certificate. Interest was allowed on the scrip vouchers according to the month of purchase and provision was made for repurchase by the Post Office at face value of any vouchers in excess of the £5 multiple. The aggregate result was that, between Nov. 1915 and Dec. 1920, scrip certificates amounting to £3,967,965 and scrip vouchers amounting to £1,049,838 were exchanged for 4½% or 5% War Loan or Exchequer bonds. The £5 scrip certificates were only exchangeable for 4½% War Loan, but the scrip vouchers could be held for subsequent loans. These were the chief official steps taken to facilitate saving by the people up to the end of 1915. In Nov. of that year the Government was pressed to increase the maximum sum which depositors might pay into the Post Office Savings Bank in any one year, but it was pointed out that this would require fresh legislation, the existing limits having been fixed by the Savings Bank Acts of 1891 and 1893.
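The exchange rule for scrip vouchers, whole £5 multiples exchanged for War Loan and the excess repurchased at face value, can be sketched as follows (the function name and the shilling arithmetic are illustrative, with £1 taken as 20s.):

```python
def split_voucher_holding(total_shillings: int) -> tuple[int, int]:
    """Divide a scrip-voucher holding (in shillings) into the part
    exchangeable for War Loan (whole £5 multiples) and the excess
    repurchased by the Post Office at face value."""
    FIVE_POUNDS = 5 * 20  # £5 = 100 shillings
    exchangeable = (total_shillings // FIVE_POUNDS) * FIVE_POUNDS
    repurchased = total_shillings - exchangeable
    return exchangeable, repurchased

# A holder of vouchers worth £12 10s. (250s.) could exchange £10
# and would have the remaining £2 10s. repurchased.
```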
Expression had been given to the need for action in a letter to The Times in the summer of 1915 from “A Banker.” The force of his contentions was widely recognized, and this letter may be regarded as the germ from which the War Savings movement was started. This was followed by an important manifesto signed by some of the foremost men in the world of business published in The Times in November. It was a straight-forward statement calling the nation to thrift and urging concentration on the production of essentials only, eschewing non-essentials by universal personal economy.
Montagu Report. — Finally, in Dec. 1915, the Chancellor of the Exchequer (Mr. R. M’Kenna) set up a Committee under the chairmanship of the financial secretary to the Treasury (Mr. E. S. Montagu) to consider the question of getting contributions to War Loans from the working-classes. The final report of this Committee (Cd. 8179), dated Jan. 26 1916, marked the birth of the War Savings movement as a national organization.
An interim report had been issued on Dec. 28 1915, recommending the removal for the period of the war and six months after of the restrictions which limited the amount deposited by any one depositor in the Post Office and Trustee Savings Banks to £50 in any one year and £200 in all. The Committee also recommended that Exchequer bonds of the denominations of £5, £20 and £50 should be placed on sale at all post-offices, provision being made for the deposit of the bonds at the post-office and the issue of books in which the deposit of the bonds would be recorded. The Chancellor of the Exchequer recommended the adoption of these proposals and they were concurred in by a Treasury minute of the same date. Two series of bonds, with interest at the rate of 5% per annum and 6% per annum respectively, were on sale in 1916 and brought into the Exchequer nearly £44,000,000.
The final report of the Committee pointed out that there were two separate objects to be attained by the successful solution of the problem of the small investor: (a) the reduction of general consumption, which would tend to check the rise in prices; and (b) the raising of a certain amount of money for the prosecution of the war. The needs of the small investor were described as being: (a) a simple method of investing savings; (b) a guarantee that the capital value of the investment will not depreciate; (c) the ability to withdraw savings at short notice; and (d) the knowledge that as high a rate of interest is paid on the money of the small investor as on that of the large. It was further pointed out that both propaganda and organization were essential to success in making any appeal for savings. The report recommended the appointment of two committees: one to carry on propaganda and to establish on a large scale voluntary War Savings associations for coöperative saving, and the second to devise and approve various schemes of saving and to safeguard their financial soundness. In order to meet the needs of the small investor the Committee recommended the issue of a new form of Government security in the shape of "War Savings Deposits" of 15s. 6d. each, each deposit entitling the subscriber to receive £1 on the fifth anniversary of the date of the deposit.
National War Savings Committees. — The Chancellor of the Exchequer adopted the recommendations, and on Feb. 8 1916 the two committees were appointed. (These two committees were amalgamated in the following April under the title of the “National War Savings Committee,” separate committees being established for Scotland and Ireland.)
War Savings Certificates. — On Feb. 19 1916, the projected savings deposits were issued under the revised title of “War Savings Certificates.” The War Savings certificate must rank as one of the most ingenious and successful financial instruments ever conceived. For the first time in history a security was offered to the people which by its nature tended to concentrate the mind on the growth of capital value through the accumulation of interest, rather than on the annual return in the form of dividends. This feature of the “small investor's Treasury bill,” as it has been called, has had, undoubtedly, a far-reaching psychological effect. It may be said to have projected the mind of the investor towards an ultimate personal use of the accumulated proceeds of his investment after a considerable term of years, and to have reduced the motive of investment merely as a means of providing an annual sum to be spent on its arrival. To the intrinsic merits of the certificate the success of the War Savings movement is, to a great extent, attributable. The certificates were purchasable for 15s. 6d. and could be cashed at any time. At the end of 12 months a certificate could be cashed for 15s. 9d. After this period its cash value increased by a penny a month, and at the end of five years it could be cashed for £1; that is to say, an additional 3d. was added to the value at the end of the fifth year beyond the increase of a penny a month. Subsequently, by Section 4 of the War Loans Act, 1919, the life of the certificates issued, or to be issued, was automatically increased to ten years, the value of the certificates rising after the end of the fifth year by a penny a month until the end of the tenth year, when a further 1s. would be added, making the final encashment value 26s. 
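The growth schedule just described can be tabulated as a brief modern sketch (an illustration, not part of the original article; pre-decimal money is reckoned in pence, with 12d. = 1s. and 240d. = £1, and the value before 12 months is assumed to be the purchase price, since the certificate was repayable at any time without loss of capital):

```python
# Encashment value, in pence, of one War Savings certificate after a
# whole number of months, following the schedule described in the text.
def certificate_value_pence(months_held):
    purchase = 15 * 12 + 6                       # 15s. 6d. = 186d.
    if months_held < 12:
        return purchase                          # assumed: repayable at cost in year one
    value = 15 * 12 + 9                          # 15s. 9d. at the end of 12 months
    value += min(months_held, 60) - 12           # a penny a month up to year five
    if months_held >= 60:
        value += 3                               # the additional 3d. making £1 at five years
    if months_held > 60:                         # extension under the War Loans Act, 1919
        value += min(months_held, 120) - 60      # a penny a month to year ten
    if months_held >= 120:
        value += 12                              # a further 1s., giving 26s. in all
    return value

assert certificate_value_pence(12) == 15 * 12 + 9    # 15s. 9d.
assert certificate_value_pence(60) == 240            # £1
assert certificate_value_pence(120) == 26 * 12       # 26s.
```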
By the Finance Act, 1918, Section 41, and the War Loans Act, 1919, Section 4, it was provided that the encashment of certificates held by any individual owner could be postponed beyond the period of maturity until the maturity of the last-dated certificate in his possession, such certificates held over increasing in value at a flat rate of a penny a month. Section 42 of the Finance Act, 1916, provided that the accumulated interest payable in respect to any War Savings certificate issued by the Treasury through the Post Office, under which the purchaser by virtue of an immediate payment of 15s. 6d. became entitled after five years to receive the sum of £1, should not be liable to income tax so long as the amount of the certificates held by the purchaser did not exceed the amount for the time being authorized to be held under regulations made by the Treasury. To avoid the serious consequences which would result to the revenue if income taxpayers generally were to use this form of investment, it was originally arranged to confine the issue of War Savings certificates to persons whose total income from all sources did not exceed £300 a year. Experience, however, showed this limitation to be undesirable. The necessity for a declaration as to income at the time of the purchase of the certificates caused administrative difficulties, and by reason of the income limit many wage-earners who were temporarily drawing large wages were unable to buy certificates. In view of these facts, the Committee recommended the Treasury to abolish the income limit, and the restriction was removed on June 10 1916. All formalities in regard to deduction and recovery, proof of exemption or title to abatement from income tax were dispensed with, and a limit of 500 certificates was put on the number allowed to be held by any one person.
By the Finance Act of 1918, it was provided that if a person's holding was brought by inheritance above 500 £1 certificates or their equivalent, the excess might be held without liability to any penalty or to income tax, so long as the person did not purchase for his own benefit, or have purchased for him, any further certificates while holding more than 500 certificates in all.
The War Savings certificate was ingenious not only from the financial standpoint, but also in its form. The certificates were issued in books, upon the cover of which the name of the holder and his address had to be inscribed. The book was of no value except to the person whose name was written upon it. The certificate contained a small panel on its right-hand side, to which the receipt for the purchase price had to be affixed, and the certificate was not valid until this had been done. The receipt was printed on green paper, and each receipt had a number which became the official number of the certificate. The certificate was registered at the money order department of the Post Office as belonging to the particular individual in whose name it was issued. It was necessary to have the signature of the owner to prevent the certificate being cashed by any unauthorized person. In order to provide for this, the receipt which was affixed to the certificate was only the left-hand portion of an original form of receipt, while the right-hand portion, having upon it the corresponding number, had to be filled in by the applicant and handed back to the postmaster. This portion contained the full name and address and signature of the purchaser and formed the basis of the registration system. When the certificate was cashed at a later date, the number on the certificate and the signature of the applicant on the request for repayment could be compared with that portion of the receipt which had been filed. Certificates might be bought by one person on behalf, and in the name of, another person, the signature of the beneficial owner being, if possible, supplied. A cut-out signature from a letter or other document was accepted, but if a signature was not available, it was obtained later by the Post Office. In the case of children under seven years of age the signature was not required. 
After the receipt had been stuck in the certificate book and a certificate had thus been completed, it could only be transferred to another person in exceptional circumstances and by permission of the Postmaster-General. A fee of 1s. was charged in respect to each transferee. Certificates were not negotiable, and their value would not be paid to anyone but the holder whose signature was registered by the Post Office. Holders over 16 years of age could make nominations of their holdings in case of death. Every nomination had to be on a proper form, which could be obtained from the Controller of the money order department, and required to be received by the Controller during the lifetime of the holder. In addition to the receipts for the payment for single certificates costing 15s. 6d., each of which was stuck into a certificate book, single documents representing 12 or 25 certificates could be obtained from any money order post-office and most banks. These consisted of two parts divided by a perforation, the left-hand portion for registration, and the right-hand portion to be retained by the purchaser. Books were not supplied for these certificates. Documents were also issued for any number of certificates from 26 to 500, both inclusive. These were not kept at local post-offices, but were issued by the Controller and Accountant General of the Post Office, to whom application with remittance was made direct or through a bank. They were applied for on a special form and issued a day or two after receipt of the application. If a certificate, or book of certificates, were lost, a new certificate, or book of certificates, would be issued at a charge of 1s., provided the serial numbers could be furnished to the Controller of the money order department.
On Dec. 4 1920, the old print of War Savings certificates was withdrawn from sale at post-offices and banks, and on Dec. 6 “National Savings” certificates were substituted. The change was legalized by the Savings Bank Act of 1920, and was one of title only. The conditions attaching to the old certificates still applied.
The savings certificate formed the basis of the operations of the War Savings associations, which were established under the auspices of local War Savings committees and affiliated to the National War Savings Committee.
War Savings Associations. — Not less important than the War Savings certificate was the system of association, or club, proposed by the Montagu Committee. In their final report the Committee pointed out that the would-be investor should not, if it could be avoided, be left to himself to seek for an investment. Facilities for investment should be provided by agencies in close touch with him; and these agencies, having succeeded in inducing him to save, should endeavour by careful propaganda and by thorough organization to persuade him to make the continuance of saving a matter of habit. The Committee emphasized the advantages of placing an agency between the small investor and the State which could collect and manage the savings of the small investor. It was pointed out that the Government could enter into no contractual relationship with the individual investor, unless it assumed complete control over the schemes adopted and also supervised in detail the actual administration of the societies themselves. They added that the organization of such control and supervision would require the creation of a new Government department, which, apart from the question of the expense involved, it would have been impossible to staff during the war. Also, the rigidity of procedure which a State system would inevitably involve would be fatal to the free local initiative on which the success of such a scheme would depend. At the same time, if societies, many of which had at their command no expert financial knowledge, were left free to develop schemes without supervision or control, it was not unlikely that some of them would become insolvent.
The problem was to obtain the best safeguards which could be secured for the financial soundness and efficient administration of the different schemes, while leaving the responsibility for both administration and results with the societies themselves, and they recommended that the committee which should be appointed by the Government, and to which the various investment societies might be affiliated, should be regarded, not as representing the Government, but as an independent body of experts acting on behalf of the societies themselves. Its duties would be primarily of an advisory character, but it could properly refuse to recognize any society the constitution and rules of which it did not approve and withdraw recognition from any society which might fail to satisfy the committee that it was being properly administered. The committee could, if it saw fit, organize a system of inspection and audit of the operations and accounts of the affiliated societies and by these means secure a very substantial measure of control over their operations.
Local War Savings Committees. — In accordance with these views, the War Savings Committee embarked upon a widespread scheme for the promotion of savings associations, delegating the propagandist work in a large measure to local committees which were set up throughout the country. Before the war was over there were in existence in England and Wales 60 county committees and 1,840 local war savings committees acting as propagandist agencies under the general control of the central body, while the War Savings associations set up under their auspices numbered over 40,000 with a membership of approximately 4,000,000 people. (At the end of 1920 there were still 1,701 local committees and over 28,000 associations.) A savings association could be formed by any number of people who were willing to work together to secure the attainment of its objects. In practice it was found that an association could readily be formed by those who were already corporate in some way; for example, by those who were members of a trade union, a friendly society or a coöperative society, by fellow workers in a shop or factory, or by the members of a church, chapel or social club. Each association had its governing committee, secretary and treasurer.
Scotland and Ireland, with their separate organizations, developed the movement on similar lines. The total number of voluntary workers in the movement was estimated to be between 200,000 and 250,000.
Official Agents. — By the end of 1917, when nearly 30,000 War Savings associations had been affiliated, there had been established on an average one association for every 1,200 inhabitants in England and Wales. Most of the social and industrial groups were covered, but it was realized that a large section of the wage-earning population and, possibly, the most highly paid, did not readily join War Savings associations. Many employees objected to joining associations to whose books their employers might have access. They were of opinion that knowledge of the fact that they were saving money might tend to diminish the force of any claim they might make for enhanced wages on account of the increased cost of living. With a view to reaching the prospective small investors of this class, it was decided to add to the number of places where War Savings certificates and National War Bonds could be bought. Certificates were on sale at all money order offices and at most banks, but the majority of the class of persons under consideration had no banking account and had no reason to enter a bank. The Post Office staff was obviously unable to make any special effort to push the sale of Government securities, having regard to the heavy mortgage on their time caused by the manifold additional duties which the exigencies of the war period cast upon them. It was therefore arranged to license certain tradesmen and firms as official agents for the sale of certificates and bonds. These agents purchased the securities outright with their own funds and received the certificates and bonds dated, but unregistered. They then resold the certificates and bonds to their customers and others. By the end of the war, these securities were on sale at more than 14,000 shops and other establishments throughout the country. Very large numbers of certificates in the aggregate were sold in this way. 
The success of the system is noteworthy in that it involved the sacrifice by the official agents of the interest upon the capital used for the purchase of stocks of certificates between the dates of purchase and sale.
Savings Schemes. — The National Committee, following the guidance of the Montagu Committee, had also set itself the task of preparing various model schemes of coöperative saving to meet the requirements of the people. The following schemes were evolved at various times: —
Scheme 1. — Money subscribed through a savings association was invested in Post Office Exchequer bonds. For each £5 Exchequer Bond a subscriber paid 2s. a week for 50 weeks, or 10s. a month for 10 months. All sums subscribed were remitted to the Treasury each week, the Treasury paying interest on the amounts received at the rate of 5% per annum. The bonds and cash payments due to members were distributed half-yearly, e.g. in the case of subscriptions beginning May 1916 bonds and cash were distributed June 1 1917, weekly subscribers receiving a cash payment of 2s., and monthly subscribers 1s. 9d. The cash distributed was free of income tax, but had to be included in the income-tax return of members. It could be paid at the Post Office or could be credited to an account in a savings bank. This scheme was not adopted on a wide scale and was abandoned at a later date. Schemes involving subscriptions for certificates were found in practice to be more popular and more easily worked.
Scheme 2A. — Monies subscribed through an association are invested in War Savings certificates. Subscriptions of 6d. or any number of sixpences are accepted. War Savings certificates are purchased from the Post Office with the cash received by the secretary from the members, and they are dated at the time of purchase, but they are not registered. Each member when he pays his first subscription is given a book. His subscriptions are entered in the book as and when they are paid. When the subscription of any member amounts to 15s. 6d., he is given a certificate and the registration portion of the certificate is then filled in and lodged at the Post Office. The method of distributing certificates of different dates and consequently of different encashment values is settled by the committee of each association. Members can withdraw before reaching the full 15s. 6d. and the amount deposited is repaid, but without interest. The advantage of the scheme lies in this, that if 31 people individually save 6d. a week for 31 weeks, they will each have a certificate at the end of 31 weeks, but if they join an association to which they pay 6d. a week, the association is able to buy one certificate each week, and at the end of 31 weeks it will have 31 certificates. The first of these certificates is dated 30 weeks earlier than a certificate bought by any member acting alone. On the average, they will be dated 15 weeks earlier and consequently will mature 15 weeks earlier. The books are provided free of cost by the National Committee. The book-keeping is necessarily somewhat detailed, but it is essential for the protection of members.
This scheme was probably the most widely adopted.
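The arithmetic of coöperative purchase under Scheme 2A can be checked with a short modern sketch (an illustration only, assuming the 31 members of the text each subscribing 6d. a week, with 15s. 6d. = 186d.):

```python
# 31 members at 6d. a week: the association's weekly intake buys
# exactly one certificate, dated that week.
members, price_pence, weekly_pence = 31, 186, 6
assert members * weekly_pence == price_pence

# Certificates are dated in weeks 1..31; a member saving alone would
# not complete 15s. 6d. until week 31.
dates = list(range(1, members + 1))
advantage = [members - d for d in dates]   # weeks of earlier dating per certificate

assert max(advantage) == 30                # the first certificate: 30 weeks earlier
assert sum(advantage) / members == 15      # on the average, 15 weeks earlier
```

The earlier dating matters because a certificate's encashment value, and its maturity, ran from its date of purchase rather than from the date a member completed his subscription.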
Scheme 2B is similar to Scheme 2A, but the certificates are not distributed until one year after the subscriptions of any member amount to 15s. 6d. The scheme was not widely adopted, people preferring to get their certificates immediately they had made up their 15s. 6d.
Scheme 3 is in essence a savings bank, all the money received being invested in War Savings certificates. The minimum subscription is one penny. Any number of pennies are accepted. Subscriptions are withdrawable at 14 days' notice, or without notice in urgent cases. Each member has a book in which subscriptions are entered. On the completion of the payment of 15s. 6d. the member is registered as being entitled to the payment of £1 at the end of five years. The certificates are not distributed but are held by the association until they mature. A few associations in schools adopted this scheme, but after a time the majority ended by distributing the certificates to their members and adopting Scheme 2A.
Scheme 4 was a scheme for investment by instalments in Exchequer bonds and War Savings certificates, the Treasury paying interest on the amounts received at 5% per annum. During the war no part of the amounts paid into the Treasury were withdrawable in cash. When an Exchequer bond or certificate was fully paid for the Treasury issued the security to the association for delivery to the member entitled to it, the cost of the securities being charged to the amount standing to the credit of the association with the Treasury. Cash was to be returned to the association three months after the end of the war. This scheme was not found satisfactory and was little adopted.
Scheme 5 is a scheme similar in principle to Scheme 2A, but subscriptions are paid by buying from the association sixpenny coupons. The coupons are of a special “Swastika” design and can only be used for subscribing to associations by whom they are issued. The association is supplied on credit with coupons issued by the Central Committee and these have to be accounted for. The association overprints its coupons with its own serial number. Members get a coupon for each 6d. and place the coupon on a card. When the card is full it is exchanged for one of the certificates already purchased by the combined subscriptions of the members. As full cards of coupons come in they are sent to the Central Office in reduction of the association's liability for those supplied on credit. (At a later date the coupons were issued to the associations on the standing imprest system.) This scheme involved little or no ordinary book-keeping. A register of the issue of certificates was kept. The only clerical work involved of necessity was the keeping of a careful stock of the coupons. The scheme was adopted on a large scale and by some of the largest associations. As a general rule, local committees handled the distribution of the coupons in their districts. This threw a heavy burden on the local secretaries. Considerable difficulty was experienced in many instances in clearing coupon stock accounts, and the distribution of coupons on an enormous scale threw a large amount of work on the head office. The scheme is gradually being replaced by a simpler system of cards and savings stamps procurable from any post-office.
Scheme 6 is a special scheme under which employers purchase certificates in advance for employees with their own funds. The certificates are purchased in blank, that is to say, unregistered, and sold to the employees by any form of instalment system preferred. The employer in effect makes a free grant to his employees of the interest accruing on the money between the date of purchase and the date of sale.
Scheme 7 is a development of an earlier system under which the Post Office issued cards upon which 31 ordinary sixpenny postage stamps could be affixed by anyone. A card when filled with stamps was exchangeable at any money order office for a War Savings certificate. There was no advantage from coöperation. It was merely a simple device to enable people to save the money for a certificate by instalments of 6d. each.
When the Armistice was signed the National Committee gave careful consideration to devising some alternative scheme to avoid the heavy clerical labour entailed in the working of Schemes 2A, 2B, 3 and 5. This labour had been obtainable during the war on a voluntary basis and it is possible that the very labour itself indirectly assisted the movement in its early days in that it gave the officials of associations the knowledge that they were doing something definite for the benefit of the country in wartime. In 1918, the Post Office agreed to the issue of a distinctive adhesive war savings stamp with the Britannia head design. This stamp was placed on sale at all post-offices. Special savings cards containing 31 spaces were issued to savings associations. Treasurers and secretaries of associations provided themselves with stocks of the stamps, which they were authorized to procure as credit stocks, and they issued these to their members for cash. With the cash they purchased more stamps. The cards when filled were exchangeable for certificates at any money order office, and savings stamps purchased at any post-office or through any agency could be used. The scheme possessed considerable elasticity, as it enabled members of one association on transferring their residence to join another association and complete their subscriptions, or they could fill their cards with stamps purchased anywhere and exchange them for certificates anywhere. The disadvantage lay in the absence of the benefit of the early dating of certificates which was given by the other schemes — an advantage which, it was found in practice, was so generally appreciated that the new scheme, in spite of the saving of labour to the officials of associations, was not widely adopted. After considerable thought the scheme was revised and early in 1921 a system was introduced which, while maintaining the simplicity of Scheme 7, also gave the benefit of the early dating of certificates.
The predating of certificates is secured by the use of date labels. The date labels (printed in pairs) are supplied by the National Savings Committee to the association officials. Whenever the official purchases Britannia head savings stamps, he can present at the post-office one pair of these date labels for every 31 sixpenny stamps purchased. The post-office official stamps the labels with a date stamp of that day. When a member of the association presents a card filled up with savings stamps all of which have been purchased from the association, the secretary affixes to the certificate which is issued in exchange for the card one of the officially dated date labels — one date label is affixed to the signature portion of the certificate and its fellow or counterpart is fixed on the counterpart of the certificate in the certificate book. This scheme therefore preserves the full benefit of early dating due to coöperative purchase and yet reduces the clerical work of the association official to the smallest compass. The only book which it is thought advisable for the official to keep is a control receipt book for acknowledging receipt of members' completed cards given in exchange for certificates, this serving also as a register of certificates, in case the member loses his certificate book.
The value of savings stamps sold to Nov. 30 1920 was £1,739,000, of which approximately £1,464,000 had been exchanged for savings certificates.
Municipal Savings Banks. — The Municipal Savings Bank (War Loan Investment) Act, 1916, authorized the establishment, subject to certain restrictions, of municipal savings banks in municipal boroughs with populations exceeding 250,000. The only municipality to adopt this Act was Birmingham, where a bank was started at the end of Sept. 1916. The “Birmingham Corporation Act, 1919” extended the powers of the Corporation and authorized it to establish a savings and housing bank.
Navy, Army and Air Services. — Although military savings banks and facilities for saving in the army had existed since 1859, with the recruiting of large numbers of civilians for the new armies it was found that the normal methods of saving were insufficient to attract very large sums of money.
On the issue of the 4½% War Loan in June 1915 it was felt right that due facilities should be afforded the men in the army for making their investments through the Post Office issue of the Loan. Arrangements were accordingly made for any soldier whose pay account was sufficiently in credit to invest by instalments of 5s., 10s., £1 or £5, the amount being debited to his account and transferred to the Post Office through the regimental paymaster. Similar arrangements were made for the navy and the scheme was found to work so smoothly that it was eventually extended to Exchequer bonds and War Savings certificates as they became available, and ultimately for deposits in the Post Office Savings Bank. The Post Office undertook the safe custody of the War Savings certificates and bonds for the investors. Later in 1916, by arrangement with the War Office, a special officer was entrusted with the work of establishing war savings associations in the army, with very satisfactory results. In 1917, £186,682 was saved through the army associations; in 1918, £3,162,975; in 1919, £1,804,380; and in 1920, over £1,000,000, making a grand total of £6,000,000. In June 1920 the Army Council, finding the savings associations had such a beneficial effect, made an order that all units both at home and abroad should form savings associations, and arrangements were made for command paymasters stationed abroad to hold stocks of certificates. The Air Ministry at the same time issued an order on similar lines. The War Savings movement was also carried into the navy and merchant service, suitable arrangements being made for remittance of monies through the paymasters and pay offices.
Schools. — It would be impossible to give even the briefest summary of the War Savings movement without reference to the work done by the savings associations in the schools of the country. Thanks to the influence of the Board of Education, and, particularly, to the efforts of a number of inspectors of the Board who were lent for service with the National Committee and who acted as the secretaries of the county committees and as local representatives of the Committee in the provinces, but, above all, thanks to the whole-hearted efforts of thousands of schoolmasters and mistresses throughout the country, there was scarcely an elementary school in the United Kingdom without an efficient and vigorous association. Before the war a very large number of schools had their penny banks. No attempt was made to supplant these. With the coöperation of the savings banks in connexion with which these penny banks were operated, arrangements were made to continue the penny bank system with the savings association methods, and often the two systems were carried on in the same school side by side. The old penny bank system as a “short term” saving machinery had a value which it would have been undesirable to destroy, while it naturally led by stages to the “long term” saving by means of the certificate. Most of the schools continued their banks and associations after the Armistice, and in no section of the community is the movement more alive and progressive to-day. It is impossible to say what proportion of the savings of the country stand in the names of the children, but it must amount to many millions sterling and this alone must have an incalculable effect on the future.
Propaganda. — The human machine created by the National Savings Committee was stimulated, from time to time, by every kind of publicity method. Thousands of public meetings were held and lectures given; educational pamphlets and leaflets dealing with the elements of economics were distributed; special campaigns with such stimulating machinery as “tank banks” were inaugurated; a system of commissioners and organizers kept headquarters closely in touch with the local committees; special organizations dealt with the army and the navy, munition works and other factories. The local authorities rendered invaluable assistance to the local committees by the loan of staff, the provision of office accommodation and in many other ways. The London and provincial press were consistently sympathetic to the movement and gave freely of their space to record its activities and assist its campaigns. During the war the organization was, from time to time, utilized by the Chancellor of the Exchequer to assist in the public issues of War Bonds and War Loans. During these periods invaluable help was given by leading press experts, who, in cooperation with the National War Savings Committee, undertook the control of special publicity campaigns (see War Loan Publicity Campaigns). These campaigns for the special issues greatly stimulated the small investor. On each occasion of the issue of a great public loan numbers of new associations came into being and the weekly purchases of certificates were very much increased. One of the most significant results of the adoption of these methods of publicity and propaganda was the great extension of the numbers of individual citizens holding Government securities.
Whereas before the war it was estimated that there were some 345,000 holders of Government securities, it is calculated that no less than 17 million people have to-day a holding in some form of State loan; while the aggregate amount subscribed by small investors through the Post Office for War Loans and other Government securities, including savings certificates, was nearly £500,000,000 at the end of 1920.
Withdrawals. — The Montagu Committee laid emphasis on the fact that the small investor wishes to be able to withdraw his savings at short notice without loss of capital. “The financial emergencies of life come upon the working man with startling suddenness. He may be thrown out of employment, or an illness or death in the family may result in an immediate call. He has not the facilities for credit which the wealthy or even the middle classes enjoy and money only obtainable at six or twelve months' notice is of little use to him.” There is no doubt that the losses sustained by the working-classes from their investments through the Post Office in Consols and other similar long-dated securities through the automatic fall in capital value due to the rise in the general rate of interest have had in the past an adverse influence on thrift. Hence the arrangements that War Savings certificates should be repayable at a definite value which is never less than the amount invested, and within two or three days of demand, that is to say, allowing time for identification of the registered holder to avoid payment to a wrongful possessor.
An analysis of the withdrawals of savings certificates is interesting. The total number of certificates sold in the United Kingdom from Feb. 16 1916 to the end of Dec. 1920, was 440,076,000 in £1 units, of a total value at 15s. 6d. each of £343,259,000. The total repayments due to withdrawals, including interest, amounted to £61,404,089, of which £3,521,948 8s. 7d. represented interest. The percentage of the value of certificates repaid (excluding interest) to total value of certificates issued was 18.01 per cent. This percentage may be regarded as satisfactory when one considers the calls upon the small investor and the fact that the current rate of interest on the shares of well-established commercial and industrial concerns since the Armistice has been very attractive. Much money has been withdrawn for housing, as is evidenced by the case of Higham Ferrers in Northants, a town of 2,500 people, where no less than 50 men have bought their houses through investments in savings certificates.
Post-Armistice Period. — In 1917 a committee was appointed by the National Committee to consider what facilities for saving should be provided for the small investor after the war. The committee in their report stated that the habit of saving had, as a result of the War Savings movement, been formed by many people of all classes who had not previously acquired it, that this habit ought not to be allowed to lapse, and that the State should encourage saving after the war by continuing to offer special facilities to the small investor. They saw no reason to suppose that the State would at any time be unable to use profitably the money of the small investor. They pointed out that the ordinary borrowing capacity of the State would be severely taxed by the necessity for renewing and, when possible, consolidating the floating debt, and they considered it worthy of serious consideration whether a plan might not be adopted for applying the proceeds of post-war borrowing from the small investor in order to secure funds for public utility services, such as the housing of the working-classes and other projects of social urgency, the funds for which it might be difficult, if not impossible, to raise otherwise for a considerable period. The committee strongly advised the preservation of the savings machinery established during the war and recommended the permanent continuance, subject to modifications, of the War Savings certificate. The continuance of the savings organization was also recommended by the “Committee on Financial Facilities” appointed in 1917. In their report, dated Nov. 21 1918, they said: —
“We are impressed by the enormous potential increase in the number of the small investors. The continuance on the part of the people of this country of the habit of investing their savings constitutes a most important factor in the provision of the capital necessary for the rapid reconversion of trade and industry. It is impossible to over-estimate the value of the work done by the war savings associations throughout the country, in encouraging habits of thrift and economy. Government securities furnish by far the best and safest medium for the investment of small sums of money, and we are glad to notice that steps are to be taken, by means of savings associations, to continue the policy which had proved so successful during the war.”
British Savings Associations Affiliated at Dec. 31 1919
| Clubs and
|Rutl.||20,346||. .||. .||16||. .||2||23||41|
|Yorks, E. R.||432,759||112||30||85||10||22||87||346|
|Yorks, N. R.||419,546||42||13||103||. .||7||130||295|
|Yorks, W. R.||3,045,377||1,160||294||1,235||52||166||313||3,220|
|Brecknock||59,287||4||11||32||. .||. .||11||58|
|Merioneth||45,565||1||. .||34||1||. .||19||55|
|Overseas||. .||. .||. .||. .||. .||. .||20||20|
|Army Associations||. .||. .||. .||. .||. .||. .||936||936|
In addition the undermentioned Savings Associations were affiliated under special schemes:—
|School Post Office||587|
Sales and Repayments of National War Savings Certificates (Feb. 1916-Dec. 1920)
|Period||Certificates Sold (£1 units)||Cash Value||Repayments (including exchanges for War Loan, etc.)||Interest Included in Repayments|
|1916 Feb.-Dec.||54,430,604||42,183,718||287,448||. .|
|1917 6 months ended June||56,381,849||43,695,933||1,294,750||492|
|1917 6 months ended Dec||30,083,722||23,314,884||1,840,983||10,972|
|1918 6 months ended June||74,210,407||57,513,066||2,372,099||36,524|
|1918 6 months ended Dec||65,594,472||50,835,716||3,914,892||85,216|
|1919 6 months ended June||53,173,874||41,209,752||7,926,293||272,769|
|1919 6 months ended Dec||48,778,963||37,803,697||11,938,325||597,968|
|1920 6 months ended June||32,741,850||25,374,933||17,096,541||1,202,495|
|1920 6 months ended Dec||25,045,649||19,410,378||14,733,338||1,316,381|
|Totals Feb. 1916-Dec. 1920||440,441,390||£341,342,077||£61,404,669||£3,522,817|
Contributions of the British Small Investor, 1914-9
(Decreases are printed in italics)
|Year||Post Office Issues||War Loans, 4½%* and 5%†||Exchequer Bonds, 5 and 6%||5% National War Bonds§||War Savings Certificates||. .||Total|
|Total for five months 1914||1,152,000||. .||. .||. .||. .||. .||1,152,000|
|Total for year 1915||6,456,000||39,961,000*||. .||. .||. .||. .||33,505,000|
|Total for year 1916||11,938,000||138,000||43,900,000||. .||42,371,000||290,000||97,781,000|
|Total for year 1917||5,683,000||36,606,000†||4,092,000||10,856,000§||66,824,000||3,133,000||120,928,000|
|Total for year 1918||38,813,000||. .||. .||38,700,000§||108,349,000||6,287,000||179,575,000|
|Total for year 1919||43,541,000¶||. .||. .||13,700,000§
|Aug. 1914 to Dec. 1919||94,671,000||76,429,000||47,992,000||80,556,000||296,557,000||29,574,000||566,631,000|
¶ The deposits included £55,109,506 on account of war gratuities to soldiers and sailors.
N.B. — During the year ending Dec. 31 1920, 57,787,499 certificates of a cash value of £44,785,311 were sold, and repayments, including exchange for War Loans, etc. (excluding interest), amounted to £31,829,879.
Immediately after the Armistice steps were taken to consolidate the position of the organization and to render permanent the machinery which had been set up during the previous three years. The county committees were disbanded, their work having been delegated to local committees which they had formed in practically every local area in the country. Steps were taken to devise a complete representative system throughout the organization. Adopting the association, or savings club, as the fundamental unit of the movement, steps were taken to ensure representation of the associations on the local committees. The local committees in their turn elected representatives on a new body called “The National Savings Assembly,” which was to meet twice a year to discuss questions relative to the movement and at one of these meetings to elect representatives on the National Savings Committee, which, by the authority of the Government, dropped the word “war” out of its title. At the same time the personnel of the National Committee was considerably strengthened. In 1921 it formed a powerful body composed of representatives of Government departments and corporations and interests connected with thrift, together with representatives of the savings organizations in London and the provinces elected on a wide franchise, so that its continued influence could not fail to be beneficial to the community.
Savings and Local Government Finance. — In the summer of 1920 a step was taken which might well have far-reaching effects on the relations between local and Imperial finance.
The Finance Act 1920, Section 59, provided that 50% of the proceeds of the sales of savings certificates could be invested through the National Debt commissioners in local loans stock or bonds on the security of the local loans fund. Half the proceeds of the gross sales after Oct. 1 1920, in the area of each local authority, would be available, if required, for loans to meet authorized expenditure in connexion with the assisted housing scheme of that authority. These loans were to be made, irrespective of the ratable value of the local authority, by the Public Works Loan commissioners, on the terms in force for the time being for ordinary loans to local authorities from the local loans fund for subsidized housing schemes. In the first instance, such loans would be restricted to housing purposes, but it was hoped that, when the existing difficulties with regard to housing finance had been overcome, the scheme would be given a more general application and that the system would become a permanent feature of local finance, bringing to the aid of local authorities a new source of capital which many of them had long been seeking. The authorities derive the greater part of the benefit under the scheme, since, although they receive only half the proceeds of the certificates sold, they are not responsible for finding any of the money required to meet withdrawals.
A critic of the ordinary savings bank in the last century said: “The savings bank is after all only a slot in the wall, with a sure grasp, but no tongue to advise it. Having no fructifying use for the money that comes to it from productive employment it closes over it like a grave and effectually sterilizes it”; and Sir E. Brabrook, Chief Registrar of Friendly Societies in 1897, said he “could look upon ordinary savings banks merely as infantile efforts in thrift.” He regarded “a person who deposited his money in a savings bank so that it should be kept safe for him by someone else as very much less worthy of encouragement than a person who used his savings in some way in coöperation with other people for his own benefit or the benefit of others.” He “did not look upon the progress of the savings bank with unalloyed satisfaction, but only as one step to self-help.”
The system of linking up National Savings certificates with local finance becomes, in effect, a national credit bank spread over the whole country. The credits of the small investor, even the half-pennies and pennies saved by the school-children, are rendered, through the machinery of the savings certificates, the Post Office, the National Debt commissioners, the Treasury, the local loans fund and the local authorities, available for investment in social and beneficial enterprise for the good of the people themselves. Owing to the widespread area from which the money is raised, short-term borrowing can be used for long-term loans with the minimum of risk, while saving is stimulated amongst the very class to whom in the past it has been most difficult to teach economy and saving. The linking-up of “saving” with the definite use of the money saved continues effectually the teaching of the war and inculcates the lessons of economy, and goes far to meet Sir E. Brabrook's criticism of the savings bank. The system is certain to stimulate the interest of the small investors in local finance generally. Not only will this be a source of financial strength to the local authorities, but educationally it will be a great advantage, and the active coöperation of the local authorities and the savings committee should do much to stimulate habits of thrift and saving.
The American savings movement is dealt with later. As regards other countries in the war it may be noted that the British National Committee had its organization in the East for the sale of War Savings certificates, the China and Japan War Savings Association having nine centres in China and three in Japan. The Japanese Government itself during the war sent its representatives to inquire into the methods of the National Savings Committee, and established its own system of National Savings certificates with terms of three, five and ten years.
In Canada, war savings and thrift stamps were issued by the Canadian Government.
The Government of S. Africa after the Armistice placed “Union Loan Certificates” on sale at every post-office where savings bank or money order business is transacted. The S. African scheme closely resembled the British savings scheme. Cards were issued with spaces for 15 one-shilling stamps. The cards were issued at an initial price of sixpence. When the card was completed, it could be exchanged for a 15s. 6d. certificate worth £1 in five years. The maximum purchasing limit is £387 10s. 0d. for 500 certificates. The S. African Government also adopted the scheme of associations in savings clubs on the British model.
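The certificate arithmetic of the S. African scheme can be checked in pre-decimal currency (12 pence to the shilling, 20 shillings to the pound). The following is a brief sketch: the prices and the 500-certificate limit are taken from the figures above, while the implied annual yield is an inference, not a rate quoted in the text.

```python
# Pre-decimal currency check of the Union Loan Certificate scheme:
# 12 pence (d.) per shilling (s.), 20 s. per pound (£).
PENCE_PER_SHILLING = 12
SHILLINGS_PER_POUND = 20
PENCE_PER_POUND = PENCE_PER_SHILLING * SHILLINGS_PER_POUND  # 240

# A completed card: 15 one-shilling stamps plus the sixpenny card itself.
price_pence = 15 * PENCE_PER_SHILLING + 6        # 186d. = 15s. 6d.
maturity_pence = 1 * PENCE_PER_POUND             # £1 = 240d. after five years

# Implied compound annual yield over the five-year term (an inference).
annual_yield = (maturity_pence / price_pence) ** (1 / 5) - 1

# The stated purchasing limit: 500 certificates at 15s. 6d. each.
limit_pence = 500 * price_pence
pounds, rem = divmod(limit_pence, PENCE_PER_POUND)
shillings, pence = divmod(rem, PENCE_PER_SHILLING)

print(f"price per certificate: {price_pence}d.")
print(f"implied yield: {annual_yield:.2%} per annum")
print(f"limit for 500 certificates: £{pounds} {shillings}s. {pence}d.")
```

The computed limit reproduces the £387 10s. 0d. stated above, and the implied return works out to a little over 5% per annum compound.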
Statistics. — In the preceding tables statistics are given of the results of the work done under the National Savings Committee.
Upon the declaration of war by the United States in April 1917 it became evident that the nation must practise strict economy if the huge war-time expenditures were to be successfully financed and material aid given to the Allies. The resulting movement for economy among the American people was vigorously taken up, not merely in money but in consumption (which itself means money). As a first step toward conservation, President Wilson on May 19 1917 outlined a food control programme and appointed Herbert Hoover Food Administrator, and Congress passed the law commonly known as the Lever Act, effective Aug. 10 1917 — “an Act to provide further for the national security and defence, by encouraging the production and conservation of supply and controlling the distribution of food products and fuel.” The administration of the Act was under the direction of a U.S. Food Administrator and a U.S. Fuel Administrator. The Food Administration summed up its purpose in the motto: “Food will win the war.” The following specific ends were sought: (1) to save food and eliminate waste; (2) to distribute food equitably and cheaply; (3) to stimulate production; (4) to prevent hoarding; (5) to save transportation; (6) to provide for the needs of the U.S. army and navy; (7) to secure the largest possible amount of food for the Allies.
The most vital early need both for America and for the Allies was the conservation of sugar and wheat. The shipping shortage was so acute that it was impossible to procure the large surplus of raw sugar in Java, amounting to nearly 1,000,000 tons. Exports of sugar from the United States for the year 1917 were more than 17 times the average for the three years preceding the war. In Aug. 1917 the cost of spot sugar reached $9.15 per cwt. seaboard basis, and the demand was still unfilled. During this month an International Sugar Committee was appointed. Under the operation of this committee the price of Cuban raw sugar declined to $6.90 by Sept. 14, which was the fixed maximum for the season's crop. The prices to the consumer were maintained at from 8½ cents to 10 cents per lb., varying with the location. As the difference of one cent per lb. added to the price of sugar meant an added burden on American homes of $72,000,000, the importance of the sugar regulations is evident. As the needs of the United States and of the Allies became more acute, the Licence System governing dealers in food supplies was put into effect and various regulations adopted which governed the producer and consumer alike. In order to control the sugar situation it was announced on May 2 1918 that on and after May 15 sugar should not be sold for manufacturing purposes either by refiners, wholesalers or retailers, except upon the presentation and cancellation of certificates issued by a State Federal Food Administrator, showing the quantity of sugar sold. Retailers were restricted from selling sugar to consumers in quantities greater than 2 lb. for city residents and 5 lb. for those residing in the country, except for home canning, in which cases the dealer was required to secure certificates for the amount sold. By the operation of this system and the voluntary restriction of household consumption, a saving of between 400,000 and 600,000 tons was effected in 1918.
The most serious crisis faced by the Food Administration during its operations was the wheat shortage of the season 1917-8. In the United States the crop, following the exceedingly short harvest of the previous year, was only sufficient to meet normal demands for home consumption. France and England, which together normally produce about one-half the wheat they consume, both suffered very great crop losses, and their total production was considerably less than one-third their normal consumption. In Jan. 1918 an official communication was received from Great Britain stating that, unless America could send the Allies at least 75,000,000 bus. of wheat over and above what they had exported up to Jan. 1, there was grave fear that the war would be lost because of the lack of food. The United States Food Administration replied to this advice: “We will export every grain that the American people save from their normal consumption. We believe our people will not fail to meet the emergency.” All manufacturers in the United States using wheat flour in the production of various foods were placed under licence, and either strictly limited in their use of wheat to a definite percentage of their normal requirements or were denied the use of wheat entirely. Wheatless days and other measures for wheat conservation were established. Mills were permitted to grind only a certain percentage of the amount of wheat milled during a corresponding period the previous year. Wholesale dealers were prohibited from purchasing wheat flour in excess of 70% of the amount they had purchased during a corresponding period of the previous year. In sales to consumers the retailers were required to sell an equal quantity of substitutes to the purchaser at the time wheat flour was sold. The pledge-card campaign was started in Oct. 1917, and between 13,000,000 and 14,000,000 women registered in support of food conservation by substitution. Between Oct. 1 1917 and Aug. 1 1918 hotels, restaurants, dining cars and clubs of the country effected a saving of more than 50,000,000 lb. of flour and wheat products. Flour-mills were required to raise their percentage of extraction to 74% and to eliminate altogether the sale of patent flours. This resulted in a saving of 13,504,300 bus. of wheat. Bakers were required to use a certain percentage of substitute flour in all breads, and this resulted in the saving of 16,830,000 bus. of wheat. These various measures made it possible for the United States to send abroad in 1918 approximately 140,000,000 bus. of wheat.
The importance of fats and oils in the diet of a people caused the Food Administration to lay stress on the conservation of meat products. Export of fats to neutrals was greatly restricted and the amount of fats used in bakery products limited. In 1918 1,125,397 short tons of hog products were exported as against 839,000 in the fiscal year ending June 30 1899, the largest in any previous year. In March 1918 exports averaged 10,000,000 lb. a day. Normally the United States exports yearly a little over 10% of its total pork production. In 1918, under the pressure of war needs, nearly 20% of a much larger production was exported. In 1918 773,000,000 lb. of beef were exported, or over three and a half times the exports on the average of the three war years. These supplies were made available by the conservation of meats formerly wasted, by volunteer rationing and by the adoption in many localities of meatless days and meatless meals.
As the demand on transportation facilities became increasingly heavy, it was vital to keep the routes by which food passed from the producer to the consumer as active as possible. The tremendous increase in the exportation of food and munitions, coupled with the shortage of ocean tonnage, congested eastern terminals. To remedy this condition, a regulation was promulgated providing an average increase in the minimum car-loads of about 50% over those of the published tariffs of the carriers. Thus the number of cars required for the distribution of the commodities on the list of non-perishable groceries was reduced fully 25%. Much material formerly wasted was salvaged by the Waste Reclamation Service, organized originally under the War Industries Board and later transferred to the Department of Commerce. One million five hundred thousand tons of book and writing material were made in 1918 from old paper. The total value of all waste material reclaimed during 1918 was approximately $1,500,000,000. In monthly reports as to garbage utilization during 1918 it was shown that the redemption plants reclaimed more than 50,000,000 lb. of garbage grease and 160,000 tons of fertilizer tankage from garbage.
Several conservation projects were developed in conjunction with food conservation. The National Emergency Food Garden Corporation put 1,500,000 ac. of city and town land under cultivation in 3,000,000 gardens, resulting in an increase of the food supply to the value of over $350,000,000 in one year. The School Garden Army, 6,000,000 strong, raised and preserved fruits and vegetables and also aided in the utilization of waste products. Community canning kitchens were widely conducted. The Women's Land Army had during the summer of 1918 units in 20 different states, showing an enrolment of 10,000 in camps and 5,000 in emergency units. They were engaged in fruit packing, dairy work, truck gardening and general farming. Cash-and-carry plans were encouraged and the limitation of deliveries to one a day to any family or on any one route was recommended.
The U.S. Fuel Administration began its work in Aug. 1917, with Dr. Harry A. Garfield as director. The Administration set out to accomplish: (1) increased production; (2) better distribution; (3) fair sale prices; (4) the elimination of waste. Small production was largely due to strikes. The Fuel Administration succeeded in getting employers and employees into agreement and eliminated much of this difficulty. In April 1918 a nation-wide plan designed to insure equitable distribution of coal was put into effect. An essential feature was the zoning system, by which more than 5,000,000 tons formerly shipped from eastern mines to western territory adjacent to western mines was saved for the eastern states where the demand of war industries was greatest. All the price-fixing was done by territory. Inspectors visited each one of the 250,000 industrial plants in the United States using large amounts of coal and worked out with the management systems of conservation. In one week 50,000 tons of coal were thus saved in Pittsburgh alone. Rationing was put into effect, the supply of coal to non-essential industries being greatly reduced. It was estimated that this saved over 1,000,000 tons. All industries were held to their minimum needs. Stores and office buildings were encouraged to take their electric current from central plants. The “skip-stop” system on electric street railways by which no stops were made at unimportant crossings resulted in a great saving. Economy was also effected by lightless nights, which affected window lighting, electric display and street illumination. Home instruction was given in the operation of heating systems and in the use of electricity. For several weeks heatless Mondays were observed in stores, office buildings and places of amusement. A saving of 12,700,000 tons of coal for the first half of the coal year was thus effected.
On March 19 1918 the President approved the legislation entitled “An Act to save daylight, and to provide standard time for the United States.” The purpose of this legislation was to conserve daylight and the Act is commonly known as the “Daylight-Saving Law.” It provided for setting the clocks of the nation ahead one hour at two o'clock on the morning of the last Sunday in March of each year and for retarding them by one hour at the same time on the last Sunday in Oct. of each year. By the same piece of legislation the United States was divided into five standard zones. After the repeal of this Act in Aug. 1919, several of the states enacted daylight-saving laws. The operation of the daylight-saving plan caused the saving in seven months of approximately 1,250,000 tons of coal.
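The Act's clock-change rule ("the last Sunday in March ... the last Sunday in Oct.") is a simple calendar computation. A minimal sketch, with a helper name of my own choosing:

```python
import datetime

def last_sunday(year: int, month: int) -> datetime.date:
    """Return the date of the last Sunday of the given month.

    datetime.weekday() numbers Monday as 0 and Sunday as 6.
    """
    if month == 12:
        last_day = datetime.date(year, 12, 31)
    else:
        # Day before the first of the following month.
        last_day = datetime.date(year, month + 1, 1) - datetime.timedelta(days=1)
    # Step back from the month's final day to the nearest Sunday.
    return last_day - datetime.timedelta(days=(last_day.weekday() - 6) % 7)

# Under the 1918 Act, clocks moved forward on the last Sunday in March
# and back on the last Sunday in October.
print(last_sunday(1918, 3))   # 1918-03-31
print(last_sunday(1918, 10))  # 1918-10-27
```

For 1918, the first year of the law's operation, the rule yields March 31 and Oct. 27.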
Gasoline-less Sundays were inaugurated in Aug. 1918. A cessation of Sunday motoring from 75% to 99% was effected. This resulted in an estimated saving of 1,000,000 bar. of gasoline, from which it is known that 500,000, or 10 shiploads, were sent overseas. The order governing the use of gasoline was withdrawn on Oct. 20 1918.
Under the provisions of “An Act to authorize the President to increase the military establishment of the United States,” approved May 18 1917, and later amended, the President was authorized to raise and maintain military forces by selective draft “under such regulations as the President may prescribe not inconsistent with the terms of this Act.” Under this law certain exemptions were made removing the liability to military service from those whose industrial occupations were deemed essential to the proper prosecution of the war. Along similar lines several of the states passed like enactments, commonly termed “Work or Fight laws,” by which those who had been exempted from military service were forced to accept employment in essential industries or else join the military or naval service and thus conserve the man-power of the nation. Non-essential occupations were listed and because of the simultaneous enactment of a drastic law against loafing in the state of New York the New York City Federal Employment Service was overrun with applications. Over 6,000 were registered July 1, and the next day after the order had been given publicity one bureau registered over 10,000. The majority were from the non-essential occupations, together with a small percentage of the idle or vagrant classes.
The Conservation Division of the War Industries Board was established May 9 1918. Its purpose was to eliminate wasteful or unessential uses of labour, material, equipment and capital. Its specific aim was: (1) to secure the maximum reduction in the number of styles, varieties, sizes, colours or finish of products of the various industries; (2) to eliminate accessories which used material for adornment or convenience, but which were not essential; (3) to substitute materials which were plentiful for those which were scarce; (4) standardization; (5) reduction of waste; (6) economy in samples; (7) economy in containers and packing. The length and swing of men's sack coats and overcoats and the width of facing were limited, the size of samples reduced and each manufacturer restricted to not more than 10 models of sack suits for the season. This resulted in a saving of from 12 to 15% of material. A saving of 33% of wool used in the knitting of sweaters was effected by the reduction in styles and colours. For example, only one shade of green was used where formerly there were many. Manufacturers of shoes were restricted to white, black and tan; wasteful features were eliminated and height limited. As a result one tanner reduced his line from 81 colours and shades to 3, and manufacturers in general reduced their line by about two-thirds. A schedule issued Sept. 13 1918 to manufacturers of rubber footwear provided for the elimination of 5,500 styles, with an estimated annual saving of 29,012,600 cartons, 5,245,300 sq. ft. of shipping and storage space, 2,250,272 lb. of material to be dyed, 74,750 lb. of starch, 30,380 gal. of varnish, 125,300 lb. of tissue paper and 49,617 days of labour.
In addition to the efforts of the War Industries Board there were numerous appeals by Government officials and patriotic organizations to conserve clothing and shoes. As a result a very great proportion of the people wore garments which in normal times would have been discarded. Patching and remaking of clothing became popular practices. Although it is impossible to estimate the saving effected, it is undoubtedly true that many millions of dollars, which would ordinarily have gone for the purchase of wearing apparel, were used to purchase Liberty Bonds and to aid various war philanthropies.
The Pulp and Paper Section of the War Industries Board was organized June 6 1918 to restrict the use of paper and its products and thus to save fuel, transportation and labour. On July 5 1918 the following preliminary economies were requested of all newspapers publishing daily and weekly editions: that they (1) discontinue acceptance of the return of unsold copies; (2) discontinue the use of all samples and complimentary copies; (3) discontinue giving copies to anybody except for office working copies or where required by statute law in the case of office advertising; (4) discontinue giving free copies to advertisers except not more than one copy each for checking purposes; (5) discontinue arbitrary forcing of copies on news-dealers; (6) discontinue the buying back of papers at either wholesale or retail; (7) discontinue payment of salaries or commissions to agents, dealers or newsboys for the purpose of securing equivalent of return privileges; (8) discontinue all free exchanges. On Sept. 20 the following additional regulations went out: no publisher shall sell his paper at retail less than his published prices; no publisher shall use premium contests or similar means to stimulate his circulation; no publisher shall issue holiday, industrial or Sunday special numbers. These regulations brought about a saving in paper during Sept. of 10.4% of the average monthly tonnage during the six months preceding and in Oct. of 5%. Production was 104,209 tons in Sept. 1918 and 110,498 tons in Oct. All regulations relative to paper were withdrawn on Dec. 15 1918.
The universal response by the people of the United States to the request that they lend money to the Government to provide necessary funds for the prosecution of the war was one of the most significant things of the war period. Millions of people purchased Liberty Bonds and Victory Notes in various denominations from $50 to $10,000 (see Liberty Loan Publicity Campaigns), and other millions invested in the smaller War Savings securities. Early in the war President Wilson made the statement: “I doubt that many good by-products can come out of a war, but if our people learn from this war to save, then the war is worth all it has cost us in money and material.” This statement, together with the desirability of having the entire nation participate in financing the war, suggested the underlying purpose behind the war savings movement, which was put into operation in Dec. 1917. Section 6 of the Second Liberty Bond Act, approved Sept. 24 1917, authorized the Secretary of the Treasury “to borrow from time to time on the credit of the United States for the purpose of this Act and to meet public expenditures authorized by law, such sums as in his judgment may be necessary and to issue therefor at such price or prices and upon such terms and conditions as he may determine War Savings Certificates of the United States on which interest to maturity may be discounted in advance at such rate or rates and computed in such manner as he may prescribe.” The Act further provided that “each War Savings Certificate so issued shall be payable at such time, not exceeding five years from the date of its issue, and may be redeemable before maturity, upon such terms and conditions as the Secretary of the Treasury may prescribe.” A limitation of $2,000,000,000 was placed by the Act upon the amount of War Savings Certificates which might be outstanding at any one time; it also provided that no person should be sold at any one time certificates amounting to more than $100, and it also placed a $1,000 limitation upon the amount of certificates which might be held by any one person. The original Act was amended by the Act approved Sept. 24 1918, which increased the amount of certificates which might be issued from $2,000,000,000 to $4,000,000,000, removed the $100 limitation on the amount of certificates which might be sold to any one person at any one time, and also altered the previous Act by allowing persons to hold an amount not to exceed $1,000 worth of any series of certificates.
Pursuant to the authorization contained in the original Act, the Secretary of the Treasury appointed a committee of five, with Frank A. Vanderlip as chairman, to confer with him as to the form of security and the terms on which it should be issued. Following the recommendation of this committee, the Secretary of the Treasury offered for sale on Dec. 3 1917 an issue of War Savings Certificate Stamps, Series of 1918. Each certificate stamp when affixed to a War Savings Certificate (a folder with spaces for 20 stamps) would have a fixed maturity value of $5, with the date of maturity not to exceed five years, the purchase price to vary one cent each month throughout the year of issue, beginning in Jan. at $4.12, increasing to $4.23 in December. The stamps might be redeemed before maturity, their redemption value increasing one cent each month. There were also provided 25-cent Thrift Stamps, bearing no interest and not redeemable for cash, but to be accumulated on a Thrift Card until there were 16, when they could be exchanged for a War Savings Certificate Stamp by paying the additional odd cents necessary to cover the current price of the War Savings Certificate Stamp. Succeeding issues of War Savings Certificate Stamps were on Jan. 1 1919, Jan. 1 1920 and Jan. 1 1921.
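The stamp pricing described above is simple arithmetic, and can be sketched in a few lines of Python. This is an illustrative model only: the function names are my own, and the source states only the $4.12 January price, the one-cent monthly rise (reaching $4.23 in December), the $5 maturity value, and the 16 × 25-cent Thrift Card exchange with "odd cents" added.

```python
# Illustrative sketch of 1918 War Savings Certificate Stamp pricing.
# All names are hypothetical; figures come from the text above.

MATURITY_VALUE = 5.00    # fixed value of each stamp at maturity
JAN_1918_PRICE = 4.12    # purchase price in January 1918
THRIFT_STAMP = 0.25      # non-interest-bearing Thrift Stamp
THRIFT_CARD_COUNT = 16   # Thrift Stamps needed to fill a Thrift Card

def stamp_price(month: int) -> float:
    """Purchase price for month 1..12 of 1918; rises one cent per month."""
    return round(JAN_1918_PRICE + 0.01 * (month - 1), 2)

def odd_cents_to_exchange(month: int) -> float:
    """Extra cash needed to trade a full Thrift Card (16 x 25c = $4.00)
    for one War Savings Certificate Stamp at the current month's price."""
    return round(stamp_price(month) - THRIFT_CARD_COUNT * THRIFT_STAMP, 2)

print(stamp_price(1))              # January price: 4.12
print(stamp_price(12))             # December price: 4.23
print(odd_cents_to_exchange(1))    # odd cents in January: 0.12
```

A $4.12 purchase growing to $5.00 over five years corresponds to roughly 4% annual interest, which is why the purchase price drifts upward a cent a month as maturity approaches.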
In addition to the original securities there were offered in July 1919 Treasury Savings Certificates, one of $100 and the other $1,000 maturity value. Treasury Savings Certificates were registered at the Treasury Department at the time of purchase and increased in redemption value monthly on the same interest basis as War Savings Certificate Stamps. In Jan. 1921 there were offered for sale $1 non-interest-bearing Treasury Savings Stamps and $25 Treasury Savings Certificates, in addition to the other Treasury Savings Securities.
Following the working out of the types of securities in 1917, an organization for their sale was effected. In addition to the National War Savings Committee, consisting of the chairman and four members, the Secretary of the Treasury appointed six Federal directors, each having general supervision over approximately two Federal Reserve Districts; and 52 state directors, each of whom had complete charge of War Savings activities in his state or part thereof. The National War Savings Committee and the six Federal directors functioned at the National War Savings Committee headquarters in Washington. It was the duty of this sales organization to obtain coöperation from the heads of all enterprises operating nationally and then to decentralize the work through the Federal directors to the respective state directors coming under their jurisdiction, the ultimate goal being to offer every man, woman and child in the United States the privilege of aiding the Government by investing in Government securities, and at the same time to develop habits of thrift. The War Savings securities were put on sale at every post-office, at banks and in thousands of voluntary agencies. House-to-house canvass for their sale was made by postmen, boy scouts, representatives of insurance companies and members of women's organizations. In the autumn of 1918 the Treasury Department created a Savings Division of the War Loan Organization, which took over the work previously carried on by the National War Savings Committee, so that the people of the country might be taught for their peace-time value the lessons of thrift and saving learned during the war. The specific ends sought were: (1) to develop and protect all war issues of Government securities; (2) to sell Treasury Savings securities; (3) to make permanent the habits of regular saving and investment in U.S. Government securities.
The Savings Division was placed in charge of a Director of Savings, with an organization in Washington, and one in each of the 12 Federal Reserve Districts.
School Government Savings systems were established. Instruction in thrift, saving and the principles of sound finance was introduced in schools throughout the nation. At the annual convention of the National Education Association in July 1920 a committee of state superintendents was appointed to work out with the Savings Division the best plans for placing the savings movement permanently in the American school system. The American Federation of Labor and various labour bodies passed resolutions commending the work of the Savings Division and calling on the Government to make permanent the policy of issuing small securities. Many local labour organizations invested their reserve funds in Government securities. In industrial plants throughout the country Government Savings Associations were established and the employees put aside small amounts regularly each week in Government Savings securities. Women's organizations of the country during the years 1919 and 1920 created the office of thrift chairman in their boards of officials. They took up the study of finance at club meetings, promoted the use of the household budget and with the savings thus effected purchased Government securities.
The total sale of War Savings securities from Dec. 3 1917 to Jan. 1 1921 amounted in round figures to $1,176,111,000. The total redemption of War Savings securities for the same period amounted to $415,174,000. (W. M. Le.)
- The author was Mr. R. H. Brand, a partner in the London firm of Lazards, and a well-known writer on finance. (H. Ch.)
Scottish History Timeline: 1st to 9th Centuries
- 30,000 BC
- Homo sapiens remains discovered by archeologists are dated to this period. Homo habilis dates to roughly 2 million years ago.
- 12,000 B.C. - 8000 B.C.
- Ice Age: global glacial melting. A meteor shower devastates the planet, causing massive tidal waves. Legend says that when the sea level lowers, migrations occur from Gran Dolina, Burgos, Spain [pink quartz ax / 27 people at Sima de los Huesos: Bone Pit] through the Strait of Bab al Mandeb: Gate of Tears at Eritrea: the Red Sea.
- 7000 BC - 5000 B.C.
- Second glacial melting. Ice Age ends. Archeologists discover wood ash below layers of peat suggesting earliest settlers burned clearings. Tumulus of Kercado: Red Goddess Tomb at Carnac: Red Place in Brittany France built 5,700 BC.
- 4000 BC - 3000 BC
- Sea-borne immigrants arrive from Europe with cattle, grain, sheep, pigs, barley, and wheat. Neolithic passage graves, gallery graves, and community graves found. Bronze Age, 3,400 B.C.: trading in ores, gold and Baltic amber funerary breastplates, daggers. 3,201 BC: Sumerians record, from Feb 17th in the month of Hilu to March 30th in the month of Eshil, the Great Flood, the Flood of Noah in the Tanakh. Meteor shower shortly before 3000 B.C. results in tidal waves along the coasts of Europe & North America.
International: Serre Paradis city of the Arecomici: Fertile Ones Celts at Nîmes. The menhir of Courbessac called La Poudriere stands in a field, near the airstrip, a limestone monolith of over 2 metres in height. Uruk period of Sumerian civilisation: Wheat is their grain. Cities of: Ur [sacked 2004 BC], Eridu, Nippur, Kish & Uruk [sacked 2000 BC]. Upper and lower Egypt unified in 3,200 B.C. Caral City, Perú
- 3000 BC - 2000 BC
- Skara Brae site: Archeologists find compact houses, sheep and cattle bones, necklaces. The people are called the Beaker Folk because of their handleless pottery. Megalithic stone circles, henges, cairns, individual burial stone cists, and burial pits found. Bronze Age work from Greece called Minoan by Sir Arthur Evans after the archeological site's founder, Minos Kalokairinos (1877), who is named after King Minos. Their city, Akrotiri, under volcanic ash, is on the island of Thera. Ptolemy VI & Cleopatra I's Temple of Kom Ombo in Upper Egypt, 180 BC, calls the people Keftiu. Hieroglyphics have also been transcribed as Caphtor: Grain People. The untranslated Cretan script is called Linear A; the Bronze Age Mycenaean script is Linear B. Bronze Age Phrygia [Turkey], home of King Midas: texts untranslated.
International: 2700 BC: Emperor Huangdi & Empress Xilingshi find white silkworm moths in the mulberry trees. The cocoons, dropped in water, unravel to silk. 2600 BC: Maya civilization at Cuello, Belize, cultivating chocolate, chilis, vanilla, papayas, & pineapples.
- 2000 BC - 730 BC
- Celts, who are Indo-Europeans descended from the Kurgan: Queen's Tomb civilisation of Russia, arrive in Europe. They settle in France and move into the Iberian peninsula. In Hallstatt, Oberösterreich: Upper Austria, 3,000+ graves are discovered in a salt-mine next to the Hallstättersee: Lake Hallstatt. Archeologists under Ramsauer in 1846 discover gold torcs, armlets, brooches, jewelry, weapons, drinking vessels, and mirrors. The word Kurgan was coined by UCLA Archaeology Professor Marija Gimbutas (Vilnius, Lithuania, Jan 23, 1921 – Los Angeles, Feb 2, 1994). 730 BC: Settlers from Chalcis, Ionic Greece, arrive in Sicily & name it Zankle: Sickle, from the shape of the harbor.
International: Annau, Turkmenistan establishes the Silk Road. Olmec Empire: Anahuac. Gerion of Cush invades Spain in 1883 B.C., mines gold and is slain by the Egyptians. His sons flee to Ireland, the Scottish Hebrides, and the Toltec Empire. 15th cen BC: The Hurrian area ranging from the Iranian mountains to Syria becomes the Mitanni state. 18th Dynasty [1567 to 1320 B.C.; King Tut]. Queen Hatshepsut (pronounced hat SHEHP soot) a.k.a Ma Kara, 1503-1483 BC, of the 18th Dynasty at Karnak. Jezebel of Tyre establishes worship of Phoenician deities. Elijah kills 450 priests of Baal and flees to the Kingdom of Judah. 14th cen: The Hittite Empire under Suppiluliumas I defeats Mitanni & reduces its king, Mattiwaza, to vassalage. Assyria is independent. 1159 BC: Iceland's Hekla III volcano erupts, expelling 12 cubic kilometres of rock into the atmosphere and causing large-scale failures of the crop harvest in Egypt. The presence of significant quantities of volcanic soot in the air prevents sunlight from reaching the ground and also arrests global tree growth for almost two full decades, until 1140 BC. Royal tomb-builders of Set Maat her imenty Waset a.k.a Deir el Medina set up a labor strike when their food provisions are reduced. Pharaoh Rameses III of the Twentieth Dynasty & Queen Iset defeat the Aegean Sea Peoples in two battles. The surviving sea people set up Philistia. Papyrus Harris I a.k.a Papyrus British Museum 9999, purchased by collector Anthony Charles Harris (1790–1869), chronicles the Pharaoh's building of Luxor, Karnak, and the funerary temple and administrative complex at Medinet-Habu; the plot on his life over the successor to the throne (Ramses IV with Queen Iset-Isis, or Pentaware with Queen Tey); and his vast donations of land, gold statues and monumental construction to Egypt's various temples at Piramesse, Heliopolis, Memphis, Athribis, Hermopolis, This, Abydos, Coptos, El Kab and other cities in Nubia and Syria.
Queen Athaliah of Judah kills all her male descendants except for Jehosheba and Joash & is overthrown by priests in 837 BC. Queen Artemisia: Halicarnassus, one of the Seven Wonders of the World. Nebuchadnezzar II rebuilds Babylon with hanging gardens. Lamanai, Belize: Submerged Mayan Crocodile city built by the New River lagoon with step pyramids.
- 730 BC - 381 BC:
- Start of Iron Age, La Tène culture. Celts build oppida, fortified cities that control trade routes, and move into the Italian peninsula and Alps. Tribes who move to the British Isles are: Britons: White Cow of Everlasting Milk; Gaels: Love; Cymri: White Grain Queen. Stone shrine & metal forge built at Cadbury Hill. Frankfurt limestone hill-graves are called Altkönig: Old King. Rome: Numa Pompilius (April 21, 753 BC - 674 BC) becomes the second king of Rome in 716 BC when founder Romulus a.k.a Janus Quirinus (771 BC-716 BC; born to a princess) goes up to the sky [Senatorial murder. Twins Romulus & Remus were ordered killed by their grand-uncle the King of Alba Longa, but on the banks of the Albula River a.k.a Tiber they are rescued by lupa: prostitute or she-wolf, a.k.a Larentina: goddess of death & mother of the Lares, Greek Mania: mother of the Manes. Mother Rhea Silvia a.k.a Ilia, a Vestal Virgin, is sent to drown in the water; the Tiber marries her and floods because of the injustice. Albula is Celtic for white with sediment. Anio: water from heaven: St. Anne flows into it. Janus's a.k.a Anis's wife was Jana a.k.a Diana. Greek legend says the Trojan hero Aeneas built the city after the destruction of Troy by the Greeks in the Trojan War. Some versions combine the two, making Romulus & Remus descendants of Aeneas]. Parallel Lives records Pompilius as curing the pestilence of Italy in 724 BC with the Ancilia shield. Rome is surrounded by seven wooded hills in central Italy. The Italian Peninsula juts into the Mare Nostrum: Our Sea: Mediterranean Sea. 600 BC: King Lucius Tarquinius Priscus orders the Cloaca Maxima: Great Sewer built by Etruscan engineers and forced labor from the Roman poor. Rome's sewage is carried into the River Tiber. 510 BC: Rome seizes power from the Etruscans. 430 BC: Plague pit with 1000 tombs discovered in Kerameikos, Athens by Efi Baziotopoulou-Valavani.
Thucydides describes the panic caused by the plague which struck Athens & killed one third of the population. He wrote that bodies were abandoned in temples & streets, to be subsequently collected & hastily buried. 400 BC: Socrates tried & executed. Pythagoreans executed, and under the Edict of Italy flee to Lucania & Thebes. Greek Ambassador Megasthenes visits Chandragupta Maurya's Indian empire and writes that the Andhras are a powerful nationality possessing thirty towns, 100,000 infantry, 2000 cavalry, 1000 elephants. 398 BC: Romans seize the Etruscan city of Veii. 383 BC: Emperor Gratian invades the French Dauphiné, seizes the mountain Oppidum Cularo, where the Drac: Dragon & Isère rivers merge, from the Allobroges & renames the area of Belladonna: Dark Sacred Tree – Gratianopolis / Grenoble. [Site Saint-Martin-de-Miséré 994 AD & Hercynian forest]. Gratian is murdered in Lyon by the Roman armies of Britain. Celts are challenged by Rome for control of the North Sea. Chief Brennus clashes with the Macedonian army of Alexander the Great and destroys them at the Allia river. Celts enter the Senate, and Senator Marcus Papirius savagely beats a Celtic warrior with a staff. The Celts burn the city of Rome to the ground in 387 BC, destroy the Oracle of Delphi, and ransom Rome for 1,000 pounds of gold.
- 381 BC - 298 BC:
- Beaten at Bollène: Romans fight the Allobroges: White Cow of Everlasting Milk People from Oppidum Cularo & the Arvenes: Seed People in Provence. Vadomarius, chieftain of the Alamannic tribe Brisgavi / Breisgauer, murdered in 368 by the Romans according to Ammianus Marcellinus. Today the southern region of the Black Forest is named Breisgau. Jan 2, 366 AD: Alamanni cross the frozen Rhine to battle the Romans. First Samnite War: 343-341 BC: In Samnium [Basilicata, Campania]. Samnites: Summer People establish a garrison in Naples. Rome agrees to a peace treaty. Second Samnite War: 326-304 BC: Romans violate the peace treaty, invading the Liri river valley. Roman consuls are captured & establish a five-year treaty. Romans return for revenge & win. 315 BC: Rome loses at Lautulae. 312 BC: Rome constructs the Via Appia military road. 311 BC: Etruscan cities join the Samnites against Rome. 306 BC: Via Valeria military road.
International: Alexander drives Pharaoh Cingris from Egypt into Ethiopia & founds Alexandria. Greek-Macedonian Arsinoe marries the King of Thrace. He is killed in battle and she escapes to Egypt, where her brother Ptolemy II reigns with his wife, also named Arsinoe. The first Arsinoe ousts the second Arsinoe (the wife) and marries her brother to become Arsinoe II. She encourages worship of her as a Goddess. Ptolemy II founds the Royal Library of Alexandria and the Musaeum: The Temple of the Muses (museum in English). 300 BC: Greek explorer Pytheas (pronounced PIHTH ee uhs) sails around the British Isles & enters the North Sea, mentioning a land called Thule (pronounced THOO lee), believed to be Norway. The Mayan city of Chicanná: House of Snake's Jaws is built in Campeche, México. The doorway to the city is the mouth of Itzamná, the creator god, in the form of the Earth Monster. The high priests enter the Maya underworld and emerge transfigured. Linked with Becán: Road of the Serpent and Xpujil: Place of the Cattails nearby.
- 298 BC - 133 BC:
- Third Samnite War: 298-290 BC: Romans invade northern Etruria & Umbria, battling the Samnites: Summer People, Lucanians: Raven People, Bruttians & Thurioi: River People, invading Croton, Lokroi, and Rhegium: Red King City. Decius Vibelius massacres the inhabitants of Rhegium. Rome founds colonies in Apulia & Lucania, the most important of which was Venuzia/Venusia: Venosa, Apulia [Apulia: Phoebus Apollo]. General Barbula is placed there to prevent Samnites & Lucanians from joining Pyrrhus' army. Romans defeat a Gallic army at Lake Vadimon in Etruria & annex the land of the Senones: ager Gallicus along the Adriatic. Battle of Heraclea: Pyrrhic War: 280 B.C.: Rome invades Heraclea, an Ancient Greek city in Lucania, S Italy, not far from the Gulf of Tarentum. King Pyrrhus of Epirus defeats Publius Valerius Laevinus at the river Siris with the combined forces of Greeks from Taranto: Bull Place [S. Italy: Philocharis Ainesias' navy], Thurii, Metapont, Heraclea & Epirus [Bulgaria: forces of Antigonus II Gonatas of Antioch, King of Syria, Egypt]. Bronze tablets giving Roman municipal laws found nearby. Diocletian seizes Antioch & proclaims Turkey Eastern Rome: Asia Minor. First Punic War: Sicilian War: 264-241 BC: Punic, a Roman word for Poeni: Phoenicia: Carthage. Volsinii, the last free Etruscan city, destroyed. Appius Claudius Caudex invades Messana [N Sicily], Lucius Caecilius Metellus invades Palermo, Caius Duilius invades Syracuse [S Sicily]. Sicilians sold as slaves. Marsala besieged. Carthaginians under Hamilcar Barca (b. 290 BC - d. 228 BC) hold out for 8 years. Hamilcar moves his operation base to Spain. Rome controls the peninsula from Sicily to the Apennine frontier. Battle of Telamon / Battle of Talamone: 225 BC: Rome exterminates the Senones: Ancient Ones of Sénonais, France. The Parisii Breuci: White Trout People settle in Par-Ys-Lutetia: Paris.
The Boii: Milk People's forces of 50,000 infantry & 20,000 cavalry are caught in the Po Valley between Lucius Aemilius Papus & Gaius Atilius Regulus. Regulus is beheaded at Via Aurelia. Papus' legions pass from Liguria to Emilia, devastating the country of the Boii; they reach Ariminium & from there the Via Flaminia & Rome. Celtic dead number 40,000, & 10,000 prisoners including King Concolitanus. Second Punic War: 218-201 BC: Hannibal's War: Battaglia della Trebbia: Battle of Trebbia: 218 BC: Gaules & Ligurians with Hannibal: Joy of Baal, Barca (b. 247 BC - d. 183 BC) & younger brother Mago ambush Titus Sempronius Longus at the Trebbia: Trinity River by concealing themselves among the streambeds. At dawn, Hannibal's Numidian light cavalry attack with elephants from the front while Mago comes from the rear. Romans drown. 217 BC: Hannibal wounds Gnaeus Scipio at the Ticinus River. Canne della Battaglia: Battle of Cannae: 216 BC: Hannibal routs the 50,000+ army of Lucius Aemilius Paullus & C. Terentius Varro on the Apulian plain in the vicinity of Cannae. Roman knights' gold rings are collected in baskets & later poured out onto the floor of the Carthaginian senate. Lucius Aemilius Paullus & 80 senators killed at the Numidian tribunal. Varro escapes. M. Junius Pera decrees the defeat as divine wrath. He orders live burial of two Vestal Virgins & the human sacrifice of a Gallic & Greek man & woman. Gnaeus Scipio's Marseille naval warships attack Spain. Indibilis of the Ilergetes, overlord of the tribes of northern & central Spain, fights back & loses. Hannibal's brother Hasdrubal escapes with his army from Spain to Italy but is defeated & killed at Metaurus in 207 BC.
International: 256 BC: Marcus Atilius Regulus invades Carthage & informs the Carthaginians they must give up Sicily, Sardinia & pay an annual tribute to Rome. Carthaginians hire the Spartan general Xanthippus to fight Regulus. The Roman army is trampled by elephants. The Roman navy is beached & drowns. Regulus is executed in Carthage. Rome withdraws from Carthage. Second Punic War: 202 BC: Rome seizes Libya & establishes Africa Proconsularis, which has its administrative centre at Carthage. 202 B.C. - 220 A.D.: Han Dynasty of China. Third Punic War: Destruction of Carthage: 149-146 BC: Scipio the Younger captures the city of Carthage, burns it to the ground & sells the survivors into slavery. In 133 & 123 B.C., two Roman tribunes try to help the poor. Tiberius Gracchus & his brother, Gaius Gracchus, promote a program to distribute state-owned land to the poor. The majority of the Senate oppose them, & both brothers are assassinated. 138-109 B.C.: Diplomat Zhang Qian (pronounced jahng chee ehn, & also spelled Chang Ch’ien) travels under Chinese Emperor Wudi (Wu-ti) to the Aral Sea, Uzbekistan, laying the foundation for the silk trade between China & the Roman Empire.
- 113 BC - 44 BC
- 113 BC: Cimbri: White Grain Goddess Celts destroy four Roman armies in Arausio, Gaule, where l'Arc d'Orange stands. Battle of Vercellae: Cimbri: White Goddess People, Helvetii: Sun People & Teutones: Tribe battle the Roman general Lucius Cornelius Sulla at Raurii. Ambiorix of the Senones escapes & disappears. 100 BC: Belgae of Gaule set up a Kingdom in South Britain. 73 BC: Spartacus, the slave leader, begins his revolt at Capua against Gnaeus Magnus Pompey. Roman general Pompey conquered eastern Asia Minor, Syria, & Judea. He returned to Rome a popular hero, but the Senate refused to recognize his victories. As a result, Pompey & two other Roman leaders - Julius Caesar & Marcus Licinius Crassus - formed a three-man political alliance called the First Triumvirate in 60 B.C. Crassus died in warfare in 53 B.C. Other Roman leaders then tried to split the two surviving members of the Triumvirate. 62 BC: Allobroges under Chief Catugnat seize Valence & attack Roman General Lentinus in Provence. Oppidum Solonium invaded. De Bello Gallico: Gallic Wars: 58 BC - 51 BC: Julius Caesar attacks the Helvetii: Sun People under Orgetorix: Golden Boar King, who are migrating to France. Caesar's 8 legions kill 60% of the 470,000 population & the rest escape into Helvetia, the mountains of Switzerland. [Helvetia is the female personification of Switzerland. She has braided hair, wreath, flowing gown, spear & a shield with the Swiss flag]. He sends Caius Volusenus in a warship to attack the Morini: Great Sea People and to survey the coastline for an invasion of Britain. 55 - 44 BC: Julius Caesar invades Britain for its wealth of tin and pearls after enslaving and plundering the Aduatuci Celts of Gaule. 80 warships in two legions, Legio VII & Legio X, invade Dover and are met by mass forces gathering on the hills and cliffs overlooking the shore. The Romans look for an open beach further up the coast at Walmer. Roman cavalry and exposed ships are destroyed by a fierce storm and withdraw.
800 ships, built lower for easier beaching, return to Walmer. King Cassivellaunus of the Trinovantes-Catuvellauni: Trinity of Grains (modern Hertfordshire), the Cenimagni, Segontiaci, Ancalites, Bibroci & Cassi engage in guerrilla warfare against the Romans. Cassivellaunus' stronghold is besieged, but the four kings of Kent: Clear Water Place: Cingetorix, Carvilius, Taximagulus and Segovax attack the Roman camp on the coast. A tribute is agreed on, with Commius, king of the Atrebates (modern West Sussex), acting for a time as Caesar's personal representative, and Caesar returns to Gaule. Siege of Alesia: 52 BC: Aduatuci Celts under Vercingetorix fight back. Caesar kills 40,000 Gaules, but is routed. Vercingetorix & 800 men escape to Gergovia. Caesar attacks Gergovia; Vercingetorix surrenders, is paraded through Rome, then executed. 43 BC: Lieutenant Munatius Plancus forms a Roman Colony at Oppidum Lugdunum: Raven Fortress [Lyon, France] and declares himself Governor of Gaule. The three parts of Gaule mentioned by Caesar meet at Lyon, and Plancus' seat is there. The Celtic sun god Lugh (Light) is equated by the Romans to Mercurius, + dunum: hill-fort in Gaulish. Lugh's totem is a cock (rooster), which modern French people associate with le coq.
International: Altun Ha: Stone Water City built in Belize. Temple of the Green Tomb with jade figurines, mounds, dam and aqueduct. The commercial trading center runs for a thousand years. Jugurthine War: 112 BC: Emperor Lucius Cornelius Sulla against King Jugurtha of Numidia. Marius' army defeats him in 106 BC. 48 BC: Battle of Pharsalus: Thessaly, Greece, between Gaius Julius Caesar & his son-in-law Gnaeus Magnus Pompey. Pompey flees to Alexandria & is murdered by Ptolemy XIII, Cleopatra VII's brother. The Royal Library of Alexandria, containing 700,000+ scrolls, is burnt to a crisp by Julius Caesar. Caesar kills Ptolemy, impregnates Cleopatra & restores her throne.
- 44 BC - 19 AD
- Roman Revolution: Julius Caesar is stabbed to death by the Roman Senate in the toilets behind a theatre. Triumviri reipublicae constituendae: Caesar's great-nephew Octavian shares power with Lepidus [Hispania/Africa] & Antonius/Mark Antony [Gaule/Judea/Media/Parthia/Armenia/Egypt/Libya/Syria] until Mark Antony declares Ptolemy XV Caesar a.k.a Caesarion: Little Caesar, son of Julius Caesar & Cleopatra VII of Egypt, heir to the throne. 40 B.C.: The Greek geographer and historian Strabo writes of communities of Celtic women living in Gaule (in present-day France) and the sacred sexual rituals they perform with one another. Battle of Actium: Octavian's Legio VI Victrix & VI Ferrata capture Alexandria on August 1, 30 BC, kill Caesarion, and rule Egypt. Octavian names himself Caesar Augustus, the Principate, & rules as emperor with his wife Livia Drusilla. Antony & Cleopatra's three children are paraded through the streets of Rome in golden chains. The girl is married off to the King of Numidia & the two boys are killed. Marsi & Chatti War: The Chatti: Grain People Celts of Germany are massacred, and survivors battle Octavian. Octavian builds the 353 mile long Upper Germanic Limes a.k.a Rhaetian Limes from the North Sea at Katwijk in the Netherlands along the Rhine to Eining/Kelheim on the Danube. The fortification contains 60 castles and 900 watchtowers. 29-13 BC: Cantabrian War: Octavian sends Legio I Augusta, Legio II Augusta, Legio IV Macedonica, Legio V Alaudae, Legio VI Victrix, Legio IX Hispana, Legio X Gemina, Legio XX Valeria Victrix, Ala II Gallorum, Cohors II Gallorum, Ala II Thracum Victrix Civium Romanorum, Cohors IV Thracum Aequitata, Ala Parthorum, Ala Augusta to Spain to steal Asturi gold and Cantabrian iron. The Cantabri and Asturi Celts fight back with guerrilla warfare, hiding in the mountains and attacking with ranged weapons. Strabo writes that the Cantabri sing hymns of victory while being crucified because they have died as soldiers and free men.
He mentions suicide by sword and fire. Silius Italicus writes that the poison they use is made from the seeds of the yew tree, a plant with mythic significance. Octavian's stepson Tiberius Nero is the next emperor, and he sends Legio VI and X Gemina to Zaragoza. The governor of Hispania Tarraconensis, Servius Sulpicius Galba, marches to Rome with Legio VI and overthrows Tiberius, who kills himself. The resistance continues for seventy more years even though Rome considers the Cantabri surrendered. 19 AD: Battle of Teutoburg Forest: King Arminius of the Chatti kills a Roman general and escapes into the Teutoburg forest. Rome sends an army, cavalry, and ships over German lakes and discovers bleached bones of dead Roman soldiers in forest groves. The Roman army moves toward Gaule, where they think Arminius is, and is killed by quicksand and drowning. Surviving soldiers are paid off, sent home, and honored with a feast of bravery. The losing Roman general is poisoned.
- 23 - 61 AD
- Celtic Frisii revolt on having to furnish oxen, wives, children, and land to the Romans, and gibbet them. Romans send in the entire cavalry and lose. Deserters tell Roman historian Tacitus that 900 Romans were cut to pieces in Braduenna wood and another 400 were slaughtered near a house. In the city of Rome, Emperor Tiberius Nero kills nobles and puts their money into his private account. Nobles commit suicide so their wills are valid. Tiberius is finally smothered to death with blankets. Caligula accedes to the throne, goes mad, bankrupts the treasury of Rome, invades Gaule & is assassinated. 43 AD: Emperor Claudius invades Britain against Cassivellaunus' great-grandson King Cymbeline a.k.a Cunobelinus of the Trinovantes. Emperor Claudius is poisoned by his wife, and Nero is the next emperor. Nero massacres Druids on the Isle of Mon at Anglesey (the Druid women wear black and dishevel their hair; Lucan's Pharsalia written), poisons his brother, murders the consul, tortures Epicharis on the rack, butchers 20 men and has his mother hacked to death by soldiers. Roman noblemen kill their relatives and offer them up as sacrificial victims to the gods.
- 61 - 79 AD
- Roman soldiers flog Queen Boudica of Britain and rape her two daughters. Queen Boudica of the Iceni leads the Celtic tribes of Iceni and Trinovantes to massacre the Romans at Colchester and burn their temple. She is defeated at London and poisons herself rather than become a prisoner. Cadbury Hill in Britain shows evidence of a massacre in the middle of the 1st century A.D. Celts are killed in the streets and at the gates of their hill fort. Helvetii and Treveri Celts of Gaule massacred and sold into slavery. 69-70: The Roman navy is destroyed by the Germans at Cremona. Batavi Rising: The Bructeri of Hellweg and wise woman Veleda, the spiritual leader of the Batavi rising. She is captured by the Romans. The Goths, a Germanic tribe from Scandinavia (Norway, Iceland, Sweden, Denmark), divide in two, with the Visigoths moving to the mouth of the Danube (Romania), and the Ostrogoths to the north shore of the Black Sea (Ukraine). Nero is assassinated, Galba assassinated, Otho abdicates the throne, Vitellius is murdered, and Emperor Vespasian dies under rebel soldiers. Rome is burnt and looted, and a senator is decapitated by the mob. Emperor Titus ascends the throne and is assassinated; Domitian ascends the throne.
- 80 - 126 AD
- Julius Agricola invades Scotland and enslaves the people of Ross, Dalgoan, Bochastle, and Ardargie. Highland Scots attack under Galgacus using guerrilla warfare and win. Agricola withdraws to Rome in defeat and is killed by Domitian. Domitian subdues the Chatten-Kelten population of the Main River with military general Trajan. The Celtic city of Moguntiacum (Mainz/Mayence, 30 miles west of Frankfurt on the west side of the Rhine) becomes the capital city of the Roman province Germania Superior on October 27, 90 AD. The tombstone of the Celt Blussus and his family is at the Oppidum, now Castle Weisenau. Domitian is assassinated in 96 AD. Nerva is the next Emperor, dying in 98 AD. Emperor Trajan invades the kingdom of Dacia, on the northern bank of the Danube River, in 101 AD, conquering King Decebalus. He seizes the Dacian capital of Sarmizegetusa, destroys it and is granted the title Dacicus Maximus. In 106 AD he resettles the entire area with Romans, making it Romania. In 113 AD he invades Armenia and annexes it to the Roman Empire. He then seizes Babylon [Babil, Iraq], Seleucia [18 miles south of Baghdad] & Susa [Shush, Iran], declaring Mesopotamia [Old Persian: Between Rivers] a new province of the Empire, but withdraws from Mesopotamian rebels, dying in 117 AD at Selinus [Turkey]. Hadrian succeeds. Emperor Hadrian arrives in Scotland bearing the title Britannicus. He builds a wall to keep the Scots from attacking Romanized Britain. They break through the wall and are subdued.
International: The Olmec Empire declines.
- 126 - 206 AD
- Successor Emperor Antoninus Pius. The Antonine Wall is built in central Scotland. Romans battle the Scots and are defeated. They retreat behind Hadrian's Wall. Antoninus Pius dies in 161 AD and his nephew Marcus Aurelius assumes power. Marcomannic War between the Celts and the Romans. Marcus Aurelius, "The Last of the Five Good Emperors", kills Pothinus, the Bishop of Lugdunum. The second Bishop, Saint Irenaeus or Saint-Irénée (ca. 130-202 CE), is visiting Rome at the time of the murder. Irenaeus' Greek work describing his discipleship to Polycarp, Bishop of Smyrna, is translated into Latin and the Greek is lost. The Marcomanni finish the Emperor off, killing him on March 17, 180 at Vindobona (modern Vienna). His ashes are returned to Rome and rest in Hadrian's mausoleum (the modern Castel Sant'Angelo). The original bronze equestrian sculpture of him is at the Musei Capitolini. His son Commodus becomes Emperor, rapes his sister Lucilla, murders his wife Crispina in 183, and is finally strangled to death in his bath by the wrestler Narcissus in 192. Severus, the next Emperor, invades Scotland and lays waste to the country. Scots attack with guerrilla warfare and win. The Emperor returns to Rome, vows to exterminate the Scots, but then dies.
- 206 - 306 AD
- Crisis of the Third Century / Military Anarchy / Imperial Crisis (235-284): Emperor Macrinus (Elagabalus assassinates him), Elagabalus (assassinated by cousin Alexander Severus in 222 & his body thrown into the Tiber River), Alexander Severus (murdered by soldiers after defeat at Persia), Gordian I & II (assassinated by Maximinus Thrax in 238), Maximinus Thrax (assassinated by Pupienus & Balbinus), Pupienus & Balbinus (assassinated by Gordian III), Gordian III (killed by Philippus), Philippus (assassinated by Decius in 244), Decius (makes emperor worship mandatory & is killed by the Goths in 249), Hostilian (killed by the plague as Gallus marches on Rome), Gallus & son Volusianus (assassinated by Aemilianus), Aemilianus (assassinated by Valerian in 253). 258: Roman provinces of Gaul, Britain & Hispania break off to form the Gallic Empire. The Gallic Empire fights Valerian. 260: Provinces of Syria, Palestine & Aegyptus become the Palmyrene Empire. Valerian is captured by Persian King Shapur I. The next Emperor, Gallienus, is assassinated by Claudius II in 268. Battle of Naissus / Battle of Lake Benacus: Emperor Claudius II Gothicus drives back the Alamanni and recovers Hispania from the Gallic Empire. He murders Saint Valentine, who opposed war recruitment. 270: Battle of Placentia: Claudius II killed by the Goths. Brother Claudius Quintillus takes power & is assassinated by Aurelian on the battlefield. Emperor Aurelian is routed at Placentia. 271: Battle of Fano: Aurelian routs the Alamanni at Pavia. For this, he receives the title Gothicus Maximus. He invades the Balkans, killing Goth leader Cannabaudes, & creates Dacia Ripensis with Serdica as the capital. 272: Seizes the Palmyrene Empire ruled by Queen Zenobia. 274: Invades the Gallic Empire. Gallic Emperor Tetricus II allows Gaul & Britain to return to the empire at Châlons-en-Champagne by deserting to the Roman camp. 275: Aurelian assassinated.
275: Emperor Marcus Claudius Tacitus has the works of Gaius Cornelius Tacitus of 117 AD reprinted and placed in public libraries. He is assassinated. Florianus (276: assassinated by Probus after 88 days in power), Probus (assassinated by Carus), Carus (282: assassinated). 283: Emperor Carinus, son of Carus, destroys the Alamanni Celts of the Rhine. He is assassinated by a tribune at Morava. 284: Dominate, the Tetrarchy: Emperor Diocletian splits the empire in half; this and other reforms allow it to continue, eventually entering a new phase known as the Dominate, the Tetrarchy, and the Later Roman Empire. Diocletian bans alchemical books, invades Egypt, Armenia, Persia, and Mesopotamia, and builds a palace on the Dalmatian coast of Yugoslavia. 306: Emperor Constantius Chlorus invades Scotland and subdues some tribes: the Caledones and other Picts. (Pict is Latin for "painted men". Pritani means "people of the designs".) Northern Scottish tribes attack and plunder the Border districts. They are joined by invading Saxon tribes and move to London. General Theodosius is called in. He defeats the invaders. Romans invade Paris and the Parisii take refuge on the island.
- Constantius Chlorus dies. Emperor Constantine I, son of Constantius Chlorus, converts to Christianity after seeing a cross in the sky that says, "By this light you shall conquer." He builds the basilica of Saint Peter, some housing estates, exterminates the Vatici Celts of Côte d'Or, Gaul, and names Rome's spiritual center after them. Julian the Apostate is proclaimed emperor of Rome; Lutetia is renamed Paris (Civitas Parisiorum, City of the Parisians). The Alamanni fight Julian in Strasbourg in 357. They are defeated and King Chonodomarius is taken prisoner. On January 2, 366 they cross the frozen Rhine in large numbers to invade the Roman Empire. Magnus Maximus (Macsen Wledig), a Spanish mercenary, holds Segontium Oppida near Caernarfon in AD 383 and is declared Emperor. He captures Rome with troops which include Britons. In 388 AD he is killed by Byzantine Emperor Theodosius. 391: Emperor Theodosius orders the destruction of all pagan temples, and Patriarch Theophilus of Alexandria complies. The Mithreum, the Musaeum (Temple of the Muses) & the Serapeum Library are razed to the ground, and phalli of Priapus are carried through the forum. God images are melted down into pots and other utensils for the use of the Alexandrian church and for relief of the poor. Rome sends a legion as the Scots revolt and invade Britain. Rome withdraws completely from Britain. Emperors Julian, Jovian, Valens, and Valentinian I are assassinated. Alaric I, King of the Visigoths, captures & loots Rome in 409, dying in 410 in Cosenza. Smallpox, measles, locusts, famine and the plague kill 98 percent of the Chinese population.
- 413 - 476 AD
- The Great Migration: Emperor Valentinian III hires Hun mercenaries to move westward into Europe and drive the Goths out of Germany. The Goths, Suevi Celts and Alans revolt and sack Rome. Stilicho the Vandal of Rome recovers Rome, exterminates the Alans [Portuguese] and the Suevi, and renames Baetica, Spain Vandalusia. Justinian gives his daughter in marriage to the Vandals. The plague spreads to Britain. According to the Triads of Britain: "It arose from the corpses of the Irishmen who were slaughtered in Manuba, after they had oppressed northern Wales for the space of twenty-nine years." 451 AD: Armenian Schism: The Armenian people of the Black Sea, who are descendants of the Kurgan civilization of Russia, are converted to Christianity. Monophysitism, the belief that Christ has only one nature and it is divine, is decided at the Council of Ephesus. Nestorianism ("not a union of natures") and the teaching of Eutyches ("Christ is Divine and Human at the same time") are declared heresies and punished by death. Emperors Honorius, Valentinian, and the Byzantine Arcadius are all assassinated. Jutes, Saxons, and Angles turn on the Britons and the Cornovii kingdom of Cornwall and exterminate them. Survivors are enslaved or escape to Brittany and Scotland. Saxons form the Kingdom of Bernicia. Burgundians under Flavius Aetius capture the Celtic Rhineland city Borbetomagus and rename it Worms.
International: Chichén Itzá on the Yucatán peninsula of México is founded by the Maya-Itzas, who came led by Itzamna after they separated from Acalon.
- 476 - 563 AD
- King Conall of the Scottish Dal Ríata kingdom dies in battle. Aidan is victorious over the Saxons of Bernicia, Northumberland, who invade Scotland. King Vortigern rules Britain, with Hengest of Kent marrying his daughter. 488: King Aesc of Kent. King Cerdic of Wessex-West Saxony. 550: King Maelgwynn Gwynedd of Gwynedd, Wales. The Yellow Plague ravages Scotland and the rest of Britain. Called "The Second Plague" in the Triads of Britain: "the infection of the Yellow Plague of Rhoss, on account of the corpses which were slain there, and if anyone went within reach of the effluvia he died immediately."
International: The Fall of Rome: Odoacer the Ostrogoth names himself king of Italy, but is subdued. Pope Felix III excommunicates Patriarch Acacius of Constantinople. Bubonic Plague hits Rome. Visigothic King Alaric II publishes the Lex Romana Visigothorum: Jews are separated from the general population. Christianity becomes the state religion of Israel and marriage between Christians and Jews is made illegal. Jews speak Hebrew; Yiddish among the Ashkenazim of Germany, Ladino among the Sephardim of Spain, and Judeo-Arabic in North Africa. 489-493: Theodoric the Ostrogoth, a.k.a. Dietrich of Bern, seizes Rome. Justinian of Constantinople, nephew of Justinian of Rome, closes the Greek school of philosophy at Athens [scholars go to Persia and Syria], condemns the Gnostics, publishes the Corpus Iuris Civilis, and invades Persia, Armenia, and North Africa. The Nika rioters revolt, destroy much of Constantinople, and are defeated by General Belisarius.
- 563 - 633 AD
- Missionary Columba arrives at the sacred druid island of Iona and converts the survivors to Christianity. Battle of Catterick [Battle of the Long Mountain]: The Saxons invade and occupy Cadbury Hill. Archeological excavation finds the foundation for a building in the shape of a cross, but it is not completed, along with a stone wall, a southwest tower, and the outlines of a 63-foot by 34-foot hall. 597: Battle of Culdremna. St Augustine lands in Kent under Pope Gregory, converts King Ethelbert Bretwalda to Christianity, and introduces the Roman Christian Church to England. He is the first Archbishop of Canterbury. Ethelbert is proclaimed overlord of all other regional kings. Table-Mên: Ethelbert, 5th king of Kent; Cissa, 2nd king of the South Saxons; Kingills, 6th king of the West Saxons; Sebert, 3rd king of the East Saxons; Ethelfred, 7th king of the Northumbers; Penda, 5th king of the Mercians; Sigebert, 5th king of the East Angles: 600 AD. 616: King Edwin of Deira, Northumbria, overthrows Ethelfred-Ethelfrith and rules Northumbria until 632, some believe giving his name to Edinburgh, Scotland. 633: King Oswald of Deira kills King Cadwallon of Gwynedd, Wales.
International: Pope Gregory I names Rome the Holy Roman Empire and begins the peaceful conversion of the Jews. Pope Pelagius II and the Lombards destroy the Gothic kingdom of the Gepidae, Italy. The Insubri inhabit a district of Lombardy containing Milan, Como, Pavia, Lodi, Novara, and Vercelli. Clovis, king of the Franks, conquers Cologne, founds the Merovingian empire, expels the Visigoths to Spain, and partitions Germany. Poles settle in western Galicia; Ukrainians settle in eastern Galicia. Slavs attack the Frankish stronghold of Thuringia and secure its independence. Chinese Tang Dynasty: 618 AD - 907 AD. Aristocrat Li Yuan overthrows the Sui emperor [Sui Dynasty: 581 AD - 618 AD] and becomes the first Tang ruler. 627 AD: His son, Li Shimin, renames himself Emperor Tang Taizong, destroys competitors for the throne, forces Turkish nomads out of Northern China, has armies conquer parts of Tibet & Turkestan, & opens overland trade routes from China to India & central Asia. The routes give missionaries an overland entrance into China & allow Chinese Buddhist pilgrims to visit India. Muhammad is born. His flight from Mecca marks the start of the Muslim era. Bubonic Plague sweeps Europe.
- 633 - 685 AD
- Scots become tributary to the Angles of Northumbria. King Drest Mac (Mac means "Divine Son Of") Domnall of the Scottish Cruithne [People of Wheat] is replaced with Bridei Mac Bile after a revolt against the Angles fails. 634: Battle of Winwaed River: King Cadafael ap Cynfedw of Gwynedd, Wales, and King Penda of Mercia vs King Oswiu of Northumbria. They are defeated at Leeds. King Owen of Strathclyde kills King Domnal Brecc of Dalriada at the Battle of Strathcarron. Scots are defeated in four battles on Jura island. Bridei lays waste to the Orkney Islands and conquers Fortrenn. Battle of Penselwood: King Cenwalh and the Wessex Saxons invade Dumnonia under King Culmin. Angles invade Ireland and destroy monasteries. Angles are defeated at the Battle of Dun Nechtan and are enslaved. The Celtic religious center of Glastonbury is taken over by the Benedictines under Beorhtwald, its first Saxon abbot. The Council of Constantinople forbids fire leaping.
International: Caliph Omar conquers Jerusalem. Crosses cannot be publicly displayed outside church buildings. Reccesvinth of Spain produces the Forum Judicum, a.k.a. the Visigothic Law Code or Fuero Juzgo, in use through the Middle Ages. Jews are forbidden to testify against Christians, their property is seized, and they are enslaved. The Rus invade the Ukraine. Wu Chao becomes Empress of China [Wu Hau Huang-ti] and conquers and annexes Korea. 711: Abbasid Dynasty: Capital Baghdad, Persia. Moroccan general Tariq ibn Ziyad seizes Spain and puts it under Muslim control. Spain is named Al-Andaluz and is part of the Umayyad Empire. Gibraltar (jabalu t-tarîq) is named after him. Amir Abd al-Rahman I introduces the date palm to Spain, as well as rice pilaf, lamb stew, white sugar, rose & quince jellies, marzipan, apricots, oranges, limes, artichokes, spinach, eggplant, and coriander.
- 687-719 AD
- Selbach usurps the throne of Scottish Dal Ríata, burns Dunolly, and slaughters the Cinel Cathbath nobles. The Dalriads are defeated at Valle Limnae. King Nechtan of the Cruithne declares allegiance to the Church of Rome. The Columban clergy at Iona are dispossessed of their lands and driven out. The clergy are replaced with Romanized clergy from Ireland and Northumbrian Angles. Selbach is defeated in a naval battle, resigns the crown to his son Dungal, and enters a monastery.
International: Pacal, one of Palenque's greatest kings, dies and is interred at the Mayan Temple of the Inscriptions, which he had built. His son Chan Bahlum builds the Temple of the Sun and the Temple of the Foliated Cross. Palenque is in Chiapas, near the Gulf of México. The nearby Otulum River feeds Palenque City's aqueduct.
- 724-768 AD
- Dungal is ejected from the Dal Ríata throne and replaced with Ewen. Dungal invades Toraigh Island and plunders it. King Angus of the Cruithne attacks and wounds him at Dunleithfinn fort. Angus lays waste to Dal Ríata, captures Dunadd, burns Creic, casts Dungal and his brother into chains [Battle of Newburgh-on-Tyne], drowns two sub-kings, and attacks the Saxons. The Strathclyde Britons under King Teudebur / Tewdr defeat Prince Talorgen of the Cruithne and Angus at the Battle of Mugdock. 757: King Ethelbald of Mercia is succeeded by Offa after his assassination. Offa's Dyke is the boundary between England and Wales. King Angus of the Cruithne is then succeeded by Ciniod, son of Wredech. King Aedh Find of the Dal Ríata fights a battle with Ciniod. Ciniod is succeeded by Alpin, King of the Saxons, who is the son of Wroid.
International: 732 AD: The Umayyads, a family that ruled the Muslim world from 661 to 750 AD, are defeated by Charles 'Martel: The Hammer' of Herstal in Southern France. China's Tang Dynasty armies invade Bactria & Kashmir, defeating Arab-Tibetan forces. 745: Pope Gregory III establishes Anglo-Saxon Bishop Bonifatius in Mainz. 750: Pépin le Bref of Herstal deposes Frankish King Childéric III under the sanction of Pope Zacharia in the city of Soissons. Gregory of Tours' Historia Francorum states that the Franks originally lived in Pannonia [the mouth of the Danube on the Black Sea], but later settled on the banks of the Rhine. They adopted their name (circa 11 BC) following their defeat and relocation by Drusus (Celtic: Oak Priests), under the leadership of a certain chieftain called Franko (Saxon: franca: throwing axe) – replacing the earlier tribal name Sicambri-Sugambri – said to be an offshoot of the Cimmerians or Scythians. 751: Revolt in Turkestan closes China's trade routes to the Middle East. 763 AD: Tibetan forces invade China and battle the forces of the Tang rulers for 80 years.
- 775-845 AD
- The First Reich: The Holy Roman Empire: Pépin's son Charlemagne is crowned Emperor of the West by Pope Leo III. Charlemagne conquers the Saxons and gives them a choice between baptism and execution. When they refuse to convert, he has 4,500 of them beheaded in one morning. Leo IV's widow rejects Charlemagne's marriage proposal and is deposed in 802. 802: Egbert of Wessex rebels against Mercia to rule southern England. He annexes the kingdom of Kent to Wessex and forces the Northumbrians to submit to his overlordship. He conquers the Britons of southwest England. Danes attack Lindisfarne and Iona. King Arthur of Ceredigion, Wales dies. 825: Degannwy, capital of Gwynedd (White Grain Province), Wales, under King Merfyn Frych, is struck by lightning and burnt to the ground. Saxons overrun Cornwall. Alpin Mac Eocha of Dal Ríata is decapitated in Galloway by the Cruithne. Kenneth Mac Alpin invites King Drost of the Cruithne and his noblemen to dinner. He kills all of them, seizes Scone, and rules all of Scotland. 845 AD: Nomenoë revolts against Charles the Bald, defeats him, and forces him to recognize the independence of Brittany and to forgo the annual tribute which he had exacted. Villemarqué's ballad describes the incident.
World War II persecution of Serbs
|Part of World War II|
[Image: Serbs, expelled from their homes in the Independent State of Croatia, march out of town carrying large bundles.]
|Location||Independent State of Croatia; Territory of the Military Commander in Serbia; Albanian Kingdom (1939–1943); Albanian Kingdom (1943–1944)|
|Deaths||Estimates vary and are disputed. It is agreed that the total number of Serbian deaths ranges from 300,000 to 500,000, while the number of Serbs killed in concentration camps is estimated to be around 100,000|
|Perpetrators||Ustaše government of the Independent State of Croatia, Albanian collaborationists, Axis occupation forces|
|Motive||Racial laws that also caused the Holocaust in Croatia and the Porajmos|
The World War II persecution of Serbs, also known as the Serbian Genocide, refers to the widespread genocidal persecution of Serbs that included extermination, expulsions and forced religious conversions of large numbers of ethnic Serbs by the Ustaše regime in the Independent State of Croatia, and also the atrocities carried out by Albanian collaborators and Axis occupying forces during World War II.
The number of Serbs persecuted by the Ustaše is very high, but the exact extent is the subject of much debate and estimates vary widely. Yad Vashem estimates over 500,000 murdered, 250,000 expelled and 200,000 forcibly converted to Catholicism. The United States Holocaust Memorial Museum has estimated that Ustaša authorities murdered between 320,000 and 340,000 ethnic Serb residents of Croatia and Bosnia between 1941 and 1945 (the period of Ustaše rule), of whom between 45,000 and 52,000 were murdered at the Jasenovac concentration camp alone. According to the Federal Institute for Statistics in Belgrade, the "actual" figure of the casualties suffered within Yugoslavia's borders from war-related causes during the Second World War was ca. 597,323 deaths. Of these, 346,740 were Serbs and 83,257 were Croats.
- 1 Background
- 2 Independent State of Croatia
- 3 Territory of the Military Commander in Serbia
- 4 Albanian role and Kosovo
- 5 Controversy
- 6 Commemoration
- 7 Victims
- 8 Aftermath
- 9 See also
- 10 Annotations
- 11 References
- 12 Sources
In April 1941, the Kingdom of Yugoslavia was invaded by the Axis powers. Subsequently, the newly created Axis puppet state known as the Independent State of Croatia (NDH) implemented genocidal policies against its Serb, Jewish and Romani populations. The NDH utilized the Ustaše movement to persecute Serbs, killing thousands of them and forcing large numbers of people to convert to the Roman Catholic faith.
The ideology of the Ustaše movement was a blend of Nazism and Croatian nationalism. The Ustaše supported the creation of a Greater Croatia that would span to the River Drina and to the outskirts of Belgrade. The movement emphasized the need for a racially "pure" Croatia and promoted the extermination of Serbs, Jews and Gypsies.
A major ideological influence on the Croatian nationalism of the Ustaše was the 19th-century nationalist Ante Starčević. Starčević was an advocate of Croatian unity and independence and was both anti-Habsburg and anti-Serb. He envisioned the creation of a Greater Croatia that would include territories inhabited by Bosniaks, Serbs, and Slovenes, considering Bosniaks and Serbs to be Croats who had been converted to Islam and Orthodox Christianity and considering the Slovenes to be "mountain Croats". He argued that the large Serb presence in territories claimed by a Greater Croatia was the result of recent settlement encouraged by Habsburg rulers and an influx of groups like the Vlachs, who took up Orthodox Christianity and identified themselves as Serbs. The Ustaše used Starčević's theories to promote the annexation of Bosnia and Herzegovina to Croatia and recognized Croatia as having two major ethnocultural components: Catholic Croats and "Muslim Croats", as the Ustaše saw the Islam of the Bosnian Muslims as a religion which "keeps true the blood of Croats." Armed struggle, genocide and terrorism were glorified by the group.
Independent State of Croatia
After Nazi forces entered Zagreb on April 10, 1941, Pavelić's closest associate Slavko Kvaternik proclaimed the formation of the Independent State of Croatia in a Radio Zagreb broadcast. Meanwhile, Pavelić and several hundred Ustaše volunteers left their camps in Italy and travelled to Zagreb, where Pavelić declared a new government on 16 April 1941. He accorded himself the title of "Poglavnik" (German: Führer; English: chief leader). The Independent State of Croatia was declared to be on Croatian "ethnic and historical territory".
This country can only be a Croatian country, and there is no method we would hesitate to use in order to make it truly Croatian and cleanse it of Serbs, who have for centuries endangered us and who will endanger us again if they are given the opportunity.— Milovan Žanić, the minister of the NDH Legislative Council, on 2 May 1941
Jasenovac concentration camp
A large portion of the atrocities occurred in the notorious Jasenovac concentration camp. It was the largest extermination camp in the Balkans. The Ustaše interned, tortured and brutally executed men, women and children in the camp. Serbs constituted the majority of inmates. Upon arrival at the camp, the prisoners were marked with colors, similar to the use of Nazi concentration camp badges: blue for Serbs, and red for communists (non-Serbian resistance members), while Roma had no marks. In several instances, inmates with blue badges were murdered immediately upon arrival.
Serbs were predominantly brought from the Kozara region, where the Ustaše captured areas formerly held by Partisans. These were brought to the camp without sentence, most destined for immediate execution, accelerated via the use of machine-guns. Aside from sporadic and random killings and deaths due to the poor living conditions, many inmates arriving at Jasenovac were scheduled for systematic extermination. An important criterion for selection was the duration of a prisoner's anticipated detention. Strong men capable of labor and sentenced to less than three years of incarceration were allowed to live. All inmates with indeterminate sentences or sentences of three years or more were immediately scheduled for execution, regardless of fitness.
The so-called "manual means of execution", the Ustaše's favorites, were executions carried out with sharp or blunt craftsmen's tools: knives, saws, hammers, etc. The preferred manual weapon of many Ustaše guards was the Srbosjek (or "Serb-cutter"). This knife was originally a type of agricultural knife manufactured for wheat sheaf cutting. The upper part was made of leather, as a sort of glove, designed to be worn with the thumb going through the hole, so that only the blade protruded from the hand. It was a curved, 12 cm long knife with the edge on its concave side. The knife was fastened to a bowed oval copper plate, while the plate was fastened to a thick leather bangle.
These mass-executions took place at various locations.
- At the "Granik" ramp, by the Sava, internees were hung by a crane, had their intestines and necks slashed, then thrown into the river after being hit by blunt tools in the head. Later, the inmates were tied in pairs, and then cut in the stomach and thrown into the river alive.
- In the vicinity of Donja Gradina, empty areas were encircled with wire and used for slaughter. The victims were slain with knives, or had their skulls smashed with mallets. The memorial site includes 117 acres and 105 mass graves.
- Mlaka and Jablanac were two sites used as collection and labor camps for the women and children in camps III and V, but also as places where many of these women and children, as well as other groups, were executed at the Sava bank in between the two locations.
- At Velika Kustarica, as many as 50,000 people were killed during the 1941–42 winter.
On the night of 29 August 1942, the prison guards made bets among themselves as to who could slaughter the largest number of inmates. One of the guards, Petar Brzica, boasted that he had cut the throats of about 1,360 new arrivals. A gold watch, a silver service, a roasted suckling pig and a bottle of Italian wine were among his rewards. Others who confessed to participating in the bet included Ante Zrinušić, who killed some 600 inmates, and Mile Friganović, who gave a detailed and consistent report of the incident. Friganović admitted to having killed some 1,100 inmates. He specifically recounted his torture of an old man, known as "Vukasin"; he attempted to compel the man to bless Ante Pavelić, which the old man refused to do, even after Friganović had cut off his ears, nose and tongue after each refusal. Ultimately, he cut out the old man's eyes, tore out his heart, and slashed his throat.
In April 1945, as Partisan units approached the camp, the camp's supervisors attempted to erase traces of the atrocities by working the death camp at full capacity. On 22 April, 600 prisoners revolted; 520 were killed and 80 escaped. Before abandoning the camp shortly after the prisoner revolt, the Ustaše killed the remaining prisoners and torched the buildings, guardhouses, torture rooms, the "Picilli Furnace", and all the other structures in the camp, apparently in an effort to make the number of victims impossible to definitively ascertain. Upon entering the camp, the Partisans found only ruins, soot, smoke, and the skeletal remains of thousands of victims.
Stara Gradiška concentration camp
The Stara Gradiška concentration camp was built at the site of the Stara Gradiška prison. The camp was specially constructed for women and children and it became notorious for the crimes committed against them. The camp was guarded by the Ustaše and several female Croatian nurses. Inmates were killed using different means, including firearms, mallets, machetes and knives. At the "K", or "Kula" unit, Jewish and Serbian women, with weak or little children, were starved and tortured at the "Gagro Hotel", a cellar which an Ustaša named Nikola Gagro used as a place of torture.
Other inmates were killed using poisonous gas. The first to be gassed were the women and children that arrived from camp Djakovo, with gas vans that Simo Klaić called "green Thomas". The method was later replaced with stationary gas chambers using Zyklon B and sulfur dioxide. Gas experiments were conducted initially at veterinary stables near the "Economy" unit, where horses and then humans were killed using sulfur dioxide and later Zyklon B. Gassing was tested on children in the yard, where the camp commandant, Ustaša sergeant Ante Vrban, viewed its effects. Most gassing deaths occurred in the attics of "the infamous tower", where several thousand children from the Kozara region were killed in May, and 2,000 more in June 1942. Subsequently, smaller groups of 400-600 children, and a few men and women, were gassed.
Sisak children's concentration camp
At Sisak, near Jasenovac, the Ustaše presence was vigilant. Early in 1942, the local synagogue was vandalized and thoroughly looted by Croat extremists, and the building was later converted into a workers' hall. The inhabitants of Sisak quickly came to the Ustaše's attention, and those of Serbian and other non-Croat origin were persecuted.
A large camp was later erected, and it held more than 6,600 Serb and Roma children throughout World War II. The children, aged between 3 and 16, were housed in abandoned stables riddled with filth and pests. Malnutrition and dysentery seriously impaired their health. They were fed a daily portion of thin gruel and treated horribly by their captors.
Jastrebarsko concentration camp
The camp housed Serbian children between one month and fourteen years of age and was operational for two months in 1942. The camp was set up specifically for Serb children from the Kozara region of Croatia. During its two months of operation, 1,018 children died in the camp. Ilovara Francis, a gravedigger who was paid "per piece", claimed to have buried 768 children in a six-week period. Another 1,300 children were transported to Jasenovac. On 26 August 1942, the Yugoslav Partisans liberated the camp, freeing approximately 700 children.
Jadovno concentration camp
The Jadovno concentration camp was located in a valley near Mount Velebit. It occupied an area of 1,250 square metres and was fenced with barbed wire 4 metres high. The guards were posted 1 km all around the concentration camp's barbed wire. Prisoners, mostly Serbs, arrived from the town of Gospić, where the Ustaše selected their victims. The Jadovno Victims Association has stated that in 132 days in the camp 40,123 victims were killed. Among them 38,010 were Serbs, 1,998 were Jews, 88 Croats, 11 Slovenes, 9 Muslims, 2 Hungarians, 2 Czechs, 1 Russian, 1 Roma and 1 Montenegrin.
The atrocities committed by the Ustaše stunned many observers. Brigadier Sir Fitzroy Maclean, Chief of the British military mission to the Partisans commented, "Some Ustaše collected the eyes of Serbs they had killed, sending them, when they had enough, to the Poglavnik ... for his inspection or proudly displaying them and other human organs in the cafés of Zagreb."
The Ustaše also cremated living inmates, who were sometimes drugged and sometimes fully awake, as well as corpses. The first cremations took place in the brick factory ovens in January 1942. Engineer Hinko Dominik Picilli perfected this method by converting seven of the kiln's furnace chambers into more sophisticated crematories. Some bodies were buried rather than cremated, however, and exhumed after the war.
A large number of massacres were committed. The most notable ones were:
- Gudovac massacre — 184–196 Serbs were massacred by the Ustaše.
- Glina massacre — 260 Serbs were herded into a church and killed by gunfire. Those who converted to Catholicism were spared.
- Javor massacre — Hundreds of Serbs murdered in Javor, near Srebrenica, and Ozren.
- Korita massacre — 176 Serbs massacred and their bodies were thrown into a pit called the Koritska Jama.
- Kosinj massacre — Approximately 600 Serbs massacred by the Ustaše.
- Metković massacre — 280 Serbs massacred by the Ustaše in Metković on 25 June 1941.
- Otecac massacre — 331 Serbs massacred by the Ustaše, including a Serbian Orthodox priest forced to convert to Roman Catholicism before having his heart cut out of his chest.
- Prebilovci massacre — Approximately 650 Serbs murdered by the Ustaše.
The Ustaše recognized both Roman Catholicism and Islam as the national religions of Croatia, but held the position that Eastern Orthodoxy, as a symbol of Serbian identity, was their foe. They never recognized the existence of the Serb people on the territories of Croatia or anywhere else in the world, for that matter; they referred to them only as "Croats of the Eastern faith", also referring to Bosnian Muslims (or Bosniaks) as "Croats of the Islamic faith". The Ustaše in power banned the use of the expression "Serbian Orthodox faith" and mandated the use of the expression "Greek-Eastern faith" in its place. Some 250,000 Serbs were converted to Catholicism in a six-month period in 1941. Hundreds of Serbian Orthodox Christian churches were closed, destroyed, or plundered during Ustaše rule. On 2 July 1942, the Croatian Orthodox Church was founded to replace the institutions of the Serbian Orthodox Church.
In a six-month period in 1941, some 120,000 Serbs were expelled to Nazi-occupied Serbia, and tens of thousands fled. The general plan was that prominent people be deported first, so that property could be nationalized and the remaining Serbs be more easily manipulated. By the end of September 1941, about half of the Serbian Orthodox clergy, 335 priests, had been expelled.
In 1941, the Nazi puppet Independent State of Croatia banned the use of Cyrillic, having regulated it on 25 April 1941, and in June 1941 began eliminating "Eastern" (Serbian) words from the Croatian language, and shut down Serbian schools. Ante Pavelić ordered, through the "Croatian state office for language", the creation of new words from old roots (some which are used today), and purged many Serbian words.
Territory of the Military Commander in Serbia
- Kragujevac massacre. Between 18–21 October 1941, men and boys were rounded up by German soldiers and members of the Serbian Volunteer Command from the vicinity of Kragujevac, Serbia. All males from the town between the ages of sixteen and sixty were assembled, including high school students; 2,778 were shot. The massacre was a direct reprisal for German losses in a battle with Partisans and Chetniks in early October. The German High Command decided, based on reports that bodies had been mutilated by the guerrillas, that the punishment must be particularly harsh. A German report stated "The executions in Kragujevac occurred although there had been no attacks on members of the Wehrmacht in this city, for the reason that not enough hostages could be found elsewhere."
- 1942 raid in southern Bačka. The most notable war crime during the occupation was the mass murder of civilians, mostly of Serb and Jewish ethnicity, by Hungarian Axis troops in the January 1942 raid in southern Bačka. The total number of civilians killed in the raid was 3,808. Locations affected by the raid included Novi Sad, Bečej, Vilovo, Gardinovci, Gospođinci, Đurđevo, Žabalj, Lok, Mošorin, Srbobran, Temerin, Titel, Čurug and Šajkaš.
During the four years of occupation of Vojvodina, the Axis forces committed numerous war crimes against the civilian population: about 50,000 people in Vojvodina were murdered and about 280,000 were arrested, violated or tortured. The victims belonged to several ethnic groups that lived in Vojvodina, but the largest number of victims were of Serb, Jewish and Romani ethnicity.
Albanian role and Kosovo
During World War II, after the fall of Yugoslavia in 1941, the Italians placed the land inhabited by ethnic Albanians, including Kosovo, under the jurisdiction of an Albanian quisling government; Kosovo's inclusion in this geo-political Albanian entity was followed by extensive persecution of non-Albanians (mostly Serbs) by Albanian fascists. Most of the war crimes were perpetrated by the 21st Waffen Mountain Division of the SS Skanderbeg (1st Albanian) and the Balli Kombëtar.
In April 1943, Reichsführer-SS Heinrich Himmler created the 21st SS Division manned by Albanian and Kosovar Albanian volunteers. From August 1944, the division participated in operations against the Yugoslav Partisans and in massacring local Serbs. SS-Brigadeführer August Schmidthuber, one of the commanders of the division, was captured in 1945 and turned over to Yugoslav authorities. Schmidthuber was put on trial in February 1947 by a Yugoslav military tribunal in Belgrade, on charges of participating in massacres, deportations and atrocities against civilians. The tribunal sentenced him to death by hanging and he was executed on 27 February 1947.
Revisionism in modern-day Croatia
In 1989, the future President of Croatia, Franjo Tuđman, who had been a Partisan during WWII but later embraced radical nationalism, published Horrors of War: Historical Reality and Philosophy, in which he questioned the official numbers of victims killed by the Ustaše during the Second World War. In his book, Tuđman claimed that fewer than thirty thousand people died at Jasenovac. Tuđman estimated that a total of 900,000 Jews had perished in the Holocaust. Tuđman's views and his government's toleration of Ustaša symbols frequently strained relations with Israel.
Possibly the most overt and well-known example of ultranationalist, anti-Serb sentiment in contemporary Croatian public life is Thompson, a Croatian rock band that has on numerous occasions been protested against for having sung Ustaše songs, most notably Jasenovac i Gradiška Stara. People publicly displaying Ustaše affiliation at major Thompson concerts in Croatia and elsewhere is a frequent occurrence, leading to complaints from the Simon Wiesenthal Center.
In 2006, a video was leaked showing Croatian President Stipe Mesić giving a speech in Australia in the early 1990s, in which he said that the Croats had "won a great victory on April 10th" (the date of formation of the Independent State of Croatia in 1941), and that Croatia needed to apologize to no one for Jasenovac.
Revisionism in Croatian diaspora
In 2008, a restaurant in Melbourne, Australia, owned by people of Croatian descent held a celebration to honour the Ustaša leader Ante Pavelić. The event was an "outrageous affront both to his victims and to any persons of morality and conscience who oppose racism and genocide", Dr. Efraim Zuroff, of the Simon Wiesenthal Center, stated. According to local press reports, a large photograph of Pavelić was hung in the restaurant, T-shirts with his picture and that of two other commanders in the 1941–1945 Ustaše government were offered for sale at the bar, and the establishment of the Independent State of Croatia was celebrated. Zuroff noted this was not the first time that Croatian émigrés in Australia had openly defended Croat Nazi war criminals. "It is high time that the authorities in Australia find a way to take the necessary measures to stop such celebrations, which clearly constitute racist, ethnic, and anti-Semitic incitement against Serbs, Jews, and Gypsies".
Position of the Roman Catholic Church
For the duration of the war, the Vatican kept full diplomatic relations with the Independent State of Croatia and granted Pavelić an audience with its papal nuncio in the capital Zagreb, albeit not an official diplomatic meeting. The nuncio was briefed on the efforts of the Ustaše to convert ethnic-Serbs to Catholicism. Some former priests, mostly Franciscans, particularly in, but not limited to, Herzegovina and Bosnia, took part in the atrocities themselves. Miroslav Filipović was a Franciscan friar (from the Petrićevac monastery) who joined the Ustaše on 7 February 1942 in a brutal massacre of 2,730 Serbs of the nearby villages, including 500 children. He was reportedly subsequently dismissed from his order. He became the Chief Guard of the Jasenovac concentration camp where he was nicknamed "Fra Sotona" ("Friar Satan"). When he was hanged for war crimes, he wore his clerical garb, although some claim he had been defrocked.
The Ustaše had sent large amounts of gold that it had plundered from Serbian and Jewish property owners during World War II into Swiss bank accounts. Of a total of 350 million Swiss francs, about 150 million was seized by British troops; however, the remaining 200 million (ca. 47 million dollars) reached the Vatican. In October 1946, the American intelligence agency SSU alleged that these funds were still being held in the Vatican Bank. This matter is the crux of a recent class action suit against the Vatican Bank and other defendants.
The Jasenovac Memorial Museum reopened in November 2006 with a new exhibition designed by a Croatian architect, Helena Paver Njirić, and an Educational Center, designed by the firm Produkcija. The Memorial Museum features an interior of rubber-clad steel modules, video and projection screens, and glass cases displaying artifacts from the camp. Above the exhibition space, which is quite dark, is a field of glass panels inscribed with the names of the victims.
The New York City Parks Department, the Holocaust Park Committee and the Jasenovac Research Institute, with the help of then-Congressman Anthony Weiner (D-NY), established a public monument to the victims of Jasenovac in April 2005 (the sixtieth anniversary of the liberation of the camps.) The dedication ceremony was attended by ten Yugoslavian Holocaust survivors, as well as diplomats from Serbia, Bosnia and Israel. It remains the only public monument to Jasenovac victims outside the Balkans.
To commemorate the victims of the Kragujevac massacre, the whole of Šumarice, where the killings took place, was turned into a memorial park. There are several monuments there: the monument to the murdered schoolchildren and their teachers, the "Broken Wing" monument, the monument of pain and defiance and the monument "One Hundred for One", the monument of resistance and freedom. Serbian poet Desanka Maksimović wrote a poem about the massacre titled Krvava Bajka (A Bloody Fairy Tale).
Historians have had difficulty calculating and agreeing on the number of victims. The first figures to be offered by the state commission of Croatia ranged from around 500,000 to 600,000 people killed. The official estimate of the number of victims in Yugoslavia was 700,000; however, beginning in the 1990s, the Croatian side began suggesting substantially smaller numbers. The exact numbers continue to be a subject of great controversy and hot political dispute, with the Croatian government and Croatian institutions pushing for a much lower number even as recently as September 2009. The estimates vary due to lack of accurate records, the methods used for making estimates, and sometimes the political biases of the estimators. In some cases, entire families were exterminated, leaving no one to submit their names to the lists. On the other hand, it has been found that the lists include the names of people who died elsewhere, whose survival was not reported to the authorities, or who are counted more than once on the lists. The casualty figures for the whole of Yugoslavia range between a maximum of 1,700,000 and more conservative figures of 1,500,000 or one million.
Historical documentation sources
The documentation from the time of Jasenovac revolves around the different sides in the battle for Yugoslavia: the Germans, Italians and Ustaše on the one hand, and the Partisans and the Allies on the other. There are also sources originating from the documentation of the Ustaše themselves and of the Vatican. German generals issued reports of the number of victims as the war progressed. German military commanders gave different figures for the number of Serbs, Jews, and others killed by the Ustaše on the territory of the Independent State of Croatia. They circulated figures of 400,000 Serbs (Alexander Löhr); 350,000 Serbs (Lothar Rendulic); around 300,000 in 1943 (Edmund Glaise von Horstenau); "600,000–700,000 until March 1944" (Ernst Fick); and 700,000 (Massenbach).
Hermann Neubacher stated:
The recipe, received by the Ustaše leader and Poglavnik, the president of the Independent State of Croatia, Ante Pavelić, resembled genocidal intentions from some of the bloodiest religious wars: "A third must become Catholic, a third must leave the country, and a third must die!" This last point of the Ustaše's program was accomplished. When prominent Ustaše leaders claimed that they slaughtered a million Serbs (including babies, children, women and old men), that is, in my opinion, a boastful exaggeration. On the basis of the reports submitted to me, I believe that the number of defenseless victims slaughtered to be three-quarters of a million.
Italian soldiers, overwhelmed and disgusted by the atrocious slaughter, reported similar figures to their commanders. The Vatican's sources also cite similar figures, e.g. 350,000 ethnic Serbs slaughtered by the end of 1942 (Eugen Tisserant).
Vjekoslav "Maks" Luburić, the commander-in-chief of all the Croatian camps, announced the great "efficiency" of the Jasenovac camp at a ceremony as early as 9 October 1942. During the banquet which followed, he reported with pride, obviously intoxicated: "We have slaughtered here at Jasenovac more people than the Ottoman Empire was able to do during its occupation of Europe." Other Ustaše sources give other estimates: a circular of the Ustaše general headquarters that reads: "the concentration and labor camp in Jasenovac can receive an unlimited number of internees". In the same spirit, Miroslav Filipović-Majstorović, once captured by Yugoslav forces, admitted that during his three months of administration, 20,000 to 30,000 people had been killed. Since it became clear that his confession was an attempt to somewhat minimize the rate of crimes committed in Jasenovac, having, for an example, claimed to have personally killed 100 people, extremely understated, Filipović-Majstorović's figures are deemed to be lower than the true numbers, which some sources have estimated at 30,000-40,000.
A report of the National Committee of Croatia for the investigation of the crimes of the occupation forces and their collaborators, dated 15 November 1945, and commissioned by the new government of Yugoslavia under Josip Broz Tito, stated that 500,000–600,000 people were killed at the Jasenovac complex. These figures were cited by the researchers Israel Gutman and Menachem Shelach in the Encyclopedia of the Holocaust (1990) and by the Simon Wiesenthal Center. Mosa Pijade and Edvard Kardelj used this number in the war reparations meetings, and the proponents of these figures were subsequently accused of artificially inflating them for the purpose of obtaining war reparations. All in all, the state commission's report was the only public and official document on the number of victims during the 45 years of the second Yugoslavia.
The state's total war casualties of 1,700,000, as presented by Yugoslavia at the Paris Peace Treaties, were produced by a mathematics student, Vladeta Vučković, at the Federal Bureau of Statistics. He later admitted that his estimates included demographic losses (i.e., they factored in the estimated population increase), while actual losses would have been significantly lower. Vučković's estimates were rejected by Germany during war reparations talks.
Between 22 and 27 June 1964, exhumations of bodies and sampling studies were conducted at Jasenovac by Vida Brodar and Anton Pogačnik from Ljubljana University, and the Serbian anthropologist Srboljub Živanović from the University of Novi Sad. During the Yugoslav wars, Živanović published what he claimed were the full results of the studies, which he claimed had been suppressed by Tito's government to put less emphasis on the crimes of the Ustaše. According to Živanović, the research gave strong support to victim counts of more than 500,000, with estimates of 700,000–800,000 being realistic, stating that every mass grave contains 800 skeletons.
- The Jasenovac Memorial Area maintains a list of the names of 80,914 Jasenovac victims, including 45,923 Serbs, 16,045 Romanies, 12,765 Jews, 4,197 Croats, 1,113 Bosnian Muslims and 871 people of other ethnic backgrounds. The memorial estimates total deaths at 85,000 to 100,000.
- The Belgrade Museum of the Holocaust keeps a list of the names of 80,022 victims (mostly from Jasenovac), including approximately 52,000 Serbs, 16,000 Jews, 12,000 Croats and 10,000 Romanies.
- Antun Miletić, a researcher at the Military Archives in Belgrade, has collected data on Jasenovac since 1979. His list contains the names of 77,200 victims, of which 41,936 are Serbs.
- In 1998, the Bosniak Institute published SFR Yugoslavia's final List of War Victims from the Jasenovac Camp (created in 1992). The list contained the names of 49,602 victims at Jasenovac, including 26,170 Serbs, 8,121 Jews, 5,900 Croats, 1,471 Romanies, 787 Bosnian Muslims, 6,792 of unidentifiable ethnicity, and some listed simply as "others". Another list from that institution, naming victims that died between April and November 1944, lists 4,892 names.
Estimates by Holocaust institutions
- The Yad Vashem center states that more than 500,000 Serbs were murdered in Croatia, including those killed at Jasenovac, that 250,000 were expelled, and that another 200,000 were forced to convert to Catholicism. The Simon Wiesenthal Center cites the same figures.
Menachem Shelach and Israel Gutman put the number of victims at 600,000 in the Encyclopedia of the Holocaust (1990), of whom 20,000–25,000 were Jews. However, they mention only Jasenovac as the site where the murders took place, and note that most of the Croatian Jewish victims after August 1942 were deported to Auschwitz. By contrast, as of 2012 the United States Holocaust Memorial Museum estimates that the Ustaše regime murdered between 45,000 and 52,000 ethnic Serbs in Jasenovac between 1941 and 1945, and that during the period of Ustaše rule a total of between 320,000 and 340,000 ethnic Serbs were killed in Croatia or Bosnia.
In the 1980s, calculations were made by the Serb statistician Bogoljub Kočović and the Croat economist Vladimir Žerjavić, who argued that the total number of victims in Yugoslavia was less than the official estimate of 1,700,000; both concluded that the number was around one million. Žerjavić further calculated that the number of victims in the Independent State of Croatia was between 300,000 and 350,000, including 80,000 victims in Jasenovac and thousands of deaths in other camps and prisons.
However, these estimates have been dismissed as biased and unreliable, especially on the Serbian side. A mere 0.1% change in the (unknown) birth rate would shift the total by more than Žerjavić's claimed number of Serbs killed in Jasenovac (50,000), and his calculation has a deficiency rate of 30%. Žerjavić has been dismissed as a nationalist even by Kočović, and his estimate of the number of fatalities in the Bosnian War of the 1990s (300,000 killed), three times greater than ICTY data and Bosnian official postwar estimates (100,000 killed), sheds light on problems with his credibility. Some Croatian historians accused him of being a plagiarist and the "court statistician".
Serbian experts criticized these estimates as far too low, since the demographic calculations arbitrarily assumed that the growth rate for Serbs in Bosnia (which was absorbed by the Independent State of Croatia during the Second World War) equaled the total growth rate throughout the former Yugoslavia (1.1% at the time). According to Serbian sources, the actual growth rate in this region was 2.4% (1921–31) and 3.5% (1949–53). Critics consider this method very unreliable because there is no reliable data on total births during this period, yet the results depend strongly on the birth rate: a change of just 0.1% in the birth rate changes the victim count by 50,000. According to the censuses, the number of Serbs rose from 1,028,139 at the last prewar census (1931) to around 1,200,000 at the first postwar census (1948). In 1964 the Yugoslav Federal Bureau of Statistics created a list of World War II victims with 597,323 names and a deficiency estimated at 20–30%, which gives between 750,000 and 780,000 victims. Together with an estimated 200,000 killed collaborators and quislings, the total number would reach about one million. This list was declared a state secret in 1964 and only published in 1989.
After World War II, most of the remaining Ustaša went underground or fled to countries such as Australia, Canada, the United States and Germany, with the assistance of Roman Catholic clerics and grassroots supporters. Yugoslav President Marshal Josip Broz Tito never visited the sites where massacres of Serbs took place, particularly Jasenovac, as he sought to make the people of Yugoslavia forget the Ustaše's crimes in the name of "brotherhood and unity".
Israeli President Moshe Katsav visited Jasenovac in 2003. His successor, Shimon Peres, paid homage to the camp's victims when he visited Jasenovac on 25 July 2010 and laid a wreath at the memorial. Peres called the Ustaše's crimes "a demonstration of sheer sadism".
On 17 April 2011, in a commemoration ceremony, Croatian President Ivo Josipović warned that there were, "attempts to drastically reduce or decrease the number of Jasenovac victims", adding "faced with the devastating truth here that certain members of the Croatian people were capable of committing the cruelest of crimes, I want to say that all of us are responsible for the things that we do." At the same ceremony, then Croatian Prime Minister Jadranka Kosor said, "there is no excuse for the crimes and therefore the Croatian government decisively rejects and condemns every attempt at historical revisionism and rehabilitation of the fascist ideology, every form of totalitarianism, extremism and radicalism ... Pavelić's regime was a regime of evil, hatred and intolerance, in which people were abused and killed because of their race, religion, nationality, their political beliefs and because they were the others and were different."
- Ante Pavelić, leader of Croatia during the Second World War, shot by Blagoje Jovović, a Montenegrin Serb working for the Yugoslavian secret service, near Buenos Aires, Argentina on 9 April 1957. Pavelić later died of his injuries in a hospital in Madrid, Spain.
- Dido Kvaternik, considered the second most important person in Croatia after Ante Pavelić, died in a car accident along with his two daughters, in Argentina in 1962.
- Miroslav Filipović–Majstorović (born Tomislav Filipović), a Franciscan friar, reportedly expelled from the order, who was infamous for his commands of Jasenovac and Stara-Gradiška, was known as Fra Satana (Father Satan) for his cruelty. He was captured by the Yugoslav communist forces, tried and executed in 1946, wearing his clerical garb.
- Maks Luburić was the commander of the Ustaška Odbrana, or Ustaše Defense, thus being held responsible for all crimes committed under his supervision in Jasenovac, which he visited approximately two to three times per month. He fled to Spain, where he was assassinated in 1969 by a fellow Ustaša.
- Mile Budak, a Croatian politician, executed for war crimes and crimes against humanity on 7 June 1945.
- Dinko Šakić fled to Argentina, but was eventually extradited, tried and sentenced, in 1999, by Croatian authorities to 20 years in prison, dying in prison in 2008. His wife, Nada, was the sister of Maks Luburić.
- Petar Brzica was an Ustaša officer who, on the night of 29 August 1942, allegedly slaughtered over 1,360 people. Brzica's fellow Ustaše took part in that crime, as part of a competition of throat cutting. Brzica's post-war fate is unknown.
- Anti-Serb sentiment
- Catholic clergy involvement with the Ustaše
- Glina, Croatia
- Kragujevac massacre
- Hungarian occupation of Yugoslav territories
- Banat (1941–44)
- The Holocaust
- It is commonly known in Serbian simply as Genocid nad Srbima (Геноцид над Србима), and less commonly as Genocid nad Srbima u Drugom svetskom ratu (Геноцид над Србима у Другом светском рату), Genocid nad Srbima u NDH (Геноцид над Србима у НДХ), Ustaški genocid nad Srbima (Усташки геноцид над Србима), etc.
- Žerjavić, Vladimir (1993). Yugoslavia - Manipulations with the number of Second World War victims. Croatian Information Centre. ISBN 0-919817-32-7.
- "Žrtve licitiranja - Sahrana jednog mita, Bogoljub Kočović". NIN (in Serbian). 12 January 2006. Retrieved 8 May 2012.
- "Jasenovac". Jewishvirtuallibrary.org. Retrieved 22 April 2013.
- Binder, David (16 May 1991). "The Serbs and Croats: So Much in Common, Including Hate". The New York Times. Retrieved 16 January 2012.
- Nenad Antonijević (15 March 2005). "Albanski zločini nad Srbima na Kosovu i Metohiji u Drugom svetskom ratu - Nacistički genocid nad Srbima". Pravoslavlje #912. Politika A.D. ISSN 0555-0114. Retrieved 9 April 2012.
- Pavle Dželetović Ivanov (7 September 2003). "Zapisi o arbanaškim zločinima nad Srbima (11) - Džamija na zgarištu". Glas javnosti (in Serbian).
- Pavlowitch 2008, p. 34.
- "Serbian Genocide". http://combatgenocide.org/. Retrieved 5 September 2015. External link in
- MacDonald, David Bruce (2002). Balkan Holocausts?: Serbian and Croatian Victim Centered Propaganda and the War in Yugoslavia (1.udg. ed.). Manchester: Manchester University Press. p. 261. ISBN 978-0-7190-6467-8.
- Mylonas, Christos (2003). Serbian Orthodox Fundamentals: The Quest for an Eternal Identity. Budapest: Central European University Press. p. 115. ISBN 978-963-9241-61-9.
- Jonsson, David J. (2006). Islamic economics and the final jihad: the Muslim brotherhood to Leftist/Marxist - Islamist alliance. Xulon Press. p. 504. ISBN 978-1-59781-980-0.
- "Croatia" (PDF). Shoah Resource Center - Yad Vashem.
- "Jasenovac". United States Holocaust Memorial Museum. 2007. Retrieved 26 September 2007.
- "CROATIA: MYTH AND REALITY by C. Michael McAdams" (PDF). 16 August 1992. Retrieved 16 May 2015.
- Hoare 2007, pp. 20–24.
- Yahil 1987, pp. 349.
- Hory & Broszat 1964, pp. 13–38.
- Viktor Meier. Yugoslavia: a history of its demise. English edition. London, UK: Routledge, 1999, p. 125.
- Tomasevich (2001), pp. 351–52
- Bernd Jürgen Fischer (ed.). Balkan strongmen: dictators and authoritarian rulers of South Eastern Europe. Purdue University Press, 2007. p. 207.
- Fischer 2007, p. 207.
- Fischer 2007, pp. 207–08.
- Butić-Jelić, Fikreta. Ustaše i Nezavisna Država Hrvatska 1941–1945. Liber, 1977
- Djilas, p. 114.
- Fischer 2007
- Tomasevich (2001), p. 466
- "Deciphering the Balkan Enigma: Using History to Inform Policy" (PDF). Retrieved 3 June 2011.
- State-commission, pp. 30, 40–41
- Secanja jevreja na logor Jasenovac, pp. 40–41, 98, 131, 171
- See: Encyclopedia of the holocaust, "Jasenovac"
- State-commission, pp. 9–11, 46-47
- "Land/Forstwirtschaft: Garbenmesser". Hr-online.de.
- Taborišče smrti--Jasenovac by Nikola Nikolić (author), Jože Zupančić (translator), Založba "Borec", Ljubljana 1969
The knife described on page 72: "At the end of the knife, next to the copper plate, the words 'Grafrath gebr. Solingen' were engraved in sunken letters, and the German firm 'Graeviso' was embossed in relief on the leather."
Picture of the knife with description on page 73: "A specially made knife used by the Ustaše in mass slaughters. They called it 'kotač' (wheel), and it was made by the German firm 'Graeviso'."
- State-commission, pp. 13, 25, 27, 56-57, 58-60
- "Donja Gradina Memorial Site".
- State-commission, pp. 38–39
- The Glass Half Full, by Alan Greenhalgh; ISBN 0-9775844-1-0, p. 68
- Howard Blum, Wanted!: The Search for Nazis in America, Quadrangle/New York Times Book Co., 1977.
- The Holocaust Research Project
- "Timebase Multimedia Chronography (TM) - Timebase 1945". Humanitas-international.org. Retrieved 15 May 2013.
- The Destruction of the European Jews by Raul Hilberg, Yale University Press, 2003; ISBN 0-300-09557-0, 9780300095579, page 760
- Koncentracioni logor Jasenovac 1941–1945: dokumenta By Antun Miletić, Goran Miletić, Dušan M. Obradović, Mile Simić, Natalija Matić Narodna knjiga, Beograd, 1986, pp. 766, 921
- "Zlocini Okupatora Nijhovih Pomagaca Harvatskoj Protiv Jevrija", pp. 144–45
- Shelach, p. 196, and in "Zločini fašističkih okupatora i njihovih pomagača protiv Jevreja u Jugoslaviji", by Zdenko Levental, Savez jevrejskih opština Jugoslavije, Beograd 1952, pp. 144–145
- Mirko Persen, "Ustaski Logori", p. 105
- Secanja jevreja na logor Jasenovac, pp. 40–41, 58, 76, 151
- Shelach, p. 196–197
- Menachem Shelach (ed.), History of the Holocaust: Yugoslavia, p. 162
- Avro Manhattan, The Vatican's Holocaust
- War of Words: Washington Tackles the Yugoslav Conflict by Danielle S. Sremac, Praeger (30 October 1999); ISBN 0-275-96609-7/ISBN 978-0-275-96609-6, pp. 38–39
- "Concentration Camp Listing". Jewish Virtual Library. Retrieved 25 September 2010.
- Ramet (2006), p. 116
- "Numbers of victims at Jadovno victims association". Jadovno.com. Retrieved 15 May 2013.
- Pyle, Christopher H.; Extradition, politics, and human rights; Temple University Press, 2001; ISBN 1-56639-823-1; p. 132.
- Lukajić, . "Fratri i Ustase Kolju".
[interview with Borislav Seva] "they threw Rade Zrnic into the brick factory fires alive!"
- State-commission, pp. 14, 27, 31, 42-43, 70
- Paris 2011, p. 132.
Friday, March 31, 2006
Ultraslow ridges hold new clues to crust's formation
At the top of the world in the late summer of 2001, the U.S. Coast Guard's icebreaker Healy carved a slow path through the ice-covered Arctic Ocean. On board, marine geologist Henry Dick sent dredge after dredge through the ice to the seafloor, searching for telltale rocks that would help shed light on how Earth's crust forms. "People said, 'You'll never get a single rock off the seafloor,'" Dick says. "They said, 'You can't dredge in the ice.'" But in fact, Dick's team collected more than 200 rocks—many of which turned out to be pieces of Earth's mantle.
Under the ice and 2 kilometers of water was a 1,800-km-long underwater mountain range known as the Gakkel Ridge. The Healy's expedition, conducted in tandem with the German icebreaker Polarstern, was the first exploration to that Arctic ridge to attempt to collect geological samples.
The surprising discovery of mantle rocks indicated that Gakkel Ridge is one of only two places known on the planet where the tectonic plates that make up Earth's hard outer crust slide apart and expose large slabs of the mantle on the seafloor. That mantle is normally buried under 6 km of crustal rock.
The other site, the 8,800-km-long Southwest Indian Ridge (SWIR), is on the far side of the world. Like the Gakkel Ridge, the SWIR is utterly remote. It's located beneath treacherous high seas.
Oceanographers are only now beginning to explore these areas in detail. They have already made surprising geological finds, including the exposed mantle. They've also uncovered evidence at both ridges of hydrothermal vents, fissures in the seafloor through which circulating, magma-heated seawater escapes. Researchers say that these two ridges may represent a new class of tectonic boundary, called an ultraslow-spreading ridge. The finding offers scientists the chance to explore new ideas about how Earth's crust forms and to study the rich ecosystems spawned by the vents.
Read more about it at Science News online.
Voters have an opportunity this spring to pledge support for water quality protection on the Island - and in particular for the recommendations of the Massachusetts Estuaries Project, a landmark study that will provide critical tools for managing the watersheds around the Vineyard's coastal ponds.
"To me the coastal ponds are probably the one most critical environmental feature on Martha's Vineyard and it drives the economy. I think that is why people come here," said Bruce Rosinoff, a coordinator for the estuaries project, from Edgartown.
The project is a six-year collaboration between the state Department of Environmental Protection and the School of Marine Science and Technology at the University of Massachusetts at Dartmouth. It is using hard science and state-of-the-art technology to analyze the health and nutrient carrying capacity of virtually all the estuaries in southeastern Massachusetts.
On the Island, Sengekontacket Pond, Edgartown Great Pond, Tisbury Great Pond, Lagoon Pond and Lake Tashmoo are all enrolled in the study. The first reports are due later this year, and among them will be one addressing the Edgartown Great Pond.
At the special town meeting Tuesday night, Tisbury voters unanimously approved a nonbinding resolution to give careful consideration to the results and recommendations of the estuaries project, and to work with other towns to preserve and restore the quality of the Island's ponds and waterways.
Read the rest here.
Gulf summit ends with a discussion of harmful algae
By Brandi Dean Caller-Times
March 31, 2006
When red tide bloomed in Corpus Christi last year, no one knew what caused it, how long it would last or how to get rid of it.
Now an expert on the microscopic algae that kills fish, turns water red and wreaks havoc with respiratory systems has added one more item to the list of unknowns: whether it's coming back.
"We don't have any good way of predicting it from year to year," said Tracy Villareal, a research associate professor at the University of Texas Marine Science Institute.
Villareal, who spoke on harmful algal blooms as a panelist at the State of the Gulf of Mexico Summit on Thursday, said that while he believes science is moving in the direction of understanding red tide, it's slow going. The three-day event, sponsored by Texas A&M University-Corpus Christi's Harte Research Institute for Gulf of Mexico Studies, ended Thursday.
"This is a difficult funding area for oceanography at the moment," he said. "We're talking about a very expensive science. The ocean is very large and we're very small."
One thing he could say, however, is that red tide is increasing in the number of blooms. In the past 15 years, Villareal said more blooms have been reported than in the previous 50. Some of it may be attributable to better reporting, but probably not all of it.
"My suspicion is we're not going to see fewer of them," he said.
But that's not the only thing the Gulf Coast has to be aware of, in terms of harmful algae. There's also Ciguatera, which is the most likely of algal blooms to cause problems for people. Villareal said that between 50,000 and 500,000 people are affected by it every year - the span is large because most doctors wouldn't recognize its many and varied symptoms. There have been reports of the algae in Texas, Villareal said, and it also may be getting worse. The algae lives on reefs, so as humans build more artificial reefs - such as those created by oil rigs - it may spread.
Nancy Rabalais, professor at the Louisiana Universities Marine Consortium, added dead zones to the list of Gulf Coast menaces. Dead zones are areas of the ocean with little oxygen. Rabalais said they are caused by nutrients - particularly nitrogen - that travel down the Mississippi River and into the Gulf. The nutrients eventually become food for bacteria that use up the oxygen, leaving little for fish and shrimp.
Rabalais said one way of decreasing the dead zone is to decrease the use of nitrogen in fertilizer, but she found one other effective method: hurricanes. The storms stir up the water and redistribute oxygen.
"But that's not really the solution I'm advocating," Rabalais said.
The cause of the oceans' growing acidification is the ever-increasing level of CO2 in the atmosphere. And as well as devastating marine ecosystems, the knock-on effects of increasing acidification include harm to major economic activities such as tourism and fishing.
These are the conclusions of the first review of the state-of-knowledge about the acidification of the oceans. The report was produced by an international group of scientists, commissioned by the Royal Society, the UK's national academy of science.
The oceans are naturally alkaline but, since the industrial revolution, the sea surfaces have been turning ever more acidic. The report says that if CO2 emissions continue at current rates then by 2100 the pH of the sea will fall by as much as 0.5 units from its current level of pH 8.2. The pH scale runs from 0 (acidic) to 14 (alkaline), with 7 being neutral. And in the case of the oceans, the change would be effectively irreversible.
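Because pH is a logarithmic scale, a 0.5-unit drop is a much bigger change than it sounds. A quick back-of-envelope check (my own illustration, not from the Royal Society report):

```python
# pH is -log10 of the hydrogen-ion concentration, so a drop of
# 0.5 units is a multiplicative change, not an additive one.
ph_now, ph_2100 = 8.2, 8.2 - 0.5

h_now = 10 ** -ph_now    # current H+ concentration (mol/L)
h_2100 = 10 ** -ph_2100  # projected H+ concentration (mol/L)

factor = h_2100 / h_now
print(f"H+ concentration would rise by a factor of {factor:.2f}")
# factor = 10**0.5, roughly 3.16 - about a tripling of acidity
```

In other words, "0.5 units" means seawater roughly three times more acidic than today.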
"It will take many thousands of years for natural processes to return the oceans to their pre-industrial state," says John Raven, at the University of Dundee, UK.
Raven and his colleagues looked at possible ways of neutralising the growing acidity, such as dumping chalk - a highly alkaline substance - into the sea, but all their ideas carried major problems of their own. "The only way to minimise the long-term consequences is to decrease CO2 emissions," Raven says.
The sea life expected to be worst hit include organisms that produce calcium carbonate shells, as these are harder to form in acidic waters. That means that corals, crustaceans, molluscs and certain plankton species will be at risk.
"It would not kill penguins, orca and other big animals directly, but it would affect the food chain with potentially damaging effects on larger animals," Raven explains.
Coral reefs face a three-pronged attack, the report says. There is global warming and coastal pollution, and now acidification. Raven says we can expect to see degradation of coral reefs in the tropics.
And it does not stop there. There is an important group of photosynthetic plankton called coccolithophores that grow calcium carbonate shells and form giant "blooms" in spring and summer before sinking to the bottom of the ocean.
But the increasing acidification will hinder their ability to grow, meaning they remove less carbon from the atmosphere. This in turn will result in more carbonic acid being formed at the seas' surface.
"Calcium carbonate helps organisms to sink and enhances the biological pump," says Andrew Watson, an environmental biologist at the University of East Anglia, UK. The sea has absorbed about half of the CO2 produced by humans in the last 200 years and currently takes up one tonne of the gas each year for every person on the planet. But if the water becomes too acidic, the pump will not work and the ability of the oceans to mop up CO2 will fall, he says.
"Most climate scientists think the Kyoto targets themselves are wholly inadequate," Watson adds. "We need a sharp decline in CO2 emissions, down to half of today's levels."
SOURCE - New Scientist
Thursday, March 30, 2006
Galway, Ireland [RenewableEnergyAccess.com] An initiative to open a wave energy test site one and a half miles off the coast of Spiddal, County Galway, is under way with the arrival of the first wave energy generator, Wavebob, which has arrived at Galway Docks. Made possible by the Marine Institute and Sustainable Energy Ireland, the 37-hectare Galway Bay test site will be open for engineers to field-test other prototype ocean-energy generators as well, all in the interest of harnessing the power of the Atlantic Ocean.
"The most energetic waves in the world are located off the West coast of Ireland," said Peter Heffernan, Marine Institute, CEO. "The technology to harness the power of the ocean is only just emerging and Ireland has the chance to become a market leader in this sector."
Wavebob will test a quarter-scale prototype, which is hoped to provide the most accurate evidence to date for the cost and performance potential for the device. Wavebob has already gone through a rigorous path of theoretical modeling followed by small-scale prototype testing in wave tanks. Some of this testing has been performed at the Hydraulics and Maritime Research Center, University College Cork.
Both agencies have been working closely to develop a research and development strategy for ocean energy technology in Ireland. This strategy will define a phased approach toward product development complemented by an outline of the investment levels required to sustain the development of an ocean energy industry in Ireland. The Marine Institute and Sustainable Energy Ireland (SEI) have invested Euro 300,000 [USD$365,000] in university-based research and a further Euro 850,000 [>USD$1 million] in industry-based research of ocean energy technology.
It is expected that the implementation of the Ocean Energy Development Strategy will see a progressive increase in the range and scale of research and innovation investment. In 2004, Teresa Pontes of Portugal, an ocean energy expert who spoke at an EurOcean marine science event in Galway, said that up to 20 million homes in Europe could be powered by clean, renewable energy from the sea. She estimated that by harnessing energy from waves and ocean currents, Europe could produce around 200 terawatt (200 million megawatt) hours per year of electrical power.
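A quick sanity check of Pontes's two figures, 200 TWh per year and 20 million homes (my own arithmetic, not from the article):

```python
# Pontes's estimate: 200 TWh/yr of wave and tidal-current power
# could supply 20 million European homes.
total_twh = 200
homes = 20_000_000

kwh_per_home = total_twh * 1e9 / homes  # 1 TWh = 1e9 kWh
print(f"Implied consumption: {kwh_per_home:,.0f} kWh per home per year")
```

The implied 10,000 kWh per household per year is on the high side but the right order of magnitude, so the two numbers are at least internally consistent.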
The Marine Institute, which hosted the event during Ireland's EU presidency, is mandated to spearhead all aspects of marine R&D leading to the sustainable development of Ireland's 220 million acres of underwater territory and has also drafted a comprehensive marine research and development strategy for the next seven years.
Though pastoral when British explorer John Hanning Speke became the first European to explore its shores, today, Lake Victoria in East Africa is one of the most populous regions in the world. The lake provides food, transport, and electricity to more than 30 million people, but its resources are limited. Despite its impressive size—it’s the third-largest lake in the world—Lake Victoria is shallow, resembling, in Speke’s words, “the temporary deposit of a vast flood overspreading a large flat surface.” Until the Owens Falls Dam began to regulate water levels from the lake’s only outlet in 1954, the amount of water in the lake jumped drastically from year to year depending on rainfall. Though water levels continued to vary after the dam was built, they remained more than 11.9 meters above a gauge in Jinja, Uganda. But in early 2006, the Jason-1 satellite revealed that Lake Victoria had reached lows not seen since well before the dam was built.
Read more here.
This is major. Humans have already fucked up Lake Victoria by introducing a food and game fish, the Nile perch, which has decimated the native fish populations. Increased runoff and nutrient loads are also creating hazards for the lake's cichlids, which rely on visual cues to spawn, among many other problems facing the lake. This is a sad, sad time for the African Rift Lakes...
Tropical Cyclone Glenda formed off the northwestern coast of Australia on March 27, 2006, and quickly built into a powerful and well-defined cyclone during the next day. The storm has whipped up surf along the coastline of Western Australia's Pilbara region and brought powerful winds and rain to the islands off the Kimberley coast. As of March 29, 2006, the storm had reached Category 5 status, the maximum rating possible for a cyclone.
This photo-like image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on March 29, 2006, at 10:40 a.m. local time (02:40 UTC). It shows Cyclone Glenda as a well-developed storm, sitting 525 kilometers (330 miles) west of Broome. Clouds from the storm covered most of the northwest coastline of Western Australia. Sustained peak winds in the storm system were roughly 220 kilometers per hour (140 miles per hour) at this time. The storm's spiraling clouds appear as a nearly solid white disk, but in several places it appears as though some clouds are "boiling" up above the rest.
Predictions as of 2:55 a.m. Australian Western Standard Time on March 30 were that the storm would cross the coast between Exmouth and Karatha on Thursday afternoon or night as a very dangerous storm. The Australian Bureau of Meteorology predicted that wind speeds near the storm center could reach 265 kilometers per hour (165 miles per hour) as the storm comes ashore. Many coastal communities were being evacuated by State Emergency Services ahead of the storm.
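The article mixes metric and imperial wind speeds; the rounded mph figures check out (a trivial conversion sketch of my own):

```python
KM_PER_MILE = 1.609344  # exact international-mile definition

def kmh_to_mph(kmh: float) -> float:
    """Convert a wind speed from km/h to mph."""
    return kmh / KM_PER_MILE

for kmh in (220, 265):
    print(f"{kmh} km/h is about {kmh_to_mph(kmh):.0f} mph")
# 220 km/h is ~137 mph and 265 km/h is ~165 mph; the article
# rounds these to 140 and 165 mph respectively.
```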
That is the second massive cyclone of the season to hit Australia... anyone still think global warming has nothing to do with it???
HONOLULU, Hawaii (29 Mar 2006) -- Warning signs to keep out of the water were posted Wednesday along part of Waikiki's world-famous beaches because of high bacteria levels from a massive sewage spill.
Ocean currents shifted toward Oahu's south shore beaches, carrying millions of gallons of raw sewage that was diverted into a canal from a broken pipe and into the ocean.
"What we feared has happened. The bacteria has kind of spread through areas of Waikiki," state Health Department spokesman Kurt Tsue said.
Environmentalists and residents fear long-term damage to the fragile coral reef and other marine life in the area.
"This is absolutely disgusting that here at the doorstep of our economic engine we have untreated sewage on the beaches. This should have never happened," said Jeff Mikulina, director of the Sierra Club, which filed a lawsuit in 2004 alleging deficiencies in the city's wastewater system.
The normally packed beaches remained open but were mostly empty. Rainy weather kept many tourists away, and those on the beach were greeted by signs that warned against swimming or fishing, saying, "Sewage contaminated water. Exposure to water may cause illness."
The city was monitoring the water in a several-mile stretch from Diamond Head to near downtown Honolulu. Bacteria levels near the beaches were not at threatening levels, but enough to put out a warning, Tsue said.
The city was trying to determine how much untreated sewage has been diverted into the canal, which empties into the ocean between two of Hawaii's most famous beach areas, Waikiki and Ala Moana.
It could exceed 50 million gallons, considering an average 15 million gallons of wastewater a day flows out of Waikiki.
The city has been using pumps around the clock since the sewer line broke early Friday. Repairs on the 42-inch sewer main were completed Wednesday and the diversion into the canal was finally stopped.
The pipe, installed in 1964, cracked after heavy rain flooded the aging sewer system.
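A rough check of the 50-million-gallon estimate against the quoted daily flow (my own arithmetic, not the city's):

```python
flow_per_day = 15e6    # average Waikiki wastewater flow (gallons/day, from the article)
spill_estimate = 50e6  # city's "could exceed" spill figure (gallons)

days_of_flow = spill_estimate / flow_per_day
print(f"50 million gallons is about {days_of_flow:.1f} days of Waikiki's flow")
# The diversion ran from early Friday until Wednesday, roughly five days,
# so the true total could plausibly exceed the 50-million-gallon figure.
```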
Tuesday, March 28, 2006
Two thousand and six is emerging as the year Americans finally wake up to the reality of global warming. Of course, E has been hammering away at the issue for six years or more, but now it has momentum, with the release of several new books and a Time magazine cover story ("Be Worried, Be Very Worried") April 3.
An ABC/Time/Stanford University poll accompanying the article confirmed that Americans are finally focusing on the problem. Today, 85 percent of Americans believe that global warming is occurring, versus 13 percent who don't. Sixty percent of respondents admit to worry about it either "a great deal" or "a good amount." Sixty-eight percent think the federal government should do more to combat it. (It's doing virtually nothing now.)
HERE ARE SOME OF THE EXAMPLES LISTED IN THE ARTICLE, WHICH YOU SHOULD READ HERE:
The California Coast: Migrating Species
New York: The Virus Specter
Florida: Dying Coral
Pacific Northwest: The Incredible Shrinking Glaciers
Antigua: Stronger Storms
ALL I CAN SAY IS, IT'S ABOUT TIME THE AMERICAN PUBLIC STARTED BECOMING MORE INFORMED AND WISE TO THE FACT THAT GLOBAL WARMING IS OCCURRING
A major initiative has been launched to conserve the fragile wildlife of the islands of the Pacific.
It includes a commitment to protect nearly a third of coastal waters and a fifth of the land area of Micronesia.
The announcement was made on the fringes of a UN conference on the protection of the world's biodiversity.
Scientists have warned that the variety of life on Earth is declining at a rate unprecedented since the demise of the dinosaurs.
In a separate move, one of the world's largest marine parks will be created in the Pacific island nation of Kiribati to protect an extraordinary untouched coral ecosystem.
Islands contain a disproportionate number of the world's species, as their isolation over millions of years has resulted in separate evolutionary pathways.
For example, the exotic white-crested Kagu (Rhynochetos jubatus) is the sole member of an entire bird family, and lives only on the island of New Caledonia.
Some 16% of the world's known plant species have evolved on islands and their coastal waters contain half of the planet's variety of marine life.
This isolation makes the wildlife uniquely vulnerable to extinction as environmental changes in just a small area can easily wipe out entire species.
Half of all known extinctions have involved island species, including the notorious case of the dodo on Mauritius.
Current threats include deforestation, over-fishing and the degradation of coral reefs, 30% of which are already severely damaged.
Future of fishing
The initiative to increase protection of Pacific islands was launched by the president of the tiny nation of Palau, an island group with a human population of barely 20,000.
Its aim is to provide effective protection by 2020 of 30% of the inshore marine life of the ocean region of Micronesia, and of 20% of land ecosystems.
At the launch of the programme in the Brazilian city of Curitiba, a total of $18m was pledged towards conservation in Micronesia, coming from a combination of government funds, conservation organisations and international finance institutions.
The new marine protected area in Kiribati will cover an area twice the size of Portugal, and will heavily restrict human activities in the Phoenix Islands, a group of eight coral atolls between Hawaii and Fiji.
They are nearly uninhabited, and have stunned conservation scientists with an extraordinary variety of unique wildlife including 120 species of coral and more than 500 fish, some new to science.
In addition, it is an important stopping point for migrating birds and sea turtles.
While the Phoenix Islands are still in a remarkably pristine condition, the creation of the new protected area is designed to prevent future damage from over-fishing and to offset the impact of climate change.
This will involve setting up an endowment fund to compensate the government of Kiribati for revenue it could have got from the issuing of commercial fishing licences, and also to finance professional management of the wildlife.
It is hoped that by protecting coral ecosystems, the long-term future for small-scale fishing can be secured for people in the region, as the reefs provide important spawning grounds.
The island initiative is being contrasted with the slow pace of global efforts to address the crisis of biodiversity loss, with the government negotiations at this UN convention getting bogged down in arguments over finance and the rules for sharing profits from products such as drugs obtained through traditional knowledge of plants.
There has been concern from conservation organisations that while a growing proportion of land-based ecosystems are at least officially protected, the process of designating ocean areas for conservation has barely begun.
Russ Mittermeier, of the group Conservation International, which is helping to sponsor the Phoenix Islands protection scheme, said: "This is a major milestone for marine conservation efforts in the Pacific and for island biodiversity."
"The Republic of Kiribati has shown unprecedented vision for long-term conservation of its precious marine biodiversity."
Research from Newcastle University for the British Government's Department for Environment, Food and Rural Affairs (DEFRA) indicates that environmentalists' proposals to close large areas of the North Sea to fishing are misguided.
Marine protected areas (MPAs) would need to be tens of thousands of square miles in size – at least as big as the size of Wales – and be established for decades to restore levels of cod and haddock, says the report.
Moreover, creating large MPAs would be likely to intensify fishing in the waters left open for business, so further measures to reduce activity would have to be brought in.
However, the report's authors suggest that these 'drastic' measures are unlikely to be feasible and would require a significant policy shift for them to be implemented.
They also acknowledge that there is an 'information deficit' regarding the costs and benefits of MPAs, particularly in the case of the North Sea, and call for more research.
Environmentalists and public bodies such as the Royal Commission on Environmental Pollution are lobbying the British Government to introduce MPAs in parts of the North Sea to conserve marine life and restore fish stocks.
A team of marine ecologists from the University of Newcastle upon Tyne were asked by DEFRA to assess likely effects of MPAs in UK waters.
The report highlights that many MPA advocates are basing their opinions on scientific evidence garnered from small, conservation-oriented MPAs largely in tropical waters.
Although the Newcastle team acknowledges that MPAs have brought many benefits to the tropics and elsewhere, it stresses this experience cannot be applied to the North Sea, which possesses very different habitats and species.
According to the report, small MPAs have conservation and localised fishery benefits in the UK, which is good news for shellfish and finfish (e.g. scallops and lobsters).
The MPAs will have to be very large to achieve recovery of North Sea cod stocks, though.
"Evidence suggests closing off small areas of the ocean won't deliver results with regard to highly mobile species like cod and haddock," said Professor Nicholas Polunin of Newcastle University's School of Marine Science and Technology.
"You need to create bigger protected areas and enforce them for several decades if you are to see a significant, lasting effect on stocks, which are massively depleted to historically low levels.
"However, this would raise the problem of intensive fishing activity in areas that are left open and further fishing restrictions would have to be brought in to address this."
This article brings up valid points. Many of the MPAs in place today are located in tropical waters, especially around reefs and the like. These fish are often not highly migratory, and therefore MPAs work. But for larger, coldwater migratory species, small no-take zones may or may not help. I am not suggesting they shouldn't try, because something needs to be done. Cod is, after all, the fish that changed the world, and it needs to be saved. Speaking of which, you should read the book "Cod: A Biography of the Fish That Changed the World" by Kurlansky, a good read.
DURHAM, New Hampshire, March 28, 2006 (ENS) - Around the world, seagrass beds are in decline, says a scientist who has been studying the shallow water ecosystems for decades. As these underwater meadows disappear, so do commercially valuable shellfish and fish, waterfowl and other wildlife, water quality, and erosion prevention.
Frederick Short, research professor of natural resources and marine science at the University of New Hampshire, compares seagrass beds to forests on the ocean floor.
From the Hudson Bay, where the Cree Nation enlisted him to transplant their diminishing eelgrass beds, to the Pacific Island of Palau, Short has found the same thing.
"Almost everywhere we start monitoring seagrass, it’s declining," he says. While conclusive global results are not yet available, Short believes human impact is responsible for the decline.
Short, who founded the global monitoring program SeagrassNet in 2001, has been studying eelgrass, a type of seagrass found in the Northern Atlantic, for more than 20 years.
While he still conducts research at the University of New Hampshire’s Jackson Estuarine Laboratory on the Great Bay Estuary in Durham, he also collaborates with teams of researchers to monitor seagrass health at 45 sites in 17 countries worldwide.
Among the most productive plant communities on the planet, seagrass beds serve as protective nurseries for juvenile fish and shellfish, a habitat for many marine species, and a feeding ground for predatory fish, waterfowl and large sea creatures like manatees and sea turtles.
The root and rhizome system of these flowering plants stabilizes sediments, protecting the coastline from currents and weather-related erosion. Seagrass is an effective filter of nutrients and particulates, and it is the basis of a detrital food chain that feeds fish and shellfish.
At a state park in Malaysia, SeagrassNet has charted a decline since 2001 at both a "pristine" site and a less protected site.
Satellite imaging showed researchers that the impact was not due to a global force like climate change, but rather to on-shore logging that had increased the level of water-borne sediments at both sites, decreasing light reaching the bottom, where seagrasses grow.
Short and his SeagrassNet colleagues have not ruled out global climate change as a factor in the decline of seagrass beds. But the reasons for seagrass declines appear to be more localized. "Human pollution of the water has been the biggest issue," he says.
In remote areas of the Hudson and James Bays in sub-Arctic Canada, where members of the Cree Nation noticed their seagrass beds diminishing, Short observed that the beds were in the plume of fresh water released from a nearby Hydro-Quebec power plant. The fresh water influx decreased the salinity so much that the seagrass could no longer survive.
When seagrass beds disappear, Short says, the impact is major. A disease outbreak in the 1930s wiped out 90 percent of eelgrass in the North Atlantic. The scallop fishery in the mid-Atlantic disappeared, says Short, and "it’s never really come back."
In Thailand, where SeagrassNet researchers have begun investigating the impact of the December 2004 tsunami on seagrass, the beds provide local fishers with significant shellfish. "If the seagrass beds disappear, so do the people’s protein sources," says Short.
His work in Thailand highlights the reason for the worldwide monitoring program. Prior to SeagrassNet, little was known about seagrass in many locations around the world. With no baseline, assessing the impact of a disturbance like the tsunami is difficult.
Short is adding new sites to SeagrassNet around the world and, in New England, researching effective ways of restoring eelgrass to areas where water quality has improved.
A site selection model he has developed helps researchers determine areas that are optimal for restoring diminished eelgrass beds with sod-like patches.
As SeagrassNet researchers input their data into an online database, Short is now working on data analysis from the first five years of SeagrassNet monitoring.
At the same time, he will continue to add new sites to the global monitoring network. "It’s growing just as fast as I can grow it," he says.
For more information visit: http://www.seagrassnet.org
This is of particular importance to me because I have been working with seagrasses for the last three years. I am also going back to school and will be working with eelgrass on Long Island. This is a very important species inasmuch as many, many commercially and recreationally important finfish and shellfish species rely on seagrasses as a refuge or for food during some part of their life cycle. We need to protect these vital ecosystems...
Monday, March 27, 2006
By Juliet Eilperin, The Washington Post | March 27, 2006
Highly mobile fishing fleets are exploiting the sea's resources at an unsustainable rate, according to a new paper published Friday by more than a dozen international researchers in the journal Science.
The paper, which looks at how "roving bandits" swoop in and plunder fisheries at a rapid rate, examines how some fish populations have collapsed within a matter of years. In Maine, the sea urchin became a popular commodity in Japanese sushi markets in the mid-1980s: after peaking in 1993, the catches declined precipitously.
The paper, authored by 15 Canadian, Australian, US, Swedish, and Dutch ecologists, social scientists, and resource economists, concludes that even the Great Barrier Reef Marine Park, the largest marine protected area in the world, "is too small to fully maintain stocks of marine mammals, turtles and sharks that migrate across its boundaries."
Dalhousie University professor Boris Worm, one of the co-authors, said "existing marine protected areas are too small, too few and too far apart to prevent the tragedy of the oceans, which is arising due to the unbridled demand for seafood."
The study cited new export demands from the restaurant and aquarium trades, more sophisticated fishing technology, and rapid air transport of fish.
Newswise — Over the past three years, two distinguished panels - the U.S. Commission on Ocean Policy and the Pew Oceans Commission - released major reports that found our oceans and coasts in serious trouble. Like the 9/11 Commission did for homeland security, the ocean commissions proposed detailed recommendations to avert a crisis. The crisis in our oceans, however, is getting worse, not better, says a Florida oceanographer.
“So far, those who could make a big difference have ignored most of the recommendations,” said Frank Muller-Karger, a professor at the University of South Florida College of Marine Science who served on the 16-member U.S. Commission on Ocean Policy. “Inaction continues to impact our health, economy and jobs. We continue to be short-sighted and manage our ocean resources by crisis rather than by conscience.”
According to Muller-Karger, the nation's marine territory is at risk, suffering from increasing pollution, overfishing and problems related to overdevelopment.
“Federal waters are like federal lands - whatever is in them belongs to all of us, not to the government, or some company,” charges Muller-Karger. “Our goal should be to hold our government accountable for managing our resources properly, and doing it in a way that does not deplete them in the lifetime of our generation. To me, this means that we need an ecosystem-based management strategy.”
Muller-Karger advocates for the need to recognize that water, air and living and non-living resources do not follow - or live by - political boundaries.
”The oceans are an interconnected web of animals, plants, and people living across a complex geography,” he explains. “A change in one area sends a ripple that affects everything else in the system.”
Members of both commissions created the Joint Ocean Commission Initiative, which released an Ocean Policy Report Card evaluating efforts made by the administration and Congress.
"The results are discouraging," he says. "The average grade was a barely passing D+."
One serious problem, says Muller-Karger, is that we have upwards of 14 federal agencies charged with ocean issues. These agencies are, in turn, overseen by more than 60 congressional committees and subcommittees.
“There is too much duplication, too little coordination and too little funding," he charges.
Moving in the right direction doesn’t have to be very expensive. The U.S. Commission on Ocean Policy carefully estimated the costs of each of its recommendations. They concluded that an Ocean Policy Trust Fund of between two and four billion dollars per year should be established to better support management of our oceans and coasts.
“No one, as of yet, is talking about such a trust fund, which leads to an “F” on the report card for lack of funding,” asserts Muller-Karger. “This is a modest investment compared to the ultimate benefit of protecting our property, life, and coastal resources. It is an investment that will stimulate our coastal economies.”
Muller-Karger also suggests that all is not lost. Through public awareness and public activity, through events such as the upcoming “Oceans Day,” which Floridians will celebrate April 19, information about the importance of our ocean resources can be communicated to policy makers.
Historically, Oceans Day has been a day for impressing lawmakers with the importance of preserving the natural integrity of the oceans and taking positive steps to repair what has imperiled them, especially the harm done by neglect and abuse at the hands of humankind. Eagerly supported by students, staff and faculty at the University of South Florida’s College of Marine Science, the Florida Ocean Alliance’s (http://www.floridaoceanalliance.org) celebration of Oceans Day 2006 in Tallahassee promises to highlight the plight of our oceans as well as celebrate their beauty and emphasize their importance as an irreplaceable resource.
So today we needed to test our Mako engine to make sure it was running fine before our six-week sampling period. We launched out of the Biscayne National Park ranger station and shot across the bay toward Elliot Key, but we decided to check out Boca Chita Key instead. It's a small key that has natural inlets running along both its north and south sides and the ocean on its east side (of course). Anyway, it's a cool little place where you can dock your boat and camp out overnight if you want. Like I already said, it is small, and you can see the whole thing in less than an hour. But if you have the time, you can go snorkeling around the island; there are lots of grass beds, it's not too deep, and there are reef formations on the ocean side (although I haven't snorkeled out there). We saw some pretty cool stuff today, including a big scrawled cowfish in inches of water, so content in its feeding that it didn't seem to notice its back was out of the water.
On the ocean side of the island there are lots of tidal pools (the island is like an old coral reef head that became exposed as sea levels lowered), which did not have too much life other than sergeant majors and sea urchins, although in one pool there was a baby tang, though it didn't stick around long enough for me to get a good look at it. It was fun looking around there nonetheless, and you should definitely check it out sometime.
Tokyo, Mar 27, 2006 (JCN) - Fujicco, one of Japan's leading processed food manufacturers, announced on March 23 that it has discovered a unique property of a hot-water extract of kelp in collaboration with the Tokyo University of Marine Science and Technology.
Specifically, the two partners have found that the hot-water extract of kelp substantially suppresses the rise of blood glucose levels in glucose-tolerance tests using mice.
Further, they have elucidated that the extract inhibits the absorption of sugar from the intestines. Fujicco expects that these findings will lead to the development of novel health foods.
Details of this achievement will be presented at the Annual Meeting of the Japanese Society of Fisheries Science to be held in Kochi from March 29 to April 2.
Sunday, March 26, 2006
Anybody remember Pfiesteria hysteria? It was a raging crisis in the Chesapeake in 1997, when schools of menhaden pocked with sores went belly up in the Pocomoke River. The toxic dinoflagellate Pfiesteria piscicida was suspected of killing them, and of making people sick, too. Scientists and health officials swarmed in, concerned the affliction would spread. Nine years and millions of dollars later, here's what researchers at the Virginia Institute of Marine Science have to say about Pfiesteria:
Never happened. "My best scientific consideration is that Pfiesteria is not an issue in the Chesapeake and never was," says Wolfgang Vogelbein of VIMS. After extensive research, he suspects a fungus-like organism was to blame.
"The researchers were never able to isolate the [Pfiesteria] toxin," says Harley Speir of Maryland's Department of Natural Resources. "We do know those ugly sores were from a fungus, not Pfiesteria."
We bring up old news because a new affliction with a weird name looms in the nation's largest estuary. It's mycobacteriosis, a sickness caused by bacteria from the same germ family that produces tuberculosis and leprosy in humans. VIMS says up to 70 percent of rockfish (striped bass) in the bay are infected, and though there's no way to estimate subsequent mortality in the wild, myco is considered 100 percent fatal when it strikes rockfish in aquaculture ponds.
It's a slow "wasting disease" like TB that leaves stricken fish skinny and weak. It affects internal organs first. By the end, rockfish are emaciated and covered in red sores. A strain of myco also causes "fish handler's disease," a nasty hand infection resulting from exposure through cuts or nicks. In severe cases it gets deep into joints, forcing doctors to amputate digits or even limbs.
Mycobacteria is clearly no joking matter. Resource managers have been aware of its presence in bay rockfish for nearly a decade and believe it may be getting more prevalent. Some believe overall natural mortality for rockfish is increasing and myco may be a cause. Many think declining water quality and declining food sources in the bay stress fish to the point they are more susceptible to these naturally occurring bacteria.
On the other hand, coastal stocks of stripers are at a record high and some think the crisis is overblown. There are opinions of every stripe but little solid information. Worried officials reassure anglers and seafood-eaters that rock, the bay's premier sport and table fish, is safe to eat. But they also caution diners to stay away from raw rockfish and tell anglers to handle fish carefully, particularly if they show lesions, as a small percentage of rockfish do.
A front-page story on myco in The Post two weeks ago sent rockfish market prices plunging by half. It came at a rough time for the sportfishing industry. Charter skippers worry that apprehensive anglers now won't book spring trips. Rockfish season opens April 15 in Maryland and in the lower Potomac, May 1 in the District and May 15 in Virginia's portions of the bay.
It seems like a decade doesn't go by without a crisis on the bay. In the 1970s it was Kepone, a poison dumped into the James River that afflicted fish; in the '80s it was acid rain, thought to be ruining rockfish spawning success; in the '90s it was Pfiesteria. Now comes myco.
"When I saw that headline ["Chesapeake's Rockfish Overrun by Disease," Post, March 11], I thought 'Here we go again,' " said Mike Slattery, assistant secretary of Maryland's DNR.
But officials are far from unconcerned. Almost a decade after myco was first detected in rockfish, there are many more questions than answers. Vogelbein, who is seeking federal grants to ramp up research at VIMS, puts these unanswerables at the top of the list:
What is the extent of rockfish mortality? Does the disease go dormant? Can rockfish get over it? Does it affect spawning success? Can it spread to other species?
Read the rest of the article here.
Saturday, March 25, 2006
I just sent an email to ExxonMobil to let them know that I will not buy ExxonMobil gasoline unless -- and until -- ExxonMobil takes meaningful action to curb global warming, to invest in renewable energy, and to pay for the damages done by the Exxon Valdez oil spill in Alaska.
It's time to hold ExxonMobil responsible for putting corporate profits over protection of wildlife and wildlife habitat. Please join me in boycotting ExxonMobil! http://go.care2.com/66837
Thursday, March 23, 2006
The largest habitats on Earth are located in the vast, dark plains at the bottom of the ocean. Yet because of their remoteness, many aspects of this mostly unexplored world remain mysterious.
New research led by Scripps Institution of Oceanography at the University of California, San Diego, has produced a rare insight into animal populations in the deep sea.
In first-of-its-kind research published in the March issue of the journal Ecology, David Bailey, Henry Ruhl and Ken Smith of Scripps analyzed fish and other marine animals over a 15-year period in the deep sea of the eastern North Pacific Ocean. At the site, the source of one of the longest time-series studies of any abyssal area in the world, the scientists found a threefold increase in fish abundance, an upsurge that appears to have been driven by an increase in the food available to the animals.
Bailey says the study is a unique glimpse into fish populations undisturbed by human influence.
“This is a rare study of a large marine fish population that doesn’t get commercially fished,” said Bailey. “Other fish populations have their abundances, body sizes and life histories altered by fisheries activities, so our study probably gives us some information about how fish communities work when they are not driven by human exploitation.”
The Ecology study follows research published in 2004 by Ruhl and Smith that showed that significant changes in the deep-sea environment were likely driven by changes at the surface of the ocean by El Niño and La Niña events.
Such oceanographic events, along with longer-term shifting called the Pacific Decadal Oscillation, can bring more nutrients to surface waters. While animals near the surface can benefit rapidly, it can take months to years for changes to extend to the ocean bottom, leading to a proliferation of bottom-dwelling invertebrate animals that make up some part of the food supply of deep-sea fishes.
This appears to have been the case from 1989 to 2004, when the researchers found a nearly three-fold increase in deep-sea fish called grenadiers, animals related to cod that are also known as “rattails.” Species included Coryphaenoides armatus, or abyssal grenadier, an animal found worldwide at depths of 2,000 meters and greater, and Coryphaenoides yaquinae, a fish of which little is known and that is found only in the deep North Pacific.
Grenadiers eat a range of foods, from the dead bodies of fish and whales to invertebrates such as worms and crustaceans. The most commonly observed animals on the seafloor include sea cucumbers, sea urchins and brittle stars, and these appeared to form part of the grenadiers’ diet. The researchers used the abundances of these animals as an indicator of food supply to the fish. Large changes in the abundances of these animals were followed by changes in the numbers of fish, with both groups increasing in number over the 15-year study.
The researchers say their results indicate that animals in the deep sea live in an environment in which food supply drives population levels, called a “bottom-up control,” rather than a “top-down control” situation in which predator pressure controls prey abundances.
“The predominant trend had been that people thought that fish have a powerful effect on their environment, and they drive the changes in everything else,” said Bailey, a postdoctoral researcher at Scripps and lead author of the study. “What we’ve seen is the reverse, that fish are responding to a change in their habitat. We think that a lot of fish communities are fundamentally changed by fishing. Our study is really nice in that we are working on populations that have never been fished, so their population dynamics can be seen being driven by natural processes.”
Comparing these observations to those for shallow water, the researchers speculate that deep-ocean and shallow-water fish communities work differently. A possible reason is that the deep ocean is dependent for its food on material falling from the communities nearer the sea surface; this food supply is smaller and less predictable than that available to most shallow-water fish. The effects of this difference on the dynamics of fish communities are not known, and are being explored using mathematical models as the investigators move forward with this project.
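The "bottom-up control" idea described above — food supply setting population levels, rather than predators controlling prey — can be sketched with a toy model. This is purely illustrative and not the investigators' actual model; all parameter values here are hypothetical and chosen only to show the qualitative behavior of fish abundance tracking a change in food supply.

```python
# Illustrative sketch of bottom-up control (hypothetical parameters,
# not from the Scripps study): fish abundance grows logistically toward
# a carrying capacity that is set by the current food supply.

def simulate_bottom_up(food_supply, growth=0.5, capacity_per_food=2.0):
    """Return the fish-abundance trajectory for a given food time series.

    Each year, the carrying capacity is proportional to that year's food
    supply, so the fish population follows changes in food with a lag --
    the signature of bottom-up rather than top-down control.
    """
    fish = [1.0]  # arbitrary starting abundance
    for food in food_supply:
        k = capacity_per_food * food            # capacity scales with food
        f = fish[-1]
        fish.append(f + growth * f * (1 - f / k))  # logistic update
    return fish

# A step increase in food supply, loosely analogous to more detritus
# reaching the seafloor after a surface productivity shift:
food = [1.0] * 10 + [3.0] * 20
trajectory = simulate_bottom_up(food)
# Fish abundance equilibrates near 2.0, then climbs toward 6.0
# after the food supply increases -- population follows food.
```

In this sketch the fish never influence the food term at all, which is the defining feature of pure bottom-up control; a top-down variant would instead feed the fish abundance back into the prey dynamics.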
Information for the research paper was derived from “Station M,” a study site 136 miles west of the California coast that has been explored by members of Smith’s laboratory since 1989. The researchers obtained images of the animals through a camera mounted on a sled towed across the ocean floor at more than 13,000 feet deep.
The research was supported by the National Science Foundation, the University of California, Scripps Institution of Oceanography and a Marie Curie Outgoing International Fellowship (European Union).
BOULDER, Colo., March 23 (AScribe Newswire) -- Ice sheets across both the Arctic and Antarctic could melt more quickly than expected this century, according to two studies that blend computer modeling with paleoclimate records. The studies, led by scientists at the National Center for Atmospheric Research (NCAR) and the University of Arizona, show that Arctic summers by 2100 may be as warm as they were nearly 130,000 years ago, when sea levels eventually rose up to 20 feet (6 meters) higher than today.
Bette Otto-Bliesner (NCAR) and Jonathan Overpeck (University of Arizona) report on their new work in two papers appearing in the March 24 issue of Science. The research was funded by the National Science Foundation, NCAR's primary sponsor. The study also involved researchers from the universities of Calgary and Colorado, the U.S. Geological Survey, and The Pennsylvania State University.
Otto-Bliesner and Overpeck base their findings on data from ancient coral reefs, ice cores, and other natural climate records, as well as output from the NCAR-based Community Climate System Model (CCSM), a powerful tool for simulating past, present, and future climates.
"Although the focus of our work is polar, the implications are global," says Otto-Bliesner. "These ice sheets have melted before and sea levels rose. The warmth needed isn't that much above present conditions."
The two studies show that greenhouse gas increases over the next century could warm the Arctic by 5-8 degrees Fahrenheit (3-5 degrees Celsius) in summertime. This is roughly as warm as it was 130,000 years ago, between the most recent ice age and the previous one. The warm Arctic summers during the last interglacial period were caused by changes in Earth's tilt and orbit. The CCSM accurately captured that warming, which is mirrored in data from paleoclimate records.
Read the rest of the article here.
Well not really, but this is pretty cool:
"Big Brother" Peers Into Black Drum Bedrooms
A team of scientists from the University of South Florida College of Marine Science (CMS) recently deployed unique instrumentation to locate sound producing black drum fish that raise a loud chorus when they spawn and then determine if the sound production was matched by real results - tight clusters of newly fertilized fish eggs.
"Sound production by black drum serves as proxy for spawning," said marine biologist Jim Locascio. "We want to see if it is possible to find out how much sound production from the black drum equals how much egg production."
Locascio teamed with chemist and environmental scientist Eric Steimle (http://usfnews.usf.edu/page.cfm?link=article&aid=504), who developed a radio-controlled guided surface vehicle (GSV). For the deployment, Steimle's GSV carried a DIDSON imaging sonar and a hydrophone listening device to eavesdrop on the spawning sounds of black drum.
To complement data collected by the DIDSON, USF Center for Ocean Technology engineer Bill Flanery contributed SIPPER, an imaging sensor mounted underneath the GSV. SIPPER was installed to digitally image and count small particles in the water - from seaweed to plankton - as the GSV criss-crossed the football-field-sized canal area. They hoped that SIPPER would discover clusters of newly fertilized fish eggs in the singing locations.
"The DIDSON creates images from sound and provides near video-like quality," explained Locascio. "We used DIDSON as a high resolution fish finder to image the spawning fish and then have SIPPER image and count the eggs being produced."
Their previous research used only hydrophones to locate the sound producing fishes, but this research attempted to image the adults and newly spawned eggs, giving a comprehensive look into the activity and production of the spawning population.
To test the unique system, the team traveled in mid-March to black drum spawning grounds in the canal system of Cape Coral, Florida, near Charlotte Harbor. The canal system is in the back yards of residential areas where Locascio and colleagues in 2005 were called in by residents to help explain odd noises from the canal, sounds so loud and spooky that residents were left unnerved. CMS researchers subsequently identified the sound as black drum males crooning love songs during spawning. The canal system is not far from an area near where CMS researchers recorded fish raising a chorus when Hurricane Charley rolled over the same waters in 2004 (http://usfnews.usf.edu/page.cfm?link=article&aid=685).
"All systems operated well and we're analyzing the data," reported Locascio after the test.
Locascio and plankton biologist Andrew Remsen, who are analyzing the images collected by SIPPER, want to determine if the sounds picked up by the hydrophones are concurrent with the number of eggs seen in the SIPPER images. To do so, they are looking at tens of thousands of digital pictures that are cross-referenced to exact locations in the area that SIPPER and DIDSON, riding under the GSV, surveyed together in real-time.
SIPPER can image and identify objects down to 1/4 of a millimeter and image subjects as small as a human hair while viewing 15 liters of water every second.
According to Flanery, an early concern was that, riding just beneath the surface, SIPPER would be imaging a lot of bubbles and might not be able to easily tell the difference between bubbles and fish eggs.
"We were pleasantly surprised to find that the images were easily distinguishable because SIPPER's recognition software was able to sort the images," said Flanery.
This was the first field assignment for SIPPER and DIDSON to jointly ride as GSV passengers and explore the relationship between male fish spawning sounds and the discovery of newly fertilized eggs. It was also the first deployment of the third version of SIPPER, SIPPER3.
"Eric's vehicle never carried so much weight at one time, but it performed magnificently," said Locascio. "SIPPER also set new achievement levels."
Spawning season in the area is drawing to a close, so the research team is checking their data and fine-tuning the equipment for bigger experiments when the encore begins.
The University of South Florida is on track to become one of the nation's top 50 public research universities. USF received more than $287 million in research contracts and grants last year, and it is ranked by the National Science Foundation as one of the nation's fastest growing universities in terms of federal research and development expenditures, and by the Carnegie Foundation as one of the 95 top universities nationwide in research activity. The university has a $1.3 billion annual budget and serves nearly 43,250 students on campuses in Tampa, St. Petersburg, Sarasota/Manatee and Lakeland. In 2005, USF entered the Big East athletic conference.
University of South Florida experts say 2005's record number of hurricanes might have developed because of elevated sea surface temperatures.
Robert Weisberg, a USF College of Marine Science hurricane expert, and colleague Jyotika Virmani say the storms developed because elevated sea surface temperatures (SSTs) did not fall, as usually occurs, during the 2004-05 winter.
"The 2004 and early 2005 hurricane seasons were connected," said Weisberg, a physical oceanographer who also serves on the Committee on New Orleans Regional Hurricane Protection Projects.
The unusually warm SSTs that developed in the Atlantic Ocean in the fall of 2004 did not decrease as much as usual in winter, so SSTs were higher than normal in the spring of 2005.
Weisberg and Virmani explained their theory in a recent issue of Geophysical Research Letters.
WELL DUH!!! Seriously thanks for the insight... Why don't you tell us why instead of being Captain Obvious???
Tuesday, March 21, 2006
SYDNEY, Australia (20 Mar 2006) -- When marine scientist Ray Berkelmans went scuba diving at Australia's Great Barrier Reef earlier this year, what he discovered shocked him -- a graveyard of coral stretching as far as he could see.
"It's a white desert out there," Berkelmans told Reuters in early March after returning from a dive to survey bleaching -- signs of a mass death of corals caused by a sudden rise in ocean temperatures -- around the Keppel Islands off Queensland.
Australia has just experienced its warmest year on record, and abnormally high sea temperatures during summer have caused massive coral bleaching in the Keppels. Sea temperatures touched 84 degrees Fahrenheit (about 29 degrees Celsius), the upper limit for coral.
High temperatures are also a condition for the formation of hurricanes, such as Katrina which hit New Orleans in 2005.
"My estimate is in the vicinity of 95 to 98 percent of the coral is bleached in the Keppels," said Berkelmans from the Australian Institute of Marine Science.
Marine scientists say another global bleaching episode cannot be ruled out, citing major bleaching in the Caribbean in the 2005 northern hemisphere summer, which coincided with one of the 20 warmest years on record in the United States.
"In 2002, it would appear the Great Barrier Reef went first and then the global bleaching followed six to 12 months later. Is it the same this time around? No," said Berkelmans. "The Caribbean beat us to it. We seem to be riding on the back of that event. We don't know what is ahead in six months for the Indian Ocean reefs as they head into their summer." "This might be part of a global pattern where the warm waters continue to get warmer."
Other threats to coral reefs -- vast ecosystems often called the nurseries of the seas -- include pollution, over-fishing, coastal development and diseases.
Can coral recover?
Corals are vital as spawning grounds for many species of fish, help prevent coastal erosion and also draw tourists.
Bleaching is due to higher than average water temperatures, triggered mainly by global warming, scientists say. Higher temperatures force corals to expel algae living in coral polyps which provide food and color, leaving white calcium skeletons. Coral dies in about a month if the waters do not cool.
Berkelmans said the Keppels had previously bounced back from bleaching once the waters had cooled. But if temperatures remained abnormally high then that would be much more difficult.
Many scientists say global temperatures are rising because fossil fuel emissions from cars, industry and other sources are trapping the earth's heat. Experts worry some coral reefs could be wiped out by the end of the century.
Global warming could also damage corals by raising world sea levels by up to a meter by 2100. That could result in less light reaching deeper corals, threatening the important algae.
The Great Barrier Reef -- the world's largest living reef formation stretching 1,250 miles north to south along Australia's northeast coast -- was the first to experience what turned out to be global coral bleaching in 1998 and 2002.
The Keppels bleaching is as severe as those two events and scientists say the threat of widespread bleaching is moderate.
"Sea temperatures in all regions of the Great Barrier Reef are at levels capable of causing thermal stress to corals," said the Great Barrier Reef Marine Park Authority's February report.
The U.S. National Oceanic and Atmospheric Administration's Coral Reef Watch said the 2005 Caribbean bleaching centered on the U.S. Virgin Islands, but stretched from the Florida Keys to Tobago and Barbados in the south and Panama and Costa Rica.
Reef Watch said sea temperature stress levels in the Caribbean in 2005 were more than treble the levels that normally cause bleaching and almost double the levels that kill coral.
"Time will tell whether there was large-scale mortality or not," said Professor Robert Van Woesik from the Florida Institute of Technology in a statement issued by Australia's Queensland University. He said corals did have some ability to bounce back but that this was an unusually warm event.
Queensland University's Professor Ove Hoegh-Guldberg, head of a group of 100 scientists monitoring bleaching, said scientists were concerned about how close in time the two severe bleaching episodes were.
"The 2006 Great Barrier Reef event comes soon after the worst incidence of coral bleaching in the Caribbean in October 2005," said Hoegh-Guldberg who also went diving on the Keppels where he said damage was extensive.
"The traces suggest we are tracking the temperature profile of 2001-2002, which led to the worst incidence of coral bleaching ... for the Great Barrier Reef," he said.
In 2002, between 60 and 95 percent of the reefs that make up the Great Barrier Reef were bleached. Most corals survived but in some locations up to 90 percent were killed.
Hoegh-Guldberg said projections from 40 climate models suggested that oceans would warm by as much as three to four degrees Celsius in the next 100 years.
"We're starting to get into very dangerous territory where what we see perhaps this year will become the norm and of course extreme events will become more likely," he said. "The climate is changing so quickly that coral reefs don't keep up ... the loss of that ecosystem would be tremendous."
Tropical Cyclone Larry formed off the northeastern coast of Australia on March 18, 2006. The cyclone gained power rapidly and came ashore on Queensland’s eastern coastline, where it hammered beaches with heavy surf, tore roofs off buildings, and perhaps most destructively, flattened trees in banana plantations over a wide area. The Australian Broadcasting Corporation reported early estimates that as much as 90 percent of the Australian banana crop may have been lost in this single storm. Since many trees have been destroyed, it may be many years before the banana industry recovers.
When the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite observed the storm at 2:55 p.m. Australian Eastern Daylight Savings Time (03:55 UTC) on March 20, 2006, Tropical Cyclone Larry had come well ashore onto the mainland, losing much of its power as it traveled westward. At the time of this image, Larry had peak winds of around 140 kilometers per hour (85 miles per hour), significantly less strength than it had possessed just one day before.
The high-resolution image provided above is provided at the full MODIS spatial resolution (level of detail) of 250 meters per pixel. The MODIS Rapid Response System provides this image at additional resolutions.
NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center.
LARRY is an awesome name for a tropical storm. By the way, don't forget to check out Earth Observatory. It kicks ass...
Monday, March 20, 2006
DELAWARE (20 Mar 2006) -- Move over, Superman, with your X-ray vision. Marine scientists have now figured out a way to "see through" the ocean's surface and detect what's below, with the help of satellites in space.
Using sensor data from several U.S. and European satellites, researchers from the University of Delaware, NASA's Jet Propulsion Laboratory, and the Ocean University of China have developed a method to detect super-salty, submerged eddies called "Meddies" that occur in the Atlantic Ocean off Spain and Portugal at depths of more than a half mile. These warm, deep-water whirlpools, part of the ocean's complex circulatory system, help drive the ocean currents that moderate Earth's climate.
The research marks the first time scientists have been able to detect phenomena so deep in the ocean from space -- and using a new multi-sensor technique that can track changes in ocean salinity.
The lead author of the study was Xiao-Hai Yan, Mary A. S. Lighthipe Professor of Marine Studies at the University of Delaware and co-director of the UD Center for Remote Sensing. His collaborators included Young-Heon Jo, a postdoctoral researcher in the UD College of Marine Studies, W. Timothy Liu from NASA's Jet Propulsion Laboratory in Pasadena, California, and Ming-Xia He, from the Ocean Remote Sensing Institute at the Ocean University of China in Qingdao, China. Their results are reported in the April issue of the American Meteorological Society's Journal of Physical Oceanography.
"Since Meddies play a significant role in carrying salty water from the Mediterranean Sea into the Atlantic, new knowledge about their trajectories, transport, and life histories is important to the understanding of their mixing and interaction with North Atlantic water," Yan notes. "Ultimately, we hope this information will lead to a better understanding of their impact on global ocean circulation and global climate change."
First identified in 1978, Meddies are so named because they are eddies -- rotating pools of water -- that flow out of the Mediterranean Sea. A typical Meddy averages about 2,000 feet (600 meters) deep and 60 miles (100 kilometers) in diameter, and contains more than a billion tons (1,000 billion kilograms) of salt.
Read the rest here.
The government may let squid fishers take advantage of a sudden bloom in squid numbers this year -- a step that would require relaxing the limit on the number of sea lions normally killed as a by-product of the squid catch.
Fisheries Minister Jim Anderton is floating the move but said he is concerned about the uncertain science.
The southern squid trawl fishery operates around the Auckland Islands, from February through to April or May, or until the fishing-related mortality limit for sea lions is reached. New Zealand sea lions eat squid and are at risk of drowning when they chase squid into trawl nets.
The mortality limit, which is reviewed annually, is currently 97. The proposal would relax the number to 150 just for this season, he said.
Mr Anderton is consulting with stakeholders on this issue until the end of March, and will announce his decision shortly after that.
"It is a difficult decision to set limits on the number of deaths allowable every year before closing the area to squid fishing, but I am advised that this proposed change should not adversely threaten the viability of the sea lion population.
"Indeed the scientific advice I have previously received suggested that a mortality limit of 555 sea lions in the current season should not threaten the viability of the population.
"However, in light of uncertainties in applying a scientific model to the real world I am still exercising considerable caution," Mr Anderton said.
"This year there is more squid in New Zealand's southern waters than usual, but these squid are so short lived that they may not be around in these numbers next year.
"New Zealand is presented with an opportunity to capitalise on this valuable resource, at a time that would particularly benefit our economy," he said.
The idea is certain to raise objections from some quarters because the New Zealand sea lion, formerly known as the Hooker's sea lion, is classified as threatened under the Marine Mammals Protection Act.
Cruise line operators will face additional protected marine areas with the launch of global mapping project by the International Council of Cruise Lines (ICCL) and Conservation International (CI) to improve biodiversity.
This joint initiative will map marine areas such as coral reefs, seamounts and shellfish growing areas that are currently absent from cruise line navigational charts.
New practices include adhering to no-discharge zones and a policy of no discharge within four miles of shore unless the ship is using an advanced wastewater purification system (AWPS).
The mapping initiative was one of 11 recommendations made by an independent science panel comprised of marine experts and chaired by world renowned marine biologist Dr Sylvia Earle.
The recommendations explored a variety of issues including wastewater discharge, installation of AWPS, disposal of sewage bio-residues, and increasing passenger awareness on waste management practices.
ICCL president Michael Crye said the ICCL would implement a majority of the panel’s recommendations immediately.
Chair of the science panel and executive director of CI’s Global Marine Division, Dr Earle commended the cruise industry’s support for the project.
“The science panel understands individual cruise ships and transportation routes will impact how each recommendation can be carried out. Implementation of this mapping exercise will be an important first step as the industry begins the process of reviewing and integrating the science panel’s recommendations into their operations,” Dr Earle said.
Executive director of Conservation International’s Center for Environmental Leadership in Business, Glenn Prickett said the global mapping initiative was an example of how the conservation community could work cooperatively with the cruise industry to achieve conservation aims.
SAN DIEGO, March 20 (UPI) -- University of California-San Diego scientists say the same technology used to image brain tumors is taking the field of marine biology to new dimensions.
Researchers from the university's Keck Center for Functional Magnetic Resonance Imaging and the Scripps Institution of Oceanography have been awarded a National Science Foundation grant to create a high-resolution, three-dimensional MRI online catalog of fishes from Scripps's Marine Vertebrate Collection.
"This project will ... (use) a new tool and a new way to present information about fishes," said Philip Hastings, professor and curator of the Scripps collection. "It's part of our general effort to make the collection more available to a wider audience."
The five-year, nearly $2.5 million project supports development and application of new MRI technology that penetrates soft body tissue to provide 3-D images of physiological structures. The Scripps collection is among the largest and most comprehensive collections of its kind in the world, containing 90 percent of all known families of fishes.
THIS ROCKS!!! With a good 3-D online database from a place like SCRIPPS, identification will become much easier... Let's face it the Peterson's Guides are hardly good for anything and Fishbase hardly has shit on any species except the most popular ones. This would be great for the scientific community, good 3-D imaging instead of shitty guides that say snout goes 2-3 times into head or fish with small scale-less pit on top of snout... I cannot wait!
Thursday, March 16, 2006
TUVALU (12 Mar 2006) -- Japan has pledged hundreds of thousands of dollars to the tiny Pacific nation of Tuvalu as it struggles under the weight of imported fuel costs - and denied that the funds were a bribe to win support for its whaling.
Tokyo inked similar deals last year for two other small nations in the Pacific, Nauru and Kiribati.
All three nations are members of the International Whaling Commission and support Japan's drive to reverse the IWC's 20-year moratorium on commercial whale hunting. They "absolutely" support Japan, said Geoffrey Palmer, New Zealand's commissioner to the IWC. "All recently joined the IWC and all are Japan supporters."
But Yujiro Akatsuka, an official at the Japanese Foreign Ministry's Economic Cooperation Bureau, denied the grants were linked to the nations' backing for Japan's whaling push.
"We want to support continued economic development in these countries," Akatsuka said Friday in Tokyo. "These kind of projects have no relation to their support in the IWC."
Tokyo has repeatedly failed to muster the three-fourths majority in the 66-member IWC needed to overturn the commercial whaling ban which took effect in 1986.
It has denied accusations of "vote- buying" among small, developing nations in the Caribbean and Africa as well as the Pacific in efforts to gain enough international support for its policies.
In July last year, former officials from Dominica, Grenada and the Solomon Islands claimed that Japan bribed their governments with aid to win support for its bid to overturn the international ban on commercial whaling. Japan denied the charge.
Tuvalu Prime Minister Maatia Toafa on Friday signed an agreement at the Japanese embassy in Fiji's capital, Suva, for 100 million yen (HK$6.6 million) of funds to meet 37 percent of the nation's fuel costs this year.
Most modern, educated Japanese do not eat whale and reject government propaganda that whaling is synonymous with being Japanese. But a small and politically powerful coalition of ultranationalist politicians, 'yakuza' crime bosses, fishing industry leaders and government controlled media including NHK promote the full-scale slaughter of whales in terms reminiscent of Japan's World War II rhetoric.
Thanking Japan's ambassador, Masashi Namekawa, Toafa said he was pleased with the speed with which Japan responded to requests for help, made in September last year.
"Japan can count on Tuvalu's support at various international forums on matters of common interest," he said.
Toafa said the grant will keep Tuvalu's power station running, allowing the Pacific island state to buy enough fuel to maintain two small transport ships and two fishing boats at sea. He said fuel import costs were the equivalent of 20 percent of total imports.
Tuvalu's 9,500 people live on nine coral atolls with a total area of 27 square kilometers, running their country on an annual budget of about US$5 million (HK$39 million).
This is absolutely incredible. And the main problem is that no nations have the backbone to stand up against things like this. We should be encouraging other nations not to join Japan in the crusade for commercial whaling, not sitting idle, just letting Japan garner more support to rape the oceans. This is BULLSHIT... My next automobile will NOT BE JAPANESE!
Wednesday, March 15, 2006
NEW BEDFORD — Determined to ease proposed federal fishing regulations that would go into effect May 1, state and local government officials and fishing representatives are headed to Washington, D.C., today to meet with the top administrator of the National Oceanic and Atmospheric Administration.
Lt. Gov. Kerry Healey organized the meeting with Vice Adm. Conrad C. Lautenbacher Jr. to discuss potential alternatives to the proposed regulations, which would cut fishing days by nearly 50 percent to reduce overfishing on depleted stocks of cod and flounder.
Lt. Gov. Healey — who is running for governor — is "very concerned" about the impact the rules would have on the Massachusetts fishing industry, said her spokeswoman Laura Nicoll.
New Bedford Mayor Scott W. Lang and Dr. Brian Rothschild of the UMass Dartmouth School for Marine Science and Technology will join Lt. Gov. Healey at today's meeting, which is scheduled for 10:45 a.m. at the U.S. Department of Commerce. The meeting will also be attended by state Sen. Bruce E. Tarr, R- Gloucester, state Division of Marine Fisheries director Paul J. Diodati and Gloucester fisherman Vito Giacalone.
The proposed regulations would cut fishermen's already limited fishing days by 40 percent due to a new counting method that would calculate each actual day at sea as 1.4 days. In addition, fishermen would face an 8 percent reduction in their total days at sea, as required by federal fishing regulations, known as Amendment 13.
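The two cuts compound. A quick sketch of that arithmetic (the 1.4-day counting factor and the 8 percent Amendment 13 reduction come from the article; the baseline allocation in the example is hypothetical):

```python
# Sketch of the interim-rule arithmetic described above.
# Each actual day at sea is counted as 1.4 days against a boat's
# allocation, and the allocation itself is first cut by 8 percent.

def usable_days(allocated_days, counting_factor=1.4, amendment_cut=0.08):
    """Actual days a boat can spend at sea under the interim rules."""
    reduced_allocation = allocated_days * (1 - amendment_cut)  # 8% fewer days
    return reduced_allocation / counting_factor  # each real day "costs" 1.4

# Hypothetical example: a boat allocated 52 days at sea.
allocated = 52
print(f"{allocated} allocated days -> {usable_days(allocated):.1f} usable days")
# Combined effect: 1 - 0.92 / 1.4, roughly a 34% cut in actual time at sea.
```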
NOAA Fisheries Service designed the interim rules as a temporary way to reduce fishing pressure until a more permanent groundfish plan — approved by the New England Fishery Management Council in early February — is adopted sometime this summer.
Local fishermen have decried the interim rules, saying they would push some boat owners out of business. A few boat owners said they plan to keep their boats tied to the docks until the council's less stringent rules go into effect.
Oil Platforms Should Be Kept As Artificial Reefs, Scientist Says
By TIM MOLLOY
SANTA BARBARA, Calif. (March 14) - Marine biologist Milton Love drives a hybrid car, displays a banner of left-wing revolutionary Che Guevara on his laboratory wall - and has backing from Big Oil.

The reason: his finding that oil platforms off California's central coast are a haven for species of fish whose numbers have been dramatically reduced by overfishing.

That is good news to oil executives, who are looking for reasons not to pay hundreds of millions of dollars to remove the platforms once the crude stops flowing.

Environmentalists say oil companies are simply trying to escape their obligations.

"Just because fish are there doesn't mean the platform constitutes habitat," says Linda Krop, an attorney for the Santa Barbara-based Environmental Defense Center. "That's like taking a picture of birds on a telephone wire and saying it's essential habitat."

The 27 platforms - skeletal-looking structures that house dormitories, offices and massive pumps - were installed over the past four decades and now produce 72,000 barrels of oil a day. Environmentalists and coastal residents despise them for spoiling the view and disrupting the

Federal law requires oil companies to remove the platforms when operations are complete, though no one knows whether it will be years or decades before the deposits under the sea floor run out.

Oil companies already are pressing state and federal officials to keep the rigs in place, citing Love's finding that platforms provide homes for bocaccio, cowcod and other fish.

The National Oceanic and Atmospheric Administration said last week it might consider the idea but wants to know more about the effects of oil platforms on marine life.

Since the 1950s, when heavy fishing began in the region, some species of fish have been reduced to 6 percent of their previous numbers, according to Love. Some fisheries have closed, and the fishing fleet has shrunk by a third.

Love, a researcher at the University of California at Santa Barbara, films fish from a submarine and then counts them in his lab. He says some platforms are surrounded with fish packed as tightly as "cocktail wieners in a can."

"If anyone wants to come up and count the fish, we'll provide the first beer," Love says. "But they're going to have to bring the rest. And they're going to need a few cases because we have 11 years of data."

Love gets about 80 percent of his research money from the government and the rest from the California Artificial Reef Enhancement Program, a nonprofit group funded almost entirely by oil companies. It has contributed about $100,000 a year to Love's research since 1999, executive director George Steinbach says.

Love says no amount of oil money can sway his research - fish either cluster at the platforms or they don't. And because they do, he says his personal opinion is that the rigs should stay in place, cut below the waterline so that ships can pass safely over them.

"If you remove a platform you'll kill many millions of animals," he says.

Environmentalists say if the platforms were removed, fish would return to the underwater boulder fields and rocky outcroppings that form natural reefs along the Southern California coast.

In the Gulf of Mexico, more than 200 rigs have been converted into artificial reefs, either by toppling them or by lopping them off.

Krop, the environmental lawyer, says rig-to-reef conversions make more sense in the Gulf of Mexico because the waters there have a mud bottom and fewer natural reefs.

Converting platforms between Long Beach and Point Conception north of Santa Barbara could be $600 million to $1 billion cheaper than removing them, Steinbach says. He says the oil companies would contribute up to half their savings to state conservation programs.

Widespread opposition from environmentalists and residents has killed legislation that would have allowed such a deal.
Now I cannot say without looking at actual data which side I would be on. However, I do know that artificial reefs are havens for fish; they provide more structure than many natural reefs and can support higher numbers of fish. I don't know what the rocky reefs are like in California, but if there are large numbers of fish, commercially important or otherwise, at these oil rig structures, then why not keep them by removing the above-water sections? I don't see how it is such a bad deal, especially knowing the economic benefits through both fisheries and tourism of having them...
Tuesday, March 14, 2006
AT LEAST SOMEONE IN KANSAS HAS COMMON SENSE!!!:
In response to the Kansas Board of Education's 2005 Science Standards, Fort Hays State University's Faculty Senate recently passed its own statement of opposition and endorsed another.
At February's meeting, the FHSU Faculty Senate passed Resolution 05-02, stating that the senate does not support including intelligent design in state education science standards.
The Faculty Senate resolution reads:
In response to the recent decision to include Intelligent Design in the Kansas state science standards, the Faculty Senate of Fort Hays State University resolves:
It is the role and responsibility of the scientific community to assess the merit of the subject matter taught in the science classrooms of our public schools.
As such, the Faculty Senate of Fort Hays State University does not support the inclusion of material, such as Intelligent Design, which has so far failed to withstand scientific scrutiny based on rigorous and verifiable peer-reviewed research.
"We, as a Faculty Senate, feel it is important to have educational materials stand upon their own merits rather than be imposed by an outside agency," said Dr. Win Jordan, president of the Faculty Senate and assistant professor of accounting and information systems.
"We had asked our University Affairs Committee to develop a statement in response to the recent Intelligent Design controversy. The committee responded with our Resolution 05-02."
At the same meeting, the senate also endorsed a position statement by the Kansas Association of Teachers of Science, presented to them by Dr. Paul Adams, Anschutz professor of education and physics and a member of the KATS panel asked to distribute the KATS statement.
"As we reviewed the statement, we found it to be clear, well thought out, and compelling. We therefore agreed to endorse it," said Jordan.
The KATS position statement was released by the organization's Board of Directors. In a cover letter, board President David Pollock said, "The Kansas Association of Teachers of Science (KATS) is the largest science teacher association in the state of Kansas. The 18 elected board members represent elementary through college teachers. The following is the official position of KATS that was passed at the regularly scheduled board meeting January 21, 2006."
Pollock is a teacher at Hays High School, Hays USD 489.
The KATS response to the Kansas State Board of Education Science Standards 2005 reads:
Kansas Association of Teachers of Science response to the Kansas State Board of Education adoption of the 2005 Science Standards:
The Kansas Association of Teachers of Science (KATS) is committed to promoting quality science teaching and the scientific literacy of both students and citizens throughout the state of Kansas. Accordingly, the KATS Board of Directors rejects on both scientific and pedagogical grounds the 2005 State Science Standards approved by the Kansas Board of Education (KBOE). The 2005 Standards neither promote quality teaching nor the development of scientific literacy.
As the state-level affiliate of the National Science Teachers Association (NSTA), KATS is the largest organization in Kansas representing teachers of science. We offer our unhesitating support to teachers who continue to emphasize science teaching that parallels contemporary scientific understanding as it is practiced throughout the world as a search for natural causes.
By redefining science in the Kansas Science Education Standards, the KBOE is promoting intelligent design tenets that purport supernatural explanations as valid scientific theories. Given that the goal of the intelligent design movement includes replacing scientific explanations with theistic understanding and to see this design theory inappropriately imposed on our religious, cultural, moral, and political life; the KATS Board of Directors adamantly opposes turning Kansas science classrooms into theatres of political and religious turmoil blurring the Constitutional ideals of separation of Church and State.
Therefore, KATS resolves that:
--Kansas teachers of science should continue to teach science as it is practiced throughout the world, and not attribute natural phenomena to supernatural causation;
--Kansas teachers of science should explore with their students the extensive evidence for evolutionary theory and actively refute the so-called evidence against evolution, as outlined in the new science standards;
--The Kansas Association of Teachers of Science recognizes that the KBOE is exhibiting educational irresponsibility in ignoring mainstream scientific understandings by substituting its own religiously-motivated agenda;
--State assessments should not include items related to the disputed portions of the 2005 Standards, as these statements do not reflect the global view of the science community;
--The KBOE should reconsider the inclusion of non-scientific ideas about the origins and development of life in order not to damage the prospects for student admission to high-quality colleges and universities;
--The KBOE should be aware that their anti-science actions are in direct conflict with the recent Kansas Bioscience Initiative;
Be it further resolved, that the Board of Directors of the Kansas Association of Teachers of Science (KATS) does not support and disassociates itself from these Kansas Science Education Standards (2005) as approved by the Kansas State Board of Education and recommends continued use of the 2001 Standards for curriculum development and assessment.
"As FHSU Faculty Senate President," said Jordan, "I found both the resolution and endorsement of the KATS statement to be appropriate, even necessary, to help maintain the quality of education in Kansas."
The impact of railways on Victorian society was immense. Railways provided the speediest means of transport during the nineteenth century. At first some railways charged the first-class passenger more than he would have had to pay making the same journey inside a coach, and a second-class passenger more than he would have been charged for an outside seat. It was not long, however, before railways began to give fares a competitive edge over the charges of coach travel. It was a policy that resulted in a rapid increase in the volume of passenger traffic.
By the early 1840s, half fares for children under 12 were widely available, making family rail travel more common and further augmenting the volume of passenger traffic. Initially the poor were not encouraged to travel unless it was in search of work or to fulfil urgent family responsibilities. Railway companies made no provision for the type of traveller who had gone by carrier's wagon rather than by the outside of a coach because it was cheaper. Robert Stephenson told the Select Committee on Railways in July 1839 that there was 'a class of people who had not yet had the advantage from the railways which they ought, that is the labouring classes.' The practice of providing third-class carriages on trains spread gradually. It was not, however, until the Railway Act 1844, with its clauses making the provision of third-class accommodation on at least one train a day in each direction obligatory, that the working class could count on penny-a-mile travel under minimum conditions of comfort. Between 1849 and 1870 the number of third-class passengers increased nearly sixfold, whereas the increase in other classes was only fourfold. Third-class travel was further boosted after 1874, when the Midland Railway abolished second class and greatly improved the comfort of third-class passengers. Other companies followed suit, and by 1890 the difference in standards of accommodation had been substantially narrowed.
Freight traffic did not increase as rapidly as passenger travel partly because of the relative cheapness of inland navigation. One problem the early railway companies faced was through-traffic: getting goods from one part of the country to another through different companies' lines. Although a Railway Clearing House was established in January 1842 it was five years before it began to operate effectively and there was a substantial reduction in the cost of freight. It is important to identify those areas where reduction in the cost of freight had social as well as economic results:
- In the early 1850s the volume of freight carried by railway first exceeded that carried by canals.
- The movement of coal was the bread and butter of British railways, the tonnage carried always well over half the total of freight traffic. In 1865, for example, 50 million tons of coal was carried compared to 13 million tons of other minerals [especially iron] and nearly 32 million tons of other merchandise. Up to 1914 the volume of mineral traffic increased at a faster rate than traffic in other goods. Despite this, the mineral trade was less profitable than the other type of freight, earning only 45 per cent of freight revenue in 1913. The railways made available, at a lower cost, the fuel that was the lifeblood of the basic industries. On the other hand the presence of about 1,400,000 mostly small wagons, many of which were used to carry coal, cluttered up the tracks and led to congestion on the railway network. Railways' contribution to the prosperity of coal mining came through their cheapening of delivery costs and the consequent vast extension of the use of coal in manufactures and in domestic heating.
- The growth of the iron industry was sustained by orders for rails, locomotives and rolling stock.
- The demand for locomotives and rolling stock was so great that it accounted for at least 20 per cent of the engineering industry's output in the later 1840s. Railways spread the engineering industry into areas that were previously regarded as agricultural: for example, the South Eastern Railway established its locomotive and carriage works at Ashford in Kent in 1845 and the Great Western Railway decided to set up a similar establishment at Swindon five years earlier. Though railway engineering made significant early achievements, its later progress was unimpressive. The main reason for this was the dominance of the chief mechanical engineer over development policy in many of the larger railway companies. These engineers were highly individualist and liked to be known for the distinctive features on the locomotives they designed. This led to little standardisation of design, and consequently little attempt to maximise economy of operation. Considerations of engineering excellence took precedence over better cost accounting and the need for more adequate statistics on the operating efficiency of freight trains. This did not mean that there were no improvements in the efficiency of the locomotive after 1850. The replacement of wrought iron with steel rails meant that weightier and more powerful locomotives could be used. The substitution of coal for coke as fuel after 1870 made feasible higher steam pressure, greater speeds and heavier trains. Some lines were electrified before 1914 but the mileage was very small: 314 out of 23,911 route miles of track.
- Railway costs meant that manufacturing centres were able to undercut the largely hand-based local centres of production. The economic and social life of towns and villages became less diversified. A comparison of local directories demonstrates this very clearly. By 1900 although the names of trades had sometimes survived, the character of the business that went on behind the shop front had changed radically. Tradesmen had, in many cases, ceased to be craftsmen. They became dealers or shopkeepers, selling goods made in some remote manufacturing centre elsewhere in Britain or even in America or Germany.
- One effect of the railways was the elimination of local differences in farm prices. Not only could farmers bring in fertilisers, they could send their produce to market more easily and cheaply. This had important consequences for people's diet. Take, for example, the transformation of the system of marketing livestock. Traditionally meat was supplied to London and other large centres of population 'on the hoof'. Animals were driven from the farms to the final fattening grounds near the main markets. In the early 1830s some 34 drovers guided 182,000 sheep a year from south Lincolnshire to London, while a further 52 men drove 26,520 oxen on the same route. It was an expensive and time-consuming operation. When the rail link from Cambridge to London was opened, the greater part of the journey could be completed in less than a day, cutting costs considerably. The gradual disappearance of the long-distance droving industry occurred in Scotland in the 1850s and 1860s and led to the emergence of new markets at railheads like Lairg, Lockerbie and Lanark. The fatstock farmer could now expect a better financial return because less of his product was being wasted in the process of marketing, and the customer could get a cheaper and fresher product. A similar process can be seen in the transformation of Britain's fisheries.
- Railway investment broadened the social spread of those involved in risk capital. Before 1830 the investment habit was largely confined to the members of the mercantile and landed interests whose opportunities for obtaining a secure return on their savings had been strictly limited. Railways demanded a quite unprecedented volume of capital and in order to obtain it companies were obliged to lower the denomination of shares allowing people from the lower middle and even upper working class to invest. The result was a permanent change in investment trends: before 1830 little over one twentieth of national income was invested annually, by 1850 the proportion was one tenth -- more people were investing more money.
Railways were regarded as symbolic of the progressive spirit. Sponsors of railway companies were often also supporters of parliamentary reform, municipal reform and free trade. In 1907 the author of a survey of the Essex economy wrote 'It was not easy to lay down rails in the soft Essex soils and a good deal of the country is still untouched by railroads and therefore quietly unprogressive in spirit.' It is certainly true that the arrival of the railway was often accompanied by the introduction of other changes. The railway first came to the Isle of Wight in 1864, after sustained opposition from local landowners for nearly twenty years. The editor of the local newspaper grumbled that many of the visitors brought by the railway had great difficulty in finding the beauty spots and that not only improved signposting but also better roads and gas lighting were urgently needed. Two months later he announced the formation of a gas company, 'the prospects of success being so very encouraging'. The 'progressive spirit' can be seen in other respects:
- The development of the railway system resulted in the general acceptance of Greenwich time as a standard. Before 1840 different parts of the country operated at different times. The Great Western timetable of 30 July 1841 said that 'London time is about four minutes earlier than Reading time, seven and a half minutes before Cirencester and 14 minutes before Bridgwater.' The disadvantages of not having a standard time applicable to all parts of the United Kingdom were increasingly obvious. The result was the gradual standardisation of time during the 1850s.
- There is no denying the influence of railways in starting that standardisation of language and speech that was carried forward more speedily by the influence of radio and television. The railways of Wales, for example, were powerful agencies in the decline of Welsh speaking in the principality.
- The creation of a national railway system was one of the preconditions -- together with technological changes in printing, the growth in literacy levels and the abolition of stamp duty on newspapers in 1836 and the 1850s -- for the establishment of mass circulation daily newspapers. In 1830 only 41,412 daily papers left London through the postal system. Railways extended the radius of circulation and newspapers could be delivered to all but the remotest parts of the country within a day of publication. In 1866 the railway companies agreed that the standard charge for newspapers should be half the ordinary parcel rate. As early as 1848 potential demand was so great that the firm of W.H.Smith & Son chartered six special trains to get newspapers through to Glasgow within ten hours of publication in London.
Railways contributed to the increasing secularisation of British society through the development of leisure and tourism. The holiday began as a holy day. At the Bank of England there were 44 such holidays in 1808, though this had been reduced to four by 1834. Elsewhere, however, there was a new movement in the opposite direction: in many factories holidays were declared at the will of the owners as a device for saving wages when business was slack. Every important form of leisure activity that existed in the Victorian period, and which some suppose to have been introduced by the railways, had its origins in the eighteenth or early nineteenth century. Holidays were not created by the railways; what the railways arguably did was bring them within reach of the masses. Railways were quick to recognise the opportunities presented by train excursions. One way of filling empty seats in passenger coaches, and so offsetting high overhead costs, was to provide excursion tickets at lower prices than the standard tickets: in 1846, for example, the Bodmin & Wadebridge company ran a cheap train for those wishing to see a public execution. The first trunk line to show a positive attitude to the promotion of leisure traffic was the London & Brighton, and in 1844 it became the chief pioneer of excursion trains in southern England.
It was above all the Great Exhibition of 1851 that provided the greatest opportunity for railways to promote excursion travel. Without the railways the Exhibition could not have matched up to the imaginative ambitions of those who planned it and, in the end, those ambitions were surpassed. More than forty years later Thomas Hardy conjured up an excursion train: 'an absolutely new departure in the history of travel runs to the Exhibition from Dorchester....' When the Exhibition closed, railways stood higher in general estimation than they had done before. Their system was now revealed as a working unit, able to concentrate attention and energy from all the most populous parts of the island at once on a single object in London. The running of these trains quickly came to be accepted by all the railway companies having substantial passenger business. There were other special events when their services were again in demand: the Manchester Art Treasures Exhibition in 1857, the International Exhibition in London in 1862 and big exhibitions in Glasgow in 1888 and 1901. The seaside traffic grew; excursions carried race-goers into the suburbs of the great towns, balanced later by those bringing passengers into towns to see football and cricket matches. It is difficult to estimate the volume of the excursion traffic, as only the Royal Commission on Railways 1865-7 dealt with the issue, and then only cursorily.
When excursion trains first appeared, it was common practice to run them on Sundays. Sunday observance affected the provision of railway services from the start and there was a conflict between the prompting of conscience and the pressing claims for business efficiency. The Sunday timetable, as the system developed in the 1840s and 1850s, came to differ widely from one railway to another but two generalisations can be made. First, the Post Office was empowered to compel railways to carry mail at any time it appointed; and since it had both to deliver and collect letters on Sundays it insisted on the provision of a good many Sunday mail trains. This was uneconomic for the railway companies and so nearly all mail trains also carried passengers. Secondly, on all British railways the Sunday service was very much less liberal than that offered on weekdays unlike continental Europe where there was very little difference.
In Scotland there were many lines on which no Sunday trains of any kind ran in the Victorian age: the suburban system in Glasgow, for example, was almost wholly shut down. In England and Wales the policy of providing Sunday trains was often criticised and sometimes strongly opposed. The clergy of the diocese of Winchester complained that their congregations were much reduced in the summer time. In 1846 Francis Close, the Evangelical parson of Cheltenham, said, when Sunday trains began to serve the town, 'Another page of Godless legislation, another national sin invokes the displeasure of the Almighty.' His ranting was ignored and the trains continued to run. This heavy-handed approach was not, however, the only approach used. The case against running trains was sometimes stated with considerable restraint. The Sabbatarians treated Sunday as both a day of observance and as a day of rest. However, some secularly minded people argued that what was at issue here was really a battle of classes. The Sabbatarians seemed to them to be denying to the poor what remained accessible to the rich, who kept their own carriages and could travel as they chose. Ought not the railways, as an instrument of social mobility, be available to all? The Duke of Wellington, hardly a radical thinker, thought they should. Two dreadful accidents to excursion trains, in 1858 and 1861, were seen as a judgement of God on the sin of providing Sunday excursion trains.
There were signs by the 1850s of support for increased railway facilities on Sundays, but the anti-travel lobby seemed to be growing stronger. In 1856 Parliament turned down attempts to open the chief London museums and galleries on a Sunday afternoon by an eight-to-one majority. In 1861 5.7 per cent of the system was closed on Sundays; by 1871 it was 18.9 per cent. By 1914 about 3,700 miles of the system in England and Wales were closed on a Sunday, a little over 22 per cent of the whole. The railways' Sunday business had never been large and was carried out at a substantially higher cost than the weekday business. There were therefore strong arguments for keeping it down. Railway companies also needed to relay track, and Sunday was the only day this could be done in daylight without disrupting traffic: most of the conversion of the gauge on the Great Western Railway was carried out at weekends.
The Sabbatarians began to revive at the end of the century. The Anti-Sunday-Travelling Union launched a new periodical, Our Heritage, in 1895, fomenting criticism of all the railways' Sunday services. Protesters soon began to lobby management and disturb shareholders' meetings. Their doctrine no longer represented prevalent thinking. The successful body in these years was on the other side: the National Sunday League, founded in 1855 to support the Sunday opening of museums and parks. The Sunday service was often very slow. The policy of the various companies was to impose a strict rigidity of their own as far as timetabling was concerned. In part this was because of their anxiety not to offend Sabbatarian susceptibilities, the requirement of carrying mail and the demands of railwaymen for additional Sunday pay. Railways had begun by offering emancipation: opportunities to travel over substantial distances on Sunday. In doing so they violated the old Protestant Sunday in Britain. Despite opposition, the railways' intervention enlarged the choice open to individual consciences. In this sense they can be seen as progressive.
With the development of the Victorian excursion system came tourism and the family holiday. There are several reasons why the relationship between tourism and the railways was closer in Britain than on the Continent. The smallness of Britain was itself an invitation to provide this kind of service: a quick trip to the coast was an attractive proposition. This combined with a large increase in average wages, of 72 per cent between 1860 and 1913, and the increase in paid holidays. Victorian working men had more money to spend and more leisure time in which to spend it than workers in France and Germany. British governments ignored the excursion business because it was the affair of the railway companies, and theirs alone, to determine when and where these trains should run. As a result there was a continually increasing provision of excursions to match public demand.
The railway companies fostered the habit of taking short holidays over the weekend. In doing so they developed a practice that had emerged in the late eighteenth century. The London & South Western seems to have been the first railway to encourage such behaviour: in 1842 it offered tickets at reduced fares from London to Southampton and Gosport on Saturday for return either on the same day, on Sunday or on Monday. In 1844 the South Eastern ran six excursions to Dover with the option of extending the journey to France for the weekend. The motive here was profit. Other railways had different motives. Mark Huish, manager of the North Western, did not like Sunday trains for religious reasons [he was a strong Nonconformist]. He believed that the weekend ticket provided a substitute for Sunday travel. The Saturday-to-Monday holiday does not seem to have acquired the name 'week-end' until 1870. By the late 1880s the habit had evidently grown. Bradshaw, the railway guide, shows early morning trains running up to London on Mondays only from Eastbourne, Hastings, Ramsgate and Yarmouth, as well as from Llandudno to Liverpool, Manchester and Birmingham. By 1914 there were ten such trains altogether, but there never seem to have been any in Scotland. For the middle classes the development of this type of service had two advantages. First, it gave them the opportunity to take weekend holidays. Secondly, it allowed the family to live away from the major conurbations, with the husband going home for the weekend.
There are major problems for historians in examining Victorian tourism and the impact railways made. There are no parliamentary enquiries or official statistics. Census returns give little information since they were taken in April, when few tourists were on the move. As the tourist traffic grew in extent and complexity, many of the British railway companies put part of it into the hands of agents or outside firms. The 'tour' was nothing new; guidebooks had been produced since the seventeenth century, and the Grand Tour was part of the education of the wealthy in the eighteenth century. By 1824 steamboats were plying the east coast from London to Leith, the port of Edinburgh; one was named The Tourist. In 1845 two men, appreciating what could be done with railways and steamers, came forward with offers to arrange this kind of travel, guaranteeing accommodation on trains and ships in return for a single payment: Joseph Crisp in Liverpool and Thomas Cook in Leicester. It is Cook on whom we focus:
- Cook offered two tours by train from Leicester to Liverpool and on by a steamboat to North Wales in 1845. The following year he organised a tour to Scotland.
- Cook never had any monopoly in the travel business, nor with his liberal principles would he have sought one. But his firm remained much the most famous. His outstanding quality was his imagination, served by intense energy; his lack of real business sense resulted in the firm moving from Leicester to London in the early 1860s and in his son John Mason Cook taking a more active role on the strict business side.
- The result was the genesis of what today is called the package holiday.
The railways' excursion and tourist business had come to be very substantial indeed before 1914. In the early years, down to 1851, the pleasure of travel was combined with the discomfort and fear of travelling by the new trains. Improvisation was the characteristic feature of tourism. By the 1860s the business had come, as a general rule, to be well managed. Trains were more tolerable to travel in and excursions had become an accepted part of the British railway system. Seaside resorts strove hard to achieve rail links with London or the great industrial centres. When Torquay achieved this ambition in 1848 a public holiday was declared in the town. The railway did not reach Bournemouth until 1870, but in the following decade its population grew from 5,896 to 16,859 and by 1911 reached 78,674. Excursions became most obvious in the mass movements on or around the principal public holidays. There would have been little point in Parliament passing the Bank Holidays Act 1871 had there not existed a railway system capable of carrying thousands of wage earners and their families to the seaside and back in a day at remarkably cheap rates.
Urban growth and creation was influenced by the emergent railway system. Some towns owed their very existence to the enterprise of a railway company. Others would not have grown if the railway had not helped to provide access to markets for the goods they produced and yet others had their character radically altered as a result of the extension of railway communications.
- In 1841 no such place as Crewe appeared in the national census. There were only two small parishes of Monks Copenhall and Church Copenhall with a combined population of 747. Communications in the area were poor; roads were covered with 'excessively deep' ruts. By the end of 1842 four routes converged at this point of the railway system, establishing links with Manchester, Birmingham, Chester and Liverpool. In late 1841 the Grand Junction Railway started to build its locomotive and carriage works there and the town grew very rapidly. By 1901, the year in which it produced its 4,000th locomotive, Crewe had a population of 42,074. Less spectacular was the development of Wolverton in Buckinghamshire. In 1838 the London & Birmingham Railway decided to establish its engine works on a site conveniently placed between the two cities. Population grew: from 417 in 1831 to 2,070 by 1851. By 1907 there was employment for 4,500 men and boys in the railway carriage works. Swindon became the engineering centre for the Great Western Railway, employing 14,000 men by the turn of the century. Railway workshops were not always built in rural settings and it is easy for historians to overlook those in the urban environment. Stratford, in east London, fulfilled the same role for the Great Eastern Railway as Swindon did for the Great Western.
- Dozens of towns, though not the creation of railway companies, owed their rapid development to the presence of good communications. The spectacular emergence of Barrow in Furness as a major industrial centre after 1840 was associated with the expansion of iron mining and smelting; the Furness Railway played a decisive role in opening up a district that had earlier been remote and difficult of access. Middlesbrough, though not so geographically remote, was a parallel case in that the railway was an essential agency in the growth of the iron and steel industry.
The railways' influence in opening up new urban centres continued up to 1914. After 1860 the construction of branch lines helped to create or enlarge residential suburbs of large towns and cities rather than establish completely new industrial towns. However, the early development of railways was not entirely constructive, and when the railway companies extended their ownership of property within already existing cities their role was also partly destructive. By 1900 the railways owned over five per cent of the central areas of London and Birmingham, more than seven per cent of the corresponding districts of Glasgow and Manchester and nine per cent of central Liverpool. This led to considerable dispossession of the powerless and the poor. In building new stations, goods yards and stables in the centre of big cities, railway companies avoided large factories. It was far cheaper and less complicated to buy up large numbers of individual houses, especially where one landlord owned them. As a result railways contributed to the creation of urban ghettos and inner city deprivation.
- As well as those dispossessed by the building of a new depot, there was usually an influx of labour into an area as more casual labour would be needed. At the same time as the number of houses and rooms was reduced, the demand increased. It was all very well suggesting that those displaced should find new homes outside the city centre. But, as one witness to the Royal Commission on Metropolis Railway Termini expressed it, 'the poor man was chained to the spot; he had neither the leisure to walk nor the money to ride....'
- 'Money to ride' implied travelling at the normal third-class, penny-a-mile rate established from 1844, and this proved to be too expensive. Parliament did attempt to require railway companies to provide workmen's trains at very cheap concessionary rates. In the Cheap Trains Act 1883 Parliament intervened in a more comprehensive manner and extended what some railway companies had already begun to do. The result was a dramatic increase in workmen's tickets. An average of 26,000 were issued daily in the London area in 1882; by 1912 a quarter of all suburban rail passengers travelled with these tickets.
- With parliamentary encouragement the railways had made something of a contribution to the dispersal into healthier districts of the people living in the grossly overcrowded city centres. However, the continued existence of slums a generation after the passage of the 1883 Act makes it very clear that it would be wholly misleading to suggest that the housing problem could be solved simply by a policy of concessionary fares.
The importance of the railways to social developments between 1830 and 1914 cannot be overestimated. Railways impinged on the lives of all sections of society, increased mobility and improved diet, as well as introducing a degree of uniformity on the diversities of regional and local experience. They engendered wonder and fear, changed the landscape of the country whether rural or urban, and liberated society from the constraints and slowness of existing modes of travel. Yet railways could not have been as successful as they were without those modes of travel. Roads provided short-distance feeders to railway stations; canals still carried heavy goods; and England was still a horse-driven society in 1914.
On the social impact of railways Jack Simmons The Victorian Railway, Thames & Hudson, 1991 is a work of major importance.
It was 111 years before another bad accident occurred to an excursion train running on a Sunday!
On this see Jeremy Black The Grand Tour, Alan Sutton, revised edition, 1992.
Catholic Encyclopedia (1913)/Missouri
The State of Missouri was carved out of the Louisiana Territory, and derives its name from the principal river flowing through its centre. The name (pronounced Miz-zoo'ri) signifies "big muddy" in the Indian language. Geographically, Missouri is the central commonwealth of the Federal Union.
BOUNDARIES AND AREA
The boundaries are the State of Iowa on the north; Arkansas on the south; on the east the Mississippi River separates it from Illinois, Kentucky, and Tennessee; on the west it is bounded by Nebraska, Kansas, and the new State of Oklahoma. It lies between 40°30' and 36°30' N. lat., except that a small projection, between the Rivers St. Francis and Mississippi, extends about 34 miles farther south between Tennessee and Arkansas. The area of the state is 69,415 square miles.
The Missouri River follows the western boundary of the state as far south as Kansas City; then turning east, it flows across the state and empties itself into the Mississippi about twelve miles above St. Louis. The portion of the state lying north of the Missouri is a great extent of gently rolling prairie, intersected here and there by streams which are lined with timber and flow south into the Missouri or east into the Mississippi. The western portion of the state, north of the Missouri River, is generally level, but rises to about one thousand feet above sea-level in the northwestern corner of the state. The eastern portion, north of the Missouri River, is more broken, with some hilly land bordering the Mississippi and Missouri rivers. The portion of the state south of the Missouri is more rolling; it is well wooded, especially in the south-east, with some swamp lands in the extreme south-eastern section. The Ozark Mountains break into the south central part of the state, but rise to no considerable height (highest elevation 1600 feet). West of these mountains the land is rolling, but arable and fertile, being especially adapted to fruit-growing. It is in this section that the famous Missouri red apples are grown in the greatest quantities.
According to the first federal census of Missouri, taken in 1810, the state had then 20,845 inhabitants. The census of 1910 places the population at 3,293,335. According to the Missouri Bureau of Labor Statistics for 1909, the population of the state at the beginning of that year was 3,925,335.
Agricultural and Farm Products
The value of the output of farm crops alone for the year 1908 was $171,815,553. Of the total crop valuation $98,607,605 consisted of Indian corn, in the production of which Missouri is the first state in the Union. The greater portion of the crop is consumed by live stock within the state; this portion is not estimated in the surplus given below. The surplus in livestock for the year ending 31 December, 1908, consisting of cattle, horses, hogs, mules, and sheep, was 7,097,055 head, valued at $112,535,494. Missouri is constantly gaining as a wool-producing state; in 1908 there was $1,306,922 worth of wool sold. The farm-yard products are important items in the agricultural statistics; the surplus of poultry, eggs, and feathers for the year 1908 was $44,960,973. Missouri has never been considered an important dairying state, but since 1904 there has been a remarkable growth in this industry. The statistics in 1904 show an estimated total value from the dairies of $4,900,783, while the statistics of 1908 give a total value of $20,651,778. The cotton crop of 1908 brought $3,723,352.
Mines and Timber
In 1907 the Federal authorities ranked Missouri the chief lead-producing state of the Union. The returns from the smelters for 1908 show that the state mined enough lead ore to produce 122,451 tons of primary lead. The total valuation of the lead produced in 1908 was $8,672,873. For 1908 the State Mining Department placed the production of zinc ore at 197,499 tons, and its value at $6,374,719. Nickel, copper, and cobalt are among the valuable minerals produced in Missouri. According to the United States geological survey of 1907, Missouri and Oregon were the only states producing nickel: 400 tons of metallic nickel, 200 tons of metallic cobalt, and 700 tons of metallic copper were produced in 1908. Iron ore to the value of $218,182 was produced in the year 1908. There was an output of $26,204 in silver. In the production of clay and shale goods Missouri held seventh rank in 1908. In cement the state also held seventh place. The total output in lime, cement, brick, and tiling for 1908 aggregated a value of $8,904,013. Petroleum wells exist in one or two counties close to the Kansas border, and some natural gas has been found in the state. Coal exists in abundance, the value of the output in 1908 being $5,644,330. The products of the forests of Missouri produced in 1908 over 450,000,000 feet of assorted lumber with an estimated valuation of $8,719,822, while over $4,000,000 worth of railroad ties were also produced in that year.
The following table of surplus products, given out by the Bureau of Labor Statistics in 1909, is a concise statement of the surplus of the state which was added to the commerce of the world during 1908.
RÉSUMÉ OF VALUATIONS BY GROUPS
- Live stock: $112,535,494
- Farm crops: 34,991,518
- Mill products: 30,283,689
- Farmyard products: 44,960,973
- Apiary and cane products: 117,694
- Forest products: 22,958,014
- Dairy products: 8,260,711
- Missouri "Meerschaum" products: 424,449
- Nursery products: 1,061,173
- Liquid products: 1,210,739
- Fish and game products: 636,629
- Packing-house products: 1,872,318
- Cotton products: 3,723,352
- Medicinal products: 95,398
- Vegetable and canned goods: 6,692,426
- Fresh fruit: 5,089,384
- Wool and mohair: 1,308,812
- Mine and quarry products: 24,992,789
- Stone and clay products: 8,904,013
- Unclassified products: 4,623,953
- Total value: $314,743,528
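As an editorial arithmetic check (not part of the original 1909 report), the twenty group valuations can be summed and compared against the stated total; the figures below are transcribed from the table above:

```python
# Surplus valuations by group, transcribed from the 1909 Bureau of
# Labour Statistics résumé above (values in dollars).
groups = {
    "Live stock": 112_535_494,
    "Farm crops": 34_991_518,
    "Mill products": 30_283_689,
    "Farmyard products": 44_960_973,
    "Apiary and cane products": 117_694,
    "Forest products": 22_958_014,
    "Dairy products": 8_260_711,
    'Missouri "Meerschaum" products': 424_449,
    "Nursery products": 1_061_173,
    "Liquid products": 1_210_739,
    "Fish and game products": 636_629,
    "Packing-house products": 1_872_318,
    "Cotton products": 3_723_352,
    "Medicinal products": 95_398,
    "Vegetable and canned goods": 6_692_426,
    "Fresh fruit": 5_089_384,
    "Wool and mohair": 1_308_812,
    "Mine and quarry products": 24_992_789,
    "Stone and clay products": 8_904_013,
    "Unclassified products": 4_623_953,
}

# Sum of all groups; the table's stated grand total is $314,743,528.
total = sum(groups.values())
print(f"${total:,}")  # → $314,743,528
```

The groups do in fact sum exactly to the published total, so the table is internally consistent.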
MEANS OF COMMUNICATION
Although the Mississippi River runs the full length of the eastern boundary of the state, and the Missouri flows directly through the state, neither of these streams is of any considerable commercial value as a means of communication or transportation. Railroad facilities, however, are ample, there being 7991 miles of main line with about 3000 miles of sidings. There are 63 steam systems operating in the state. Across the Mississippi River at St. Louis there are a railroad bridge, a street-car bridge, and a combination railroad, street-car, and passenger bridge, while a municipal free bridge for the accommodation of railroads, electric roads, wagons, and foot traffic is in process of construction.
The State University of Missouri was established by legislative act approved on 11 February, 1839, and the university was located at Columbia, Boone County, on 24 June, 1839. The corner-stone of the main building was laid on 4 July, 1840. Courses of instruction in academic work were begun on 14 April, 1841, and a Normal Department was established in 1867 and opened in September, 1868. The College of Agriculture and Mechanic Arts and the School of Mines and Metallurgy were made departments of the university in 1870, the School of Mines and Metallurgy being located at Rolla. The law department was opened in 1872, the medical department in 1873, the engineering department in 1877, and the department of journalism in 1908. In 1888 the Experiment Station was established under Act of Congress, and the Missouri State Military School in 1890. For the scholastic year 1908 there were enrolled in the entire university 3033 students. The officers of instruction and administration consisted of 104 professors, 64 instructors, and 54 assistants. Apart from the above-mentioned institutions, which are all under the supervision of the University of Missouri proper, the state maintains the Lincoln Institution at Jefferson City for the education of negro children in agriculture and mechanic arts.
The state is divided into 10,053 school districts. The total number of teachers in the public schools in the year 1908 was 17,998, the total number of pupils being 984,659. For the year ending 1 July, 1908, the public schools cost the tax-payers $12,769,689.93. The law requires that every child with sound body and mind, from six to fourteen years of age, attend either a public or private school during each school year. Missouri has the largest permanent interest-bearing school-fund of any state in the Union. This fund in 1908 amounted to $14,014,335.45. Apart from the primary and high schools there are six state normal institutions, of which one is located in each of the following cities: Columbia (Teachers' College), Kirksville, Warrensburg, Cape Girardeau, Springfield, and Maryville.
The first settlement was made at Ste. Genevieve in 1735 by the French, and the second by the French at St. Louis in 1764. The Spanish also came up the river in search of gold, and St. Louis was soon a busy trading centre for the citizens and the Indians inhabiting the surrounding territory. From the eastward soon came emigrants from other states - especially Kentucky, Tennessee, and the Virginias - and later came the emigrants from foreign shores, particularly the Germans, Irish, and some Scotch. The later growth of the state has been made up of settlers from almost all of the states lying to the eastward, but more particularly from those mentioned, with many from Maryland and the Carolinas. There are settlements of Italians, Hungarians, and Bohemians, but on the whole these nationalities make up only a small part of the population. St. Louis is a cosmopolitan city, but the predominant strains of foreign blood are German and Irish.
ADMISSION TO THE UNION
Missouri was admitted into the Union conditionally on 2 March, 1820, and was formally admitted as a state on 10 August, 1821, during the presidential administration of James Monroe. At a convention held at St. Louis on 19 July, 1820, the people passed on the Act of Congress, which was approved in March of the same year, and a constitution was drawn up and a new state established. Under this constitution, in August, 1820, the people held a general election, at which state and county officers were chosen and the state government organized. The constitution now in force was adopted by vote of the people on 30 October, 1875, and came into operation on 30 November of the same year.
NOTABLE EVENTS IN POLITICAL HISTORY
The admission of Missouri as a state provoked much bitter discussion in Congress, and terminated in what has since been known as "The Missouri Compromise". This bill provided that Missouri should be admitted as a slave state, but forever prohibited slavery in the remainder of the Louisiana Territory lying north of 36°30' N. lat., which line is the southern boundary of Missouri. The matter of slavery was the cause of many controversies during the early history of the state, and during the Civil War over 100,000 soldiers were contributed to the Union army and 50,000 to the Confederacy.
MATTERS DIRECTLY AFFECTING RELIGION
Freedom of Worship
Section 5, Article 2, of the Constitution of 1875 provides "that all men have a natural and indefeasible right to worship Almighty God according to their own conscience; that no person can, on account of his religious opinions, be rendered ineligible to any office of trust or profit under this State, nor be disqualified from testifying, or from serving as a juror; that no human authority can control or interfere with the rights of conscience; that no person ought, by any law, to be molested in his person or estate, on account of his religious persuasion or profession; but the liberty of conscience hereby secured shall not be so construed as to excuse acts of licentiousness, nor to justify practices inconsistent with the good order, peace or safety of this State, or with the rights of others." The recognition of a God herein manifested does not in any way prejudice the interests of atheists. That a man is an atheist or has peculiar religious opinions does not prejudice him as a witness (11 Mo. App. 385). Sunday regulations are not void on account of peculiar religious opinions of certain citizens (20 Mo. 214); nor can a contract be voided by one voluntarily entering into it on the ground that it requires him to live up to certain religious beliefs (Franta v. Bohemian Roman Catholic C. U., 164 Missouri, 304).
The Constitution also provides that no person can be compelled to erect, support, or attend any place or system of worship, or to maintain or support any priest, minister, preacher, or teacher of any sect, church, creed, or denomination of religion; but if any person shall voluntarily make a contract for any such object, he shall be held to the performance of the same; that no money shall ever be taken from the public treasury directly or indirectly, in aid of any church, sect, or denomination of religion, or in aid of any priest, preacher, minister, or teacher thereof as such; and that no preference shall be given to nor any discrimination made against any church, sect, or creed of religion, or any form of religious faith or worship; that no religious corporation can be established in this state, except such as may be created under a general law for the purpose only of holding the title to such real estate as may be prescribed by law for church edifices, parsonages, and cemeteries.
The law provides that the Sabbath shall not be broken by the performance of any labour, other than works of necessity, on the first day of the week, commonly called Sunday, and the master is held to account for compelling or permitting his servants or apprentices to labour on that day. But any member of a religious society which observes any other day than Sunday as the Sabbath, is not bound to observe Sunday as such. Horse-racing, cock-fighting, and playing games, as well as hunting game, are forbidden on Sunday. The selling of any wares or merchandise, the opening of any liquor saloon, and the sale of fermented or distilled liquors are forbidden on Sunday.
Administering of Oaths
Every public official is required to take an oath to perform the duties of his office and to support the Constitution of the United States and of the State of Missouri, and all witnesses in every court are required to give their testimony "under oath"; however, any person who declares that he has conscientious scruples against taking any oath or swearing in any form, is permitted to make his solemn declaration or affirmation concluding with the words "under the pain and penalty of perjury". Where it appears that the person to be sworn has any particular mode of swearing in addition to or in connexion with the usual form of administering oaths, which to him is a more solemn and binding obligation, the court or officer administering the oath is required to adopt the form most binding on the conscience of the person to be sworn. Any person believing in any other than the Christian religion, is sworn according to the prescribed ceremonies of his own religion, if there be any such (sec. 8840 to 8845 R. S. 1899).
Use of Prayer in Legislature
There is no statutory provision for a chaplain for either branch of the legislature, but the rules of these bodies provide for a chaplain for each, who is paid out of a contingency fund. The chaplain is elected by the legislative body for each session. No Catholic priest has ever been elected to this position.
Seal of Confession
Section 4659 R. S. 1899 provides that a minister of the Gospel or a priest of any denomination shall be incompetent to testify concerning the confession made to him in his professional character in the course of discipline enjoined by the rules or practice of such denomination.
MATTERS AFFECTING RELIGIOUS WORK
Incorporation of Churches
No religious corporation can be established in this state except such as may be created under the general law for the purpose only of holding the title of such real estate as may be necessary for churches, schools, parsonages, and cemeteries. There is no constitutional or statutory recognition, as in some states, of any churchman in his official capacity. The property of a diocese, for example, is vested in the individual and not in the bishop as such.
Exemption from Taxes and Public Duties
The constitution of the state exempts from taxation church property to the extent of one acre in incorporated cities or towns, or within one mile from such cities or towns. Church property to the extent of five acres more than one mile from incorporated cities or towns is exempt from taxation. These exemptions are subject to the provision that such property is used exclusively for religious worship, for schools, or for purposes purely charitable.
The law also provides that no clergyman shall be compelled to serve on any jury. Ministers of the Gospel may select such books as are necessary for the practice of their profession, and the same are exempt from attachment under execution. It is not lawful for any city or municipality to exact a tax or licence fee from any minister of the Gospel for authorizing him to follow his calling.
Marriage and Divorce
Marriages are forbidden and void between first cousins, or persons more nearly related than first cousins, such as uncles and nieces, etc. Any judge of a court of record or justice of the peace, or any ordained or licensed preacher of the Gospel, who is a citizen of the United States, may perform a marriage ceremony. A licence of marriage is required, and no licence will be issued to a male under the age of twenty-one or to a female under eighteen without the consent of the father of the minor or if the father cannot act, of the mother or guardian. The law requires that the person performing the marriage ceremony shall return a certificate of the service to the state authorities. The causes for divorce are enumerated in the statute, and, besides the usual clause, it is provided that a divorce may be granted when it is proved that the offending person "has been guilty of conduct that makes the condition of the complaining party intolerable". This clause makes it possible to secure a divorce on any grounds that the judge considers sufficient, and is thought to be the source of some abuse. Residence of one year in the state is required before a petition for divorce may be filed. There is no statutory prohibition against divorced persons marrying at any time after a decree of divorce has been granted.
Every parish of any considerable size in the state maintains a parochial school. There are 228 parochial schools in the state with 38,098 children in attendance. Each diocese has its own school-board, and a uniform system of text-books is used throughout the diocese. There are eight colleges and academies for boys with 1872 students in attendance, and 38 academies and institutions of higher education for girls with 4480 pupils in attendance. The St. Louis University, conducted by the Jesuit Fathers, is one of the leading educational institutions of the country. It conducts a school of divinity, a school of philosophy and science, a school of medicine, a school of dentistry, an institute of law, and an undergraduate and academic department. There is a total of 950 lay students in attendance. No parochial or private schools receive any assistance or support from the state, and all citizens are required to contribute to the support of the public schools regardless of whether their children attend a private or a public institution.
There are in the state 10 orphan asylums with 1248 inmates; 25 hospitals; 2 deaf-mute institutions with 60 inmates; 3 homes for aged persons; 1 industrial and reform school; 1 foundling asylum, and 1 newsboys' home - all under Catholic auspices. The state does not contribute anything to the Catholic orphanages, but the foundling asylum in St. Louis receives some remuneration for keeping waifs who are found by the police and intrusted to that institution.
There is a State Board of Charities and Corrections, of which the governor is a member ex officio. This board has general supervision over the charitable institutions conducted by the state. There is a state hospital at Fulton, at St. Joseph, at Nevada, and at Farmington. There is a state Confederate Soldiers' Home at Higginsville, and a State Federal Soldiers' Home at St. James. A school for the deaf is maintained at Fulton, a school for the blind at St. Louis, and a colony for the feeble-minded and epileptic at Marshall. The Missouri State Sanitarium for the treatment of tuberculosis is located at Mt. Vernon on the crest of the Ozark.
SALE OF LIQUOR
Intoxicating liquors may be sold only by licensed saloon-keepers. In cities of two thousand or more inhabitants the application for licence must be accompanied by a petition asking that the licence be granted. This petition must be signed by a majority of the tax-paying citizens owning property on the block or square in which the saloon is to be kept. In cities or towns of less than two thousand inhabitants the petition must be signed by a majority of the tax-paying citizens, and a majority in the block where the saloon is to be kept. The law provides that the licence may be revoked upon the application of any person showing to the county court that the licence-holder does not keep an orderly house, and it is provided that one (1) whose licence has been revoked, (2) who has violated any of the provisions of the licence law, (3) who has sold liquors to any minor, (4) who has employed in his business of saloon-keeper any person whose licence has been revoked, shall not be entitled to a licence. The law prohibits (1) the sale of intoxicating liquors to habitual drunkards, minors, or Indians, (2) the keeping of female employees in saloons, and (3) the keeping, exhibiting, or using of any piano, organ, or any other musical instrument in a saloon. These laws are generally enforced. The law provides that upon application by petition to the county court signed by one-tenth of the qualified voters of any county, who shall reside outside of the cities or towns having a population of 2500 or more, an election shall be held to determine whether or not spirituous liquors shall be sold within the limits of such county. In cities or towns with a population of 2500 or more, the petition is made by one-tenth of the qualified voters to the body having legislative functions therein. If a majority of the qualified voters at such election vote against the sale of intoxicating liquors, no licence can be issued for the sale of liquor within such jurisdiction. Section 3034 R. S. of 1899 provides among other things that nothing in the law shall be so construed as to prevent the sale of wine for sacramental purposes.
PRISONS AND REFORMATORIES
The state penitentiary is at Jefferson City; there is a reformatory for boys at Booneville and an industrial home for girls at Chillicothe. The law provides for the appointment of a chaplain for the penitentiary by the warden and the board of inspectors, consisting of the state treasurer, auditor, and attorney-general. The law makes no reference to the religious denomination of the chaplain, but provides that his selection shall be governed by his special qualifications for the performance of the duties devolving upon him. He is required to conduct at least one service each Sunday; to visit convicts in their cells at least once a month, when practicable; to visit the sick in the hospital at least once a day; to hold religious services in the hospital once a week. He shall have charge of the prison library and the purchase of books; he shall officiate at the funeral of each convict, and be present at his burial; he is paid the salary of $1200 per annum. The law further provides that clergymen of every denomination of the City of Jefferson shall at all times have free access to the prison, or may visit any convict confined therein - subject only to such rules as may be necessary for the good government and discipline of the penitentiary - and may administer rites and ceremonies of the Church to which such convict belongs, if it be so desired. There is no statutory provision for a chaplain at the reformatory or the industrial home. Such religious ceremonies as are held at these institutions are conducted by those interested in the work through arrangements made with the officials in charge. Such ceremonies are largely within the discretion of the officials, but the spirit of the law as laid down for the penitentiary prevails. This is also true of the state insane asylum and the reform schools and jails of the cities. 
In a majority of these institutions religious services are held by Catholic priests at regular intervals, and accommodations are provided for the celebration of Mass and the administration of the sacraments.
The courts are accustomed to permit every charitable use to stand, which comes fairly within the Statute of Elizabeth. While this statute has not been incorporated in the state laws, its general provisions have been followed by the decisions. A case involving the Mullanphy will, which left a fund to furnish relief "to all poor emigrants and travellers coming to St. Louis on their way bona fide to settle in the West", reported in 29 Mo. 543, brought out an early discussion of charitable bequests; this provision was declared valid, and, as a precedent, has been generally followed. There is no statutory limitation, as in some states, upon the amount that may be bequeathed or devised to charity. The Constitution of 1865 prohibited all bequests and devises of land for religious purposes. A bequest for Masses was held void under this section of the constitution. An outright gift to the Archbishop of St. Louis was also held void because it was shown there was an understanding that the money was to be used for religious purposes (Kenrick vs. Cole, 61 Missouri, 572). This section was omitted from the Constitution of 1875, and the courts have been liberal since in construing such bequests as charitable and therefore valid.
DIOCESES AND CATHOLIC POPULATION
The state is divided into three dioceses: those of St. Louis, Kansas City, and St. Joseph. The Diocese of St. Louis comprises all of the eastern half of the state; that of Kansas City the western portion of the state, south of the Missouri River, and the Diocese of St. Joseph the western portion of the state, north of the Missouri River. The Catholic population in 1909 was 452,703. There are about 3000 Catholic negroes in the state, with one church in St. Louis and one coloured priest. There is one coloured Catholic school with 110 pupils, and one orphan-asylum for coloured children, conducted by the Oblate Sisters of Providence.
FIRST CATHOLIC MISSIONS
The Cross was planted among the Indians who inhabited the region now known as Missouri during the first half of the sixteenth century by De Soto, who was buried in the waters of the Mississippi in May, 1542. Marquette descended the Mississippi as far south as the thirty-fourth degree in 1673, more than a century and a quarter after De Soto had marched northward, and tells us that he preached the Gospel to all of the nations he met. It is thought by some that there was a white settlement at the mouth of the River Des Pères in Missouri, a few miles south of St. Louis, even before the historical settlement of Cahokia, Illinois (the sole centre of civilization in the Mississippi Valley for some time), but the first permanent settlement of which we have any record was made at Ste. Geneviève about 1734. Among the oldest records in the state are those of the Catholic church at Ste. Geneviève. There was also a mission in 1734 at Old Mines, which was a military station in Missouri. Ste. Geneviève and Old Mines were attended by priests from Cahokia. The first mission was established in St. Louis in 1764, and the first church was built in 1770. A mission was established at Carondelet in 1767. Fredericktown, New Madrid, St. Charles, and Florissant were missionary points during the last half of the eighteenth century. The Lazarist Fathers were established at Perryville in 1818, and the Jesuits at Florissant in 1823. The early settlements were made up of French, many of them coming from Canada. A great many German Catholics came to the state during the first part of the nineteenth century, but the first German sermon of which we have any record was preached by Rev. Joseph A. Lutz at St. Louis in 1832. During this same period a large portion of the immigration was made up of Irish Catholics. The names of many of the early settlements bear evidence of the Catholicism of those who were first established there.
The later immigration into the state has been made up of almost every nationality, and almost all of the Catholic countries are represented. A famous episode in the state's history was Archbishop Kenrick's successful resistance to the test oath required by the Drake Constitution of 1865. He finally won the case in the Supreme Court of the United States (see OATH, MISSOURI TEST).
PRINCIPAL RELIGIOUS DENOMINATIONS
According to the Bulletin issued by the Department of Commerce and Labour, Bureau of the Census, concerning religious bodies in 1906, the total population of church members in the State of Missouri was 1,199,239, and the principal religious denominations were as follows: Roman Catholics, 382,642; Baptists, 218,353; Congregationalists, 11,048; Disciples or Christians, 166,137; German Evangelical, 32,715; Lutherans, 46,868; Methodists, 214,004; Presbyterians, 71,999; Episcopalians, 13,328; Reformed Bodies, 1284; United Brethren bodies, 3316; other Protestant bodies, 23,166; Latter-day Saints, 8042; all other bodies, 6439. Thus, 31.9 per cent of the total number of church-going people in the state are Catholics, the Baptists having the next highest percentage (18.2), and the Methodists being third (17.8).
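The quoted percentages follow directly from the membership counts in the bulletin; as an editorial check (not part of the original article), they can be reproduced like so:

```python
# Membership counts from the 1906 Census Bureau bulletin quoted above.
total_members = 1_199_239
counts = {
    "Roman Catholics": 382_642,
    "Baptists": 218_353,
    "Methodists": 214_004,
}

def share(n: int) -> float:
    """Percentage of total church membership, to one decimal place."""
    return round(100 * n / total_members, 1)

for name, n in counts.items():
    print(f"{name}: {share(n)} per cent")
```

This yields 31.9 per cent for Catholics, 18.2 for Baptists, and 17.8 for Methodists, matching the figures in the text.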
HOUCK, Hist. of Missouri (Philadelphia, 1908); WILLIAMS, Hist. of the State of Missouri (Columbia, 1904); BILLON, Annals of St. Louis (St. Louis, 1880); SCHARF, St. Louis City and County (Philadelphia, 1883); Jesuit Relations; BECK, Gazetteer of Missouri (St. Louis, 1875); IRVING, Conquest of Florida (New York, 1851); Constitution of Missouri; Revised Statutes (1899); Red Book; Bureau of Labour Statistics (Jefferson City, 1909); Manual of the State of Missouri, 1909-10; Bulletin No. 103, Religious Bodies, 1906, Bureau of the Census (Washington).
JOHN L. CORLEY
Scientific Name: Equus ferus Boddaert, 1785
Infra-specific Taxa Assessed: Equus przewalskii Poliakov, 1881
Taxonomic Notes: Current scientific review of the taxonomy of wild equids (Groves 1986) places Przewalski's Horse as a subspecies of the extinct Equus ferus. Although Przewalski's Horse (Equus ferus przewalskii) can hybridize with domestic horses (Equus ferus caballus) to produce fertile offspring (Ryder et al. 1978, Trommerhausen-Smith et al. 1979), the existence of 2n = 66 chromosomes in Przewalski's Horse identifies it as being more different from its domestic relatives (2n = 64) than are any two breeds of domestic horse (Ryder 1994). Furthermore, mitochondrial DNA research has shown that the Przewalski's Horse is not the ancestor of modern domestic horses (Vilà et al. 2001). Przewalski's Horses also show a number of consistent differences in their appearance as compared to domestic horse breeds: the mane is short and erect when in good body condition; forelocks are nearly nonexistent; the upper part of the tail has short guard hairs; a dark dorsal stripe runs from the mane down the spine to the tail; several dark stripes can be present on the carpus and generally the tarsus (Groves 1994). Przewalski's Horses grow a thick mane in winter, which, contrary to domestic horses, they shed each spring with the rest of their winter coat.
Other studies of the genetic differences between Przewalski's and domestic horses have indicated very little genetic distinction between them. Only four alleles at four separate serological marker loci have been identified as specific to Przewalski's Horse (Bowling and Ryder 1987); the vast majority of blood protein variants are present in both Przewalski's and domestic horses and even the fastest evolving DNA region known in mammals (the mitochondrial DNA control region), does not show significant differences between the two types of horse (Ishida et al. 1995, Oakenfull and Ryder 1998). Thus it is clear that Przewalski's and domestic horses are very closely related and have in the past interbred, but the fixed chromosomal number difference between them indicates that they are distinct populations (Oakenfull et al. 2000). A variety of molecular studies support their phylogenetic relationship as sister taxa (Steiner et al. 2012, Côté et al. 2013) diverging between 150,000 and 250,000 years ago (Goto et al. 2011, Steiner and Ryder 2011).
Red List Category & Criteria: Endangered D ver 3.1
Assessor(s): King, S.R.B., Boyd, L., Zimmermann, W. & Kendall, B.E.
Reviewer(s): Moehlman, P.D. & Kaczensky, P.
Previously listed as Extinct in the Wild (EW) from the 1960s up to the assessment in 1996. The species was then reassessed as Critically Endangered (CR) due to at least one surviving mature individual in the wild. Successful reintroductions have qualified this species for reassessment. The population is currently estimated to consist of more than 50 mature individuals free-living in the wild for the past seven years. This taxon is threatened by small population size and restricted range, potential hybridization with domestic horses, loss of genetic diversity, and disease. As the population size is small, it is vulnerable to stochastic events such as severe weather. Equus ferus przewalskii qualifies as Endangered (EN) under Criterion D.
Range Description: Until the late 18th century, this species ranged from the Russian Steppes east to Kazakhstan, Mongolia and northern China. After this time, the species went into catastrophic decline. The last wild population of Przewalski's Horse (Equus ferus przewalskii) survived until the mid-20th century in southwestern Mongolia and adjacent Gansu, Xinjiang, and Inner Mongolia (China). Wild horses were last seen in 1969, north of the Tachiin Shaar Nuruu in Dzungarian Gobi Desert in Mongolia (Paklina and Pozdnyakova 1989).
All extant wild horses belong to the subspecies Equus ferus przewalskii. The first visual accounts of Przewalski's-type wild horses date from more than 20,000 years ago. Rock engravings, paintings, and decorated tools dating from the late Gravetian to the late Magdalenian (20,000-9,000 BC), were discovered in caves in Italy, southern France, and northern Spain; 610 of these were horse figures (Leroi-Gourhan 1971). Many cave drawings in France show horses that look like Przewalski's Horse (Mohr 1971). In prehistoric times, the species probably roamed widely over the steppes of Central Asia, China, and Europe (Ryder 1990), although wild horses in Europe could have been Tarpans (Equus ferus gmelini).
The first written accounts of Przewalski's Horse originate from Tibet, recorded by the monk Bodowa, who lived around 900 AD. In the "Secret History of the Mongols", there is also a reference to wild horses that crossed the path of Chinggis Khaan during his campaign against Tangut in 1226, causing his horse to rear and throw him to the ground (Bokonyi 1974). That the wild horse was a prestigious gift, denoting its rarity or that it was difficult to catch, is shown by the presentation of a Przewalski’s Horse to the emperor of Manchuria by Chechen-Khansoloj-Chalkaskyden, an important Mongolian, circa 1630 (Zevegmid and Dawaa 1973). In a Manchurian dictionary of 1771, Przewalski’s Horse is mentioned as "a wild horse from the steppe" (Dovchin 1961).
Przewalski's Horse was not described in Linnaeus's "Systema Naturae" (1758) and remained largely unknown in the West until first mentioned by John Bell, a Scottish doctor who travelled in the service of Tsar Peter the Great in 1719-1722 (Mohr 1971). His account of the expedition, "A Journey from St Petersburg to Peking", was published in 1763. Bell and subsequent observers all located horses known at that time within the area of 85-97°E and 43-50°N (Chinese-Mongolian border). Wild horses were reported again from what is now China by Colonel Nikolai Mikailovich Przewalski, an eminent explorer, at the end of the 19th century. He made several expeditions by order of Tsar Alexander the Second of Russia to Central Asia, aiming to reach Tibet. While returning from his second expedition in Central Asia, he was presented with the skull and hide of a horse shot about 80 km north of Gutschen (in present-day China, around 40°N, 90°E). The remains were examined at the Zoological Museum of the Academy of Science in St Petersburg by I.S. Poliakov, who concluded that they were a wild horse, which he gave the official name Equus przewalskii (Poliakov 1881). Further reports came from the brothers Grigory and Michael Grum-Grzhimailo, who travelled through western China from 1889-1890. In 1889, they discovered a group in the Gashun area and shot four horses: three stallions and a mare. The four hides and the skulls of the three stallions, together with an incomplete skeleton, were sent back to the Zoological Museum in St. Petersburg. They were able to observe the horses from a short distance and gave the following account: "Wild horses keep in bands of no more than ten, each herd having a dominant stallion. There are other males, too, but they are young and, judging by the hide of the two-year old colt that we killed, the dominant male treats them very cruelly. In fact, the hide showed traces of numerous bites" (Grum-Grzhimailo 1982).
After the 'rediscovery' of the Przewalski's Horse for western science, western zoos and wild animal parks became interested in this species for their collections. Several long expeditions were mounted to catch animals. Some expeditions came back empty-handed and some had only seen a glimpse of wild Przewalski's Horses. It proved difficult to catch adult horses, because they were too shy and fast. Capture of foals was considered the best option as when chased they would become exhausted and lag behind their group (Hagenbeck 1909), although this may have involved killing adult harem members in the process (Bouman and Bouman 1994). Four expeditions that managed to catch live foals took place between 1897 and 1902. Fifty-three of these foals reached the west alive. Between the 1930s and the 1940s only a few Przewalski's Horses were caught and most died. One mare (Orlitza III) was caught as a foal in 1947 and was the last wild mare to contribute to the Przewalski's Horse gene pool in Europe. In Mongolia several Przewalski's Horses were captured and crossbred with domestic horses by the Mongolian War Ministry (Bouman and Bouman 1994).
In subsequent years the captive population increased, and since the 1990s reintroduction efforts have started in Mongolia and China; Mongolia was the first country where truly wild reintroduced populations existed within the historic range. Reintroductions in Mongolia began in the Great Gobi B Strictly Protected Area in the Dzungarian Basin (9,000 km2) and Hustai National Park in Mongol Daguur Steppe (570 km2) in 1994 (King and Gurnell 2005). A third reintroduction site, Khomintal, (2,500 km2), in the Great Lakes Depression, was established in 2004, as a buffer zone to the Khar Us Nuur National Park in Valley of the Lakes (C. Feh pers. comm.). Releases began in the Kalamaili Nature Reserve (17,330 km2), Xinjiang Province, China in 2001 and in the Dunhuang Xihu National Nature Reserve (6,600 km2), Gansu Province, China in 2010 (Liu et al. 2014), although almost all of these animals are corralled and fed in winter (Qing Cao pers. comm.). Further reintroduction sites are planned in Kazakhstan and Russia (W. Zimmerman pers. comm.).
Regionally extinct: Kazakhstan; Russian Federation; Ukraine
Population: The history of population estimates and trends in the Przewalski's Horse has been described by Wakefield et al. (2002). Small groups of horses were reported through the 1940s and 1950s in an area between the Baitag-Bogdo ridge and the ridge of the Takhin-Shaar Nuruu (which, translated from Mongolian, means 'Yellow Mountain of the Wild Horse'), but numbers appeared to decline dramatically after World War II. The last confirmed sighting in the wild was made in 1969 by the Mongolian scientist N. Dovchin. He saw a stallion near a spring called Gun Tamga, north of the Takhin-Shaar Nuruu, in the Dzungarian Gobi (Paklina and Pozdnyakova 1989). Subsequent annual investigations by the Joint Mongolian-Soviet Expedition failed to find conclusive evidence for their survival in the wild (Ryder 1990). Chinese biologists conducted a survey in northeastern Xinjiang from 1980 to 1982 (covering the area of 88-90°E and 41°31'-47°10'N) without finding any horses (Gao and Gu 1989). The last native wild populations had disappeared.
Of the 53 animals recorded in the Studbook as having been brought into zoological collections in the west, fewer than 25% contributed any genes to the current living population. All Przewalski's Horses alive today are descended from 12 wild-caught individuals, and as many as four domestic horse founders described below, which were the nucleus of the captive breeding programme (Bowling and Ryder 1987). Eleven of the wild-caught individuals were brought into captivity between 1899 and 1902 with the last of them dying in 1939. The twelfth founder (Orlitza III) was captured as a foal in 1947. A thirteenth founder was born in 1906 in Halle (Germany) to a wild-caught stallion and a domestic Mongolian mare, and a fourteenth founder is a female born in Askania Nova (Ukraine) to a Przewalski's Horse stallion and a domestic female of a Tarpan type. In spite of the introgression of domestic horse blood, the current population is genetically very close to the original wild horses (Bowling et al. 2003).
As of 1 January 2014, the number of living captive and reintroduced animals in the International Studbook was 1,988 (883 males, 1,101 females, 4 of unknown sex). In addition to animals held in captivity and those already re-introduced, there have been a number of animals released into very large enclosures (reserves); counts below follow the studbook convention males.females: Le Villaret, France (~4 km2; 2013: 18.18), Askania Nova, Ukraine (30 km2; 2014: 24.46), and Hortobágy National Park, Hungary (700 km2; 2014: 125.129). Bukhara, Uzbekistan (51 km2) had 19.17.1 horses in 2008 (W. Zimmermann pers. comm.) and 24 horses by 2013 (O. Pereladova pers. comm.). The unfenced Chernobyl exclusion zone (2,600 km2) in Ukraine contained 32.36 horses in 2008 (W. Zimmermann pers. comm.), and approximately 60 horses in early 2014 (T. Mousseau pers. comm.).
There are now approximately 387 free-ranging reintroduced and native-born Przewalski's Horses in Mongolia at three reintroduction sites (Zimmerman 2014). Between 1992 and 2004, 90 captive-born horses were transported to the Takhin Tal acclimatization site, from where they were released into the Great Gobi B Strictly Protected Area (SPA) (ITG International Takhi Group, Zimmermann 2008). A further three males were translocated from Hustai National Park to Takhin Tal in 2007 (Zimmermann 2008). In 2008 there were approximately 111 free-ranging horses in this subpopulation (Zimmerman 2008, Kaczensky and Walzer 2004). By December 2009 there were 138 individuals, but due to an extremely harsh winter (dzud) in 2009/2010 the population suffered extreme mortality: in April 2010 only 49 individuals remained (Kaczensky et al. 2011). By 2012 the population had increased to 71. By the end of 2013 there were 90 horses forming six harems and several bachelor groups. Sixteen foals were born in 2013; three of these foals died, and one adult male disappeared and is presumed dead (P. Kaczensky pers. comm.).
From 1992 to 2000, 84 horses were brought to Hustai National Park (NP) by the Foundation for the Preservation and Protection of the Przewalski Horse and Mongolian Association for Conservation of Nature and the Environment (MACNE) from reserves in Europe (King and Gurnell 2005). As of the middle of 2012 this population had approximately 275 individuals (Zimmerman 2014). By the end of 2013, there were 297 horses, of which 228 were members of 29 harems and the rest were bachelors. Sixty-four foals were born in 2013, with a 61% survival rate by year’s end: 25 foals, four yearlings, and seven adults died during 2013 (Usukhjargal 2013).
A third reintroduction site was started in 2004 at Seriin Nuruu in the Khomiin Tal buffer zone of the Khar Us Nuur National Park in western Mongolia (Association pour le Cheval de Przewalski: TAKH). Twenty-two individuals consisting of four pre-established families and one male bachelor group were brought from the reserve at Le Villaret, France between 2004 and 2005, and four horses from Prague Zoo were added in 2011 (Association TAKH, Zimmermann 2008). By the end of 2013 this population had 40 horses; eight foals were born in 2013 and three of these died, as did two adult stallions (C. Feh pers. comm.).
In previous assessments of the reintroduced population in Mongolia, mature individuals were considered to be those that were born in the wild and at least five years of age. Individuals born in captivity were not counted as mature until they had reproduced in the wild, and produced offspring that were at least five years old (so potentially reproductive). The population grew from 55 mature individuals in the wild in 2006 (52: 26.26 in Hustai NP, 3: 1.2 in Gobi NP), to 79 in 2007 (Hustai NP: 33.35; Great Gobi B SPA: 3.8), 104 in 2008 (Hustai NP: 39.51; Great Gobi B SPA: 7.7), and 151 in 2009 (Hustai NP: 52.66; Great Gobi B SPA: 15.18). The winter of 2009/2010 was very severe and there was high mortality of Przewalski's Horses, particularly in the Gobi. In 2010, Hustai NP's mature population was 117 (53.64) and Great Gobi B SPA's number of mature individuals was reduced to 17 (8.9), giving a total population of 134 mature individuals. In 2012 the criterion for captive-born horses to be included as mature individuals was tightened to require them to have produced reproductively viable offspring (i.e., the reintroduced animal reproduced, and at least one of its offspring also reproduced); mature (≥5 years old) wild-born individuals continued to be included. Under these criteria, there were 178 mature individuals in the wild at the end of 2012: 153 (65.88) in Hustai NP, 23 (5.18) in Great Gobi B SPA, and 2 (1.1) in Khomiin Tal. Hence for a period of seven years, the mature population of Przewalski's Horses in Mongolia has been more than 50 individuals. Although this means that the Przewalski's Horse qualifies as Endangered (EN), it should be borne in mind that most of these individuals are from one reintroduction site, and climatic perturbations like the extremely harsh winter in 2009/2010 can have very negative effects on small populations (Kaczensky et al. 2011).
In China, the Wild Horse Breeding Centre (WHBC) in Xinjiang Province has established a large captive population of Przewalski's Horses (Liu et al. 2014). Since 2001 horses have been released into the nearby Kalamaili Nature Reserve (KNR), which had a population of 99 in 2012 and 121 in 2013. One harem group is roaming free on the Chinese side of the Dzungarian Gobi (Xinjiang); another 102 horses are roaming free during summer time but are returned to the acclimatization pen during the winter (Zimmermann et al. 2008; Qing Cao pers. comm.). The Gansu Endangered Species Research Center (GESRC) also has a captive breeding programme and has released at least seven horses into the Dunhuang Xihu National Nature Reserve (DXNNR) in 2010 and 2012 (Liu et al. 2014); all of these horses are fed in winter. A total of 59 foals have been born in the wild in China since 2009, with an estimated 19 individuals surviving in 2013 (Qing Cao pers. comm.). Until better data are available, it is not certain that these animals meet the criterion of mature individuals for a reintroduced species, so they have not been included in the species population size for this assessment.
Current Population Trend: Increasing
Habitat and Ecology: Przewalski's Horses exhibit harem-defense polygyny (Van Dierendonck et al. 1996). After dispersing from their natal band at approximately two years of age, males enter bachelor groups consisting of other young males and unsuccessful older stallions. When they are five years of age or older, stallions attempt to form harems of semi-permanent membership that are held year-round. They take over already-established harems, steal mares from rivals, or are joined by females dispersing from their natal harem at approximately two to three years of age (L. Boyd pers. comm.; Zimmermann et al. 2009).
Przewalski's Horse formerly inhabited steppe and semi-desert habitats. As most of this range became converted to agriculture, degraded or was increasingly occupied by livestock, the species became restricted to semi-desert habitats with limited water resources (Van Dierendonck and de Vries 1996). Lowland steppe vegetation was preferentially selected by horses at Hustai National Park and seasonal movements were affected by the availability of the most nutritious vegetation (King and Gurnell 2005). The breadth of species consumed and dietary overlap with other ungulates increased in winter, compared to summer, although forage did not appear to be limiting (Siestes et al. 2009). In the Gobi the Przewalski's Horses also selected for the most productive plant communities (Kaczensky et al. 2008).
The species is not territorial; home range sizes in Hustai NP varied from 120 to 2,400 ha and, in addition to grazing sites, included a permanent water source, patches of forest, and ridges with rocky outcrops (King and Gurnell 2005). In Great Gobi B SPA, home ranges of 150 to 825 km2 were reported (Kaczensky et al. 2008).
Because the historic range is not precisely known, there has been much debate about the areas in which Przewalski's Horses were last seen: was it merely a refuge or was it representative of the typical/preferred habitat? The Mongolia Takhi Strategy and Plan Work Group (MTSPWG 1993) concluded that the historic range may have been wider but that the Dzungarian Gobi, where they were last seen, was not a marginal site to which the species retreated as they had access to the rich habitats of mountain valleys and more oases than in the present time (Sokolov et al. 1990), due to these areas being occupied by herders and their livestock. Although grass and water are more available in other parts of Mongolia, these areas often have harsher winters. Subsequently, others provided evidence that the Gobi is an edge habitat, rather than an optimal habitat for Przewalski's Horses (Kaczensky et al. 2008), and certainly also subject to severe winters with devastating consequences for the population (Kaczensky et al. 2011). Studies of feral horses have shown that they are able to live and reproduce in semi-desert habitats but their survival and reproductive success is clearly sub-optimal compared to feral horses on more mesic grassland (Berger 1986). Van Dierendonck and de Vries (1996) suggest that the wild horse is primarily a steppe herbivore that can survive under arid conditions when there is access to waterholes.
Movement patterns: Not a Migrant
Use and Trade: There is currently no use or trade in Przewalski's Horses. Hunting is not currently a threat to the species, though this needs to be monitored. Capture of animals for cross-breeding as racehorses is believed to be a potential future use, and threat.
A number of causes have been cited for the final extinction of Przewalski's Horses in Mongolia and China. Among these are significant cultural and political changes (Bouman and Bouman 1994), hunting (Zhao and Liang 1992, Bouman and Bouman 1994), military activities (Ryder 1993), climatic change (Sokolov et al. 1992), and competition with livestock and increasing land use pressure (Sokolov et al. 1992, Ryder 1993, Bouman and Bouman 1994). Capture expeditions probably diminished the remaining Przewalski's Horse populations by killing and dispersing the adults (Van Dierendonck and de Vries 1996). The harsh winters of 1945, 1948, and 1956 probably had an additional impact on the small population (Bouman and Bouman 1994). Increased pressure on, and rarity of waterholes in their last refuge should also be considered as a significant factor contributing to their extinction (Van Dierendonck and de Vries 1996).
For the reintroduced populations, small population size and limited spatial distribution is the primary threat, followed by potential hybridization with domestic horses and competition for resources with domestic horses and other livestock. Wherever Przewalski's Horses come into contact with domestic horses, there is the risk of hybridization and transmission of diseases. Recently, illegal mining in the protected areas is an additional threat to their viability. In Hustai NP it has been noted that overgrazing of the buffer-zone and continued pressure on the reserve are possible consequences of the enhanced economic activity in this area (Bouman 1998); however, the second phase of the project (1998-2003) paid much more attention to sustainable development of the buffer-zone. In the western section of the Great Gobi B SPA livestock grazing by nomads and military personnel continues, particularly in fall, winter and spring; however, the core zone is largely free from human influence all year round. Infectious diseases transmitted from domestic horses and their parasites, notably Babesia equi, B. caballi and strangles (infection by Streptococcus equi), are a major threat to small reintroduced populations originating from zoos (Roberts et al. 2005, King and Gurnell 2005). As was observed during 2009/2010, severe winters can result in significant mortality. While predation occurs naturally as for any wild ungulate, if excessive there could be impacts on this small population.
There is concern over loss of genetic diversity after being reduced to a very small population and maintained in captivity for several generations. Sixty per cent of the unique genes of the studbook population have been lost (Ryder 1994). Loss of founder genes is irretrievable and further losses must be minimized through close genetic management. Furthermore, inbreeding depression could become a population-wide concern as the population inevitably becomes increasingly inbred (Ballou 1994). However, correct management of the population can slow these losses significantly, as has been achieved since the organization of the regional captive-breeding programs. Fortunately, Przewalski's Horses have been shown to have both higher nuclear and mitochondrial nucleotide diversity than many domestic horse breeds in spite of the population bottlenecks they have experienced (Goto et al. 2011).
At the ‘Endangered Wild Equid Workshop’ held in Ulaanbaatar in 2010, the following threats were identified: loss of population due to stochastic events (i.e. severe winter);
Specific actions needed for each threat category were identified and described.
Przewalski's Horse is legally protected in Mongolia. It is protected as Very Rare under part 7.1 of the Law of the Mongolian Animal Kingdom (2000). Hunting has been prohibited since 1930, and the species is listed as Very Rare under the 1995 Mongolian Hunting Law (MNE 1996). It is listed as Critically Endangered in both the 1987 and 1997 Mongolian Red Books (Shagdarsuren et al. 1987, MNE 1997), and in the Regional Red List for Mongolia (Clark et al. 2006). The taxon's re-introduced range in Mongolia is almost entirely within protected areas. It is listed on CITES Appendix I (as Equus przewalskii).
The following conservation actions are in place:
Conservation actions required:
Websites for the reintroduction sites in Mongolia with further details and ways of supporting them are:
Errata reason: The name of an Assessor "Zimmerman, W." was corrected to "Zimmermann, W."
Citation: King, S.R.B., Boyd, L., Zimmermann, W. & Kendall, B.E. 2015. Equus ferus (errata version published in 2016). The IUCN Red List of Threatened Species 2015: e.T41763A97204950. Downloaded on 23 February 2018.
Number game, any of various puzzles and games that involve aspects of mathematics.
Mathematical recreations comprise puzzles and games that vary from naive amusements to sophisticated problems, some of which have never been solved. They may involve arithmetic, algebra, geometry, theory of numbers, graph theory, topology, matrices, group theory, combinatorics (dealing with problems of arrangements or designs), set theory, symbolic logic, or probability theory. Any attempt to classify this colourful assortment of material is at best arbitrary. Included in this article are the history and the main types of number games and mathematical recreations and the principles on which they are based. Details, including descriptions of puzzles, games, and recreations mentioned in the article, will be found in the references listed in the bibliography.
At times it becomes difficult to tell where pastime ends and serious mathematics begins. An innocent puzzle requiring the traverse of a path may lead to technicalities of graph theory; a simple problem of counting parts of a geometric figure may involve combinatorial theory; dissecting a polygon may involve transformation geometry and group theory; logical inference problems may involve matrices. A problem regarded in medieval times—or before electronic computers became commonplace—as very difficult may prove to be quite simple when attacked by the mathematical methods of today.
Mathematical recreations have a universal appeal. The urge to solve a puzzle is manifested alike by young and old, by the unsophisticated as well as the sophisticated. An outstanding English mathematician, G.H. Hardy, observed that professional puzzle makers, aware of this propensity, exploit it diligently, knowing full well that the general public gets an intellectual kick out of such activities.
The relevant literature has become extensive, particularly since the beginning of the 20th century. Some of it is repetitious, but surprisingly enough, successive generations have found the older chestnuts to be quite delightful, whether dressed in new clothes or not. Much newly created material is continually being added.
People have always taken delight in devising “problems” for the purpose of posing a challenge or providing intellectual pleasure. Thus, many mathematical recreations of early origin that have reappeared from time to time in new dress seem to have survived chiefly because they appeal to man’s sense of curiosity or mystery. A few survived from the ancient Greeks and Romans: little was known about them during the Dark Ages, but a strong interest in such problems arose during the Middle Ages, stimulated partly by the invention of printing, partly by enthusiastic writers of arithmetic texts, and partly by the rivalry and disputations among early algebraists and scholars. Such activities were most prominent on the Continent, particularly in Italy and Germany. Notable contributors included Rabbi ben Ezra (1140), Fibonacci (Leonardo of Pisa; 1202), Robert Recorde (1542), and Girolamo Cardano (1545).
Kinds of problems
The problems in general were of two kinds: those involving the manipulation of objects, and those requiring computation. The first kind required little or no mathematical skill, merely general intelligence and ingenuity; examples include the so-called decanting and difficult-crossings problems. A typical example of the former is how to measure out one quart of a liquid if only an eight-, a five-, and a three-quart measure are available. Difficult crossings problems are exemplified by the dilemma of three couples trying to cross a stream in a boat that will hold only two persons, with each husband too jealous to leave his wife in the company of either of the other men. Many variants of both types of problems have appeared over the years.
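The decanting puzzle yields to a brute-force search over jug states. The sketch below (plain Python; the function name is ours) finds a shortest pouring sequence for the eight-, five-, and three-quart example, assuming the eight-quart measure starts full and liquid may only be poured from one measure into another:

```python
from collections import deque

def decant(capacities, start, target):
    """Breadth-first search over jug states; returns a shortest
    sequence of states that puts `target` quarts in some jug."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if target in state:
            return path
        n = len(state)
        for i in range(n):          # pour from jug i ...
            for j in range(n):      # ... into jug j
                if i == j or state[i] == 0:
                    continue
                amount = min(state[i], capacities[j] - state[j])
                if amount == 0:
                    continue
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None

# Eight-quart measure full, five- and three-quart measures empty;
# isolate exactly one quart.
path = decant((8, 5, 3), (8, 0, 0), 1)
```

Because the total liquid is fixed at eight quarts, every state along the returned path sums to eight; the same search solves any decanting variant by changing the capacities and target.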
Problems involving computation also took on a variety of forms; some were as follows:
Finding a number
Think of a number, triple it, and take half the product; triple this and take half the result; then divide by 9. The quotient will be one-fourth the original number.
For example, in “God greet you, all you 30 companions,” someone says: “If there were as many of us again and half as many more, then there would be 30 of us.” How many were there?
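Both puzzles in this section reduce to a line of exact rational arithmetic: the tripling-and-halving routine multiplies the chosen number by 9/4 before the final division by 9, and "as many of us again and half as many more" gives x + x + x/2 = 30. A minimal check (function and variable names are ours):

```python
from fractions import Fraction

def trick(n):
    x = 3 * Fraction(n) / 2   # triple it and take half the product
    x = 3 * x / 2             # triple this and take half the result
    return x / 9              # the quotient is one-fourth of n

assert all(trick(n) == Fraction(n, 4) for n in range(1, 101))

# x + x + x/2 = 30, i.e. (5/2)x = 30, hence 12 companions.
x = Fraction(30) / Fraction(5, 2)
assert x == 12
```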
The chessboard problem
How many grains of wheat are required in order to place one grain on the first square, 2 on the second, 4 on the third, and so on for the 64 squares?
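The doubling across 64 squares is a geometric series, 1 + 2 + 4 + ... + 2^63, whose sum is 2^64 - 1 grains (roughly 1.8 x 10^19). A one-line check:

```python
# Sum of the geometric series 1 + 2 + 4 + ... + 2**63.
total = sum(2 ** k for k in range(64))
assert total == 2 ** 64 - 1 == 18_446_744_073_709_551_615
```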
The lion in the well
This is typical of many problems dealing with the time required to cover a certain distance at a constant rate while at the same time progress is hindered by a constant retrograde motion. There is a lion in a well whose depth is 50 palms. He climbs 1/7 of a palm daily and slips back 1/9 of a palm. In how many days will he get out of the well?
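The final day of such problems needs care: once the lion's daily climb carries it to the rim, it never slips back. A day-by-day simulation in exact fractions (our own sketch, not from the text) handles that edge:

```python
from fractions import Fraction

depth, climb, slip = 50, Fraction(1, 7), Fraction(1, 9)

# Each full day gains climb - slip; the lion escapes mid-day on the
# first day its climb alone reaches the rim, before slipping back.
position, days = Fraction(0), 0
while position + climb < depth:
    position += climb - slip
    days += 1
days += 1  # the final day's climb reaches the rim

assert days == 1572
```

The naive computation 50 / (1/7 - 1/9) = 1,575 days overstates the answer by three days, because it charges the lion a slip on the day it escapes.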
These are typified by the movements of bodies at given rates in which some position of these bodies is given and the time required for them to arrive at some other specified position is demanded.
Pioneers and imitators
The 17th century produced books devoted solely to recreational problems not only in mathematics but frequently in mechanics and natural philosophy as well. The first important contribution was that of the Frenchman Claude-Gaspar Bachet de Méziriac, one of the earliest pioneers in this field, who is remembered for two mathematical works: his Diophanti, the first edition of a Greek text on the theory of numbers (1621), and his Problèmes plaisans et delectables qui se font par les nombres (1612). The latter passed through five editions, the last as late as 1959; it was the forerunner of similar collections of recreations to follow. The emphasis was placed on arithmetic rather than geometric puzzles. Among the outstanding problems given by Bachet were questions involving number bases other than 10; card tricks; watch-dial puzzles depending on numbering schemes; the determination of the smallest set of weights that would enable one to weigh any integral number of pounds from one pound to 40, inclusive; and difficult crossings or ferry problems.
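Bachet's weight problem has the classical balanced-ternary answer 1, 3, 9, and 27 pounds: on a two-pan balance each weight can sit with the goods, against them, or stay off the scale. A short enumeration confirms that every integral weight from 1 to 40 pounds is covered:

```python
from itertools import product

weights = (1, 3, 9, 27)
# Sign -1: weight with the goods; +1: against them; 0: off the scale.
reachable = {sum(w * s for w, s in zip(weights, signs))
             for signs in product((-1, 0, 1), repeat=len(weights))}
assert all(n in reachable for n in range(1, 41))
```

Since the weights total 40, no heavier load is weighable, which is what makes this the smallest such set.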
In 1624 a French Jesuit, Jean Leurechon, writing under the pen name of van Etten, published Récréations mathématiques. This volume struck the popular fancy, passing through at least 30 editions before 1700, despite the fact that it was based largely on the work of Bachet, from whom he took the simpler problems, disregarding the more significant portions. Yet it did contain some original work, and it served as a model for others, including Mydorge and Schwenter. The first English edition (1633) bore the title: Mathematicall Recreations, or a Collection of Sundrie Problemes, extracted out of the Ancient and Moderne Philosophers, as Secrets in Nature, and Experiments in Arithmeticke, Geometrie, Cosmographie, Horologographie, Astronomie, Navigation, Musicke, Opticks, Architecture, Staticke, Machanicks, Chimestrie, Waterworkes, Fireworks, etc. Not vulgarly made manifest until this Time . . . . Most of which were written first in Greeke and Latine, lately compiled in French, by HENRY VAN ETTEN Gent. And now delivered in the English Tongue with the Examinations, Corrections, and Augmentations [translated by William Oughtred].
The rising tide of interest was exploited by French mathematicians Claude Mydorge, whose Examen du livre des récréations mathématiques was published in 1630, and Denis Henrion, whose Les Récréations mathématiques avec l’examen de ses problèmes en arithmétique, géométrie, méchanique, cosmographie, optique, catoptrique, etc., based largely upon Mydorge’s book, appeared in 1659. Leurechon’s book, meanwhile, had found its way into Germany: Daniel Schwenter, a professor of Hebrew, Oriental languages, and mathematics, assiduously compiled a comprehensive collection of recreational problems based on a translation of Leurechon’s book, together with many other problems that he himself had previously collected. This work appeared posthumously in 1636 under the title Deliciae Physico-mathematicae oder Mathematische und Philosophische Erquickstunden. Immensely popular, Schwenter’s book was enlarged by two supplementary editions in 1651–53. For some years thereafter Schwenter’s enlarged edition was the most comprehensive treatise of its kind, although in 1641–42 the Italian Jesuit Mario Bettini had issued a two-volume work called Apiaria Universae Philosophiae Mathematicae in Quibus Paradoxa et Nova Pleraque Machinamenta Exhibentur, which was followed in 1660 by a third volume entitled Recreationum Mathematicarum Apiaria Novissima Duodecim . . . . And in 1665 one Johann Mohr in Schleswig published an imitation of Schwenter under the title of Arithmetische Lustgarten.
In England, somewhat belatedly, William Leybourn, a mathematics teacher, textbook writer, and surveyor, in 1694, published his Pleasure with Profit: Consisting of Recreations of Divers Kinds, viz., Numerical, Geometrical, Mechanical, Statical, Astronomical, Horometrical, Cryptographical, Magnetical, Automatical, Chymical, and Historical. The title page further states that the purpose of the book was to “recreate ingenious spirits and to induce them to make farther scrutiny into these sublime sciences, and to divert them from following such vices, to which Youth (in this Age) are so much inclined.” Much of the volume is conventional textbook material, for most of Leybourn’s published works grew out of his teaching.
18th and 19th centuries
The 18th century saw a continuation of this interest. Published in England were volumes by Edward Hatton, Thomas Gent, Samuel Clark, and William Hooper. In 1775 Charles Hutton published five volumes of extracts from the Ladies’ Diary dealing with “entertaining mathematical and poetical parts.” On the Continent there appeared several writers, including: Christian Pescheck, Abat Bonaventura, the Dutch writer Paul Halcken, and Edme-Gilles Guyot’s four volumes of Nouvelles Récréations physiques et mathématiques, etc. (1769, 1786). But by far the outstanding work was that of Jacques Ozanam, the precursor of books to follow for the next 200 years. First published in four volumes in 1694, his Récréations mathématiques et physiques went through many editions; based on the works of Bachet, Mydorge, Leurechon, and Schwenter, it was later revised and enlarged by Montucla, then translated into English by Charles Hutton (1803, 1814) and again revised by Edward Riddle (1840, 1844).
The first half of the 19th century produced only a moderate number of lesser writers on mathematical recreations, but the second half of the 19th century witnessed a crescendo of interest, culminating in the outstanding contributions of Édouard Lucas, C.L. Dodgson (Lewis Carroll), and others at the turn of the century. Lucas’ four-volume Récréations mathématiques (1882–94) became a classic. The mathematical recreations of Dodgson included Symbolic Logic and The Game of Logic; Pillow Problems and A Tangled Tale, 2 vol. (1885–95).
Among the more colourful figures at the turn of the 20th century were two Americans named Sam Loyd, father and son. Tremendously successful in making puzzles, the elder Loyd sold his weekly puzzle column to a national syndicate for years, and, in addition, created or adapted hundreds of mechanical puzzles fashioned of cardboard, wood, and metal that were also financially rewarding. When Loyd II died in 1934 at the age of 60, it was estimated that he had produced at least 10,000 puzzles.
In Germany, Hermann Schubert published Zwölf Geduldspiele in 1899 and the Mathematische Mussestunden (3rd ed., 3 vol.) in 1907–09. Between 1904 and 1920 Wilhelm Ahrens published several works, the most significant being his Mathematische Unterhaltungen und Spiele (2 vol., 1910) with an extensive bibliography.
Among British contributors, Henry Dudeney, a contributor to the Strand Magazine, published several very popular collections of puzzles that have been reprinted from time to time (1917–67). The first edition of W.W. Rouse Ball’s Mathematical Recreations and Essays appeared in 1892; it soon became a classic, largely because of its scholarly approach. After passing through 10 editions it was revised by the British professor H.S.M. Coxeter in 1938; it is still a standard reference.
Also outstanding was the work of Maurice Kraitchik, editor of the periodical Sphinx and author of several well-known works published between 1900 and 1942.
About the middle third of the 20th century, there was a gradual shift in emphasis on various topics. Up to that time interest had focussed largely on such amusements as numerical curiosities; simple geometric puzzles; arithmetical story problems; paper folding and string figures; geometric dissections; manipulative puzzles; tricks with numbers and with cards; magic squares; those venerable diversions concerning angle trisection, duplication of the cube, squaring the circle, as well as the elusive fourth dimension. By the middle of the century, interest began to swing toward more mathematically sophisticated topics: cryptograms; recreations involving modular arithmetic, numeration bases, and number theory; graphs and networks; lattices, group theory; topological curiosities; packing and covering; flexagons; manipulation of geometric shapes and forms; combinatorial problems; probability theory; inferential problems; logical paradoxes; fallacies of logic; and paradoxes of the infinite.
Types of games and recreations
Arithmetic and algebraic recreations
Number patterns and curiosities
Some groupings of natural numbers, when operated upon by the ordinary processes of arithmetic, reveal rather remarkable patterns, affording pleasant pastimes. For example:
Another type of number pleasantry concerns multigrades; i.e., identities between the sums of two sets of numbers and the sums of their squares or higher powers—e.g.,
An easy method of forming a multigrade is to start with a simple equality—e.g., 1 + 5 = 2 + 4—then add, for example, 5 to each term: 6 + 10 = 7 + 9. A second-order multigrade is obtained by “switching sides” and combining, as shown below:
On each side the sum of the first powers (S1) is 22 and of the second powers (S2) is 156.
Ten may be added to each term to derive a third-order multigrade:
Switching sides and combining, as before:
In this example S1 = 84, S2 = 1,152, and S3 = 17,766.
This process can be continued indefinitely to build multigrades of successively higher orders. Similarly, all terms in a multigrade may be multiplied or divided by the same number without affecting the equality. Many variations are possible: for example, palindromic multigrades that read the same backward and forward, and multigrades composed of prime numbers.
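The construction described above is mechanical enough to verify by machine. The following sketch repeats the steps in the text, starting from the same equality 1 + 5 = 2 + 4:

```python
# Verify the multigrade construction described above, starting from 1 + 5 = 2 + 4.

def power_sum(terms, k):
    """Sum of the k-th powers of a list of terms."""
    return sum(t ** k for t in terms)

def extend(left, right, shift):
    """Add `shift` to every term, then 'switch sides and combine'."""
    new_left = sorted(left + [r + shift for r in right])
    new_right = sorted(right + [l + shift for l in left])
    return new_left, new_right

left, right = [1, 5], [2, 4]            # first powers agree: 6 = 6
left, right = extend(left, right, 5)    # second-order multigrade
assert power_sum(left, 1) == power_sum(right, 1) == 22
assert power_sum(left, 2) == power_sum(right, 2) == 156

left, right = extend(left, right, 10)   # third-order multigrade
for k, expected in [(1, 84), (2, 1152), (3, 17766)]:
    assert power_sum(left, k) == power_sum(right, k) == expected
print(left, right)
```

The sums 22, 156, 84, 1,152, and 17,766 are exactly those quoted in the text.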
Other number curiosities and oddities are to be found. Thus, narcissistic numbers are numbers that can be represented by some kind of mathematical manipulation of their digits. A whole number, or integer, that is the sum of the nth powers of its digits (e.g., 153 = 1³ + 5³ + 3³) is called a perfect digital invariant. On the other hand, a recurring digital invariant is illustrated by:
(From Mathematics on Vacation, Joseph Madachy; Charles Scribner’s Sons.)
A variation of such digital invariants is
Another curiosity is exemplified by a number that is equal to the nth power of the sum of its digits:
An automorphic number is an integer whose square ends with the given integer, as (25)² = 625, and (76)² = 5,776. Strobogrammatic numbers read the same after having been rotated through 180°; e.g., 69, 96, 1001.
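These curiosities lend themselves to brute-force search; the sketch below looks for three-digit perfect digital invariants (cube case), small automorphic numbers, and strobogrammatic numbers:

```python
# Brute-force search for the number curiosities described above.

# Perfect digital invariants (cube case): n equals the sum of the cubes
# of its digits; the three-digit range is searched here.
pdi3 = [n for n in range(10, 1000)
        if n == sum(int(d) ** 3 for d in str(n))]

# Automorphic numbers: n squared ends with the digits of n.
automorphic = [n for n in range(1, 1000)
               if str(n * n).endswith(str(n))]

# Strobogrammatic numbers read the same after a 180-degree rotation.
ROT = {'0': '0', '1': '1', '6': '9', '8': '8', '9': '6'}
def strobogrammatic(n):
    s = str(n)
    return all(c in ROT for c in s) and s == ''.join(ROT[c] for c in reversed(s))

print(pdi3)         # [153, 370, 371, 407] -- 153 is the text's example
print(automorphic)  # [1, 5, 6, 25, 76, 376, 625]
print(strobogrammatic(69), strobogrammatic(1001))  # True True
```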
It is not improbable that such curiosities should have suggested intrinsic properties of numbers bordering on mysticism.
The problem of the four n’s calls for the expression of as large a sequence of integers as possible, beginning with 1, representing each integer in turn by a given digit used exactly four times. The answer depends upon the rules of operation that are admitted. Two partial examples are shown.
For four 1s:
For four 4s:
(In M. Bicknell & V. Hoggatt, “64 Ways to Write 64 Using Four 4’s,” Recreational Mathematics Magazine, No. 14, Jan.–Feb. 1964, p. 13.)
Obviously, many alternatives are possible; e.g., 7 = 4 + √4 + 4/4 could also be expressed as 4!/4 + 4/4, or as 44/4 − 4. The factorial of a positive integer is the product of all the positive integers less than or equal to the given integer; e.g., “factorial 4,” or 4! = 4 × 3 × 2 × 1. If the use of factorial notation is not allowed, it is still possible to express the numbers from 1 to 22 inclusive with four “4s”; thus 22 = (4 + 4)/.4 + √4. But if the rules are extended, many additional combinations are possible.
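The quoted expressions are easy to check mechanically; in the sketch below, the decimal-point form .4 is written as a float literal, so that comparison is kept tolerant:

```python
# Quick checks of the "four 4s" expressions quoted above.
from math import factorial, sqrt

assert 4 + sqrt(4) + 4 / 4 == 7
assert factorial(4) / 4 + 4 / 4 == 7
assert 44 / 4 - 4 == 7
assert abs((4 + 4) / 0.4 + sqrt(4) - 22) < 1e-9   # 20 + 2 = 22
print("all four-4s identities hold")
```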
A similar problem requires that the integers be expressed by using the first m positive integers, m > 3 (“m is greater than three”) and the operational symbols used in elementary algebra. For example, using the digits 1, 2, 3, and 4:
Such problems have many variations; for example, more than 100 ways of arranging the digits 1 to 9, in order, to give a value of 100 have been demonstrated.
All of these digital problems require considerable ingenuity but involve little significant mathematics.
The term “crypt-arithmetic” was introduced in 1931, when the following multiplication problem appeared in the Belgian journal Sphinx:
The shortened word cryptarithm now denotes mathematical problems usually calling for addition, subtraction, multiplication, or division and replacement of the digits by letters of the alphabet or some other symbols.
An analysis of the original puzzle suggested the general method of solving a relatively simple cryptarithm:
- 1. In the second partial product D × A = D, hence A = 1.
- 2. D × C and E × C both end in C; since for any two digits 1–9 the only multiple that will produce this result is 5 (zero if both digits are even, 5 if both are odd), C = 5.
- 3. D and E must be odd. Since both partial products have only three digits, neither D nor E can be 9. This leaves only 3 and 7. In the first partial product E × B is a number of two digits, while in the second partial product D × B is a number of only one digit. Thus E is larger than D, so E = 7 and D = 3.
- 4. Since D × B has only one digit, B must be 3 or less. The only two possibilities are 0 and 2. B cannot be zero because 7 × B must be a two-digit number. Thus B = 2.
- 5. By completing the multiplication, F = 8, G = 6, and H = 4.
- 6. Answer: 125 × 37 = 4,625.
(From 150 Puzzles in Crypt-Arithmetic by Maxey Brooke; Dover Publications, Inc., New York, 1963. Reprinted through the permission of the publisher.)
Such puzzles had apparently appeared, on occasion, even earlier. Alphametics refers specifically to cryptarithms in which the combinations of letters make sense, as in one of the oldest and probably best known of all alphametics:
Unless otherwise indicated, convention requires that the initial letters of an alphametic cannot represent zero, and that two or more letters may not represent the same digit. If these conventions are disregarded, the alphametic must be accompanied by an appropriate clue to that effect. Some cryptarithms are quite complex and elaborate and have multiple solutions. Electronic computers have been used for the solution of such problems.
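Cryptarithms are well suited to exhaustive search. The sketch below is a generic brute-force alphametic solver; since the text's own example is not reproduced here, the famous SEND + MORE = MONEY puzzle is assumed for illustration:

```python
# A brute-force alphametic solver. The SEND + MORE = MONEY puzzle used
# below is an assumed example, not the one pictured in the original text.
from itertools import permutations

def solve(words, total):
    letters = sorted(set(''.join(words) + total))
    assert len(letters) <= 10
    # Each letter contributes its place values, positive for addends,
    # negative for the total; a solution makes the weighted sum zero.
    coeff = {l: 0 for l in letters}
    for w in words:
        for i, c in enumerate(reversed(w)):
            coeff[c] += 10 ** i
    for i, c in enumerate(reversed(total)):
        coeff[c] -= 10 ** i
    coeffs = [coeff[l] for l in letters]
    leading = [l in {w[0] for w in words + [total]} for l in letters]
    for digits in permutations(range(10), len(letters)):
        if any(d == 0 and f for d, f in zip(digits, leading)):
            continue                      # leading letters cannot be zero
        if sum(c * d for c, d in zip(coeffs, digits)) == 0:
            return dict(zip(letters, digits))
    return None

sol = solve(['SEND', 'MORE'], 'MONEY')
print(sol)   # unique solution: 9567 + 1085 = 10652
```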
Mathematical paradoxes and fallacies have long intrigued mathematicians. A mathematical paradox is a mathematical conclusion so unexpected that it is difficult to accept even though every step in the reasoning is valid. A mathematical fallacy, on the other hand, is an instance of improper reasoning leading to an unexpected result that is patently false or absurd. The error in a fallacy generally violates some principle of logic or mathematics, often unwittingly. Such fallacies are quite puzzling to the tyro, who, unless he is aware of the principle involved, may well overlook the subtly concealed error. A sophism is a fallacy in which the error has been knowingly committed, for whatever purpose. If the error introduced into a calculation or a proof leads innocently to a correct result, the result is a “howler,” often said to depend on “making the right mistake.”
The convergent series 1 + 1/2 + 1/4 + 1/8 + . . . has a continually greater sum the more terms are included, but the sum always remains less than 2, although it approaches nearer and nearer to 2 as more terms are included. On the other hand, the series
1 + 1/2 + 1/3 + 1/4 + . . . is called divergent: it has no limit, the sum becoming larger than any chosen value if sufficient terms are taken. Another paradox is the fact that there are just as many even natural numbers as there are even and odd numbers altogether, thus contradicting the notion that “the whole is greater than any of its parts.” This seeming contradiction arises from the properties of collections containing an infinite number of objects: the even numbers can be matched one for one with all the natural numbers (n ↔ 2n), so the two infinite collections are, in the mathematical sense, equal in size.
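The contrast between the two behaviours can be seen numerically. In the sketch below, the convergent series is taken as the geometric series with ratio 1/2 and the divergent one as the harmonic series, consistent with the descriptions above:

```python
# Partial sums illustrating convergence versus divergence.

def partial_sums(term, n):
    total, out = 0.0, []
    for k in range(n):
        total += term(k)
        out.append(total)
    return out

# Geometric series 1 + 1/2 + 1/4 + ...: sums stay below 2 and approach 2.
geo = partial_sums(lambda k: 1 / 2 ** k, 50)
assert all(s < 2 for s in geo)
assert abs(geo[-1] - 2) < 1e-9

# Harmonic series 1 + 1/2 + 1/3 + ...: sums eventually exceed any bound.
harm = partial_sums(lambda k: 1 / (k + 1), 100000)
assert harm[-1] > 12        # already past 12 after 100,000 terms
print(geo[-1], harm[-1])
```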
The so-called paradoxes of Zeno (c. 450 bce) are, strictly speaking, sophisms. In the race between Achilles and the tortoise, the two start moving at the same moment, but, if the tortoise is initially given a lead and continues to move ahead, Achilles can run at any speed and never catch up. Zeno’s argument rests on the presumption that Achilles must first reach the point where the tortoise started, by which time the tortoise will have moved ahead to another point, and so on. Obviously, Zeno did not believe what he claimed; his interest lay in locating the error in his argument. The same observation is true of the three remaining paradoxes of Zeno, the Dichotomy, “motion is impossible”; the Arrow, “motionless even while in flight”; and the Stadium, or “a given time interval is equivalent to an interval twice as long.” Beneath the sophistry of these contradictions lie subtle and elusive concepts of limits and infinity, only completely explained in the 19th century when the foundations of analysis became more rigorous and the theory of transfinite numbers had been formulated.
Common algebraic fallacies usually involve a violation of one or another of the following assumptions:
Three examples of such violations follow:
Thus a is both greater than b and less than b.
An example of an illegal operation or “lucky boner” is:
Polygonal and other figurate numbers
Among the many relationships of numbers that have fascinated man are those that suggest (or were derived from) the arrangement of points representing numbers into series of geometrical figures. Such numbers, known as figurate or polygonal numbers, appeared in 15th-century arithmetic books and were probably known to the ancient Chinese; but they were of especial interest to the ancient Greek mathematicians. To the Pythagoreans (c. 500 bce), numbers were of paramount significance; everything could be explained by numbers, and numbers were invested with specific characteristics and personalities. Among other properties of numbers, the Pythagoreans recognized that numbers had “shapes.” Thus, the triangular numbers, 1, 3, 6, 10, 15, 21, etc., were visualized as points or dots arranged in the shape of a triangle.
Square numbers are the squares of natural numbers, such as 1, 4, 9, 16, 25, etc., and can be represented by square arrays of dots, as shown in . Inspection reveals that the sum of any two adjacent triangular numbers is always a square number.
Oblong numbers are the numbers of dots that can be placed in rows and columns in a rectangular array, each row containing one more dot than each column. The first few oblong numbers are 2, 6, 12, 20, and 30. This series of numbers is the successive sums of the series of even numbers or the products of two consecutive numbers: 2 = 1·2; 6 = 2·3 = 2 + 4; 12 = 3·4 = 2 + 4 + 6; 20 = 4·5 = 2 + 4 + 6 + 8; etc. An oblong number also is formed by doubling any triangular number (see ).
The gnomons include all of the odd numbers; these can be represented by a right angle, or a carpenter’s square, as illustrated in . Gnomons were extremely useful to the Pythagoreans. They could build up squares by adding gnomons to smaller squares and from such a figure could deduce many interrelationships: thus 1² + 3 = 2², 2² + 5 = 3², etc.; or 1 + 3 + 5 = 3², 1 + 3 + 5 + 7 = 4², 1 + 3 + 5 + 7 + 9 = 5², etc. Indeed, it is quite likely that Pythagoras first realized the famous relationship between the sides of a right triangle, represented by a² + b² = c², by contemplating the properties of gnomons and square numbers, observing that any odd square can be added to some even square to form a third square. Thus
and, in general, a² + b² = c², where a² = b + c. This is a special class of Pythagorean triples (see below Pythagorean triples).
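All of the figurate relationships just described can be confirmed directly; a short sketch:

```python
# Verify the figurate-number relationships described above.

def triangular(n):
    return n * (n + 1) // 2

# The sum of two adjacent triangular numbers is always a square.
for n in range(1, 50):
    s = triangular(n) + triangular(n + 1)
    assert int(s ** 0.5) ** 2 == s

# Oblong numbers n(n+1) are twice the triangular numbers and are
# the sums of consecutive even numbers.
for n in range(1, 50):
    assert n * (n + 1) == 2 * triangular(n) == sum(range(2, 2 * n + 1, 2))

# Gnomons: the sum of the first n odd numbers is n squared.
for n in range(1, 50):
    assert sum(range(1, 2 * n, 2)) == n ** 2

# Any odd square plus a suitable even square gives a third square
# (a^2 + b^2 = c^2 with a^2 = b + c), e.g. 3, 4, 5 and 5, 12, 13.
for a in range(3, 30, 2):
    b = (a * a - 1) // 2
    c = b + 1
    assert a * a + b * b == c * c and a * a == b + c
print("all figurate identities hold")
```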
Besides these, the Greeks also studied numbers having pentagonal, hexagonal, and other shapes. Many relationships can be shown to exist between these geometric patterns and algebraic expressions.
Polygonal numbers constitute a subdivision of a class of numbers known as figurate numbers. Examples include the arithmetic sequences
When new series are formed from the sums of the terms of these series, the results are, respectively,
These series are not arithmetic sequences but are seen to be the polygonal triangular and square numbers. Polygonal number series can also be added to form three-dimensional figurate numbers; these sequences are called pyramidal numbers.
The significance of polygonal and figurate numbers lies in their relation to the modern theory of numbers. Even the simple, elementary properties and relations of numbers often demand sophisticated mathematical tools. Thus, it has been shown that every integer is either a triangular number, the sum of two triangular numbers, or the sum of three triangular numbers: e.g., 8 = 1 + 1 + 6, 42 = 6 + 36, 43 = 15 + 28, 44 = 6 + 10 + 28.
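The three-triangular-numbers theorem can be checked exhaustively for a modest range; the bound of 500 below is an arbitrary choice for illustration:

```python
# Check, for integers up to a modest limit, that every positive integer
# is a triangular number, a sum of two, or a sum of three (repeats allowed).

LIMIT = 500
tri = []
n = 1
while n * (n + 1) // 2 <= LIMIT:
    tri.append(n * (n + 1) // 2)
    n += 1
tri_set = set(tri)

def is_sum_of_triangulars(m, k):
    """Can m be written as a sum of exactly k positive triangular numbers?"""
    if k == 1:
        return m in tri_set
    return any(is_sum_of_triangulars(m - t, k - 1) for t in tri if t < m)

for m in range(1, LIMIT + 1):
    assert any(is_sum_of_triangulars(m, k) for k in (1, 2, 3)), m
print("verified up to", LIMIT)   # e.g. 8 = 1 + 1 + 6, 43 = 15 + 28
```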
The study of Pythagorean triples as well as the general theorem of Pythagoras leads to many unexpected byways in mathematics. A Pythagorean triple is formed by the measures of the sides of an integral right triangle—i.e., any set of three positive integers such that a² + b² = c². If a, b, and c are relatively prime—i.e., if no two of them have a common factor—the set is a primitive Pythagorean triple.
A formula for generating all primitive Pythagorean triples is a = p² − q², b = 2pq, c = p² + q²,
in which p and q are relatively prime, p and q are neither both even nor both odd, and p > q. By choosing p and q appropriately, for example, primitive Pythagorean triples such as the following are obtained:
The only primitive triple that consists of consecutive integers is 3, 4, 5.
Certain characteristic properties are of interest:
- 1. Either a or b is divisible by 3.
- 2. Either a or b is divisible by 4.
- 3. Either a or b or c is divisible by 5.
- 4. The product of a, b, and c is divisible by 60.
- 5. One of the quantities a, b, a + b, a - b is divisible by 7.
It is also true that if n is any integer, then 2n + 1, 2n² + 2n, and 2n² + 2n + 1 form a Pythagorean triple.
Certain properties of Pythagorean triples were known to the ancient Greeks—e.g., that the hypotenuse of a primitive triple is always an odd integer. It is now known that an odd integer R is the hypotenuse of such a triple if and only if every prime factor of R is of the form 4k + 1, where k is a positive integer.
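These statements can be tested by generating primitive triples from the standard p, q parametrization (the formula a = p² − q², b = 2pq, c = p² + q², as reconstructed above) and checking each listed property:

```python
# Generate primitive Pythagorean triples and check the divisibility
# properties listed above.
from math import gcd

triples = []
for p in range(2, 15):
    for q in range(1, p):
        if gcd(p, q) == 1 and (p - q) % 2 == 1:    # not both odd or both even
            a, b, c = p * p - q * q, 2 * p * q, p * p + q * q
            triples.append((a, b, c))

for a, b, c in triples:
    assert a * a + b * b == c * c
    assert gcd(a, gcd(b, c)) == 1                  # primitive
    assert a % 3 == 0 or b % 3 == 0                # property 1
    assert a % 4 == 0 or b % 4 == 0                # property 2
    assert a % 5 == 0 or b % 5 == 0 or c % 5 == 0  # property 3
    assert (a * b * c) % 60 == 0                   # property 4
    assert any(x % 7 == 0 for x in (a, b, a + b, a - b))  # property 5

# The 2n+1, 2n^2+2n, 2n^2+2n+1 family is the q = p - 1 case.
for n in range(1, 20):
    assert (2 * n + 1) ** 2 + (2 * n * n + 2 * n) ** 2 \
        == (2 * n * n + 2 * n + 1) ** 2
print(len(triples), "primitive triples checked")
```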
Most numbers are either “abundant” or “deficient.” In an abundant number, the sum of its proper divisors (i.e., including 1 but excluding the number itself) is greater than the number; in a deficient number, the sum of its proper divisors is less than the number. A perfect number is an integer that equals the sum of its proper divisors. For example, 24 is abundant, its divisors giving a sum of 36; 32 is deficient, giving a sum of 31. The number 6 is a perfect number, since 1 + 2 + 3 = 6; so is 28, since 1 + 2 + 4 + 7 + 14 = 28. The next two perfect numbers are 496 and 8,128. The first four perfect numbers were known to the ancients. Indeed, Euclid suggested that any number of the form 2ⁿ⁻¹(2ⁿ − 1) is a perfect number whenever 2ⁿ − 1 is prime, but it was not until the 18th century that the Swiss mathematician Leonhard Euler proved that every even perfect number must be of the form 2ⁿ⁻¹(2ⁿ − 1), where 2ⁿ − 1 is a prime.
A number of the form 2ⁿ − 1 is called a Mersenne number after the French mathematician Marin Mersenne; it may be prime (i.e., having no factor except itself or 1) or composite (composed of two or more prime factors). A necessary though not sufficient condition that 2ⁿ − 1 be a prime is that n be a prime. Thus, all even perfect numbers have the form 2ⁿ⁻¹(2ⁿ − 1) where both n and 2ⁿ − 1 are prime numbers. Until comparatively recently, only 12 perfect numbers were known. In 1876 the French mathematician Édouard Lucas found a way to test the primality of Mersenne numbers. By 1952 the U.S. mathematician Raphael M. Robinson had applied Lucas’ test and, by means of electronic digital computers, had found the Mersenne primes for n = 521; 607; 1,279; 2,203; and 2,281, thus adding five more perfect numbers to the list. By the 21st century, more than 40 Mersenne primes had been found.
It is known that to every Mersenne prime there corresponds an even perfect number and vice versa. But two questions are still unanswered: the first is whether there are any odd perfect numbers, and the second is whether there are infinitely many perfect numbers.
Many remarkable properties are revealed by perfect numbers. All perfect numbers, for example, are triangular. Also, the sum of the reciprocals of the divisors of a perfect number (including the reciprocal of the number itself) is always equal to 2. Thus
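(For 6, for instance, the divisors 1, 2, 3, and 6 give 1/1 + 1/2 + 1/3 + 1/6 = 2.) These properties can be checked directly by generating the first even perfect numbers from their Mersenne primes:

```python
# Generate the first even perfect numbers from Mersenne primes and verify
# the properties mentioned above.
from fractions import Fraction

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euclid-Euler form: 2**(n-1) * (2**n - 1) with 2**n - 1 prime.
perfect = [2 ** (n - 1) * (2 ** n - 1)
           for n in range(2, 8) if is_prime(2 ** n - 1)]
print(perfect)   # [6, 28, 496, 8128]

for p in perfect:
    divisors = [d for d in range(1, p + 1) if p % d == 0]
    assert sum(divisors) == 2 * p                       # perfect
    assert sum(Fraction(1, d) for d in divisors) == 2   # reciprocals sum to 2
    k = int((2 * p) ** 0.5)
    assert k * (k + 1) // 2 == p                        # triangular
```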
In 1202 the mathematician Leonardo of Pisa, also called Fibonacci, published an influential treatise, Liber abaci. It contained the following recreational problem: “How many pairs of rabbits can be produced from a single pair in one year if it is assumed that every month each pair begets a new pair which from the second month becomes productive?” Straightforward calculation generates the following sequence:
The second row represents the first 12 terms of the sequence now known by Fibonacci’s name, in which each term (except the first two) is found by adding the two terms immediately preceding; in general, xₙ = xₙ₋₁ + xₙ₋₂, a relation that was not recognized until about 1600.
Over the years, especially in the middle decades of the 20th century, the properties of the Fibonacci numbers have been extensively studied, resulting in a considerable literature. Their properties seem inexhaustible; for example, xₙ₊₁ · xₙ₋₁ = xₙ² + (−1)ⁿ. Another formula for generating the Fibonacci numbers is attributed to Édouard Lucas:
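Both the recurrence and the quoted identity are easy to verify; the closed form used below, xₙ = (Φⁿ − (1 − Φ)ⁿ)/√5 with Φ the golden number, is presumed to be the Lucas formula referred to in the text:

```python
# The Fibonacci recurrence, the identity quoted above, and a closed form.
from math import sqrt

fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

# Identity: x(n+1) * x(n-1) = x(n)**2 + (-1)**n, with x1 = fib[0].
for n in range(2, 29):
    assert fib[n] * fib[n - 2] == fib[n - 1] ** 2 + (-1) ** n

# Closed form: x(n) = (phi**n - (1 - phi)**n) / sqrt(5).
phi = (1 + sqrt(5)) / 2
for n in range(1, 25):
    assert round((phi ** n - (1 - phi) ** n) / sqrt(5)) == fib[n - 1]
print(fib[:12])
```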
The ratio (√5 + 1) : 2 = 1.618 . . ., designated as Φ, is known as the golden number; the ratio (√5 − 1) : 2, the reciprocal of Φ, is equal to 0.618 . . . . Both these ratios are related to the roots of x² − x − 1 = 0, an equation derived from the Divine Proportion of the 15th-century Italian mathematician Luca Pacioli, namely, a/b = b/(a + b), when a < b, by setting x = b/a. In short, dividing a segment into two parts in mean and extreme proportion, so that the smaller part is to the larger part as the larger is to the entire segment, yields the so-called Golden Section, an important concept in both ancient and modern artistic and architectural design. Thus, a rectangle the sides of which are in the approximate ratio of 3 : 5 (Φ⁻¹ = 0.618 . . .), or 8 : 5 (Φ = 1.618 . . .), is presumed to have the most pleasing proportions, aesthetically speaking.
Raising the golden number to successive powers generates the sequence that begins as follows:
In this sequence the successive coefficients of the radical √5 are Fibonacci’s 1, 1, 2, 3, 5, 8, while the successive second terms within the parentheses are the so-called Lucas sequence: 1, 3, 4, 7, 11, 18. The Lucas sequence shares the recursive relation of the Fibonacci sequence; that is, xₙ = xₙ₋₁ + xₙ₋₂.
If a golden rectangle ABCD is drawn and a square ABEF is removed, the remaining rectangle ECDF is also a golden rectangle. If this process is continued and circular arcs are drawn, the curve formed approximates the logarithmic spiral, a form found in nature (see ). The logarithmic spiral is the graph of the equation r = k^Θ, in polar coordinates, where k = Φ^(2/π). The Fibonacci numbers are also exemplified by the botanical phenomenon known as phyllotaxis. Thus, the arrangement of the whorls on a pinecone or pineapple, of petals on a sunflower, and of branches from some stems follows a sequence of Fibonacci numbers or the series of fractions
Geometric and topological recreations
The creation and analysis of optical illusions may involve mathematical and geometric principles such as the proportionality between the areas of similar figures and the squares of their linear dimensions. Some involve physiological or psychological considerations, such as the fact that, when making visual comparisons, relative lengths are more accurately perceived than relative areas.
For treatment of optical illusions and their illusory effects, including unorthodox use of perspective, distorted angles, deceptive shading, unusual juxtaposition, equivocal contours or contrasts, colour effects, chromatic aberration, and afterimages, see the articles illusion; hallucination.
Some geometric fallacies include “proofs”: (1) that every triangle is isosceles (i.e., has two equal sides); (2) that every angle is a right angle; (3) that if ABCD is a quadrilateral in which AB = CD, then AD must be parallel to BC; and (4) that every point in the interior of a circle lies on the circle.
The explanations of fallacious proofs in geometry usually include one or another of the following: faulty construction; violation of a logical principle, such as assuming the truth of a converse, or confusing partial inverses or converses; misinterpretation of a definition, or failing to take note of “necessary and sufficient” conditions; too great dependence upon diagrams and intuition; being trapped by limiting processes and deceptive appearances.
At first glance, such drawings appear to represent plausible three-dimensional objects, but closer inspection reveals that they cannot; the representation is flawed by faulty perspective, false juxtaposition, or psychological distortion. Among the first to produce these drawings—also called undecidable figures—was Oscar Reutersvard of Sweden, who made them the central features of a set of Swedish postage stamps.
In 1958 L.S. Penrose, a British geneticist, and his son Roger Penrose, a mathematical physicist, introduced the undecidable figures called strange loops. One of these is the Penrose square stairway ( ), which one could apparently traverse in either direction forever without getting higher or lower. Strange loops are important features of some of M.C. Escher’s lithographs, including “Ascending and Descending” (1960) and “Waterfall” (1961). The concept of the strange loop is related to the idea of infinity and also to logical paradoxes involving self-referential statements, such as that of Epimenides (see below Logical paradoxes).
A mathematical curve is said to be pathological if it lacks certain properties of continuous curves. For example, its tangent may be undefined at some—or indeed any—point; the curve may enclose a finite area but be infinite in length; or its curvature may be undefinable. Some of these curves may be regarded as the limit of a series of geometrical constructions; their lengths or the areas they enclose appear to be the limits of sequences of numbers. Their idiosyncrasies constitute paradoxes rather than optical illusions or fallacies.
Von Koch’s snowflake curve, for example, is the figure obtained by trisecting each side of an equilateral triangle and replacing the centre segment by two sides of a smaller equilateral triangle projecting outward, then treating the resulting figure the same way, and so on. The first two stages of this process are shown in . As the construction proceeds, the perimeter of the curve increases without limit, but the area it encloses does approach an upper bound, which is 8/5 the area of the original triangle.
In seeming defiance of the fact that a curve is “one-dimensional” and thus cannot fill a given space, it can be shown that the curve produced by continuing the stages in , when completed, will pass through every point in the square. In fact, by similar reasoning, the curve can be made to fill completely an entire cube.
The Sierpinski curve, the first few stages of which are shown in , contains every point interior to a square, and it describes a closed path. As the process of forming the curve is continued indefinitely, the length of the curve approaches infinity, while the area enclosed by it approaches 5/12 that of the square.
A fractal curve, loosely speaking, is one that retains the same general pattern of irregularity regardless of how much it is magnified; von Koch’s snowflake is such a curve. At each stage in its construction, the length of its perimeter increases in the ratio of 4 to 3. The mathematician Benoit Mandelbrot has generalized the term dimension, symbolized D, to denote the power to which 3 must be raised to produce 4; that is, 3ᴰ = 4. The dimension that characterizes von Koch’s snowflake is therefore log 4/log 3, or approximately 1.26.
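The snowflake's paradoxical behaviour, an unbounded perimeter enclosing a bounded area, can be traced stage by stage:

```python
# Perimeter and area of von Koch's snowflake through successive stages,
# starting from an equilateral triangle of unit area and perimeter 3.
from math import log

perimeter, area = 3.0, 1.0
new_triangles, tri_area = 3, 1.0 / 9    # 3 new triangles, each 1/9 the area
for stage in range(60):
    perimeter *= 4 / 3                  # each segment becomes 4 of length 1/3
    area += new_triangles * tri_area
    new_triangles *= 4                  # each new side spawns a triangle
    tri_area /= 9                       # at 1/9 the previous area

print(area)                             # approaches 8/5 of the original area
assert abs(area - 8 / 5) < 1e-12
assert perimeter > 1e6                  # grows without limit
assert abs(log(4) / log(3) - 1.2618) < 1e-3   # the dimension D with 3**D = 4
```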
Beginning in the 1950s Mandelbrot and others have intensively studied the self-similarity of pathological curves, and they have applied the theory of fractals in modelling natural phenomena. Random fluctuations induce a statistical self-similarity in natural patterns; analysis of these patterns by Mandelbrot’s techniques has been found useful in such diverse fields as fluid mechanics, geomorphology, human physiology, economics, and linguistics. Specifically, for example, characteristic “landscapes” revealed by microscopic views of surfaces in connection with Brownian movement, vascular networks, and the shapes of polymer molecules are all related to fractals.
A maze having only one entrance and one exit can be solved by placing one hand against either wall and keeping it there while traversing it; the exit can always be reached in this manner, although not necessarily by the shortest path. If the goal is within the labyrinth, the “hand-on-wall” method will also succeed, provided that there is no closed circuit; i.e., a route that admits of complete traverse back to the beginning ( ).
If there are no closed circuits—i.e., no detached walls—the maze is “simply connected”; otherwise the maze is “multiply connected.” A classic general method of “threading a maze” is to designate a place where there is a choice of turning as a node; a path or node that has not yet been entered as a “new” path or node; and one that has already been entered as an “old” path or node.
The procedure is as follows:
- Never traverse a path more than twice.
- When arriving at a new node, select either path.
- When arriving at an old node or at a dead end by a new path, return by the same path.
- When arriving at an old node by an old path, select a new path, if possible; otherwise, an old path.
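The rules above are essentially Trémaux's method. A simplified rendering, cast as a depth-first walk in which no passage is traversed more than twice, is sketched below; the maze layout is invented for illustration:

```python
# A simplified rendering of the maze-threading rules above, on a small,
# invented maze: a dead end at B and a closed circuit A-C-D-A.

def thread_maze(edges, start, goal):
    """Walk the maze, never traversing any passage more than twice."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    visits = {}          # times each passage has been walked
    path = [start]

    def walk(node):
        if node == goal:
            return True
        for nxt in graph[node]:
            passage = frozenset((node, nxt))
            if visits.get(passage, 0) == 0:   # a "new" passage
                visits[passage] = 1
                path.append(nxt)
                if walk(nxt):
                    return True
                visits[passage] = 2           # walked out and back: twice
                path.append(node)
        return False

    return walk(start), path

found, route = thread_maze(
    [('entrance', 'A'), ('A', 'B'), ('A', 'C'),
     ('C', 'D'), ('D', 'A'), ('D', 'exit')],
    'entrance', 'exit')
assert found
print(route)
```

Because every passage may be used at most twice, the walk must terminate, and in a connected maze it necessarily reaches the goal.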
Although recreational interest in mazes has diminished, two areas of modern science have found them to be of value: psychology and communications technology. The former is concerned with learning behaviour, the latter with improved design of computers.
Geometric dissection problems involve the cutting of geometric figures into pieces that can be arranged to form other geometric figures; for example, cutting a rectangle into parts that can be put together in the form of a square and vice versa. Interest in this area of mathematical recreations began to manifest itself toward the close of the 18th century when Montucla called attention to this problem. As the subject became more popular, greater emphasis was given to the more general problem of dissecting a given polygon of any number of sides into parts that would form another polygon of equal area. Then, in the early 20th century, interest shifted to finding the minimum number of pieces required to change one figure into another.
According to a comprehensive theory of equidecomposable figures that was outlined in detail about 1960, two polygons are said to be equidecomposable if it is possible to dissect, or decompose, one of them into a finite number of pieces that can then be rearranged to form the second polygon. Obviously, the two polygons have equal areas.
According to the converse theorem, if two polygons have equal areas, they are equidecomposable.
In the method of complementation, congruent parts are added to two figures so as to make the two new figures congruent. It is known that equicomplementable figures have equal areas and that, if two polygons have equal areas, they are equicomplementable. As the theory advanced, the relation of equidecomposability to various motions such as translations, central symmetry, and, indeed, to groups of motions in general, was explored. Studies were also extended to the more difficult questions of dissecting polyhedra.
On the “practical” side, the execution of a dissection, such as converting the Greek cross into a square ( ), may require the use of ingenious procedures, some of which have been described by H. Lindgren (see Bibliography).
A quite different and distinctly modern type of dissection deserves brief mention, the so-called squaring the square, or squared rectangles. Thus, the problem of subdividing a square into smaller squares, no two of which are alike, which was long thought to be unsolvable, has been solved by means of network theory. In this connection, a squared rectangle is a rectangle that can be dissected into a finite number of squares; if no two of these squares are equal, the squared rectangle is said to be perfect. The order of a squared rectangle is the number of constituent squares. It is known that there are no perfect rectangles of orders less than 9, and that there are exactly two perfect rectangles of order 9. (One of these is shown as .) The dissection of a square into unequal squares, deemed impossible as early as 1907, was first reported in 1939.
Graphs and networks
The word graph may refer to the familiar curves of analytic geometry and function theory, or it may refer to simple geometric figures consisting of points and lines connecting some of these points; the latter are sometimes called linear graphs, although there is little confusion within a given context. Such graphs have long been associated with puzzles.
If a finite number of points are connected by lines, the resulting figure is a graph; the points, or corners, are called the vertices, and the lines are called the edges. If every pair of vertices is connected by an edge, the graph is called a complete graph ( ). A planar graph is one in which the edges have no intersection or common points except at the vertices. (It should be noted that the edges of a graph need not be straight lines.) Thus a graph drawn with crossing edges can sometimes be transformed into an equivalent, or isomorphic, planar graph, as in and . An interesting puzzle involves the problem of the three wells. Here ( ) A, B, and C represent three neighbours’ houses, and R, S, and T three wells. It is desired to have paths leading from each house to each well, allowing no path to cross any other path. The proof that the problem is impossible depends on the so-called Jordan curve theorem that a continuous closed curve in a plane divides the plane into an interior and an exterior region in such a way that any continuous line connecting a point in the interior with a point in the exterior must intersect the curve. Planar graphs have proved useful in the design of electrical networks.
A connected graph is one in which every vertex, or point (or, in the case of a solid, a corner), is connected to every other point by an arc; an arc denotes an unbroken succession of edges. A route that never passes over an edge more than once, although it may pass through a point any number of times, is sometimes called a path.
Modern graph theory (in the sense of linear graphs) had its inception with the work of Euler in connection with the “Königsberg bridge problem” and was, for many years, associated with curves now called Eulerian paths—i.e., figures that can be drawn without retracing edges or lifting the pencil from the paper. The city of Königsberg (now Kaliningrad) embraces the banks and an island of the forked Pregel (Pregolya) River; seven bridges span the different branches (see ). The problem was: Could a person leave home, take a walk, and return, crossing each bridge just once? Euler showed why it is impossible.
Briefly stated, Euler’s principles (which apply to any closed network) are as follows:
- The number of even points—i.e., those in which an even number of edges meet—is of no significance.
- The number of odd points is always even; this includes the case of a network with only even points.
- If there are no odd points, one can start at any point and finish at the same point.
- If there are exactly two odd points, one can start at either of the odd points and finish at the other odd point.
- If there are more than two odd points, the network cannot be traced in one continuous path; if there are 2n odd points and no more, it can be traced in n separate paths.
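Applying the odd-vertex count to the Königsberg network settles the original question. In the sketch below the four land areas are labelled A through D (an arbitrary labelling; A is taken as the island):

```python
# Euler's odd-vertex test applied to the Konigsberg bridge network:
# vertices are the four land areas, edges the seven bridges.
from collections import Counter

bridges = [('A', 'B'), ('A', 'B'),   # two bridges, island to north bank
           ('A', 'C'), ('A', 'C'),   # two bridges, island to south bank
           ('A', 'D'),               # island to eastern land area
           ('B', 'D'), ('C', 'D')]   # one bridge from each bank to the east

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(sorted(degree.items()))   # degrees 5, 3, 3, 3 -- all four points odd
print(len(odd))                 # 4 odd points: no single Eulerian path;
                                # 4 = 2n with n = 2, so two paths are needed
assert len(odd) % 2 == 0        # the number of odd points is always even
```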
Thus, of the networks shown in Figure 15, some can be traversed by Eulerian paths and others cannot; one of them corresponds to the Königsberg bridge problem, in which the points represent the land areas and the edges the seven bridges.
Networks are related to a variety of recreational problems that involve combining or arranging points in a plane or in space. Among the earliest was a puzzle invented by an Irish mathematician, Sir William Rowan Hamilton (1859), which required finding a route along the edges of a regular dodecahedron that would pass once and only once through every point. In another version, the puzzle was made more convenient by replacing the dodecahedron by a graph isomorphic to the graph formed by the 30 edges of the dodecahedron. A Hamilton circuit is one that passes through each point exactly once but does not, in general, cover all the edges; actually, it covers only two of the three edges that intersect at each vertex. The route shown in heavy lines is one of several possible Hamilton circuits.
Graph theory lends itself to a variety of problems involving combinatorics: for example, designing a network to connect a set of cities by railroads or by telephone lines; planning city streets or traffic patterns; matching jobs with applicants; arranging round-robin tournaments such that every team or individual meets every other team or individual.
Cartographers have long recognized that no more than four colours are needed to shade the regions on any map in such a way that adjoining regions are distinguished by colour. The corresponding mathematical question, framed in 1852, became the celebrated “four-colour map problem”: Is it possible to construct a planar map for which five colours are necessary? Similar questions can be asked for other surfaces. For example, it was found by the end of the 19th century that seven colours, but no more, may be needed to colour a map on a torus. Meanwhile the classical four-colour question withstood mathematical assaults until 1976, when mathematicians at the University of Illinois announced that four colours suffice. Their published proof, including diagrams derived from more than 1,000 hours of calculations on a high-speed computer, was the first significant mathematical proof to rely heavily on artificial computation.
A flexagon is a polygon constructed from a strip of paper or thin metal foil in such a way that the figure possesses the property of changing its faces when it is flexed. First discussed in 1939, flexagons have become a fascinating mathematical recreation. One of the simplest flexagons is the trihexaflexagon, made by cutting a strip of suitable material and marking off 10 equilateral triangles. By folding appropriately several times and then gluing the last triangle onto the reverse side of the first triangle, the resulting model may be flexed so that one of the faces disappears and another face takes its place.
Puzzles involving configurations
One of the earliest puzzles and games that require arranging counters into some specified alignment or configuration was Lucas’ Puzzle: in a row of seven squares, each of the three squares at the left end is occupied by a black counter, each of the three squares at the right end is occupied by a white counter, and the centre square is vacant. The object is to move one counter at a time until the squares originally occupied by white counters are occupied by black, and vice versa; black counters can be moved only to the right and white only to the left. A counter may move to an adjacent vacant square or it may jump one counter of the other colour to occupy a vacant square. The puzzle may be enlarged to any number of counters of each colour. For n counters of each kind the number of required moves is n(n + 2).
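The n(n + 2) figure can be checked by exhaustive search; the sketch below (the string encoding of the board is an illustrative choice) finds the shortest solution by breadth-first search:

```python
from collections import deque

def min_moves(n):
    """Shortest solution of Lucas' Puzzle with n counters of each colour:
    black (B) moves only right, white (W) only left, by a step into the
    vacant square or a jump over one counter of the other colour."""
    start = "B" * n + "." + "W" * n
    goal = "W" * n + "." + "B" * n
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        state, dist = frontier.popleft()
        if state == goal:
            return dist
        i = state.index(".")                 # the vacant square
        for j in range(len(state)):
            legal = (state[j] == "B" and j < i and
                     (i - j == 1 or (i - j == 2 and state[j + 1] == "W"))) or \
                    (state[j] == "W" and j > i and
                     (j - i == 1 or (j - i == 2 and state[j - 1] == "B")))
            if legal:
                s = list(state)
                s[i], s[j] = s[j], s[i]
                nxt = "".join(s)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
    return None
```

For n = 1, 2, 3 the search returns 3, 8, and 15 moves, matching n(n + 2).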
A similar puzzle uses eight numbered counters placed on nine positions. The aim is to shift the counters so that they will appear in reverse numerical order; only single moves and jumps are permitted.
Well known, but by no means as trivial, are games for two players, such as ticktacktoe and its more sophisticated variations, one of which calls for each player to begin with three counters (3 black, 3 white); the first player places a counter in any cell, except the center cell, of a 3 × 3 diagram; the players then alternate until all the counters are down. If neither has won by getting three in a row, each, in turn, is permitted to move a counter to an adjacent square, moving only horizontally or vertically. Achieving three in a row constitutes a win. There are many variations. The game can be played on a 4 × 4 diagram, each player starting with four counters; sometimes diagonal moves are permitted. Another version is played on a 5 × 5 pattern. Yet another interesting modification, popular in Europe, is variously known as mill or nine men’s morris, played with counters on a board consisting of three concentric squares and eight transversals.
Another game of this sort is played on a diamond-shaped board of tessellated hexagons, usually 11 on each edge, where by “tessellated” we mean fitted together like tiles to cover the board completely. Two opposite edges of the diamond are designated “white”; the other two sides, “black.” Each player has a supply of black or white counters. The players alternately place a piece on any vacant hexagon; the object of the game is for each player to complete an unbroken chain of his pieces between the sides designating his colour. Though the game does not end until one of the players has made a complete chain, it may meander across the board; it cannot end in a draw because the only way one player can block the other is by completing his own chain. The game was created by Piet Hein in 1942 in Denmark, where it quickly became popular under the name of polygon. It was invented independently in the United States in 1948 by John Nash, and a few years later one version was marketed under the name of hex.
In addition to the aforementioned varieties of a class of games that can be loosely described as “three in a row” or “specified alignment,” many others also exist, such as three- and four-dimensional ticktacktoe and even a computer ticktacktoe. The game strategy in ticktacktoe is by no means simple; an excellent mathematical analysis is given by F. Schuh.
Recreational problems posed with regard to the conventional chessboard are legion. Among the most widely discussed is the problem of how to place eight queens on a chessboard in such a way that none of the queens is attacking any other queen; the problem interested the great German mathematician C.F. Gauss (c. 1850). Another group of problems has to do with the knight’s tour; in particular, to find a closed knight’s tour that ends at the starting point, that does not enter any square more than once, but that passes through all the squares in one tour. Problems of the knight’s tour are intimately connected with the construction of magic squares. Other chessboard problems are concerned with determining the relative values of the various chess pieces; finding the maximum number of pieces of any one type that can be put on a board so that no one piece can take any other; finding the minimum number of pieces of any one type that can be put on a board so as to command all cells; and how to place 16 queens on a board so that no three of them are in a straight line.
One of the best known of all puzzles is the Fifteen Puzzle, which Sam Loyd the elder claimed to have invented about 1878, though modern scholars have documented earlier inventors. It is also known as the Boss Puzzle, Gem Puzzle, and Mystic Square. It became popular all over Europe almost at once. It consists essentially of a shallow square tray that holds 15 small square counters numbered from 1 to 15, and one square blank space. With the 15 squares initially placed in random order and with the blank space in the lower right-hand corner, the puzzle is to rearrange them in numerical order by sliding only, with the blank space ending up back in the lower right-hand corner. It may overwhelm the reader to learn that there are more than 20,000,000,000,000 possible different arrangements that the pieces (including the blank space) can assume. But in 1879 two American mathematicians proved that only one-half of all possible initial arrangements, or about 10,000,000,000,000, admitted of a solution. The mathematical analysis is as follows. Basically, no matter what path it takes, as long as it ends its journey in the lower right-hand corner of the tray, any numeral must pass through an even number of boxes. In the normal position of the squares, regarded row by row from left to right, each number is larger than all the preceding numbers; i.e., no number precedes any number smaller than itself. In any other than the normal arrangement, one or more numbers will precede others smaller than themselves. Every such instance is called an inversion. For example, in the sequence 9, 5, 3, 4, the 9 precedes three numbers smaller than itself and the 5 precedes two numbers smaller than itself, making a total of five inversions. If the total number of all the inversions in a given arrangement is even, the puzzle can be solved by bringing the squares back to the normal arrangement; if the total number of inversions is odd, the puzzle cannot be solved.
Thus, in one illustrative arrangement there are two inversions, and the puzzle can be solved; in another there are five inversions, and the puzzle has no solution. Theoretically, the puzzle can be extended to a tray of m × n spaces with (mn − 1) numbered counters.
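The parity test can be sketched directly (encoding the blank as 0 and reading the tray row by row is an illustrative choice):

```python
def inversions(arrangement):
    """Count inversions: pairs in which a larger number precedes a
    smaller one, reading the tray row by row and ignoring the blank."""
    seq = [t for t in arrangement if t != 0]    # 0 marks the blank space
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

def solvable(arrangement):
    """With the blank starting and ending in the lower right-hand
    corner, an arrangement can be restored to normal order exactly
    when its inversion count is even."""
    return inversions(arrangement) % 2 == 0
```

The example from the text checks out: the run 9, 5, 3, 4 contributes five inversions, and a full arrangement with an odd inversion count is unsolvable.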
The puzzle of the Tower of Hanoi is widely believed to have been invented in 1883 by the French mathematician Édouard Lucas, though his role in its invention has been disputed. Ever popular, made of wood or plastic, it still can be found in toy shops. It consists essentially of three pegs fastened to a stand and of eight circular disks, each having a hole in the centre. The disks, all of different radii, are initially placed on one of the pegs, with the largest disk on the bottom and the smallest on top; no disk rests upon one smaller than itself. The task is to transfer the individual disks from one peg to another so that no disk ever rests on one smaller than itself, and, finally, to transfer the tower; i.e., all the disks in their proper order, from their original peg to one of the other pegs. It can be shown that for a tower of n disks, there will be required 2n − 1 transfers of individual disks to shift the tower completely to another peg. Thus for 8 disks, the puzzle requires 28 − 1, or 255 transfers. If the original “needle” (peg) was a tower with 64 disks, the number of transfers would be 264 − 1, or 18,446,744,073,709,551,615; this is exactly the same number required to fill an 8 × 8 checkerboard with grains of wheat, 1 on the first square, 2 on the second, 4 on the next, then 8, 16, 32, etc.
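The transfer procedure is naturally recursive: to move n disks, move the top n − 1 out of the way, move the largest, then bring the n − 1 back. A sketch (the peg names are illustrative):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """List the disk transfers moving a tower of n disks from src to
    dst, never placing a disk on one smaller than itself."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)   # clear the way to the largest disk
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, src, dst))  # restack the smaller disks on it
```

The recursion satisfies T(n) = 2T(n − 1) + 1, giving the 2^n − 1 transfers cited above: 255 moves for 8 disks.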
The term polyomino was introduced in 1953 as a jocular extension of the word domino. A polyomino is a simply connected set of equal-sized squares, each joined to at least one other along an edge. The simplest polyomino shapes are few: one domino, two trominoes, and five tetrominoes. Somewhat more fascinating are the pentominoes, of which there are exactly 12 forms. Asymmetrical pieces, which have different shapes when they are flipped over, are counted as one.
The number of distinct polyominoes of any order is a function of the number of squares in each, but, as yet, no general formula has been found. It has been shown that there are 35 types of hexominoes and 108 types of heptominoes, if the dubious heptomino with an interior “hole” is included.
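Although no general formula is known, the counts above can be reproduced by brute force: grow shapes one square at a time and identify any two that differ only by rotation, reflection, or translation. A sketch (the canonical-form trick is one of several possible ways to do this):

```python
def canonical(cells):
    """Canonical form of a polyomino under the 8 rotations/reflections
    and translation, so that mirror-image pieces count as one."""
    best = None
    for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        for swap in (False, True):
            pts = [((y if swap else x) * sx, (x if swap else y) * sy)
                   for x, y in cells]
            mx = min(p[0] for p in pts)
            my = min(p[1] for p in pts)
            form = tuple(sorted((x - mx, y - my) for x, y in pts))
            if best is None or form < best:
                best = form
    return best

def polyominoes(n):
    """Set of distinct free polyominoes made of n squares."""
    shapes = {canonical([(0, 0)])}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            cells = set(shape)
            for x, y in shape:      # attach a new square along any free edge
                for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if c not in cells:
                        grown.add(canonical(list(cells) + [c]))
        shapes = grown
    return shapes
```

The enumeration confirms 12 pentominoes, 35 hexominoes, and 108 heptominoes (the holed heptomino included, since the growth procedure only requires edge connectivity).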
Recreations with polyominoes include a wide variety of problems in combinatorial geometry, such as forming desired shapes and specified designs, covering a chessboard with polyominoes in accordance with prescribed conditions, etc. Two illustrations may suffice.
The 35 hexominoes, having a total area of 210 squares, would seem to admit of arrangement into a rectangle 3 × 70, 5 × 42, 6 × 35, 7 × 30, 10 × 21, or 14 × 15; however, no such rectangle can be formed.
Can the 12 pentominoes, together with one square tetromino, form an 8 × 8 checkerboard? A solution of the problem was shown around 1935. It is not known how many solutions there are, but it has been estimated to be at least 1,000. In 1958, by use of a computer, it was shown that there are 65 solutions in which the square tetromino is exactly in the centre of the checkerboard.
Piet Hein of Denmark, also known for his invention of the mathematical games known as hex and tac tix, stumbled upon the fact that all the irregular shapes that can be formed by combining three or four congruent cubes joined at their faces can be put together to form a larger cube. There are exactly seven such shapes, called Soma Cubes. No two shapes are alike, although the fifth and sixth are mirror images of each other. The fact that these seven pieces (comprising 27 “unit” cubes) can be reassembled to form one large cube is indeed remarkable.
Many interesting solid shapes can be formed from the seven Soma Cubes, shapes resembling, for example, a sofa, a chair, a castle, a tunnel, a pyramid, and so on. Even the assembling of the seven basic pieces into a large cube can be done in more than 230 essentially different ways.
As a recreation, the Soma Cubes are fascinating. With experience, many persons find that they can solve Soma problems mentally. Psychologists who have used them find that the ability to solve Soma problems is roughly correlated with general intelligence, although there are some strange anomalies at both ends of the distribution of intelligence. In any event, people playing with the cubes do not appear to want to stop; the variety of interesting structures possible seems endless.
There is a wide variety of puzzles involving coloured square tiles and coloured cubes. In one, the object is to arrange the 24 three-colour patterns, including repetitions, that can be obtained by subdividing square tiles diagonally, using three different colours, into a 4 × 6 rectangle so that each pair of touching edges is the same colour and the entire border of the rectangle is the same colour.
More widely known perhaps is the 30 Coloured Cubes Puzzle. If six colours are used to paint the faces there result 2,226 different combinations. If from this total only those cubes that bear all six colours on their faces are selected, a set of 30 different cubes is obtained; two cubes are regarded as “different” if they cannot be placed side by side so that all corresponding faces match. Many fascinating puzzles arise from these coloured squares and cubes; many more could be devised. Some of them have appeared commercially at various times under different names, such as the Mayblox Puzzle, the Tantalizer, and the Katzenjammer.
A revival of interest in coloured-cube problems was aroused by the appearance of a puzzle known as Instant Insanity, consisting of four cubes, each of which has its faces painted white, red, green, and blue in a definite scheme. The puzzle is to assemble the cubes into a 1 × 1 × 4 prism such that all four colours appear on each of the four long faces of the prism. Since each cube admits of 24 different orientations, there are 82,944 possible prismatic arrangements; of these only two are the required solutions.
This puzzle was soon superseded by Rubik’s Cube, developed independently by Ernő Rubik (who obtained a Hungarian patent in 1975) and Terutoshi Ishigi (who obtained a Japanese patent in 1976). The cube appears to be composed of 27 smaller cubes, or cubelets; in its initial state, each of the six faces of the cube is made up of nine cubelet faces all of the same colour. In the commercial versions of the puzzle, an internal system of pivots allows any layer of nine cubelets to be rotated with respect to the rest, so that successive rotations about the three axes cause the cubelet faces to become scrambled. The challenge of restoring a scrambled cube to its original configuration is formidable, inasmuch as more than 1019 states can be reached from a given starting condition. A thriving literature quickly developed for the exposition of systematic solutions (based on group theory) of scrambled cubes.
Nim and similar games
A game so old that its origin is obscure, nim lends itself nicely to mathematical analysis. In its generalized form, any number of objects (counters) are divided arbitrarily into several piles. Two people play alternately; each, in turn, selects any one of the piles and removes from it all the objects, or as many as he chooses, but at least one object. The player removing the last object wins. Every combination of the objects may be considered “safe” or “unsafe”; i.e., if the position left by a player after his move assures a win for that player, the position is called safe. Every unsafe position can be made safe by an appropriate move, but every safe position is made unsafe by any move. To determine whether a position is safe or unsafe, the number of objects in each pile may be expressed in binary notation: if each column adds up to zero or an even number, the position is safe. For example, if at some stage of the game, three piles contain 4, 9, and 15 objects, the calculation is:
 4 = 0100
 9 = 1001
15 = 1111
Since the second column from the right adds up to 1, an odd number, the given combination is unsafe. A skillful player will always move so that every unsafe position left to him is changed to a safe position.
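The column test is equivalent to taking the bitwise XOR (the “nim-sum”) of the pile sizes: a column sums to an even number exactly when the corresponding XOR bit is zero. A sketch:

```python
from functools import reduce
from operator import xor

def is_safe(piles):
    """Safe iff every binary column of the pile sizes sums to an even
    number - equivalently, iff the XOR of the sizes is zero."""
    return reduce(xor, piles, 0) == 0

def safe_move(piles):
    """Return (pile index, number to remove) turning an unsafe position
    into a safe one, or None if the position is already safe."""
    total = reduce(xor, piles, 0)
    for i, p in enumerate(piles):
        target = p ^ total
        if target < p:              # this pile can be reduced to target
            return i, p - target
    return None
```

For the piles 4, 9, 15 the nim-sum is 2 (the odd second column), so the position is unsafe; removing 2 objects from the pile of 15 restores a safe position.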
A similar game is played with just two piles; in each draw the player may take objects from either pile or from both piles, but in the latter event he must take the same number from each pile. The player taking the last counter is the winner.
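Safe positions for this two-pile variant can be found by straightforward game-tree recursion; the memoised brute force below is an illustrative sketch, not the classical closed-form analysis of the game:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_loss(a, b):
    """True if the player to move loses the two-pile game (take any
    number from one pile, or the same number from both; the player
    taking the last counter wins), assuming best play by both sides."""
    moves = ([(a - k, b) for k in range(1, a + 1)] +
             [(a, b - k) for k in range(1, b + 1)] +
             [(a - k, b - k) for k in range(1, min(a, b) + 1)])
    # a position is lost when every available move hands the opponent a win
    return all(not is_loss(min(x, y), max(x, y)) for x, y in moves)
```

The recursion turns up the safe (losing-for-the-mover) pairs (1, 2), (3, 5), (4, 7), …, which a skilful player aims to leave behind.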
Games such as nim make considerable demands upon the player’s ability to translate decimal numbers into binary numbers and vice versa. Since digital computers operate on the binary system, however, it is possible to program a computer (or build a special machine) that will play a perfect game. Such a machine was invented by American physicist Edward Uhler Condon and an associate; their automatic Nimatron was exhibited at the New York World’s Fair in 1940.
Games of this sort seem to be widely played the world over. The game of pebbles, also known as the game of odds, is played by two people who start with an odd number of pebbles placed in a pile. Taking turns, each player draws one, or two, or three pebbles from the pile. When all the pebbles have been drawn, the player who has an odd number of them in his possession wins.
Predecessors of these games, in which players distribute pebbles, seeds, or other counters into rows of holes under varying rules, have been played for centuries in Africa and Asia and are known as mancala games.
Problems of logical inference
Many challenging questions do not involve numerical or geometrical considerations but call for deductive inferences based chiefly on logical relationships. Such puzzles are not to be confounded with riddles, which frequently rely upon deliberately misleading or ambiguous statements, a play on words, or some other device intended to catch the unwary. Logical puzzles do not admit of a standard procedure or generalized pattern for their solution and are usually solved by some trial-and-error method. This is not to say that the guessing is haphazard; on the contrary, the given facts (generally minimal) suggest several hypotheses. These can be successively rejected if found inconsistent, until, by substitution and elimination, the solution is finally reached. The use of various techniques of logic may sometimes prove helpful, but in the last analysis, success depends largely upon that elusive capacity called ingenuity. For convenience, logic problems are arbitrarily grouped in the following categories.
The brakeman, the fireman, and the engineer
The names, not necessarily respectively, of the brakeman, fireman, and engineer of a certain train were Smith, Jones, and Robinson. Three passengers on the train happened to have the same names and, in order to distinguish them from the railway employees, will be referred to hereafter as Mr. Smith, Mr. Jones, and Mr. Robinson. Mr. Robinson lived in Detroit; the brakeman lived halfway between Chicago and Detroit; Mr. Jones earned exactly $2,000 per year; Smith beat the fireman at billiards; the brakeman’s next-door neighbour, one of the passengers, earned exactly three times as much as the brakeman; and the passenger who lived in Chicago had the same name as the brakeman. What was the name of the engineer?
The following problem is typical of the overlapping-groups category. Among the members of a high-school language club, 21 were studying French; 20, German; 26, Spanish; 12, both French and Spanish; 10, both French and German; nine, both Spanish and German; and three, French, Spanish, and German. How many club members were there? How many members were studying only one language?
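Such overlapping-group counts follow from the inclusion-exclusion principle; a sketch of the arithmetic (the variable names are illustrative):

```python
# Given totals: F = French, G = German, S = Spanish
F, G, S = 21, 20, 26
FS, FG, SG, FGS = 12, 10, 9, 3

# Inclusion-exclusion: each member is counted exactly once
members = F + G + S - FS - FG - SG + FGS

# Students of exactly one language: subtract both pairwise overlaps,
# then add back the triple overlap, which was subtracted twice
only_f = F - FG - FS + FGS
only_g = G - FG - SG + FGS
only_s = S - FS - SG + FGS
```

The arithmetic gives 39 club members in all, of whom 2 + 4 + 8 = 14 were studying only one language.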
Truths and lies
Another kind of logical inference puzzle concerns truths and lies. One variety is as follows: The natives of a certain island are known as knights or knaves, though they are indistinguishable in appearance. The knights always tell the truth, and the knaves always lie. A visitor to the island, meeting three natives, asks them whether they are knights or knaves. The first says something inaudible. The second, pointing to the first, says, “He says that he is a knight.” The third, pointing to the second, says, “He lies.” Knowing beforehand that only one is a knave, the visitor decides what each of the three is.
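Puzzles of this type yield to exhaustive enumeration of the possible knight/knave assignments; a sketch (True stands for knight, and the key observation, that any native would claim to be a knight, is encoded as a comment):

```python
from itertools import product

def solve():
    """Enumerate assignments consistent with the visitor's knowledge
    that exactly one of the three natives is a knave."""
    solutions = []
    for a, b, c in product([True, False], repeat=3):
        if [a, b, c].count(False) != 1:
            continue                 # exactly one knave
        # Whatever the first native mumbled, both a knight and a knave
        # would claim to be a knight, so the second's report is true;
        # a native's statement is true exactly when he is a knight.
        if b != True:
            continue
        # The third's claim "he lies" is true exactly when b is a knave.
        if c != (not b):
            continue
        solutions.append((a, b, c))
    return solutions
```

The search leaves a single assignment: the first two natives are knights and the third is the knave.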
In a slightly different type, four men, one of whom was known to have committed a certain crime, made the following statements when questioned by the police:
Archie: Dave did it.
Dave: Tony did it.
Gus: I didn’t do it.
Tony: Dave lied when he said I did it.
If only one of these four statements is true, who was the guilty man? On the other hand, if only one of these four statements is false, who was the guilty man? (From 101 Puzzles in Thought and Logic by C.R. Wylie, Jr.; Dover Publications, Inc., New York, 1957. Reprinted through the permission of the publisher.)
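Both questions can be settled by trying each suspect in turn and counting how many of the four statements come out true; a sketch:

```python
def verdicts():
    """For each candidate culprit, count the true statements; the
    culprits consistent with 'exactly one true' and 'exactly one
    false' are returned in that order."""
    suspects = ["Archie", "Dave", "Gus", "Tony"]
    true_counts = {}
    for guilty in suspects:
        statements = [
            guilty == "Dave",    # Archie: Dave did it
            guilty == "Tony",    # Dave:   Tony did it
            guilty != "Gus",     # Gus:    I didn't do it
            guilty != "Tony",    # Tony:   Dave lied when he said I did it
        ]
        true_counts[guilty] = sum(statements)
    one_true = [g for g, n in true_counts.items() if n == 1]
    one_false = [g for g, n in true_counts.items() if n == 3]
    return one_true, one_false
```

Note that Tony’s statement is simply the denial of Dave’s, so exactly one of those two is always true; the count of true statements then pins down a unique culprit in each case.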
The problem of the smudged faces is another instance of pure logical deduction. Three travellers were aboard a train that had just emerged from a tunnel, leaving a smudge of soot on the forehead of each. While they were laughing at each other, and before they could look into a mirror, a neighbouring passenger suggested that although no one of the three knew whether he himself was smudged, there was a way of finding out without using a mirror. He suggested: “Each of the three of you look at the other two; if you see at least one whose forehead is smudged, raise your hand.” Each raised his hand at once. “Now,” said the neighbour, “as soon as one of you knows for sure whether his own forehead is smudged or not, he should drop his hand, but not before.” After a moment or two, one of the men dropped his hand with a smile of satisfaction, saying: “I know.” How did that man know that his forehead was smudged?
A final example might be the paradox of the unexpected hanging, a remarkable puzzle that first became known by word of mouth in the early 1940s. One form of the paradox is the following: A prisoner has been sentenced on Saturday. The judge announces that “the hanging will take place at noon on one of the seven days of next week, but you will not know which day it is until you are told on the morning of the day of the hanging.” The prisoner, on mulling this over, decided that the judge’s sentence could not possibly be carried out. “For example,” said he, “I can’t be hanged next Saturday, the last day of the week, because on Friday afternoon I’d still be alive and I’d know for sure that I’d be hanged on Saturday. But I’d known this before I was told about it on Saturday morning, and this would contradict the judge’s statement.” In the same way, he argued, they could not hang him on Friday, or Thursday, or Wednesday, Tuesday, or Monday. “And they can’t hang me tomorrow,” thought the prisoner, “because I know it today!”
Careful analysis reveals that this argument is false, and that the decree can be carried out. The paradox is a subtle one. The crucial point is that a statement about a future event can be known to be a true prediction by one person but not known to be true by another person until after the event has taken place.
Highly amusing and often tantalizing, logical paradoxes generally lead to searching discussions of the foundations of mathematics. As early as the 6th century bce, the Cretan prophet Epimenides allegedly observed that “All Cretans are liars,” which, in effect, means that “All statements made by Cretans are false.” Since Epimenides was a Cretan, the statement made by him is false. Thus the initial statement is self-contradictory. A similar dilemma was given by an English mathematician, P.E.B. Jourdain, in 1913, when he proposed the card paradox. This was a card on one side of which was printed:
“The sentence on the other side of this card is TRUE.”
On the other side of the card the sentence read:
“The sentence on the other side of this card is FALSE.”
The barber paradox, offered by Bertrand Russell, was of the same sort: The only barber in the village declared that he shaved everyone in the village who did not shave himself. On the face of it, this is a perfectly innocent remark until it is asked “Who shaves the barber?” If he does not shave himself, then he is one of those in the village who does not shave himself and so is shaved by the barber, namely, himself. If he shaves himself, he is, of course, one of the people in the village who is not shaved by the barber. The self-contradiction lies in the fact that a statement is made about “all” the members of a certain class, when the statement or the object to which the statement refers is itself a member of the class. In short, the Russell paradox hinges on the distinction between those classes that are members of themselves and those that are not members of themselves. Russell attempted to resolve the paradox of the class of all classes by introducing the concept of a hierarchy of logical types but without much success. Indeed, the entire problem lies close to the philosophical foundations of mathematics.
Quantitative real-time PCR is becoming a mature technology for the quantification of nucleic acids. It is spreading widely beyond its original use in research laboratories, becoming the preferred technology for a range of applications, many of which require specialised solutions and adaptations. Integration with pre-analytical steps and post-processing operations is becoming a key challenge.
The idea of the polymerase chain reaction (PCR) was born in 1983, when Kary Mullis was taking a drive through a California mountain range1,2. It is as simple as it is brilliant. Based on the natural ability of the polymerase enzyme to copy nucleic acids, Mullis reasoned that, using a heat-stable polymerase, it should be possible to automate the reaction to perform multiple copying events by cycling the temperature.
The double-stranded DNA molecule is separated into single strands by heating to 95°C; the temperature is then lowered to allow short synthetic DNA oligonucleotide primers to anneal to complementary sequences in the DNA template; and finally the temperature is set to 72°C, at which the heat-stable Thermus aquaticus (Taq) polymerase extends the primers into full-length copies. Since both strands are copied, the number of DNA molecules doubles in each cycle. Using PCR, virtually any DNA can be amplified, starting from a single copy, into a large number of molecules that can readily be analysed or used for engineering. The development of PCR was a major breakthrough as a qualitative analytical tool, but it was not quantitative: the amount of PCR product produced depended on the amount of reagents added rather than on the amount of starting material.
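Ideal cycling doubles the molecule count each round, so a sample with N0 starting copies holds N0 · 2^c after c cycles; conversely, the cycle at which the product crosses a fixed detection threshold reflects the starting amount. A sketch of this arithmetic (the function names and the efficiency parameter are illustrative, and 1.0 means perfect doubling):

```python
import math

def copies_after(n0, cycles, efficiency=1.0):
    """Number of molecules after a given number of PCR cycles."""
    return n0 * (1.0 + efficiency) ** cycles

def quantification_cycle(n0, threshold, efficiency=1.0):
    """Cycle at which the product crosses a detection threshold - the
    quantity that real-time monitoring registers."""
    return math.log(threshold / n0) / math.log(1.0 + efficiency)
```

With perfect doubling, a tenfold dilution of the input shifts the quantification cycle by log2(10) ≈ 3.32 cycles, the spacing seen in an ideal standard curve.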
In the early 1990s Russell Higuchi discovered that PCR can be performed in the presence of nucleic acid stains that become fluorescent upon binding DNA. The fluorescence from the dyes could be measured throughout the reaction, making it possible to monitor the accumulation of the PCR product in real time. By registering the number of PCR amplification cycles required to obtain a particular amount of product, characterised by a certain dye fluorescence, it was possible to calculate the number of target molecules the sample contained initially. The approach was named quantitative real-time PCR, or qPCR for short. The analytical sensitivity of qPCR is limited only by sampling effects, since a single molecule is sufficient to generate product, and its dynamic range is virtually unlimited. Reproducibility is also impressive, considering the technique gives an exponential response.
A year ago I summarised in DDW the emerging qPCR applications, and in this news story I follow up on those and other important happenings in the qPCR field during the past year.
Closed, automated systems for infectious disease testing
2009 was the year of the swine flu outbreak. First detected in April in Veracruz, Mexico, the new virus, with a combination of genes from swine, avian, and human influenza viruses, spread quickly around the world and was in June declared pandemic by the World Health Organization (WHO) and the US Centers for Disease Control (CDC). The CDC recommended qPCR for the new virus, as other tests were unable to differentiate between pandemic H1N1/09 and regular seasonal flu3. Rapid influenza diagnostic tests (RIDTs) based on the detection of the influenza viral nucleoprotein antigen, for example, show only 10-70% sensitivity compared to the qPCR test for the novel virus4. So far only qPCR-based diagnostic tests have gained FDA approval. This total dominance of qPCR as the primary test for the new pathogen reflects its emerging status as the gold standard for pathogen detection in diagnostics.
Although influenza testing has dominated the news in the past year, the most common molecular diagnostic tests are HCV, HBV and HIV, which account for some 85% of the testing. In the US more than 2 million quantitative HIV tests are performed annually. Currently a handful of large companies compete for this market. All use qPCR on license from Roche, apart from bioMerieux, which uses NASBA, and Chiron/Bayer, which uses branched DNA technology – these are the only other FDA-approved quantitative tests. This picture is expected to change rapidly as the qPCR patents expire within the next few years and many more kit suppliers will be able to enter. Currently, only a few qPCR instruments are licensed for diagnostics and approved for in vitro diagnostics (IVD). This will also change when the patents expire, making it easier for kit manufacturers to sell their products, since many of the new instruments will be open platforms compatible with qPCR kits from most suppliers. These open platforms will mainly be attractive for smaller hospitals and laboratories, where cost savings and flexibility are important. For high-throughput laboratories that perform large numbers of routine tests, fully automatic systems such as the COBAS® AmpliPrep/COBAS® TaqMan® from Roche5, the m2000 RealTime System from Abbott6 and the RotorgeneQ-based system from Qiagen7 are the most attractive. These systems are almost fully automated, but they are voluminous, occupying large bench space. The next generation of integrated systems based on microtechnology will be exciting. These primarily target small laboratories and the doctor’s office, and may ultimately be available for point-of-care testing. The first system on the market is the qPCR instrument from Enigma Diagnostics8. The Enigma FL is completely self-contained; the entire process, from collection of the raw sample to delivery of an end result, takes less than 30 minutes.
The system operates with ambient-stored reagents in a single disposable cartridge and meets the need for diagnostic systems that are portable and easy to use with minimal operator training and expertise. The Enigma ML is suited to settings where usage is lower and space is at a premium, eg in the doctor’s office, pharmacy or intensive care unit. It incorporates a disposable cartridge that accommodates either liquid or swab samples without requirements for manual processing. All reagents and sample preparation tools are held on the self-contained cartridge and all steps are automated. Another exciting system is the GeneDisc Cycler from Gene Systems, a part of Pall Life Sciences9. It is an automated, miniaturised qPCR system that performs gene amplification in a disposable GeneDisc preloaded with reagents. It will be combined with Genextract HD for the standardised extraction of 48 parallel samples. Currently the GeneDisc is only available for food pathogen testing. The global molecular diagnostic market was $2.9 billion in 2008 with a CAGR of 7.8%, of which infectious disease testing accounted for USD $1.9 billion with a CAGR of 6.5%.
It is not correct to refer to the high-throughput instruments as next-generation qPCR; they will not replace the by now traditional 96/384-well instruments, which are most suited for the small research lab where most operations, such as sample preparation and loading, are done manually. But they do constitute a new generation of qPCR instruments that opens up applications that are either not practical or not cost-efficient on the conventional instruments. The new generation of high-throughput qPCR instruments is represented by the OpenArray from Life Technologies10, the BIOMARK from Fluidigm11, the LC1536 from Roche12, and soon also the SmartChip from Wafergen13. They are all built on different platforms. The BIOMARK is a microfluidic system based on the company’s proprietary valves. The dynamic array for expression profiling loads 96 assays on one side and 96 samples on the other side, which are then mixed into 96 × 96 = 9,216 reaction chambers for parallel qPCR analysis. The BIOMARK platform is not compatible with the popular dye reporter SYBR Green I, which adsorbs to the particular material of the microchannels; however, this was recently solved with the introduction of the Chromofy dye (Figure 1)14.
The OpenArray uses a chip with 3,072 33-nanolitre reaction volumes in a footprint the size of a microscope slide. The assays are loaded using proprietary robotics, dried down and sent to the users, who add sample and master mix15. The LC1536 is the big brother of Roche’s very successful LC480, running a 1,536-well plate that requires as little as 0.5μl reaction volumes. In the second half of 2010 Wafergen plans to launch its SmartChip, which has 72 × 72 = 5,184 nanowells. Today these instruments are for research use only. Considering the fast development in multimarker diagnostics of complex diseases, we expect this will change and they will become platforms for multimarker diagnostics, prognostics and theranostics, where some 20-100 markers are expected to be sufficient, and often optimal, to give the most reliable indications.
Miniaturised and lab-on-chip platforms
Exciting developments in Micro-Electro-Mechanical Systems (MEMS) technology have recently allowed the migration of qPCR machines to lab-on-a-chip systems and hold promise to eventually bring qPCR to the doctor's office. The main advantage of miniaturised systems is their speed: the reduced heat capacity of the much smaller reaction volumes allows for shorter cycles, since temperature equilibria are attained faster. The chip designed by Neuzil, for example, performs 40 cycles within six minutes with excellent amplification performance16. Analytical sensitivity is generally sufficient to detect a single molecule if it is present, and high reproducibility can be achieved. A limitation is the very high sensitivity of PCR to inhibitors, which makes it impossible to analyse crude test samples. Another limitation is the small reaction volume, which requires the sample to be concentrated. For field applications the miniaturised PCR systems must be interfaced with, and preferably integrated with, sample preparation and concentration units17,18.
The exponential nature of PCR is its key strength, contributing to its high sensitivity and wide dynamic range, but it is also its Achilles' heel, since it limits precision. Although replicate qPCR response curves show excellent reproducibility, the exponential increase in the amount of template limits the precision with which a difference between samples can be detected to about 50% when expressed in copy numbers. When analysing gene copy number variations, this is the difference between a normal diploid genome and a trisomy. In expression analysis measurement precision is even lower, since additional pre-processing steps add confounding variation.
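The ~50% precision limit can be made concrete with a short sketch. Assuming the textbook relation between quantification cycle and template amount (per-cycle amplification factor 1 + efficiency; the function name is our own, not from any cited software), a Cq noise of roughly 0.5-0.6 cycles at perfect doubling already corresponds to about a 1.5-fold, ie 50%, uncertainty in copy number:

```python
def fold_change(delta_cq: float, efficiency: float = 1.0) -> float:
    """Convert a Cq difference into a fold change in template amount.

    efficiency is the fraction of template copied per cycle
    (1.0 = perfect doubling), so each cycle multiplies the
    template by (1 + efficiency).
    """
    return (1.0 + efficiency) ** delta_cq

# With perfect doubling, ~0.585 cycles of Cq noise corresponds to a
# ~1.5-fold (50%) difference in copy number.
print(fold_change(0.585))  # ~1.50
print(fold_change(1.0))    # 2.0: one full cycle equals a doubling
```

This is why sub-cycle reproducibility of the instrument does not translate into sub-50% precision in copy numbers.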
At about the same time that Russ Higuchi was developing qPCR, the idea of quantifying target numbers by PCR using limiting dilution was conceived by Sykes et al19. By diluting a sample to such an extent that it contains a very small number of target molecules, the sample can be aliquoted into reaction containers, each of which initially is either blank or contains only a single template molecule. When amplified by PCR, the number of target molecules in the initial sample will correspond to the number of positive PCRs20. In 1999 Bert Vogelstein named the technique digital PCR and used it to quantify K-ras mutations in stool DNA from colorectal cancer patients21. However, as long as PCR was performed mainly by manual dispensing into 96- or 384-well plates, digital PCR remained esoteric, with few applications. This will change rapidly with the advent of the high-throughput platforms presented above. Since digital PCR is conceptually an end-point rather than a real-time PCR technique, it also opens the arena to other PCR platforms, such as the innovative RainStorm™ microdroplet-based technology developed at RainDance Technologies22, which produces picolitre-volume droplets at a rate of 10 million per hour. Each droplet is the functional equivalent of a reaction chamber with encapsulated PCR reagents, reporter molecules and, under limiting-dilution conditions, either no template molecule or one. The droplets are carried in a continuous oil flow through alternating denaturation and annealing zones, resulting in rapid (55-second cycles) and efficient PCR amplification. The formation of product, evidencing the presence of template molecules in the individual droplets, is measured as fluorescence within the microfluidic chip.
Digital PCR enhances our ability to discriminate between copy numbers. Four copies can be distinguished from five using some 1,200 chambers, while with 8,000 chambers 11 can be separated from 10 copies24,25. Critically, these new platforms allow for automatic distribution of a sample into a very large number of reaction containers for qPCR analysis, which is a prerequisite for any biological and medical studies and eventually for clinical applications. The important digital PCR applications we foresee becoming popular in the near future include early detection of mutations16, detection of non-cultivatable pathogens against excessive backgrounds26, copy number variations (Figure 2), analysis of fetal DNA in plasma27 and qPCR tomography28.
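The counting step above glosses over one detail: at useful occupancies some partitions receive more than one molecule, so the number of positive partitions undercounts the template. The standard remedy (a general Poisson estimate, not tied to any particular platform; the function name is ours) infers the mean occupancy from the fraction of negative partitions:

```python
import math

def dpcr_copies(n_partitions: int, n_positive: int) -> float:
    """Maximum-likelihood estimate of total template copies in a
    digital PCR run, correcting for partitions that received more
    than one molecule (Poisson statistics)."""
    if n_positive >= n_partitions:
        raise ValueError("all partitions positive: sample too concentrated")
    # Mean copies per partition, from the fraction of negative partitions.
    lam = -math.log(1.0 - n_positive / n_partitions)
    return n_partitions * lam

# On a hypothetical 9,216-chamber array with 500 positives, the
# corrected estimate slightly exceeds the naive count of positives.
print(round(dpcr_copies(9216, 500)))  # 514
```

The correction grows with occupancy, which is why samples are diluted so that most partitions stay empty.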
Pre-analytics, experimental design and publication guidelines
qPCR is the final analytical step in a process of quantifying target nucleic acids that typically involves several upstream steps, starting with sampling, followed by extraction and, in the case of RNA analysis, reverse transcription to produce cDNA. Frequently additional steps such as storage, freezing/thawing, fixation and transportation are required. All these steps contribute to the variation in the analytical process of quantifying the amount of target nucleic acid in a test sample and must be considered.
For studies within one laboratory, the best approach is to perform a small, fully nested pilot study, from whose results the variance contributions of the different pre-processing steps can be estimated29. The subsequent study can then be cost-optimised in terms of using optimal numbers of replicates at the different levels, and also sufficient biological subjects to achieve the required power. The approach can also be used to compare different protocols, kits and approaches. MultiD Analysis offers the software GenEx for this planning30. Results published so far suggest that most variance is contributed by the natural variation among the subjects studied, by sampling in the case of tissue samples and, in a few cases, by the reverse transcription; the qPCR step does not contribute appreciably. Clearly, future efforts should go into improving the pre-analytical steps rather than fine-tuning the qPCR. In Europe the project SPIDIA, co-ordinated by QIAGEN, has been launched to tackle the standardisation and improvement of pre-analytical procedures for in vitro diagnostics31. The activities cover all steps, from the creation of evidence-based guidelines and tools for the pre-analytical phase to the testing and optimisation of these tools through the development of novel assays and biomarkers. The biomarkers shall be suitable to control for the natural degradation that occurs when nucleases are released as cells are damaged, the physical and chemical degradation that occurs when samples are preserved and, most importantly, the activation of many genes that occurs due to the stress and changed environment when samples are collected. Improved methods and procedures to control the quality and integrity of the sampled material are very much needed, as are standardised procedures to minimise variability between measurements in different laboratories and among independent studies.
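The cost-optimisation logic can be sketched in a few lines. Assuming independent random effects at each level of a fully nested design (subjects, samples per subject, RT replicates, qPCR replicates), the variance of a group mean follows the standard nested-design formula below; the function name and the numeric variance estimates are hypothetical illustrations, not GenEx's implementation:

```python
def sd_of_mean(var_subject, var_sample, var_rt, var_qpcr,
               n_subjects, n_samples, n_rt, n_qpcr):
    """Standard deviation of a group mean in a fully nested design:
    n_subjects subjects, n_samples samples per subject, n_rt
    reverse-transcription replicates per sample, n_qpcr qPCR
    replicates per RT reaction. Variances are per-level components
    on a log (Cq-like) scale."""
    total = (var_subject
             + var_sample / n_samples
             + var_rt / (n_samples * n_rt)
             + var_qpcr / (n_samples * n_rt * n_qpcr))
    return (total / n_subjects) ** 0.5

# Hypothetical pilot estimates where subject-to-subject variation
# dominates: tripling qPCR replicates buys almost nothing compared
# with recruiting more subjects.
few_subjects = sd_of_mean(1.0, 0.3, 0.1, 0.02,
                          n_subjects=5, n_samples=1, n_rt=1, n_qpcr=3)
more_subjects = sd_of_mean(1.0, 0.3, 0.1, 0.02,
                           n_subjects=15, n_samples=1, n_rt=1, n_qpcr=1)
print(few_subjects, more_subjects)
```

This mirrors the published finding quoted above: when biological variation dominates, replicating the qPCR step does not appreciably improve precision.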
Guidelines are requisite for the maturing of qPCR into a robust, accurate and reliable nucleic acid quantification technology. As described and exemplified by Bustin, ill-assorted pre-assay conditions, poor assay design and inappropriate data analysis methodologies have resulted in the recurrent publication of data that are at best inconsistent and at worst irrelevant and even misleading32. A step in that direction was taken with the set of guidelines that propose a minimum standard for the provision of information for qPCR experiments ('MIQE')33. MIQE aims to restructure today's free-for-all qPCR methods into a more consistent format that will encourage detailed auditing of experimental detail, data analysis and reporting principles. Key points of the MIQE guidelines are to present the design of the experiment; describe the sample and how it is collected; describe the extraction of the nucleic acids and test their integrity, which can be done using microfluidic analysis on the Agilent Bioanalyzer34 or the Bio-Rad Experion35, or alternatively with differential assays such as the 3'/5' approach; specify the reverse transcription conditions; describe the qPCR target, the qPCR oligonucleotides used and the detailed qPCR protocol; validate any standards and reference genes used; and give details of the qPCR data analysis that was performed.
As we learn more about the complexity of the overall process of collecting, preparing and analysing samples for their nucleic acid content, and about the underlying biological variation due to natural diversity, we face the challenge of separating the noise caused by these factors from the relevant effect on gene expression caused by the environmental influence or drug that we are studying. Adequately designed studies have many subjects, often multiple samples collected from each subject, and levels of technical replicates that are analysed for multiple genes of interest combined with validated genes for normalisation, plus various controls. The studies are often run over multiple plates, occasionally over long periods of time. Analysing these kinds of data with general tools such as Microsoft Excel is not an option; the risk of making errors or failing to account for some of the variability is too large. The instrument software does not offer appropriate analytical tools either. There is, however, no need to reinvent the wheel. The statistical methods to pre-process and then analyse these kinds of measurement data are known, and have recently been made available to the qPCR community in user-friendly software dedicated to the challenge of processing and mining qPCR data. Market-leading GenEx from MultiD Analysis supports all important qPCR platforms on the market and handles multiplate/multicentre/multilevel studies with appropriate controls; in addition to performing all basic comparisons, such as absolute quantification with standard curves and relative quantification with appropriate univariate tests, GenEx offers powerful classification methods for expression profiling and multimarker diagnostics25. Another option is StatMiner from Integromics, which also offers user-friendly and advanced analysis of qPCR data36.
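The basic relative-quantification step that such software automates can be sketched as follows. This is a minimal illustration of reference-gene normalisation followed by a ddCq-style comparison, assuming perfect amplification efficiency by default; function names and the example Cq values are hypothetical, and this is not the GenEx or StatMiner implementation:

```python
import statistics

def relative_quantity(target_cq, ref_cqs, efficiency=1.0):
    """Normalise the Cq of a gene of interest against the mean Cq of
    one or more validated reference genes; returns a relative
    quantity on a linear scale (arbitrary units)."""
    delta = statistics.mean(ref_cqs) - target_cq
    return (1.0 + efficiency) ** delta

def expression_ratio(treated, control):
    """ddCq-style comparison of two normalised relative quantities."""
    return treated / control

# Hypothetical Cq values: two reference genes per sample.
control = relative_quantity(24.0, [20.0, 20.4])
treated = relative_quantity(22.0, [20.1, 20.3])
print(expression_ratio(treated, control))  # 4.0: ~4x up-regulation
```

Real studies add inter-plate calibration, efficiency correction per assay and appropriate statistical tests on top of this core arithmetic, which is exactly the bookkeeping that dedicated software exists to get right.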
It is obvious that qPCR is developing into niches with distinct customer bases. Dominant are the infectious disease applications, which are targeted by IVD-approved instruments, preferably fully automated, for whichever FDA/CE-approved kits are available, and the small research laboratories that require open, flexible systems. Upcoming niches are the high-throughput platforms, which require special loading systems but substantially reduce the cost per run and open the door to novel applications such as digital PCR; they will also be suitable for multimarker diagnostics of complex diseases. Forthcoming niches are closed miniaturised, or at least smaller, systems with integrated sample preparation that will target small diagnostic laboratories and the doctor's office. A major focus is on the pre-analytical steps, where there is plenty of room for improvement in product yield and quality, and where guidelines are needed for diagnostic applications, and on experimental design and post-processing of information to retrieve as valid and valuable biological information as possible from a study. There is also a need to improve the quality of published data. DDW
Dr Mikael Kubista is head of the Department of Gene Expression at the Institute of Biotechnology of the Czech Academy of Sciences in Prague, and CEO and founder of TATAA Biocenters (www.tataa.com), leading providers of qPCR services in Europe. TATAA has an intensive R&D programme related to qPCR and has developed several important products, such as the Chromofy and visiblue dyes, the 1-step extraction-RT-qPCR CelluLyser reagent, and proprietary panels for the identification of optimal reference genes and for the profiling of embryonic stem cells and tumour cells. TATAA also offers hands-on training courses in qPCR and molecular diagnostics worldwide (www.tataa.com/Courses/Courses.html) and arranges the main qPCR symposia in Europe (www.qpcrsymposium.eu) and in the US (www.qpcrsymposium.com).
1 Mullis, Kary. Dancing Naked in the Mind Field (2000).
3 Interim Guidance on Specimen Collection, Processing, and Testing for Patients with Suspected Novel Influenza A (H1N1) Virus Infection. CDC.gov. Centers for Disease Control and Prevention. www.cdc.gov/h1n1flu/specimencollection.htm.
4 Hurt, AC et al. Performance of influenza rapid point-of-care tests in the detection of swine lineage A(H1N1) influenza viruses. Influenza and Other Respiratory Viruses 2009;3(4):171-76.
5 http://molecular.roche.com/platforms/fully_automated_pcr_systems.html.
6 http://international.abbottmolecular.com/m2000SPm2000RT_51644.aspx.
7 http://www1.qiagen.com/Products/ByLabFocus/MDX/.
8 http://www.enigmadiagnostics.com.
10 http://www.lifetechnologies.com/.
15 Brenan, C, Morrison, T. High throughput, nanoliter quantitative PCR. Drug Discovery Today: Technologies 2, 247-253 (2005).
16 Neuzil, P, Zhang, C, Pipper, J, Oh, S, Zhuo, L. Ultra fast miniaturized real-time PCR: 40 cycles in less than six minutes. Nucleic Acids Res. 2006 Jun 28; 34(11):e77.
17 Lee, D, Chen, PJ, Lee, GB. The evolution of real-time PCR machines to real-time PCR chips. Biosens Bioelectron. 2010 Mar 15;25(7):1820-4. Epub 2009 Nov 27.
18 Liu, P, Mathies, RA. Integrated microfluidic systems for high-performance genetic analysis. Trends Biotechnol. 2009 Oct;27(10):572-81. Epub 2009 Aug 24.
19 Sykes, PJ, Neoh, SH, Brisco, MJ, Hughes, E, Condon, J, Morley, AA. Quantitation of targets for PCR by use of limiting dilution. Biotechniques. 1992 Sep;13(3):444-9.
20 Kalinina, O, Lebedeva, I, Brown, J, Silver, J. Nanoliter scale PCR with TaqMan detection. Nucleic Acids Res. 1997 May 15;25(10):1999-2004.
21 Vogelstein, B, Kinzler, KW. Digital PCR. Proc Natl Acad Sci U S A. 1999 Aug 3;96(16):9236-41.
23 Kiss, MM, Ortoleva- Donnelly, L, Beer, NR, Warner, J, Bailey, CG, Colston, BW, Rothberg, JM, Link, DR, Leamon, JH. High-throughput quantitative polymerase chain reaction in picoliter droplets. Anal Chem. 2008 Dec 1;80(23):8975-81.
24 Dube, Simant, Qin, Jian, Ramakrishnan, Ramesh. Mathematical Analysis of Copy Number Variation in a DNA Sample Using Digital PCR on a Nanofluidic Device. PLoS ONE 3 (2008), pp. e2876-e2883.
25 Weaver, Suzanne, Dube, Simant, Mir, Alain, Qin, Jian, Sun, Gang, Ramakrishnan, Ramesh, Jones, Robert C and Livak, Kenneth J. Taking qPCR to a higher level: Analysis of CNV reveals the power of high throughput qPCR to enhance quantitative resolution. Methods Volume 50, Issue 4, April 2010, Pages 271-276.
26 Ottesen, EA, Hong, JW, Quake, SR, Leadbetter, JR. Microfluidic digital PCR enables multigene analysis of individual environmental bacteria. Science. 2006 Dec 1;314(5804):1464-7.
27 Lun, Fiona MF, Chiu, Rossa WK, Chan, KC Allen, Leung, Tak Yeung, Lau, Tze Kin and Lo, YM Dennis. Microfluidics Digital PCR Reveals a Higher than Expected Fraction of Fetal DNA in Maternal Plasma. Clinical Chemistry. 2008;54:1664-1672.
28 Sindelka, R, Sidova, M, Svec, D, Kubista, M. Spatial expression profiles in the Xenopus laevis oocytes measured with qPCR tomography. Methods. 2010 Jan 4. doi:10.1016/j.ymeth.2009.12.011.
29 Tichopad, A, Kitchen, R, Riedmaier, I, Becker, C, Ståhlberg, A, Kubista, M. Design and optimization of reversetranscription quantitative PCR experiments. Clin Chem. 2009 Oct;55(10):1816-23. Epub 2009 Jul 30.
32 Bustin, SA. Why the need for qPCR publication guidelines? – The case for MIQE. Methods. 2010 Apr;50(4):217-26. Epub 2009 Dec 16.
33 www.tataa.com/files/PDF/Clin%20Chem%2055,%204%20%282009%29.pdf.
This page uses content from Wikipedia and is licensed under CC BY-SA.
History of China
- Neolithic, c. 8500 – c. 2070 BC
- Xia dynasty, c. 2070 – c. 1600 BC
- Shang dynasty, c. 1600 – c. 1046 BC
- Zhou dynasty, c. 1046 – 256 BC (Spring and Autumn period)
- Qin dynasty, 221–206 BC
- Han dynasty, 206 BC – 220 AD
- Three Kingdoms (Wei, Shu and Wu), 220–280
- Jin dynasty, 265–420 (Eastern Jin; Sixteen Kingdoms)
- Northern and Southern dynasties
- Sui dynasty, 581–618
- Tang dynasty, 618–907 (Second Zhou dynasty, 690–705)
- Five Dynasties and Ten Kingdoms
- Song dynasty (Northern Song; Western Xia)
- Yuan dynasty, 1271–1368
- Ming dynasty, 1368–1644
- Qing dynasty, 1644–1912
- Republic of China, 1912–1949
- People's Republic of China, 1949–present
Written records of the history of China date from as early as 1500 BC, from the Shang dynasty (c. 1600–1046 BC). Ancient historical texts such as the Records of the Grand Historian (c. 100 BC) and the Bamboo Annals (296 BC) describe a Xia dynasty (c. 2070–1600 BC) before the Shang, but no writing on a durable medium from that period has survived. The Shang ruled in the Yellow River valley, which is commonly held to be the cradle of Chinese civilization. However, Neolithic civilizations originated at various cultural centers along both the Yellow River and the Yangtze River. These Yellow River and Yangtze civilizations arose millennia before the Shang. With thousands of years of continuous history, China is one of the world's oldest civilizations, and is regarded as one of the cradles of civilization.
The Zhou dynasty (1046–256 BC) supplanted the Shang, and introduced the concept of the Mandate of Heaven to justify their rule. The central Zhou government began to weaken due to external and internal pressures in the 8th century BC, and the country eventually splintered into smaller states during the Spring and Autumn period. These states became independent and warred with one another in the following Warring States period. Much of traditional Chinese culture, literature and philosophy first developed during those troubled times.
In 221 BC Qin Shi Huang conquered the various warring states and created for himself the title of Huangdi (Chinese: 皇帝) or "emperor" of the Qin, marking the beginning of imperial China. However, the oppressive government fell soon after his death, and was supplanted by the longer-lived Han dynasty (206 BC–220 AD). Successive dynasties developed bureaucratic systems that enabled the emperor to control vast territories directly. In the 21 centuries from 206 BC until AD 1912, routine administrative tasks were handled by a special elite of scholar-officials. Young men, well versed in calligraphy, history, literature, and philosophy, were carefully selected through difficult government examinations. China's last dynasty was the Qing (1644–1912), which was replaced by the Republic of China in 1912, and in the mainland by the People's Republic of China in 1949, resulting in two de facto states claiming to be the legitimate government of all China.
Chinese history has alternated between periods of political unity and peace, and periods of war and failed statehood – the most recent being the Chinese Civil War (1927–1949). China was occasionally dominated by steppe peoples, most of whom were eventually assimilated into the Han Chinese culture and population. Between eras of multiple kingdoms and warlordism, Chinese dynasties have ruled parts or all of China; in some eras control stretched as far as Xinjiang and Tibet, as at present. Traditional culture, and influences from other parts of Asia and the Western world (carried by waves of immigration, cultural assimilation, expansion, and foreign contact), form the basis of the modern culture of China.
What is now China was inhabited by Homo erectus more than a million years ago. Recent study shows that the stone tools found at Xiaochangliang site are magnetostratigraphically dated to 1.36 million years ago. The archaeological site of Xihoudu in Shanxi Province is the earliest recorded use of fire by Homo erectus, which is dated 1.27 million years ago. The excavations at Yuanmou and later Lantian show early habitation. Perhaps the most famous specimen of Homo erectus found in China is the so-called Peking Man discovered in 1923–27. Fossilised teeth of Homo sapiens dating to 125,000–80,000 BC have been discovered in Fuyan Cave in Dao County in Hunan.
Early evidence for proto-Chinese millet agriculture is radiocarbon-dated to about 7000 BC. The earliest evidence of cultivated rice, found by the Yangtze River, is carbon-dated to 8,000 years ago. Farming gave rise to the Jiahu culture (7000 to 5800 BC). At Damaidi in Ningxia, 3,172 cliff carvings dating to 6000–5000 BC have been discovered, "featuring 8,453 individual characters such as the sun, moon, stars, gods and scenes of hunting or grazing". These pictographs are reputed to be similar to the earliest characters confirmed to be written Chinese. Chinese proto-writing existed in Jiahu around 7000 BC, Dadiwan from 5800 BC to 5400 BC, Damaidi around 6000 BC and Banpo dating from the 5th millennium BC. Some scholars have suggested that Jiahu symbols (7th millennium BC) were the earliest Chinese writing system. Excavation of a Peiligang culture site in Xinzheng county, Henan, found a community that flourished in 5,500 to 4,900 BC, with evidence of agriculture, constructed buildings, pottery, and burial of the dead. With agriculture came increased population, the ability to store and redistribute crops, and the potential to support specialist craftsmen and administrators. In late Neolithic times, the Yellow River valley began to establish itself as a center of Yangshao culture (5000 BC to 3000 BC), and the first villages were founded; the most archaeologically significant of these was found at Banpo, Xi'an. Later, Yangshao culture was superseded by the Longshan culture, which was also centered on the Yellow River from about 3000 BC to 2000 BC.
Bronze artifacts have been found at the Majiayao culture site (between 3100 and 2700 BC). The Bronze Age is also represented at the Lower Xiajiadian culture (2200–1600 BC) site in northeast China. Sanxingdui, located in what is now Sichuan province, is believed to be the site of a major ancient city of a previously unknown Bronze Age culture (between 2000 and 1200 BC). The site was first discovered in 1929 and then re-discovered in 1986. Chinese archaeologists have identified the Sanxingdui culture to be part of the ancient kingdom of Shu, linking the artifacts found at the site to its early legendary kings.
Ferrous metallurgy begins to appear in the late 6th century BC in the Yangzi Valley. A bronze tomahawk (铁刃青铜钺) with a blade of meteoric iron excavated near the city of Gaocheng (藁城) in Shijiazhuang (now Hebei province) has been dated to the 14th century BC. Some authors have used the term "Iron Age" by convention for the transitional period of c. 500 BC to 100 BC, roughly corresponding to the Warring States period of Chinese historiography. An Iron Age culture of the Tibetan Plateau has tentatively been associated with the Zhang Zhung culture described in early Tibetan writings.
The Xia dynasty was considered mythical by historians until scientific excavations found early Bronze Age sites at Erlitou, Henan in 1959. With few clear records matching the Shang oracle bones, it remains unclear whether these sites are the remains of the Xia dynasty or of another culture from the same period. Excavations that overlap the alleged time period of the Xia indicate a type of culturally similar groupings of chiefdoms. Early markings from this period found on pottery and shells are thought to be ancestral to modern Chinese characters.
According to ancient records, the Xia dynasty ended around 1600 BC as a consequence of the Battle of Mingtiao.
Archaeological findings providing evidence for the existence of the Shang dynasty, c. 1600–1046 BC, are divided into two sets. The first set, from the earlier Shang period, comes from sources at Erligang, Zhengzhou, and Shangcheng. The second set, from the later Shang or Yin (殷) period, is at Anyang, in modern-day Henan, which has been confirmed as the last of the Shang's nine capitals (c. 1300–1046 BC). The findings at Anyang include the earliest written record of Chinese past so far discovered: inscriptions of divination records in ancient Chinese writing on the bones or shells of animals — the so-called "oracle bones", dating from around 1500 BC.
Thirty-one kings reigned over the Shang dynasty. During their reign, according to the Records of the Grand Historian, the capital city was moved six times. The final (and most important) move was to Yin in 1350 BC, which led to the dynasty's golden age. The term Yin dynasty has been synonymous with the Shang dynasty in history, although it has lately been used to refer specifically to the latter half of the Shang dynasty.
Chinese historians living in later periods were accustomed to the notion of one dynasty succeeding another, but the actual political situation in early China is known to have been much more complicated. Hence, as some scholars of China suggest, the Xia and the Shang can possibly refer to political entities that existed concurrently, just as the early Zhou is known to have existed at the same time as the Shang.
Although written records found at Anyang confirm the existence of the Shang dynasty, Western scholars are often hesitant to associate settlements that are contemporaneous with the Anyang settlement with the Shang dynasty. For example, archaeological findings at Sanxingdui suggest a technologically advanced civilization culturally unlike Anyang. The evidence is inconclusive in proving how far the Shang realm extended from Anyang. The leading hypothesis is that Anyang, ruled by the same Shang in the official history, coexisted and traded with numerous other culturally diverse settlements in the area that is now referred to as China proper.
The Zhou dynasty (1046 BC to approximately 256 BC) was the longest-lasting dynasty in Chinese history. By the end of the 2nd millennium BC, the Zhou dynasty began to emerge in the Yellow River valley, overrunning the territory of the Shang. The Zhou appeared to have begun their rule under a semi-feudal system. The Zhou lived west of the Shang, and the Zhou leader had been appointed Western Protector by the Shang. The ruler of the Zhou, King Wu, with the assistance of his brother, the Duke of Zhou, as regent, managed to defeat the Shang at the Battle of Muye.
The king of Zhou at this time invoked the concept of the Mandate of Heaven to legitimize his rule, a concept that would be influential for almost every succeeding dynasty. Like Shangdi, Heaven (tian) ruled over all the other gods, and it decided who would rule China. It was believed that a ruler had lost the Mandate of Heaven when natural disasters occurred in great number, and when, more realistically, the sovereign had apparently lost his concern for the people. In response, the royal house would be overthrown, and a new house would rule, having been granted the Mandate of Heaven.
The Zhou initially moved their capital west to an area near modern Xi'an, on the Wei River, a tributary of the Yellow River, but they would preside over a series of expansions into the Yangtze River valley. This would be the first of many population migrations from north to south in Chinese history.
In the 8th century BC, power became decentralized during the Spring and Autumn period, named after the influential Spring and Autumn Annals. In this period, local military leaders used by the Zhou began to assert their power and vie for hegemony. The situation was aggravated by the invasion of other peoples from the northwest, such as the Qin, forcing the Zhou to move their capital east to Luoyang. This marks the second major phase of the Zhou dynasty: the Eastern Zhou. The Spring and Autumn period is marked by a falling apart of the central Zhou power. In each of the hundreds of states that eventually arose, local strongmen held most of the political power and continued their subservience to the Zhou kings in name only. Some local leaders even started using royal titles for themselves. China now consisted of hundreds of states, some of them only as large as a village with a fort.
As the era continued, larger and more powerful states annexed or claimed suzerainty over smaller ones. By the 6th century BC most small states had disappeared, having been annexed, and just a few large and powerful principalities dominated China. Some southern states, such as Chu and Wu, claimed independence from the Zhou, who undertook wars against some of them (Wu and Yue). Many new cities were established in this period and Chinese culture was slowly shaped.
Once all these powerful rulers had firmly established themselves within their respective dominions, the bloodshed focused more fully on interstate conflict in the Warring States period, which began when the three remaining élite families in the Jin state – Zhao, Wei and Han – partitioned the state. Many famous individuals such as Lao Zi, Confucius and Sun Tzu lived during this chaotic period.
The Hundred Schools of Thought of Chinese philosophy blossomed during this period, and such influential intellectual movements as Confucianism, Taoism, Legalism and Mohism were founded, partly in response to the changing political world. The first two philosophical thoughts would have an enormous influence on Chinese culture.
After further political consolidation, seven prominent states remained by the end of the 5th century BC, and the years in which these few states battled each other are known as the Warring States period. Though there remained a nominal Zhou king until 256 BC, he was largely a figurehead and held little real power.
Numerous developments in culture and mathematics were made during this period. Examples include an important literary achievement, the Zuo Commentary on the Spring and Autumn Annals, which summarizes the preceding Spring and Autumn period, and the bundle of 21 bamboo slips from the Tsinghua collection, dated to 305 BC, which constitutes the world's earliest example of a two-digit decimal multiplication table, indicating that sophisticated commercial arithmetic was already established during this period.
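The arithmetic value of such a table can be illustrated with a short sketch (modern code for illustration only, not a reconstruction of the slips): the Tsinghua table tabulates products involving 0.5, the units 1–9, and the tens 10–90, so multiplying any two two-digit numbers reduces to at most four table lookups plus addition.

```python
# Illustrative sketch (assumption: hypothetical modern code, not a
# reconstruction of the bamboo slips). The Tsinghua table lists products
# of 0.5, the units 1-9, and the tens 10-90; two-digit multiplication
# then reduces to a few table lookups plus addition.
TABLE_ENTRIES = {0.5} | set(range(1, 10)) | set(range(10, 100, 10))

def multiply_via_table(a, b):
    """Multiply two positive integers below 100 by decomposing each into
    tens and units, so every partial product is a single table entry."""
    parts_a = [p for p in (a - a % 10, a % 10) if p]
    parts_b = [p for p in (b - b % 10, b % 10) if p]
    total = 0
    for pa in parts_a:
        for pb in parts_b:
            # Both factors of each partial product appear in the table.
            assert pa in TABLE_ENTRIES and pb in TABLE_ENTRIES
            total += pa * pb
    return total

print(multiply_via_table(22, 35))  # 770
```

The half-entry (0.5) in the original table further allowed products involving halves, extending the same decomposition to numbers such as 22.5.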
As neighboring territories of these warring states, including areas of modern Sichuan and Liaoning, were annexed, they were governed under the new local administrative system of commandery and prefecture (郡縣/郡县). This system had been in use since the Spring and Autumn period, and parts can still be seen in the modern system of Sheng & Xian (province and county, 省縣/省县).
The final expansion in this period began during the reign of Ying Zheng, the king of Qin. His unification of the other six powers, and further annexations in the modern regions of Zhejiang, Fujian, Guangdong and Guangxi in 214 BC, enabled him to proclaim himself the First Emperor (Qin Shi Huang).
The Imperial China Period can be divided into three subperiods: Early, Middle, and Late.
Major events in the Early subperiod include the Qin unification of China and their replacement by the Han, the First Split followed by the Jin unification, and the loss of north China. The Middle subperiod was marked by the Sui unification and their supplementation by the Tang, the Second Split, and the Song unification. The Late subperiod included the Yuan, Ming, and Qing dynasties.
Historians often refer to the period from the Qin dynasty to the end of the Qing dynasty as Imperial China. Though the unified reign of the First Qin Emperor lasted only 12 years, he managed to subdue great parts of what constitutes the core of the Han Chinese homeland and to unite them under a tightly centralized Legalist government seated at Xianyang (close to modern Xi'an). The doctrine of Legalism that guided the Qin emphasized strict adherence to a legal code and the absolute power of the emperor. This philosophy, while effective for expanding the empire in a military fashion, proved unworkable for governing it in peacetime. The Qin Emperor presided over the brutal silencing of political opposition, including the event known as the burning of books and burying of scholars. This would be the impetus behind the later Han synthesis incorporating the more moderate schools of political governance.
Major contributions of the Qin include the concept of a centralized government, and the unification and development of the legal code, the written language, measurement, and currency of China after the tribulations of the Spring and Autumn and Warring States periods. Even something as basic as the length of axles for carts—which need to match ruts in the roads—had to be made uniform to ensure a viable trading system throughout the empire. Also as part of its centralization, the Qin connected the northern border walls of the states it defeated, making the first Great Wall of China.
A major Qin innovation that lasted until 1912 was reliance upon a trained intellectual elite, the scholar-officials ("scholar-gentlemen"). They were civil servants appointed by the Emperor to handle daily governance. Talented young men were selected through an elaborate process of imperial examination. They had to demonstrate skill at calligraphy, and had to know Confucian philosophy. Historian Wing-Tsit Chan concludes that:
After Emperor Qin Shi Huang's unnatural death due to the consumption of mercury pills, the Qin government drastically deteriorated and eventually capitulated in 207 BC after the Qin capital was captured and sacked by rebels, which would ultimately lead to the establishment of a new dynasty of a unified China. Despite the short 15-year duration of the Qin dynasty, it was immensely influential on China and the structure of future Chinese dynasties.
The Han dynasty was founded by Liu Bang, who emerged victorious in the Chu–Han Contention that followed the fall of the Qin dynasty. A golden age in Chinese history, the Han dynasty's long period of stability and prosperity consolidated the foundation of China as a unified state under a central imperial bureaucracy, which was to last intermittently for most of the next two millennia. During the Han dynasty, the territory of China was extended to most of the China proper and to areas far west. Confucianism was officially elevated to orthodox status and was to shape the subsequent Chinese civilization. Art, culture and science all advanced to unprecedented heights. With the profound and lasting impacts of this period of Chinese history, the dynasty name "Han" has been taken as the name of the Chinese people, now the dominant ethnic group in modern China, and has been commonly used to refer to Chinese language and written characters. The Han dynasty also saw many mathematical innovations, such as the method of Gaussian elimination, which appeared in the Chinese mathematical text Chapter Eight Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC, more than 1500 years before Europeans developed the method in the 18th century.
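The elimination procedure described in Chapter Eight can be sketched in a few lines of modern code (a hedged illustration, not the text's original rod-calculus layout); the system solved below is the chapter's well-known first problem, in which three grades of grain yield 39, 34 and 26 measures.

```python
from fractions import Fraction

def solve(aug):
    """Solve a linear system by Gaussian elimination.

    `aug` is an augmented matrix [A | b] given as rows of numbers.
    Exact rational arithmetic avoids floating-point error.
    """
    rows = [[Fraction(x) for x in row] for row in aug]
    n = len(rows)
    # Forward elimination: zero out entries below each pivot.
    for i in range(n):
        # Swap a row with a nonzero pivot into position i.
        pivot = next(r for r in range(i, n) if rows[r][i] != 0)
        rows[i], rows[pivot] = rows[pivot], rows[i]
        for r in range(i + 1, n):
            factor = rows[r][i] / rows[i][i]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[i])]
    # Back substitution.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(rows[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rows[i][-1] - s) / rows[i][i]
    return x

# Problem 1 of Chapter Eight ("Rectangular Arrays"):
# 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26.
print(solve([[3, 2, 1, 39], [2, 3, 1, 34], [1, 2, 3, 26]]))
```

The ancient text performed the same eliminations column by column on counting rods, which is why the chapter title translates as "Rectangular Arrays".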
After the initial laissez-faire policies of Emperors Wen and Jing, the ambitious Emperor Wu brought the empire to its zenith. To consolidate his power, Confucianism, which emphasizes stability and order in a well-structured society, was given exclusive patronage as the guiding philosophical thought and moral principle of the empire. Imperial universities were established to support its study and further development, while other schools of thought were discouraged.
Major military campaigns were launched to weaken the nomadic Xiongnu Empire, limiting their influence north of the Great Wall. Along with the diplomatic efforts led by Zhang Qian, the sphere of influence of the Han Empire extended to the states in the Tarim Basin, and the Silk Road connecting China to the west was opened up, stimulating bilateral trade and cultural exchange. To the south, various small kingdoms far beyond the Yangtze River Valley were formally incorporated into the empire.
Emperor Wu also dispatched a series of military campaigns against the Baiyue tribes. The Han annexed Minyue in 135 BC and 111 BC, Nanyue in 111 BC, and Dian in 109 BC. Migration and military expeditions led to the cultural assimilation of the south. It also brought the Han into contact with kingdoms in Southeast Asia, introducing diplomacy and trade.
After Emperor Wu, the empire slipped into gradual stagnation and decline. Economically, the state treasury was strained by excessive campaigns and projects, while land acquisitions by elite families gradually drained the tax base. Various consort clans exerted increasing control over strings of incompetent emperors and eventually the dynasty was briefly interrupted by the usurpation of Wang Mang.
In AD 9, the usurper Wang Mang claimed that the Mandate of Heaven called for the end of the Han dynasty and the rise of his own, and he founded the short-lived Xin ("New") dynasty. Wang Mang started an extensive program of land and other economic reforms, including the outlawing of slavery and land nationalization and redistribution. These programs, however, were never supported by the landholding families, because they favored the peasants. The instability of power brought about chaos, uprisings, and loss of territories. This was compounded by mass flooding of the Yellow River; silt buildup caused it to split into two channels and displaced large numbers of farmers. Wang Mang was eventually killed in Weiyang Palace by an enraged peasant mob in AD 23.
Emperor Guangwu reinstated the Han dynasty with the support of landholding and merchant families at Luoyang, east of the former capital Xi'an. Thus, this new era is termed the Eastern Han dynasty. With the capable administrations of Emperors Ming and Zhang, the former glories of the dynasty were reclaimed, with brilliant military and cultural achievements. The Xiongnu Empire was decisively defeated. The diplomat and general Ban Chao further expanded the conquests across the Pamirs to the shores of the Caspian Sea, thus reopening the Silk Road and bringing trade and foreign cultures, along with the arrival of Buddhism. With extensive connections with the west, the first of several Roman embassies to China was recorded in Chinese sources, coming by the sea route in AD 166, and a second one in AD 284.
The Eastern Han dynasty was one of the most prolific eras of science and technology in ancient China, notably the historic invention of papermaking by Cai Lun, and the numerous scientific and mathematical contributions by the famous polymath Zhang Heng.
By the 2nd century, the empire declined amidst land acquisitions, invasions, and feuding between consort clans and eunuchs. The Yellow Turban Rebellion broke out in AD 184, ushering in an era of warlords. In the ensuing turmoil, three states tried to gain predominance in the period of the Three Kingdoms. This time period has been greatly romanticized in works such as Romance of the Three Kingdoms.
After Cao Cao reunified the north in 208, his son proclaimed the Wei dynasty in 220. Soon, Wei's rivals Shu and Wu proclaimed their independence, leading China into the Three Kingdoms period. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families.
In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived.
The Jin dynasty was severely weakened by internecine fighting among imperial princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, a Jin prince in modern-day Nanjing became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the Jin dynasty as the Western Jin.
Northern China fragmented into a series of independent kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been "sinicized" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During the period of the Sixteen Kingdoms, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze Basin and Delta.
In the early 5th century, China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties was led by a Han Chinese ruling family and used Jiankang (modern Nanjing) as the capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while northern barbarian regimes began to sinify.
In the north, the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or Han Chinese who had married into Xianbei families. During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han.
Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other.
The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession to the Northern Zhou, the Sui went on to conquer the Southern Chen in 589 to reunify China, ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries and imperial examinations for selecting officials from commoners, while improving on the fubing system of army conscription and the equal-field system of land distribution. These policies, which were adopted by later dynasties, brought enormous population growth and amassed great wealth for the state. Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was officially supported. Sui China was known for its numerous mega-construction projects. Intended for grain shipment and transporting troops, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region, and in another route, to the northeast border. The Great Wall was also expanded, while a series of military conquests and diplomatic maneuvers further pacified its borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty.
According to historian Mark Edward Lewis:
The Tang dynasty was founded by Emperor Gaozu on 18 June 618. It was a golden age of Chinese civilization and considered to be the most prosperous period of China with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time.
The second emperor, Taizong, is widely regarded as one of the greatest emperors in Chinese history, having laid the foundation for the dynasty to flourish for centuries beyond his reign. Combined military conquests and diplomatic maneuvers were implemented to eliminate threats from nomadic tribes, extend the border, and bring neighboring states into a tributary system. Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes began from port cities such as Guangzhou. There was extensive trade with distant foreign countries, and many foreign merchants settled in China, encouraging a cosmopolitan culture. The Tang culture and social systems were observed and imitated by neighboring countries, most notably Japan. Internally the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator, travelled to India on his own and returned with "over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics."
Underlying the prosperity of the early Tang dynasty was a strong centralized bureaucracy with efficient policies. The government was organized as "Three Departments and Six Ministries" to separately draft, review, and implement policies. These departments were run by royal family members as well as scholar officials who were selected by imperial examinations. These practices, which matured in the Tang dynasty, were continued by the later dynasties, with some modifications.
Under the Tang "equal-field system" all land was owned by the Emperor and granted to people according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the "Fubing system". These policies stimulated a rapid growth in productivity and a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continuously falling into the hands of private owners.
The dynasty continued to flourish under the rule of Empress Wu Zetian, the only empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least 50 million people. There were vibrant artistic and cultural creations, including works of the greatest Chinese poets, Li Bai and Du Fu.
At the zenith of prosperity of the empire, the An Lushan Rebellion from 755 to 763 was a watershed event that devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as Jiedushi, gained increasingly autonomous status. With the loss of revenue from the land tax, the central imperial government relied heavily on the salt monopoly. Externally, formerly submissive states raided the empire and the vast border territories were irreversibly lost for subsequent centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy.
In the late Tang period, the empire was worn out by recurring revolts of regional warlords, while internally, as scholar-officials engaged in fierce factional strife, corrupt eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port Guangzhou in 879 was followed by the massacre of most of its inhabitants, along with the large foreign merchant enclaves. By 881, both capitals, Luoyang and Chang'an, fell successively. The reliance on ethnic Han and Turkic warlords in suppressing the rebellion increased their power and influence. Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division.
The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely, (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional Imperial heartland in northern China. Among the regimes, the rulers of (Later) Tang, Jin and Han were sinicized Shatuo Turks, who ruled over the ethnic majority of Han Chinese. More stable and smaller regimes of mostly ethnic Han rulers coexisted in south and western China over the period, cumulatively constituting the "Ten Kingdoms".
Amidst political chaos in the north, the strategic Sixteen Prefectures (the region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of the China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominating northern China, there were mass southward migrations of population, which further enhanced the southward shift of cultural and economic centers in China. The era ended with the coup of the Later Zhou general Zhao Kuangyin and the establishment of the Song dynasty in 960, which would eventually annihilate the remnants of the "Ten Kingdoms" and reunify China.
In 960, the Song dynasty was founded by Emperor Taizu, with its capital established in Kaifeng (also known as Bianjing). In 979, the Song dynasty reunified most of the China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty from 1032 to 1227.
Aiming to recover the strategic Sixteen Prefectures lost in the previous dynasty, campaigns were launched against the Liao dynasty in the early Song period, which all ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song's submission and then agreement to the Chanyuan Treaty, which imposed heavy annual tributes from the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song's silver to the Liao was paid back through the purchase of Chinese goods and products, which expanded the Song economy, and replenished its treasury. This dampened the incentive for the Song to further campaign against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might which was derived from its primitive nomadic lifestyle. Similar treaties and social-economical consequences occurred in Song's relations with the Jin dynasty.
Within the Liao Empire, the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphract annihilated the Liao dynasty, while remnants of Liao court members fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). Jin's invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident, ending the Northern Song dynasty. Later the entire north of China was conquered. The surviving members of the Song court regrouped in the new capital city of Hangzhou and initiated the Southern Song dynasty, which ruled territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279.
Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization. The Song economy, facilitated by technological advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there was extensive maritime trade with neighbouring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses travelled throughout the China Seas and northern Indian Ocean. The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. With prosperous economic activities, the first historical use of paper currency emerged in the western city of Chengdu, as a supplement to the existing copper coins.
The Song dynasty was considered to be the golden age of great advancements in science and technology of China, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). Inventions such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money were all invented during the Song dynasty.
There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Enormous literary works were compiled during the Song dynasty, such as the historical work, the Zizhi Tongjian ("Comprehensive Mirror to Aid in Government"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui.
The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put into use on battlefields by the Song army, inspiring a succession of new firearm and siege-engine designs. During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai River against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in Song's victory over the invading Jin forces at the Battle of Tangdao in the East China Sea, and the Battle of Caishi on the Yangtze River in 1161.
The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled, with a marked contraction in the economy. Despite the Song's fierce resistance, which halted the Mongol advance for more than three decades, the Southern Song capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279.
The Yuan dynasty was formally proclaimed in 1271, when the Great Khan of the Mongols, Kublai Khan, one of the grandsons of Genghis Khan, assumed the additional title of Emperor of China and treated his inherited part of the Mongol Empire as a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the entire China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongolian heartland and other regions, inheriting the largest share of territory of the divided Mongol Empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. Following the previous Jin dynasty, the capital of the Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital city to economic hubs in the southern part of China, setting the precedent and foundation whereby Beijing would largely remain the capital of the successive regimes that unified the Chinese mainland.
After the peace treaty in 1304 that ended a series of Mongol civil wars, the emperors of the Yuan dynasty were upheld as the nominal Great Khan (Khagan) of the greater Mongol Empire over the other Mongol khanates, which nonetheless remained de facto autonomous. The era was known as Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single state, facilitating the flow of people, trade, and cultural exchange. A network of roads and a postal system were established to connect the vast empire. Lucrative maritime trade, developed from the previous Song dynasty, continued to flourish, with Quanzhou and Hangzhou emerging as the largest ports in the world. Adventurous travelers from the far west, most notably the Venetian Marco Polo, settled in China for decades. Upon his return, his detailed travel record inspired generations of medieval Europeans with the splendors of the far East. The Yuan dynasty was the first economy in which paper currency, known at the time as Chao, was used as the predominant medium of exchange. Its unrestricted issuance in the late Yuan dynasty inflicted hyperinflation, which eventually brought the downfall of the dynasty.
While the Mongol rulers of the Yuan dynasty adapted substantially to Chinese culture, their sinicization was of a lesser extent compared to earlier conquest dynasties in Chinese history. To preserve racial superiority as the conqueror and ruling class, traditional nomadic customs and heritage from the Mongolian steppe were held in high regard. On the other hand, the Mongol rulers also adapted flexibly to a variety of cultures from many advanced civilizations within the vast empire. Traditional social structure and culture in China underwent immense transformation during the Mongol dominance. Large groups of foreign migrants settled in China, who enjoyed elevated social status over the majority Han Chinese, while enriching Chinese culture with foreign elements. The class of scholar-officials and intellectuals, traditional bearers of elite Chinese culture, lost substantial social status. This stimulated the development of the culture of the common folk. There were prolific works in zaju variety shows and literary songs (sanqu), which were written in a distinctive poetry style known as qu. Novels of vernacular style gained unprecedented status and popularity.
Before the Mongol invasion, Chinese dynasties reported approximately 120 million inhabitants; after the conquest had been completed in 1279, the 1300 census reported roughly 60 million people. This major decline is not necessarily due only to Mongol killings. Scholars such as Frederick W. Mote argue that the wide drop in numbers reflects an administrative failure to record rather than an actual decrease; others such as Timothy Brook argue that the Mongols created a system of enserfment among a huge portion of the Chinese populace, causing many to disappear from the census altogether; other historians including William McNeill and David Morgan consider that plague was the main factor behind the demographic decline during this period. In the 14th century China suffered additional depredations from epidemics of plague, estimated to have killed 25 million people, 30% of the population of China.
Throughout the Yuan dynasty, there was some general sentiment among the populace against the Mongol dominance. Yet rather than the nationalist cause, it was mainly strings of natural disasters and incompetent governance that triggered widespread peasant uprisings from the 1340s onward. After the massive naval engagement at Lake Poyang, Zhu Yuanzhang prevailed over other rebel forces in the south. He proclaimed himself emperor and founded the Ming dynasty in 1368. The same year his northern expedition army captured the capital Khanbaliq. The Yuan remnants fled back to Mongolia and sustained the regime there. Other Mongol khanates in Central Asia continued to exist after the fall of the Yuan dynasty in China.
The Ming dynasty was founded by Zhu Yuanzhang in 1368, who proclaimed himself as the Hongwu Emperor. The capital was initially set at Nanjing, and was later moved to Beijing from Yongle Emperor's reign onward.
Urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil.
Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the voyages of Zheng He.
The Hongwu Emperor, one of the few founders of a Chinese dynasty who came from peasant origins, laid the foundation of a state that relied fundamentally on agriculture. Commerce and trade, which had flourished in the previous Song and Yuan dynasties, were less emphasized. Neo-feudal landholdings of the Song and Mongol periods were expropriated by the Ming rulers: land estates were confiscated by the government, fragmented, and rented out, and private slavery was forbidden. Consequently, after the death of the Yongle Emperor, independent peasant landholders predominated in Chinese agriculture. These laws might have paved the way to removing the worst of the poverty of the previous regimes. Toward the later era of the Ming dynasty, with declining government control, commerce, trade, and private industry revived.
The dynasty had a strong and complex central government that unified and controlled the empire. The emperor's role became more autocratic, although Hongwu Emperor necessarily continued to use what he called the "Grand Secretariat" to assist with the immense paperwork of the bureaucracy, including memorials (petitions and recommendations to the throne), imperial edicts in reply, reports of various kinds, and tax records. It was this same bureaucracy that later prevented the Ming government from being able to adapt to changes in society, and eventually led to its decline.
The Yongle Emperor strenuously tried to extend China's influence beyond its borders by demanding other rulers send ambassadors to China to present tribute. A large navy was built, including four-masted ships displacing 1,500 tons. A standing army of 1 million troops (by some estimates as many as 1.9 million) was created. The Chinese armies conquered and occupied Vietnam for around 20 years, while the Chinese fleet sailed the China seas and the Indian Ocean, cruising as far as the east coast of Africa. The Chinese gained influence in eastern Moghulistan. Several maritime Asian nations sent envoys with tribute for the Chinese emperor. Domestically, the Grand Canal was expanded and became a stimulus to domestic trade. Over 100,000 tons of iron per year were produced. Many books were printed using movable type. The imperial palace in Beijing's Forbidden City reached its current splendor. It was also during these centuries that the potential of south China came to be fully exploited. New crops were widely cultivated and industries such as those producing porcelain and textiles flourished.
In 1449 Esen Tayisi led an Oirat Mongol invasion of northern China which culminated in the capture of the Zhengtong Emperor at Tumu. Since then, the Ming became on the defensive on the northern frontier, which led to the Ming Great Wall being built. Most of what remains of the Great Wall of China today was either built or repaired by the Ming. The brick and granite work was enlarged, the watchtowers were redesigned, and cannons were placed along its length.
At sea, the Ming became increasingly isolationist after the death of the Yongle Emperor. The treasure voyages which had sailed the Indian Ocean were discontinued, and maritime prohibition laws were put in place banning the Chinese from sailing abroad. European traders who reached China in the midst of the Age of Discovery were repeatedly rebuffed in their requests for trade, with the Portuguese being repulsed by the Ming navy at Tuen Mun in 1521 and again in 1522. Domestic and foreign demands for overseas trade, deemed illegal by the state, led to widespread wokou piracy attacking the southeastern coastline during the rule of the Jiajing Emperor (1507–1567), which only subsided after the opening of ports in Guangdong and Fujian and much military suppression. The Portuguese were allowed to settle in Macau in 1557 for trade, which remained in Portuguese hands until 1999. The Dutch entry into the Chinese seas was also met with fierce resistance: the Dutch were chased off the Penghu islands in the Sino-Dutch conflicts of 1622–1624 and were forced to settle in Taiwan instead. The Dutch in Taiwan fought the Ming in the Battle of Liaoluo Bay in 1633 and lost, and eventually surrendered to the Ming loyalist Koxinga in 1662, after the fall of the Ming dynasty.
The Ming dynasty intervened deeply in the Japanese invasions of Korea (1592–1598), which ended with the withdrawal of all invading Japanese forces from Korea and the restoration of its traditional ally and tributary state, the Joseon dynasty. The regional hegemony of the Ming dynasty was preserved, but at a heavy toll on its resources. Meanwhile, with Ming control in Manchuria in decline, the Manchu (Jurchen) tribes, under their chieftain Nurhaci, broke away from Ming rule and emerged as a powerful, unified state, which was later proclaimed as the Qing dynasty. It went on to subdue the much weakened Korea as its tributary, conquer Mongolia, and expand its territory to the outskirts of the Great Wall. The most elite army of the Ming dynasty was stationed at the Shanhai Pass to guard the last stronghold against the Manchus, which weakened its suppression of internal peasant uprisings.
The Qing dynasty (1644–1911) was the last imperial dynasty in China. Founded by the Manchus, it was the second conquest dynasty to rule the entire territory of China and its people. The Manchus were formerly known as Jurchens, residing in the northeastern part of the Ming territory outside the Great Wall. They emerged as the major threat to the late Ming dynasty after Nurhaci united all Jurchen tribes and established an independent state. However, the Ming dynasty would be overthrown by Li Zicheng's peasant rebellion, with Beijing captured in 1644 and the Chongzhen Emperor, the last Ming emperor, committing suicide. The Manchus allied with the former Ming general Wu Sangui to seize Beijing, which was made the capital of the Qing dynasty, and then proceeded to subdue the Ming remnants in the south. The decades of Manchu conquest caused enormous loss of life, and the economy of China shrank drastically. In total, the Qing conquest of the Ming (1618–1683) cost as many as 25 million lives. Nevertheless, the Manchus adopted the Confucian norms of traditional Chinese government in their rule and were considered a Chinese dynasty.
The Manchus enforced a "queue order," forcing Han Chinese men to adopt the Manchu queue hairstyle. Officials were required to wear Manchu-style clothing such as the changshan, while ordinary Han civilians were allowed to wear traditional Han clothing (Hanfu); in practice, Manchu-influenced dress such as the qipao eventually became widespread. The Kangxi Emperor ordered the creation of the Kangxi Dictionary, the most complete dictionary of Chinese characters that had been compiled. The Qing dynasty set up the Eight Banners system that provided the basic framework for the Qing military organization. Bannermen could not undertake trade or manual labor; they had to petition to be removed from banner status. They were considered a form of nobility and were given preferential treatment in terms of annual pensions, land, and allotments of cloth.
Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Xinjiang, Tibet, and Mongolia were also formally incorporated into Chinese territory. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories, an uprising of three generals in Southern China who had been denied hereditary rule of large fiefdoms granted by the previous emperor. In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which was founded by the Ming loyalist Koxinga (Zheng Chenggong) in 1662 after the fall of the Southern Ming, and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk.
By the end of Qianlong Emperor's long reign, the Qing Empire was at its zenith. China ruled more than one-third of the world's population, and had the largest economy in the world. By area it was one of the largest empires ever.
In the 19th century the empire was internally stagnant and externally threatened by western powers. The defeat by the British Empire in the First Opium War (1840) led to the Treaty of Nanking (1842), under which Hong Kong was ceded to Britain and importation of opium (produced by British Empire territories) was allowed. Subsequent military defeats and unequal treaties with other western powers continued even after the fall of the Qing dynasty.
Internally the Taiping Rebellion (1851–1864), a quasi-Christian religious movement led by the "Heavenly King" Hong Xiuquan, occupied roughly a third of Chinese territory for over a decade until it was finally crushed in the Third Battle of Nanking in 1864. This was one of the largest wars of the 19th century in terms of troop involvement; there was massive loss of life, with a death toll of about 20 million. A string of civil disturbances followed, including the Punti–Hakka Clan Wars, Nian Rebellion, Dungan Revolt, and Panthay Rebellion. All rebellions were ultimately put down, but at enormous cost and with millions dead, seriously weakening the central imperial authority. The Banner system that the Manchus had relied upon for so long failed: Banner forces were unable to suppress the rebels, and the government called upon local officials in the provinces, who raised "New Armies", which successfully crushed the challenges to Qing authority. China never rebuilt a strong central army, and many local officials became warlords who used military power to effectively rule independently in their provinces.
In response to calamities within the empire and threats from imperialism, the Self-Strengthening Movement pursued institutional reform in the second half of the 1800s. The aim was to modernize the empire, with prime emphasis on strengthening the military. However, the reform was undermined by corrupt officials, cynicism, and quarrels within the imperial family. As a result, the Beiyang Fleet was soundly defeated in the First Sino-Japanese War (1894–1895). The Guangxu Emperor and the reformists then launched a more comprehensive reform effort, the Hundred Days' Reform (1898), but it was soon overturned by the conservatives under Empress Dowager Cixi in a military coup.
At the turn of the 20th century, the violent Boxer Rebellion opposed foreign influence in Northern China, and attacked Chinese Christians and missionaries. When Boxers entered Beijing, the Qing government ordered all foreigners to leave. But instead the foreigners and many Chinese were besieged in the foreign legations quarter. The Eight-Nation Alliance sent the Seymour Expedition of Japanese, Russian, Italian, German, French, American, and Austrian troops to relieve the siege. The Expedition was stopped by the Boxers at the Battle of Langfang and forced to retreat. Due to the Alliance's attack on the Dagu Forts, the Qing government in response sided with the Boxers and declared war on the Alliance. There was fierce fighting at Tientsin. The Alliance formed the second, much larger Gaselee Expedition and finally reached Beijing; the Qing government evacuated to Xi'an. The Boxer Protocol ended the war.
Frustrated by the Qing court's resistance to reform and by China's weakness, young officials, military officers, and students began to advocate the overthrow of the Qing dynasty and the creation of a republic. They were inspired by the revolutionary ideas of Sun Yat-sen. A revolutionary military uprising, the Wuchang Uprising, began on 10 October 1911, in Wuhan. The provisional government of the Republic of China was formed in Nanjing on 12 March 1912. The Xinhai Revolution ended 2,000 years of dynastic rule in China.
After the overthrow of the Qing dynasty, Sun Yat-sen was declared President, but Sun was forced to turn power over to Yuan Shikai, who commanded the New Army and had been Prime Minister under the Qing government, as part of the agreement to let the last Qing monarch abdicate (a decision Sun would later regret). Over the next few years, Yuan proceeded to abolish the national and provincial assemblies, and declared himself emperor in late 1915. Yuan's imperial ambitions were fiercely opposed by his subordinates; faced with the prospect of rebellion, he abdicated in March 1916, and died in June of that year.
Yuan's death in 1916 left a power vacuum in China; the republican government was all but shattered. This ushered in the Warlord Era, during which much of the country was ruled by shifting coalitions of competing provincial military leaders.
In 1919, the May Fourth Movement began as a response to the terms imposed on China by the Treaty of Versailles ending World War I, but quickly became a nationwide protest movement about the domestic situation in China. The protests were a moral success: the cabinet fell, and China refused to sign the Treaty of Versailles, which had awarded German holdings to Japan. The New Culture Movement, stimulated by the May Fourth Movement, waxed strong throughout the 1920s and 1930s.
The discrediting of liberal Western philosophy amongst leftist Chinese intellectuals led to more radical lines of thought inspired by the Russian Revolution, and supported by agents of the Comintern sent to China by Moscow. This created the seeds for the irreconcilable conflict between the left and right in China that would dominate Chinese history for the rest of the century.
In the 1920s, Sun Yat-sen established a revolutionary base in south China, and set out to unite the fragmented nation. With assistance from the Soviet Union (itself fresh from Lenin's revolution), he entered into an alliance with the fledgling Communist Party of China. After Sun's death from cancer in 1925, one of his protégés, Chiang Kai-shek, seized control of the Kuomintang (Nationalist Party or KMT) and succeeded in bringing most of south and central China under its rule in a military campaign known as the Northern Expedition (1926–1927). Having defeated the warlords in south and central China by military force, Chiang was able to secure the nominal allegiance of the warlords in the North. In 1927, Chiang turned on the CPC and relentlessly chased the CPC armies and its leaders from their bases in southern and eastern China. In 1934, driven from their mountain bases such as the Chinese Soviet Republic, the CPC forces embarked on the Long March across China's most desolate terrain to the northwest, where they established a guerrilla base at Yan'an in Shaanxi Province. During the Long March, the communists reorganized under a new leader, Mao Zedong (Mao Tse-tung).
The bitter struggle between the KMT and the CPC continued, openly or clandestinely, through the 14-year-long Japanese occupation of various parts of the country (1931–1945). The two Chinese parties nominally formed a united front to oppose the Japanese in 1937, during the Second Sino-Japanese War (1937–1945), which became a part of World War II. Japanese forces committed numerous war atrocities against the civilian population, including biological warfare (see Unit 731) and the Three Alls Policy (Sankō Sakusen), the three alls being: "Kill All, Burn All and Loot All".
Following the defeat of Japan in 1945, the war between the Nationalist government forces and the CPC resumed, after failed attempts at reconciliation and a negotiated settlement. By 1949, the CPC had established control over most of the country (see Chinese Civil War). Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against the Japanese. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese Nationalism. During the civil war both the Nationalists and Communists carried out mass atrocities, with millions of non-combatants killed by both sides. These included deaths from forced conscription and massacres. When the Nationalist government forces were defeated by CPC forces in mainland China in 1949, the Nationalist government retreated to Taiwan with its forces, along with Chiang and most of the KMT leadership and a large number of their supporters; the Nationalist government had taken effective control of Taiwan at the end of WWII as part of the overall Japanese surrender, when Japanese troops in Taiwan surrendered to Republic of China troops.
Major combat in the Chinese Civil War ended in 1949 with Kuomintang (KMT) pulling out of the mainland, with the government relocating to Taipei and maintaining control only over a few islands. The Communist Party of China was left in control of mainland China. On 1 October 1949, Mao Zedong proclaimed the People's Republic of China. "Communist China" and "Red China" were two common names for the PRC.
The PRC was shaped by a series of campaigns and five-year plans. The economic and social plan known as the Great Leap Forward caused an estimated 45 million deaths. Mao's government carried out mass executions of landowners, instituted collectivisation and implemented the Laogai camp system. Execution, deaths from forced labor and other atrocities resulted in millions of deaths under Mao. In 1966 Mao and his allies launched the Cultural Revolution, which continued until Mao's death a decade later. The Cultural Revolution, motivated by power struggles within the Party and a fear of the Soviet Union, led to a major upheaval in Chinese society.
In 1972, at the peak of the Sino-Soviet split, Mao and Zhou Enlai met US president Richard Nixon in Beijing to establish relations with the United States. In the same year, the PRC was admitted to the United Nations in place of the Republic of China, with permanent membership of the Security Council.
A power struggle followed Mao's death in 1976. The Gang of Four were arrested and blamed for the excesses of the Cultural Revolution, marking the end of a turbulent political era in China. Deng Xiaoping outmaneuvered Mao's anointed successor chairman Hua Guofeng, and gradually emerged as the de facto leader over the next few years.
Deng Xiaoping was the Paramount Leader of China from 1978 to 1992, although he never became the head of the party or state, and his influence within the Party led the country to significant economic reforms. The Communist Party subsequently loosened governmental control over citizens' personal lives and the communes were disbanded with many peasants receiving multiple land leases, which greatly increased incentives and agricultural production. This turn of events marked China's transition from a planned economy to a mixed economy with an increasingly open market environment, a system termed by some as "market socialism", and officially by the Communist Party of China as "Socialism with Chinese characteristics". The PRC adopted its current constitution on 4 December 1982.
In 1989 the death of former general secretary Hu Yaobang helped to spark the Tiananmen Square protests of that year, during which students and others campaigned for several months, speaking out against corruption and in favour of greater political reform, including democratic rights and freedom of speech. However, they were eventually put down on 4 June when PLA troops and vehicles entered and forcibly cleared the square, with many fatalities. This event was widely reported, and brought worldwide condemnation and sanctions against the government. A filmed incident involving the "tank man" was seen worldwide.
CPC general secretary and PRC President Jiang Zemin and PRC Premier Zhu Rongji, both former mayors of Shanghai, led the post-Tiananmen PRC in the 1990s. During Jiang and Zhu's ten years of administration, the PRC's economic performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. The country formally joined the World Trade Organization in 2001.
Although the PRC needs economic growth to spur its development, the government began to worry that rapid economic growth was degrading the country's resources and environment. Another concern is that certain sectors of society are not sufficiently benefiting from the PRC's economic development; one example of this is the wide gap between urban and rural areas. As a result, under former CPC general secretary and President Hu Jintao and Premier Wen Jiabao, the PRC initiated policies to address issues of equitable distribution of resources, but the outcome was not known as of 2014. More than 40 million farmers were displaced from their land, usually for economic development, contributing to 87,000 demonstrations and riots across China in 2005. For much of the PRC's population, living standards improved very substantially and freedom increased, but political controls remained tight and rural areas poor.
A diving regulator is a pressure regulator that reduces pressurized breathing gas to ambient pressure and delivers it to the diver. The gas may be air or one of a variety of specially blended breathing gases. The gas may be supplied from a scuba cylinder carried by the diver or via a hose from a compressor or high pressure storage cylinders at the surface in surface-supplied diving. A gas pressure regulator has one or more valves in series which reduce pressure from the source, and use the downstream pressure as feedback to control the rate of flow and thereby the delivered pressure, lowering the pressure at each stage.
Diving regulator: first and second stages, low-pressure inflator hose, and submersible pressure gauge.
Other names: demand valve.
Uses: reduces pressurized breathing gas to ambient pressure and delivers it to the diver.
Inventors: Manuel Théodore Guillaumet (1838), Benoît Rouquayrol (1860).
Related items: lightweight demand helmet.
The terms "regulator" and "demand valve" are often used interchangeably, but a demand valve is a regulator that delivers gas only while the diver is inhaling and reduces the gas pressure to ambient. In single hose regulators, the demand valve is the second stage, which is either held in the diver's mouth by a mouthpiece or attached to the full-face mask or helmet. In twin hose regulators the demand valve is included in the body of the regulator which is usually attached directly to the cylinder valve or manifold outlet.
A pressure reduction regulator is used to control the delivery pressure of the gas supplied to a free-flow helmet, in which the flow is continuous, to maintain the downstream pressure which is provided by the ambient pressure of the exhaust and the flow resistance of the delivery system - mainly the umbilical - and not influenced by the breathing of the diver. Gas reclaim systems use a third kind of regulator to control the flow of exhaled gas to the return hose. Rebreather systems may also use regulators to control the flow of fresh gas, and demand valves, known as automatic diluent valves, to maintain the volume in the breathing loop during descent.
The performance of a regulator is measured by the cracking pressure and work of breathing, and the capacity to deliver breathing gas at peak inspiratory flow rate at high ambient pressures without excessive pressure drop. For some applications the capacity to deliver high flow rates at low ambient temperatures without jamming due to freezing is important.
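The work of breathing mentioned above can be approximated numerically as the integral of the pressure drop across the valve over the inhaled volume. The following is a minimal, hypothetical sketch: the function name, the sampled pressure-drop profile, and the tidal volume are illustrative assumptions, not measured regulator data.

```python
# Hedged sketch: inspiratory work of breathing as the integral of the
# pressure difference across the demand valve over the inhaled volume.
# All figures below are illustrative assumptions, not measured data.

def work_of_breathing(pressure_drops_pa, tidal_volume_m3):
    """Approximate inspiratory work (joules) by trapezoidal integration
    of the pressure drop (Pa), sampled at equal volume increments."""
    n = len(pressure_drops_pa)
    dv = tidal_volume_m3 / (n - 1)  # volume step between samples
    work = 0.0
    for i in range(n - 1):
        work += 0.5 * (pressure_drops_pa[i] + pressure_drops_pa[i + 1]) * dv
    return work

# Illustrative pressure-drop profile over a 1.5 L breath:
# cracking at ~250 Pa, peaking at ~600 Pa mid-breath.
profile = [250, 450, 600, 500, 300]
print(round(work_of_breathing(profile, 0.0015), 4), "J")
```

A lower cracking pressure and a flatter pressure-drop profile both reduce this integral, which is why they are the usual figures of merit for regulator performance.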
The diving regulator is a mechanism which reduces the pressure of the supply of breathing gas and provides it to the diver at approximately ambient pressure. The gas may be supplied on demand, when the diver inhales, or as a constant flow past the diver inside the helmet or mask, from which the diver uses what is necessary while the remainder goes to waste.
The gas may be provided directly to the diver, or to a rebreather circuit, to make up for used gas and volume changes due to depth variations. Gas supply may be from a high-pressure scuba cylinder carried by the diver, or from a surface supply through a hose connected to a compressor or storage system.
Both free-flow and demand regulators use mechanical feedback of the downstream pressure to control the opening of a valve which controls gas flow from the upstream, high-pressure side, to the downstream, low-pressure side of each stage. Flow capacity must be sufficient to allow the downstream pressure to be maintained at maximum demand, and sensitivity must be appropriate to deliver maximum required flow rate with a small variation in downstream pressure, and for a large variation in supply pressure. Open circuit scuba regulators must also deliver against a variable ambient pressure. They must be robust and reliable, as they are life-support equipment which must function in a relatively hostile environment (sea water).
Diving regulators use mechanically operated valves. In most cases there is ambient pressure feedback to both first and second stage, except where this is avoided to allow constant mass flow through an orifice in a rebreather, which requires a constant upstream pressure.
Open circuit demand valve
A demand valve detects when the diver starts inhaling and supplies the diver with a breath of gas at ambient pressure. This is done by a mechanical system linking a pressure differential sensor (diaphragm) to a valve which is opened to an extent proportional to the displacement of the diaphragm. The pressure difference between the inside of the mouthpiece and the ambient pressure outside the diaphragm required to open the valve is known as the cracking pressure. This cracking pressure difference is usually negative, but may be slightly positive on a positive pressure regulator (a regulator that maintains a pressure inside the mouthpiece, mask or helmet which is slightly greater than the ambient pressure). Once the valve has opened, gas flow should continue at the smallest stable pressure difference reasonably practicable while the diver inhales, and should stop as soon as gas flow stops. Several mechanisms have been devised to provide this function, some of them extremely simple and robust, and others somewhat more complex but more sensitive to small pressure changes.
The demand valve has a chamber, which in normal use contains breathing gas at ambient pressure. A valve which supplies medium pressure gas can vent into the chamber. Either a mouthpiece or a full-face mask is connected to the chamber for the diver to breathe from. The mouthpiece can be direct coupled or connected by a flexible low-pressure hose. On one side of the chamber is a flexible diaphragm to control the operation of the valve. The diaphragm is protected by a cover with holes or slits through which outside water can enter freely.
When the diver starts to inhale, the removal of gas from the chamber lowers the pressure inside it, and the external water pressure moves the diaphragm inwards, operating a lever. This lifts the valve off its seat, releasing gas into the chamber. The inter-stage gas, at about 8 to 10 bar (120 to 150 psi) over ambient pressure, expands through the valve orifice as its pressure is reduced to ambient, supplying the diver with more gas to breathe. When the diver stops inhaling, the chamber fills until the external pressure is balanced, the diaphragm returns to its rest position, and the lever releases the valve to be closed by the valve spring, stopping the gas flow.
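The inter-stage pressures involved can be illustrated with a short sketch. The 1-bar-per-10-m seawater gradient is a standard approximation; the 9 bar offset is an assumed mid-range value from the 8 to 10 bar quoted above, and a depth-compensated first stage (one that tracks ambient pressure) is assumed.

```python
# Hedged sketch: ambient and inter-stage pressures for a depth-compensated
# first stage. Seawater adds roughly 1 bar per 10 m of depth; the 9 bar
# offset is an illustrative mid-range value, not a specification.

SURFACE_BAR = 1.0        # atmospheric pressure at sea level, bar
BAR_PER_METRE = 0.1      # approximate hydrostatic gradient in seawater
INTERSTAGE_OFFSET = 9.0  # bar over ambient (assumed, within 8-10 bar)

def ambient_pressure(depth_m):
    """Absolute ambient pressure (bar) at a given depth in seawater."""
    return SURFACE_BAR + BAR_PER_METRE * depth_m

def interstage_pressure(depth_m):
    # A depth-compensated first stage tracks ambient pressure,
    # keeping the offset constant at any depth.
    return ambient_pressure(depth_m) + INTERSTAGE_OFFSET

for depth in (0, 10, 30):
    print(depth, "m:", ambient_pressure(depth), "bar ambient,",
          interstage_pressure(depth), "bar inter-stage")
```

Because the offset rides on ambient pressure, the second stage sees the same supply margin at every depth, which is what keeps the cracking behaviour consistent.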
When the diver exhales, one-way valves (made from a flexible air-tight material) flex outwards under the pressure of the exhalation, letting gas escape from the chamber. They close, making a seal, when the exhalation stops and the pressure inside the chamber reduces to ambient pressure.
The vast majority of demand valves are open circuit, which means that the exhaled gas is discharged into the surrounding environment and lost. Reclaim valves can be fitted to helmets to allow the used gas to be returned to the surface for reuse after removing the carbon dioxide and making up the oxygen. This process, referred to as "push-pull", is technologically complex and expensive and is only used for deep commercial diving on heliox mixtures, where the saving on helium compensates for the expense and complications of the system, and for diving in contaminated water, where the gas is not reclaimed, but the system reduces the risk of contaminated water leaking into the helmet through an exhaust valve.
Open circuit free-flow regulator
These are generally used in surface supply diving with free-flow masks and helmets. They are usually a large high-flow rated industrial gas regulator that is manually controlled at the gas panel on the surface to the pressure required to provide the desired flow rate to the diver. Free flow is not normally used on scuba equipment as the high gas flow rates are inefficient and wasteful.
Constant flow scuba
In constant-flow regulators the first stage is a pressure regulator providing a constant reduced pressure, and the second stage is a plain on/off valve. These are the earliest type of breathing set flow control. The diver must open and close the supply valve to regulate flow. Constant flow valves in an open circuit breathing set consume gas less economically than demand valve regulators because gas flows even when it is not needed. Before 1939, diving and industrial open circuit breathing sets with constant-flow regulators were designed by Le Prieur, but did not get into general use due to very short dive duration. Design complications resulted from the need to put the second-stage flow control valve where it could be easily operated by the diver.
The cost of breathing gas containing a high fraction of helium is a significant part of the cost of deep diving operations, and can be reduced by recovering the breathing gas for recycling. A reclaim helmet is provided with a return line in the diver's umbilical, and exhaled gas is discharged to this hose through a reclaim regulator, which ensures that gas pressure in the helmet cannot fall below the ambient pressure. The gas is processed at the surface in the helium reclaim system by filtering, scrubbing and boosting into storage cylinders until needed. The oxygen content may be adjusted when appropriate. The same principle is used in built-in breathing systems used to vent oxygen-rich treatment gases from a hyperbaric chamber, though those gases are generally not reclaimed. A diverter valve is provided to allow the diver to manually switch to open circuit if the reclaim valve malfunctions, and an underpressure flood valve allows water to enter the helmet to avoid a squeeze if the reclaim valve fails suddenly, allowing the diver time to switch to open circuit without injury.
Reclaim regulators are also sometimes used for hazmat diving to reduce the risk of backflow through the exhaust valves into the helmet. In this application there would not be an underpressure flood valve, but the pressure differences and the squeeze risk are relatively low.
Rebreather systems used for diving recycle most of the breathing gas, but are not based on a demand valve system for their primary function. Instead, the breathing loop is carried by the diver and remains at ambient pressure while in use. Regulators used in scuba rebreathers are described below.
The automatic diluent valve (ADV) is used in a rebreather to add gas to the loop to compensate automatically for volume reduction due to pressure increase with greater depth or to make up gas lost from the system by the diver exhaling through the nose while clearing the mask or as a method of flushing the loop. They are often provided with a purge button to allow manual flushing of the loop. The ADV is virtually identical in construction and function to the open circuit demand valve, but does not have an exhaust valve. Some passive semi-closed circuit rebreathers use the ADV to add gas to the loop to compensate for a portion of the gas discharged automatically during the breathing cycle as a way of maintaining a suitable oxygen concentration.
The bailout valve (BOV) is an open circuit demand valve built into a rebreather mouthpiece or other part of the breathing loop. It can be isolated while the diver is using the rebreather to recycle breathing gas and opened while at the same time isolating the breathing loop when a problem causes the diver to bail out onto open circuit. The main distinguishing feature of the BOV is that the same mouthpiece is used for open and closed-circuit, and the diver does not have to shut the Dive/Surface valve, remove it from their mouth, and find and insert the bailout demand valve in order to bail out onto open circuit. Although costly, this reduction in critical steps makes the integrated BOV a significant safety advantage.
Constant mass flow addition valves are used to supply a constant mass flow of fresh gas to an active type semi-closed rebreather to replenish the gas used by the diver and to maintain an approximately constant composition of the loop mix. Two main types are used: the fixed orifice and the adjustable orifice (usually a needle valve). The constant mass flow valve is usually based on a gas regulator that is isolated from the ambient pressure so that it provides an absolute pressure regulated output (not compensated for ambient pressure). This limits the depth range in which constant mass flow is possible through the orifice, but provides a relatively predictable gas mixture in the breathing loop. An over-pressure relief valve in the first stage is used to protect the output hose. Unlike most other diving regulators, these do not control the downstream pressure, but they do regulate the flow rate.
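The absolute-pressure-regulated output matters because constant mass flow depends on the orifice staying choked: for a diatomic ideal gas, flow through the orifice is sonic, and the mass flow rate depends only on the upstream pressure, as long as the downstream (ambient) pressure stays below about 0.528 times the absolute supply pressure. A minimal sketch of the resulting depth limit, assuming a hypothetical 12 bar absolute interstage pressure:

```python
CRITICAL_RATIO = 0.528  # downstream/upstream ratio below which flow is choked
                        # (ideal diatomic gas, gamma = 1.4)

def max_choked_depth(supply_abs_bar: float) -> float:
    """Deepest point (m) at which an absolute-pressure-regulated orifice
    still delivers a constant mass flow (10 m of seawater ~ 1 bar)."""
    max_ambient_bar = supply_abs_bar * CRITICAL_RATIO
    return max(0.0, (max_ambient_bar - 1.0) * 10.0)

# With an assumed 12 bar absolute supply, flow stays choked to just over 53 m:
limit = max_choked_depth(12.0)
```

Below that depth the ambient pressure begins to influence the flow through the orifice, and the loop mixture becomes less predictable, which is the depth-range limitation the text describes.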
Manual and electronically controlled addition valves are used on manual and electronically controlled closed circuit rebreathers (mCCR, eCCR) to add oxygen to the loop to maintain set-point. A manually or electronically controlled valve is used to release oxygen from the outlet of a standard scuba regulator first stage into the breathing loop. An over-pressure relief valve on the first stage is necessary to protect the hose. Strictly speaking, these are not pressure regulators, they are flow control valves.
The first recorded demand valve was invented in 1838 in France and forgotten within a few years; another workable demand valve was not invented until 1860. On November 14, 1838, Dr. Manuel Théodore Guillaumet of Argentan, Normandy, France, filed a patent for a twin-hose demand regulator; the diver was supplied with air through pipes from the surface to a back-mounted demand valve and from there to a mouthpiece. The exhaled gas was vented to the side of the head through a second hose. The apparatus was demonstrated to and investigated by a committee of the French Academy of Sciences.
On June 19, 1838, in London, William Edward Newton filed a patent (no. 7695: "Diving apparatus") for a diaphragm-actuated, twin-hose demand valve for divers. However, it is believed that Mr. Newton was merely filing a patent on behalf of Dr. Guillaumet.
In 1860 a mining engineer from Espalion (France), Benoît Rouquayrol, invented a demand valve with an iron air reservoir to let miners breathe in flooded mines. He called his invention régulateur ('regulator'). In 1864 Rouquayrol met the French Imperial Navy officer Auguste Denayrouze and they worked together to adapt Rouquayrol's regulator to diving. The Rouquayrol-Denayrouze apparatus was mass-produced, with some interruptions, from 1864 to 1965. In 1865 it was adopted as standard equipment by the French Imperial Navy, but it was never entirely accepted by French divers because of its lack of safety and autonomy.
In 1926 Maurice Fernez and Yves Le Prieur patented a hand-controlled constant flow regulator (not a demand valve), which used a full-face mask (the air escaping from the mask at constant flow).
In 1937 and 1942 the French inventor Georges Commeinhes, from Alsace, patented a diving demand valve supplied with air from two gas cylinders through a full-face mask. Commeinhes died in 1944 during the liberation of Strasbourg and his invention was soon forgotten. The Commeinhes demand valve was an adaptation of the Rouquayrol-Denayrouze mechanism, but not as compact as the later Cousteau-Gagnan apparatus.
It was not until December 1942 that the demand valve was developed into the form which gained widespread acceptance. This came about after French naval officer Jacques-Yves Cousteau and engineer Émile Gagnan met for the first time in Paris. Gagnan, employed at Air Liquide, had miniaturized and adapted a Rouquayrol-Denayrouze regulator to feed gas generators following severe fuel restrictions due to the German occupation of France; Cousteau suggested it be adapted for diving, the purpose for which it had originally been designed in 1864.
The single hose regulator, with a mouth-held demand valve supplied with intermediate pressure gas from the cylinder valve mounted first stage, was invented by Australian Ted Eldred in the early 1950s in response to patent restrictions and stock shortages of the Cousteau-Gagnan apparatus in Australia. In France, Bronnec & Gauthier took out a patent for a single hose regulator in 1955, later produced as the Cristal Explorer. Over time, the convenience and performance of improved single hose regulators would make them the industry standard.:7 Performance continues to be improved by small increments, and adaptations have been applied to rebreather technology.
The single hose regulator was later adapted for surface supplied diving in lightweight helmets and full-face masks in the tradition of the Rouquayrol-Denayrouze equipment to economise on gas usage. By 1969 Kirby-Morgan had developed a full-face mask - the KMB-8 Bandmask - using a single hose regulator. This was developed into the Kirby-Morgan SuperLite-17B by 1976.
Secondary (octopus) demand valves, submersible pressure gauges and low pressure inflator hoses were later added to the first stage.
In 1994 a reclaim system was developed in a joint project by Kirby-Morgan and Divex to recover expensive helium mixes during deep operations.
Mechanism and function
The parts of a regulator are described below as the major functional groups, in downstream order following the gas flow from the cylinder to its final use, together with accessories that are not part of the primary functional components but are commonly found on contemporary regulators. Some historically interesting models and components are described in a later section.
Single-hose two-stage open-circuit demand regulators
Most contemporary diving regulators are single-hose two-stage regulators. They consist of a first-stage regulator, and a second-stage demand valve. An intermediate-pressure hose connects these components to transfer air, and allows relative movement within the constraints of hose length and flexibility. Other intermediate-pressure hoses supply optional additional components.
The first stage of the regulator is mounted to the cylinder valve or manifold via one of the standard connectors (Yoke or DIN). It reduces cylinder pressure to an intermediate pressure, usually about 8 to 11 bars (120 to 160 psi) higher than the ambient pressure, also called interstage pressure, medium pressure or low pressure. The breathing gas is then supplied to the second stage through a hose.:17-20
A balanced regulator first stage automatically keeps a constant pressure difference between the interstage pressure and the ambient pressure even as the tank pressure drops with consumption. The balanced regulator design allows the first stage orifice to be as large as needed without incurring performance degradation as a result of changing tank pressure.:17-20
The first stage generally has several low-pressure outlets (ports) for second-stage regulators, BCD inflators and other equipment; and one or more high-pressure outlets, which allow a submersible pressure gauge (SPG) or gas-integrated diving computer to read the cylinder pressure. The valve may be designed so that one low-pressure port is designated "Reg" for the primary second stage regulator, because that port allows a higher flow rate to give less breathing effort at maximum demand. A small number of manufacturers have produced regulators with a larger than standard hose and port diameter for this primary outlet.:50
The mechanism inside the first stage can be of the diaphragm type or the piston type. Both types can be balanced or unbalanced. Unbalanced regulators have the cylinder pressure pushing the first stage upstream valve closed, which is opposed by the intermediate stage pressure and a spring. As cylinder pressure falls the closing force is less, so the regulated pressure increases at lower tank pressure. To keep this pressure rise within acceptable limits the high-pressure orifice size is limited, but this decreases the total flow capacity of the regulator. A balanced regulator keeps about the same ease of breathing at all depths and pressures, by using the cylinder pressure to also indirectly oppose the opening of the first stage valve.:17-20
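The drift described above can be illustrated with a toy force balance. In this sketch the spring and the ambient pressure act to open the valve over the piston area, while the interstage pressure and the cylinder pressure (acting on the much smaller seat area) act to close it. All dimensions and the spring force are hypothetical values chosen only to give plausible numbers, not taken from any real regulator.

```python
def unbalanced_interstage(ambient_bar: float, cylinder_bar: float,
                          spring_force_n: float = 110.0,
                          piston_area_cm2: float = 1.0,
                          seat_area_cm2: float = 0.01) -> float:
    """Equilibrium interstage pressure (bar, absolute) of a simple
    unbalanced first stage.  Opening forces: spring + ambient pressure
    on the piston.  Closing forces: interstage pressure on the piston +
    cylinder pressure on the seat.  1 bar on 1 cm^2 exerts 10 N."""
    spring_bar = spring_force_n / (10.0 * piston_area_cm2)
    return ambient_bar + spring_bar - cylinder_bar * (seat_area_cm2 / piston_area_cm2)

# At the surface, interstage pressure rises as the cylinder empties:
full = unbalanced_interstage(1.0, 200.0)        # ~10 bar absolute
near_empty = unbalanced_interstage(1.0, 50.0)   # ~11.5 bar absolute
```

Note that the ambient term keeps the interstage pressure a fixed amount above ambient at any depth, which is the compensation most first stages provide; only the cylinder-pressure term, acting on the seat area, causes the unbalanced drift, which is why balancing removes it and why a small orifice keeps it within limits.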
Piston-type first stage
Some components of piston-type first stages are easier to manufacture and have a simpler design than the diaphragm type. They may need more careful maintenance because some internal moving parts may be exposed to water and any contaminants in the water.:9-13
The piston in the first stage is rigid and acts directly on the seat of the valve. When the diver inhales from the second stage valve, the pressure in the intermediate pressure chamber drops; this causes the piston to lift off the stationary valve seat as the piston slides into the intermediate pressure chamber. The now open valve permits high pressure gas to flow into the medium pressure chamber until the pressure in the chamber has risen enough to push the piston back into its original position against the seat, closing the valve.:9-13
Diaphragm-type first stage
Diaphragm-type first stages are more complex and have more components than the piston type. Their design makes them particularly suited to cold water diving and to working in saltwater and water containing a high degree of suspended particles, silt, or other contaminating materials, since the only parts exposed to the water are the valve opening spring and the diaphragm; all other parts are sealed off from the environment. In some cases the diaphragm and spring are also sealed from the environment.:9-13
The diaphragm is a flexible cover to the medium (intermediate) pressure chamber. When the diver consumes gas from the second stage, the pressure falls in the medium pressure chamber and the diaphragm deforms inwards, pushing against the valve lifter. This opens the high pressure valve, permitting gas to flow past the valve seat into the medium-pressure chamber. When the diver stops inhaling, pressure in the medium pressure chamber rises and the diaphragm returns to its neutral flat position, no longer pressing on the valve lifter, which shuts off the flow until the next breath is taken.:9-13
If a regulator stage has an architecture that compensates for a change of upstream pressure on the moving parts of the valve, so that a change in supply pressure does not affect the force required to open the valve, the stage is described as balanced. Upstream and downstream valves, first and second stages, and diaphragm and piston operation can all be balanced or unbalanced, and a full description of a stage will specify which of these options apply. For example, a regulator may have a balanced piston first stage with a balanced downstream second stage. Both balanced and unbalanced piston first stages are fairly common, but most diaphragm first stages are balanced. Balancing the first stage has a greater overall effect on the performance of a regulator, as the variation in supply pressure from the cylinder is much greater than the variation in interstage pressure, even with an unbalanced first stage. However, the second stage operates on a very small pressure differential and is more sensitive to variations in supply pressure. Most top range regulators have at least one balanced stage, but it is not clear that balancing both stages makes a noticeable difference to performance.:17–20
Connection of first-stage regulator to the cylinder valve or cylinder manifold
In an open circuit scuba set, the first-stage of the regulator has an A-clamp, also known as a yoke or international connection, or a DIN fitting to connect it to the pillar valve of the diving cylinder. There are also European standards for scuba regulator connectors for gases other than air.
Yoke valves (sometimes called A-clamps from their shape) are the most popular regulator connection in North America and several other countries. They clamp the high pressure inlet opening of the regulator against the outlet opening of the cylinder valve, and are sealed by an O-ring in a groove in the contact face of the cylinder valve. The user screws the clamp in place finger-tight to hold the metal surfaces of cylinder valve and regulator first stage in contact, compressing the o-ring between the radial faces of valve and regulator. When the valve is opened, gas pressure presses the O-ring against the outer cylindrical surface of the groove, completing the seal. The diver must take care not to screw the yoke down too tightly, or it may prove impossible to remove without tools. Conversely, failing to tighten sufficiently can lead to O-ring extrusion under pressure and a major loss of breathing gas. This can be a serious problem if it happens when the diver is at depth. Yoke fittings are rated up to a maximum of 240 bar working pressure.
The DIN fitting is a type of direct screw-in connection to the cylinder. The DIN system is less common worldwide, but has the advantage of withstanding greater pressure, up to 300 bar, allowing use of high-pressure steel cylinders. They are less susceptible to blowing the O-ring seal if banged against something while in use. DIN fittings are the standard in much of Europe and are available in most countries. The DIN fitting is considered more secure and therefore safer by many technical divers.:117
Adapters are available enabling a DIN first-stage to be attached to a cylinder with a yoke fitting valve, and for a yoke first stage to be attached to a DIN cylinder valve.:118
Most cylinder valves are currently of the K-valve type, which is a simple manually operated screw-down on-off valve. In the mid-1960s, J-valves were widespread. J-valves contain a spring-operated valve that restricts or shuts off flow when tank pressure falls to 300-500 psi, causing breathing resistance and warning the diver that he or she is dangerously low on air. The reserve air is released by pulling a reserve lever on the valve. J-valves fell out of favor with the introduction of pressure gauges, which allow divers to keep track of their air underwater, especially as the valve type is vulnerable to accidental release of reserve air and increases the cost and servicing of the valve. J-valves are occasionally still used when work is done in visibility so poor that the pressure gauge cannot be seen, even with a light.:167–178:Sec 7.2.2
A medium (intermediate) pressure hose is used to carry breathing gas (typically at between 8 and 10 bar above ambient) from the first stage regulator to the second stage, or demand valve, which is held in the mouth by the diver, or attached to the full face mask or diving helmet.:88 The standard interstage hose is 30 inches (76 cm) long, but 40 inches (100 cm) hoses are standard for octopus regulators and 7 feet (2.1 m) hoses are popular for technical diving, particularly for cave and wreck penetration where space constraints may make it necessary to swim in single file while sharing gas. Other lengths are also available. Most low pressure ports are threaded 3/8"UNF, but a few regulators were marketed with one 1/2"UNF port intended for the primary demand valve. High pressure ports are almost exclusively 7/16"UNF. The difference in thread sizes makes it impossible to connect a hose to the wrong pressure port.:112
Second-stage or demand valve
In an upstream valve, the moving part works against the pressure and opens in the opposite direction to the flow of gas. They are often made as tilt-valves, which are mechanically extremely simple and reliable, but are not amenable to fine tuning.:14
If the first stage leaks and the inter-stage over-pressurizes, the second stage downstream valve opens automatically, resulting in a "freeflow". With an upstream valve, the result of over-pressurization may be a blocked valve. This will stop the supply of breathing gas and possibly result in a ruptured hose or the failure of another second stage valve, such as one that inflates a buoyancy device. When a second stage upstream tilt valve is used, a relief valve should be included by the manufacturer on the first stage regulator to protect the intermediate hose.:9
If a shut-off valve is fitted between the first and second stages, as is found on scuba bailout systems used for commercial diving and in some technical diving configurations, the demand valve will normally be isolated and unable to function as a relief valve. In this case an overpressure valve must be fitted to the first stage if it does not already have one. As very few contemporary (2016) scuba regulator first stages are factory fitted with overpressure relief valves, they are available as aftermarket accessories which can be screwed into any low pressure port available on the first stage.
Most modern demand valves use a downstream rather than an upstream valve mechanism. In a downstream valve, the moving part of the valve opens in the same direction as the flow of gas and is kept closed by a spring. The usual form of downstream valve is a spring-loaded poppet with a hard elastomer seat sealing against an adjustable metal "crown" around the inlet orifice. The poppet is lifted away from the crown by a lever operated by the diaphragm.:13–15 Two patterns are commonly used. One is the classic push-pull arrangement, where the actuating lever goes onto the end of the valve shaft and is held on by a nut. Any deflection of the lever is converted to an axial pull on the valve shaft, lifting the seat off the crown and allowing air to flow.:13 The other is the barrel poppet arrangement, where the poppet is enclosed in a tube which crosses the regulator body and the lever operates through slots in the sides of the tube. The far end of the tube is accessible from the side of the casing and a spring tension adjustment screw may be fitted for limited diver control of the cracking pressure. This arrangement also allows relatively simple pressure balancing of the second stage.:14,18
A downstream valve will function as an over-pressure valve when the inter-stage pressure is raised sufficiently to overcome the spring pre-load. If the first stage leaks and the inter-stage over-pressurizes, the second stage downstream valve opens automatically. If the leak is bad this could result in a "freeflow", but a slow leak will generally cause intermittent "popping" of the DV, as the pressure is released and slowly builds up again.
Some demand valves use a small, sensitive pilot valve to control the opening of the main valve. The Poseidon Jetstream and Xstream and Oceanic Omega second stages are examples of this technology. They can produce very high flow rates for a small pressure differential, and particularly for a relatively small cracking pressure. They are generally more complicated and expensive to service.:16
Exhaust valves are necessary to prevent the diver inhaling water, and to allow a negative pressure difference to be induced over the diaphragm to control the demand valve. The exhaust valves should operate at a very small pressure difference, and cause as little resistance to flow as reasonably possible, without being cumbersome and bulky. Elastomer mushroom valves serve the purpose adequately,:108 though duckbill valves were also common in twin-hose regulators. Where it is important to avoid leaks back into the regulator, such as when diving in contaminated water, a system of two sets of valves in series can reduce the risk of contamination. A more complex option which can be used for surface supplied helmets, is to use a reclaim exhaust system which uses a separate flow regulator to control the exhaust which is returned to the surface in a dedicated hose in the umbilical.:109
The exhaust manifold (exhaust tee, exhaust cover, whiskers) is the ducting that protects the exhaust valve(s) and diverts the exhaled air to the sides so that it does not bubble up in the diver's face and obscure the view. This is not necessary for twin hose regulators as they exhaust air behind the shoulders.:33
A standard fitting on single-hose second stages, both mouth-held and built into a full-face mask or demand helmet, is the purge button, which allows the diver to manually deflect the diaphragm to open the valve and cause air to flow into the casing. This is usually used to purge the casing or full-face mask of water if it has flooded. This will often happen if the second stage is dropped or removed from the mouth while underwater.:108 It is either a separate part mounted in the front cover, or the cover itself may be made flexible and serve as the purge button. Depressing the purge button presses against the diaphragm directly over the lever of the demand valve, and this movement of the lever opens the valve to release air through the regulator. The tongue may be used to block the mouthpiece during purging to prevent water or other matter in the regulator from being blown into the diver's airway by the air blast. This is particularly important when purging after vomiting through the regulator.
The purge button is also used by recreational divers to inflate a delayed surface marker buoy or lifting bag. Any time that the purge button is operated, the diver must be aware of the potential for a freeflow and be ready to deal with it.
User adjustable flow modifiers
It may be desirable for the diver to have some control over the flow characteristics of the demand valve. The usual adjustable aspects are the cracking pressure and the feedback from flow rate to internal pressure of the second stage housing. The inter-stage pressure of surface supplied demand breathing apparatus is controlled manually at the control panel, and does not automatically adjust to the ambient pressure in the way that most scuba first stages do, as that compensation relies on feedback of ambient pressure to the first stage. This has the effect that the cracking pressure of a surface supplied demand valve will vary slightly with depth, so some manufacturers provide a manual adjustment knob on the side of the demand valve housing to adjust spring pressure on the downstream valve, which controls the cracking pressure. The knob is known to commercial divers as the "dial-a-breath". A similar adjustment is provided on some high-end scuba demand valves, to allow the user to manually tune the breathing effort at depth.:17
Scuba demand valves which are set to breathe lightly (low cracking pressure, and low work of breathing) may tend to free-flow relatively easily, particularly if the gas flow in the housing has been designed to assist in holding the valve open by reducing the internal pressure. The cracking pressure of a sensitive demand valve is often less than the hydrostatic pressure difference between the inside of an air-filled housing and the water below the diaphragm when the mouthpiece is pointed upwards. To avoid excessive loss of gas due to inadvertent activation of the valve when the DV is out of the diver's mouth, some second stages have a desensitising mechanism which causes some back-pressure in the housing, by impeding the flow or directing it against the inside of the diaphragm.:21
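The sensitivity problem comes down to a few millibar of hydrostatics: a finely tuned cracking pressure is comparable to the water column between the diaphragm and the air space in the housing. A rough check, using seawater density with a hypothetical geometry and cracking pressure:

```python
RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def hydrostatic_mbar(height_cm: float) -> float:
    """Pressure difference (mbar) across a water column of given height."""
    pascals = RHO_SEAWATER * G * (height_cm / 100.0)
    return pascals / 100.0  # 1 mbar = 100 Pa

# If the mouthpiece points up so the diaphragm sits ~3 cm below the air
# space (illustrative geometry), the static offset on the diaphragm is
# about 3 mbar, more than an assumed 2.5 mbar cracking pressure:
offset_mbar = hydrostatic_mbar(3.0)
cracking_mbar = 2.5  # assumed, for illustration
valve_opens = offset_mbar > cracking_mbar
```

So a sensitively tuned valve left face-up will tend to free-flow from geometry alone, which is what the desensitising mechanisms are there to counteract.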
Twin-hose open-circuit demand scuba regulators
The "twin", "double" or "two" hose configuration of scuba demand valve was the first in general use. This type of regulator has two large bore corrugated breathing tubes. One tube supplies air from the regulator to the mouthpiece, and the second tube delivers the exhaled gas to a point where the ambient pressure is identical to that at the demand diaphragm, where it is released through a rubber duck-bill one-way valve and comes out of the holes in the cover. Advantages of this type of regulator are that the bubbles leave the regulator behind the diver's head, increasing visibility, reducing noise and producing less load on the diver's mouth. They remain popular with some underwater photographers, and Aqualung brought out an updated version of the Mistral in 2005.
In Cousteau's original aqualung prototype, there was no exhaust hose, and the exhaled air exited through a one-way valve at the mouthpiece. It worked out of water, but when he tested the aqualung in the river Marne, air free-flowed from the regulator before it could be breathed whenever the mouthpiece was above the regulator. After that, he had the second breathing tube fitted. Even with both tubes fitted, raising the mouthpiece above the regulator increases the delivered pressure of gas, and lowering the mouthpiece reduces the delivered pressure and increases breathing resistance. As a result, many aqualung divers, when snorkeling on the surface to save air while reaching the dive site, put the loop of hoses under an arm to prevent the mouthpiece from floating up and causing a free flow.
Ideally the delivered pressure is equal to the resting pressure in the diver's lungs, as this is what human lungs are adapted to breathe. With a twin hose regulator behind the diver at shoulder level, the delivered pressure changes with diver orientation. If the diver rolls on his or her back, the released air pressure is higher than in the lungs. Divers learned to restrict flow by using their tongue to close the mouthpiece. When the cylinder pressure was running low and air demand effort rising, a roll to the right side made breathing easier. The mouthpiece can be purged by lifting it above the regulator (shallower), which will cause a free flow.:341
Twin hose regulators have been superseded almost completely by single hose regulators, and have been obsolete for most diving since the 1980s.
The original twin-hose regulators usually had no ports for accessories, though some had a high pressure port for a submersible pressure gauge. Some later models have one or more low-pressure ports between the stages, which can be used to supply direct feeds for suit or BC inflation and/or a secondary single hose demand valve, and a high pressure port for a submersible pressure gauge. The new Mistral is an exception, as it is based on the Aqualung Titan first stage, which has the usual set of ports.
The twin-hose arrangement with a mouthpiece or full-face mask is common in rebreathers, but as part of the breathing loop, not as part of a regulator. The associated demand valve comprising the bail-out valve is a single hose regulator.
The mechanism of the twin hose regulator is packaged in a usually circular metal housing mounted on the cylinder valve behind the diver's neck. The demand valve component of a two-stage twin hose regulator is thus mounted in the same housing as the first stage regulator, and in order to prevent free-flow, the exhaust valve must be located at the same depth as the diaphragm, and the only reliable place to do this is in the same housing. The air flows through a pair of corrugated rubber hoses to and from the mouthpiece. The supply hose is connected to one side of the regulator body and supplies air to the mouthpiece through a non-return valve, and the exhaled air is returned to the regulator housing on the outside of the diaphragm, also through a non-return valve on the other side of the mouthpiece and usually through another non-return exhaust valve in the regulator housing - often a "duckbill" type.
A non-return valve is usually fitted to the breathing hoses where they connect to the mouthpiece. This prevents any water that gets into the mouthpiece from going into the inhalation hose, and ensures that once it is blown into the exhalation hose that it cannot flow back. This slightly increases the flow resistance of air, but makes the regulator easier to clear.:341
Some early twin hose regulators were of single-stage design. The single stage functioned in a way similar to the second stage of two-stage demand valves, but was connected directly to the cylinder valve and reduced high pressure air from the cylinder directly to ambient pressure on demand. This could be done by using a longer lever and larger diameter diaphragm to control the valve movement, but there was a tendency for cracking pressure, and thus work of breathing, to vary as the cylinder pressure dropped.
Twin-hose without visible regulator valve (fictional)
This type is mentioned here because it is very familiar from comics and other drawings: a wrongly drawn twin-hose two-cylinder aqualung, with one wide hose coming out of the top of each cylinder to the mouthpiece and no apparent regulator valve. It appears much more often than a correctly drawn twin-hose regulator, and often on combat frogmen: see Frogman#Errors about frogmen found in public media. Such a set would not work in the real world.
In Europe, EN 250: 2014 – Respiratory Equipment – Open Circuit Self - Contained Compressed Air Diving Apparatus – Requirements, Testing and Marking defines the minimum requirements for breathing performance of regulators, and BS 8547:2016 defines requirements for demand regulators to be used at depths exceeding 50 m. EN 13949: 2003 – Respiratory Equipment – Open Circuit Self-Contained Diving Apparatus for use with Compressed Nitrox and Oxygen – Requirements, Testing, Marking defines requirements for regulators to be used with raised levels of oxygen.
EN 15333 – 1: 2008 COR 2009 – Respiratory Equipment – Open-Circuit Umbilical Supplied Compressed Gas Diving Apparatus – Part 1: Demand Apparatus. and EN 15333 – 2: 2009 – Respiratory Equipment – Open-Circuit Umbilical Supplied Compressed Gas Diving Apparatus – Part 2: Free Flow Apparatus are the relevant standards for surface supplied equipment.
EN 14143: 2013 – Respiratory Equipment – Self-Contained Re-Breathing Diving Apparatus defines minimum requirements for rebreathers.
The original Cousteau twin-hose diving regulators could deliver about 140 litres of air per minute at continuous flow and that was officially thought to be adequate, but divers sometimes needed a higher instantaneous rate and had to learn not to "beat the lung", i.e. to breathe faster than the regulator could supply. Between 1948 and 1952 Ted Eldred designed his Porpoise single hose regulator to supply up to 300 liters per minute.
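The mismatch between an adequate average supply and the ability to "beat the lung" comes from the shape of the breathing cycle: inhalation occupies only about half the cycle, so for roughly sinusoidal breathing the instantaneous peak flow is about pi times the minute ventilation. A sketch with an assumed working respiratory minute volume (the RMV figures are illustrative, not sourced):

```python
import math

def peak_inspiratory_flow(rmv_l_min: float) -> float:
    """Peak instantaneous flow (L/min) for sinusoidal breathing.
    Inhalation fills half the cycle, so the mean inspiratory flow is
    2 x RMV, and the peak of a half-sine is pi/2 times its mean,
    giving a peak of pi x RMV."""
    return math.pi * rmv_l_min

# A hard-working diver at an assumed RMV of 50 L/min:
peak = peak_inspiratory_flow(50)  # ~157 L/min, already above 140 L/min
```

On this rough model, a regulator limited to 140 L/min keeps up with moderate work (RMV around 40 L/min) but falls behind the peak demand of heavy work, which is consistent with divers having to learn to pace their breathing on the early units.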
Various breathing machines have been developed and used for assessment of breathing apparatus performance. ANSTI Test Systems Ltd (UK) has developed a testing machine that measures the inhalation and exhalation effort in using a regulator. Publishing results of the performance of regulators in the ANSTI test machine has resulted in big performance improvements.
Malfunctions and failure modes
Most regulator malfunctions involve improper supply of breathing gas or water leaking into the gas supply. There are two failure modes: shut-off, where the regulator stops delivery, which is extremely rare; and free-flow, where the delivery will not stop and can quickly exhaust a scuba supply.
- Inlet filter blockage
- The inlet to the cylinder valve may be protected by a sintered filter, and the inlet to the first stage is usually protected by a filter, both to prevent corrosion products or other contaminants in the cylinder from getting into the fine-tolerance gaps in the moving parts of the first and second stage and jamming them, either open or closed. If enough dirt gets into these filters they can become sufficiently blocked to reduce performance, but this is unlikely to result in a total or sudden catastrophic failure.
- Free-flow
- Either of the stages may get stuck in the open position, causing a continuous flow of gas from the regulator known as a free-flow. This can be triggered by a range of causes, some easily remedied, others not. Possible causes include incorrect interstage pressure setting, incorrect second stage valve spring tension, a damaged or sticking valve poppet, a damaged valve seat, valve freezing, a wrong sensitivity setting at the surface and, in Poseidon servo-assisted second stages, low interstage pressure.
- Sticking valves
- The moving parts in first and second stages have fine tolerances in places, and some designs are more susceptible to contaminants causing friction between the moving parts. This may increase cracking pressure, reduce flow rate, increase work of breathing or induce free-flow, depending on which part is affected.
- Valve freezing
- In cold conditions the cooling effect of gas expanding through a valve orifice may cool either the first or second stage sufficiently to cause ice to form. External icing may lock up the spring and exposed moving parts of the first or second stage, and freezing of moisture in the air may cause icing on internal surfaces. Either may cause the moving parts of the affected stage to jam open or closed. If the valve freezes closed, it will usually defrost quite rapidly and start working again, and may freeze open soon after. Freezing open is more of a problem, as the valve will then free-flow and cool further in a positive feedback loop, which can normally only be stopped by closing the cylinder valve and waiting for the ice to thaw. If not stopped, the cylinder will rapidly be emptied.
- Intermediate pressure creep
- This is a slow leak of the first stage valve. The effect is for the interstage pressure to rise until either the next breath is drawn, or the pressure exerts more force on the second stage valve than can be resisted by the spring, and the valve opens briefly, often with a popping sound, to relieve the pressure. The frequency of the popping pressure relief depends on the flow in the second stage, the back pressure, the second stage spring tension and the magnitude of the leak. It may range from occasional loud pops to a constant hiss. Underwater the second stage may be damped by the water and the loud pops may become an intermittent or constant stream of bubbles. This is not usually a catastrophic failure mode, but should be fixed as it will get worse, and it wastes gas.
- Gas leaks
- Air leaks can be caused by burst or leaky hoses, defective or blown o-rings (particularly in yoke connectors), loose connections, and several of the previously listed malfunctions. Low pressure inflation hoses may fail to connect properly, or their non-return valves may leak. A burst low pressure hose will usually lose gas faster than a burst high pressure hose, as HP hoses usually have a flow restriction orifice in the fitting that screws into the port::185 the SPG does not need high flow, and a slower pressure rise in the gauge hose is less likely to overload the gauge, while the hose to a second stage must provide a high peak flow rate to minimize work of breathing. A relatively common o-ring failure occurs when the yoke clamp seal extrudes due to insufficient clamp force or elastic deformation of the clamp by impact with the environment.
- Wet breathing
- Wet breathing is caused by water getting into the regulator and compromising breathing comfort and safety. Water can leak into the second stage body through damaged soft parts like torn mouthpieces, damaged exhaust valves and perforated diaphragms, through cracked housings, or through poorly sealing or fouled exhaust valves.
- Excessive work of breathing
- High work of breathing can be caused by high inhalation resistance, high exhalation resistance or both. High inhalation resistance can be caused by high cracking pressure, low interstage pressure, friction in second stage valve moving parts, excessive spring loading, or sub-optimum valve design. It can usually be improved by servicing and tuning, but some regulators cannot deliver high flow at great depths without high work of breathing. High exhalation resistance is usually due to a problem with the exhaust valves, which can stick, stiffen due to deterioration of the materials, or may have an insufficient flow passage area for the service.
- Juddering, shuddering and moaning
- This is caused by an irregular and unstable flow from the second stage. It may be caused by slight positive feedback between the flow rate in the second stage body and diaphragm deflection opening the valve, which is not sufficient to cause free-flow, but enough to cause the system to hunt. It is more common on high-performance regulators which are tuned for maximum flow and minimum work of breathing, particularly out of the water, and often reduces or resolves when the regulator is immersed and the ambient water damps the movement of the diaphragm and other moving parts. Desensitising the second stage by closing venturi assists or increasing the valve spring pressure often stops this problem. Juddering may also be caused by excessive but irregular friction of valve moving parts.
- Physical damage to the housing or components
- Damage such as cracked housings, torn or dislodged mouthpieces, or damaged exhaust fairings can cause gas flow problems or leaks, or can make the regulator uncomfortable to use or difficult to breathe from.
As gas leaves the cylinder it decreases in pressure in the first stage, becoming very cold due to adiabatic expansion. Where the ambient water temperature is less than 5 °C any water in contact with the regulator may freeze. If this ice jams the diaphragm or piston spring, preventing the valve closing, a free-flow may ensue that can empty a full cylinder within a minute or two, and the free-flow causes further cooling in a positive feedback loop. Generally the water that freezes is in the ambient pressure chamber around a spring that keeps the valve open and not moisture in the breathing gas from the cylinder, but that is also possible if the air is not adequately filtered. The modern trend of using plastics to replace metal components in regulators encourages freezing because it insulates the inside of a cold regulator from the warmer surrounding water. Some regulators are provided with heat exchange fins in areas where cooling due to air expansion is a problem, such as around the second stage valve seat on some regulators.
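The scale of this cooling can be estimated from the Joule-Thomson effect. The coefficient used below (roughly 0.22 K per bar for air near room temperature) and the interstage pressure are illustrative assumptions; real regulators also exchange heat with the surrounding water, so actual gas temperatures are less extreme than this upper-bound sketch suggests:

```python
# Rough upper-bound estimate of gas cooling across the first stage,
# using a constant Joule-Thomson coefficient for air (~0.22 K/bar near
# room temperature -- an assumed, approximate value; the coefficient
# actually varies with temperature and pressure).

JT_AIR_K_PER_BAR = 0.22

def jt_temperature_drop(p_in_bar, p_out_bar, mu=JT_AIR_K_PER_BAR):
    """Temperature drop of throttled gas, constant-coefficient model."""
    return mu * (p_in_bar - p_out_bar)

# 200 bar cylinder pressure down to ~10 bar interstage pressure:
print(round(jt_temperature_drop(200, 10)))  # 42 (kelvin of cooling)
```

Even a fraction of this drop is enough to freeze water in contact with the regulator when the ambient temperature is already near 5 °C, which is why heat exchange with the surrounding water matters so much.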
Cold water kits can be used to reduce the risk of freezing inside the regulator. Some regulators come with this as standard, and some others can be retrofitted. Environmental sealing of the diaphragm main spring chamber using a soft secondary diaphragm and hydrostatic transmitter:195 or a silicone, alcohol or glycol/water mixture antifreeze liquid in the sealed spring compartment can be used for a diaphragm regulator. Silicone grease in the spring chamber can be used on a piston first stage. The Poseidon Xstream first stage insulates the external spring and spring housing from the rest of the regulator, so that it is less chilled by the expanding air, and provides large slots in the housing so that the spring can be warmed by the water, thus avoiding the problem of freezing up the external spring.
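How quickly a free-flow can exhaust a cylinder follows from simple ideal-gas arithmetic. The cylinder size, pressure, and free-flow rate below are assumed, illustrative figures, not measured regulator data:

```python
# Rough estimate of how fast a free-flow empties a cylinder.
# All figures are illustrative assumptions, ideal-gas approximation.

def free_air_litres(cylinder_volume_l, pressure_bar):
    """Free gas available at the surface for a given cylinder state."""
    return cylinder_volume_l * pressure_bar

def minutes_to_empty(cylinder_volume_l, pressure_bar, flow_l_per_min):
    return free_air_litres(cylinder_volume_l, pressure_bar) / flow_l_per_min

# A 12 L cylinder at 200 bar, free-flowing at an assumed 1000 L/min
# (full free-flow rates can far exceed normal breathing rates):
print(minutes_to_empty(12, 200, 1000))  # 2.4 (minutes)
```

This is consistent with the observation above that a frozen-open valve can empty a full cylinder within a minute or two.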
Pressure relief valve
A downstream demand valve serves as a fail safe for over-pressurization: if a first stage with a demand valve malfunctions and jams in the open position, the demand valve will be over-pressurized and will "free flow". Although it presents the diver with an imminent "out of air" crisis, this failure mode lets gas escape directly into the water without inflating buoyancy devices. The effect of unintentional inflation might be to carry the diver quickly to the surface causing the various injuries that can result from an over-fast ascent. There are circumstances where regulators are connected to inflatable equipment such as a rebreather's breathing bag, a buoyancy compensator, or a drysuit, but without the need for demand valves. Examples of this are argon suit inflation sets and "off board" or secondary diluent cylinders for closed-circuit rebreathers. When no demand valve is connected to a regulator, it should be equipped with a pressure relief valve, unless it has a built in over pressure valve, so that over-pressurization does not inflate any buoyancy devices connected to the regulator.
A diving regulator has one or two 7/16" UNF high pressure ports upstream of all pressure-reducing valves to monitor the gas pressure remaining in the diving cylinder, provided that the cylinder valve is open. There are several types of contents gauge.
Standard submersible pressure gauge
The standard arrangement has a high pressure hose leading to a submersible pressure gauge (SPG) (also called a contents gauge). This is an analog mechanical gauge that is connected to the first stage by a high pressure hose. It displays with a pointer moving over a dial, usually about 50 millimetres (2.0 in) diameter. Sometimes they are mounted in a console, which is a plastic or rubber case that holds the air pressure gauge and other instruments such as a depth gauge, dive computer and/or compass.
Button gauges
These are coin-sized analog pressure gauges directly mounted to a high-pressure port on the first stage. They are compact, have no dangling hoses, and few points of failure. They are generally not used on back mounted cylinders because the diver cannot see them there when underwater. They are sometimes used on side slung stage cylinders. Due to their small size, it can be difficult to read the gauge to a resolution of less than 20 bars (300 psi).
Air integrated computers
Some dive computers are designed to measure, display, and monitor pressure in the diving cylinder. This can be very beneficial to the diver, but if the dive computer fails the diver can no longer monitor his or her gas reserves. Most divers using a gas-integrated computer will also have a standard air pressure gauge. The computer is either connected to the first stage by a high pressure hose, or has two parts - the pressure transducer on the first stage and the display at the wrist or console, which communicate by wireless data transmission link; the signals are encoded to eliminate the risk of one diver's computer picking up a signal from another diver's transducer or radio interference from other sources.
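The remaining-air-time figure such computers display can be approximated from cylinder pressure, a reserve margin, and the diver's consumption rate. This is a simplified sketch with assumed values; real computers use filtered, depth-compensated consumption data and proprietary reserve logic:

```python
# Simplified remaining-air-time estimate, as an air-integrated computer
# might compute it. All inputs below are assumed illustrative values.

def remaining_minutes(pressure_bar, reserve_bar, cylinder_l,
                      sac_l_per_min, depth_m):
    """Minutes until the reserve pressure is reached at constant depth.

    sac_l_per_min: surface air consumption rate (litres/min at 1 bar)
    Ambient pressure in bar is approximated as 1 + depth/10 (seawater).
    """
    usable_litres = (pressure_bar - reserve_bar) * cylinder_l
    ambient_bar = 1 + depth_m / 10
    return usable_litres / (sac_l_per_min * ambient_bar)

# 12 L cylinder at 180 bar, keeping a 50 bar reserve,
# 20 L/min surface consumption, at 20 m depth:
print(remaining_minutes(180, 50, 12, 20, 20))  # 26.0 (minutes)
```

The depth term is why the same gauge pressure lasts a third as long at 20 m as at the surface: each breath draws three times the surface-equivalent volume.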
Secondary demand valve (Octopus)
As a nearly universal standard practice in modern recreational diving, the typical single-hose regulator has a spare demand valve fitted for emergency use by the diver's buddy, typically referred to as the octopus, because of the extra hose, or as the secondary demand valve. The octopus was invented by Dave Woodward at UNEXSO around 1965-6 to support the free dive attempts of Jacques Mayol. Woodward believed that having the safety divers carry two second stages would be a safer and more practical approach than buddy breathing in the event of an emergency. The medium pressure hose on the octopus is usually longer than the medium pressure hose on the primary demand valve that the diver uses, and the demand valve and/or hose may be colored yellow to aid in locating it in an emergency. The secondary regulator should be clipped to the diver's harness in a position where it can be easily seen and reached by both the diver and the potential sharer of air. The longer hose is used for convenience when sharing air, so that the divers are not forced to stay in an awkward position relative to each other. Technical divers frequently extend this feature and use a 5-foot or 7-foot hose, which allows divers to swim in single file while sharing air, which may be necessary in restricted spaces inside wrecks or caves.
The secondary demand valve can be a hybrid of a demand valve and a buoyancy compensator inflation valve. Both types are sometimes called alternate air sources. When the secondary demand valve is integrated with the buoyancy compensator inflation valve, the inflation valve hose is short (usually just long enough to reach mid-chest), so in the event of a diver running out of air, the diver with air remaining gives his or her primary second stage to the out-of-air diver and switches to the inflation valve unit himself.
A demand valve on a regulator connected to a separate independent diving cylinder would also be called an alternate air source and also a redundant air source, as it is totally independent of the primary air source.
Mouthpiece
The mouthpiece is a part that the user grips in the mouth to make a watertight seal. It is a short flattened-oval tube that goes in between the lips, with a curved flange that fits between the lips and the teeth and gums. On the inner ends of the flange there are two tabs with enlarged ends, which are gripped between the teeth. Most recreational diving regulators are fitted with a mouthpiece. In twin-hose regulators and rebreathers, "mouthpiece" may refer to the whole assembly between the two flexible tubes. A mouthpiece prevents clear speech, so a full-face mask is preferred where voice communication is needed.
In a few models of scuba regulator the mouthpiece also has an outer rubber flange that fits outside the lips and extends into two straps that fasten together behind the neck.:184 This helps to keep the mouthpiece in place if the user's jaws go slack through unconsciousness or distraction. The mouthpiece safety flange may also be a separate component.:154 The attached neck strap also allows the diver to keep the regulator hanging under the chin where it is protected and ready for use. Recent mouthpieces do not usually include an external flange, but the practice of using a neck strap has been revived by technical divers who use a bungee or surgical rubber "necklace" which can come off the mouthpiece without damage if pulled firmly.
The original mouthpieces were usually made from natural rubber and could cause an allergic reaction in some divers. This has been overcome by the use of hypo-allergenic synthetic elastomers such as silicone rubbers.
Full-face mask or helmet
This is stretching the concept of accessory a bit, as it would be equally valid to call the regulator an accessory of the full face mask or helmet, but the two items are closely connected and generally found in use together.
Most full face masks and probably most diving helmets currently in use are open circuit demand systems, using a demand valve (in some cases more than one) and supplied from a scuba regulator or a surface supply umbilical from a surface supply panel using a surface supply regulator to control the pressure of primary and reserve air or other breathing gas.
Lightweight demand diving helmets are almost always surface supplied, but full face masks are used equally appropriately with scuba open circuit, scuba closed circuit (rebreathers), and surface supplied open circuit.
The demand valve is usually firmly attached to the helmet or mask, but there are a few models of full face mask that have removable demand valves with quick connections allowing them to be exchanged under water. These include the Dräger Panorama and Kirby-Morgan 48 Supermask.
Buoyancy compensator and dry suit inflation hoses
Hoses may be fitted to low pressure ports of the regulator first stage to provide gas for inflating buoyancy compensators and/or dry suits. These hoses usually have a quick-connector end with an automatically sealing valve which blocks flow if the hose is disconnected from the buoyancy compensator or suit.:50 There are two basic styles of connector, which are not compatible with each other. The high flow rate CEJN 221 fitting has a larger bore and allows gas flow at a fast enough rate for use as a connector to a demand valve. This is sometimes seen in a combination BC inflator/deflator mechanism with integrated secondary DV (octopus), such as in the AIR II unit from Scubapro. The low flow rate Seatec connector is more common and is the industry standard for BC inflator connectors, and is also popular on dry suits, as the limited flow rate reduces the risk of a blow-up if the valve sticks open. The high flow rate connector is used by some manufacturers on dry suits.
Various minor accessories are available to fit these hose connectors. These include interstage pressure gauges, which are used to troubleshoot and tune the regulator (not for use underwater), noisemakers, used to attract attention underwater and on the surface, and valves for inflating tires and inflatable boat floats, making the air in a scuba cylinder available for other purposes.
Instrument consoles
Also called combo consoles, these are usually hard rubber or tough plastic moldings which enclose the SPG and have mounting sockets for other diver instrumentation, such as decompression computers, underwater compass, timer and/or depth gauge and occasionally a small plastic slate on which notes can be written either before or during the dive. These instruments would otherwise be carried somewhere else such as strapped to the wrist or forearm or in a pocket; they are only regulator accessories for convenience of transport and access, and are at greater risk of damage during handling.
Recreational scuba nitrox service
Standard air regulators are considered to be suitable for nitrox mixtures containing 40% or less oxygen by volume, both by NOAA, which conducted extensive testing to verify this, and by most recreational diving agencies.:25
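The depth limit implied by an oxygen fraction follows from the standard maximum operating depth (MOD) formula, with ambient pressure approximated as 1 bar plus 1 bar per 10 m of seawater. The 1.4 bar oxygen partial pressure limit used below is a commonly quoted recreational value, an assumption for illustration rather than something specified in this article:

```python
# Maximum operating depth (MOD) for a nitrox mix, standard formula:
#   MOD (m) = (pO2_max / FO2 - 1) * 10
# assuming ~1 bar of ambient pressure per 10 m of seawater.
# The 1.4 bar pO2 limit is an assumed, commonly used recreational value.

def mod_metres(fo2, po2_max=1.4):
    """Depth at which the oxygen partial pressure reaches po2_max."""
    return (po2_max / fo2 - 1) * 10

print(round(mod_metres(0.32)))  # 34 (metres, for EAN32)
print(round(mod_metres(0.40)))  # 25 (metres, for EAN40)
```

The steep drop in MOD as the oxygen fraction rises is one reason the 40% figure is a convenient boundary for standard-air-service regulators: richer mixes are used shallow, often for decompression, where oxygen-clean equipment is required anyway.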
Surface supplied nitrox service
When surface supplied equipment is used the diver does not have the option of simply taking out the DV and switching to an independent system, and gas switching may be done during a dive, including use of pure oxygen for accelerated decompression. To reduce the risk of confusion or getting the system contaminated, surface supplied systems may be required to be oxygen clean for all services except straight air diving.
Oxygen service
Regulators to be used with pure oxygen and nitrox mixtures containing more than 40% oxygen by volume should use oxygen compatible components and lubricants, and be cleaned for oxygen service.
Helium service
Helium is an exceptionally nonreactive gas, and breathing gases containing helium do not require any special cleaning or lubricants. However, as helium is generally used for deep dives, it will normally be used with high performance regulators suitable for the depth.
Exotic examples of historical interest
Ohgushi's Peerless Respirator
Ohgushi's Peerless Respirator was invented in 1916 by Riichi Watanabi and the blacksmith Kinzo Ohgushi, and was used with either surface supplied air or a 150 bar steel scuba cylinder holding 1000 litres of free air. The valve supplied air to a mask over the diver's nose and eyes, and the demand valve was operated by the diver's teeth. Gas flow was proportional to bite force and duration. The breathing apparatus was used successfully for fishing and salvage work, and by the military Japanese Underwater Unit until the end of the Pacific War.
First stage with integral reserve valve
A number of manufacturers produced integral reserve regulators in 1961 and 1962 with reasonable market acceptance. These regulators provided a lever operated mechanical reserve valve that restricted air flow when the pressure was below 500 psi. Alerted to having a low gas supply, the diver would pull a rod to open the reserve valve and surface using the remaining gas. This feature provides reserve capacity on cylinders with plain valves. With this arrangement the reserve rod must also be transferred to the cylinder in use.:166,167
Demone regulators
These unusual regulators were designed by Robert J. Dempster and made at his factory in Illinois, USA, from 1961 to 1965. The Demone Mark I and Demone Mark II are both two-stage regulators. The second-stage looks like the mouthpiece of a twin-hose regulator but has a small diaphragm on the front. The second-stage valve is inside the mouthpiece tube. The exhaled air goes into a corrugated coaxial exhaust hose which surrounds the intermediate-pressure hose and discharges about 60% of the way back to the first-stage to keep the bubbles away from the diver's face. Near the mouthpiece is a one-way valve to let outside water into the exhaust hose to avoid free flow if the diaphragm (at the mouth) is below the open end of the exhaust hose. The Mark I has hoses only on one side, and the Mark II has twinned intermediate-pressure hoses, each with its own coaxial exhaust hose and second stage, one assembly on each side of the diver's head, but with both second stages in the same mouthpiece housing and operated by the same diaphragm.:93–100 This version has no large visible regulator.
Normalair breathing apparatus
This system is unusual in that it used a single stage single hose demand valve in a full-face mask. The high pressure supply hose routes over the shoulder, but from an inverted cylinder, which allows the user to easily reach the valve.:249–253
Twin-hose with regulator on chest
In this unusual configuration the cylinder(s) are on the diver's back and are connected by an intermediate-pressure hose to a twin-hose regulator on the diver's chest.
- A design described in Practical Mechanics magazine in January 1955 as a home-made aqualung with a first-stage on the cylinder top leading through an intermediate-pressure hose to a large round second-stage (a converted Calor Gas regulator) on the diver's chest connected to the diver's mouthpiece by a twin-hose loop.
- An early Australian design called the Lawson Lung was made in Sydney by a group of enthusiasts who were unable to get Aqua-lungs due to limited supply. It was based on the patented Cousteau-Gagnan design, but modified to use available components. The regulators were made and tested at John Lawson's jewellery factory in Greenwich, North Sydney. Only 12 were made, and they had to be mounted on the chest to achieve acceptable performance.
Single stage pendulum open circuit scuba
For a few years in the mid-1950s, Dräger made the Dräger Delfin II, their first scuba regulator (it was marketed as the Barakuda (now IAC) in the USA). This was a single stage single hose "pendulum" regulator with only one ambient pressure (corrugated) hose: the exhaled air went back down the hose to the cylinder-mounted regulator and was released to the outside through a one-way valve inside the casing. The end of the flexible tube was connected to the mouthpiece by a short quarter-circle of hard tube. The two-way hose would have caused dead space similar to a rebreather with a pendulum system.
Propulsive power from the stored energy
The concept of a diving regulator in which the energy released as the air expands from cylinder pressure to the surrounding pressure as the diver inhales is used to power a propeller has been patented, but no product ever appeared on the market.
Full-face mask regulator
There have been some cases of a single-hose regulator final stage built into a full-face mask so that the mask's big front window, in conjunction with a flexible rubber seal joining it to its frame, functioned as a large and sensitive regulator diaphragm:
- Several versions of the Le Prieur breathing set. Yves Le Prieur first patented with Maurice Fernez, in 1926, a breathing apparatus using a mouthpiece, but as of 1933 he removed the mouthpiece and included a circular full-face mask in all following patents (like 1937, 1946 or 1947).
- In 1934 René Commeinhes, from Alsace (France), adapted a Rouquayrol-Denayrouze apparatus for the use of firefighters. With new 1937 and 1942 patents (GC37 and GC42), his son Georges adapted this invention to underwater breathing by means of a single hose connected to a full-face mask.
- Captain Trevor Hampton invented independently from Le Prieur a similar regulator-mask in the 1950s and submitted it for patent. The Royal Navy requisitioned the patent, but found no use for it and eventually released it. By then, the technology had advanced and it was too late to make this regulator-mask in bulk for sale.
In 1956 and for some years afterwards in Britain, factory-made aqualungs were very expensive, and many aqualungs of this type were made by sport divers in diving clubs' workshops, using miscellaneous industrial and war-surplus parts. One necessary raw material was a Calor Gas bottled butane gas regulator, whose 1950s version was like an aqualung regulator's second stage but passed gas all the time because its diaphragm was spring-loaded; conversion included changing the spring and making several big holes in the wet-side casing. The cylinder was often an ex-RAF pilot's oxygen cylinder; some of these cylinders were called tadpoles from their shape.
In at least one version of the Russian twin-hose aqualung, the regulator did not have an A-clamp but screwed into a large socket on the cylinder manifold; that manifold was thin, and meandered somewhat. It had two cylinders and a pressure gauge. There is suspicion that these Russian aqualungs started as a factory-made, improved descendant of an aqualung home-made by British sport divers, obtained unofficially by a Russian and taken to Russia.
Manufacturers and their brands
- Air Liquide: La Spirotechnique, Apeks and Aqua Lung
- American Underwater Products (ROMI Enterprises, of San Leandro, Calif.): Aeris, Hollis Gear and Oceanic
- Atomic Aquatics
- Dive Rite
- HTM Sports: Dacor and Mares
- Poseidon Diving Systems AB
- NOAA Diving Program (U.S.) (28 Feb 2001). Joiner, James T, ed. NOAA Diving Manual, Diving for Science and Technology (4th ed.). Silver Spring, Maryland: National Oceanic and Atmospheric Administration, Office of Oceanic and Atmospheric Research, National Undersea Research Program. ISBN 978-0-941332-70-5. CD-ROM prepared and distributed by the National Technical Information Service (NTIS) in partnership with NOAA and Best Publishing Company.
- Barsky, Steven; Neuman, Tom (2003). Investigating Recreational and Commercial Diving Accidents. Santa Barbara, California: Hammerhead Press. ISBN 0-9674305-3-4.
- Harlow, Vance (1999). "1 How a regulator works". Scuba regulator maintenance and repair. Warner, New Hampshire: Airspeed Press. pp. 1–26. ISBN 0-9678873-0-5.
- Harlow, Vance (1999). Scuba regulator maintenance and repair. Warner, New Hampshire: Airspeed Press. ISBN 0-9678873-0-5.
- Barsky, Steven (2007). Diving in High-Risk Environments (4th ed.). Ventura, California: Hammerhead Press. ISBN 978-0-9674305-7-7.
- République Française. Ministère du Commerce et de l'Industrie. Direction de la Propriété Industrielle. Brevet d'Invention Gr. 6. - Cl. 3. No. 768.083.
- Cresswell, Jeremy (2 June 2008). "Helium costs climb as diver demand soars". energyvoice.com. Retrieved 15 November 2016.
- Crawford, J (2016). "Section 8.5 Bulk gas storage". Offshore Installation Practice (revised ed.). Oxford, UK: Butterworth-Heinemann. ISBN 9781483163192.
- Staff. "Ultrajewel 601 'Dirty Harry'". divingheritage.com. Diving Heritage. Retrieved 15 November 2016.
- Staff. "Closed Circuit Rebreather Mouthpieces-DSV/BOV (Dive/Surface Valve/Bail Out Valve)". www.divenet.com. Fullerton, California: Divematics USA, Inc. Retrieved 16 November 2016.
- Académie des Sciences (16 September 1839). "Mécanique appliquée -- Rapport sur une cloche à plongeur inventée par M. Guillaumet (Applied mechanics—Report on a diving bell invented by Mr. Guillaumet)". Comptes rendus hebdomadaires des séances de l'Académie des Sciences (in French). Paris: Gauthier-Villars. 9: 363–366. Retrieved 26 September 2016.
- Perrier, Alain (2008). 250 Réponses aux questions du plongeur curieux (in French). Aix-en-Provence, France: Éditions du Gerfaut. p. 45. ISBN 9782351910337.
- Bevan, John (1990). "The First Demand Valve?" (PDF). SPUMS Journal. South Pacific Underwater Medicine Society. 20 (4): 239–240.
- "le scaphandre autonome". Archived from the original on 30 October 2012. Retrieved 17 November 2016.
A similar patent was filed in 1838 by William Newton in England. There is every reason to think that, owing to the long delays in filing patents in France, Guillaumet asked Newton to register his patent in England, where the procedure was faster, while securing for himself the exclusive rights to exploit the patent filed by Newton. Note: the illustration of the apparatus in Newton's patent application is identical to that in Guillaumet's patent application; furthermore, Mr. Newton was apparently an employee of the British Office for Patents, who applied for patents on behalf of foreign applicants. Also from the "le scaphandre autonome" website: reconstructed in the twentieth century by the Americans, this regulator worked perfectly; however, although it could undoubtedly have been built in the nineteenth century, the tests planned by the French Navy were never conducted and the apparatus was never sold.
- Dekker, David L. "1860. Benoit Rouquayrol – Auguste Denayrouze". Chronology of Diving in Holland. www.divinghelmet.nl. Retrieved 17 September 2016.
- Bahuet, Eric (19 October 2003). "Rouquayrol Denayrouze". Avec ou sans bulle ? (in French). plongeesout.com. Retrieved 16 November 2016.
- Commandant Le Prieur. Premier Plongée (First Diver). Editions France-Empire 1956
- Tailliez, Philippe (January 1954). Plongées sans câble (in French). Paris: Editions Arthaud. p. 52.
- Musée du Scaphandre website (in French). Espalion, France. Archived from the original on 30 October 2012. Mentions the contributions of several French inventors: Guillaumet, Rouquayrol and Denayrouze, Le Prieur, René and Georges Commheines, Gagnan and Cousteau.
- Bronnec, Jean Armand Louis; Gautier, Raymond Maurice (26 November 1956). Brevet d'Invention No. T126.597 B63b Appareil respiratoire notament pour plongeurs (in French). Paris: Ministere de l'Industrie et du Commerce – via Website of Luca Dibiza.
- Lonsdale, Mark V. (2012). "Evolution of US Navy diving - Significant dates in Navy diving (1823 – 2001)". History of Navy Diving. Northwest Diving History Association. Retrieved 24 November 2016.
- Staff. "Environmental Dry Sealing System". First Stage Technology. Blackburn, United Kingdom: Apeks Marine Equipment. Retrieved 17 November 2016.
- Staff. "KM Over Pressure Relief Valve, Hi-Flow". Products. Santa Maria California: Diving Equipment Company of America (DECA). Retrieved 16 November 2016.
- Brittain, Colin (2004). "Protective clothing, scuba equipment and equipment maintenance". Let's Dive: Sub-Aqua Association Club Diver Manual (2nd ed.). Wigan, UK: Dive Print. p. 35. ISBN 0-9532904-3-3. Retrieved 6 January 2010.
- Brittain, Colin (2004). "Practical diver training". Let's Dive: Sub-Aqua Association Club Diver Manual (2nd ed.). Wigan, UK: Dive Print. p. 48. ISBN 0-9532904-3-3. Retrieved 6 January 2010.[permanent dead link]
- Vintage European Two Hose Regulator Collection
- Staff (16 February 2005). "Aqua Lung Debuts the Comeback of the Double Hose Regulator". Sport Diver. Bonnier corporation. Retrieved 16 May 2017.
- Warren, Steve (November 2015). "The History Boys". Divernet - Gear features. www.divernet.com. Retrieved 16 May 2017.
- Roberts, Fred M. (1963). Basic Scuba. Self-Contained Underwater Breathing Apparatus: Its Operation, Maintenance and Use (Enlarged Second ed.). New York: Van Nostrand Reinhold Co. ISBN 0 442 26824 6.
- Busuttili, Mike; Holbrook, Mike; Ridley, Gordon; Todd, Mike, eds. (1985). "The Aqualung". Sport diving – The British Sub-Aqua Club Diving Manual. London: Stanley Paul & Co Ltd. p. 36. ISBN 0-09-163831-3.
- Examples at ,
- examples in 'Frogman' comic online at
- Staff (August 2014). "Diving Breathing Apparatus" (PDF). Diving Standards. Dublin: Health and Safety Authority. Archived from the original (PDF) on 18 November 2016. Retrieved 18 November 2016.
- Committee PH/4/7 (31 March 2016). BS 8547:2016 - Respiratory equipment. Breathing gas demand regulator used for diving to depths greater than 50 metres. Requirements and test methods. London: British Standards Institute. ISBN 978 0 580 89213 4.
- Ryan, Mark (23 December 2010). "Little known dive history – The world's first single hose regulator". ScubaGadget – Scuba News Service. scubagadget.com. Retrieved 16 May 2017.
- Middleton, JR (1980). "Evaluation of Commercially Available Open Circuit Scuba Regulators". United States Navy Experimental Diving Unit Technical Report. NEDU-2-80. Retrieved 2008-06-12.
- Morson, PD (1987). "Evaluation of Commercially Available Open Circuit Scuba Regulators". United States Navy Experimental Diving Unit Technical Report. NEDU-8-87. Retrieved 2008-06-12.
- Warkander, DE (2007). "Comprehensive Performance Limits for Divers' Underwater Breathing Gear: Consequences of Adopting Diver-Focused Limits". United States Navy Experimental Diving Unit Technical Report. NEDU-TR-07-02. Retrieved 12 June 2008.
- Staff (22 February 1982). "MIL-R-24169 › Regulator, Air Demand, Single Hose, Diver S". US Department of Defence. Retrieved 27 November 2016.
- Reimers, SD (1973). "Performance Characteristics and Basic Design Features of a Breathing Machine for Use to Depths of up to 3000 Feet of Sea Water". United States Navy Experimental Diving Unit. Panama City, Florida: NEDU. NEDU-20-73. Retrieved 12 June 2008.
- Staff (11 June 2006). "The ANSTI Machine: Evaluating A Regulator's Breathing Characteristics". Gear. Winter Park, Florida: Scuba Diving. A Bonnier Corporation Company. Retrieved 15 November 2016.
- Harlow, Vance (1999). "10 Diagnosis". Scuba regulator maintenance and repair. Warner, New Hampshire: Airspeed Press. pp. 155–165. ISBN 0-9678873-0-5.
- Clarke, John (2015). "Authorized for cold-water service: What Divers Should Know About Extreme Cold". ECO Magazine: 20–25. Retrieved 7 March 2015.
- Ward, Mike (9 April 2014). Scuba Regulator Freezing - Chilling Facts & Risks Associated with Cold Water Diving (PDF). DL-Regulator Freeze Research Study (Report). Panama City, Florida: Dive Lab Inc. Retrieved 16 May 2017.
- Staff. "Xstream user manual: English" (PDF). Art. no 4695 Issue 081001-1. Västra Frölunda, Sweden: Poseidon Diving Systems. Archived from the original (PDF) on 4 March 2016. Retrieved 17 November 2016.
- Staff. "Suunto Wireless Tank Pressure Transmitter". Accessories and spare parts. Suunto. Retrieved 27 November 2016.
- Davis, Andy (2011). "How to Tie a Regulator Bungee Necklace". Scuba Tech Philippines. Retrieved 17 August 2017.
- Alexander, JE (1977). "Allergic reactions to mask skirts, regulator mouthpieces, and snorkel mouthpieces". South Pacific Underwater Medicine Society Journal. 7 (2). ISSN 0813-1988. OCLC 16986801. Retrieved 6 July 2008.
- Lombardi, Michael; Hansing, Nicolai; Sutton, Dave (March 2011). "About CEJN Component Parts" (PDF). CEJN - style Offboard Gas Supply Quick - Disconnect Subsystem for Closed - Circuit Rebreathers. diyrebreathers.com. Retrieved 27 November 2016.
- Staff. "Regulations (Standards - 29 CFR) - Commercial Diving Operations - Standard Number: 1910.430 Equipment". www.osha.gov. US Department of Labour. Retrieved 16 May 2017.
- Staff. Key to the treasury of the deep: Ohgushi's Peerless Respirators - Unrivalled in the world (PDF). Tokyo: Tokyo submarine industrial company. Retrieved 21 November 2016. Copy of an original users'manual by the manufacturers.
- Monday, Nyle C (2004). "Behind the Japanese Mask: The Strange Journey of Ohgushi's Peerless Respirator" (PDF). Historical Diver. Goleta ,California: Historical Diving Society U.S.A. 12 (2 Number 39): 25. ISSN 1094-4516. Retrieved 21 November 2016.
- Historical Diving Times, #42, Summer 2007, pp5-7
- Fearon, E. T. "Making an aqualung: How to construct your own underwater breathing apparatus" (PDF). Archived from the original (PDF) on 28 September 2007. Retrieved 29 September 2007.First published in: Newnes Practical Mechanics, January 1955
- Eldred, Tony. "Lawson Lung (Australia)". www.frogmanmuseum.com. Dominique Breheret. Retrieved 16 November 2016.
- "The Lawson Lung". www.divesrap.com. Retrieved 24 August 2017.
- Brown, Mel (2006). "Early Australian Diving – The Lawson Lung". Historical Diving Society Australia-Pacific. Retrieved 24 August 2017.
- Rare Vintage Two Hose Regulators (near end of page)
- Seveke, Lothar. "Dräger PA61/II". Das Alte Taucher (in German). Dresden: Lothar Seveke. Retrieved 16 November 2016.
- Andresen, John H, Jr (4 December 1962). "Propulsion system for underwater divers". Patent grant US3066638 A. Arlington, Virginia: United States Patent and Trademark Office. pp. 1–2. Retrieved 17 November 2016.
- "Archived copy" (PDF). Archived from the original (PDF) on 28 September 2007. Retrieved 29 September 2007.
- Staff. "History". About Aeris. San Leandro, California: American Underwater Products. Retrieved 16 November 2016.
- Staff. "About Hollis". www.hollis.com. San Leandro, California: American Underwater Products. Retrieved 16 November 2016.
- Staff. "Regulators". www.hollis.com. San Leandro, California: American Underwater Products. Retrieved 16 November 2016.
- Staff. "Dive Rite Regulators". Regulators Library. Lake City, Florida: Dive Rite. Retrieved 16 November 2016.
- Staff. "Regulators and gauges". Products. Västra Frölunda, Sweden: Poseidon Diving Systems AB. Archived from the original on 16 November 2016. Retrieved 17 November 2016.
- Staff. "Products: Regulators". www.tusa.com. Long Beach, California: Tabata USA, Inc. Retrieved 17 November 2016.
- Staff. "Regulators". www.zeagle.com. Retrieved 17 November 2016.
|
<urn:uuid:faa0c93a-6441-4135-b2a3-ea173c4e0ba8>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00410.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9162442088127136,
"score": 3.59375,
"token_count": 17730,
"url": "https://en.m.wikipedia.org/wiki/Diving_regulator"
}
|
Temporal range: Late Triassic–Late Cretaceous, 210–66 Ma
[Image: Mounted skeleton of Apatosaurus louisae, Carnegie Museum]
Sauropoda, or the sauropods (sauro- + -pod, "lizard-footed"), are an infraorder of saurischian ("lizard-hipped") dinosaurs. They had very long necks, long tails, small heads (relative to the rest of their body), and four thick, pillar-like legs. They are notable for the enormous sizes attained by some species, and the group includes the largest animals ever to have lived on land. Well-known genera include Brachiosaurus, Diplodocus, Apatosaurus and Brontosaurus.
Sauropods first appeared in the Late Triassic Period, when they somewhat resembled the closely related (and possibly ancestral) group "Prosauropoda". By the Late Jurassic (150 million years ago), sauropods had become widespread (especially the diplodocids and brachiosaurids). By the Late Cretaceous, those groups had mainly been replaced by the titanosaurs, which had a near-global distribution. However, as with all other non-avian dinosaurs alive at the time, the titanosaurs died out in the Cretaceous–Paleogene extinction event. Fossilised remains of sauropods have been found on every continent, including Antarctica.
The name Sauropoda was coined by O.C. Marsh in 1878, and is derived from Greek, meaning "lizard foot". Sauropods are one of the most recognizable groups of dinosaurs, and have become a fixture in popular culture due to their large sizes.
Complete sauropod fossil finds are rare. Many species, especially the largest, are known only from isolated and disarticulated bones. Many near-complete specimens lack heads, tail tips and limbs.
Sauropods were herbivorous (plant-eating), usually quite long-necked quadrupeds (four-legged), often with spatulate (spatula-shaped: broad at the tip, narrow at the neck) teeth. They had tiny heads, massive bodies, and most had long tails. Their hind legs were thick, straight, and powerful, ending in club-like feet with five toes, though only the inner three (or in some cases four) bore claws. Their forelimbs were rather more slender and ended in pillar-like hands built for supporting weight; only the thumb bore a claw. Many illustrations of sauropods in the flesh miss these facts, inaccurately depicting sauropods with hooves capping the claw-less digits of the feet, or multiple claws or hooves on the hands. The proximal caudal vertebrae are extremely diagnostic for sauropods.
The sauropods' most defining characteristic was their size. Even the dwarf sauropods (perhaps 5 to 6 metres, or 20 feet long) were counted among the largest animals in their ecosystem. Their only real competitors in terms of size are the rorquals, such as the blue whale. But, unlike whales, sauropods were primarily terrestrial animals.
Their body structure did not vary as much as that of other dinosaurs, perhaps due to size constraints, but they displayed ample variety. Some, like the diplodocids, possessed tremendously long tails, which they may have been able to crack like a whip as a signal, to deter or injure predators, or to make sonic booms. Supersaurus, at 33 to 34 metres (108 to 112 ft) long, was the longest sauropod known from reasonably complete remains, but others, like the old record holder, Diplodocus, were also extremely long. The holotype (and now lost) vertebra of Amphicoelias fragillimus may have come from an animal 58 metres (190 ft) long; its vertebral column would have been substantially longer than that of the blue whale. However, a study published in 2015 argued that the size estimates of A. fragillimus may have been highly exaggerated. The longest dinosaur known from reasonably complete fossil material is probably Argentinosaurus huinculensis, with length estimates ranging from 25 metres (82 ft) to 39.7 metres (130 ft).
Others, like the brachiosaurids, were extremely tall, with high shoulders and extremely long necks. Sauroposeidon was probably the tallest, reaching about 18 metres (60 ft) high, with the previous record for longest neck being held by Mamenchisaurus. By comparison, the giraffe, the tallest of all living land animals, is only 4.8 to 5.5 metres (16 to 18 ft) tall.
The best evidence indicates that the most massive sauropods were Argentinosaurus (73 metric tons), Puertasaurus (80 to 100 metric tons), Alamosaurus, Paralititan, and Antarctosaurus (69 metric tons). Poor (and now missing) evidence suggested that Bruhathkayosaurus might have weighed over 175 metric tons, but this has been questioned. The weight of Amphicoelias fragillimus was estimated at 122.4 metric tons, but the 2015 research argued that these estimates may have been highly exaggerated. The largest land animal alive today, the savanna elephant, weighs no more than 10.4 metric tons (11.5 short tons).
Among the smallest sauropods were the primitive Ohmdenosaurus (4 m, or 13 ft long), the dwarf titanosaur Magyarosaurus (6 m or 20 ft long), and the dwarf brachiosaurid Europasaurus, which was 6.2 metres long as a fully grown adult. Its small stature was probably the result of insular dwarfism in a population of sauropods isolated on a Late Jurassic island in what is now the Langenberg area of northern Germany. The diplodocoid sauropod Brachytrachelopan was the shortest member of its group because of its unusually short neck. Unlike other sauropods, whose necks could grow to up to four times the length of their backs, the neck of Brachytrachelopan was shorter than its backbone.
On or shortly before 29 March 2017, a sauropod footprint about 1.7 metres (5.6 ft) long was found at Walmadany in the Kimberley Region of Western Australia; it was reported to be the largest yet known.
As massive quadrupeds, sauropods developed specialized graviportal (weight-bearing) limbs. The hind feet were broad, and retained three claws in most species. Particularly unusual compared with other animals were the highly modified front feet (manus). The front feet of sauropods were very dissimilar from those of modern large quadrupeds, such as elephants. Rather than splaying out to the sides to create a wide foot as in elephants, the manus bones of sauropods were arranged in fully vertical columns, with extremely reduced finger bones (though it is not clear if the most primitive sauropods, such as Vulcanodon and Barapasaurus, had such forefeet). The front feet were so modified in eusauropods that individual digits would not have been visible in life.
The arrangement of the forefoot bone (metacarpal) columns in eusauropods was semi-circular, so sauropod forefoot prints are horseshoe-shaped. Unlike elephants, print evidence shows that sauropods lacked any fleshy padding to back the front feet, making them concave. The only claw visible in most sauropods was the distinctive thumb claw (associated with digit I). Almost all sauropods had such a claw, though what purpose it served is unknown. The claw was largest (as well as tall and laterally flattened) in diplodocids, and very small in brachiosaurids, some of which seem to have lost the claw entirely based on trackway evidence.
Titanosaurs may have lost the thumb claw completely (with the exception of early forms, such as Janenschia). Titanosaurs were most unusual among sauropods, as in addition to the external claw, they completely lost the digits of the front foot. Advanced titanosaurs had no digits or digit bones, and walked only on horseshoe-shaped "stumps" made up of the columnar metacarpal bones.
Print evidence from Portugal shows that, in at least some sauropods (probably brachiosaurids), the bottom and sides of the forefoot column was likely covered in small, spiny scales, which left score marks in the prints. In titanosaurs, the ends of the metacarpal bones that contacted the ground were unusually broad and squared-off, and some specimens preserve the remains of soft tissue covering this area, suggesting that the front feet were rimmed with some kind of padding in these species.
Matthew Bonnan has shown that sauropod dinosaur long bones grew isometrically: that is, there was little to no change in shape as juvenile sauropods became gigantic adults. Bonnan suggested that this odd scaling pattern (most vertebrates show significant shape changes in long bones associated with increasing weight support) might be related to a stilt-walker principle (suggested by amateur scientist Jim Schmidt) in which the long legs of adult sauropods allowed them to easily cover great distances without changing their overall mechanics.
Along with other saurischian dinosaurs (such as birds and other theropods), sauropods had a system of air sacs, evidenced by the indentations and hollow cavities these sacs left in most of their vertebrae. Pneumatic, hollow bones are a characteristic feature of all sauropods. These air spaces reduced the weight of the sauropods' massive necks, and the air-sac system in general, by allowing single-direction airflow through stiff lungs, made it possible for the sauropods to get enough oxygen.
The bird-like hollowing of sauropod bones was recognized early in the study of these animals, and, in fact, at least one sauropod specimen found in the 19th century (Ornithopsis) was originally misidentified as a pterosaur (a flying reptile) because of this.
Some sauropods had armor. There were genera with small clubs on their tails, like Shunosaurus, and several titanosaurs, such as Saltasaurus and Ampelosaurus, had small bony osteoderms covering portions of their bodies.
A study by Michael D’Emic and his colleagues from Stony Brook University found that sauropods evolved high tooth replacement rates to keep up with their large appetites. The study suggested that Nigersaurus, for example, replaced each tooth every 14 days, Camarasaurus every 62 days, and Diplodocus every 35 days. The scientists found that qualities of the teeth affected how long a new tooth took to grow: the teeth of Camarasaurus took longer to grow than those of Diplodocus because they were larger.
It was also noted by D'Emic and his team that the differences between the teeth of the sauropods also indicated a difference in diet. Diplodocus ate plants low to the ground and Camarasaurus browsed leaves from top and middle branches. According to the scientists, the specializing of their diets helped the different herbivorous dinosaurs to coexist.
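The replacement intervals above translate directly into per-year figures. A quick sketch (the intervals come from the study as quoted in the text; the yearly rates are simple arithmetic added here for illustration):

```python
# Back-of-the-envelope conversion of the tooth replacement intervals
# reported by D'Emic and colleagues into teeth grown per year.

replacement_interval_days = {
    "Nigersaurus": 14,
    "Diplodocus": 35,
    "Camarasaurus": 62,
}

def teeth_per_position_per_year(interval_days: float) -> float:
    """New teeth grown at a single tooth position over one year."""
    return 365.0 / interval_days

for genus, days in replacement_interval_days.items():
    rate = teeth_per_position_per_year(days)
    print(f"{genus}: about {rate:.0f} new teeth per position per year")
```

The ordering mirrors the diet argument: the fast-wearing, ground-level grazer Nigersaurus cycled teeth several times faster than the mid-canopy browser Camarasaurus.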
Sauropod necks have been found at over 15 metres (50 ft) in length, a full six times longer than the world record giraffe neck. Enabling this were a number of essential physiological features. The dinosaurs' overall large body size and quadrupedal stance provided a stable base to support the neck, and the head evolved to be very small and light, losing the ability to orally process food. By reducing their heads to simple harvesting tools that got plants into the body, sauropods needed less power to lift their heads, and were thus able to develop necks with less dense muscle and connective tissue. This drastically reduced the overall mass of the neck, enabling further elongation.
Sauropods also had a great number of adaptations in their skeletal structure. Some sauropods had as many as 19 cervical vertebrae, whereas almost all mammals are limited to only seven. Additionally, each vertebra was extremely long and had a number of empty spaces in them which would have been filled only with air. An air-sac system connected to the spaces not only lightened the long necks, but effectively increased the airflow through the trachea, helping the creatures to breathe in enough air. By evolving vertebrae consisting of 60% air, the sauropods were able to minimize the amount of dense, heavy bone without sacrificing the ability to take sufficiently large breaths to fuel the entire body with oxygen. According to Kent Stevens, computer-modeled reconstructions of the skeletons made from the vertebrae indicate that sauropod necks were capable of sweeping out large feeding areas without needing to move their bodies, but were unable to be retracted to a position much above the shoulders for exploring the area or reaching higher.
Another proposed function of the sauropods’ long necks was essentially a radiator to deal with the extreme amount of heat produced from their large body mass. Considering that the metabolism would have been doing an immense amount of work, it would certainly have generated a large amount of heat as well, and elimination of this excess heat would have been essential for survival. It has also been proposed that the long necks would have cooled the veins and arteries going to the brain, avoiding excessively heated blood from reaching the head. It was in fact found that the increase in metabolic rate resulting from the sauropods’ necks was slightly more than compensated for by the extra surface area from which heat could dissipate.
When sauropods were first discovered, their immense size led many scientists to compare them with modern-day whales. Most studies in the 19th and early 20th centuries concluded that sauropods were too large to have supported their weight on land, and therefore that they must have been mainly aquatic. Most life restorations of sauropods in art through the first three quarters of the 20th century depicted them fully or partially immersed in water. This early notion was cast in doubt beginning in the 1950s, when a study by Kermack (1951) demonstrated that, if the animal were submerged in several metres of water, the pressure would be enough to fatally collapse the lungs and airway. However, this and other early studies of sauropod ecology were flawed in that they ignored a substantial body of evidence that the bodies of sauropods were heavily permeated with air sacs. In 1878, paleontologist E.D. Cope had even referred to these structures as "floats".
Beginning in the 1970s, the effects of sauropod air sacs on their supposed aquatic lifestyle began to be explored. Paleontologists such as Coombs and Bakker used this, as well as evidence from sedimentology and biomechanics, to show that sauropods were primarily terrestrial animals. In 2004, D.M. Henderson noted that, due to their extensive system of air sacs, sauropods would have been buoyant and would not have been able to submerge their torsos completely below the surface of the water; in other words, they would float, and would not have been in danger of lung collapse due to water pressure when swimming.
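Henderson's buoyancy argument follows from Archimedes' principle: a floating body submerges only the fraction of its volume equal to its mean density relative to the water. A hedged sketch (the tissue density and air fraction below are round-number assumptions for illustration, not values from the study):

```python
# Archimedes' principle: submerged volume fraction = body density / water density.

WATER_DENSITY = 1000.0  # kg/m^3, fresh water

def submerged_fraction(body_density: float) -> float:
    """Fraction of body volume below the waterline; 1.0 means it sinks."""
    return min(body_density / WATER_DENSITY, 1.0)

# Air sacs lower the animal's mean density well below that of water.
solid_tissue = 1050.0   # kg/m^3, assumed density of muscle/bone mix
air_fraction = 0.15     # assumed fraction of body volume that is air
mean_density = solid_tissue * (1 - air_fraction)  # air mass is negligible

print(f"mean body density: {mean_density:.1f} kg/m^3")
print(f"submerged fraction when floating: {submerged_fraction(mean_density):.2f}")
```

Because the mean density comes out below that of water, the animal floats with part of its torso above the surface, which is why lung collapse from water pressure was never a real risk.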
Evidence for swimming in sauropods comes from fossil trackways that have occasionally been found to preserve only the forefeet (manus) impressions. Henderson showed that such trackways can be explained by sauropods with long forelimbs (such as macronarians) floating in relatively shallow water deep enough to keep the shorter hind legs free of the bottom, and using the front limbs to punt forward. However, due to their body proportions, floating sauropods would also have been very unstable and maladapted for extended periods in the water. This mode of aquatic locomotion, combined with its instability, led Henderson to refer to sauropods in water as "tipsy punters".
While sauropods could therefore not have been aquatic as historically depicted, there is evidence that they preferred wet and coastal habitats. Sauropod footprints are commonly found following coastlines or crossing floodplains, and sauropod fossils are often found in wet environments or intermingled with fossils of marine organisms. A good example of this would be the massive Jurassic sauropod trackways found in lagoon deposits on Scotland's Isle of Skye.
Many lines of fossil evidence, from both bone beds and trackways, indicate that sauropods were gregarious animals that formed herds. However, the makeup of the herds varied between species. Some bone beds, for example a site from the Middle Jurassic of Argentina, appear to show herds made up of individuals of various age groups, mixing juveniles and adults. However, a number of other fossil sites and trackways indicate that many sauropod species travelled in herds segregated by age, with juveniles forming herds separate from adults. Such segregated herding strategies have been found in species such as Alamosaurus, Bellusaurus and some diplodocids.
In a review of the evidence for various herd types, Myers and Fiorillo attempted to explain why sauropods appear to have often formed segregated herds. Studies of microscopic tooth wear show that juvenile sauropods had diets that differed from their adult counterparts, so herding together would not have been as productive as herding separately, where individual herd members could forage in a coordinated way. The vast size difference between juveniles and adults may also have played a part in the different feeding and herding strategies.
Since the segregation of juveniles and adults must have taken place soon after hatching, and combined with the fact that sauropod hatchlings were most likely precocial, Myers and Fiorillo concluded that species with age-segregated herds would not have exhibited much parental care. On the other hand, scientists who have studied age-mixed sauropod herds suggested that these species may have cared for their young for an extended period of time before the young reached adulthood. A 2014 study suggested that the time from laying the egg to the time of the hatching was likely to have been between 65 and 82 days. Exactly how segregated versus age-mixed herding varied across different groups of sauropods is unknown. Further examples of gregarious behavior will need to be discovered from more sauropod species to begin detecting possible patterns of distribution.
Since early in the history of their study, scientists, such as Osborn, have speculated that sauropods could rear up on their hind legs, using the tail as the third 'leg' of a tripod. A skeletal mount depicting the diplodocid Barosaurus lentus rearing up on its hind legs at the American Museum of Natural History is one illustration of this hypothesis. In a 2005 paper, Rothschild and Molnar reasoned that if sauropods had adopted a bipedal posture at times, there would be evidence of stress fractures in the forelimb 'hands'. However, none were found after they examined a large number of sauropod skeletons.
In 2009, Heinrich Mallison was the first to study the physical potential for various sauropods to rear into a tripodal stance. Mallison found that some characters previously linked to rearing adaptations were actually unrelated (such as the wide-set hip bones of titanosaurs) or would have hindered rearing. For example, titanosaurs had an unusually flexible backbone, which would have decreased stability in a tripodal posture and would have put more strain on the muscles. Likewise, it is unlikely that brachiosaurids could rear up onto the hind legs, as their center of gravity was much farther forward than that of other sauropods, which would cause such a stance to be unstable.
Diplodocids, on the other hand, appear to have been well adapted for rearing up into a tripodal stance. Diplodocids had a center of mass directly over the hips, giving them greater balance on two legs. Diplodocids also had the most mobile necks of sauropods, a well-muscled pelvic girdle, and tail vertebrae with a specialised shape that would allow the tail to bear weight at the point it touched the ground. Mallison concluded that diplodocids were better adapted to rearing than elephants, which do so occasionally in the wild. He also argues that stress fractures in the wild do not occur from everyday behaviour, such as feeding-related activities (contra Rothschild and Molnar).
There is controversy over whether sauropods held their heads near vertically or horizontally. The claim that the long necks of sauropods were used for browsing high trees has been questioned on the basis of calculations of the energy needed to create the arterial blood pressure for the head if it was held upright. These calculations suggest this would have taken up roughly half of its energy intake. Further, to supply blood to the head vertically held high would have required blood pressure of around 700 mmHg (= 0.921 bar) at the heart. This would have needed hearts 15 times the size of the hearts of whales of similar size. This suggests it was more likely that the long neck was usually held horizontally to enable them to feed on plants over a very wide area without needing to move their bodies—a potentially large saving in energy for 30 to 40 ton animals. In support of this, reconstructions of the necks of Diplodocus and Apatosaurus show that they are basically straight with a gentle decline orientating their heads in a "neutral, undeflected posture" when close to ground.
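The ~700 mmHg figure can be sanity-checked with the hydrostatic term ρgh, the extra arterial pressure needed just to lift blood a height h above the heart. The ~9 m head height used below is an assumed value for a large sauropod with the neck held vertically, not a figure from the text:

```python
# Hydrostatic pressure to raise blood to the head: P = rho * g * h.
# The 9 m height is an illustrative assumption, not a measured value.

BLOOD_DENSITY = 1050.0   # kg/m^3, approximate density of blood
G = 9.81                 # m/s^2, gravitational acceleration
PA_PER_MMHG = 133.322    # pascals per millimetre of mercury

def hydrostatic_mmhg(height_m: float) -> float:
    """Extra arterial pressure (mmHg) to lift blood height_m above the heart."""
    return BLOOD_DENSITY * G * height_m / PA_PER_MMHG

# For a head ~9 m above the heart this lands near the ~700 mmHg cited
# in the text (on top of the pressure needed for normal perfusion).
print(f"{hydrostatic_mmhg(9.0):.0f} mmHg")
```

The agreement with the cited value shows the argument is essentially a hydrostatic one, which is why a horizontal neck posture is so much cheaper energetically.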
However, research on living animals has suggested that sauropod heads were held in an upright S-shaped curve. According to this research, inferring a "neutral head posture" from bones, which suggests a horizontal position, may be unreliable: applied to living animals, the method would imply that they also hold their heads in this position, even though they do not. Research published in 2013, however, cast doubt on the flexibility of sauropod necks. Studies of the long-necked ostrich, whose neck structure is close to that of sauropods, suggested that sauropods may not have had the flexible necks that the media have portrayed. Using computer modelling, Matthew Cobley et al. found that muscle attachments and cartilage present in the neck would likely have limited flexibility to a considerable degree. This finding also suggests that sauropods may have had to move their whole bodies around to better access areas where they could graze and browse on vegetation. However, the study did not examine other long-necked animals, such as giraffes, and its conclusions cannot be confirmed without further evidence.
Sauropod trackways and other fossil footprints (known as "ichnites") are known from abundant evidence present on most continents. Ichnites have helped support other biological hypotheses about sauropods, including general fore and hind foot anatomy (see Limbs and feet above). Generally, prints from the forefeet are much smaller than the hind feet, and often crescent-shaped. Occasionally ichnites preserve traces of the claws, and help confirm which sauropod groups lost claws or even digits on their forefeet.
Sauropod tracks from the Villar del Arzobispo Formation of early Berriasian age in Spain support the gregarious behaviour of the group. The tracks are possibly more similar to Sauropodichnus giganteus than to any other ichnogenus, although they have been suggested to be from a basal titanosauriform. The tracks are wide-gauge, and the grouping with Sauropodichnus is also supported by the manus-to-pes distance, the kidney-bean-shaped morphology of the manus, and the subtriangular morphology of the pes. Whether the footprints were left by juveniles or adults cannot be determined, because no method has previously been established for identifying individual age from trackways.
Generally, sauropod trackways are divided into three categories based on the distance between opposite limbs: narrow gauge, medium gauge, and wide gauge. The gauge of the trackway can help determine how wide-set the limbs of various sauropods were and how this may have impacted the way they walked. A 2004 study by Day and colleagues found that a general pattern could be found among groups of advanced sauropods, with each sauropod family being characterised by certain trackway gauges. They found that most sauropods other than titanosaurs had narrow-gauge limbs, with strong impressions of the large thumb claw on the forefeet. Medium gauge trackways with claw impressions on the forefeet probably belong to brachiosaurids and other primitive titanosauriformes, which were evolving wider-set limbs but retained their claws. Primitive true titanosaurs also retained their forefoot claw but had evolved fully wide gauge limbs. Wide gauge limbs were retained by advanced titanosaurs, trackways from which show a wide gauge and lack of any claws or digits on the forefeet.
Occasionally, only trackways from the forefeet are found. Falkingham et al. used computer modelling to show that this could be due to the properties of the substrate. These need to be just right to preserve tracks. Differences in hind limb and fore limb surface area, and therefore contact pressure with the substrate, may sometimes lead to only the forefeet trackways being preserved.
In a study published in PLoS ONE on October 30, 2013, by Bill Sellers, Rodolfo Coria, Lee Margetts et al., Argentinosaurus was digitally reconstructed to test its locomotion for the first time. Before the study, the most common way of estimating speed was through studying bone histology and ichnology. Commonly, studies about sauropod bone histology and speed focus on the postcranial skeleton, which holds many unique features, such as an enlarged process on the ulna, a wide lobe on the ilia, an inward-slanting top third of the femur, and an extremely ovoid femur shaft. Those features are useful when attempting to explain trackway patterns of graviportal animals. When studying ichnology to calculate sauropod speed, there are a few problems, such as only providing estimates for certain gaits because of preservation bias, and being subject to many more accuracy problems.
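Trackway-based speed estimates of the kind the study contrasts with are commonly computed from Alexander's (1976) empirical relation between stride length and hip height. A minimal sketch of that calculation follows; the numeric inputs are illustrative only, not measurements from Argentinosaurus material:

```python
import math

def alexander_speed(stride_m, hip_height_m, g=9.81):
    """Estimate locomotion speed (m/s) from a trackway using
    Alexander's (1976) empirical relation:
        v = 0.25 * g**0.5 * stride**1.67 * hip_height**-1.17
    stride_m: distance between successive prints of the same foot.
    hip_height_m: often approximated from footprint length
    (the ~4x pes-length rule of thumb is an assumption here).
    """
    return 0.25 * math.sqrt(g) * stride_m**1.67 * hip_height_m**-1.17

# Illustrative values only: a 5 m stride at a 3 m hip height
# gives a walking speed of roughly 3 m/s.
speed = alexander_speed(5.0, 3.0)
```

As the study notes, such formulas can only sample whatever gaits happened to be preserved, which is one reason the authors turned to musculoskeletal simulation instead.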
To estimate the gait and speed of Argentinosaurus, the study performed a musculoskeletal analysis. The only previous musculoskeletal analyses had been conducted on hominoids, terror birds, and other dinosaurs. Before they could conduct the analysis, the team had to create a digital skeleton of the animal in question, show where there would be muscle layering, locate the muscles and joints, and finally find the muscle properties before finding the gait and speed. The results of the biomechanics study revealed that Argentinosaurus was mechanically competent at a top speed of 2 m/s (5 mph) given the great weight of the animal and the strain that its joints were capable of bearing. The results further revealed that much larger terrestrial vertebrates might be possible, but would require significant body remodeling and possibly behavioral change to prevent joint collapse.
Sauropods were gigantic, yet they descended from surprisingly small ancestors. Basal dinosauriforms, such as Pseudolagosuchus and Marasuchus from the Middle Triassic of Argentina, weighed approximately 1 kg (2.2 lb) or, in most cases, less. At the evolutionary point named Saurischia, a rapid increase in body size appeared, although more primitive members like Eoraptor, Panphagia, Pantydraco, Saturnalia and Guaibasaurus still retained a moderate size, possibly even less than 10 kg (22 lb). Even among these small, primitive forms, there is notable size growth in sauropodomorphs, although the scanty remains from this period of sauropod evolution make sizes difficult to interpret. There is, however, one definite example of a small derived sauropodomorph: Anchisaurus, which weighed under 50 kg (110 lb), even though it is closer to the sauropods than Plateosaurus and Riojasaurus, which were upwards of 1 t (0.98 long tons; 1.1 short tons) in weight.
Compared to even derived sauropodomorphs, sauropods were huge. Their still larger size probably resulted from an increased growth rate, which appears to have been linked with tachymetabolic endothermy, a condition that evolved in sauropodomorphs. Once the sauropod lineage branched off, sauropods continued steadily to grow larger, with smaller forms, like the Early Jurassic Barapasaurus and Kotasaurus, evolving into even larger ones like the Middle Jurassic Mamenchisaurus and Patagosaurus. Following the size increase of sauropods, theropods also grew larger, as shown by an Allosaurus-sized coelophysoid from Germany. As one possible explanation for the increased body size is a reduced risk of predation, the size evolution of sauropods and theropods is probably linked.
Neosauropoda is quite plausibly the largest-bodied clade of dinosaurs ever to have existed, with a few exceptions. Most exceptions are hypothesized to be caused by island dwarfism, although there is a trend in Titanosauria towards a smaller body size. The titanosaurs, however, have also included some of the largest sauropods ever. Outside the titanosaurs, one clade of diplodocoids, Dicraeosauridae, is diagnosed by a small body size despite belonging to a group of giants. No sauropods were truly small, however, for even "dwarf" sauropods exceeded 500 kg (1,100 lb), a size reached by only about 10% of all mammalian species.
Although sauropods were in general large, gigantic size (40 t (39 long tons; 44 short tons) or more) was reached independently multiple times in their evolution. Many gigantic forms existed in the Late Jurassic (specifically the Kimmeridgian and Tithonian), such as the turiasaur Turiasaurus and the diplodocoids Amphicoelias, Diplodocus and Supersaurus. Through the Early to Late Cretaceous, the giants Sauroposeidon, Paralititan, Argentinosaurus, Puertasaurus, Antarctosaurus giganteus, Dreadnoughtus schrani, Notocolossus and Futalognkosaurus lived, the earliest being a brachiosaurid, with all the later forms being titanosaurs. One sparsely known possible giant is Huanghetitan ruyangensis, known only from 3 m (9.8 ft) long ribs. All of the giant genera and species lived from the Late Jurassic to the Late Cretaceous, over a time span of 85 million years, and are independently evolved neosauropods.
Insular dwarfism in sauropods resulted from a reduced growth rate, the opposite of the increased growth rate that drove the evolution of sauropod gigantism. Two well-known island dwarfs are the Cretaceous Magyarosaurus (whose identity as a dwarf was at one point challenged) and the Jurassic Europasaurus, both from Europe. Even though these sauropods are small, the only way to prove they are true dwarfs is through a study of their bone histology. A study by Martin Sander and colleagues in 2006 examined eleven individuals of Europasaurus holgeri using bone histology and demonstrated that the small island species evolved through a decrease in the growth rate of long bones as compared to rates of growth in ancestral species on the mainland. Two other possible dwarfs are Rapetosaurus, which existed on the isolated Cretaceous island of Madagascar, and Ampelosaurus, a titanosaur that lived on the Iberian peninsula in southern Spain and France. The possible dwarf Cetiosauriscus from Switzerland might also qualify, but this has yet to be proven. One of the most extreme cases of island dwarfism is Europasaurus, a relative of the much larger Camarasaurus and Brachiosaurus that was only about 6.2 m (20 ft) long; its diminutive size is considered diagnostic of the genus. The cause of the size reduction found by the authors was a reduced growth rate, which is now considered to be why all dwarfs are so small.
The first scrappy fossil remains now recognized as sauropods all came from England and were originally interpreted in a variety of different ways. Their relationship to other dinosaurs was not recognized until well after their initial discovery.
The first sauropod fossil to be scientifically described was a single tooth known by the non-Linnaean descriptor Rutellum implicatum. This fossil was described by Edward Lhuyd in 1699, but was not recognized as a giant prehistoric reptile at the time. Dinosaurs would not be recognized as a group until over a century later.
Richard Owen published the first modern scientific description of sauropods in 1841, in his paper naming Cetiosaurus and Cardiodon. Cardiodon was known only from two unusual, heart-shaped teeth (from which it got its name), which could not be identified beyond the fact that they came from a previously unknown large reptile. Cetiosaurus was known from slightly better, but still scrappy remains. Owen thought at the time that Cetiosaurus was a giant marine reptile related to modern crocodiles, hence its name, which means "whale lizard". A year later, when Owen coined the name Dinosauria, he did not include Cetiosaurus and Cardiodon in that group.
In 1850, Gideon Mantell recognized the dinosaurian nature of several bones assigned to Cetiosaurus by Owen. Mantell noticed that the leg bones contained a medullary cavity, a characteristic of land animals. He assigned these specimens to the new genus Pelorosaurus, and grouped it together with the dinosaurs. However, Mantell still did not recognize the relationship to Cetiosaurus.
The next sauropod find to be described and misidentified as something other than a dinosaur was a set of hip vertebrae described by Harry Seeley in 1870. Seeley found that the vertebrae were very lightly constructed for their size and contained openings for air sacs (pneumatization). Such air sacs were at the time known only in birds and pterosaurs, and Seeley considered the vertebrae to come from a pterosaur. He named the new genus Ornithopsis, or "bird face", because of this.
When more complete specimens of Cetiosaurus were described by Phillips in 1871, he finally recognized the animal as a dinosaur related to Pelorosaurus. However, it was not until the description of new, nearly complete sauropod skeletons from the United States (representing Apatosaurus and Camarasaurus) in 1877 that a complete picture of sauropods emerged. An approximate reconstruction of a complete sauropod skeleton was produced by artist John A. Ryder, hired by paleontologist E.D. Cope, based on the remains of Camarasaurus, though many features were still inaccurate or incomplete according to later finds and biomechanical studies. Also in 1877, Richard Lydekker named another relative of Cetiosaurus, Titanosaurus, based on an isolated vertebra.
In 1878, the most complete sauropod yet was found and described by Othniel Charles Marsh, who named it Diplodocus. With this find, Marsh also created a new group to contain Diplodocus, Cetiosaurus, and their increasing roster of relatives to differentiate them from the other major groups of dinosaurs. Marsh named this group Sauropoda, or "lizard feet".
The necks of the sauropod dinosaurs were by far the longest of any animal...
History From America's Most Famous Valleys
Early Eighteenth Century Palatine Emigration
A British Government Redemptioner Project to Manufacture Naval Stores
by Walter Allen Knittle, Ph.D.
Department of History
College of the City of New York
Published Philadelphia, 1937
1. THE CAUSES OF THE EARLY PALATINE EMIGRATION
Page 1: SHIPLOADS OF German peoples, variously estimated from two thousand to thirty-two thousand, (1) arrived in London between May and November of 1709. A year earlier a small band of fifty had preceded them. As most of the latter and the greater part of the former group came from the Rhenish or Lower Palatinate, the name "Palatine" was applied indiscriminately to the rest of the immigrants, although they came from the neighboring territories as well. (2)
A contemporary pamphlet lists the home principalities as follows: the Palatinate, the districts of Darmstadt and Hanau, Franconia (including the area around the cities of Nuremburg, Baireuth and Wurzburg), the Archbishopric of Mayence, and the Archbishopric of Treves. The districts of Spires, Worms, Hesse-Darmstadt, Zweibrücken, Nassau, Alsace and Baden are also mentioned. (3) To this list Wurtemberg must be added,
1 John Stow, Survey of the Cities of London and Westminster (1720), II, 43 estimated the immigration of 1709 at two or three thousand; William Maitland, History of London (1756), I, 507 has twelve thousand as their number; a contemporary account in Das verlangte nicht erlangte Canaan ... oder Ausführliche Beschreibung von der unglücklichen Reise derer jüngsthin aus Teutschland nach dem Engellandischen in America gelegen Carolina und Pensylvanien.... (Franckfurt und Leipzig, 1711), 113, hereafter cited as Das verlangte nicht erlangte Canaan, gives the total number who went to England as 32,468.
2 "A Brief History of the Poor Palatine Refugees Lately Arrived in England" (July 18, 1709), in Ecclesiastical Records of the State of New York (Albany, 1902), III, 1782, hereafter cited as Eccles. Rec. Copies of the 1709 edition are in the British Museum and the National Library of Dublin. A 1710 edition may be examined in the Trinity College Library, Dublin. The name "Palatine" will be used below consistently in referring to all the German immigrants of this period, since it appears most convenient, if not strictly accurate.
Drawn by A. Cefola.
Page 2: since a number of Palatines are known to have emigrated thence, notably John Conrad Weiser. The area, from which the emigration poured, extended along both sides of the Rhine River and its tributaries, the Main and Neckar Rivers. It extended roughly from the junction of the Moselle and the Rhine south to Basle, Switzerland; and from Zweibrucken, alongside Lorraine, as far west along the Main as Baireuth, bordering the Upper (or Bavarian) Palatinate. (4)
Many causes were given for the unprecedented size of the emigration. That most frequently mentioned was devastation
See Map of Germany.
Page 3: by war. The end of the Thirty Years' War left the people of the Palatinate prostrate. True enough, a remarkable recovery from this visitation was achieved, due to the fertility of the soil and the cooperation of the ruler, but prosperity was short-lived; in the latter part of the seventeenth century the Palatinate was repeatedly the stamping ground of Louis XIV's armies. Marshal Turenne thoroughly devastated the province in 1674. Moreover, protracted disputes among the neighboring princes, remaining from the religious wars of the early part of the century, gave rise to continuous warfare, in one instance between the Archbishop of Mayence, assisted by the Duke of Lorraine, and the Elector Palatine. (5) In 1688-9, partly to vent his malice against Protestants, the Grand Monarch had the Palatinate laid waste again. The military necessities following William III's "conquest" of England probably made this step necessary. At any rate, over two hundred years later the Heidelberg ruins left by this invasion were described as "the most interesting ruins in Europe." (6)
During the War of the Spanish Succession, Marshal Villars crossed the Rhine unexpectedly in May, 1707, terrorized southwestern Germany, plundering and requisitioning freely on the Palatinate, Wurtemberg, Baden and the Swabian Circle. (7) In September of the same year, the French retired across the Rhine, having, in the words of an angry colonel in the English army, "overrun the lazy and sleepy Empire and not only maintained a great army in it all the year, but by contributions, sent money into France to help the King's other affairs." (8) Not only was this invasion unnecessary from
(5) Theatrum Europaeum, XI, 344, 497; L. Häusser, Geschichte der Rheinischen Pfalz (1856), II, 629; N. M. Pletcher, Some Chapters from the History of the Rhine Country (N. Y., 1907), 94.
(6) J. G. Wilson, in American Historical Assoc. Reports (1891), 287.
(7) Townshend Mss. (Hist. Mss. Com. 11th report, Appendix), IV, 65, mentions "the plunder and the money they took by force from the good families of Strasbourg."
(8) C. T. Atkinson, "The War of the Spanish Succession, Campaigns and Negotiations," in Camb. Mod. Hist., V, 418.
Page 4: a military point of view but it was also a political blunder, for it united Germany against Louis. (9) But for the people living in the war zone, these invasions wiped out the fruits of many new and promising revivals, and discouraged further struggle for better living conditions. (10)
To the curse of devastation was added an unkind prank of nature, when at the end of 1708 a winter, cruel beyond the precedent of a century, set in to blight the region. As early as the beginning of October the cold was intense, and by November 1st, it was said, firewood would not burn in the open air! In January of 1709 wine and spirits froze into solid blocks of ice; birds on the wing fell dead; and, it is said, saliva congealed in its fall from the mouth to the ground.(11) Most of Western Europe was frozen tight. The Seine and all the other rivers were icebound and on the 8th of January, the Rhone, one of the most rapid rivers of Europe, was covered with ice. But what had never been seen before, the sea froze sufficiently all along the coasts to bear carts, even heavily laden.(12) Narcissus Luttrell, a famous English diarist of that day, wrote of the great violence of the frost in England and in foreign parts, where several men were frozen to death in many countries." The Arctic weather lasted well into the fourth month. Perhaps
9 A. Hassall, "The Foreign Policy of Louis XIV," in Camb. Mod. Hist., V, 57.
10 Abel Boyer, The History of the Reign of Queen Anne digested into Annals 1709 (London, 1710), 166; hereafter cited as Boyer, Annals. Professor Julius Goebel, Sr., has performed a valuable service by publishing a collection of letters by a few emigrants of 1709. These letters clearly show that the bad economic conditions were largely responsible for their authors' emigration. "Briefe Deutscher Auswanderer aus dem Jahre 1709," in Jahrbuch der Deutsch-Amerikanischen Historischen Gesellschaft von Illinois (Chicago, Illinois, 1912), 124-189.
11 R. N. Bain, "Charles XII and the Great Northern War," in Camb. Mod. Hist., V, 600.
12 Memoires ... du ... duc de Saint-Simon (Paris, 1857), IV, 280; Journal du Marquis de Dangeau (Paris, 1857), XII, 303 et seq.
13 Narcissus Luttrell, Brief Relation of State Affairs (Oxford, 1857), VI, 393, 399 under dates of January 8th and January 25, 1709.
Page 5: the period of heaviest frost was from the 6th to the 25th of January. Then snow fell until February 6th. (14) The fruit trees were killed and the vines were destroyed. The calamity of this unusually bitter weather fell heavily on the husbandmen and vine-dressers, who in consequence made up more than half of the emigrants of 1709. (15)
Other influences almost as malign, though of a more chronic nature, were disturbing the inhabitants of the Rhine Valley. The splendor of Versailles had dazzled many petty rulers of Germany, who sought to emulate the gorgeous court life surrounding Louis XIV. The expenses of their lavish and arrogant living had to be met by heavy taxes on their subjects, often so exhausting as to leave the peasants themselves without bread. Naturally bitter feelings were aroused against the ruling class, who called themselves fathers of the people without exhibiting any traces of fatherly care for their welfare. The need for money to carry on war too made the taxes mount higher day by day. A letter from the Palatinate in 1681 mentioned that "Thousands would gladly leave the Fatherland if they had the means to do so," because of the French devastation and "besides this, we are now suffering the plague of high taxes." (16) Conditions did not improve during the next twenty-five years apparently, for an unbiased report from the Palatines waiting in Holland for transportation to England stated they came flying "to shake of the burdens they ly under by the hardshipps of their Princes governments and the contributions they must pay to the Enemy.", (17) Therefore,
14 Klopp, Der Fall des Hauses Stuart (Wien, 1887), 215.
15 Journal of House of Commons, XVI, 597; hereafter cited as C. J.; Eccles. Rec., III, 1747, 1824; Public Record Office Mss., Colonial Office, 388/76, 56 ii, 64, 68-70, hereafter cited as P. R. O., C. O.; Friederich Kapp, Die Deutschen im Staate New York (New York, 1884), I, 19; Franz Löher, Geschichte und Zustände der Deutschen in Amerika (Cincinnati, 1847), 42; Der Deutsche Pionier (Cincinnati, 1882), XIV, 295.
16 Letter of Henrich Frey, D. H. Bertolet, The Bertolet Family (Harrisburg, Pennsylvania, 1914), 173
17 Public Record Office, State Papers, 84/232, 248, hereafter cited as P. R. O., S. P.
Page 6: oppressive feudal exactions by the petty rulers may be regarded as one of the underlying reasons for the emigration. (18)
Another cause suggested, and in general accepted in eighteenth century England, was religious persecution. Certainly religious conditions were of large importance in the early eighteenth century. To ingratiate themselves with benevolently inclined people, emigrants found it convenient to plead religious persecution. Friends of the immigration in England justified their help on religious grounds, while others fiercely attacked the authenticity of the rumored persecutions. The disagreement on this point has been perpetuated by descendants of that German stock, who are reluctant to forego a lustrous prestige equal to that of the Pilgrim Fathers.
What was the religious condition of the Germanies in 1709? Cuius regio, eius religio, established at the Peace of Augsburg (1555) and modified by the Treaty of Westphalia (1648), was still functioning. It recognized three churches: Catholic, Lutheran and Calvinist, and provided that the religion of the ruler should be the religion of the people. Under such conditions religious persecution might well exist. The belief that religious persecution was a cause is strengthened at first sight by the fact that the Elector of the Palatinate in 1709 was John William, Duke of Neuburg, a Catholic. (19) There are no formal charges of persecution, however, about 1709. (20) Of course, this
18 Library of Congress MSS., Archdale MSS. 1694-1706, 57, hereafter cited as L. C., Archdale MSS.; Das verlangte nicht erlangte Canaan, 21; "Brief History," in Eccles. Rec., III, 1785 and 1794; W. H. Bruford, Germany in the 18th Century (Cambridge, Eng., 1935), 39, 121.
19 The State of the Palatines for Fifty Years Past to This Present Time (London, 1709), 3. A 1710 edition of this pamphlet is published in Eccles. Rec., III, 1820. The copy of the 1709 edition is in the Widener Library of Harvard University.
20 Reports of persecution by the Elector Palatine in 1709 refer to the Bavarian Palatinate and also to Silesia. Luttrell, op. cit., VI, 464, 483. These accounts are not to be attributed to John William, Elector Palatine, of the Rhenish or Lower Palatinate, a different man. Also see Monthly Mercury (July, 1709), XX, 248.
Page 7: might be due to the inexpediency of criticizing the Elector Palatine, an English ally in the War of the Spanish Succession then being waged. But by the same token, the Elector should have found it poor policy to affront his Protestant ally (England) by mistreatment of his own Protestant subjects. (21) John William had reigned since 1690. While there are reports of persecution in 1699, (22) had religious intolerance at that time been the sole cause of the emigration, it should have driven away these German emigrants before 1709.
The disagreement on this point in the past warrants a close examination of the religious composition of those immigrant groups in London. Of the first forty-one Germans of the 1708 immigration, fifteen were Lutherans and twenty-six Calvinists (or Reformed). (23) The fourteen others who joined the group in London were also Protestants. In their petition to the Queen this group, all Protestant, made no mention of religious persecution. They spoke, though, of the French ravages in 1708 in the Rhine and Neckar Valleys. (24) For the 1709 immigration, four lists compiled in London exist of those who arrived from May 3rd to June 16th. Unfortunately no list seems to have been made in London after that date, but for the 6,500 Palatines then present these lists are informative and
21 The relations between England and the Palatinate were excellent at this time. The Elector Palatine secured the support of the English at the Vienna Court (British Museum Mss., Ad. Mss. 15866, 90, hereafter cited as B. M.) and was supplying his troops for English and Dutch use. The English used eleven battalions of Palatine troops in Catalonia in 1709. P. R. 0., S. P. 44/107, 221; S. P. 34/11, 154. In fact, on the occasion of the New Year in 1709 the rulers of England and the Palatinate exchanged greetings in their own handwriting, an unusually friendly proceeding. B. M., Add. Mss. 15866,156.
22 Eccles. Rec., III, 1453 et seq.
23 Journal of the Commissioners of Trade and Plantations 1704-1708, 484; hereafter cited as B. T. Jour. The first Board of Trade report erred in referring to them as "These 41 poor Lutherans," Calendar of State Papers, Colonial America and West Indies 1706-8, 723; hereafter cited as C. C. In all cases the page, not the number of the document, is cited.
24 Ibid., 720.
Page 8: reliable. They were made by two German clergymen at the English court, John Tribbeko, chaplain to the late royal consort, Prince George of Denmark, and George Andrew Ruperti, minister of St. Mary's German Lutheran Church in Savoy. These 1,770 families were distributed as follows: Lutherans, 550; Reformed, 693; Catholics, 512; Baptists, 12; Mennonites, 3. Almost one-third of the Palatines in London on June 16, 1709, were of the Catholic faith. (25)
Religious persecution by the Catholic Elector might drive out Protestants, but certainly not Catholics. It might still be held that the Protestants had fled from Catholic rulers and the Catholics from Protestant princes. Yet, on August 2, 1709, an English gentleman, Roger Kenyon, wrote to his sister-in-law that he had visited the Palatines on Blackheath, a commons seven miles southeast of London. He added that they "came over not on account of religious persecution, for most of them were under Protestant princes . . . . . ." (26) The real religious difficulties in Germany were those created by the clash of the various sects. Anton Wilhelm Bohme, pastor of the German Court Chapel of St. James and an influential friend of the Palatines at court, so advised a correspondent in Germany on May 26, 1710. Bohme mentions the desire of many people to seek a nonsectarian Christianity in Pennsylvania. The question which Bohme answered was whether it was deemed advisable that people, who on account of their conscience could no longer subscribe to any sect and therefore were tolerated almost nowhere, should carry out their desire to emigrate although they had no real certainty of God's will. In a fatherly fashion, Bohme advised them to examine their own conscience for the inner or motivating cause of such an important journey. Significantly, he wrote that many a man, after he had acquired flourishing acres in America, forgot the
25 P. R. O., C. O. 388/76, 56ii, 64, 68-70. The first list, that of May 6th, is given in Appendix B, but not all the vital statistics in the list are included for reasons mentioned there.
26 Kenyon MSS. (Hist. MSS. Com., 14th Report, Appendix), IV, 443.
Page 9: religious motivation of his pilgrimage. Such people degenerated so far that they were more concerned with the cultivation of their lands than of their souls. Bohme added that they stood as so many monuments, warning others not to allow greed to move them. (27)
Although Bohme strongly doubted the religious urge for the new world, he also mentioned disagreement with, and persecutions by, the authorities incited by religious zealots and orthodox Churchmen. These, he held, should be suffered for the sake of truth and the glorious blessing promised by the Lord. The persecutions must not have been severe, for Bohme confessed that he could not see how a Christian could, on account of the oppression suffered up to then, leave his fatherland. (28) The German divine dwelt at great length upon the dangerous temptations of religious squabbles.
The theory, that religious persecution was a most important cause for these emigrations, has been impaired by Bohme's letter. In his argument, he declared that only a very few of these people, when they came to England, had provided themselves with a prayerbook or similar religious work. Fewer still had a New Testament or Bible, and they would have remained without any were it not for the Queen's generosity. (29) This fact lends support to other evidence. The Catholic Elector Palatine John William had issued on November 21, 1705, a declaration promising liberty of conscience. (30) In 1707 a disinterested person testified to the sincere execution of the declaration. (31) On the 27th of June, 1709, the Council of the
27 Das verlangte nicht erlangte Canaan, 15-30.
28 Ibid., 24.
29 Ibid., 22. One of the few Bibles brought from Germany at that time was that brought by Gerhart Schaeffer. This Lutheran Bible, published in Franckfurt am Mayn in 1701, is still in the possession of descendants of the Palatine Schaeffer, the Kingsley family of "The Rocks," Schoharie, N.Y.
30 Eccles. Rec., III, 1600.
31 John Toland, Declaration lately published by the Elector Palatine in favor of his Protestant Subjects (London, 1714), 4.
Pages 10-11: Protestant Consistory in the Palatinate issued a statement denying the pretenses of emigrants that they were persecuted. (32) Indeed, a colonial report of the Evangelical Lutheran Congregation in Pennsylvania made this statement, "Some may think that it is unreasonable to care for these people, as the most of them went into this distant part of the globe from their own irregular impulse, and without necessity or calling, because it no longer suited them to comply with good order in their native lands." (33) The plea was made then not to make the children born in America suffer for the error of their parents.
Indeed a dispatch from Holland in June, 1709, reported that the Palatines, Protestants and Catholics, "seem to agree all very well, being several of them mixed together husbands and wives of different religion or united by parentage." Further, they were "flying not so much for religion" as for other reasons. (34) Considering these facts, it must be concluded that religious persecution was not an important cause of the 1708-9 Palatine emigrations. Religious disputes and squabbles may have contributed in a minor way. Due to the special conditions existing along the Rhine and in England, it was advantageous to pose as "poor German Protestants" persecuted for their faith. This will be discussed in greater detail below.
To devastation by war, oppression by petty princes imitating the "Sun Monarch," the destructive winter of 1708-9, and religious bickerings, may be added a desire for adventure so usual in the youth of any land. These causes created a dissatisfaction with their present lot, which only irritated another potent cause, that of land hunger. A number of Palatines in New York were overheard to remark, "We came to America to establish our families--to secure lands for our children on
32 "Brief History," in Eccles. Rec., III, 1793.
33 Hallesche Nachrichten (Oswald Trans., Philadelphia, 1881), III, 237.
34 P. R. O., S. P. 84/232, 249.
Pages 12-13: which they will be able to support themselves after we die." (35) But all these causes themselves would perhaps have been insufficient to call forth such a great emigration of large families with young children on their hands. How did the attraction of the foreign shore come to them?
To those Germans dissatisfied with their lot, affected by the conditions outlined above, came the enticing advertising of English proprietors of the colonies in America. Pamphlets extolling the climate and life in the New World were disseminated throughout the Rhine Valley. Agents for the proprietors entered into negotiations with interested parties. Adventurers like Francois Louis Michel and George Ritter engaged to bring companies of colonists. (36) Correspondence was carried on between proprietors and prospective settlers. All these activities were in the interests of Carolina or Pennsylvania.
One of the Germans, Ulrich Simmendinger by name, migrated with these groups to New York; (37) and having lost his two children in England, he and his wife, Anna Margaretta, returned to their fatherland about 1717. Shortly thereafter he published a little booklet, (38) giving an account of his experiences and containing a list of those people he had left behind in New York. For this reason it is valuable in the study of that emigration. Simmendinger says that assuredly his friends would not think he made this hazardous trip for excitement and adventure, particularly with his wife and children. His resolution was made under the paternal necessity of providing
35 Documentary History of State of New York (Albany, 1850), III, 658, hereafter cited as Doc. Hist.
36 Townshend MSS. (Hist. MSS. Com., 11th Rept., Appendix), IV, 63; C.C., 1706-1708, 61.
37 Listed as one of the Palatines remaining at New York, 1710, Doc. Hist., III, 564.
38 Ulrich Simmendinger, Waarhaffte und glaubwurdige Verzeichnuss jeniger. . . Persoonen welche sich Anno 1709. . . aus Teutschland in America oder Neue Welt begeben. . . (Reuttlingen, ca. 1717). See Appendix F. below for list of families.
Page 14: for his own wife and children. He says nothing of religious persecution. Simmendinger apparently emigrated then with the intention of enjoying a better competence because of aid expected from the British Queen. (39) He further states that in the year 1709, in response to the genuinely golden promises written by the Englishmen, many other families from the Palatinate also set forth to England in order to go from there to Pennsylvania. (40)
In regard to the "golden promises," it is worth noticing that a British parliamentary committee investigating the causes of the immigration reported: "And upon the examination of several of them [the Palatines] what were the motives which induced them to leave their native country, it appears to the committee that there were books and papers dispersed in the Palatinate with the Queen's picture before the book and the Title Pages in Letters of Gold ( which from thence was called the Golden Book), to encourage them to come to England in order to be sent to Carolina or other of her Majesty's Plantations to be settled there. The book is chiefly a recommendation of that country." (41)
This work thus referred to might have been written by Kocherthal, as his book first appeared in 1706. (42) The Reverend
39 Ibid., 2-3. Simmendinger states this frankly. Frank R. Diffenderffer, "The German Exodus to England in 1709," in Pa. Ger. Soc. Proc. (1897), VII, 292, finds as one of the chief reasons for the emigration "the hope of bettering themselves."
40 "Dann als Anno 1709, auff die lauter guldene versprechende Engelland=ische Schreiben/viele Familien aus der Pfalz. . . hinab nach Engelland/um von dar nach Pensylvaniam uber zugehen." Ibid., 2. Also, Friederich Kapp, Geschichte der Deutschen Einwanderung in Amerika (Leipzig, 1868), 86.
41 C.J., (April 14, 1711), XVI, 597.
42 V.H. Todd and J. Goebel, Christoph van Graffenried's Account of the Founding of New Bern (N.C. Hist. Com. Pub., Raleigh, N. C., 1920) 14, conclude that the Golden Book is the same as Kocherthal's. This may have been true, but Simmendinger speaks of Pennsylvania. See also Christopher Sauer, Pennsylvania Bericht (1754), quoted in Der deutsche Pionier, XIV, 295-6.
Page 15: Joshua Kocherthal, (43) described as a German evangelical minister, had not been to America at the time he published his book, but he had been in England to make inquiries about the colonies. (44) Did Kocherthal come to some agreement with important members of the ministry? Was he their agent or was he simply in the service of the proprietors of Carolina? No definite promises are made in his book but several passages, coupled with the Queen's picture and the gilded title page, might give the impression to the poor people into whose hands the book would come, that they might expect help from her, both in crossing the channel and after their arrival in England, in going to the colonies. One passage read, "Whereupon finally the proposal was made that the queen be presented with a supplication as to whether she herself would not grant the ships . . . But these proposals are too extensive to describe here, and yet it is hoped that through them the effort will not be in vain, although in this matter no one can promise anything certain. . . . " (45) That its effect was great can be judged by its circulation. This handbook for Germans was so much in demand in the year 1709, that at least three more editions were printed. (46) In fact, the book continued to
43 This name has been spelled erroneously with a second K, "Kockerthal," by writers following documentary misspellings, apparently based on its pronunciation. The name appears on his tombstone at the Evangelical Lutheran Church, West Camp, N. Y., and uniformly in the British documents as "Kocherthal."
44 Todd and Goebel, op. cit., 13. Kocherthal may have been in communication with W. Killigrew, a gentleman much interested in Carolina, who in 1706 confidentially suggested to the British government that it buy out the Carolina proprietors through him at a low price, adding "I am in treaty with some thousand of Protestant People from foreign parts, who are desirous of to go thither when this affair is settled which naturally will increase the rent of the county and the customs by considerable for England." P. R. O., C. O. 5/306, 3i; C. C. 1706-1708, 183.
45 Ibid., 15; Kocherthal, Aussfuhrlich und umstandlicher Bericht von. . . Carolina (4th ed., Franckfurt, 1709), 28, hereafter cited as Kocherthal, Bericht.
46 Diffenderffer, op. cit., 317; A copy of the 4th impression is in the Library of Congress.
Page 16: Title Page of Kocherthal's Aussfuhrlich und umstandlicher Bericht (4th edition). Courtesy of the Library of Congress.
Page 17: have such an effect, even after Kocherthal had gone to New York in 1708, that Reverend Anton Wilhelm Bohme, a friend of the Palatines at court and previously referred to, felt called upon to contribute several letters for a pamphlet under the title, Das verlangte nicht erlangte Canaan ("The desired, not acquired Canaan"), directed specifically against Kocherthal's roseate description of Carolina. (47)
An interesting collection of manuscripts now preserved in the Library of Congress throws light on the problem pre-
(47) Todd and Goebel, op. cit., 14. A copy is in the Historical Society of Pennsylvania Library in Philadelphia. M. H. Hoen, who wrote the foreword, should be credited with editorship at least.
Page 18: sented by Kocherthal's veiled promises. This collection, known as the Archdale Papers, contains correspondence of John Archdale, one of the proprietors of Carolina. As early as 1705, Archdale was arranging for a settlement in Carolina by what was called the High German Company of Thuringia. Polycarpus Michael Pricherbach, the German correspondent, writing from Langensalza in Thuringia, mentioned reading Richard Blome's English America, a description of the English possessions in the western hemisphere. This had been translated into German and published in Leipzig in 1697. Four deputies were sent over to London with the intention of visiting some English province in America. They met and talked with a Mr. Telner, who it seems represented the proprietors of Carolina. They then returned to Germany. (48) The plans probably miscarried as nothing was heard of the venture later.
However, two proposals, made by the High German Company of Thuringia, suggested to the proprietors of Carolina the kind of advertising to use with the greatest appeal in the Germanies. On September 2, 1705, the German Company asked the Carolina proprietors to announce "that all such as shall address themselves to them, After the first Transport (Seeing it is needless at the first shipping over) and are not able to pay any monie for their passage, should be transported free by your Lords without any payment as far as Carolina." This was to be repaid finally by years of service to the company in Carolina.
The second proposal was an inducement to be carried out only after the first transport had safely arrived in Carolina, "for what I am now going to say could not possibly be ventured sooner. There should be published by us and in our names, a short plain description of the good scituation and Conveniences of the Country, with the advantageous Conditions granted to us by the proprietors, there should also cir-
48 L. C. Archdale MSS. 1694-1706, 122.
Page 19: cumstancially be sett forth the great eveready proffetts that might be Expected from there, and subjoyned thereunto Expecially this clause, that a Poor Man hath only need to provide himself to come to London and then to pay nothing for his transport thence to Carolina whereby nothing which might recomend and make this country should be past by or omitted. Such printed and published description to be authorized by a short preffase by the Lords Proprietors, would then by good friends, left behind be everywhere made known and there being now to God no doubt but that in these hard times in Germany. . .," (49) colonization would be quickened.
In 1706 Kocherthal was not so particular as to require that he be settled in America first. He obliged the proprietors with his Aussfuhrlich und umstandlicher Bericht von der beruhmten Landschafft Carolina. . . .The Queen was substituted for the Lords Proprietors as the kindly benefactor and veiled promises were made. The fulfillment of the Thuringian suggestion is apparent. What is not so evident, is Kocherthal's remuneration. Kocherthal never even visited Carolina, much less settled there. On his arrival in England in 1708, he appealed to the Queen for aid in accordance with his pamphlet's hints. It would seem that the author was sincere in writing of the Queen's help, which was anticipated, as quoted above. Kocherthal was well received by the English government but was sent to New York. This will be related below.
Similar advertising concerning Pennsylvania was also producing air castles for disheartened Germans. William Penn, who later founded Pennsylvania, made several visits to the Rhine country, one in 1677. (50) Penn discussed religious matters with many Lutherans and Calvinists of the Rhine Valley. The
49 Ibid., 60 et. seq.
50 Samuel M. Janney, The Life of William Penn (Philadelphia, 1852), 117 et. seq., recounts Penn's journey in that year and especially his friendship with Princess Elizabeth of the Palatinate.
Page 20: royal charter for Pennsylvania was granted in 1681. Shortly thereafter appeared in London a brief description of the new province: Some account of the Province of Pennsylvania in America. (51) Penn offered to sell one hundred acres of land for two English pounds and a low rental. He combined humanitarianism with business, for he advertised popular government, universal suffrage, and equal rights to all regardless of race or religious belief. Murder and treason were the only capital crimes; and reformation, not retaliation, was the object of punishment for other offenses. This book appeared in translation in Amsterdam the same year and its distribution in the upper Rhine country probably affected favorably the movement of Germans to Pennsylvania. (52)
Pennsylvania was the best advertised province and it was mainly due to the liberal use of printer's ink. No professional promoter or land speculator of the present day could have devised any scheme which would have proved a greater success than the means taken by William Penn and his counsellor, Benjamin Furley, to advertise his province. (53) Various books were published for German consumption for over twenty years previous to the emigration of 1709. (54) Among them, Pastorius' Umstandige geographische Beschreibung (detailed geographical description) of 1700 and Daniel Falckner's Curieuse Nachricht von Pennsylvania (curious news from Penn-
51 Julius F. Sachse, The German Pietists of Provincial Pennsylvania 1694-1708 (Philadelphia, 1895), 440; E. E. Proper, Colonial Immigration Laws (Col. U. Studies in History, Economics and Public Law, 1900, XII, no. 2), 46.
52 Albert B. Faust, The German Element in the United States (New ed., N.Y., 1972) I, 32 et. seq; H.L. Osgood, English Colonies in the Eighteenth Century (New York, 1924), II 491; Sachse, op. cit., 443 et. seq.
53 J. F. Sachse, Curieuse Nachricht von Pennsylvania (of 1702), (Phila., private ed., 1905), 8. Sachse calls it "The book that stimulated the Great German Emigration to Pennsylvania in the early years of the eighteenth century." Also see Sachse's account of literature used to induce German Emigration, Pa. Ger. Soc. Proc., VII, 175-198.
54 See Sachse's list of some fifty reprints of title-pages, Pa. Ger. Soc. Proc., VII, 201-256; Das verlangte nicht erlangte Canaan, 95.
Page 21: Portrait of William Penn. Courtesy of Pennsylvania-German Society.
Page 22: sylvania) of 1702 were combined into a single work in 1704 by the Frankfort company, for whom Falckner became attorney along with Benjamin Furley. (55)
One writer tells us that English agents were sent throughout the Palatinate to induce immigration, much in the same way as did our western railroad companies of a later date. These companies, having received large bounties in land from the government, sent agents throughout Europe to influence emigration so that their land grants might be settled and revenue-producing. (56) These early land agents, "Neulander," (57) or whatever they may be called, must have used to full advantage the reputation Penn and his colony had acquired in the Rhineland. (58) Simmendinger, quoted above, gave his expected destination as Pennsylvania. Luttrell reported foreign news on April 28th and May 12, 1709, of Palatines coming to England bound for Pennsylvania. (59) Penn's advertising was productive of good results at last.
Before the kind of help extended to the emigrants and the means employed by the British government can be understood, it is necessary that the position of England as the protector of the Protestant cause in Europe be understood. William of Orange with his wife Mary had taken the English throne from his father-in-law, James II, in 1688 to secure intervention by England and support for the Protestant cause on the continent against the encroachments of Catholic France. (60) As Louis XIV aged, he grew more intolerant. Counsels of moderation even by the influential Madame de Maintenon were unavailing. In 1685 the Edict of Nantes, granting religious toleration to
55 Sachse, Falckner's Nachricht, 23-28.
56 John M. Brown, Brief Sketch of the First Settlement of the County of Schoharie by the Germans (Schoharie, 1823), 5.
57 Faust, op. cit., I, 61.
58 Kapp calls them "Speculators," and says they associated themselves with the Quakers. Die Deutschen, I, 20.
59 Luttrell, op. cit., VI, 434, 440.
60 G. N. Clark, The Later Stuarts 1660-1714 (Oxford, 1934), 143.
Page 23: French Protestants, was revoked and persecution followed. (61) Many Huguenots, as the French Protestants were called, fled to England, Germany and the New World. (62) When William declared war on France in 1689, he published a "Proclamation for the encouraging French Protestants to transport themselves into this Kingdom," promising that they would not only have his royal protection but that he would also "so aid and assist them in their several trades and ways of livelihood, as that their being in this realm might be comfortable an easy to them." (63)
Queen Anne on her accession in 1702 continued, under the guidance of the Marlboroughs and their relatives, those policies on which was predicated her right to the throne. (64) The Second Hundred Years' War entered its second phase, the War of the Spanish Succession. In diplomatic discussions the English sought to secure religious and civil rights for the Protestants on the continent. They even considered proposing in the negotiations for peace at Geertruidenberg in 1708 that the change in a ruler's religion should not "influence the worship or revenues of his subject (wch is the most reasonable thing in the most), most of the evill effects proceeding from such a change of religion will be avoyded." (65) In other ways help was extended to foreign Protestants, such as those of Bergen and Courland, for example. At their petition collections were taken up in England under government auspices for
61 A. J. Grant, "The Governement of Louis XIV," in Camb. Mod. Hist., V, 24; Viscount St. Cyres, "The Gallican Church," ibid., V, 89.
62 J. S. Burn, History of the French, Walloon, Dutch and other Foreign Refugees Settled in England from the Reign of Henry VIII to the Revocation of the Edict of Nantes (London, 1746), 18. The number of names of French origin among the Palatine emigrants (See Shipping Lists in Appendix) suggests that many were French refugees fleeing a second time.
63 Paul de Rapin-Thoyras, History of England 1661-1725, trans. and continued by H. Tindal (London, 1744), XVI, 347.
64 Clark, op. cit., 212.
65 B. M., Add. MSS 28055, 425; P. R. O., S. P. 84/233, 38.
Page 24: funds for building of churches. (66) When on June 12, 1709, a French Protestant petitioned Queen Anne in behalf of "a million persecuted protestants," she assured her petitioner, "she had already given her ministers abroad instructions concerning the same and will doe for them what else lies in her power." (67) There are other indications of a similar nature, which show that the Protestants looked to the English Queen to take care of their interests. (68)
At this time Queen Anne was especially susceptible to Protestant appeals. Her consort, Prince George of Denmark, died on October 28, 1708, "to the unspeakable grief of the Queen." (69) Prince George was of German stock, (70) a Lutheran, and had brought many of his countrymen and co-religionists to London. The Royal Chapel in St. James Palace (Lutheran), established in 1700, owed its existence to him. (71) The funeral sermon which the Reverend John Tribbeko preached in the Royal Chapel on November 21st emphasized the Prince's interest in the Protestant cause. (72) It probably softened the Queen's grief to act as the gracious benefactress of the oppressed co-religionists of her departed husband. (73) At any rate she took a great deal of interest in relieving the Palatines in 1709.
A more important question is how far the English Ministry was aware of the advertising activities and how far it con-
66 P. R. O., S. P. 44/108, 25 (1708-1709).
67 Luttrell, op. cit., VI, 452.
68 Townsend MSS. (Hist. MSS. Com. 11th Report, Appendix), IV, 52.
69 B. M. Add. MSS 15866, 135; Add. Mss. 6309, 27; Egmont MSS. (Hist. MSS. Com. 7th Report, Appendix), II, 232; Agnes Strickland, Lives of the Queens of England (Boston, 1859), XII, 189.
70 L. Katscher, "German Life in London," in Nineteenth Century (May, 1887), XXI, 728.
71 Ibid., 738.
72 John Tribbeko, A Funeral Sermon on the Death of H. R. H. Prince George of Denmark (London, 1709), 27.
73 C. B. Todd, "Robert Hunter and the Settlement of the Palatines," in National Magazine (February, 1893), XVII, 292.
Page 25: Prince George of Denmark, royal consort of Queen Anne. Courtesy of Pennsylvania-German Society.
Page 26: tenanced them. The English policies were predicated on the postulates of mercantilism accepted by seventeenth century Europe. (74) These mercantilist doctrines attached a high value to a dense population, as an element of national strength. It was even argued that colonies would weaken the parent country by lessening the population. (75) In this view of migration, England would benefit by, and the Rhine countries would lose, and perhaps oppose, the movement of peoples. It was said to be "a Fundamental Maxim in Sound Politicks, that the Greatness, Wealth, and Strength of a Country, consist in the Number of its Inhabitants." (76) The preamble of an English law of 1709 observed that "the increase of people is a means of advancing the wealth and strength of a nation." (77) The States General of Holland echoed "that the Grandeur and Prosperity of a Country does in general consist in a Multitude of Inhabitants." (78) The Monthly Mercury, a contemporary English publication, discussing Holland's new law, remarked that "The States [were] sensible of the Truth of the Maxim that the number of Inhabitants is the Strength of a nation. . . " (79)
In pursuance of such aims, the English Parliament was bombarded with propaganda favorable to the naturalization of foreign Protestants. Under the heading "Some weighty considerations for Parliament," Archdale, the Carolina proprietor referred to before, wrote that 2,000 white people in Carolina were worth 100,000 at home. He argued that this
74 Clark, op. cit., 43; E. F. Heckscher, Mercantilism (London, 1935), II, 159.
75 Proper, Op. cit., 74.
76 [Francis Hare], The Reception of the Palatines Vindicated in a Fifth Letter to a Tory Member (London, 1711), 4, 37 et. seq. Hare was chaplain to the Duke of Marlborough.
77 (7) Anne, c. 5, Statutes of the Realm, IX, 63.
78 The State of the Palatines, 6; Eccles. Rec., III, 1775 and 1830.
79 Monthly Mercury (London, July, 1709), XX, 275; Josiah Child, A New Discourse on Trade, (1693 ed.), 154; Edgar S. Furniss, The Labourer in a System of Nationalism (Boston, 1920), 33.
Page 27: was due to their use of English goods and the products they exchanged so favorably for England. (80) He went on, "the body of Europe is under a general fermentation. . . which will more and more persecute an uneasy body of Protestants. . . [who] opprest with taxes, drained of their wealth and lyeing in the jealous sight of popery, are growne so uneasy, as to be willing to transplant themselves under the English Government." A petition from a Pennsylvania German asked for a naturalization act for German Protestants, who although inclined to emigrate were under great difficulties from lack of it. (81)
William Penn was the author of a general naturalization bill for the colonies. In urging its approval to a member of the House of Lords, he pointed out "the interest of England to improve and thicken her colonys with people not her own." (82) But early in January, 1709, Penn wrote to James Logan in Pennsylvania, "Tho' we have here a bill for Naturalization in the House, and I think I never writ so correctly, as I did to some members of Parliament, as well and discoursed them on that subject, . . . it moves but slowly. . . " (83)
Finally, giving way to the pressure, Parliament moved to encourage immigration and on February 5th, leave was given in the House of Commons to bring in a bill for naturalizing foreign Protestants. On the 28th the bill passed its first test vote on a motion to continue the old provision of the law, which lost 101 to 198. The bill was passed on March 7th by a vote of 203 to 77, but over the protests and opposition of the City of London, whose authorities wanted a clause inserted protecting their own rights to the duties paid by aliens. (84) On the 15th the bill was agreed to by the Lords 65 to 20. Royal
80 L. C., Archdale MSS., 1694-1706, 151.
81 Ibid., 70; On naturalization, see A. H. Carpenter, "Naturalization in England and the American Colonies," in Amer. Hist. Review, IX, 288-303.
82 Huntington Library, H. S. MSS. 22285; hereafter cited as H. L.
83 Penn-Logan Corres. (Memoirs of Historical Society of Pa., X), II, 323.
84 Luttrell, op. cit., VI, 404, 408, 415, 417.
Page 28: assent made it a law on March 23rd. (85) This was the first general naturalization law in England. It provided that the naturalized had to take the oath of allegiance, and partake of the sacrament according to the Anglican ritual before witnesses, who signed a certificate to that effect. In addition, all the children of naturalized parents were to be considered natural born subjects. (86) The greatest benefit secured by the act was the right to purchase and hold land, which might be transmitted to one's children. Those naturalized were also permitted to take part in trade and commerce, usually forbidden to foreigners. (87)
Palatine or German immigrants were not particularly mentioned, it appears. But Macpherson states, "This law was said to have been made with a particular view to the Protestant Palatines brought this year into England." (88) Certain it is that by the time the act was passed, the first wave of the emigration was already well on its way down the Rhine. (89) Still the news of the bill's consideration by the English Parliament may have reached prospective immigrants. That this act was a preparation for their coming, or even an added attraction for the immigration itself is highly probable. It would seem then, that the parties who urged and were successful in securing the passage of the naturalization law, were intimately connected with colonial projects in America. Men, such as Archdale and Penn, stimulated through agents and
85 C. J., XVI, 93, 108, 113, 123, 131, et. seq.; Eccles. Rec., III, 1724, 1832; Paul Chamberlen, History of the . . . Reign of Queen Anne (London, 1738), 312.
86 (7) Anne, c. 5, Statutes of the Realm, IX, 63.
87 L. C. Archdale MSS. 1694-1706, 70.
88 David Macpherson, Annals of Commerce (London, 1805), III, 6.
89 The first contingent of the Palatines arrived in London about May 3rd (B.T. Jour. 1708-1714, 26). They were over six weeks, a few weeks at least, at Rotterdam awaiting transportation and the time needed to cross the Channel, in addition to the time spent on the way to Rotterdam, would certainly amount to two months. The Kocherthal party in 1708 needed two months to travel from Frankfurt to London. Eccles. Rec., III, 1729.
Page 29: advertising a movement of people, who assured themselves that the British government had engaged to provide for them.
On the other hand the British authorities do not seem to have prepared for such a large immigration. In fact, the records of the Board of Trade and Privy Council may be searched in vain for evidence that the Palatine immigration was planned or at least expected and prepared for, other than by the general naturalization act just referred to. But this much is clear, the English government under Anne was embarking upon a mercantilist policy of colonial development, in which its population both at home and in the colonies was to be enlarged by stimulating and even subsidizing immigration from foreign shores.
Precedents existed for government-controlled immigration to English dominions. In 1679, Charles II sent two shiploads of French Huguenots to South Carolina, in order to introduce the cultivation of grapes, olives and the silkworm. (90) In 1694 Baron de Luttichaw petitioned for permission to import 200 Protestant families, some 1,000 persons, from the Germanies to his land in Ireland. (91) In 1697, King William offered a grant of 500 pounds to some Jamaica merchants to transplant men to Jamaica. (92) In 1706, Governor Dudley of Massachusetts Bay and New Hampshire, proposed that a colony of Scots be settled in Nova Scotia. (93) In the same year, Colonel Parke, governor of the Leeward Islands asked for "10,000 Scotch with otemeal enough to keep them for 3 or 4 months" to lead against [French] Martinique. He proposed to settle them there, if successful. (94) But reception of the Huguenots in England in Elizabeth's reign seemed to be the most applicable precedent, and it was strongly cited for that
90 Proper, op. cit., 81.
91 Cal. Treas. Papers 1557-1696, 396.
92 C. C. 1696-1697, 389.
93 C. C. 1706-1708, 31, 234, 439.
94 Ibid., 356, 358.
Page 30: purpose. (95) With the ambitious design of James II to unite all the colonies under one government, the resources of Parliament and the Crown were used to foster immigration.
In the reign of Queen Anne this idea took practical shape. Considerable sums of money were expended to assist Protestant refugees in making their way to England and the English colonies. For example, early in 1706 Secretary of State Hedges informed Governor Granville of Barbados concerning one Francisco Pavia and his family from Cadiz, whom "H.M. has not only bestowed her royal bounty upon. . . to transport them thither, but also recommended them to you, that you will give them all fitting countenance and assistance." (96) In the same year the Board of Trade at the behest of Secretary of State Hedges considered a proposal by Francois Louis Michel and George Ritter to settle some "4 or 500 Swiss Protestants. . . .on some uninhabited lands in Pennsylvania or on the frontier of Virginia." The last stipulation called for transportation with their effects from Rotterdam at Her Majesty's expense. The Board of Trade approved the proposal, and made practical suggestions for carrying it out. Indeed, the Board did not even find fault with the suggestion that the government should pay the cost of transportation, which it estimated would be eight pounds per head. (97) This proposal was carried out under private auspices with a handsome subsidy. These efforts were due largely to political and commercial motives, and partly to the genuine interest which England took in championing the Protestant cause in Europe. (98)
Still such a program of colonial development (99) had to be
95 [Hare], op. cit., 4; "Brief History," in Eccles, Rec., III, 1776.
96 C. C. 1706-1708, 14.
97 Ibid., 62, 79.
98 An evidence of this program was the negotiation with Penn for the purchase of his government. By the summer of 1712, the terms of the surrender had been agreed upon, 12,000 pounds, payable in four years, with certain stipulations. Janney, op. cit., 524.
Page 31: pursued with caution to avoid diplomatic intervention. Not all governments were ready to rid themselves of an undesirable religious sect by arranging deportation to British America as the Swiss canton of Bern did in 1710. (100) Indeed, as a rule, princes were not disposed to permit their subjects to be enticed from their obligations to them. (101) For this reason open invitations apparently were not issued. It can be concluded that the large German emigration of the second decade of the eighteenth century was due in a general way to these causes: (1) war devastation, (2) heavy taxation, (3) an extraordinarily severe winter, (4) religious quarrels, but not persecutions, (5) land hunger on the part of the elderly and desire for adventure on the part of the young, (6) liberal advertising by colonial proprietors, and finally (7) the benevolent and active cooperation of the British government. (102) The background and causes of the Palatine emigration have been described, but the manner in which the British government participated in the actual movement has still to be pointed out. In particular, how did the emigration gather momentum? This will be discussed in Chapter III. Chapter II will describe the small 1708 immigration, which blazed the trail.
100 Indeed the Swiss authorities went so far as to ask the good offices of the British to prevent Dutch interference with the compulsory transportation of the Anabaptists through Holland. Letter from British Envoy Abraham Stanyan to Lord Townshend, April 5, 1710. Maggs Bros. Cat., No. 522.
101 Todd and Goebel, op. cit., 13. It appears probable that the emigrations under discussion caused the Elector Palatine to treat his subjects better, as the Duchess of Orleans wrote to her half-sister Louisa, Raugravine in the Palatinate, so that "When those who have gone to Pennsylvania hear about it they will quickly return." Letters to Madam (London, 1924), II, 25.
102 Professor E. B. Greene is correct in this general conclusion as to the causes of this emigration. Provincial America 1690-1740 (New York, 1905), 230.
|
<urn:uuid:ddd3c145-ef47-488e-b53f-cab0cd2003b8>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813712.73/warc/CC-MAIN-20180221182824-20180221202824-00610.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9566963315010071,
"score": 3.4375,
"token_count": 12821,
"url": "http://threerivershms.com/knittlech1.htm"
}
|
We are indebted to the National Eating Disorder Association (NEDA) for the following eating disorder terms and definitions. We supply them for purposes of helping parents, families, coaches, educators and providers better understand the nature and symptoms of eating disorders in children and assist in securing eating disorder help for their loved ones.
Alternative Therapy:
In the context of treatment for eating disorders, a treatment that does not use drugs or bring unconscious mental material into full consciousness. For example, yoga, guided imagery, expressive therapy, and massage therapy are considered alternative therapies.
Amenorrhea:
The absence of at least three consecutive menstrual cycles.
Ana:
Slang for anorexia or anorexic.
ANAD (National Association of Anorexia Nervosa and Associated Disorders):
A nonprofit corporation that seeks to alleviate the problems of eating disorders, especially anorexia nervosa and bulimia nervosa.
Anorexia Nervosa:
A disorder in which an individual refuses to maintain minimally normal body weight, intensely fears gaining weight, and exhibits a significant disturbance in his/her perception of the shape or size of his/her body.
Anorexia Athletica:
The use of excessive exercise to lose weight. [Ed. Note: This is not an officially recognized diagnosis under the DSM-V. Excessive exercise is a classic symptom of anorexia nervosa.]
Anticonvulsants:
Used to prevent or treat convulsions.
Antiemetics:
Used to prevent or treat nausea and vomiting.
Anxiety:
A persistent feeling of dread, apprehension, and impending disaster. There are several types of anxiety disorders, including panic disorder, agoraphobia, obsessive-compulsive disorder, social and specific phobias, and post-traumatic stress disorder (see also Mood Disorders).
Feeding disorder of infancy or early childhood has been renamed avoidant/restrictive food intake disorder (ARFID).
Arrhythmia:
An alteration in the normal rhythm of the heartbeat.
Art Therapy:
A form of expressive therapy that uses visual art to encourage a patient's growth of self-awareness and self-esteem to make attitudinal and behavioral changes.
Atypical Antipsychotics:
A newer group of medications used to treat psychiatric conditions, e.g. olanzapine (brand name Zyprexa). These drugs may have fewer side effects than older classes of drugs used to treat the same psychiatric conditions.
B/P:
An abbreviation for binge eating and purging in the context of bulimic behavior.
Behavior Therapy (BT):
A type of psychotherapy that uses principles of learning to increase the frequency of desired behaviors and/or decrease the frequency of problem behaviors. When used to treat an eating disorder, the focus is on modifying behavioral abnormalities of the disorder by teaching relaxation techniques and coping strategies that affected individuals can use instead of restricting, binge eating and/or purging. Subtypes of BT include dialectical behavior therapy (DBT), exposure and response prevention (ERP), and hypno-behavioral therapy.
Binge Eating (also Bingeing):
Consuming, within a discrete period of time, an amount of food that is considered much larger than the amount most individuals would eat under similar circumstances.
Beneficiary:
The recipient of benefits from an insurance policy.
Biofeedback:
A technique that measures bodily functions, like breathing, heart rate, blood pressure, skin temperature, and muscle tension. Biofeedback is used to teach people how to alter bodily functions through relaxation or imagery. Typically, a practitioner describes stressful situations and guides a person through relaxation techniques, and the person can see how their heart rate and blood pressure change in response to being stressed or relaxed.
Body Dysmorphic Disorder or Dysmorphophobia:
A mental condition defined in the DSM-V in which the patient is preoccupied with a real or perceived defect in his/her appearance (see DSM-V).
Body Image:
The subjective opinion about one's physical appearance based on self-perception of body size and shape and the reactions of others.
Body Mass Index (BMI):
A formula used to calculate the ratio of a person’s weight to height. BMI is expressed as a number that is used to determine whether an individual’s weight is within normal ranges for age and sex on a standardized BMI chart. The U.S. Centers for Disease Control and Prevention Web site offers BMI calculators and standardized BMI charts.
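The ratio described above is simple arithmetic: weight in kilograms divided by the square of height in meters. A minimal sketch in Python follows; the function names and the fixed adult cut-points are illustrative only (the CDC's standardized age- and sex-specific percentile charts, not fixed ranges, apply to children and adolescents):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def adult_bmi_category(value: float) -> str:
    """Illustrative adult cut-points; not valid for children or adolescents."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: 70 kg at 1.75 m -> 70 / 1.75**2, about 22.9
print(round(bmi(70.0, 1.75), 1))
```

For example, a 70 kg adult who is 1.75 m tall has a BMI of about 22.9, which falls in the normal range on the standard adult chart.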
Bulimia Nervosa:
A disorder defined in the DSM-V in which a patient binges on food an average of twice weekly over a three-month period, followed by compensatory behavior aimed at preventing weight gain. This behavior may include excessive exercise, vomiting, or the misuse of laxatives, diuretics, other medications, and enemas.
Bulimarexia:
A term used incorrectly to describe individuals who engage alternately in bulimic behavior and anorexic behavior. The correct diagnosis would be anorexia nervosa, binge-eating/purging subtype. "Bulimic behavior" (e.g. purging) is not a diagnosis but rather a symptom, and one that can occur with anorexia as well as bulimia.
Case Management:
An approach to patient care in which a case manager working for an insurance company mobilizes people to organize appropriate services and supports for a patient's treatment. A case manager coordinates mental health, social work, educational, health, vocational, transportation, advocacy, respite care, and recreational services, as needed. The case manager ensures that the changing needs of the patient and family members supporting that patient are met.
Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA):
A federal act in 1985 that included provisions to protect health insurance benefits coverage for workers and their families who lose their jobs. The landmark Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) health benefit provisions became law in 1986. The law amends the Employee Retirement Income Security Act (ERISA), the Internal Revenue Code, and the Public Health Service Act to provide continuation of employer-sponsored group health coverage that otherwise might be terminated. The U.S. Centers for Medicare & Medicaid Services has advisory jurisdiction for the COBRA law as it applies to state and local government (public sector) employers and their group health plans.
Cognitive Therapy (CT):
A type of psychotherapeutic treatment that attempts to change a patient’s feelings and behaviors by changing the way the patient thinks about or perceives his/her significant life experiences. Subtypes include cognitive analytic therapy and cognitive orientation therapy.
Cognitive Analytic Therapy (CAT):
A type of cognitive therapy that focuses its attention on discovering how a patient’s problems have evolved and how the procedures the patient has devised to cope with them may be ineffective or even harmful. CAT is designed to enable people to gain an understanding of how the difficulties they experience may be made worse by their habitual coping mechanisms. Problems are understood in the light of a person’s personal history and life experiences. The focus is on recognizing how these coping procedures originated and how they can be adapted.
Cognitive Behavior Therapy (CBT):
A treatment that involves three overlapping phases when used to treat an eating disorder. For example, with bulimia, the first phase focuses on helping people to resist the urge to binge eat and purge by educating them about the dangers of their behavior. The second phase introduces procedures to reduce dietary restraint and increase the regularity of eating. The last phase involves teaching people relapse-prevention strategies to help them prepare for possible setbacks. A course of individual CBT for bulimia nervosa usually involves 16 to 20 hour-long sessions over a period of 4 to 5 months. It is offered on an individual, group, or self-managed basis. The goals of CBT are designed to interrupt the proposed bulimic cycle that is perpetuated by low self-esteem, extreme concerns about shape and weight, and extreme means of weight control.
Cognitive Orientation Therapy (COT):
A type of cognitive therapy that uses a systematic procedure to understand the meaning of a patient’s behavior by exploring certain themes such as aggression and avoidance. The procedure for modifying behavior then focuses on systematically changing the patient’s beliefs related to the themes and not directly to eating behavior.
Comorbidity:
Multiple physical and/or mental conditions existing in a person at the same time (see Dual Diagnosis).
Crisis Residential Treatment Services:
Short-term, round-the-clock help provided in a non-hospital setting during a crisis. The purposes of this care are to avoid inpatient hospitalization, help stabilize the individual in crisis, and determine the next appropriate step.
Cure:
The treated condition or disorder is permanently gone, never to return in the individual who received treatment. Not to be confused with "remission" (see Remission).
Dental Caries:
Also known as tooth decay. The teeth of people with bulimia who use vomiting as a purging method may be especially vulnerable to developing cavities because of repeated exposure to the high acid content of vomit.
Depression (also called Major Depressive Disorder):
A condition that is characterized by one or more major depressive episodes consisting of two or more weeks during which a person experiences a depressed mood or loss of interest or pleasure in nearly all activities. It is one of the mood disorders listed in the DSM-V (see Mood Disorders).
Diabetic Omission of Insulin:
A non-purging method of compensating for excess calorie intake that may be used by a person with diabetes and an eating disorder.
Dialectical Behavior Therapy (DBT):
A type of behavioral therapy that views emotional dysregulation as the core problem in bulimia nervosa. It involves teaching people with bulimia nervosa new skills to regulate negative emotions and replace dysfunctional behavior. A typical course of treatment is 20 group sessions lasting 2 hours once a week (see Behavioral Therapy).
Disordered Eating:
Term used to describe any atypical eating behavior.
Drunkorexia:
Behaviors that include any or all of the following: replacing food consumption with excessive alcohol consumption, or consuming food along with sufficient amounts of alcohol to induce vomiting as a method of purging and numbing feelings. [Ed. Note: this is not a recognized medical term, but rather one popularized in the lay media.]
DSM-V:
The fifth (and most current as of 2014) edition of the Diagnostic and Statistical Manual of Mental Disorders, published by the American Psychiatric Association (APA). This manual lists mental diseases, conditions, and disorders, along with the criteria established by the APA to diagnose them. Several newly created eating disorder diagnoses are listed in this edition, including Avoidant/Restrictive Food Intake Disorder (see ARFID).
Dual Diagnosis:
Two mental health disorders in a patient at the same time, as diagnosed by a clinician. For example, a patient may be given a diagnosis of both bulimia nervosa and obsessive-compulsive disorder, or anorexia and major depressive disorder.
Eating Disorders Anonymous (EDA):
A fellowship of individuals who share their experiences with each other to try to solve common problems and help each other recover from their eating disorders.
Eating Disorders Not Otherwise Specified (ED-NOS):
Any disorder of eating that does not meet the criteria for anorexia nervosa or bulimia nervosa. This diagnosis has been discontinued under the DSM-V.
Eating Disorder Inventory (EDI):
A self-report test that clinicians use with patients to diagnose specific eating disorders and determine the severity of a patient’s condition.
Eating Disorder Inventory-2 (EDI-2):
Second edition of the EDI.
Ed:
(slang) Eating disorder.
ED:
Acronym for eating disorder.
Electrolyte Imbalance:
A physical condition that occurs when ionized salt concentrations (commonly sodium and potassium) are at abnormal levels in the body. This condition can occur as a side effect of some bulimic compensatory behaviors, such as vomiting. Severe electrolyte imbalance can be fatal.
Emetics:
A class of drugs that induces vomiting. Emetics may be used as part of a bulimic compensatory behavior to induce vomiting after a binge eating episode.
Enema:
The injection of fluid into the rectum for the purpose of cleansing the bowel. Enemas may be used as a bulimic compensatory behavior to purge after a binge eating episode.
Equine Therapy:
A treatment program in which people interact with horses and become aware of their own emotional states through the reactions of the horse to their behavior.
Exercise Prescription:
An individualized exercise plan written by a doctor or rehabilitation specialist, such as a clinical exercise physiologist, physical therapist, or nurse. The plan takes into account an individual's current medical condition and provides advice on what type of exercise to perform, how hard to exercise, how long, and how many times per week.
Exposure and Response Prevention (ERP):
A type of behavior therapy strategy that is based on the theory that purging serves to decrease the anxiety associated with eating. Purging is therefore negatively reinforced via anxiety reduction. The goal of ERP is to modify the association between anxiety and purging by preventing purging following eating until the anxiety associated with eating subsides (see Behavioral Therapy).
Expressive Therapy:
A non-drug, non-psychotherapy form of treatment that uses the performing and/or visual arts to help people express their thoughts and emotions. Whether through dance, movement, art, drama, drawing, painting, etc., expressive therapy provides an opportunity for communication that might otherwise remain repressed.
Eye Movement Desensitization and Reprocessing (EMDR):
A non-drug and non-psychotherapy form of treatment in which a therapist waves his/her fingers back and forth in front of the patient’s eyes, and the patient tracks the movements while also focusing on a traumatic event. It is thought that the act of tracking while concentrating allows a different level of processing to occur in the brain so that the patient can review the event more calmly or more completely than before.
Family Therapy:
A form of psychotherapy that involves members of a nuclear or extended family. Some forms of family therapy are based on behavioral or psychodynamic principles; the most common form is based on family systems theory. This approach regards the family as the unit of treatment and emphasizes factors such as relationships and communication patterns. With eating disorders, the focus is on the eating disorder and how the disorder affects family relationships. Family therapy tends to be short-term, usually lasting only a few months, although it can last longer depending on the family circumstances.
Guided Imagery:
A technique in which the patient is directed by a person (either in person or by using a recording) to relax and imagine certain images and scenes to promote relaxation, changes in attitude or behavior, and physical healing. Guided imagery is sometimes called visualization. Sometimes music is played in the background during the imagery session (see Alternative Therapy).
Health Insurance Portability and Accountability Act (HIPAA):
A federal law enacted in 1996 with a number of provisions intended to ensure certain consumer health insurance protections for working Americans and their families and standards for electronic health information and protect privacy of individuals’ health information. HIPAA applies to three types of health insurance coverage: group health plans, individual health insurance, and comparable coverage through a high-risk pool. HIPAA may lower a person’s chance of losing existing coverage, ease the ability to switch health plans, and/or help a person buy coverage on his/her own if a person loses employer coverage and has no other coverage available.
Health Insurance Reform for Consumers:
Federal law has provided to consumers some valuable–though limited–protections when obtaining, changing, or continuing health insurance. Understanding these protections, as well as laws in the state in which one resides, can help with making more informed choices when work situations change or when changing health coverage or accessing care. Three important federal laws that can affect coverage and access to care for people with eating disorders are listed below:
• Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA)
• Health Insurance Portability and Accountability Act of 1996 (HIPAA)
• Mental Health Parity Act of 1996 (MHPA)
• Patient Protection and Affordable Care Act of 2010 (aka Obamacare)
Health Maintenance Organization (HMO):
A health plan that employs or contracts with primary care physicians to write referrals for all care that covered patients obtain from specialists in a network of healthcare providers with whom the HMO contracts. The patient’s choice of treatment providers is usually limited.
Hematemesis:
The vomiting of blood.
Hypno-behavioral Therapy:
A type of behavioral therapy that uses a combination of behavioral techniques, such as self-monitoring, to change maladaptive eating behaviors, and hypnotic techniques intended to reinforce and encourage behavior change.
Hypoglycemia:
An abnormally low concentration of glucose in the blood.
In-network Benefits:
Health insurance benefits that a beneficiary is entitled to receive from a designated group (network) of healthcare providers. The "network" is established by the health insurer, which contracts with certain providers to provide care for beneficiaries within that network.
Indemnity Plan:
A health insurance plan that reimburses the member or healthcare provider on a fee-for-service basis, usually at a rate lower than the actual charges for services rendered, and often after a deductible has been satisfied by the insured.
Independent Living Services:
Services for a person with a medical or mental health-related problem who is living on his/her own. Services include therapeutic group homes, supervised apartment living, monitoring the person's compliance with prescribed mental and medical treatment plans, and job placement.
Intake Screening:
An interview conducted by health service providers when a patient is admitted to a hospital or treatment program.
International Classification of Diseases (ICD-10):
The World Health Organization lists international standards used to diagnose and classify diseases. The listing is used by the healthcare system so clinicians can assign an ICD code to submit claims to insurers for reimbursement for services for treating various medical and mental health conditions in patients. The code is periodically updated to reflect changes in classifications of disease or to add new disorders.
Interpersonal Therapy (IPT):
Also called interpersonal psychotherapy, IPT is designed to help people identify and address their interpersonal problems, specifically those involving grief, interpersonal role conflicts, role transitions, and interpersonal deficits. In this therapy, no emphasis is placed directly on modifying eating habits. Instead, the expectation is that the therapy will enable people to change as their interpersonal functioning improves. IPT usually involves 16 to 20 hour-long, one-on-one treatment sessions over a period of 4 to 5 months.
Ketosis:
A condition characterized by an abnormally elevated concentration of ketones in the body tissues and fluids. It is a complication of diabetes, starvation, and alcoholism.
Level of Care:
The care setting and intensity of care that a patient is receiving (e.g. inpatient hospital, outpatient hospital, outpatient residential, intensive outpatient, residential). Health plans and insurance companies correlate their payment structures to the level of care being provided and also map a patient’s eligibility for a particular level of care to the patient’s medical/psychological status.
Major Depression:
(See Major Depressive Disorder)
Major Depressive Disorder:
A condition characterized by one or more major depressive episodes consisting of periods of two or more weeks during which a patient has either a depressed mood or loss of interest or pleasure in nearly all activities. (See Depression)
Mallory-Weiss Tears:
One or more slit-like tears in the mucosa at the lower end of the esophagus as a result of severe vomiting.
Mandometer Treatment:
Treatment program for eating disorders based on the idea that psychiatric symptoms of people with eating disorders emerge as a result of poor nutrition and are not a cause of the eating disorder. A Mandometer is a computer that measures food intake and is used to determine a course of therapy.
Mandates:
(See State Mandates)
Massage Therapy:
A generic term for any of a number of types of therapeutic touch in which the practitioner massages, applies pressure to, or manipulates muscles, certain points on the body, or other soft tissues to improve health and well-being. Massage therapy is thought to relieve anxiety and depression in patients with an eating disorder.
Maudsley Approach:
A family-centered treatment program with three distinct phases. The first phase, for a patient who is severely underweight, is to regain control of eating habits and break the cycle of starvation or binge eating and purging. The second phase begins once the patient's eating is under control, with the goal of returning independent eating to the patient. The goal of the third and final phase is to address the broader concerns of the patient's development.
Mealtime Support Therapy:
Treatment program developed to help patients with eating disorders eat healthfully and with less emotional upset.
Mental Health Parity Laws:
Federal and State laws that require health insurers to provide the same level of healthcare benefits for mental disorders and conditions as they do for medical disorders and conditions. For example, the federal Mental Health Parity Act of 1996 (MHPA) may prevent a group health plan from placing annual or lifetime dollar limits on mental health benefits that are lower, or less favorable, than annual or lifetime dollar limits for medical and surgical benefits offered under the plan.
Mia:
Slang for bulimia or bulimic.
Modified Cyclic Antidepressants:
A class of medications used to treat depression.
Monoamine Oxidase Inhibitors:
A class of medications used to treat depression.
Mood Disorders:
Mental disorders characterized by periods of depression, sometimes alternating with periods of elevated mood. People with mood disorders suffer from severe or prolonged mood states that disrupt daily functioning. Among the general mood disorders classified in the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) are major depressive disorder, bipolar disorder, and dysthymia (see Anxiety and Major Depressive Disorder).
Motivational Enhancement Therapy (MET):
A treatment based on a model of change, with focus on the stages of change. Stages of change represent constellations of intentions and behaviors through which individuals pass as they move from having a problem to doing something to resolve it. The stages of change move from "pre-contemplation," in which individuals show no intention of changing, to the "action" stage, in which they are actively engaged in overcoming their problem. Transition from one stage to the next is sequential, but not linear. The aim of MET is to help individuals move from earlier stages into the action stage using cognitive and emotional strategies.
Movement/Dance Therapy:
The psychotherapeutic use of movement as a process that furthers the emotional, cognitive, social, and physical integration of the individual, according to the American Dance Therapy Association.
Non-purging Compensatory Behavior:
Any of a number of behaviors engaged in by a person with bulimia nervosa to offset potential weight gain from excessive calorie intake during binge eating. Non-purging behavior can take the form of excessive exercise, misuse of insulin by people with diabetes, or long periods of fasting.
Nutrition Therapy:
Therapy that provides patients with information on the effects of their eating disorder. For example, therapy often includes, as appropriate, techniques to avoid binge eating, and advice about making meals and eating. The goals of nutrition therapy for individuals with anorexia and bulimia nervosa differ according to the disorder. With bulimia, for example, goals are to stabilize blood sugar levels, help individuals maintain a diet that provides them with enough nutrients, and help restore gastrointestinal health.
Obsessive-compulsive Disorder (OCD):
Mental disorder in which recurrent thoughts, impulses, or images cause inappropriate anxiety and distress, followed by acts that the sufferer feels compelled to perform to alleviate this anxiety. Diagnostic criteria can be found in the DSM-V.
Orthorexia:
An eating disorder in which a person obsesses about eating only "pure" and healthy food to such an extent that it interferes with the person's life. This disorder is not a diagnosis listed in the DSM-V.
Opioid Antagonists:
A type of drug therapy that interferes with the brain's opioid receptors and is sometimes used to treat eating disorders.
Osteoporosis:
A condition characterized by a decrease in bone mass with decreased density and enlargement of bone spaces, producing porosity and brittleness. This can sometimes be a complication of an eating disorder, including bulimia nervosa and anorexia nervosa.
Out-of-network Benefits:
Healthcare obtained by a beneficiary from providers (hospitals, clinicians, etc.) outside the network that the insurance company has assigned to that beneficiary. Benefits obtained outside the designated network are usually reimbursed at a lower rate. In other words, beneficiaries share more of the cost of care when obtaining it "out of network" unless the insurance company has given the beneficiary special written authorization to go out of network.
Parity:
(see Mental Health Parity Laws)
Partial Hospitalization (Intensive Outpatient):
For a patient with an eating disorder, partial hospitalization is a time-limited, structured program of medical and psychotherapy services provided through an outpatient hospital or community mental health center. The goal is to resolve or stabilize an acute episode of mental/behavioral illness.
Peptic Esophagitis:
Inflammation of the esophagus caused by reflux of stomach contents and acid.
Pharmacological Therapy:
Use of drugs for treatment of a mental or emotional disorder.
Pharmacotherapy:
Treatment of a disease or condition using clinician-prescribed drugs.
Phenethylamine Monoamine Reuptake Inhibitors:
A class of drugs used to treat depression.
Pre-existing Condition:
A health problem that existed or was treated before the effective date of one's health insurance policy.
Provider:
A healthcare facility (e.g., hospital, residential treatment center), doctor, nurse, therapist, social worker, or other professional who provides care to a patient.
Psychoanalysis:
An intensive, nondirective form of psychodynamic therapy in which the focus of treatment is exploration of a person's mind and habitual thought patterns. It is insight-oriented, meaning that the goal of treatment is for the patient to increase understanding of the sources of his/her inner conflicts and emotional problems. Scientific evidence and research have clearly shown psychoanalysis to be ineffective in treating eating disorders such as anorexia.
Psychodrama:
A method of psychotherapy in which patients enact the relevant events in their lives instead of simply talking about them.
Psychodynamic Therapy:
Psychodynamic theory views the human personality as developing from interactions between conscious and unconscious mental processes. The purpose of all forms of psychodynamic treatment is to bring unconscious mental material and processes into full consciousness so that the patient can gain more control over his/her life.
Psychodynamic Group Therapy:
Psychodynamic groups are based on the same principles as individual psychodynamic therapy and aim to help people with past difficulties, relationships, and trauma, as well as current problems. The groups are typically composed of eight members plus one or two therapists.
Psychotherapy:
The treatment of mental and emotional disorders through the use of psychological techniques (some of which are described below) designed to encourage communication of conflicts and insight into problems, with the goals being relief of symptoms, changes in behavior leading to improved social and vocational functioning, and personality growth.
Psychoeducation:
A treatment intended to teach people about their problem, how to treat it, and how to recognize signs of relapse so that they can get necessary treatment before their difficulty worsens or recurs. Family psychoeducation includes teaching coping strategies and problem-solving skills to families, friends, and/or caregivers to help them deal more effectively with the individual.
Psychopathological Rating Scale Self-Rating Scale for Affective Syndromes (CPRS-SA):
A test used to estimate the severity of depression, anxiety, and obsession in an individual.
Purging:
To evacuate the contents of the stomach or bowels by any of several means. In bulimia, purging is used to compensate for excessive food intake. Methods of purging include vomiting, enemas, and the misuse of laxatives or diuretics.
Relaxation Technique:
A technique involving tightly contracting and releasing muscles with the intent to release or reduce stress.
Remission:
A period in which the symptoms of a disease are absent. Remission differs from the concept of "cure" in that the disease can return. The term "cure" signifies that the treated condition or disorder is permanently gone, never to return in the individual who received treatment.
Residential Services:
Services delivered in a structured residence other than the hospital or a client's home.
Residential Treatment Center:
A 24-hour residential environment outside the home that includes 24-hour provision of, or access to, support personnel capable of meeting the client's needs.
Selective Serotonin Re-uptake Inhibitors (SSRI):
A class of antidepressants used to treat depression, anxiety disorders, and some personality disorders. These drugs are designed to elevate the level of serotonin, a neurotransmitter. A low level of serotonin is currently seen as one of several neurochemical symptoms of depression. Low levels of serotonin in turn can be caused by an anxiety disorder, because serotonin is needed to metabolize stress hormones. Serotonin is synthesized from tryptophan, an amino acid obtained from food, which is why someone with a restricting eating disorder will not benefit from SSRI therapy in the absence of adequate weight restoration.
Self-directedness:
A personality trait that comprises self-confidence, reliability, responsibility, resourcefulness, and goal-orientation.
Self-guided Cognitive Behavior Therapy:
A modified form of cognitive behavior therapy in which a treatment manual is provided for people to proceed with treatment on their own, or with support from a nonprofessional. Guided self-help usually implies that the support person may or may not have some professional training, but is usually not a specialist in eating disorders. The important characteristics of the self-help approach are the use of a highly structured and detailed manual-based CBT, with guidance as to the appropriateness of self-help, and advice on where to seek additional help.
Self-report Questionnaire:
An itemized written test in which a person rates his/her feelings toward each question; the test is designed to categorize the personality or behavior of the person.
Self Psychology:
A type of psychoanalysis that views anorexia and bulimia as specific cases of pathology of the self. According to this viewpoint, for example, people with bulimia nervosa cannot rely on human beings to fulfill their self-object needs (e.g., regulation of self-esteem, calming, soothing, vitalizing). Instead, they rely on food (its consumption or avoidance) to fulfill these needs. Self psychological therapy involves helping people with bulimia give up their pathological preference for food as a self-object and begin to rely on human beings as self-objects, beginning with their therapist.
State Mandates:
A proclamation, order, or law from a state legislature that issues specific instructions or regulations. Many states have issued mandates pertaining to coverage of mental health benefits and specific disorders the state requires insurers to cover.
Substance Abuse:
Use of a mood- or behavior-altering substance in a maladaptive pattern resulting in significant impairment or distress of the user.
Substance Use Disorders:
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) defines a substance use disorder as a maladaptive pattern of substance use leading to clinically significant impairment or distress, as manifested by one (or more) of the following, occurring within a 12-month period: (1) recurrent substance use resulting in a failure to fulfill major role obligations at work, school, or home; (2) recurrent substance use in situations in which it is physically hazardous; and (3) recurrent substance-related legal, social, and/or interpersonal problems.
Sub-threshold Eating Disorder:
Condition in which a person exhibits disordered eating but not to the extent that it fulfills all the criteria for diagnosis of an eating disorder.
Supportive Residential Services:
(see Residential Treatment Center)
Psychotherapy that focuses on the management and resolution of current difficulties and life decisions using the patient’s strengths and available resources.
A type of psychotherapy provided over the telephone by a trained professional.
A class of drugs used to treat depression.
Therapeutic Foster Care:
A foster care program in which youths who cannot live at home are placed in homes with foster parents who have been trained to provide a structured environment that supports the child's learning, social, and emotional skills.
(slang) Photographs, poems, or any other stimulus that influences a person to strive to lose weight.
An organization that provides health insurance benefits and reimburses for care for beneficiaries.
A multidisciplinary care plan for each beneficiary in active case management. It includes specific services to be delivered, the frequency of services, expected duration, community resources, all funding options, treatment goals, and assessment of the beneficiary environment. The plan is updated monthly and modified when appropriate.
A class of drugs used to treat depression.
Trigger:
A stimulus that causes an involuntary reflex behavior. A trigger may cause a recovering person with bulimia to engage in bulimic behavior again.
Thyroid Medication Abuse:
Excessive use or misuse of drugs used to treat thyroid conditions; a side effect of these drugs is weight loss.
Usual and Customary Rate (aka UCR):
An insurance term that indicates the amount the insurance company will reimburse for a particular service or procedure deemed "out of network". This amount is often less than the amount charged by the service provider. The patient is usually liable to the provider for the difference.
Programs that teach skills needed for self-sufficiency.
A system of physical postures, breathing techniques, and meditation practices to promote bodily or mental control and well-being.
3 Random Variables
A random variable is an uncertain numerical quantity whose value depends on the outcome of a random experiment. We can think of a random variable as a rule that assigns one and only one numerical value to each point of the sample space for a random experiment.
4 Example
If you play craps, the sum of the pips on the dice is the random variable. The random experiment is the rolling of the dice.
5 Random Process
Random process = tossing a fair coin three times. There are eight possible individual outcomes in this sample space. Since the coin is assumed to be fair, the eight outcomes can be assumed to be equally likely (that is, the probability assigned to each individual outcome is 1/8).
6 7.4.2 Rules of Probabilities
To any event A, we assign a number P(A) called the probability of the event A.
Assign a probability to each individual outcome, each being a number between 0 and 1, such that the sum of these individual probabilities is equal to 1.
The probability of any event is the sum of the probabilities of the outcomes that make up that event.
If the outcomes in the sample space are equally likely to occur, the probability of an event A is simply the proportion of outcomes in the sample space that make up the event A.
7 Discrete Random Variable
A discrete random variable can assume at most a finite or countably infinite number of distinct values.
Example: Number of eggs a chicken lays in a day.
Example: Number of bombs dropped on a city.
Example: Number of casualties in a battle.
8 Continuous Random Variable
A continuous random variable can assume any value in an interval or collection of intervals. It always has an infinite number of possible values.
Example: Gallons of milk from a cow over its life.
Example: Number of hours that CNN broadcast the Iraq war without interruption.
Example: Number of hours a battery will run a flashlight.
9 Give the values the random variable can take on:
X is the difference between the number of heads and number of tails obtained when a fair coin is tossed 3 times.
Y is the product of the pips for the roll of 2 fair dice.
R is the time in minutes that this class lasts.
10 A discrete random variable can assume at most a finite or countably infinite number of distinct values. A continuous random variable can assume any value in an interval or collection of intervals.
12 Probability Distribution of a Discrete RV
The probability distribution of a discrete random variable X is a table or rule that assigns a probability to each of the possible values of the discrete random variable X.
13 Example 7.12
Let X represent the number of people in an apartment. Assume the maximum in a single apartment is 7.
What must be the probability of 7 people in a household for this to be a legitimate discrete distribution?
Display this probability distribution graphically.
What is the probability that a randomly chosen household contains more than 5 people?
What is the probability that a randomly chosen household contains no more than 2 people?
What is the probability that a randomly selected household has more than 2 but at most 4 people?
15 Let's Do It! 7.20 Sum of Pips
Craps game = rolling 2 fair dice. Let X be the sum of the values on the two dice.
What are the 36 possible pairs of faces of the 2 dice?
Give the probability distribution function of X, then present the probability distribution function graphically.
X: 2 3 4 5 6 7 8 9 10 11 12
P(x):
16 Let's Do It! 7.20 Sum of Pips (continued)
Find the P( X > 7 ).
What is the probability of rolling a seven or an eleven on the next roll of the two dice?
What is the probability of rolling at least a three on the next roll of the two dice? (Use the complement rule.)
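The three questions above can be checked by brute-force enumeration of the 36 equally likely dice outcomes; a short Python sketch:

```python
from fractions import Fraction

# Build the probability distribution of X = sum of two fair dice by
# listing all 36 equally likely (die 1, die 2) pairs.
dist = {}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        s = d1 + d2
        dist[s] = dist.get(s, 0) + Fraction(1, 36)

p_gt_7 = sum(p for x, p in dist.items() if x > 7)   # P(X > 7)
p_7_or_11 = dist[7] + dist[11]                      # P(X = 7 or X = 11)
p_at_least_3 = 1 - dist[2]                          # complement rule: 1 - P(X = 2)
print(p_gt_7, p_7_or_11, p_at_least_3)  # 5/12 2/9 35/36
```

Using exact fractions avoids any floating-point noise in the tabled probabilities.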
17 The Mean of a Discrete Distribution
The mean of a probability distribution is also called the expected value of the distribution: μ = E(X) = Σ x·P(x).
18 The Variance and Standard Deviation of a Discrete Distribution
The variance of a discrete probability distribution is σ² = Σ (x − μ)²·P(x). The standard deviation is given by σ = √(Σ (x − μ)²·P(x)).
19 Good News!
The TI-83 knows how to do these calculations. You simply enter the values of the random variable in L1 and the probabilities in L2 and do the following command:
1-Var Stats L1,L2
20 Apartments Revisited
What is the expected value, the variance, and the standard deviation of the number of people per apartment?
21 Let’s Do It 7.22 Sum of Pips Revisited
Consider the game called craps in which two fair dice are rolled. Let X be the random variable corresponding to the sum of the two dice. Its probability distribution is given below:
Calculate the mean of X, the expected sum of the values on the two dice. Also calculate the standard deviation.
You provided a graph of this distribution in Let’s Do It! Is your expected value consistent with the idea of being the balancing point of the probability stick graph or histogram?
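For readers without a TI-83, the same 1-Var Stats computation can be sketched in Python for the sum-of-pips distribution:

```python
# Mean, variance, and standard deviation of the sum-of-pips distribution,
# the same computation 1-Var Stats L1,L2 performs on the TI-83.
xs = list(range(2, 13))                              # possible sums: 2..12
ps = [n / 36 for n in (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1)]  # their probabilities

mean = sum(x * p for x, p in zip(xs, ps))            # mu = sum of x * P(x)
var = sum((x - mean) ** 2 * p for x, p in zip(xs, ps))
sd = var ** 0.5
print(round(mean, 4), round(var, 4), round(sd, 4))   # 7.0 5.8333 2.4152
```

The mean of 7 sits exactly at the balancing point of the symmetric triangular stick graph.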
24 Combinations
“nCr” represents the number of ways of selecting r items (without replacement) from a set of n distinct items where order of selection is not important.
25 Bernoulli Variable
If a random variable has exactly two possible outcomes, success and failure, and the probability of success remains fixed if the experiment is repeated under identical conditions, then the RV is dichotomous or Bernoulli.
26 Let's Do It! 7.26 Probability of a Success
At a local community college there are 500 freshmen enrolled, 274 sophomores enrolled, 191 juniors enrolled, and 154 seniors enrolled. An enrolled student is to be selected at random. If a success is defined to be “senior”, what is the probability of a success? p = ________________________
A standard deck of cards contains 52 cards, 13 cards of each of 4 suits. The four suits are spades, hearts, diamonds, and clubs. Each suit consists of 4 face cards (jack, queen, king, ace) and 9 numbered cards (2 through 10). A card is drawn from a well-shuffled standard deck of cards. If success is defined to be getting a “face card”, what is the probability of a success? p = ________________________
A game consists of rolling two fair dice. If success is defined to be getting “doubles”, what is the probability of a success? p = ________________________
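Each single-trial probability here is just a proportion; a quick check in Python, using the slides' convention that the ace counts as a face card:

```python
from fractions import Fraction

# The three success probabilities asked for in Let's Do It! 7.26.
p_senior = Fraction(154, 500 + 274 + 191 + 154)   # seniors out of all enrolled students
p_face = Fraction(4 * 4, 52)    # 4 "face cards" per suit (jack, queen, king, ace)
p_doubles = Fraction(6, 36)     # (1,1), (2,2), ..., (6,6) out of 36 rolls
print(p_senior, p_face, p_doubles)  # 154/1119 4/13 1/6
```

Note that a conventional deck counts only jack, queen, and king as face cards; the 16/52 figure follows the slides' own definition.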
27 Binomial Distribution
A binomial random variable X is the total number of successes in n independent Bernoulli trials, where on each trial the probability of success is p. We say X is B(n,p).
Page 469
28 The Binomial Probability Distribution
P(x) = nCx · p^x · q^(n−x)
where p = the probability of success in a single trial
q = 1 – p (probability of failure)
n = number of independent trials
x = number of successes in the n trials
29 Let's Do It! 7.27 Jury Decision
In a jury trial there are 12 jurors. In order for a defendant to be convicted, at least 8 of the 12 jurors must vote guilty. Assume that the 12 jurors act independently (how one juror votes will not influence how any other juror votes). Also assume that for each juror the probability that they vote correctly is 0.85.
If the defendant is actually guilty, what is the probability that the jury will render a correct decision?
Identify the following: A trial = __________________________ n = number of independent trials = _______ p = probability of a success on each single trial = ________ x = number of successes in the n trials
31 Example
A wart remover states it works on 95% of warts. If a total of 10 subjects are selected, what is the probability that 9 of the subjects will have their warts removed?
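Both the jury problem and the wart-remover example are direct applications of the binomial formula; a short Python sketch (the printed values are computed from the formula, not taken from the slides):

```python
from math import comb

def binom_pmf(n, p, x):
    """P(X = x) for a binomial random variable X ~ B(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# LDI 7.27: each of 12 jurors votes correctly with p = 0.85;
# a correct decision needs at least 8 correct votes.
p_correct = sum(binom_pmf(12, 0.85, x) for x in range(8, 13))
print(round(p_correct, 4))   # 0.9761

# Slide 31: wart remover works on 95% of warts; P(exactly 9 of 10 removed)
p_nine = binom_pmf(10, 0.95, 9)
print(round(p_nine, 4))      # 0.3151
```

`math.comb` computes the nCr count from slide 24, so the whole formula is three factors per term.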
32 Continuous Random Variables
The probability distribution of a continuous variable X is a curve such that the area under the curve over an interval is equal to the probability that the random variable X is in the interval. The values of a continuous probability distribution must be at least 0 and the total area under the curve must be 1. The uniform and normal distributions we studied in chapter 6 were continuous.
33 Approximating a Discrete RV with a Continuous One
We can use the normal distribution to approximate the binomial when np ≥ 5 and nq ≥ 5. If X is B(n, p) and np ≥ 5 and nq ≥ 5, then X can be approximated by a normal distribution with mean np and standard deviation √(npq).
34 Example
A wart remover states it works on 95% of warts. If a total of 1000 subjects are selected, what is the probability that 900 of the subjects will have their warts removed?
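Note that 900 successes lies more than 7 standard deviations below the mean of 950, so P(X = 900) is essentially zero under either method. The sketch below instead compares the exact binomial tail with the continuity-corrected normal approximation at an illustrative cutoff of 940 successes (a choice made here for demonstration, not taken from the slides):

```python
from math import comb, erf, sqrt

n, p = 1000, 0.95    # X ~ B(1000, 0.95), as in the wart-remover example
q = 1 - p
assert n * p >= 5 and n * q >= 5   # rule-of-thumb check for using the approximation

# Exact binomial tail probability P(X >= 940)
exact = sum(comb(n, x) * p**x * q**(n - x) for x in range(940, n + 1))

# Normal approximation N(np, sqrt(npq)) with a continuity correction
mu, sigma = n * p, sqrt(n * p * q)
phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
approx = 1 - phi((939.5 - mu) / sigma)

print(round(exact, 3), round(approx, 3))   # the two agree to about two decimals
```

Python's exact integer `comb` makes the exact tail sum feasible even at n = 1000, which is a good way to see how close the normal curve gets.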
35 Let's Do It! 7.31 Applying for a Loan
Suppose the time to process a loan application follows a uniform distribution over the range of 10 to 20 days.
Sketch the probability distribution for X = time to process a loan application where X is U(10,20).
What is the mean or expected processing time?
Based on the distribution, what is the probability that a randomly selected loan application takes longer than two weeks to process?
Given that the processing time for a randomly selected loan application is at least 12 days, what is the probability that it will actually take longer than two weeks to process?
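Because the uniform density is flat, every probability in this exercise reduces to a ratio of interval lengths; a minimal Python sketch:

```python
# X ~ U(10, 20): the density is a constant 1/(20 - 10) over [10, 20],
# so probabilities are just proportions of the interval's length.
a, b = 10.0, 20.0

def p_between(lo, hi):
    """P(lo <= X <= hi) as an area under the flat density."""
    lo, hi = max(lo, a), min(hi, b)
    return max(hi - lo, 0.0) / (b - a)

mean = (a + b) / 2                             # expected processing time: 15 days
p_over_two_weeks = p_between(14, b)            # P(X > 14) = 6/10
p_cond = p_between(14, b) / p_between(12, b)   # P(X > 14 | X >= 12) = 6/8
print(mean, p_over_two_weeks, round(p_cond, 4))  # 15.0 0.6 0.75
```

The conditional probability shrinks the sample space to [12, 20], which is why the answer rises from 0.6 to 0.75.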
Posttraumatic stress disorder (PTSD) is a severe anxiety disorder that can develop after exposure to any event that results in psychological trauma. This event may involve the threat of death to oneself or to someone else, or to one's own or someone else's physical, sexual, or psychological integrity, overwhelming the individual's ability to cope. As an effect of psychological trauma, PTSD is less frequent and more enduring than the more commonly seen acute stress response. Diagnostic symptoms for PTSD include re-experiencing the original trauma(s) through flashbacks or nightmares, avoidance of stimuli associated with the trauma, and increased arousal, such as difficulty falling or staying asleep, anger, and hypervigilance. Formal diagnostic criteria (both DSM-IV-TR and ICD-10) require that the symptoms last more than one month and cause significant impairment in social, occupational, or other important areas of functioning.

Psychological trauma

PTSD is believed to be caused by experiencing any of a wide range of events which produce intense negative feelings of "fear, helplessness or horror" in the observer or participant. Sources of such feelings may include (but are not limited to): experiencing or witnessing childhood or adult physical, emotional, or sexual abuse; experiencing or witnessing physical assault, adult experiences of sexual assault, accidents, drug addiction, illnesses, or medical complications; employment in occupations exposed to war (such as soldiers) or disaster (such as emergency service workers); or getting a diagnosis of a life-threatening illness. Children or adults may develop PTSD symptoms by experiencing bullying or mobbing. Approximately 25% of children exposed to family violence can experience PTSD. Preliminary research suggests that child abuse may interact with mutations in a stress-related gene to increase the risk of PTSD in adults.
Multiple studies show that parental PTSD and other posttraumatic disturbances in parental psychological functioning can, despite a traumatized parent's best efforts, interfere with their response to their child as well as their child's response to trauma. Parents with violence-related PTSD may, for example, inadvertently expose their children to developmentally inappropriate violent media due to their need to manage their own emotional dysregulation. Clinical findings indicate that a failure to provide adequate treatment to children after they suffer a traumatic experience, depending on their vulnerability and the severity of the trauma, will ultimately lead to PTSD symptoms in adulthood.

DSM-5 proposed diagnostic criteria changes

In preparation for the May 2013 release of the DSM-5, the fifth version of the American Psychiatric Association's diagnostic manual, draft diagnostic criteria were released for public comment, followed by a two-year period of field testing. Proposed changes to the criteria (subject to ongoing review and research) include the following:
Criterion A (prior exposure to traumatic events) is more specifically stated, and evaluation of an individual's emotional response at the time (current criterion A2) is dropped.
Several items in Criterion B (intrusion symptoms) are rewritten to add or augment certain distinctions now considered important.
Special consideration is given to developmentally appropriate criteria for use with children and adolescents. This is especially evident in the restated Criterion B (intrusion symptoms). Development of age-specific criteria for diagnosis of PTSD is ongoing at this time.
Criterion C (avoidance and numbing) has been split into "C" and "D": Criterion C (new version) now focuses solely on avoidance of behaviors or physical or temporal reminders of the traumatic experience(s). What were formerly two symptoms are now three, due to slight changes in descriptions.
New Criterion D focuses on negative alterations in cognition and mood associated with the traumatic event(s) and contains two new symptoms, one expanded symptom, and four largely unchanged symptoms specified in the previous criteria.
Criterion E (formerly "D"), which focuses on increased arousal and reactivity, contains one modestly revised, one entirely new, and four unchanged symptoms.
Criterion F (formerly "E") still requires duration of symptoms to have been at least one month.
Criterion G (formerly "F") stipulates symptom impact ("disturbance") in the same way as before.
The "acute" vs "delayed" distinction is dropped; the "delayed" specifier is considered appropriate if clinical symptom onset is no sooner than 6 months after the traumatic event(s).
Minnesotans For Sustainability©
Sustainable Society: A society that balances the environment, other life forms, and human interactions over an indefinite time period.
Renewable Energy: What are the Limits?
It is commonly assumed that rich countries will be able to meet their energy demand from renewable sources. However the following evidence on existing and probable future efficiencies and costs indicates that it will not be possible to derive from renewable sources sufficient electricity or liquid fuels to sustain the present high per capita rates of consumption, let alone the higher rates that continued growth will require. There must be a transition to reliance on renewables, but a sustainable future cannot be achieved without significant reduction in current material "living standards" and in gross economic activity. However, advocates of "The Simpler Way" argue that a radically alternative society based on frugal lifestyles, zero economic growth and local economic self-sufficiency could defuse global problems and provide a high quality of life.
In the last three decades considerable concern has emerged regarding limits to the future availability of energy in the quantities required by industrial-affluent societies. More recently Campbell (1997) and others have argued that the energy source on which industrial societies are most dependent, petroleum, is more scarce than had previously been thought, and that supply will probably peak between 2005 and 2015. (Fleay, 1995, Ivanhoe, 1995, Gever, et al, 1991, Hall, Cleveland and Kaufman, 1986, Laherrère, 1995, Duncan, 1997, Bentley, 2002, Youngquist, 1997.) These people argue that non-conventional sources such as tar sands and shale oil will not make a significant difference to the situation. The world discovery rate is currently about 40% of the world use rate. The USGS (2000) has recently arrived at a much higher estimate for ultimately recoverable petroleum, but this would only delay the peak by some 10 years.
If the discussion is expanded to take into account the energy likely to be required by the Third World the situation becomes much more problematic. If the present world population were to consume energy at the rich world per capita rate world supply would have to be 5 times its present volume. World population is likely to reach 9 billion by 2070. If 9 billion were to consume fossil fuels at present rich world per capita consumption rates all probably recoverable conventional, oil, gas, shale oil, uranium (through burner reactors), and coal (2000 billion tonnes assumed as potentially recoverable), would only last about 20 years. (Trainer, 1985.) As will be discussed below, when the universal commitment to economic growth is added, the magnitude of the problems associated with the future availability of conventional energy sources become much greater.
The alarming nature of the energy predicament is made most graphic if considered in relation to the greenhouse problem. In a technical report for the IPCC, Enting et al. (1994; 2001 electronic version) estimate that to stabilise the atmospheric concentration of carbon dioxide at 650 ppm, twice the 1970 level, annual emissions must not exceed 8-12 GT/y by the end of the 21st century. Such a target is much too high, as we are now 30% above the pre-industrial level of 270 ppm and serious effects are becoming evident. However if this target is taken and world population rises to 9 billion then the allowable per capita emission will be approximately 1 tonne. Yet the present Australian emission per capita from fossil fuel burning is 3.6 tonnes. In addition there is another 3 tonnes per capita released from land clearing, making a per capita total of 6.6 tonnes. (Enting et al show that for more acceptable targets, emissions must be cut to zero and held there for decades.)
Thus the per capita use of fossil fuels should be cut to a small fraction of the present Australian amount. Clearly consumer-capitalist society cannot be sustainable unless vast quantities of energy can be derived from renewable sources to almost entirely substitute for fossil fuels, and to cope with continued economic growth. If not then a sustainable society must involve dramatic reduction in energy use.
Given this context in which there are grounds for expecting increasing and extreme energy scarcity in coming decades, there has been a strong tendency to assume without question that renewable sources can substitute for fossil sources. Because Australia receives more solar energy than most other developed regions of the world it is also commonly thought that Australia will be more able than most to meet its energy demand from solar sources. The following analysis concludes that with respect to the two crucial energy forms, electricity and liquid fuels, this assumption is mistaken, both in relation to existing costs and difficulties and to what is likely to be achieved by technical advance in the foreseeable future.
Unfortunately those most familiar with the problems in various renewable energy fields and their limitations tend not to be the best sources for realistic assessments of problems and potentials, given their interest in leaving a favourable impression of their field. Claims are often unduly optimistic. Predictions of costs have to be taken with caution. "Cost over-runs" that emerge when projects are attempted can be the result of glowing estimates designed to persuade investing authorities to sign on for uncertain ventures. Attempts from within the field to critically assess the potential are quite rare.
The basic question is whether renewable energy sources can provide virtually all the energy we need. When we hear that a particular country already derives X% of its electricity from the sun or the wind it seems a simple matter of continuing the trend until most or all of the energy demand is derived in the same way. However it is misleading to focus on the contribution a renewable source is playing when it is merely augmenting supply largely derived from coal and or nuclear sources. In that situation the significant problems set by the variability of renewables can be avoided. When the sun is not shining or the wind is not blowing more coal can be burned. However our problem is to develop systems in which almost all energy used comes from renewables, and that means we have to provide for large fluctuations in energy production and for the need to store large quantities of energy, and these problems make a significant difference to the viability of renewables.
It should be stressed that the following analysis is not an argument against the development of renewables. The final section argues that in a sustainable world we must live on renewables and that we can live well on them, but only after radical transition from capitalist-consumer society to "The Simpler Way."
Flat plate collection systems will be considered first.
The potential for solar electricity supply must be examined primarily in relation to the task of meeting winter demand. The following derivation assumes an ideal Australian site, at the Tropic of Capricorn, where the average daily solar incidence on a horizontal plane in winter is approximately 4.25 kWh per square metre. (University of Lowell Photovoltaic Program, 1991.) (For convenience "square metre" will be indicated by "m" hereafter.)
This means that the sun would be approximately 35-40 degrees from vertically overhead throughout most of winter. Thus the incidence of solar energy on panels set at optimum inclination would be 5.18 kWh/d in winter, and collectors set at this angle will be assumed for the following discussion. (Note that this maximises the achievement for winter performance, but to maximise annual performance the tilt would be only half this angle.)
It will be assumed that for 8 hours a day electricity from solar PV plants will be supplied directly, and for the other 16 hours it will have to be stored before being supplied to consumers. Night time electricity demand is about one-third lower than daytime demand (Mills and Keepin, 1993) so in the following discussion supply from a power plant will be assumed to be at the rate of 1000MW for 8 daylight hours and 670MW for the other 16 hours.
Although efficiencies above 25% are being achieved in the laboratory the efficiency of PV cells in use is reported by Kelly (1993) to be approximately 13%. (Evidence that actual performance is lower than this is given below.) At 13% efficiency each square metre of PV collection area would produce .67 kWh per day in winter in central Australia. A 15% loss of this output in transmission from the inland generating site to the coastal consuming areas will be assumed (derived from Ogden and Nitsch, 1993), along with a 7% loss for inversion from DC to AC current. Czick and Ernst, (2003),say that the loss would be 16% with today's technology but that with HVDC systems it could be 10%. The overall efficiency of delivering electricity directly to consumers in the daytime would therefore be 10.27%. In other words to deliver 1000MW, solar energy equivalent to 9737MW would have to fall on the collecting surface. Therefore to deliver 8 hours x 1000MW directly, 77,896MWh of solar energy would have to fall on the collector each day.
The most significant problems for solar electricity supply are set by the need to store energy for supply at night. Storage in the form of hydrogen gas will be assumed here. Other options will be considered below. The significant problems deriving from the occurrence of a series of continuously cloudy days will be ignored in the following analysis; obviously much greater storage capacity would be required.
The energy efficiency of producing hydrogen gas from electricity will be assumed to be approximately 70%. (Commercial supply in the US is currently via methane reforming at 65% efficiency.) Again a 15% loss in transmission and a 7% loss in inversion will be assumed. Generation of electricity by burning the hydrogen gas will be assumed to be 40% energy efficient. A higher figure for future fuel cell technology is discussed below. The combined effect of these efficiencies would mean that for each kWh of solar energy falling on the surface only .029 kWh would be delivered in the form of electricity after storage; i.e., the process would only be about 2.9% energy efficient. Thus the need to store a unit of energy increases the collection area required by a factor of about 3.7.
To meet the 670MW demand for the 16 hours of the day when the sun is not shining via a 2.9% efficient process, 373,519MWh of solar energy would have to fall on the collection surface each day. Adding the direct and the night time figures indicates a need for a total of 451,416MWh to fall on the collecting surface each day. At 5.18kWh per square metre the collection area would have to be 87 million square metres. Each square metre of collection area would deliver on average .2 kWh of electricity per day.
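The chain of efficiencies and the resulting collection area can be reproduced in a few lines. The sketch below uses the paper's assumed figures (13% PV, 15% transmission loss, 7% inversion loss, 70% electrolysis, 40% regeneration); small rounding differences from the text's totals are expected:

```python
# Reproducing the chain-efficiency arithmetic; all percentage figures
# are the paper's assumptions, not measured values.
pv = 0.13            # PV module efficiency in the field
trans = 0.85         # long-distance transmission (15% loss)
invert = 0.93        # DC-to-AC inversion (7% loss)
electrolysis = 0.70  # electricity to hydrogen
regen = 0.40         # hydrogen back to electricity

direct_eff = pv * trans * invert                          # ~10.3% for daytime supply
stored_eff = pv * electrolysis * trans * invert * regen   # ~2.9% via hydrogen storage

day_mwh = 8 * 1000 / direct_eff     # solar input needed for 8 h at 1000MW
night_mwh = 16 * 670 / stored_eff   # solar input needed for 16 h at 670MW
total_mwh = day_mwh + night_mwh     # ~451,000 MWh of sunlight per day

area_m2 = total_mwh * 1000 / 5.18   # at 5.18 kWh per square metre per winter day
print(round(area_m2 / 1e6, 1), "million square metres")  # close to the paper's 87 million
```

The dominant term is the night-time one: storing through hydrogen multiplies the required solar input by roughly the ratio of the two efficiencies.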
PV module cost
The current wholesale cost of PV panels is approximately $5-6(A) per watt (half the retail cost). (BP Solar Australia, 2003, Largent, 2003.) For the large Victoria Market project completed in 2001 the cost was $6/W. (Origin Energy, 2003.) (The value of the Australian dollar used throughout is the c. May 2003 value of a little over half the US dollar.)
The "balance of system" cost
The "balance of system" cost, i.e., the cost of mounting panels, connecting wires, control devices etc., is probably the most important, but in general a rather uncertain factor in estimating the viability of PV systems. It has generally been assumed to yield a total system cost that is approximately double the cost of the modules. (Kelly, 1993, p. 300, Commissioners of the European Community, 1994, p. 24.) Solar Energy Systems (2003) estimate that BOS costs are around 43% of total system cost (personal communication.) However they also state that the installed system cost for grid connected systems is $12.50/W, indicating that balance of system costs make up 60% of the total. Largent (2003) says balance of system costs are 60-70% of final system cost. BP Solar, Australia, 2003 advise that balance of system costs make up 40-70% of total system costs. For the Austrian Energy Park 66.8kWp system the balance of system cost was 63% of the total cost.
These figures are for non-tracking systems. Systems in which the panels change their angle throughout the day to track the sun collect some 30% more energy (at low latitudes, but at high latitudes there might be no difference at all; see Reichmuth and Robison, undated, Fig. 2, p. 3.), but have much higher balance of system costs. For example each of the 15 metre diameter tracking modules in the 10kWe Washington State system (Reichmuth and Robison, undated) uses 6.7 tonnes of steel, and costs $20,000-$25,000. Each of these supports 80m of PV panels, indicating a cost of $250-312/m for steel alone.
Reichmuth and Robison (op.cit, p. 4) state that conventional wisdom re the flat plate (as distinct from concentrator systems; see below) is that tracking is not justified due to the additional mechanical complexity involved.
If we assume 75 Watt panels, i.e., 150 peak watts per square metre, the cost per square metre would be $750 for the panels, and if BOS costs are equal to panel costs, then the cost for the whole system would be $1500 per square metre. Therefore the cost of a generating plant 87 million square metres in area would be $130.6 billion.
How does this figure compare with the cost of a coal-fired plant?
The current cost of constructing a 1000MW coal-fired plant is not a figure NSW power authorities seem willing to state clearly. However the cost of the recently completed Mt. Piper power station in N.S.W., Australia, was $800 million. (Pacific Power, 1993, p. 104.) In 1997 the 2000MW Loy Yang plant in Victoria sold for $4.9 billion, indicating a sale price of $2.45b per 1000MW. (Sydney Morning Herald, 2003.) Note this would be much more than a current construction cost. Coal for 20 years will be assumed to cost $2 billion. Therefore the total cost of the fossil fuel option will be assumed to be approximately $2.8 billion. Thus the PV solar option would cost approximately 47 times the cost of the coal option. (Taking into account externalities, especially the environmental costs of coal use, would reduce this figure.) If a 30 year plant life is assumed the multiple would be 33.
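The headline comparison follows from three multiplications; a sketch using the paper's own assumptions (c. 2003 Australian dollars):

```python
# The paper's cost comparison, using its stated assumptions.
area_m2 = 87e6                  # collection area from the derivation above
panel_cost_per_m2 = 750         # $5/W at 150 peak watts per square metre
system_cost = 2 * panel_cost_per_m2 * area_m2   # BOS assumed to double module cost

coal_cost = 0.8e9 + 2.0e9       # $800m plant plus $2b of coal over 20 years
ratio = system_cost / coal_cost
print(system_cost / 1e9, round(ratio))   # 130.5 47  ($ billion, and the cost multiple)
```

The multiple is sensitive mainly to the balance-of-system assumption; halving it would roughly halve the ratio.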
Other cost factors
The discussion to this point has dealt only with the cost of constructing the collection area, and there are many other factors that would multiply the final lifetime cost for the total system many times. The cost of construction plus fuel accounts for only about 28% of the present price of electricity generated by coal-fired plants. Following are several additional factors which would significantly increase the cost of the solar plant.
a) Operations and maintenance costs, especially the cost of regularly cleaning the large collection area. For wind systems, O and M costs over plant lifetime add approximately .7 of the construction cost.
b) No provision has been made in the above estimate for the extra capacity needed to cope with extended cloudy periods. On clear days the home lighting system referred to at c) below generates around twice as much energy as is required, yet difficulties experienced in cloudy periods would not be eliminated if generating and battery capacity were doubled. In large scale systems the problem might be avoided if there was sufficient alternative generating capacity available in cloudy weather, such as hydro power. However this solution generally involves the problem of duplication of plant which will remain idle some of the time.
The storage capacity needed to see a 1000MW power station through one cloudy day must be (8 hr x 1000MW + 16 hr x 600MW) x 100/2.9 = 645,517MWh. This is 1.43 times the amount derived above, where storage for only the 16 night time hours is required. This means that a system able to supply through 3 cloudy days in a row must be able to collect and store energy capable of generating 5.3 times as much electricity (deliverable as electricity) as it must deliver in a 24 hour period when storage is needed for only one night, with a corresponding increase in collection area and cost.
c) The actual performance of PV systems in the field can be well below what theoretical considerations would suggest, once all the extraneous factors capable of affecting output have had an opportunity to operate. Theoretically, electricity generated from wood fired steam plants should be produced at c 33% efficiency, but Hohenstein and Wright (1994, p. 162) provide figures showing that for the entire US electricity-via-wood system the actual performance was only 22%.
PV panel performance can be lowered by imperfect alignment, dust and water vapour in the atmosphere, dust on panels, ageing of the cells, losses in wiring and inverters, loss due to protective covering glass (Kelly, 1993, p. 300) and the heating effect of sunlight on the cells. The nominal ratings usually quoted derive from tests in ideal laboratory conditions which do not include the above factors. Especially important for systems not connected to the grid is the fact that when output exceeds demand or storage capacity, much of the energy being generated can't be used and has to be dumped. Similarly, a large scale system capable of meeting all demand in mid-winter would have approximately twice the required capacity in mid-summer, given that solar energy incidence is about twice as great in summer. Knapp and Jester (2001, p. 45) say that "system losses" due to wiring resistance, inverters etc. typically reduce output by 20%.
A home lighting system monitored in Sydney, at 34 degrees South, with a nominal rating of 11% efficiency, on a cloudless summer day provides as useful energy only 5.7% of the solar energy falling on its surface. This includes the loss due to battery storage. Winter performance is even lower, because the sun is at a lower angle, shines for a shorter period, and its energy has to travel through more atmosphere. This is a tracking system; systems involving stationary panels would be around 30% less efficient. These figures do not include losses due to the dumping of more than half the energy collected in summer when batteries are full (yet battery capacity is too small for convenient supply in winter). Because the average daily energy delivered per panel is c .2 kWh, it would take about 70 years to pay back the c $500(A) panel cost, which is only c 25% of the system lifetime cost including batteries, if the energy was sold at the same price as coal-fired electricity is sold from the power station.
Data published in 1999 by BP Solarex (Corkish, undated, Ferguson 2000a) on a 390 square metre system in the UK, a 805 square metre system in Switzerland, and a 7960 square metre system in Toledo, Spain, show that over approximately three years the output of these systems was around 6-7% of the solar energy received by the respective collection areas.
The large Victoria Markets system installed in Melbourne in 2001 performs at c 11% efficiency. A smaller, 1.26kW system installed in Melbourne, with panels normal to the sun in mid winter, delivered as electricity only 8% of the solar energy falling on the panels, averaged over the 2.5 mid winter months. (Renew, 2001.)
An inspection of data on actual generating performance from the US Solar Electric Power Association (2002) also indicates that delivered electrical energy from recent large scale systems is often c 8% of the incident solar energy.
d) The energy cost of constructing the plant must be subtracted from its lifetime output before we can determine the amount of energy it would actually deliver.
PV cell manufacturers usually claim payback periods of c 3 years. (Corkish, undated.) Knapp and Jester (2000) report 1.8 years for thin film CIS and 3 years for silicon modules. However these figures are usually derived from performance under ideal laboratory conditions. As noted in c) above, many factors reduce panel performance below these levels, which means that real payback time in the field will in general be much longer than the manufacturers' statements might suggest. Ferguson (2000a) estimates that for the Toledo system referred to above the energy needed to produce the panels would be .25 of the energy the system will produce (over an assumed 30 year lifetime in this analysis). For the UK site the fraction was .38.
The figures usually stated for payback refer only to the energy cost of cell production. (Knapp and Jester say their figures relate to module production.) The dollar cost of PV cells is only about 40% of the cost of the panel or module when glass, aluminium or steel framing and wiring etc. are included (Kelly, 1993, p. 304), although this is probably not a good guide to energy costs. As has been explained, module cost is typically only half or less of the whole system dollar cost, so the energy costs for the balance of system must be added before a realistic system energy cost figure is arrived at.
A full emergy accounting would also include the energy cost of constructing the factories, deliveries to them, mining of materials, retailing of the cells, the energy cost of plant lifetime operations and maintenance, etc., for the PV modules and for all components of the balance of the system. In other words the total emergy cost of the PV system includes the energy cost of all the work and production that would not have taken place had the plant not been built and operated for many years. Such estimates are not available, but total energy costs are likely to be considerably greater than the cell production costs usually focused on in discussions of PV payback.
The Knapp and Jester study seems thorough. If its figures are taken, and if the energy cost of the balance of system is equal to half that of the modules (an uncertain number), then it would take about 4.5 years to pay back the energy cost of producing a silicon cell system, i.e., 22.5% of the energy output of a plant with a 20 year lifetime, or 15% of the output of a plant with a 30 year lifetime.

e) The basic cost calculation above does not take into account the plant's down time for repairs, breakdowns and general maintenance. If it is assumed that it would be out of operation 30% of the time, a typical figure for coal fired stations, then the necessary area and cost for a plant to deliver 1000MW constantly would have to be multiplied by 1.43. However PV plants are likely to be in operation for a much higher proportion of the time than coal-fired plant. If down time is 10%, the above cost, area etc. figures must be multiplied by 1.1. (Repairs to solar systems might be carried out mostly at night.)
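The energy payback arithmetic in d) can be made explicit. The 50% BOS allowance is, as noted, an uncertain assumption:

```python
# Energy payback under the stated assumptions (silicon modules,
# BOS embodied energy taken as half that of the modules).
module_payback_years = 3.0     # Knapp and Jester figure for silicon
bos_fraction = 0.5             # uncertain assumption, as noted in the text

system_payback_years = module_payback_years * (1 + bos_fraction)
print(system_payback_years)                           # 4.5
print(round(system_payback_years / 20 * 100, 1))      # 22.5 (% of 20 year life)
print(round(system_payback_years / 30 * 100, 1))      # 15.0 (% of 30 year life)
```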
f) The cost of building and operating the hydrogen production, pumping and storage systems would be considerable. To store the hydrogen to meet night time demand would involve a huge storage volume, given the low energy density of hydrogen. To retrieve the 10,560MWh from hydrogen via a process that is 70% x 40% efficient would require storage of 37,700MWh of hydrogen. At 3kWh per cubic metre, the volume of hydrogen would be approximately 12 million cubic metres, or a mine shaft some 1,300 km long. Of course the gas would be compressed, reducing the volume but increasing energy and plant costs. Even liquid hydrogen has only 25% of the energy density of petrol. (The difficulties in "the hydrogen economy" are discussed below.)
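The storage volume follows directly from the 70% x 40% round-trip efficiency and the 3kWh/m³ energy density given above:

```python
# Volume of hydrogen needed to carry a 1000MW plant through the night.
night_demand_mwh = 16 * 660            # 10,560MWh of overnight supply
round_trip = 0.7 * 0.4                 # electrolysis x reconversion = 28%

stored_mwh = night_demand_mwh / round_trip
volume_m3 = stored_mwh * 1000 / 3      # at 3kWh per cubic metre of H2
print(round(stored_mwh))               # ~37,714MWh (the text's c 37,700)
print(round(volume_m3 / 1e6, 1))       # ~12.6 million cubic metres
```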
g) The cost of the plant to convert the stored hydrogen to electricity would have to be added. This would be comparable to the cost of a coal-fired power station (assuming the hydrogen is used as fuel to generate steam. The fuel cells of the future will probably be more efficient but at present are very expensive.)
h) The performance of PV cells degrades over time.
i) Most of the silicon for production of cells currently comes from scrap left over from the computer industry, and would cost more if it had to be produced specially for the solar industry.
j) The cost of the capital that would have to be borrowed to build the plant, i.e., the interest to be paid, might double the total construction cost figure from all the above factors combined. A coal-fired plant produces around 122.6 million MWh in its lifetime (assuming it is out of operation .3 of the time), so for a $2.8 billion construction plus fuel cost the cost of the electricity produced is 2.28 cents per kWh (or 1.52c for a 30 year life). In Australia it is sold by the station operators at around 3c/kWh. However, the 1998 Australian retail price of domestic electricity was 10.1 cents per kWh, which suggests that profit, operation and management and interest costs (and distribution costs, which PV can avoid, but only by incurring other costs; below) can be expected to multiply the cost of electricity due to plant construction cost by a factor of 4 to 6.
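The implied generation cost per kWh can be verified from the lifetime output and the construction-plus-fuel figure used throughout this section:

```python
# Cost of coal-fired electricity implied by the construction-plus-fuel figure.
lifetime_output_mwh = 122.6e6      # 20 year output at .7 capacity factor
total_cost = 2.8e9                 # $2.8b construction plus 20 years of fuel

cents_per_kwh = total_cost / (lifetime_output_mwh * 1000) * 100
print(round(cents_per_kwh, 2))         # 2.28
print(round(cents_per_kwh / 1.5, 2))   # 1.52 (30 year life: 1.5x the output)
```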
k) A decision to build large scale solar generating plant with the sort of costs under discussion here will obviously not be made until the cost of energy from other sources ceases to be cheaper than the energy generated by these solar plants. We must assume therefore that the cost of the energy required to build all components of the solar plant including cells, balance of system and all contributing factories, deliveries, trucks, tools etc., will be approximately the same as the price of the energy it will generate, which it has been indicated would be very high. Given that energy-intensive materials make up much of the construction cost, the cost of the plant would be far higher than that assumed in the above derivations, which assume present energy costs for construction and materials.
Combining these factors would indicate that the initial $130.6 billion cost
estimate might have to be multiplied several times.
Dollar payback periods
Although not central to the present discussion, it is of interest to note the long times required for costly PV systems to meet their dollar construction costs. A 450W system offered by Pacific Power for $8500 (including the $2500 subsidy from the Federal government) would probably produce about 2kWh a day in Sydney (annual average). Coal fired electricity can be sold from the generator at 3-4c per kWh in Australia. Thus if the electricity generated by the three modules sold at the usual electricity price, annual earnings would be 365 x 2 x 3c, i.e., about $22, and it would take around 400 years to earn the purchase price.
The Victoria Market system yields comparable figures. The $1.75 million system is expected to produce 290MWh per year, which would sell for $9,600 at the price of coal-fired electricity. At this rate the system would take 182 years to pay off its capital cost.
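The Victoria Market payback period follows from the two figures just quoted:

```python
# Dollar payback for the Victoria Market system at coal-fired prices.
system_cost = 1.75e6        # $A capital cost
annual_revenue = 9600       # $ value of 290MWh/y at the coal-fired price

print(round(system_cost / annual_revenue))   # ~182 years
```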
These have been comparisons with the price of electricity generated from abundant and thus cheap coal, and do not take into account the environmental costs of coal use. However these long payback periods indicate the magnitude of the increases in electricity cost that would have to be accepted in an economy based solely on renewables.
In their commendable efforts to stimulate the development of renewables, governments have given very generous subsidies (said to be 48 Euro cents per kWh for German PV electricity, some 28 times the Australian cost of coal-fired electricity). It is not surprising that the Australian government is now considering abandoning its subsidy scheme.
What difference might technical advance make?
The assumptions made within the above analysis are apparent, and enable derivation of the conclusions that would follow if different assumptions about efficiencies and costs were made. If it is assumed a) that cells achieve 20% actual operating efficiency in the field (as distinct from nominal peak watt rating), compared with the 13% taken above, b) that PV cells cost $2 per watt, i.e., a 60% reduction, and c) that fuel cells produce electricity from stored hydrogen at 60% efficiency, then the cost of the plant to deliver 1000MW would only fall by about 60%, i.e., to the region of 20 times that of a coal fired plant plus fuel, or of a nuclear plant. Note that this refers only to the plant needed to send the energy from the collection field, partly in the form of hydrogen, and therefore does not include the cost of plant to convert the hydrogen into electricity. At present fuel cells are 4-6 times as costly per kW of capacity as conventional energy generating plant. (The US DOE gives a multiple of 10 for car engines.)
The cost of PV cells has fallen significantly over the past 3 decades, but the trend seems to have flattened out now. (Kelly, 1993, Durning, 1997, p. 27.) The cost for the Victoria Market system was $6/W (higher than that assumed in the above analysis). If the cost per square metre of PV technology fell to zero, the cost of the large collection area required in the above discussion would still be very high. If the PV material was sprayed at no cost onto 6 mm toughened glass at the mid 1990s wholesale price of approximately $60 per square metre, the cost of the glass alone for the above 87 million square metre collection area would be $5,220 million. (Littlewood, 2003, estimates the cost of PV glass in 2003 at $50/m², and at $70-80/m² for curved glass for concentrating systems.)
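Even the glass-only lower bound is easily checked:

```python
# Glass alone for the 87 million m2 collection area, at $60/m2.
glass_cost_per_m2 = 60      # mid 1990s wholesale, 6mm toughened glass
area_m2 = 87e6

glass_only = glass_cost_per_m2 * area_m2
print(glass_only / 1e6)     # 5220.0 (million dollars)
```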
In other words the "balance of system" cost sets a difficult limit when the collection area must be large, and one that is not likely to be greatly affected by technical advance, as the structures are simple and major breakthroughs in their design are not likely. As has been noted, in the early 1990s the BOS cost per square metre seems to have been about the same as the cost of the panels, i.e., at present c $750/m².
Almost all of the materials cost of cells is due to aluminium, glass and
silicon; for silicon cells it is 85% and for thin film technology it is 97%.
(Knapp and Jester, 2000.) Thus there would seem to be little scope for cost
reduction from advances in the solar technology involved, although increased
scale of production might make a significant difference to overall costs.
PV roof cladding systems
Integration of PV cells into roofing etc. material would reduce balance of system costs, e.g., for support structures (and for the roofing replaced). It would also avoid transmission losses and costs, which make up one-third of the retail cost of US electricity, but only if systems are large enough to be completely independent of the grid. Such systems would involve the excess generating and storage capacity needed to cope with long cloudy periods. Decentralisation would probably increase some costs, especially for storage in many small units, each with its own power conditioning equipment such as inverters and regulators and petrol driven backup generators.
Replacing roofing with PV panels sets the problem of whether the solar incidence where the house is located is adequate. For instance in Sydney, 34 degrees South, the winter solar incidence is 2.78 kWh per square metre per day, only 2/3 of the 4.25kWh per square metre per day in central Australia where large scale centralised PV systems would be ideally located.
Rooftop collection surfaces are fixed in orientation and on average rooftops differ considerably from ideal orientation, and are subject to shading by other structures. It is likely that only about 40% of the surface of an average house roof would have an orientation enabling effective use as a solar collector in winter. In mid winter in Sydney the mid day sun is 56 degrees from vertically overhead, so a roof surface facing North with a 12 degree slope will be 44 degrees from ideal inclination. However, because it is angled somewhat towards the sun it would intercept about 1.2 times the 2.78kWh/m²/d incident on a horizontal plane in Sydney in mid-winter, i.e., 3.3kWh/m²/d. This is only .78 of the 4.25kWh/m²/d falling on a horizontal surface in Central Australia at the Tropic of Capricorn at that time of the year.
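These roof-geometry figures can be reproduced directly from the 1.2 tilt gain factor used in the text:

```python
# Winter incidence on a 12-degree, north-facing Sydney roof,
# using the 1.2 tilt gain factor quoted in the text.
horizontal_winter = 2.78     # kWh/m2/d, Sydney mid-winter, horizontal plane
tilt_gain = 1.2

roof_incidence = horizontal_winter * tilt_gain
print(round(roof_incidence, 1))          # 3.3 kWh/m2/d
print(round(roof_incidence / 4.25, 2))   # 0.78 of the Central Australia figure
```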
To supply the same amount of power as was assumed above for a centralised 1000MW PV plant (i.e., 1000MW for 8 hours without storage, and 660MW for 16 hours via storage, i.e., 451GWh), a rooftop collection area of 136 million square metres would be required. This is approximately 5 times the area likely to be adequately oriented on all Sydney domestic rooftops. The panels would cost in the region of $80 billion. Costs associated with the additional factors listed above would also have to be taken into account. At this stage it is difficult to estimate the combined effect of the savings likely because supporting structures and roof cladding are not required, and of the remaining normal balance of system costs, such as power conditioning equipment, wiring etc. Roof tiles supplied and fixed cost only c $40/m², so this saving is not great relative to the cost of the panels.
If we assume a house roof area of 100 square metres, 40% of which is covered with PV panels delivering electricity at the rate of .25kWh/m²/d, the system would deliver 10.2kWh/d. The average residential electricity consumption for Australia is .76kW, or 18.2kWh per day. Thus the roof would only meet about half of the house's electrical needs, at a panel cost of $24,000. (Note that no provision is made here for storage, and most domestic use does not occur during hours of high solar incidence.) To meet Australia's total electrical demand, 175,000GWh/y, would require the equivalent of about 20 power stations each of 1000MW capacity, and therefore 2,720 million square metres of collection panels, which is approximately 13 times the area available on all residential roofs (making the above 40% assumption and again ignoring factors a-i above). To also fuel a car via rooftop PV panels would more or less double the magnitude of the task.
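The national-scale arithmetic can be checked as follows, taking annual demand as 175,000GWh (175TWh, consistent with the 20 stations of 1000MW running constantly) and the 136 million m² per-station rooftop area derived above:

```python
# National-scale check: demand versus continuously-running 1000MW stations.
demand_gwh_y = 175_000
station_gwh_y = 1000 * 8766 / 1000   # MW x hours per year, converted to GWh

print(round(demand_gwh_y / station_gwh_y))   # ~20 stations

area_total_m2 = 20 * 136e6                   # 136 million m2 per station
print(area_total_m2 / 1e6)                   # 2720.0 (million m2)
```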
Concentrator PV technology
Large reductions in PV costs are promised by the development of cells that receive sunlight focused from reflectors, enabling the area of PV material to be much smaller than the area over which solar energy is collected. Cells capable of concentration factors of 1000 to 2000, and over 25% efficiency, are being developed. The ANU cells are 22% efficient (Smeltink, 2003.)
Swanson (2000) discusses the fact that although this approach has been under development since the early 1980s, it has not been taken up enthusiastically. One reason is that it is not as suitable for the many small and stand-alone tasks that the simpler flat plate technology is being used for. Concentrating systems are more complex, involving tracking, and are thus best suited to bulk supply, and here their high cost has been the main impediment.
An experimental 20kWe peak system operating at Rockingham in Western Australia is in the early stages of operation, although performance has been reported (2003) as disappointing so far. The best output, 75kWh/d, represents an efficiency of about 5.5%. I have been unable to get any cost figures (especially balance of system costs) from the developers, although these would not be a clear guide to costs for eventual large scale production.
An experimental system at Australian National University (Corkish, undated) involves a concentration factor of around 40, i.e., the area of PV cells required is only 1/40 of that over which sunlight is collected. However the cost of the cells has been reported at $(A)65/W, (personal communication from ANU), which is 13 times the cost of normal cells.
Sala et al (2000) report on an experimental 480kW system. Efficiency is reported at 8% over a year. Total plant cost was Euro 2.13 million, or Euro 4,445 per kWp ($(US)4,256). This is around $(US)8.75 per watt, or $(A)15/W. Remarkably, the PV receiving module cost is given as US 81 cents/W. (I am still attempting to clarify the contradiction between this and the ANU cost given above.)
The stated costs per watt for concentrating cells can be misleading. They are far lower than for flat plate PV cells, e.g., 80c/W vs $5/W, tempting one to ask why they aren't used in flat plate systems. Apart from the fact that they do not work as well at one sun concentration, their cost would be much higher in such a situation. The situation seems to be that one square metre of concentrating reflector focusing 1000W of solar energy on concentrator cells operating at 38 suns and 35% efficiency will deliver 350W from 260 square centimetres of cells. Thus at 80c/W the cells would cost $280, or $1.08 per square centimetre. The 10,000 square centimetres of cells in a 1 square metre flat plate system would cost $750, or 7.5c per square centimetre. Thus it would be much more expensive to use concentrator cells in a flat plate system. (Smeltink, 2003, confirms this general account but reports that some cells cost 68c/W.)
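The per-square-centimetre comparison above reduces to a few lines:

```python
# Why cheap concentrator cells don't make cheap flat plates:
# compare cost per square centimetre of actual cell area.
concentrator_output_w = 1000 * 0.35    # 350W from 1m2 of reflector, 35% cells
concentrator_cell_cm2 = 260            # cell area at c 38 suns concentration

conc_cost_per_cm2 = concentrator_output_w * 0.80 / concentrator_cell_cm2
print(round(conc_cost_per_cm2, 2))     # 1.08 ($/cm2 at 80c/W)

flat_cost_per_cm2 = 750 / 10_000       # $750 panel spread over 10,000cm2
print(flat_cost_per_cm2)               # 0.075 ($/cm2)
```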
The overall cost of concentrator systems will be determined primarily by the balance of system cost. As has been noted, for systems which do not track the sun this is usually assumed to be about as much as the cost of normal PV cells per square metre. However concentrator systems must track the sun, so structures will have to be fairly substantial, involving supports for collecting surfaces, machinery and control systems, moveable in at least one dimension and capable of withstanding strong winds. Costs for these items are not likely to fall greatly due to technical breakthroughs, as they already involve relatively simple structures.
Unfortunately it has not been possible to find clear and confident general figures for the balance of system costs for tracking systems, either for PV or solar trough. The support structures for the two would be similar if the heat exchange components of the latter are excluded, because in both cases a frame supports a parabolic or Fresnel reflector and the whole assembly must be capable of movement about at least one axis (for seasonal change). Note that because it has a U shaped cross section, the area of the trough or concentrator reflector has to be greater than the area of the solar radiation intercepted. Strebkov (undated) states that the ratio is between 2 to 1 and 2.4 to 1. (Web pictures often seem to show lower ratios.) This effect does not occur with flat collectors and tends to increase the costs of trough systems. For the Rockingham, Western Australia project the curved glass for the reflectors cost $70-80 per square metre. (Littlewood, 2003.)
The cost of the SEGS VI system's collector was $(US)487/m² (or about $(A)812/m²), although this included heat collection apparatus. Strebkov (undated) says the cost of the collection field for central receiver solar thermal systems is $(US)200-600/m², although this would not be a good guide for trough systems.
In their discussion of another proposed trough system, Brackman and Kearney (2002) state that the collection field would make up 45% of the total cost. Again, unfortunately, this figure includes heat absorption equipment, but it again indicates that the balance of system cost in PV concentrating systems is likely to be far more than the cost of the PV components. These figures are sobering, since they indicate that for trough thermal and concentrating PV systems the equipment needed in addition to the heat absorption system or PV cells costs at least twice as much as those components do.
Haberle et al estimate that for a 50MW peak Fresnel trough system in Egypt the reflector plus absorber amount to only 7% of total cost, a remarkably low figure. This suggests that the rest of the plant that would be needed in a PV concentrating system could cost about 4.5 times the cost of the reflector plus absorber. (In Strebkov's example this reflector plus absorber cost does not include the power block, which was 28% of total cost, nor "service and other costs", which were 36%.)
The overall costs given in the account by Haberle et al seem surprisingly low; i.e., Euro 77 million total cost for a 50MW peak, 450,000 m² system (i.e., only Euro 171/m², or $(A)290/m²). However the figures for the collection equipment are helpful regarding the problem of estimating PV concentrator BOS costs, again indicating that BOS cost is high compared with that of the PV component.
The cost breakdown given by Sala et al states that the cost of the "structure and tracking" and mirrors came to Euro 327/m², or $(A)556. (However the rest of the BOS came to another Euro 180/m², making a total BOS cost of Euro 507/m², or $(A)862.) In other words the BOS was 61% of the total cost.
Tyner (2003) says collector costs for troughs in use are $(US)125/m², so c $(A)250/m², assuming one-axis tracking, but $(US)200/m².
From this diverse and rather unsatisfactory evidence on trough systems (above
and see further below) it would seem that the collecting structures for
concentrating systems would cost in the region of $(A)300 per square metre. Thus
collector costs seem to constitute only a remarkably small proportion of total
cost for solar trough systems, indicating that even if PV concentrator
technology becomes very cheap the balance of system cost for very large
collection areas will remain very high. For instance at $300 per square metre the BOS
cost of the 87 million square metre 1000MW flat plate collection system referred
to at the beginning of this paper would be $26 b. (Note that system assumed 13%
efficiency, whereas the efficiency of trough systems reported here has been
closer to 9%, suggesting that the $26b figure should be multiplied by 1.4.)
Other storage options
Energy storage via thermochemical processes would seem to be about as efficient as hydrogen gas storage (possibly somewhat less; Kaneff, 1992, p. 43.), although for large scale generation there would be a significant problem of storing very large volumes of gas temporarily. Storage of energy via methane reforming or ammonia recombination is more energy efficient than storage via hydrogen, yet these processes would require one cubic metre of gas storage per 1.54kWh, at normal pressure. Thus to store the energy from a power station for the 16 hours when the system was not generating would require a mine shaft approximately 1500 km long, assuming 60% energy storage efficiency. Obviously gases would be compressed to reduce space requirements but this incurs energy costs, discussed below with respect to the "hydrogen economy".
The vanadium battery promises a higher storage efficiency, initially 87% but this will deteriorate with recharge cycles. However current estimates of world potentially recoverable vanadium resources indicate that far too little exists for a world supply and storage system, especially when automobile demand is added to electric power demand. (Erickson, 1973, Trainer, 1995.)
Overall "in and out" efficiencies for operating pumped storage systems have been reported from around 60%, although some claim that 80% might be a reasonable average. The Queensland Office of Energy estimates 70%. If it is taken as 85%, 1.18 units of electrical energy would be required to provide 1 unit after storage. The efficiency of hydrogen storage and retrieval might be taken as .7 (for hydrogen generation) x .5 (for the probable future efficiency of generating electricity from hydrogen via fuel cells), i.e., .35. Thus 1/.35 = 2.86 units of electrical energy are required to provide 1 unit after storage.
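The two round-trip multipliers used in this comparison follow directly from the assumed efficiencies:

```python
# Units of generated electricity needed per unit delivered after storage.
pumped_eff = 0.85                 # assumed pumped hydro round-trip efficiency
h2_eff = 0.7 * 0.5                # electrolysis x fuel cell conversion = .35

print(round(1 / pumped_eff, 2))   # 1.18
print(round(1 / h2_eff, 2))       # 2.86
```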
However the fact that pumped storage is much more energy efficient than hydrogen storage does not make such a big difference when the task is to store the 16 hours x 600MW output of a solar plant required over night. The hydrogen system must collect 8 h x 1000MW for day time direct supply, plus 16 h x 600MW x 2.86 = 27,456 MWh for the night time supply.
A pumped storage system would have to collect 8 h x 1000MW plus 16 h x 600MW x 1.18, i.e., 19,323MWh in all. Thus the system with hydrogen storage requires collection of only 1.4 times as much energy as a system with pumped storage. Whereas the hydrogen system analysed above would be 47 times as expensive as a coal fired system, if the geography and infrastructure permitted pumped storage the system would still be 33 times as expensive.
If dams are not available close to where the solar energy is collected, energy must travel to the dam and then from there to where the electricity is to be used. There are few if any dams of any significant elevation anywhere near the best solar collection sites in the flat centre of Australia. Electricity generated there would have to be transported long distances to dams, then long distances to the main consumption regions, adding energy losses to the whole system. Note that for pumped storage two large reservoirs are needed, fairly close together, one high and one low.
The most promising solar electricity option seems to be solar trough thermal. DeLaquil et al (1993) report that costs for central receiver and dish-Stirling thermal systems are 1.14 and 1.43 times those for trough systems. Manci (2003) says the corresponding ratios for the costs of electricity produced are 1.6 and 2.5.
From the Sandia website (www.energylan.sandia.gov/sunlab/program.htm) report of 1997 figures for the SEG VI 30MW system (Table 4), 57 GWh/y were generated from a plant costing $(US)119.2 million some years ago, after subtracting 1/3 of the power delivered which was generated from gas backup. A coal-fired plant operating at .7 capacity would generate 6132GWh/y, i.e., 108 times as much electricity. This indicates that the cost of a solar trough system capable of the same output would be $(US)12.8 billion (ignoring storage), i.e., $(A)21.1b.
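The scaling in that estimate can be reproduced from the SEGS VI figures quoted:

```python
# Scaling SEGS VI up to the annual output of a 1000MW coal plant.
segs_gwh_y = 57                # net solar generation, gas backup excluded
segs_cost = 119.2e6            # $US plant cost
coal_gwh_y = 6132              # 1000MW x 8760h x .7 capacity factor

scale = coal_gwh_y / segs_gwh_y
print(round(scale))                        # ~108
print(round(scale * segs_cost / 1e9, 1))   # ~12.8 ($US billion)
```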
However the annual average solar incidence at the SEG VI site is very high, 7.9kWh/m²/d (probably exceeded on less than 5% of US land area, located at the South West corner). This is almost double the winter incidence in Central Australia, indicating that for mid winter supply from the latter site the comparable cost of a 1000MW plant would be in the region of $(A)42 billion. A PV plant large enough to generate 6132GWh/y, without storage, at 13% efficiency and at a 4.25kWh/m²/d site would cost $(A)48 billion. (The comparison is made difficult by the fact that the figure for the trough plant includes all costs while that for the PV plant excludes factors a-i above.)
These rough estimates suggest that trough systems might cost half as much as fixed plate PV systems.
The figures given by Brackman and Kearney (2002) for the 1991 performance of SEGS IX, 483,960 m² in a region where incidence averages 8kWh/m²/d, indicate an efficiency of only c 7%.
Solar thermal systems involve the problem of a "start up" threshold or delay. DeLaquil et al (1993) report that solar energy incidence must rise to over 300W/m² before electricity is generated, and even then at a low efficiency. At Sydney in winter solar incidence is over 400W/m² for only 2 hours a day. (Morrison and Litwak, 1988.) In Central Australia it is above 400W/m², 500W/m² and 600W/m² for 6, 4 and 2 hours respectively. There would also be start up delays after the passage of cloud (unless there is salt storage provision; below).
However Grasse and Geyer (2000) provide a valuable plot (Fig. 22) from SEG VI of the solar incidence, collector efficiency and generating rate for a cloudless mid-summer day in 1997 in which incidence reached 1000 W/m². The sun rose at 6.45 but there was no electrical output until 7.30, when solar incidence had risen to c. 700 W/m². At about 8 a.m. electricity output had reached around 75% of maximum, by which time solar incidence was 800 W/m². Peak generating output was only reached at 9 a.m., when incidence was 1000 W/m². Solar incidence fell to zero at 8 p.m. but generation fell from its peak at 6.30 p.m. (There is less delay at the end of the day than at the start, presumably because at the start the system has to warm up.)
Also of interest is the fact that the system involves salt storage, and because it is therefore important to collect all the energy available through the day, it is large enough to collect 48 MW for a short time around midday although it averages only 30 MW over the day. This again illustrates the general problem that variable renewables set: the need to build much more collecting capacity than the plant's average output.
The start up problem probably confines trough systems to regions where long hot days are most common. PV systems seem viable though very costly in central and even Northern Europe but trough systems would seem not to be.
As with fixed flat plate collectors, solar thermal trough systems and PV trough concentrators suffer a "cosine effect". Receiving surfaces are normal to the sun only at midday; early and late in the day the area of sunlight they intercept is a fraction of the midday area. This factor contributes to the start up delay. Whereas a tracking PV system can generate at almost full capacity as soon as the sun rises above the horizon, at this time of day very little solar energy will be falling on troughs set on an East-West axis, because the sun is incident on them at a very low angle. (Dishes and troughs can be tilted to very low angles without shading each other, but only if spaced very widely, setting other problems and costs.)
Within the above discussion of possible BOS costs for PV concentrating systems it was seen that reported BOS costs for trough systems seem to range from $US300 to $800. It is not likely that the balance of system costs for solar trough and PV concentrator systems will fall markedly, given that the technology involved is simple, involving supports and adjustment equipment for the reflectors. "There is little scope for future performance improvements or cost reductions for solar trough systems" (Commissioner of the European Communities, 1994, p. 25.)
If trough systems can only reach maximum efficiency for electricity generation in regions where solar incidence exceeds 800 W/m² for many hours a day they will be confined to restricted areas. This is not to say that they cannot make a valuable contribution in wider areas, such as pre-heating water for coal powered stations.
More recently solar trough designs have included provision for storage of heat in molten salt, enabling solar systems to generate for several hours after the sun sets. The "in and out again" loss of energy has been reported at 15%, whereas for storage as hydrogen and conversion back to electricity via fuel cells the loss might be 65%.
Haberle et al (2003) say molten salt storage at 307 degrees is being used but there is no cost effective system in place for 390 degree heat. The lower temperature is associated with c. 28% generating efficiency (Dey, 2003). (I have seen an unrecorded recent reference to salt storage at 500+ degrees.) Mills (undated) reports ammonia and rock bed heat storage systems at $(US)673/Wp, which seems to be a considerable cost, although the meaning of this figure is not clear.
Systems for storing heat in salt have only been developed to provide for a few hours. To provide for longer periods would involve very large additional collection and storage plant. Thus these systems cannot help with the problem of generating on cloudy days.
It seems clear that some regions of the world will be able to derive a considerable fraction of their electricity from the winds. However because of the lack of publicly accessible information on wind mapping in Australia it seems that little can be said with confidence regarding potential electricity generation.
The Sustainable Energy Development Authority's website estimates that in NSW 1 GW could be derived from wind. However in November 2002 demand was 11.5GW.
Mills’ study (2002) concluded that Australia has a large potential wind resource, but most of it is not useable due to "exclusion factors", notably long distance from grids. The cost of building lines to wind farms must be included in the cost of providing wind electricity.
Evidently the CSIRO now has good wind mapping information for NSW, but has not made it public. (Some information is given below.) However they have said that sites must have average wind speeds of at least 8 m/s, plus the Federal Renewable Energy subsidy of 4c/kWh, before generation becomes economically viable. (Personal communication.) This is surprising given that wind is usually thought to be economically viable in areas with over 7 m/s winds.
The American Wind Association (2001) has said that three times present US electricity use could be derived from wind. Unfortunately many statements like this have been made but they leave important issues obscure, such as whether class 4 and 5 wind regions are included as potential. Class 4 winds are said to hold 90% of potential wind energy, but it is far from clear whether their use will ever be viable.
A study reported in Planet Ark for June 2003 claims that US potential is far greater than previously thought when 262 ft towers are assumed, compared with the 164 ft towers in use today. Generating costs equal to those of coal fired electricity are claimed. I have not been able to clarify the nature, costs, problems associated with towers of such height, including possible storm failure rate.
A study by the Commission of the European Communities (1994, p. 34) concluded that "…realisable on shore technical potential is …about 350TWh, 23% of the Communities total electricity demand in 1990."
It is commonly assumed that windmills will perform at 25% capacity on average; i.e., that a 750 kW mill will have an average output of 188 kW. Caution is required here. Firstly, a mill's capacity factor is primarily a function of its location. Very good sites enable a mill to deliver over a long period 35% or more of the peak output it is capable of. However average capacity in the Netherlands, Denmark, Sweden and Germany has been reported as 22% (OPT Journal, 2003). The average capacity achieved by Californian mills in 1990 was 18.6%. (Elliott, Wendell and Gower, 1991, p. 56.)
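The capacity factor arithmetic is simple but worth making explicit (the 750 kW rating is the mill size used elsewhere in this paper; the capacity factors are those quoted above):

```python
# Capacity factor: average delivered power as a fraction of rated peak.
rated_kw = 750
common_assumption_kw = rated_kw * 0.25       # 187.5 kW, the "188" figure

for site, cf in [("good site", 0.35),
                 ("NL/DK/SE/DE average", 0.22),
                 ("California 1990", 0.186)]:
    print(f"{site}: {rated_kw * cf:.0f} kW average output")
```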
As with PV, performance in the field seems to yield efficiencies well below those one might expect from theoretical analyses, or from lab tests in ideal conditions. Although windmill efficiency can be expected to improve, the sites first used will tend to be the best ones, indicating that we could expect average capacity to decline over time as less ideal sites have to be used.
As with other renewables it is a relatively simple matter to introduce wind power within a system primarily based on other sources, enabling adjustment of the coal or nuclear generating rate to accommodate fluctuations in the renewable source. However when the wind contribution rises beyond a certain proportion of total demand problems arise, especially the need to leave some of the renewable component sources idle part of the time. It is commonly assumed that in good wind regions wind might be able to supply 20-25% of electrical energy produced by the system before a penetration problem arises.
Denmark is reported to have such a problem even though wind has only a 13% penetration, resulting in much energy being dumped at certain times, and much having to be sold at low prices. (Country Guardian, 2002.) This problem is said to have affected 34-45% of wind generated electricity in 2000.
Denmark's extensive development of wind energy has been facilitated by the fact that its neighbours have made much less investment and have therefore been able to buy Denmark's surplus when it was available. In a renewable energy world there would be less scope for this. Denmark's problem suggests that the 25% penetration in good wind regions commonly assumed might be optimistic.
The considerable penetration achieved by renewable energies in some countries has been due in part to large subsidies. While these are desirable in order to enable development of these industries, they can give a misleading impression regarding the viability of the technologies. Coal fired power can be produced for 2-4 c/kWh in many countries, yet in Australia Pacific Power pays home owners 10c/kWh for power fed into the grid from home rooftop systems. In Denmark the subsidies are "very large", 10 billion DKK per year, around DKK .45/kWh, and the price of wind electricity is 4 to 5 times that of electricity from other sources. (Country Guardian, 2002.) In Germany the subsidy for PV power is reported to be 48 Euro cents per kWh. Worldwatch (2001-2, p. 46) reports PV power in Germany receiving a 10 year interest free loan plus 50c/kWh. I have a report that in the US the subsidy is 3.3c/kWh.
Although we should be willing to pay much more for renewable energy the question is at what point costs would become too high. We might be able to cope with a 5 fold increase in price, but a 10 fold increase would seem to be quite prohibitive.
Figures from a proposal by Babcock and Brown for a 200 MW South Australian wind farm throw a little light on what seems to be a precarious financial situation. (Sydney Morning Herald, 17th July, 2003.) The project will cost $450 million, and will sell electricity at 8.0c/kWh. Thus over 25 years and at 25% capacity income will be $1051 million. At the probable loan repayment rates (from personal communication) interest on capital borrowed will probably be $250 million. Operations and management (at 2% of capital cost p.a.) will be c. $225 million. Cost will therefore be in the region of $960 million, i.e., not much below total lifetime income. Annual earnings would therefore seem to be $3.6 million, or .8% of invested capital. Assuming a 30 year lifetime and a 30% capacity factor would improve things, but these figures make it difficult to see how projects could be viable without a subsidy that enables 2-3 times coal-fired generating cost to be charged. (The above estimates are not made with great confidence.)
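The rough bookkeeping behind these figures can be set out as follows (all inputs are the figures quoted above, not verified project data; the itemised costs sum to about $925m, slightly below the rounded "region of $960 million"):

```python
# Sketch of the Babcock and Brown project arithmetic, $A millions.
income_m   = 1051                     # lifetime income as quoted above
capital_m  = 450                      # project cost
interest_m = 250                      # lifetime interest (estimate above)
oandm_m    = 0.02 * capital_m * 25    # 2% of capital p.a. over 25 y = 225

component_sum_m = capital_m + interest_m + oandm_m    # 925
lifetime_cost_m = 960                 # the text's rounded total
annual_earn_m   = (income_m - lifetime_cost_m) / 25   # ~$3.6 m/y
return_on_cap   = annual_earn_m / capital_m           # ~0.8% p.a.
```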
As with solar energy, wind energy varies considerably over time. This is not such a problem if non-wind generators can be turned up when winds are low. However the question this paper is primarily concerned with is whether renewables can meet almost the whole of demand, which sets problems to do with storage and over-capacity. Ferguson (2003, p. 3.) notes how energy despatchers in the UK need firm commitments from wind farm operators regarding the amount of power they can deliver 4.5 hours ahead. Because the wind farms can't be very certain about this and because there are penalties for falling short, they tend to aim low and in one recent year ended up delivering only 86% of the energy wind farms generate.
More important is the large variation in wind energy and therefore capacity achieved from summer to winter. In Denmark, Germany, the Netherlands and Sweden the winter capacity of windmills in 2000 averaged 33% but the summer capacity averaged only 15%. In August 2000 German and Netherlands capacities were actually only 8% and 7%, after averaging 38% and 35% in February. Thus in these two countries capacity varied by a factor of 4 or 5, meaning that a system capable of fully meeting summer demand might be 80% idle in winter.
For Europe as a whole Czick and Ernst (2003) report that windmill capacity varies from 55% in February to 12.5% in May, and averages under 18% for the four warmest months of the year.
Europe as a whole has a 2.5 to 1 variation in wind energy from winter to summer, much the same as in the US. For Australia the variation is between 1.4 to 1 and 1.8 to 1. (http://www.iset.uni-kassel.de/abt/w3-w/folien/magdebO30901/folie_41.html)
In addition there can be significant variation in wind averages from year to year, up to 25% according to the World Energy Council (1994, p. 152).
The areas required
The area over which windmills must be placed to equate to a 1000 MW power station is quite large. If 750 kW mills of 80 m diameter are placed at 10 x 5 diameters, lose 13% of energy due to array interference and function at 25% capacity, then 6135 mills spaced over 2044 square km would be needed to deliver 1000 MW (and three times as many would have been needed in August 2000 in the Netherlands and Germany, given the low capacity factors discussed above). This estimate does not take into account losses in connecting wiring and power conditioning equipment, nor in transmission from wind farms to users.
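A sketch of this derivation (minor differences from the 6135 mills and 2044 square km above presumably reflect rounding in the original):

```python
import math

# 750 kW mills, 80 m rotors, 10 x 5 diameter spacing,
# 13% array-interference loss, 25% capacity factor.
rated_kw, diam_m = 750, 80
spacing_km2 = (10 * diam_m / 1000) * (5 * diam_m / 1000)   # 0.32 km^2/mill
avg_kw_per_mill = rated_kw * 0.25 * (1 - 0.13)             # ~163 kW

mills    = math.ceil(1_000_000 / avg_kw_per_mill)          # ~6100+ mills
area_km2 = mills * spacing_km2                             # ~2000 km^2
```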
Europe probably has 120,000 square km of Class 5 land and above (7.5 m/s or better average wind speed), which would accommodate windmills equivalent to 57 power stations. The actual number possible for densely settled Europe would be much lower (due to the savage effect of the "exclusion factors" discussed below).
Again in the US and Europe where considerable development of wind energy has taken place, performance figures currently reported for windmills will be associated with the best sites. As time goes by further development of wind farms will tend to be in less ideal sites, hence the overall capacity factor for the wind system might be expected to be lower than at present. (However, improvements in technology etc will tend to improve it.)
The area of Class 5 or better winds in the US would enable the equivalent of about 240 power stations, again ignoring the exclusion factor. US electrical energy, approximately 12.97 Quads in 1999, equates to about 433 power stations (operating at .7 capacity.) Again the effects of exclusion factors and losses in long distance transmission from the best US wind regions to the Eastern and Western cities would have to be added.
CSIRO modelling for NSW, Australia, indicates that within the best 90,000 square km of the state there are 550 square km with winds over 8 m/s, and 7000 over 7 m/s. (This is via a 2003 personal communication from the NSW Sustainable Energy Development Association, but CSIRO has confirmed that the figures come from their recent mapping.) At 2,044 square km per power station this would correspond to .25 and 3.5 power stations, again ignoring exclusion factors. NSW peak power demand corresponds to about 16.5 power stations operating at .7 capacity. Note that there would probably be additional suitable areas outside the 90,000 sq. km surveyed, but probably not very much, as this area would have been taken as the most promising for wind generation.
Australia's total electricity demand in the late 1990s was 1200 PJ, or 38 GW. This is equivalent to the output of 58 power stations functioning at .7 capacity. It was estimated above that a wind farm of 2044 square km is required to replace one coal-fired 1000 MW power station, or of 1430 square km to replace one coal-fired power station operating at .7 capacity. This indicates that the area of windmills needed to provide Australian electricity demand would be 81,200 square km.
A surprisingly large proportion of the areas with good wind generating potential has to be excluded from use for a variety of reasons, primarily pre-existing use and distance from electricity grids. It seems that for these reasons on-shore wind development in Denmark, where wind supplies only 13% of electricity, is close to its limit due to these exclusion (and other) problems. (Country Guardian, 2002.)
The 1997 US EIA/DOE study (2002) came to the remarkable conclusion that "…many non-technical wind cost adjustment factors … result in economically viable wind power sites on only 1% of the area which is otherwise technically available…"
Elliott (1994, p. 8.) estimated that siting constraints would limit wind to providing 10% of UK electricity demand. Elliott, Wendell and Gower (1991) state that 75% of the class 7 wind area of the US would have to be excluded from use.
Offshore wind potential
The American Wind Energy Association (2000) estimates US off-shore potential as 1/7 that of on-shore potential. The former is more expensive to construct and maintain.
Czick and Ernst (2003) discuss the possibility of linking the whole of Europe to regions such as Siberia, several thousand kilometres away, in order to overcome the problems set by wind variability within smaller regions. The correlation between wind speeds falls as the area considered increases. At one point in time low winds might affect all mills in a small area, but at that time good winds will probably be blowing in some other region far away. The closer the correlation between winds within a given region approaches zero, the closer the system will come to having constant electricity output (at a level corresponding to the average capacity factor for mills in the system). (I have not been able to find evidence on the actual correlations that occur within specific regions; CSIRO Australia is reported to be working on this.)
Czick and Ernst argue that such a system would reduce the variability of supply to about 10% and enable the associated need for storage to be met by pumped storage using existing dams.
Systems of this kind would involve losses due to sending large quantities of electricity several thousand kilometres. Czick and Ernst state that at present these losses would be 16% but could fall to 10% given construction of HVDC lines. Transmission lines would probably be limited to 5GW each. The cost of these would have to be added to the cost of the wind energy system. Czick and Ernst estimate that HVDC transmission adds 30% to windmill costs.
A report from Electronix Corporation, Western Area Power Administration (no documentation available) says that 500 kV lines capable of carrying 660 MW cost $(US)600,000 per km, substations for 250 kV lines cost $160/kW, and undersea cable for 250 MW lines costs $400,000 per km. These would seem to be substantial additions to the cost of long distance wind energy supply. Arnold (2003) reports that 5 GW HVDC lines from coal powered stations would add 40% to generating cost, at $(US)2 billion for 5000 km. Where lines are buried, provision for heat dissipation would add to costs; some 100 W/m. (I am trying to get costs for the Bass Strait line being constructed.)
However the cost of the feeder lines from windmills to the HVDC line would be substantial, given that a 5GW line would have to be connected to some 35,000 mills in a network over 10,000 square km. The connections between the mills would probably require some 17,000km of wiring.
According to one estimate if 5GW HVDC lines cost $1000/kW this would add 40% to the cost of coal-fired power. (email@example.com)
Another report notes the problem of conductor size and weight. For copper the diameter would have to be 27 cm and for aluminium 36 cm. Neither material has high tensile strength, so pylons would have to be located much closer together than for normal transmission lines. Cost implications are not explored.
These large scale systems would also encounter the problem of seasonal variability mentioned above. In winter there is about twice as much wind energy as in summer.
Czick and Ernst, indicate that for the intercontinental system they consider (from Europe to Kazakstan, and from Siberia to Mauritania) output would still be 50% higher in winter than for the 4 summer months, and November output would be lower than the winter average.
Czick and Ernst say this system could supply 30% of base load demand, if it had a non-wind backup capacity equal to 26% of the rated power of the windmills. This is a surprisingly large backup requirement for a system that is only capable of reducing supply from coal or nuclear by 30%.
Note also the political and moral difficulties that such a system involves. It would harvest for Europe the wind resource from an area some 5-6 times as large as Europe, in order to meet only 30% of (present) European demand. Surely the many people living between Mauritania and Kazakstan would also like access to energy harvested from their lands. In a just and sustainable world some energy exporting might be acceptable, but the figures Czick and Ernst give indicate that there is nowhere near sufficient wind in this large area to provide European per capita electricity consumption for all people living within it.
Conclusions regarding Wind energy?
Again it is difficult at this stage to state confident conclusions about the potential of wind energy. In many regions, especially Europe, Canada, New Zealand, the Central US, and Crete, it will clearly make a considerable contribution to electricity supply, but even in Europe problems of variability, integration and availability of space seem likely to limit the contribution to a small fraction of present demand. There are several very optimistic claims re US potential, including a Worldwatch claim that it could supply all US energy, not just electricity. (Such claims often refer only to the energy in the wind, not the quantity that can be harvested and delivered when and where it is needed.)
However Tyner (2002, p. 13) concludes "…under the most optimistic assumptions, the analysis suggests that wind power is capable of furnishing only a small fraction of the net energy needed to power the US economy…"
Long distance and inter-continental transport of energy via hydrogen seems to be ruled out by the high losses involved (see below). HVDC seems more viable for long distance transport but involves high costs and significant losses, and sets problems to do with equity (though not within the US).
Australia's prospects seem to be much less promising than Europe's. The resource might be quite large but most of it is presently far from grids. Again, if these were constructed their cost would have to be added to total wind system costs, and losses in transmission would be significant. The above estimates of the areas required seem to indicate that wind cannot meet more than a fraction, say one-third, of current demand. The rapid growth in demand for electricity will be commented on below.
The second of the two crucial energy sources for industrial societies is liquid fuel and the potential solar source of this is biomass. The limit here seems to be much clearer and more severe than for electricity despite the fact that evidence and estimates on some of the basic variables again differ considerably.
Biomass yields and quantities
The limits to liquid fuel production are not primarily to do with the energy return ratio (considered below). They are to do with quantity, i.e., the areas of land available and the associated yields.
Non-plantation sources are far from sufficient to solve the problem. Lynd (1991) estimates that idle US cropland could provide only 14%-28% of current US transport fuel, even making the extremely optimistic assumption of 21 t/ha biomass production. (US corn plant growth is 15 t/ha with intensive application of fertilizer, water and pesticides on good soils. US average forest growth is only around 3 t/ha/y.) Di Pardo (undated) says that 10% of US cropland is the maximum that could be used to produce cellulosic biomass inputs.
Lynd estimates that 186 million tonnes of waste biomass (dry) could be collected in the US (at under $56/t in 1994 dollars, the higher of two costs examined). Lynd (1996, p. 412) says this would yield 20 billion gallons of ethanol, which is equivalent to only 6% of US petroleum consumption.
The Oak Ridge National Laboratory says US forest wastes could provide 8 Q, whereas all US energy is around 85-90 Q. (ORNL, undated.)
The plantation question should be seen in terms of what areas are likely to achieve what yields per year, via procedures that are sustainable over very large areas in the very long term. World average forest growth is around 2 t/ha/y (FAO, Undated) and the Australian average forest growth rate is probably well below the world average rate. However Mason (1992) says pine grows in Australian plantations at around 4t/ha/y on average and Bartle (2000) reports mallee harvest at 7.5 dry t/ha/y. Some Australian plantations achieve 10-12 t/ha/y growth, but these are in select small regions where conditions are unusually favourable. Giampietro et al (1997) say woody biomass can be harvested at 8.5 dry tonnes/ha/y, but this would assume relatively favourable growing conditions.
Australia's forests total approximately 41 million ha but the potentially harvestable area might be only 20 million ha when water catchments, national parks and the wishes of private owners are taken into account. Nilson et al (1999) conclude that in general possibly 40% of existing forest areas might not be accessible to biomass harvesting, being on steep slopes, near creeks, or on private land or protected catchment. (These restrictions would not apply to plantations established specially for biomass use.) In addition, note should be taken of the fact that if Australia were to be self-sufficient in forest products local production might have to be increased considerably. (Imports are $2.7b p.a., while exports are only $1.2b p.a.)
Also, approximately 6 million t of wood are presently being harvested p.a. for domestic heating in Australia. Current Australian and world timber and fuelwood demand are probably well beyond maximum sustainable quantities.
These figures indicate that Australia's existing forests are not likely to be capable of contributing significantly to the large quantity of biomass required. (Estimates are given below.) As will be explained below, plantations for energy production are not likely to solve the Australian problem. Currently there are only about 1 million ha under plantations in Australia, and its relatively poor soils would probably place severe limits on the extent to which this area could be increased and continuously cropped. Mercer (1991, p. 81) says Australia might increase plantations to 10 million ha. The required area (estimated below) should be considered in comparison with the 23 million ha of pasture and the 21 million ha of cropland presently in use in Australia.
Optimistic conclusions on the potential for biomass typically make very high assumptions regarding achievable biomass yields. For example Lynd (1996) and Foran and Mardon (1999) assume dry weight yields can be 20-21 t/ha/y, and that these can be maintained year after year. Such discussions usually make reference to instances where yields of this order and greater have been achieved in specific locations or experimental conditions. For instance the Oak Ridge National Laboratory reports switchgrass, willows and poplars in the US growing in experimental plots at 11-15 t/ha/y (McLaughlan, 1999). However very large scale biomass production would require large areas of land, and it is not plausible that large areas with such yields can be found in the US, let alone in Australia with its poorer soils. (American agricultural yields per hectare are around twice Australian yields.) Personal communications from ORNL state that such high yields are likely from only about 20 million ha of US cropland. ORNL (undated) estimates that only 8 Q would be available for fuel production in the US (presumably not including potential plantations). Hohenstein and Wright (1994, p. 187) found that only 91 million ha of US farmland could yield an average of 5 t of biomass per ha per year. Graham (1994, p. 187) concluded that 88 million ha of US farmland will be available by 2030, but 75% of this will not be suitable for bio-energy production, meaning that only 16.2 million ha will be available.
Consider the following yields for Australian agriculture: wheat, 1.9 t/ha/y (i.e., grain; total plant biomass might be 3 t/ha/y); fodder, 3.5 t/ha/y; overall agricultural production excluding sugar cane, 2 t/ha/y. (US wheat straw is 3.3 t/ha/y; Pimentel email.) In other words biomass yield from Australian cropland, which is obviously the best growing land available, is under 4 t/ha/y (after the application of 3.5 million tonnes of fertiliser and considerable pesticide and irrigation inputs). (A.B.S., 1997-8.) It should also be noted that c. 15% of biomass harvested is lost in six month storage (Wright, 1994), and that biomass energy production is likely to take fertilizer applications comparable to those in agriculture. US corn production takes c. 135 kg of nitrogen per ha per year, and wheat 60 kg. Panney and Mason (1994) estimate that biomass energy plantations will require 50-60 kg of nitrogen per ha per year. These energy cost equivalents have not been included in the following derivations.
The Australian CSIRO Beyond 2025 Report (Foran and Mardon, 1999) argues that biomass energy for Australia could come from the areas that need to be replanted to remedy Australia's dryland salinity problem. However dryland could be expected to have biomass yields that are a small fraction of those for average Australian cropland. Nevertheless Bartle (2000) expects coppicing of Eucalyptus mallees to yield 5-7.5 dry tonnes of feedstock per ha per year, although there is at this stage little evidence on the areas that could sustain such yields or on how yields will stand up over time. Continued harvesting from nutrient poor soils could be expected to lead to deterioration in growth rates, or to require fertilizer application, thereby adding to the energy costs of production. Morrow (19 ) reports that half of Australia's farmland should be fallowed.
Berndes, Hoogwijk and van den Broek (2003) review 17 studies of global total biomass yield potential. Unfortunately these differ greatly in assumptions and conclusions, and some seem to involve quite implausible growth rate assumptions (e.g., 46-99 t/ha, which Berndes et al say are not supported). However inspection of the core plot of estimates is instructive. It includes a yield-by-area graph for world grain production, sloping down from c. 7 t/ha to meet the base line at 700 million ha. If a line is drawn parallel to this curve but at twice its height, to represent total potential biomass production, the total yield under this line approximates the average of the 17 estimates plotted in their Fig. 6. This would represent a plantation yield of c. 12 t/ha on a small amount of the best land, tapering to zero yield on the last of the c. 1500 million ha assumed for biomass plantation. It is equivalent to an average yield of 10 t/ha from the 600 million ha anticipated, i.e., a total yield of 6000 million tonnes. (The FAO estimate of the current world forest biomass harvest is 6.6 billion t; undated.) This is 120 EJ of primary energy, compared to the current world primary energy consumption they state, i.e., 410 EJ. If converted to methanol it would yield a net of approximately 6,000 million tonnes x 40 gallons of petrol equivalent per tonne (this assumption is discussed below), i.e., 240 billion gallons, or 5.6 billion barrels, which is 21% of present annual world crude oil consumption. (Note that 600 million ha compares with the present total world timber plantation area represented in Fig. 6 of only c. 12 million ha.) FAO (undated) indicates that the present forest harvest is about equal to the potential harvest.
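The chain of conversions can be checked as follows (the 20 GJ/t calorific value, 42 gallons per barrel and c. 27 billion barrels per year of world oil consumption are assumed values consistent with the figures above):

```python
# Global biomass-to-liquid-fuel arithmetic, using the assumptions above.
area_mha    = 600                       # million ha of plantation
yield_t_ha  = 10                        # average dry t/ha/y
biomass_mt  = area_mha * yield_t_ha     # 6000 million t/y

primary_ej  = biomass_mt * 20 / 1000    # ~120 EJ/y of primary energy
petrol_bgal = biomass_mt * 40 / 1000    # 240 billion gallons equivalent
barrels_b   = petrol_bgal / 42          # ~5.7 billion barrels
oil_share   = barrels_b / 27            # ~21% of world oil consumption
```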
The general limit on biomass growth, and therefore on energy production from biomass, is set by photosynthesis. In natural ecosystems only about 0.07% of the solar energy input becomes stored as energy within plant material, although in special agricultural situations such as sugarcane growing the figure can rise to 0.5% (Pimentel, 1997, p. 14). For a region averaging 5 kWh/m²/day of solar energy, natural vegetation would be storing energy at a rate of only approximately 1.4 kW/ha (i.e., average continuous flow over 24 hours). Pimentel notes that not all of this will be harvestable, as the plant will need to use perhaps 40% for its growth processes (1997, p. 14). The gross available energy flow (before the significant losses due to biomass processing, conversion and storage) would therefore be around 0.84 kW/ha. This might be compared with the average per capita US consumption rate for all forms of energy combined of approximately 10 kW.
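The photosynthetic limit quoted here can be reproduced directly from the stated assumptions (5 kWh/m²/day insolation, 0.07% storage efficiency, 40% respiration loss):

```python
# Sketch of the photosynthetic limit on biomass energy flow.
# Assumptions from the text: 5 kWh/m2/day insolation, 0.07% of solar
# input stored in plant material, 40% of that used by the plant itself.
insolation_kW_per_ha = 5 * 10_000 / 24     # ~2083 kW/ha continuous flow
stored = insolation_kW_per_ha * 0.0007     # ~1.46 kW/ha (text rounds to 1.4)
harvestable = 1.4 * 0.6                    # 0.84 kW/ha after respiration
print(insolation_kW_per_ha, stored, harvestable)
```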
Energy Return on Investment (EROI)
Crucial in assessing the potential of biomass energy forms is the difference between the amount of energy produced in the required form and the amount of energy that has to be used to produce it. Two issues need to be distinguished here: firstly, the proportion of the energy in the input biomass that ends up in the liquid fuel, which could be defined as the gross output; and secondly, the amount of energy it takes to achieve this, which enables us to derive the net output.
Sometimes the ER measure is confused by including the energy content of the
biomass in the sum for total energy used in the production process. This is not
done in the following discussion. As with petrol, coal etc., the important
question is how much energy has to be used up to make available a unit of energy
in a particular form.
What proportion of energy in the biomass ends up in the liquid fuel?
The 0.84 kW/ha flow of "potentially retrievable energy" into the feedstock, derived from the typical photosynthesis rate, is equivalent to an annual quantity of 26,490 MJ/ha. This is the energy content of 1.65 tonnes of wood, equivalent to 212 gallons of petrol. In some regions photosynthesis is much higher than average, but this 212 gallon gross figure would seem to indicate the upper limit for the energy output from a liquid fuel production process based on very large areas of land and therefore average photosynthesis. (For the net output the energy cost of production must be subtracted; see below.)
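The annual figures follow from the 0.84 kW/ha flow. The sketch below assumes wood at about 16 GJ/t and petrol at 33 MJ/L in US gallons of 3.785 L, which are the conversions implied by the figures used throughout this paper:

```python
# Converting the 0.84 kW/ha continuous flow to annual quantities.
# Assumptions: wood at ~16 GJ/t, petrol at 33 MJ/L, 3.785 L/US gallon.
annual_MJ_per_ha = 0.84 * 8760 * 3.6            # kW -> MJ/y: 26,490 MJ/ha
wood_tonnes = annual_MJ_per_ha / 16_000         # ~1.65 t of wood
petrol_gallons = annual_MJ_per_ha / (33 * 3.785)  # ~212 gallons
print(annual_MJ_per_ha, wood_tonnes, petrol_gallons)
```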
Ethanol production at present results in about one-third of the energy content of the input biomass ending up in the ethanol (Lynd, 1996; Australian Biofuels Association, 2003). This is the equivalent of 53 gallons of petrol gross per tonne of biomass. Lynd (personal communication) predicts that it will become possible to convert up to 56% of the energy in the biomass to ethanol, corresponding to a gross yield of 88 gallons of petrol per tonne of feedstock.
How much energy is needed to produce the liquid fuel?
The production of liquids from biomass usually has a low (sometimes zero or negative) net energy return on energy invested; i.e., it might require more energy to be put into the harvesting and distillation etc. than is available in the resulting fuel. Conclusions from different analysts vary significantly.
First it is important to consider how the accounting should be carried out. For example, should useful waste energy from the process be subtracted from the input energy before a net energy cost is arrived at? This could be appropriate if that waste energy can be used in the process. Evidently there are no possibilities for this in the production of ethanol from corn, but where cellulosic materials produce ethanol or methanol the lignin waste can be used to produce some of the electricity needed. It is not clear in Lynd's account what difference including electricity produced from lignin waste would make to the net energy required for ethanol production.
Secondly, should we be concerned only with the input energy that must be in the form of liquid fuel, and subtract only this from output in order to arrive at a net energy return figure for liquid fuel production; i.e., should we ignore non-liquid fuel inputs? This might be acceptable if the non-liquid energy inputs needed are easily derived from other cheap and abundant sources. However, in a sustainable energy world stretched for energy the large volumes of non-liquid inputs would also probably have to come mostly from biomass, so it seems appropriate to subtract all input energy costs from gross output energy when deriving an EROI figure. The electricity could in principle come from non-biomass sources independent of the ethanol plant (i.e., other than generated from the lignin by-product). However, from the earlier discussion electricity supply will be a major problem, so it will not be assumed here that surplus electricity will be available from external sources for liquid fuel production.
Pimentel and Pimentel (1998) conclude that for ethanol produced from corn "...about 71% more energy is used to produce a gallon of ethanol than the energy contained in a gallon of ethanol." (See also Pimentel 1984, 1991.) Ferguson says the net energy capture of biofuels is "...so low that these methods are barely viable." (Ferguson, 2000b.) Ulgiati (2001) concludes that the energy return from ethanol produced from maize in Italy is 0.59, rising only to 1.36 when energy credits from waste are maximised. Slesser and Lewis say the return is 0.3 from acid hydrolysis and 0.125 from enzymatic hydrolysis. Giampietro, Ulgiati and Pimentel (1997) conclude from their review that the net energy return ratio for ethanol ranges between 0.5 and 1.7, again apparently without taking into account the energy needed to deal with the waste water. (However, Ulgiati, 2001, estimates this at only 1.7% of the ethanol energy.)
Lorenz and Morris (1995) argue that recent technical improvements now enable a positive net energy ratio for ethanol from corn, but only if energy credits for non-ethanol outputs are given.
More recently Shapouri et al (2002) have stated that the energy return for the production of ethanol from corn is 1.34. Pimentel criticises this analysis for not taking into account all energy costs of production. However this figure is derived by subtracting from input energy the energy that would have been required to produce useful output products, such as corn meal. If the energy content of the non-liquid fuel co-product is disregarded Shapouri says the ER falls to 1.08. This is the relevant figure for our purposes, i.e., assessing the viability of ethanol as the major or sole source of the most crucial energy source, liquid fuel.
Pimentel's recent study (2003) concluded that to produce a unit of energy in the form of ethanol from corn takes 29% more energy than the ethanol contains. If energy credit is given for the dry distillers grain output from the process, the deficit is still 20%. This study took into account embodied energy inputs, and details criticisms of the Shapouri et al. study.
However most if not all of these estimates derive from studies of the production of ethanol from corn. Lynd (1996) argues that cellulosic inputs such as wood and grasses can have an energy return of 4.4 (1996, p. 439), and over 7 in the long term future. Without disputing these figures, it should be noted that they can mislead and require careful interpretation. As noted above, attention must be given to how energy return is defined, and which definition is most appropriate for our purposes of understanding the liquid fuel problem. Lynd's figure includes the energy output not in the form of ethanol. About 40% of the energy in the cellulosic input biomass ends up in un-fermentable lignin, which can be burnt to produce electricity. Lynd says the electrical energy produced is equivalent to 20% of the energy in the ethanol, so the thermal energy in the lignin is about 60% of that in the ethanol. Again our concern is only with the ER situation regarding the production of liquid fuels, meaning that we are not consoled by the fact that other forms of energy might be derived from ethanol production. Thus the ER might be 4.4 overall, but for ethanol production alone it is only 2.75 on Lynd's account. Note however that this figure is much higher than Shapouri et al. have more recently arrived at.
Lynd's figures indicate that one tonne of biomass input (20 GJ, high value) will yield 6.6 GJ of ethanol, but if ER is 2.75 then the energy needed to produce this ethanol is 2.4 GJ. Thus the net ethanol output would be 4.2 GJ, equivalent to 34 gallons of petrol per tonne of input biomass. This is indeed the figure Lynd states in two sources for current technology (1996 and 2003).
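The derivation from Lynd's figures can be checked step by step (assuming, as above, petrol at 33 MJ/L and 3.785 L/US gallon):

```python
# Checking the figures derived from Lynd's account.
# Assumptions: 20 GJ/t biomass, petrol at 33 MJ/L, 3.785 L/US gallon.
input_GJ = 20.0
ethanol_GJ = input_GJ / 3           # ~6.6 GJ: one third ends up as ethanol
process_GJ = ethanol_GJ / 2.75      # ER of 2.75 -> ~2.4 GJ to produce it
net_GJ = ethanol_GJ - process_GJ    # ~4.2 GJ net ethanol output
net_gallons = net_GJ * 1000 / (33 * 3.785)   # ~34 gallons petrol equivalent
print(ethanol_GJ, process_GJ, net_GJ, net_gallons)
```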
It is not clear how the energy required as a liquid fuel to deal with the large volume of waste water has been taken into account in Lynd's figures. Giampietro, Ulgiati and Pimentel (1997, p. 591) state that there would be 13 litres of high-BOD sewage for each gross litre of ethanol produced (1997, p. 210), requiring energy for treatment equivalent to 50% of the energy in the ethanol. Ulgiati (2001) says the figure rises to 33.58 litres per litre of net ethanol, i.e., after the energy costs of producing the ethanol have been deducted from the output. (However, again he estimated the energy cost at only 1.7% of the energy in the ethanol.)
The large differences between Lynd, Shapouri and Pimentel regarding ER seem
to remain unreconciled at this point (personal communications). They are
probably due in part to the fact that Pimentel’s reference is mainly to corn as
the feedstock and to existing production systems whereas Lynd is discussing
cellulosic inputs and theoretical possibilities as no plants of this kind are in
commercial operation. (Lynd, 1996, p. 431.)
How large must energy return be in order to meet dollar costs?
How much greater than 1 must the energy return on energy invested be for biomass energy production to become economically viable?
Australia's hay/fodder production averages about 4 t/ha, and 30 bales/tonne, i.e., 120 bales /ha, and this would sell (pre-Australian 2002-3 drought) for about $550 gross income/ha. Australian Bureau of Agricultural Economics figures indicate that the cost of production is around $270-300/ha, meaning net income is c $270/ha.
The Australian Biofuels Association reports that in Australia one third of the energy in the input biomass ends up in the ethanol produced via current technology. Therefore about 26.4 GJ of ethanol would be produced per ha, equivalent to 790 litres of petrol. (In Australia the input material is mostly wheat, which probably has a high energy content compared with other options, such as switchgrass.)
Pimentel argues that the EROI figure of 1.34 arrived at by Shapouri et al. (2002) is too high, but let us use the figure for illustrative purposes for a moment. For each unit of ethanol output 0.75 units of energy (i.e., 1/1.34) are needed to produce it, so the net energy output from the process would be equivalent to 198 litres of petrol per ha. If a petrol producing firm sold this amount (i.e., not taking retail mark-up or taxes into account) it would probably earn about $79/ha, compared with the $550 that could be the gross income from the production of hay.
The farmer would be making about a $170/ha net loss, and the price of fuel sold to the petrol distributor would have to be about 7 times as high as it is now before the farmer would make as much as he does from producing hay.
In EROI terms, if the farmer is going to earn $550 gross from ethanol sold at around 40 c per litre, the approximate Australian "wholesale" pre-tax price of petrol, he must produce the equivalent of a net 1375 litres of petrol per ha. If one third of the gross energy in the biomass ends up as net liquid fuel, he would have to produce ethanol with a gross energy equivalent of 5,500 litres of petrol/ha. Finally, if as Shapouri et al. say it takes 3 units of energy to produce 1 net unit of ethanol, then the biomass energy produced would have to be equivalent to about 22,000 litres of petrol per ha. That much energy (22,000 x 33 MJ = 726 GJ) would require a biomass yield of 36 tonnes per ha (or 45 tonnes for lower energy content grasses etc.). US cropland probably averages about 4 t/ha, world forest average annual growth is around 2 t/ha, and plantation forestry yield is 7-10 t/ha. In other words yields would have to be some 10 times those for Australia's hay production.
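The checkable steps in this dollar chain can be sketched as follows (assumptions: 40 c/L pre-tax petrol price, petrol at 33 MJ/L, biomass at 20 GJ/t; the intermediate 5,500 litre step follows the author's own chain and is not re-derived here):

```python
# Dollar-return arithmetic behind the hay comparison.
# Assumptions: 40 c/L pre-tax petrol price, 33 MJ/L petrol, 20 GJ/t biomass.
net_litres = 790 * (1 - 1 / 1.34)    # ~200 L/ha net; the text rounds to 198
income = 198 * 0.40                  # ~$79/ha from selling the fuel
litres_for_hay_income = 550 / 0.40   # 1375 L needed to gross $550
gross_GJ = 22_000 * 33 / 1000        # 726 GJ, the end of the text's chain
biomass_yield = gross_GJ / 20        # ~36 t/ha of biomass required
print(net_litres, income, litres_for_hay_income, biomass_yield)
```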
The Australian Biofuels Association acknowledges that farmers could not make a living producing inputs to ethanol production. At present ethanol is produced in Australia largely from "free" inputs supplied by the wheat and sugar industries.
The point is that for biofuels to be economically viable against today's petrol prices, yields would have to be many times higher than is likely for very large scale biomass production, which would have to use much more than the high yield lands available. Alternatively, for production of biomass inputs to be economically viable we would have to pay many times the amount we now pay for liquid fuels; from the above analysis, perhaps 10 times as much. And note again, these estimates take Shapouri et al.'s ER conclusion of 1.34, which includes credit for co-products, while Pimentel says ethanol ER is negative; they also assume today's dollar costs for energy inputs, when these costs would be much higher in a renewable era in which energy is much more costly.
Net ethanol production
Again clear and confident conclusions are elusive, but general impressions seem decisive.
Giampietro, Ulgiati and Pimentel (1997) say that the ER for ethanol is negative, which would mean that no net fuel energy can be produced.
From the figures reported by Foran and Mardon (1999) it can be estimated that ethanol can be produced at a net energy yield equivalent to 26 gallons of petrol per tonne of dry wood feedstock.
From the above discussion of Lynd's figures the net yield seems to be
equivalent to 34 gallons of petrol per tonne of input, when credits for
co-products are ignored.
Methanol seems to be a more promising option than ethanol (though note the toxicity problem discussed below).
Foran and Mardon (1999) conclude that the methanol option will yield approximately 2.6 times as much gross energy in liquid form per ha as ethanol, i.e., not taking production energy costs into account. From their figures, for an energy input of 68.6GJ, including the energy in the 2.2t of feedstock, a net 13 GJ of methanol can be produced. The assumptions are that 2.2 dry tonnes of wood yield 1 tonne of methanol (from future technology), and that it is reasonable to deduct the whole 9.4 GJ needed to produce the methanol. Thus one tonne of biomass input will yield 5.9GJ of methanol, net, equivalent to 47 gallons of petrol.
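Foran and Mardon's figures can be reproduced if methanol is taken at a gross heating value of about 22.4 GJ/t (an assumption consistent with their 13 GJ net after deducting the 9.4 GJ production energy):

```python
# Foran and Mardon's methanol route per tonne of wood input.
# Assumptions: methanol at ~22.4 GJ/t gross, petrol at 33 MJ/L,
# 3.785 L/US gallon, 2.2 t of wood per tonne of methanol.
methanol_GJ = 22.4                   # gross energy in 1 t of methanol
net_GJ = methanol_GJ - 9.4           # 13 GJ net after production energy
per_tonne_wood = net_GJ / 2.2        # ~5.9 GJ per tonne of feedstock
gallons = per_tonne_wood * 1000 / (33 * 3.785)   # ~47 gallons petrol equiv.
print(net_GJ, per_tonne_wood, gallons)
```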
Ellington (1993) provides an analysis based on current energy costs, taking into account embodied energy factors such as steel and concrete used in construction of the plant. He concludes that for each tonne of woody biomass input with an energy content of 18.89 GJ, 9.95 GJ of methanol can be produced (i.e., 53% of the energy in the input biomass ends up in the methanol), but it takes 5.4 GJ to do this. The EROI is therefore 1.84. Each tonne of input biomass yields a net methanol output of 4.55 GJ, equivalent to 34 gallons of petrol.
Berndes et al. (2000) conclude that future technology could derive 72 gallons of methanol, equivalent to 36 gallons of petrol, from one tonne of cellulosic biomass. It is difficult to evaluate their account. They assume energy required at one-third that assumed by Ellington and by Giampietro et al. The difference regarding electricity required at the plant is large, 3.89 GJ vs 0.5 GJ per tonne of ethanol produced. It seems from their Table 1 that the 0.5 GJ refers to electrical energy and should therefore have been accounted as 1.5 GJ(th) (although they assume 50% efficiency in production of electricity from biomass; again their discussion mostly assumes future technologies and efficiencies that might be achieved). The footnotes b and c under Table 1, dealing with how inputs are accounted, are not clear, but they state that one way of accounting that could have been used would have cut their net yield by one third.
Again the differences are unsatisfactory. Berndes et al. and Foran and Mardon are talking about what they think will become achievable, and Berndes et al. assume 50% efficiency for electrical generation, whereas Ellington is reporting on present technology. The factor causing most variability in conclusions is the amount of energy assumed to have been used in the process. For the purposes of the following discussion it will be assumed that one tonne of biomass can produce the equivalent of 40 gallons of petrol.
Unfortunately there seem to be significant problems regarding the toxicity of methanol, especially with respect to motor repair. This factor has been reported to have led BMW to abandon R&D on methanol technology.
The demand for liquid fuel
US petroleum use (in the mid 1990s) was approximately 6.6 billion barrels or 277 billion gallons per year. (Youngquist, 1997, p 187.) Transport was taking approximately 212 billion gallons. (US Department of Energy, 2000.)
In 1998-9 Australia used 1681 PJ of petroleum and 881 PJ of gas. (Australian
Bureau of Statistics, 2000.) Combined petroleum and gas consumption is the
equivalent of 20.5 billion gallons of petroleum. (Note that the energy
consumption rate is growing; see below.)
Can the demand be met?
If we take the 40 gallons of petrol equivalent per tonne of biomass input figure, giving no energy credit for energy co-products that can be used in the process, and assuming that all energy inputs come from biomass, then to meet the Australian oil plus gas demand of 2562PJ would require an input of 500 million tonnes of biomass pa. If we assume an average yield of 7 t/ha, 70 m ha would be needed, which is 3.5 times all cropland and almost twice all forest area. Such an average yield is highly unlikely from such a large area of Australian soil.
To meet the US petrol transport consumption of 212 billion gallons, 5 billion tonnes of biomass would have to be harvested p.a., and at an average yield of 7 t/ha this would require 700 m ha. This is about 3.5 times all cropland and 2.5 times all forest. To include US gas consumption would increase the biomass needed to 8.5 billion tonnes. Note that in 1980 174 million tonnes of wood were already being used for US domestic heating (Pimentel, 1988, p. 189). These figures align with Pimentel's conclusion that US energy use of 85 Q is some 30 Q greater than the 54 Q of total solar energy captured by all US vegetation (Pimentel, 1998, p. 197; 1994). By 2003 US use had risen to 96 Q.
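The tonnage and land area figures for Australia and the US follow from the 40 gallon per tonne assumption (at 33 MJ/L petrol and 3.785 L/US gallon, i.e. about 5 GJ of net liquid fuel per tonne of biomass):

```python
# Biomass tonnage and land area implied by liquid fuel demand.
# Assumptions: 40 gal petrol equivalent net per tonne, petrol at 33 MJ/L,
# 3.785 L/US gallon, average plantation yield 7 t/ha.
net_GJ_per_t = 40 * 33 * 3.785 / 1000      # ~5 GJ net liquid fuel per tonne

aus_demand_GJ = 2562e6                     # 2562 PJ oil plus gas, in GJ
aus_tonnes = aus_demand_GJ / net_GJ_per_t  # ~510 million tonnes p.a.
aus_Mha = aus_tonnes / 7 / 1e6             # ~73 m ha (text: ~70 m ha)

us_tonnes = 212e9 / 40                     # 212 b gallons -> 5.3 billion t
us_Mha = us_tonnes / 7 / 1e6               # ~760 m ha (text rounds to 700)
print(aus_tonnes / 1e6, aus_Mha, us_tonnes / 1e9, us_Mha)
```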
Khashgi et al. (2000) point out that present US ethanol production is equivalent to 0.8% of gasoline use, and is grown on 1% of US cropland, meaning that some 120% of all cropland would be needed for gross production of US gasoline. From this the energy cost of ethanol production would have to be subtracted. At another point they say only 14 m ha might be available for energy production in the US by 2030, and this might produce 4.8 EJ gross. US 1998 total energy use was 90 EJ, indicating that 262 million ha would be needed for gross energy output, some 1.65 times all US cropland.
Ferguson (2003) takes Shapouri et al.'s figures and shows that one-third of US cropland would provide a net energy yield equal to only 1.2% of the average US energy use per capita of 9 kW. This is a remarkable figure (the derivation set out seems quite sound) given the very high net yield of energy assumed, 18.3 GJ/ha/y from corn. (It is high because a considerable fraction of this total is an energy credit given for a co-product of the process, which I argued above should not be given when the concern is the liquid fuel account.)
Most regions of the world seem to have much less capacity than Australia to meet liquid fuel demand from biomass. The Australian total cropland, pasture and forest area per capita, 4.9 ha, is much higher than for most regions of the world. The figure for Europe is 1.6 ha, Africa 3.3, the USA 2.8, Asia 0.55, and for the world 1.43.
Khashgi et al. (2000) refer to Johansson's estimate (1993) that 350 m ha might be available globally for biomass energy crops, and that this could yield 80 EJ. It is not made clear whether this is meant as a gross or net figure. Even if it were a net figure, global fossil fuel use is given as 320 EJ, four times as high.
Johansson's conclusion roughly aligns with the conclusion arrived at by
Berndes above; the average production from the estimated world plantation
potential, 6000 million tonnes pa, would yield the equivalent of 20% of current
world petroleum consumption.
These considerations indicate that although a large volume of liquid and gas fuel could be produced from biomass, it is not plausible that this source could provide more than a small fraction of current demand.
It should also be noted that if petroleum becomes scarce there will be feedback effects making the biomass situation more difficult. For instance if there is less fuel available and at higher cost then irrigation, fertilizers and pesticides will become more scarce and costly and agriculture will tend to become more labour and land intensive, and agricultural produce will become more costly, reducing the availability and increasing the costs of inputs to biomass production. There will tend to be a shift from energy-intensive building materials such as kiln-fired brick, aluminium, steel and plastics to timber, again increasing pressure on biomass sources. Looming water shortages and the impact of the greenhouse problem will probably significantly reduce biomass production. Also global economic development is accelerating the rate at which people are moving to cities, where per capita energy and resource consumption is higher. (However the proportion of meat in Western diets could be reduced considerably, freeing much land for the production of biomass.)
The implausibility of biomass meeting the present liquid fuel demand indicated by the foregoing figures is reinforced by comments from others.
Giampietro, Ulgiati and Pimentel, (1997) find that to produce only 10% of US
energy via ethanol would require 37 times the commercial livestock feed
production. They say that providing US food plus energy via biomass would
require 15 times the existing cropland, 30 times the agricultural water
consumption, and 20 times present pesticide use. For Japan the cropland multiple
would be 148. (p. 591.) "...none of the biofuel technologies considered in our
analysis appears even close to being feasible on a large scale due to shortages
of both arable land and water..." (p. 593.)
Finally the ecological implications of large scale, intensive, continuous
biomass production are unknown. Some would argue that nutrient removal equates
to soil deterioration in the long run.
The magnitude of the problem is made clear when expressed in "footprint" terms (Wackernagel and Rees, 1996). At the above output of 40 gallons of petrol equivalent per tonne of biomass per year, Australia's per capita petrol consumption of 708 gallons per year would require 17.7 tonnes of wood, or 2.4 ha at a 7.5 t/ha yield. (Per capita oil plus gas consumption would require 3.7 ha.) In addition Pimentel and Pimentel (1997) estimate that 2.2 ha of forest would be needed to yield the 10,000 kWh of electricity used by one person in a rich country per year. Thus per capita liquid fuel plus gas plus electrical energy production from biomass would require 5.9 ha. To this must be added the productive land area needed for food, water, settlements and pollution absorption etc. However the total global amount of productive land available per capita is only approximately 1.2 ha. If population rises to 9 billion (and the present rate of productive land loss ceases), by 2070 the per capita area will be approximately 0.8 ha.
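The per capita footprint arithmetic is straightforward to reproduce (assumptions as above: 40 gal net per tonne, 7.5 t/ha yield):

```python
# Per capita "footprint" of biomass liquid fuel for Australia.
# Assumptions: 40 gal petrol equivalent net per tonne of biomass,
# 7.5 t/ha plantation yield; oil+gas and electricity areas from the text.
wood_tonnes = 708 / 40            # 17.7 t of wood per person per year
petrol_ha = wood_tonnes / 7.5     # ~2.4 ha just for petrol
total_ha = 3.7 + 2.2              # oil+gas land plus electricity land
print(wood_tonnes, petrol_ha, total_ha)   # 5.9 ha vs ~1.2 ha available
```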
It is widely assumed that the ultimate solution to the energy problem will be via "the hydrogen economy". There are persuasive reasons for concluding that this is mistaken.
Firstly, it is not commonly understood that hydrogen is not an energy source; it is only a carrier, i.e., a form into which energy can be converted. The problem then is from what source all the hydrogen we would need is to be produced, and in a renewable energy world the only sources of significant quantities are PV, biomass or wind.
As was explained above for PV, converting energy to hydrogen and storing and transporting it involves formidable difficulties, energy losses, infrastructure requirements, and costs. These multiply the number of windmills etc. that a system will need to cover the losses. For example, to convert wind generated electricity to hydrogen with a 30% energy loss, then to convert the hydrogen back to electricity when it is needed later at a 67% energy loss (or possibly 40-50% for a fuel cell), would mean that about four times as many windmills would be needed to supply 1 kW via storage as to supply it directly.
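The "four times as many windmills" figure follows directly from compounding the two losses:

```python
# Round-trip loss from wind electricity via hydrogen storage back to
# electricity. Assumptions from the text: 30% electrolysis loss,
# 67% loss on reconversion to electricity.
to_hydrogen = 1 - 0.30                  # fraction surviving electrolysis
back_to_electricity = 1 - 0.67          # fraction surviving reconversion
round_trip = to_hydrogen * back_to_electricity   # ~0.23 of original energy
windmill_multiple = 1 / round_trip      # ~4.3 times as many windmills
print(round_trip, windmill_multiple)
```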
Next we encounter the unique problems of storing and transporting hydrogen. It is a very light element and therefore even when compressed or liquefied a large volume container does not hold much energy. Elliason and Bossel state that a 40 tonne tanker delivering hydrogen will only deliver the equivalent of 320 kg of petrol. (This figure has been disputed; LBST state that it is 10 times too low if hydrogen is liquefied, but even then a 40 t truck would only be delivering the equivalent of 3.5 t of petrol, and there would be a large energy loss in liquefying which is not taken into account here.) According to Elliason and Bossel, to supply the petrol station with hydrogen will require 21 times as many tankers as would be needed to deliver the same quantity of energy in the form of petrol. They say that to replace today's demand for petrol for motor transport with hydrogen would mean that one sixth of the trucks on the road would be carrying hydrogen, and thus one sixth of the truck accidents would involve large quantities of hydrogen under pressure.
Liquefying the hydrogen results in a smaller volume for transportation. However, to transform electricity into liquefied hydrogen requires energy equivalent to about half the energy in the hydrogen. Furthermore, energy must be used to keep the hydrogen at -253 degrees C. Overall, energy consumed in storage is around 0.3% per day; i.e., to store hydrogen for the six months from summer to winter would use up energy equivalent to more than half the stored energy. Further losses would occur at filling points and through valves and joints.
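The seasonal storage loss can be checked from the 0.3% per day figure; the "more than half" claim follows the simple linear estimate, while compounding the daily boil-off gives a somewhat smaller but still very large loss:

```python
# Boil-off losses in seasonal liquid-hydrogen storage.
# Assumption from the text: 0.3% of the stored energy lost per day.
days = 182                                 # roughly summer to winter
linear_loss = 0.003 * days                 # ~55%: the simple estimate
compounded_loss = 1 - (1 - 0.003) ** days  # ~42% if losses compound daily
print(linear_loss, compounded_loss)
```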
In addition, storage tanks often cannot be completely emptied. When pressure in the delivery tank falls to that of the receiving vessel no more hydrogen will flow unless more energy is used to pump it and raise pressure. This means tankers must make return trips carrying some hydrogen back, and thus the volumes actually transported can be somewhat lower than tanker volume might suggest.
Large scale intercontinental transport of liquid hydrogen by tanker also seems to be highly problematic. Wootton (2003) points out that a modern LNG tanker delivers about 3 billion cf of gas. It would make about 12.6 trips p.a. from Nigeria to the US. US gas consumption is about 23 tcf/y, so the tanker can deliver 0.17% of demand. Note that the c 38 bcf delivered annually is a gross figure; if the energy needed to produce, compress and transport the gas, and the losses, were taken into account it would seem quite unlikely that a high proportion of a nation's energy could be shipped long distance in the form of gas.
Transporting hydrogen via pipelines sets similar problems. The "hydrogen economy" vision usually assumes solar plants in the Sahara pumping hydrogen to Europe. This is a very unlikely proposition given the energy required to pump hydrogen long distances, again due to its low energy density. Elliason and Bossel conclude that to pump hydrogen gas 5000 km would take energy equivalent to 40% of the energy in the hydrogen delivered. (LBST dispute this claim, stating that if pipelines are 50% wider the loss falls to 36%, for 500 km transport; note 500, not 5000, which is not explained.) This is still a formidable loss, and it would seem to prohibit inter-continental transportation of hydrogen. (Long distance transmission of electricity via HVDC lines involves less loss.)
It is not likely that hydrogen can be pumped through existing gas pipelines. Firstly, hydrogen makes metals brittle. Secondly, gas pipelines lose energy, e.g., through joints; this is why engineers try to keep pressures as low as possible. Hydrogen's small molecular size enables it to leak out more easily, yet because of its low density the temptation is to pump it at high pressure. However, Lovins says existing pipelines can be used if fitted with plastic liners and the loss rate can be kept very low. He says that the recent claim that losses from a hydrogen economy might be so large as to damage the ozone layer is mistaken.
Hydrogen can be stored in the form of metal hydrides, but the tanks must be heavy and expensive, for example some 30 times the weight of a car's petrol tank. Unless the hydrogen is pure the hydrides will have reduced life expectancy. The weight of hydrogen stored is only around 1-2% of the weight of the metal in the storage medium.
Lovins (2003) argues that for automobiles compressed hydrogen gas storage will be best, via procedures that enable retrieval of some of the energy needed for compression using valves that regenerate power as the gas is released into fuel cells. (Net loss?)
Lovins points out that when the whole energy supply chain from oil well to wheels via petrol is compared with that from natural gas to wheels via hydrogen, the latter is 3 times as energy efficient as petrol. Thus he claims that very light and efficient hypercars could travel 5 times the distance on a unit of hydrogen energy as on a unit of petrol energy. This effect would be much reduced for transport vehicles where the predominant factor is not the lightness of the vehicle but the weight of the freight. As in Natural Capitalism, Lovins again fails to recognise any problem in providing enough natural gas to generate the hydrogen. (For a critical review see Trainer, in press.) He is assuming in effect that natural gas production can be increased by some 50%, when many believe that its availability is almost as problematic as petroleum, and is already causing alarm in the US.
These figures indicate that, as Elliason and Bossel say, long distance transport of large volumes of hydrogen seems to be ruled out. They also note that technical advance can't make much difference to this situation, because the problems are set by the physics of hydrogen. (However, some believe the energy loss in the production of hydrogen can be cut from 35% to 20% or less.)
Derive hydrogen from coal?
Coal could be processed to yield hydrogen at large central plants, enabling the carbon to be sequestered (the reference here is to underground or in the sea, not within forests). Sequestration involves capturing the carbon, transporting it to the site where it is to be located, and burying it. One source says the process costs 25% of the energy that would have been produced had it not been carried out, that it doubles plant generating cost, and that it extracts only 90-95% of the carbon. (http.ftp.ecn.nl/pub/www/library6/conf/ipcc02/costs-02-06.pdf) Another unrecorded report states that the energy loss in capture is 36%, and 41% when sequestering the carbon dioxide is included.
If this process made coal the major fuel, the world's estimated coal resources would not last long. Let us assume that: a) all present energy was to come from coal, meaning that the present c. 3 billion t/y coal production would be multiplied by 3 (maybe 4-5 when losses in conversion to liquids are taken into account); b) if all 6 billion people were to live as rich world people do now, the result must be multiplied by 5; c) if population grows to 9 billion, another multiple of 1.5 must be applied; d) if energy use continues to grow as at present in Australia, by 2050 use per capita would be about 5 times as great as it is now; and e) 41% of the coal energy is lost in conversion to hydrogen and carbon sequestration. Combining these multiples means that world coal output would have to be some 306 times the present rate, so even if the potentially recoverable resource is 2000 billion tonnes this would be exhausted in about 6 years.
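The depletion estimate follows from simply multiplying the factors together. The sketch below uses the author's own multiples; note that combining factors a) to d) alone already implies roughly the stated 6-year exhaustion of a 2000 billion tonne resource, and the conversion loss in e) would shorten this further.

```python
BASE_COAL = 3.0       # present coal output, billion tonnes/year (approx.)
RESOURCE = 2000.0     # potentially recoverable coal, billion tonnes

# a) all energy from coal (x3), b) equity (x5),
# c) population growth (x1.5), d) growth in per capita use (x5)
multiple = 3 * 5 * 1.5 * 5
years = RESOURCE / (BASE_COAL * multiple)
print(multiple)             # 112.5
print(round(years, 1))      # ~5.9 years, i.e. "about 6 years"

# e) a 41% loss in conversion and sequestration raises the
# required coal output further and shortens the horizon again
with_loss = multiple / (1 - 0.41)
print(round(RESOURCE / (BASE_COAL * with_loss), 1))  # ~3.5 years
```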
It does not seem possible to answer this question at all confidently for Australia but following is a suggestive attempt. Firstly the important question is not to do with the overall quantity but the limits regarding the most problematic sources. In other words, it is likely that we will have no difficulty providing abundant renewable energy for space heating, via solar passive building design, but we will have great difficulty providing anything like the present quantity of liquid fuel, and electricity in winter (somewhat less problematic.) Thus liquid fuel is the weakest link that will limit the extent to which the whole society can continue as at present.
It would seem possible that in Australia a 10 tonne per ha yield from biomass
plantations could be achieved on 2 million ha of the best available land, with a
diminishing yield on another perhaps 20 million ha, falling to 5 t/ha.
Australian plantations presently yield 7-8 t/ha and some have estimated that the
limit for plantations would be about 10 m ha. This diminishing curve would
indicate a total yield of around 130 million tonnes, which would convert via
methanol (at 40 gallons of petrol per tonne, net) to about one-quarter of the
liquid fuel energy presently used. If the task is to meet oil plus gas demand
via biomass the fraction falls from 1/4 to 1/6.
The situation becomes much more difficult when the significance of economic growth is taken into account. An economy growing at 3% or 4% p.a. will double its output each 23 or 17 years respectively. It is not plausible that increases in production and consumption of this order can continue without significant increases in energy demand, meaning that the magnitude of the energy supply task and the associated costs discussed above can be expected to multiply greatly.
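The doubling times quoted here follow from the standard compound-growth rule: a quantity growing at rate r per annum doubles in ln 2 / r years.

```python
import math

def doubling_time(rate):
    """Years for a quantity growing at `rate` p.a. to double."""
    return math.log(2) / rate

print(round(doubling_time(0.03)))  # 23 years at 3% p.a.
print(round(doubling_time(0.04)))  # 17 years at 4% p.a.
```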
In deflated or real terms the rate of economic growth per capita in Australia over the past decade has been in the region of 2.3%. (Hamilton, 2002, p. 10.) With population growing at around 1% p.a., the real GDP growth rate has been 3.3% p. a. At this rate the total volume of producing and consuming taking place in 2050 will be about five times the present volume.
ABARE’s Energy Outlook 2000 shows that the average annual rate of growth in energy use in Australia over the decade of the 1990s was around 2.5% p. a. The Australian Yearbook shows that between 1982 and 1998 Australian energy use increased 50%, an arithmetical average growth rate of 3.13% p.a., and the rate has been faster in more recent years. (Graph 5.12.) The implication of these figures is very significant. If the 2.5% pa rate of increase were to continue to 2050 annual energy use would be about 4.5 times as great as it is now. In July 2003 Australian electricity authorities warned that blackouts are likely in coming years due to the rapid rate of increase in demand, estimated at almost 3% pa for the next 5 years. (ABC News, 31 July.) Robbins (2003) reports NEMMCO predicting growth over the next 10 years in NSW, Queensland and Victoria as 3.1%, 3.5% and 2.6% p.a. respectively.
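These projections are simple compounding. As an illustrative check (assuming, as a reading of the text, that the 2.5% p.a. rate runs over the six decades from 1990 to 2050):

```python
def growth_factor(rate, years):
    """Multiple by which a quantity grows at `rate` p.a. over `years`."""
    return (1 + rate) ** years

# 2.5% p.a. sustained for 60 years gives roughly the 4.5-fold
# increase in energy use mentioned in the text.
print(round(growth_factor(0.025, 60), 1))  # ~4.4
```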
Thus the commitment to growth greatly exacerbates the problem. It has been argued above that renewables are not likely to be capable of meeting present electricity and liquid fuel demand, but given the unstoppable inertia behind current growth trends, demand will probably be 4 to 5 times as great in 50 years.
Two common counter arguments here must be briefly considered. The first is the assumption that economic growth will increasingly take place in the service and information sectors and not in energy-intensive sectors such as mining, agriculture and manufacturing. However, many services are remarkably energy-intensive. Consider transport, travel and tourism. Services have been estimated to account for 27% and 40% of Australian energy consumption by Common (1995) and Lenzen (1998) respectively. Many services such as retailing, insurance, construction, advertising, and security might not use much energy but they serve industries that are energy and resource intensive, e.g., producing and selling goods. Thus it is not plausible that an economy can constantly increase its service activity without significantly increasing its demand for energy.
The School of Physics at the University of Sydney might be taken as a "perfect" service industry, producing nothing material. However its energy use is quite high, averaging 2.1 kW per worker and 1 kW per $2.2 of expenditure. For the University as a whole the energy use rate was 3.57 kW per employee.
The second counter argument is that modern economies are "dematerialising", i.e., reducing the amount of materials and energy they require. Crude figures on "energy intensity", i.e., energy consumed in the economy per unit of GDP, seem to confirm this. However there are good reasons for concluding that this is misleading and that dematerialisation is not taking place. (Trainer, 2001.)
Firstly Gever et al. (1991) conclude that a significant proportion of the apparent effect is due to change to fuels of higher quality, e.g., gas rather than coal. (More economic value can be derived from a MJ of energy in the form of petroleum than coal, or electricity than gas, because the former sources are more flexible, transportable etc.) Secondly there is now a strong tendency for rich countries to import goods they previously manufactured, meaning that the energy used in the production of these goods is not tallied as having been used in the countries where they are consumed. An examination of US trade figures provides impressive evidence for this claim. (Trainer, 2001, Adrianse, 1997, US Department of Commerce, 1995.) This energy would be taken into account if "emergy" accounting were carried out, i.e., analysis of the energy costs of production.
Finally, the amount of garbage thrown out would seem to be an important indicator of the volume of materials and energy consumed, and garbage generation per capita in rich countries is not falling.
It is therefore not plausible that the Australian economy could continue to increase production and consumption at normal rates, for example rising to 8 or more times present levels of output by 2070, without seeing its present energy consumption multiply in coming decades. If all the world’s expected 9 billion people were to rise to the per capita "living standards" that Australia would have by 2070 given 3% growth, total world economic output would be more than 60 times as large as it is today, yet this paper indicates that it will not be possible to meet the present energy demand via renewables.
These are the sorts of considerations which lead those within the "limits to
growth" school to conclude that there is no realistic possibility of sustaining
industrial consumer societies committed to affluent "living standards" and
economic growth. (Trainer, 1995a, 1998, 1999.)
There is widespread belief that technical advance will solve the problems consumer-capitalist society is running into, eliminating any need to face up to radical change in lifestyles and the economy. Amory Lovins is well known for claiming that a "Factor Four" reduction can be achieved in the amount of resources required per unit of output. However such an achievement would be of almost no consequence given the magnitude of the limits to growth predicament we face.
Firstly, at the present rate of growth in production, consumption and resource use, a factor four reduction will be overtaken in about 40 years. More importantly, a factor 4 reduction would be far from sufficient to make possible a just and sustainable world.
In view of the evidence of alarming depletion of many resources and ecological systems, especially petroleum, forests, fisheries, the atmosphere, biodiversity, agricultural land and water, it would seem that the present aggregate global resource and environmental impacts and costs must be reduced dramatically before they become sustainable. Let us assume that this requires a reduction to one third of present resource use (although the above greenhouse considerations indicate that a factor 27 reduction is closer to what is required). In energy terms this would mean world energy use would have to be cut from 410 EJ to 136 EJ.
Next we have to deal with the fact of extreme inequality in the global distribution of wealth and resources. About 1 billion people in the rich countries are taking about 3/4 of the resources produced each year, such as petroleum. The rich world per capita average is about 5 times the world average. In other words those who think technical fixes can make the present affluent consumer lifestyles of the rich countries possible for all people, in sustainable ways, are assuming that an overall 3x5 or factor 15 reduction in resource and ecological impact per unit of output or consumption can be made. In energy terms sharing the 136 EJ among 6 billion people would provide about 22,000 MJ per person, which is 1/10 of the amount per capita consumed in rich countries today.
World population is likely to multiply by 1.5, to reach 9 billion. To provide this number with the present rich world living standard in sustainable ways would therefore require a factor reduction of 3x5x1.5 or 22.5, i.e., to 15,000 MJ per person.
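The per capita figures follow directly from dividing the reduced energy budget among the population. A sketch (the 410 EJ starting point and the one-third reduction are the author's assumptions):

```python
EJ = 1e18   # joules per exajoule
MJ = 1e6    # joules per megajoule

world_use_ej = 410.0
sustainable_j = (world_use_ej / 3) * EJ   # cut to one third: ~136 EJ

per_capita_6b = sustainable_j / 6e9 / MJ  # shared among 6 billion
per_capita_9b = sustainable_j / 9e9 / MJ  # shared among 9 billion

print(round(per_capita_6b))  # ~22778 MJ per person (text: ~22,000)
print(round(per_capita_9b))  # ~15185 MJ per person (text: ~15,000)
```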
Finally we have to deal with the implications of economic growth. If we were
to add a mere 3% economic growth to the above considerations, then by 2023 when
output had doubled we would have to achieve a factor 45 reduction, and by 2046
a factor 90 reduction, and we would have to go on doubling the figure every 23
years thereafter. Hawken, Lovins and Lovins believe 3% growth can continue for
70 years, given that they state that an 8-fold increase in economic output is
possible without increase in resource use. As has been explained, rich world
"living standards" would then be 8 times as great as they are now. If 9 billion
were to share those "living standards" world economic output would be about 60
times as great as it is now. Unless "technical fix" enthusiasts such as Hawken,
Lovins and Lovins are only concerned with guaranteeing high living standards to
the few who now have them, they are obliged to show how an approximately 180
factor improvement (3x5x1.5x8) in overall resource use and environmental impact
per unit of output is possible by around 2070. Thus a "Factor Four" reduction is
far less than that which technical advance would have to achieve in order to
make a sustainable and just world possible for all.
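The factor arithmetic in this section stacks simple multiples. A sketch reproducing the numbers quoted:

```python
sustainability = 3    # cut present impacts to one third
equity = 5            # rich world per capita use ~5x the world average
population = 1.5      # 6 billion -> 9 billion

base = sustainability * equity * population   # required factor today
print(base)                                   # 22.5
print(base * 2)                               # 45 after one 3%-growth doubling (~2023)
print(base * 4)                               # 90 after two doublings (~2046)
print(base * 8)                               # 180 with the 8-fold output growth by ~2070
print(equity * population * 8)                # 60x world output (equity and growth alone)
```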
The foregoing estimates are imprecise and a number of gaps and unsettled questions remain, but the magnitudes of the numerical conclusions arrived at are so large that implausible assumptions would have to be made before it could be concluded that present electrical and liquid fuel demand could be met from solar sources, let alone demand anticipated in view of continued economic growth.
It should be emphasised again that the foregoing argument does not imply that renewable energy sources should be rejected. A large literature on the limits to growth predicament and alternatives to industrial consumer society indicates that a sustainable society can only be sensibly defined in terms of transition to "The Simpler Way". Its principles are, a) much simpler material living standards, b) high levels of social and economic self-sufficiency at national, local and household levels, c) a minor role for market forces, under firm social control, with prior consideration given to moral, welfare and ecological principles, d) thus a new economy, without growth and with a large non-monetary sector, e) more cooperative and participatory ways, f) heavy reliance on alternative technologies including renewable energy sources, earth building and Permaculture, and g) change to quite different values, especially frugality, cooperation and self-sufficiency.
Although at present the prospects for achieving such a radical transition would not appear to be at all promising, in the last two decades an Alternative Society Movement has begun to build settlements of the required kind. (See Douthwaite, 1996, Schwarz and Schwarz, 1998, Trainer 1995a, Hagmaier, et al., 2000, Federation of Intentional Communities, 2000.) According to this "Simpler Way" vision all could live well on renewable sources, but not at anything like current rich world per capita rates of energy consumption.
Hence the importance of thinking carefully about the potential of renewable
resources. Those who unthinkingly reinforce the assumption that these sources
are capable of sustaining consumer-capitalist society bear a heavy
responsibility. This is one of the key assumptions preventing consideration of
the claim that a sustainable and just society is not possible without transition
to The Simpler Way.
Adriaanse, A., (1997), Resource Flows, Washington, World Resource
AWEA, (American Wind Energy Association), (2001), Wind Energy Fact Sheet.
Bartle, J., (2000), New Perennial Crops; Mallee Eucalypts — A Model Large Scale Perennial Crop for the Wheatbelt, (Duplicated manuscript.)
Bentley, R.E., (2002), "Global oil and gas depletion; An overview", Energy Policy, 30, 189-205.
Berndes, G., M. Hoogwijk, and R. van den Broek, (2003), "The contribution of biomass in the future global energy supply; A review of 17 studies", Biomass and Bioenergy, 25, 1-28.
BP Solar Australia, 2003, Personal communication, C. Staggs.
Brackman, G., and D. Kearney, (2002), The Status and Prospects of CSP Technologies, International Executive Conference on Expanding the Market for Concentrating Solar Power, June 19-20, Berlin.
Campbell, J., (1997), The Coming Oil Crisis, Brentwood, England, Multiscience and Petroconsultants.
Commissioner of the European Communities, (1994), The European Renewable Energy Study, Brussels
Common, M., (1995), Sustainability and Policy, Cambridge, Cambridge University Press.
Corkish, R., (undated), Can solar cells ever recapture the energy invested in their manufacture?", Photovoltaic Special Research Centre, University of New South Wales, Australia.
Country Garden, (2002), "Unpredictable wind energy; The Danish dilemma", www.countrygarden.net/Denmark.htm.
Czick, G. and B. Ernst, (2003),"High wind power penetration by the systematic use of smoothing effects within huge catchment areas shown in a European example", firstname.lastname@example.org
Czick, G and G. Geibel, (2003), "A comparison of intra- and extra-European options for an energy supply with wind power". Gczisch@iset.uni-kassel.de
Dey, C., (2003), School of Physics, University of Sydney, Personal communication.
DeLaquil, P, D. Kearney, M .Geyer, R. Diner, (1993), "Solar-Thermal ElectricTechnology", Chapter 5, In T. B. Johansson, et al., Eds., Renewable Energy, Washington, Island Press.
Di Pardo, (Undated), "Outlook for biomass ethanol production and demand", (http
Douthwaite, R., (1996), Short Circuit, Dublin, Lilliput.
Duncan, R. C., (1997), "The world petroleum life-cycle; Encircling the production peak", Proceedings of the 13th SSI/Princeton Conf. Space Manufacturing; Space Studies Inst., Princeton. 267-274.
Durning, A., (2000), Micropower, Worldwatch.
El Bassam, N., (1998) Energy Plant Species; their Use and Impact on Environment and Development, London, James and James.
Elliason, B. and U. Bossel, The Future of the Hydrogen Economy; Bright or Bleak? www.woodgas.com/hydrogen-eonomy pdf
Ellington, R. T., M. Meo and D.A. El-Sayed, (1993), "The net greenhouse warming forcing of methanol produced from biomass", Biomass and Bioenergy, 4, 6, 405-418.
Elliot, D. L., C. C. Wendell and G. C. Gower, (1991), An assessment of the Available Windy Land Area and Wind Energy Potential of the Contiguous US, Department of Energy, Pacific North West Laboratory, Washington.
Elliott, D., (1994), Wind up!, Real World, 9, Spring.
Enting, I., Wigley, T., and Haimann, M., (1994), Technical Paper 31; Future Emissions and Concentrations of Carbon Dioxide, CSIRO Division of Atmospheric Research, Melbourne.
Federation of Intentional Communities, (2000), Communities Directory, Louisa.
Erickson, R.L, (1973), "Crustal abundance of elements and mineral reserves and resources", in D. A. Brobst and W. P. Pratt, Eds., United States Mineral Reserves, Washington, Geological Survey Professional Paper, 820.
FAO, (Undated), (http://www.fao.org/forestry/FOP/FOPW/GFSM/gfsmint-e.stm)
Ferguson, A., (2000a), The Net Energy Capture of Photovoltaics, (Draft 1), UK, Optimum Population Trust.
Ferguson, A., (2000b), Biomass and Energy, Optimum Population Trust, Jan.
Ferguson, A., (2003), "Wind/biomass energy capture; an update", Optimum Population Trust Journal, 3, 1.
Fleay, B. J., The Decline of the Age of Oil, Sydney, Pluto.
Foran, B., and C. Mardon, (1999), Beyond 2025: Transitions to the biomass-alcohol economy using ethanol and methanol, CSIRO Resource Futures Program, Canberra
Gever, J., et al., (1991), Beyond Oil, Colorado, University of Colorado Press.
Giampietro, M., S.Ulgiati, and D. Pimentel, (1997), "The feasibility of large scale biofuel production. Does an enlargement of scale change the picture?", Bioscience, 47, 9, Oct., 587-600.
Graham, R. L., (1994), "An analysis of the potential land base for energy crops in the conterminous United States", Biomass and Energy, 6, 3, 175-189.
Grasse, W and M. Geyer, (2000), "Solar Power and Chemical Energy Systems", Solar Paces Annual Report.
Haberle, A., C. Zahler, J. de Lalllaing, J. Ven, M. Sureda, W. Graf, H. Lerchenmuller, V. Witwer, (2003), "The Solarmundo Project; Advanced Technology for Solar Thermal Power Generation", International Solar Energy Society, 2001 Solar World Congress.
Hagmaier, S., J., Kommerall, M. Stengil, M. Wurfel, (2000), Eurotopia; Directory of Intentional Communities and Eco-villages in Europe, 2000/2001, Poppau, Okodorf Seiben Linden.
Hall, C. A. S., D. J. Cleveland and R. Kaufman, (1986), Energy and Resource Quality, New York, Wiley.
Hohenstein, W. G, and L. L. Wright, (1994), "Biomass energy production in the United States; An overview", Biomass and Energy, 6, 3, 161-173.
Ivanhoe, L. F., (1995), "Future oil supplies; There is a finite limit", World Oil, Oct. 77- 88.
Kaneff, S, (1992), Mass Utilization of Thermal Energy, Canberra, Energy Research Centre.
Kelly, H. C., (1993), "Introduction of Photovoltaic Technology", In T. B. Johansson, et al., Renewable Energy, Washington, Island Press.
Kheshgi, H. S., (2000), "The potential of biomass fuels in the context of global climate change,", Annual Review of Energy and Environment, (25), 199-244.
Knapp, K. E., and T. L. Jester, (2000-2001), "PV payback", Home Power, 20, Dec-Jan.
Largent, R., University of NSW Photovoltaic Special Research Centre, personal communication.
Laherrere, J., (1995), "World oil reserves; Which number to believe?", OPEC Bulletin, 26, 22, pp 9-13.
Littlewood, (2003), personal communication, Western Power, WA, www.westernpower.com.au.
Lorenz, D., and D. Morris, (1995), How Much Does It Cost To Make A Gallon Of Ethanol?, Institute for Local Self Reliance.
Lovins, A., (2003), Amory Lovins Hydrogen Primer, Rocky Mountain Institute website.
Lynd, L. R., K. J. Cashman, P. Nichols and C. E. Wyman, (1991),"Fuel ethanol from biomass," Science, 251, 1318-1323.
Lynd, L. .R., (1996), "Overview and evaluation of fuel from cellulosic biomass," Annual Review of Energy and Environment, 21, 403-465.
Lynd, L. R., H. Jin, J. G. Michels, C. E. Wyman, and B. Dale, (2003) "Bioenergy: Background, Potential and Policy, A policy briefing prepared for the Centre for Strategic and International Studies, Lee.Lynd@dartmouth.edu
Manci, T., (2003), Solar thermal technology; Sandia National Laboratories, (Personal communication).
Mason, P., (1992), Forest and Timber Inquiry, Resources Assessment Commission, AGPS Canberra.
McLaughlin, S, (1999), "Developing switchgrass as a bioenergy crop", In J. Janik, Ed., Perspectives On New Crops and Uses, ASHS, Press, Alexandria, VA.
Mercer, D., (1991), A Question of Balance, Annandale, Sydney, Federation Press.
Mills, D., (2002), "The creation of an Australian Wind Energy Atlas", paper from Dept of Geographical Sciences and Planning, University of Queensland.
Mills, D and B., Keepin, (1993), "Baseload solar power", Energy Policy, Aug., 841-857.
Mills, D. and G. Morrison, (undated), Modelling of Compact Linear Fresnel Reflector Powerplant Technology; Performance and Cost Estimates, School of Physics, Sydney University, and School of Mechanical Engineering, University of NSW.
Morrison, G., and A. Litwak, (1988), Condensed Solar Radiation Data Base for Australia, Paper 1988/FMT/1, Mar.
Morrow, T.,(19), Growing For Broke,
Nilson, S, R.Colberg, R. Hagler, and P. Woodbridge, (1999), How sustainable are North American wood supplies?, Interim Report IR-99-003/Jan, IIASAa-2361, Laxenberg, Austria.
Ogden, J. M. and J. Nitch, (1993), "Solar hydrogen", in T. B. Johansson, et al, Eds., Renewable Energy, Washington, Island Press.
Optimum Population Trust, 2003, Journal, 3.1. April, p.4. The figure is derived from an analysis of Windstats Newsletters.
Origin Energy, (2003), personal communication, 11th April.
ORNL, (undated.) (http://bioenergy.ornl.gov/papers/misc/resource_estimates.html)
Pacific Power, (1993), Annual Report, Sydney.
Pimentel, D., et al., (1984) "The environmental and social costs of biomass", Bioscience, 34,(2), 89-94.
Pimentel, D., (1991), "Ethanol fuels, energy security economics and the environment", Journal of Agricultural and Environmental Ethics, 1-13.
Pimentel, D., (1994), Population and Environment,
Pimentel, D., and M. Pimentel, (1997), Food, Energy and Society, University of Colorado Press.
Pimentel, D., and M. Pimentel, (1998a), Energy and Dollar Costs of Ethanol Production with Corn, M. King Hubbert Centre for Petroleum Supplies, Newsletter 98/2.
Pimentel, D., (1998b), "Food vs biomass fuel", Advances in Food Research, 32, 1, 185-239.
Pimentel, D., (2003), "Ethanol fuels. Energy balance, economics and environmental impacts are negative.", Natural Resources Research, 12, 2 June, 127-134.
Reichmuth and Robison, (undated), "Peak riders of the purple sage: Description and analysis of a prototype utility scale photovoltaic application", Stellar process Inc, 60-7 Hazel St., Hood River, OR 97031, email@example.com.
Renew, Editorial, (1999), 68, July-Sept.
Sala, G., et al, (2000), "The 480 kWp Euclides-Thermie Power Plant; Installation, set up and first results,", 16th European Photovoltaic Solar Energy Conference, 1-5 May, Glasgow, UK.
Shapouri, H, J. A. Duffield and M. Wang, (2002), "The energetic balance of corn ethanol; an update," USDA, Agricultural Economics Report, no 813.
Solar Electric Power Association, (2002), http://www.SolarElectricPower.org/pv/pv_performance_data.cfm
Solar Energy Systems, (2003), www.sesltd.com.au/
Schwarz, W., and Schwarz, D., (1998), Living Lightly, London, Jon Carpenter.
Strebkov, et al, (undated), PV-Thermal static concentrator modules", firstname.lastname@example.org
Swanson, R. M., (2000), "The promise of concentrators", Progress in Photovoltaics: Research and Applications, 8, 93-111.
Sydney Morning Herald, (2003), "Loy Yang sale crystallises $1.4bn loss", 4th July, p. 19
Robbins, B., (2003), Article in Sydney Morning Herald, 31st June.
Trainer, F. E. (T.), (1985), Abandon Affluence, London, Zed Books.
Trainer, T. (F. E.), (1995), The Conserver Society; Alternatives for Sustainability, London, Zed Books.
Trainer, F. E. (T.), (1998a), Saving the Environment; What It Will Take, Sydney, University of NSW Press.
Trainer, F. E. (T.), (1999), "The limits to growth case in the 1990s", The Environmentalist, 19, 329 -339.
Trainer, F. E. (T.), The Simpler Way Website, http://www.arts.unsw.edu.au/tsw/
Trainer, F. E. (T.) ,(2001), Reply to R. Weiner, Technology in Society, 23, 523-524.
Trainer, F. E. (T), (In press), "Natural capitalism cannot overcome resource limits", Environment, Development, Sustainability.
Tyner, G., (2003a), personal communication.
Tyner, G., (2003), Net Energy Return From Wind Power, http://home.mmcable.com/oivf/index.html.
University of Lowell Photovoltaic Program, (1991), International Solar Irradiation Base, Lowell, MA.
U.S. Department of Commerce, (1995), Statistical Abstract of the United States, Washington.
U.S. Department of Energy, (2000), Annual Energy Review, www.iea.doe.gov/pub/energyoverview/1999
US EIA/DOE, (1997), US Department of Energy Characterization of US Energy Resources and Reserves, DOE/CE-0279 Washington DC, Dec, 1989. www.eia.doe.gov/oiaf/issues/wind_supply.html
US Geological Survey, (2000), USGS Reassesses Potential World Petroleum Resources, News Release, 22nd March, 119 National Centre, Reston, VA 20192.
Wackernagel, N. and W. Rees, (1996), Our Ecological Footprint, Philadelphia, New Society.
Wackernagel, M., L. Onisto, L. .Linares, A Falfan, I. Garcia, G. Guernera, (1997), The Ecological Footprint of Nations, Costa Rica, Centre for Sustainability Studies.
Wernick, I. K., (1996), "Consuming materials", Technological Forecasting and Social Change, 53, 111-122.
Wootton, R., (2003), Personal communication, email@example.com
World Energy Council, (1994), New Renewable Energy Resources; A guide to the Future, London, Routledge and Kegan Paul.
Worldwatch, (2001-2002), Vital Signs, Worldwatch Institute.
Wright,L. L., (1994), "Production technology status of woody and herbaceous crops," Biomass and Energy, 6, 3, 191-209.
Youngquist, W, (1997), Geo Destinies; The Inevitable Control of Earth Resources over Nations and Individuals, Portland, National Book Co.
A cooperative (also known as co-operative, co-op, or coop) is "an autonomous association of persons united voluntarily to meet their common economic, social, and cultural needs and aspirations through a jointly-owned and democratically-controlled enterprise". Cooperatives may include:
- non-profit community organizations
- businesses owned and managed by the people who use their services (a consumer cooperative)
- organisations managed by the people who work there (worker cooperatives)
- organisations managed by the people to whom they provide accommodation (housing cooperatives)
- hybrids such as worker cooperatives that are also consumer cooperatives or credit unions
- multi-stakeholder cooperatives such as those that bring together civil society and local actors to deliver community needs
- second- and third-tier cooperatives whose members are other cooperatives
Research published by the Worldwatch Institute found that in 2012 approximately one billion people in 96 countries had become members of at least one cooperative. The turnover of the largest three hundred cooperatives in the world reached $2.2 trillion – which, if they were a single country, would make them the seventh largest economy.
One dictionary defines a cooperative as "a jointly owned enterprise engaging in the production or distribution of goods or the supplying of services, operated by its members for their mutual benefit, typically organized by consumers or farmers". Cooperative businesses are typically more economically resilient than many other forms of enterprise, with twice the number of co-operatives (80%) surviving their first five years compared with other business ownership models (41%). Cooperatives frequently have social goals which they aim to accomplish by investing a proportion of trading profits back into their communities. As an example of this, in 2013, retail co-operatives in the UK invested 6.9% of their pre-tax profits in the communities in which they trade as compared with 2.4% for other rival supermarkets.
The International Co-operative Alliance was the first international association formed (1895) by the cooperative movement. It includes the World Council of Credit Unions. A second organization formed later in Germany: the International Raiffeisen Union. In the United States, the National Cooperative Business Association (NCBA CLUSA; the abbreviation of the organization retains the initials of its former name, Cooperative League of the USA) serves as the sector's oldest national membership association. It is dedicated to ensuring that cooperative businesses have the same opportunities as other businesses operating in the country and that consumers have access to cooperatives in the marketplace. A U.S. National Cooperative Bank formed in the 1970s. By 2004 a new association focused on worker co-ops was founded, the United States Federation of Worker Cooperatives.
Since 2002 cooperatives and credit unions could be distinguished on the Internet by use of a .coop domain. Since 2014, following International Cooperative Alliance's introduction of the Cooperative Marque, ICA cooperatives and WOCCU credit unions can also be identified by a coop ethical consumerism label.
Cooperation dates back as far as human beings have been organizing for mutual benefit. Tribes were organized as cooperative structures, allocating jobs and resources among themselves and trading only with external communities. In alpine environments, trade could be maintained only through organized cooperatives that kept artificial roads such as the Viamala in 1472 in usable condition. Pre-industrial Europe is home to the first cooperatives from an industrial context.
In 1761, the Fenwick Weavers' Society was formed in Fenwick, East Ayrshire, Scotland to sell discounted oatmeal to local workers. Its services expanded to include assistance with savings and loans, emigration and education. In 1810, Welsh social reformer Robert Owen, from Newtown in mid-Wales, and his partners purchased New Lanark mill from Owen's father-in-law David Dale and proceeded to introduce better labour standards including discounted retail shops where profits were passed on to his employees. Owen left New Lanark to pursue other forms of cooperative organization and develop coop ideas through writing and lecture. Cooperative communities were set up in Glasgow, Indiana and Hampshire, although ultimately unsuccessful. In 1828, William King set up a newspaper, The Cooperator, to promote Owen's thinking, having already set up a cooperative store in Brighton.
The Rochdale Society of Equitable Pioneers, founded in 1844, is usually considered the first successful cooperative enterprise and is used as a model for modern coops, following the 'Rochdale Principles'. A group of 28 weavers and other artisans in Rochdale, England set up the society to open their own store selling food items they could not otherwise afford. Within ten years there were over a thousand cooperative societies in the United Kingdom.
Cooperatives traditionally combine social benefit interests with capitalistic property-right interests. Cooperatives achieve a mix of social and capital purposes by democratically governing distribution questions by and between equal but not controlling members. Democratic oversight of decisions to equitably distribute assets and other benefits means capital ownership is arranged in a way for social benefit inside the organization. External societal benefit is also encouraged by incorporating the operating-principle of cooperation between co-operatives. In the final year of the 20th century, cooperatives banded together to establish a number of social enterprise agencies which have moved to adopt the multi-stakeholder cooperative model. In the years 1994–2009 the EU and its member nations gradually revised national accounting systems to "make visible" the increasing contribution of social economy organizations.
Organizational and ideological roots
The roots of the cooperative movement can be traced to multiple influences and extend worldwide. In the English-speaking world, post-feudal forms of cooperation between workers and owners that are expressed today as "profit-sharing" and "surplus sharing" arrangements, existed as far back as 1795. The key ideological influence on the Anglosphere branch of the cooperative movement, however, was a rejection of the charity principles that underpinned welfare reforms when the British government radically revised its Poor Laws in 1834. As both state and church institutions began to routinely distinguish between the 'deserving' and 'undeserving' poor, a movement of friendly societies grew throughout the British Empire based on the principle of mutuality, committed to self-help in the welfare of working people.
Friendly Societies established forums through which one member, one vote was practiced in organisational decision-making. The principle challenged the idea that a person should be an owner of property before being granted a political voice. Throughout the second half of the nineteenth century (and then repeatedly every twenty years or so) there was a surge in the number of cooperative organisations, both in commercial practice and civil society, operating to advance democracy and universal suffrage as a political principle. Friendly Societies and consumer cooperatives became the dominant form of organization amongst working people in Anglosphere industrial societies prior to the rise of trade unions and industrial factories. Weinbren reports that by the end of the 19th century, over 80% of British working-age men and 90% of Australian working-age men were members of one or more Friendly Society.
From the mid-nineteenth century, mutual organisations embraced these ideas in economic enterprises, firstly amongst tradespeople, and later in cooperative stores, educational institutes, financial institutions and industrial enterprises. The common thread (enacted in different ways, and subject to the constraints of various systems of national law) is the principle that an enterprise or association should be owned and controlled by the people it serves, and share any surpluses on the basis of each member's cooperative contribution (as a producer, labourer or consumer) rather than their capacity to invest financial capital.
The cooperative movement has been fueled globally by ideas of economic democracy. Economic democracy is a socioeconomic philosophy that suggests an expansion of decision-making power from a small minority of corporate shareholders to a larger majority of public stakeholders. There are many different approaches to thinking about and building economic democracy. Anarchists are committed to libertarian socialism and have focused on local organization, including locally managed cooperatives, linked through confederations of unions, cooperatives and communities. Marxists, who as socialists have likewise held and worked for the goal of democratizing productive and reproductive relationships, often placed a greater strategic emphasis on confronting the larger scales of human organization. As they viewed the capitalist class to be politically, militarily and culturally mobilized for the purpose of maintaining an exploitable working class, they fought in the early 20th century to appropriate from the capitalist class the society's collective political capacity in the form of the state, either through democratic socialism, or through what came to be known as Leninism. Though they regarded the state as an unnecessarily oppressive institution, Marxists considered appropriating national and international-scale capitalist institutions and resources (such as the state) to be an important first pillar in creating conditions favorable to solidaristic economies. With the declining influence of the USSR after the 1960s, socialist strategies pluralized, though economic democratizers have not as yet established a fundamental challenge to the hegemony of global neoliberal capitalism.
Cooperatives as legal entities
A cooperative is a legal entity owned and democratically controlled by its members. Members often have a close association with the enterprise as producers or consumers of its products or services, or as its employees.
There are specific forms of incorporation for cooperatives in some countries, e.g. Finland and Australia. Cooperatives may take the form of companies limited by shares or by guarantee, partnerships or unincorporated associations. In the UK they may also use the industrial and provident society structure. In the US, cooperatives are often organized as non-capital stock corporations under state-specific cooperative laws. However, they may also be unincorporated associations or business corporations such as limited liability companies or partnerships; such forms are useful when the members want to allow:
- some members to have a greater share of the control, or
- some investors to have a return on their capital that exceeds fixed interest,
neither of which may be allowed under local laws for cooperatives. Cooperatives often share their earnings with the membership as dividends, which are divided among the members according to their participation in the enterprise, such as patronage, instead of according to the value of their capital shareholdings (as is done by a joint stock company).
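The patronage rule described above — dividing surplus by each member's trade with the cooperative rather than by capital shareholding — can be sketched as a small calculation. This is an illustrative example only; the member names and figures are hypothetical, not drawn from the text.

```python
# Hypothetical sketch: a cooperative distributing its surplus as
# patronage dividends, proportional to each member's trade with the
# co-op rather than to capital invested. All figures are illustrative.

def patronage_dividends(surplus, purchases):
    """Split `surplus` among members in proportion to their purchases."""
    total = sum(purchases.values())
    return {member: surplus * amount / total
            for member, amount in purchases.items()}

# Three members who traded different amounts with the co-op this year.
purchases = {"ada": 3000.0, "ben": 1500.0, "cat": 500.0}
dividends = patronage_dividends(1000.0, purchases)
# ada receives 600.0, ben 300.0, cat 100.0 — regardless of how many
# shares each holds, unlike a dividend in a joint stock company.
print(dividends)
```

The contrast with a joint stock company is that there the same £1,000 surplus would be split in proportion to shares held, so a large investor who never shopped at the store would capture most of it.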
Coop Marque and domain
Since 2002, ICA cooperatives and WOCCU credit unions have been distinguishable by use of a .coop domain. In 2014, the ICA introduced the Global Cooperative Marque for use by ICA cooperative members and WOCCU credit union members, so that they can be further identified by an ethical-consumerism label. The marque is used today by thousands of cooperatives in more than a hundred countries.
The .coop domain and Co-operative Marque were designed as symbols of the global cooperative movement and its collective identity in the digital age. They differentiate the movement's co-op products and e-services from those of all other forms of business, both investor-owned and privately owned, and specifically recognise the movement's rapidly changing role in society, marked by the emergence of the digital cooperative. The .coop (dot coop) domain and the global Co-operative Marque are open for use by all types of ICA cooperatives and WOCCU credit unions on their products or digital services, in combination with an individual cooperative's own labels.
The Co-operative Marque and domain are reserved for co-operatives, credit unions and organisations that support co-operatives, and are distinguished by an ethical badge signalling subscription to the seven ICA Cooperative Principles and Co-operative Values. Co-ops can be identified on the Internet through the .coop suffix in internet addresses; organizations using .coop domain names must adhere to the basic co-op values.
Coop principles and values
- Voluntary and open membership
- Democratic member control
- Economic participation by members
- Autonomy and independence
- Education, training and information
- Cooperation among cooperatives
- Concern for community
Cooperative values, in the tradition of the movement's founders, are based on "self-help, self-responsibility, democracy, equality, equity and solidarity." Co-operative members believe in the ethical values of honesty, openness, social responsibility and caring for others.
Such legal entities have a range of social characteristics. Membership is open, meaning that anyone who satisfies certain non-discriminatory conditions may join. Economic benefits are distributed proportionally to each member's level of participation in the cooperative, for instance, by a dividend on sales or purchases, rather than according to capital invested. Cooperatives may be classified as either worker, consumer, producer, purchasing or housing cooperatives. They are distinguished from other forms of incorporation in that profit-making or economic stability are balanced by the interests of the community.
Capital and the Debt Trap reports that "cooperatives tend to have a longer life than other types of enterprise, and thus a higher level of entrepreneurial sustainability". This resilience has been attributed to how cooperatives share risks and rewards between members, how they harness the ideas of many and how members have a tangible ownership stake in the business. Additionally, "cooperative banks build up counter-cyclical buffers that function well in case of a crisis," and are less likely to lead members and clients towards a debt trap (p. 216). This is explained by their more democratic governance that reduces perverse incentives and subsequent contributions to economic bubbles.
In the United Kingdom
A 2013 report published by the UK Office for National Statistics showed that in the UK the rate of survival of cooperatives after five years was 80 percent, compared with only 41 percent for all other enterprises. A further study found that after ten years 44 percent of cooperatives were still in operation, compared with only 20 percent for all enterprises (p. 109).
Other countries in Europe
A 2012 report published by The European Confederation of cooperatives and worker-owned enterprises active in industry and services showed that in France and Spain, worker cooperatives and social cooperatives “have been more resilient than conventional enterprises during the economic crisis”.
A 2010 report by the Ministry of Economic Development, Innovation and Export in Québec found the five-year and ten-year survival rates of cooperatives in Québec to be 62% and 44% respectively, compared to 35% and 20% for conventional firms. Another report, by the BC-Alberta Social Economy Research Alliance, found the three-year survival rate of cooperatives in Alberta to be 81.5%, in comparison to 48% for traditional firms. The same Research Alliance found that in British Columbia the five-year survival rate for cooperatives formed between 2000 and 2010 was 66.6%, compared with conventional businesses at 43% and 39% for those formed in 1984 and 1993 respectively.
In the United States of America
In a 2007 study by the World Council of Credit Unions, the 5-year survival rate of cooperatives in the United States was found to be 90% in comparison to 3-5% for traditional businesses.
Types of cooperatives
A non-monetary cooperative provides a service based entirely on voluntary labour in the maintenance and provision of a particular good or service, working in much the same manner as a library. These co-ops are locally owned and operated and provide free lending of equipment of all kinds (bicycles, sports gear, tools). This idea has been said to reduce general human consumption of goods, a key subject in sustainable development.
A retailers' cooperative (known as a secondary or marketing cooperative in some countries) is an organization which employs economies of scale on behalf of its members to receive discounts from manufacturers and to pool marketing. It is common for locally owned grocery stores, hardware stores and pharmacies. In this case, the members of the cooperative are businesses rather than individuals.
The Best Western international hotel chain is actually a retailers' cooperative, whose members are hotel operators, although it refers to itself as a "nonprofit membership association." It gave up on the "cooperative" label after some courts insisted on enforcing regulatory requirements for franchisors despite its member-controlled status.
A worker cooperative or producer cooperative is a cooperative that is owned and democratically controlled by its "worker-owners". There are no outside owners in a "pure" workers' cooperative; only the workers own shares of the business, though hybrid forms exist in which consumers, community members or capitalist investors also own some shares. In practice, control by worker-owners may be exercised through individual, collective or majority ownership by the workforce, or the retention of individual, collective or majority voting rights (exercised on a one-member one-vote basis). A worker cooperative, therefore, has the characteristic that the majority of its workforce owns shares, and the majority of shares are owned by the workforce. Membership is not always compulsory for employees, but generally only employees can become members, either directly (as shareholders) or indirectly through membership of a trust that owns the company.
The impact of political ideology on practice constrains the development of cooperatives in different countries. In India, there is a form of workers' cooperative which insists on compulsory membership for all employees and compulsory employment for all members. That is the form of the Indian Coffee Houses. This system was advocated by the Indian communist leader A. K. Gopalan. In places like the UK, common ownership (indivisible collective ownership) was popular in the 1970s. Cooperative Societies only became legal in Britain after the passing of Slaney's Act in 1852. In 1865 there were 651 registered societies with a total membership of well over 200,000. There are now more than 400 worker cooperatives in the UK, Suma Wholefoods being the largest example with a turnover of £24 million.
A volunteer cooperative is a cooperative that is run by and for a network of volunteers, for the benefit of a defined membership or the general public, to achieve some goal. Depending on the structure, it may be a collective or mutual organization, which is operated according to the principles of cooperative governance. The most basic form of volunteer-run cooperative is a voluntary association. A lodge or social club may be organized on this basis. A volunteer-run co-op is distinguished from a worker cooperative in that the latter is by definition employee-owned, whereas the volunteer cooperative is typically a non-stock corporation, volunteer-run consumer co-op or service organization, in which workers and beneficiaries jointly participate in management decisions and receive discounts on the basis of sweat equity.
A particularly successful form of multi-stakeholder cooperative is the Italian "social cooperative", of which some 11,000 exist. "Type A" social cooperatives bring together providers and beneficiaries of a social service as members. "Type B" social cooperatives bring together permanent workers and previously unemployed people who wish to integrate into the labor market. They are legally defined as follows:
- no more than 80% of profits may be distributed, interest is limited to the bond rate and dissolution is altruistic (assets may not be distributed)
- the cooperative has legal personality and limited liability
- the objective is the general benefit of the community and the social integration of citizens
- those of type B integrate disadvantaged people into the labour market. The categories of disadvantage they target may include physical and mental disability, drug and alcohol addiction, developmental disorders and problems with the law. They do not include other factors of disadvantage such as unemployment, race, sexual orientation or abuse.
- type A cooperatives provide health, social or educational services
- various categories of stakeholder may become members, including paid employees, beneficiaries, volunteers (up to 50% of members), financial investors and public institutions. In type B cooperatives at least 30% of the members must be from the disadvantaged target groups
- voting is one person one vote
A consumers' cooperative is a business owned by its customers. Employees can also generally become members. Members vote on major decisions and elect the board of directors from among their own number. The first of these was set up in 1844 in the North-West of England by 28 weavers who wanted to sell food at a lower price than the local shops.
The world's largest consumers' cooperative is the Co-operative Group in the United Kingdom, which offers a variety of retail and financial services. The UK also has a number of autonomous consumers' cooperative societies, such as the East of England Co-operative Society and Midcounties Co-operative. In fact, the Co-operative Group is something of a hybrid, having both corporate members (mostly other consumers' cooperatives, as a result of its origins as a wholesale society), and individual retail consumer members.
Business and employment cooperative
Business and employment cooperatives (BECs) are a subset of worker cooperatives that represent a new approach to providing support to the creation of new businesses.
Like other business creation support schemes, BECs enable budding entrepreneurs to experiment with their business idea while benefiting from a secure income. The innovation BECs introduce is that once the business is established the entrepreneur is not forced to leave and set up independently, but can stay and become a full member of the cooperative. The micro-enterprises then combine to form one multi-activity enterprise whose members provide a mutually supportive environment for each other.
BECs thus provide budding business people with an easy transition from inactivity to self-employment, but in a collective framework. They open up new horizons for people who have ambition but who lack the skills or confidence needed to set off entirely on their own – or who simply want to carry on an independent economic activity but within a supportive group context.
New generation cooperative
New generation cooperatives (NGCs) are an adaptation of traditional cooperative structures to modern, capital-intensive industries. They are sometimes described as a hybrid between traditional co-ops and limited liability companies or public benefit corporations. They were first developed in California and spread and flourished in the US Mid-West in the 1990s. They are now common in Canada, where they operate primarily in agriculture and food services, and where their primary purpose is to add value to primary products — for example, producing ethanol from corn, pasta from durum wheat, or gourmet cheese from goat's milk. A representative example of an operating NGC is the Fourth Estate, a multi-stakeholder NGC journalism association.
Types and number of cooperatives
The top 300 largest cooperatives were listed in 2007 by the International Co-operative Alliance. 80% were involved in either agriculture, finance, or retail and more than half were in the United States, Italy, or France. In the United States, cooperatives, particularly those in the Midwest, are analyzed at the University of Wisconsin Center for Cooperatives.
A housing cooperative is a legal mechanism for ownership of housing where residents either own shares (share capital co-op) reflecting their equity in the cooperative's real estate, or have membership and occupancy rights in a not-for-profit cooperative (non-share capital co-op), and they underwrite their housing through paying subscriptions or rent.
Housing cooperatives come in three basic equity structures:
- In market-rate housing cooperatives, members may sell their shares in the cooperative whenever they like for whatever price the market will bear, much like any other residential property. Market-rate co-ops are very common in New York City.
- Limited equity housing cooperatives, which are often used by affordable housing developers, allow members to own some equity in their home, but limit the sale price of their membership share to that which they paid.
- Group equity or zero-equity housing cooperatives do not allow members to own equity in their residences and often have rental agreements well below market rates.
Members of a building cooperative (in Britain known as a self-build housing cooperative) pool resources to build housing, normally using a high proportion of their own labor. When the building is finished, each member is the sole owner of a homestead, and the cooperative may be dissolved.
This collective effort was at the origin of many of Britain's building societies, which however, developed into "permanent" mutual savings and loan organisations, a term which persisted in some of their names (such as the former Leeds Permanent). Nowadays such self-building may be financed using a step-by-step mortgage which is released in stages as the building is completed. The term may also refer to worker cooperatives in the building trade.
A utility cooperative is a type of consumers' cooperative that is tasked with the delivery of a public utility such as electricity, water or telecommunications services to its members. Profits are either reinvested into infrastructure or distributed to members in the form of "patronage" or "capital credits", which are essentially dividends paid on a member's investment into the cooperative. In the United States, many cooperatives were formed to provide rural electrical and telephone service as part of the New Deal. See Rural Utilities Service.
In the case of electricity, cooperatives are generally either generation and transmission (G&T) co-ops that create and send power via the transmission grid or local distribution co-ops that gather electricity from a variety of sources and send it along to homes and businesses.
In Tanzania, the cooperative method has proven helpful in water distribution: when people are involved in managing their own water supply, they take greater care, because the quality of their work has a direct effect on the quality of their water.
Agricultural cooperatives or farmers' cooperatives are cooperatives where farmers pool their resources for mutual economic benefit. Agricultural cooperatives are broadly divided into agricultural service cooperatives, which provide various services to their individual farming members, and agricultural production cooperatives, where production resources such as land or machinery are pooled and members farm jointly. Known examples of agricultural production cooperatives are the cranberry-and-grapefruit cooperative Ocean Spray, collective farms in socialist states and the kibbutzim in Israel.
Agricultural supply cooperatives aggregate purchases, storage, and distribution of farm inputs for their members. By taking advantage of volume discounts and utilizing other economies of scale, supply cooperatives bring down members' costs. Supply cooperatives may provide seeds, fertilizers, chemicals, fuel, and farm machinery. Some supply cooperatives also operate machinery pools that provide mechanical field services (e.g., plowing, harvesting) to their members.
Agricultural marketing cooperatives provide the services involved in moving a product from the point of production to the point of consumption. Agricultural marketing includes a series of interconnected activities involving planning production, growing and harvesting, grading, packing, transport, storage, food processing, distribution and sale. Agricultural marketing cooperatives are often formed to promote specific commodities.
Credit unions, cooperative banking and co-operative insurance
Credit unions are cooperative financial institutions that are owned and controlled by their members. Credit unions provide the same financial services as banks but are considered not-for-profit organizations and adhere to cooperative principles.
Credit unions originated in mid-19th-century Germany through the efforts of pioneers Franz Hermann Schulze-Delitzsch and Friedrich Wilhelm Raiffeisen. The concept of financial cooperatives crossed the Atlantic at the turn of the 20th century, when the caisse populaire movement was started by Alphonse Desjardins in Quebec, Canada. In 1900, from his home in Lévis, he opened North America's first credit union, marking the beginning of the Mouvement Desjardins. Eight years later, Desjardins provided guidance for the first credit union in the United States, where there are now about 7,950 active-status federally insured credit unions, with almost 90 million members and more than $679 billion on deposit.
Cooperative banking networks, which were nationalized in Eastern Europe, now operate as genuine cooperative institutions. In Poland, the SKOK (Spółdzielcze Kasy Oszczędnościowo-Kredytowe) network has grown to serve over 1 million members via 13,000 branches, and is larger than the country's largest conventional bank.
The oldest cooperative banks in Europe, based on the ideas of Friedrich Raiffeisen, are joined together in the 'Urgenossen'.
Federal or secondary cooperatives
In some cases, cooperative societies find it advantageous to form cooperative federations in which all of the members are themselves cooperatives. Historically, these have predominantly come in the form of cooperative wholesale societies, and cooperative unions. Cooperative federations are a means through which cooperative societies can fulfill the sixth Rochdale Principle, cooperation among cooperatives, with the ICA noting that "Cooperatives serve their members most effectively and strengthen the cooperative movement by working together through local, regional and international structures."
Cooperative wholesale society
According to cooperative economist Charles Gide, the aim of a cooperative wholesale society is to arrange "bulk purchases, and, if possible, organise production." The best historical example of this was the English CWS and the Scottish CWS, which were the forerunners to the modern Co-operative Group. Today, its national buying programme, the Co-operative Retail Trading Group performs a similar function.
A second common form of cooperative federation is a cooperative union, whose objective (according to Gide) is "to develop the spirit of solidarity among societies and... in a word, to exercise the functions of a government whose authority, it is needless to say, is purely moral." Co-operatives UK and the International Cooperative Alliance are examples of such arrangements.
Cooperative political movements
In some countries with a strong cooperative sector, such as the UK, cooperatives may find it advantageous to form political groupings to represent their interests. The British Cooperative Party, the Canadian Cooperative Commonwealth Federation and United Farmers of Alberta are prime examples of such arrangements.
The British cooperative movement formed the Cooperative Party in the early 20th century to represent members of consumers' cooperatives in Parliament; it was the first party of its kind. The Cooperative Party now has a permanent electoral pact with the Labour Party, meaning someone cannot be a member if they support a party other than Labour. Plaid Cymru also runs a credit union that is constituted as a co-operative, called the 'Plaid Cymru Credit Union'. UK cooperatives retain a strong market share in food retail, insurance, banking, funeral services, and the travel industry in many parts of the country, although this is still significantly lower than that of other business models.
The Cooperative NATCCO Party (Coop-NATCCO) is a party-list in the Philippines which serves as the electoral wing of the National Confederation of Cooperatives (NATCCO). Coop-NATCCO has represented the Philippine co-operative sector in the Philippine 11th Congress since 1998.
Women in cooperatives
Since cooperatives are based on values like self-help, democracy, equality, equity, and solidarity, they can play a particularly strong role in empowering women, especially in developing countries. Cooperatives allow women who might have been isolated and working individually to band together and create economies of scale as well as increase their own bargaining power in the market. In statements in advance of International Women's Day in early 2013, President of the International Cooperative Alliance, Dame Pauline Green, said, "Cooperative businesses have done so much to help women onto the ladder of economic activity. With that comes community respect, political legitimacy and influence."
However, despite the supposed democratic structure of cooperatives and the values and benefits shared by members, due to gender norms on the traditional role of women, and other instilled cultural practices that sidestep attempted legal protections, women suffer a disproportionately low representation in cooperative membership around the world. Representation of women through active membership (showing up to meetings and voting), as well as in leadership and managerial positions is even lower.
Cooperatives in popular culture
My So-Called Housing Cooperative is a web series focusing on the humorous side of living in a housing co-op.
U.S. co-ops provide over 850 thousand jobs and create more than $74 billion in annual wages with revenue of nearly $500 billion.
- Artist cooperative
- Cooperative economics
- Co-operative living arrangements
- Collective ownership
- Common ownership
- Commune (intentional community)
- Cost the limit of price
- Danish cooperative movement
- Democratic socialism
- Employee-owned corporation
- Employee stock ownership plan
- FC Barcelona (the world's first cooperative-based football club)
- Friendly society
- History of the cooperative movement
- Industrial and provident society
- List of co-operative federations
- List of cooperatives
- Market Socialism
- Microfinance / microcredit
- Mondragón Cooperative Corporation
- Mutual aid
- Mutual organization
- Mutual Ownership Defense Housing Division
- Mutualism (economic theory)
- Online media cooperative
- Participatory democracy
- Participatory economics
- Polytechnic University of the Philippines College of Cooperatives and Social Development
- Friedrich Wilhelm Raiffeisen
- Rochdale Principles
- Social corporatism
- Social economy
- Social enterprise
- Social ownership
- Statement on the Cooperative Identity. Archived 4 February 2012 at the Wayback Machine. International Cooperative Alliance.
- "Membership in Co-operative Businesses Reaches 1 Billion - Worldwatch Institute".
Membership in co-operative businesses has grown to 1 billion people across 96 countries, according to new research published by the Worldwatch Institute for its Vital Signs Online publication.
- "The World Co-operative Monitor". monitor.coop.
- "Dictionary.com - Find the Meanings and Definitions of Words at Dictionary.com". Dictionary.com. Retrieved 2017-06-11.
- "Community investment index: giving back to neighbourhoods". thenews.coop. Archived from the original on 26 June 2015.
- "Community Impact - National Cooperative Bank". National Cooperative Bank, N.A. 2017. Retrieved 2017-06-11.
Chartered by Congress in 1978 and privatized in 1981 as a cooperatively owned financial institution, NCB was created to address the financial needs of an underserved market: cooperative owned organizations that operate for the benefit of their members, not outside investors.
- "1473 letter of intent to build a road, in (old) german" (PDF). Archived from the original (PDF) on 6 July 2011.
- Europe, CICOPA. "About Us".
- Carrell, Severin. Strike Rochdale from the record books. The Co-op began in Scotland., The Guardian, 7 August 2007.
- "Full text of "Dr. William King and the Co-operator, 1828–1830"". archive.org.
- Mercer, T. W., "Dr. William King and the Co-operator, 1828–1830". OL6459685M.
- Marlow, Joyce, The Tolpuddle Martyrs, London :History Book Club, (1971) & Grafton Books, (1985) ISBN 0-586-03832-9
- Monzon, J. L. & Chaves, R. (2008) "The European Social Economy: Concept and Dimensions of the Third Sector", Annals of Public and Cooperative Economics, 79(3/4): 549-577.
- Gates, J. (1998) The Ownership Solution, London: Penguin.
- Rothschild, J., Allen-Whitt, J. (1986) The Cooperative Workplace, Cambridge University Press
- Weinbren, D. & James, B. (2005) "Getting a Grip: the Roles of Friendly Societies in Australia and Britain Reappraised", Labour History, Vol. 88.
- Ridley-Duff, R. J. (2008) "Social Enterprise as a Socially Rational Business", International Journal of Entrepreneurial Behaviour and Research, 14(5): 291-312.
- Rothschild, J., Allen-Whitt, J. (1986) The cooperative workplace, Cambridge University Press, Chapter 1.
- Cliff, T., Gluckstein, D. (1988) The Labour Party: A Marxist History, London: Bookmarks.
- "What is a co-operative - Co-operatives UK".
- Osuuskuntalaki (421/2013, Cooperatives act).§2: "Osuuskunta on jäsenistään erillinen oikeushenkilö, joka syntyy rekisteröimisellä." This translates as, "A cooperative is a legal person separate from its persons, born by registration." Finlex database. Retrieved 2015-12-04. (in Finnish)
- "Australian Co-operative Glossary".
- "Coop Marque". Coop Identity. International Cooperative Alliance.
- "Co-operatives, adopt the Co-operative Marque". Co-op Marque. International Co-operative Alliance.
- "Coop Identity". Coop Marque. International Cooperative Alliance.
- "Coop Marque Register". Domains.Coop. International Cooperative Alliance.
- "Co-operative identity, values & principles". ICA. International Cooperative Alliance.
- International Cooperative Alliance. Statement on the Cooperative Identity Archived 4 February 2012 at the Wayback Machine. Retrieved on: 2011-07-31.
- Andrew McLeod (December 2006). Types of Cooperatives. Northwest Cooperative Development Centre. Retrieved on: 2011-07-31.
- "UN's official website". Retrieved 25 February 2012.
- "A11 Report - Alberta Co-op Survival (PDF)" (PDF).
- "10 Facts About Cooperative Enterprise - Grassroots Economic Organizing". www.geo.coop.
- In 2011 the official total was 11,264: ISTAT, 9° Censimento dell’industria e dei servizi (Roma, 2011)
- "New Generation Cooperatives - 10 Things You Need to Know". Government of Alberta: Agriculture and Rural Development. Retrieved 25 December 2011.
- Whitsett, Ross. Urban Mass: A Look at Co-op City. The Cooperator. December 2006.
- Cobia, David, editor, Cooperatives in Agriculture, Prentice-Hall, Englewood Cliffs, NJ (1989), p. 50.
- "Plaid Cymru Credit Union website". ucpccu.org.
- Ian Clarke, (2000) "Retail power, competition and local consumer choice in the UK grocery sector", European Journal of Marketing, Vol. 34 Iss: 8, pp.975 - 1002
- "What is a Cooperative?". un.org.
- Nippierd, A. (2002). "Gender issues in cooperatives." Geneva, Switzerland: International Labour Organization
- "Membership in Co-operative Businesses Reaches 1 Billion," WorldWatch Institute
- "Co-opoly: The Game of Co-operatives". The Toolbox for Education and Social Action.
- "Teach Your Children Well: Don't Play Monopoly", Truthout.org
- My So-Called Housing Cooperative. youtube.com.
- admin (6 April 2012). "Co-op FAQs and Facts".
- Neoliberal Co-optation of Leading Co-op Organizations, and a Socialist Counter-Politics of Cooperation (February 2015), Carl Ratner, Monthly Review, Volume 66, Number 9
- Cooperatives On the Path to Socialism? (February 2015), Peter Marcuse, Monthly Review, Volume 66, Number 9
- Japanese Consumers' Co-operative Union (2003). "co.op, 2003 Facts and Figures" (PDF). Archived from the original (PDF) on 15 May 2005.
- Isao Takamura (1995). "Japan: Consumer Co-op Movement in Japan".
- Armitage, S. (1991) 'Consequences of Mutual Ownership for Building Societies', The Service Industries Journal, October, Vol.11(4): pp. 458–480 (p. 471).
- Birchall, Johnston. "The International Co-operative Movement", 1997
- Brazda, Johann and Schediwy, Robert (eds.) "Consumer Co-operatives in a Changing World"(ICA), 1989
- Bernardi A., Monni S., eds., (2016), "The Co-operative firm – Keywords, Roma: RomaTrE-Press."
- Cooperative League of America. Co-operation 1921–1947
- Cornforth, C. J. et al. Developing Successful Worker Co-ops, London: Sage Publications, 1988.
- Curl, John. "For All The People: Uncovering the Hidden History of Cooperation, Cooperative Movements, and Communalism in America," PM Press, 2009
- Dana, Leo Paul 2010, "Nunavik, Arctic Quebec: Where Co-operatives Supplement Entrepreneurship," Global Business and Economics Review 12 (1/2), January 2010, pp. 42–71.
- Derr, Jascha. The cooperative movement of Brazil and South Africa, 2013
- Emerson, John. "Consider the Collective: More than business as usual" 2005. Article on graphic design and printing cooperatives.
- Gide, Charles. Consumers' Co-operative Societies, 1922
- Holyoake, George Jacob. The History of Co-operation, 1908
- Llewellyn, D. and Holmes, M. (1991) 'In Defence of Mutuality: A Redress to an Emerging Conventional Wisdom', Annals of Public and Co-operative Economics, Vol.62(3): pp. 319–354 (p. 327).
- Masulis, R. (1987) 'Changes in Ownership Structure: Conversions of Mutual Savings and Loans to Stock Charter', Journal of Financial economics, Vol.18: pp. 29–59 (p. 32).
- Paton, R. Reluctant Entrepreneurs, Open University Press, 1989.
- Rasmusen, E. (1988) 'Mutual banks and stock banks', Journal of Law and Economics, October, Vol.31: pp. 395–421 (p. 412).
- Van Deusen, David. (2006) Co-ops: The Changing Face of Employment in the Green Mountains, Z Magazine.
- Vicari S., (2015), "2014 Annual Report on FAO’s projects and activities in support of producer organizations and cooperatives"
- Vieta, Marco (ed.) "The New Cooperativism" in Affinities: A Journal of Radical Theory, Culture, and Action, Vol. 4, Issue 1, 2010
- Warbasse, James Peter. Cooperative Peace, 1950
- Warbasse, James Peter. Problems Of Cooperation, 1941
- Whyte, W. F. and Whyte, K. K. Making Mondragon, New York: ILR Press/Itchaca, 1991.
- Zeuli, Kimberly A. and Cropp, Robert. Cooperatives: Principles and practices in the 21st century, 2004
- Understanding Cooperatives, a curriculum on cooperative business for secondary school students.
- India: Re-inventing cooperatives by increasing youth involvement
Wikisource has the text of the 1911 Encyclopædia Britannica article Co-operation.
Media related to Cooperatives at Wikimedia Commons
- Venezuela's Cooperative Revolution from Dollars & Sense magazine
- United Nations 2012 International Year of Cooperatives (IYC) official website
The story of the family of Jacoby and Berta (née Grunbaum) Seckel contributes much to our understanding of the dynamics of the small city of Themar in the late 1800s/early 1900s. We learn more about one of Themar’s major Jewish families, the Grünbaums, members of which were in Themar from the beginning in the late 1860s/early 1870s until the end of the Jewish community in the Holocaust. It tells us how the rapid growth of the Jewish community from its beginnings in the late 1860s was the result of the in-migration of Jews from other communities rather than natural increase. We learn more about the internal dynamics of Themar’s economic structure: how Jewish businesses were established, closed, and/or passed into the hands of other family members.
Within a larger context, the family’s story contributes to our knowledge of why and how many German families — Jewish and non-Jewish — either moved from small urban centres to larger and still larger ones in search of economic prosperity, or left Germany altogether. And, with particular reference to the experience of Jewish families during the Holocaust, the story of the Seckels highlights the strategies families pursued in order to escape the Nazi terror. In so doing, the story challenges some long-standing assumptions about Jewish Germans — such as the belief that older Jewish Germans resisted or refused to consider emigration.
Bertha Seckel, née Grünbaum, was born in 1867 in Walldorf in Thüringen, a centre with a population of approximately 1600 in the mid-1800s. Her parents, Loeser (b. 1839) and Johanna Grünbaum (b. 1894, née Bergmann), moved to the slightly larger centre of Themar — population about 1800— sometime after 1870; her father, Loeser Grünbaum, set up a store on the marketplace, the city’s economic centre.
There were other Grünbaums in Themar but the exact connection to Bertha’s family has still to be sorted. The family of Noah & Minna (née Friedmann) Grünbaum also came from Walldorf/Thüringen sometime after the birth of son Hugo in 1868 and before the birth of daughter Minna in Themar in 1872. Noah Grünbaum (b. 1841) may have been either Loeser’s brother or his cousin. Minna Grünbaum, Noah’s first wife died in 1872, and Noah remarried. In 1876, he and Josefine, his second wife, had a son, Karl.
Bertha, Hugo, Minna and Karl Grünbaum probably all attended the public school in Themar, which at that time was located near St. Bartholomew’s church. As well they received instruction from the Jewish Lehrer, Hugo Friedmann.
In 1890, Jacoby Seckel and Bertha Grünbaum married in Themar; Jacoby joined his father-in-law, Loeser Grünbaum, in the business on the Market Square. He also became active in the religious life of the Jewish community and was one of the Gemeinde’s governing committee by the turn of the twentieth century.
Between 1891 and 1904, Jacoby and Bertha had six children in Themar — we know details of the lives of five of them, but of the eldest, Alfred, b. 1891, we know little other than his name. It is possible that he died young.
The Seckel family left Themar in the first decade of the twentieth century, probably seeking greater economic prosperity in a larger centre. Loeser Grünbaum, Bertha’s father, died in 1904, and, in the spring of 1905, Jacoby Seckel, age 45, sold the business to Hugo Grünbaum, related to him by marriage. A huge ‘going-out-of-business’ sale ended on 1 April 1905.
The family left for the city of Zeitz in Saxony-Anhalt. Situated about 40 km south-west of Leipzig, Zeitz was a city 10 times the size of Themar — about 33,000 citizens in 1910 — although its Jewish community may have been no larger than Themar’s.
In 1906, their 20-month old daughter, Lottchen, the last child born in Themar, died in Zeitz. Bertha’s and Jacoby’s roots were still so deep in Themar that they placed a notice in the local Themar newspaper. Their last child, a girl, Gertrud, was born a year later.
In 1909, the Seckels moved again, this time north to Altenburg, a city of approximately 40,000 citizens, of whom 156 were Jews. They lived at Rossplan 5, and for two years, Jacoby continued his Kolonialwarengeschäft.
But, on 12 January 1911, Jacoby Seckel died, 49 years old. As with all family news, Bertha immediately let her friends and relatives in Themar know. The outpouring of sympathy prompted Bertha to place another notice in the Themar paper thanking everyone for their condolences.
In 1911, therefore, 44-year-old Bertha was a widow with 4 children under 20 years of age. Bertha did not return to Themar; instead, she remained in Altenburg, raised her family and continued the family business. It is possible that her mother, Johanna Grünbaum, née Bergmann, had moved to Altenburg before 1911. But if not, she probably joined her daughter in Altenburg shortly after Jacoby’s death in order to help raise the children and allow Bertha to continue the business. The family moved into Rossplan 2 and 3 and, on 27 November 1911, Bertha registered the Kolonialwarengeschäft in her own name in Altenburg’s Commercial Register.
Johanna Grünbaum died in November 1917, just as the Seckel children were about to form their own families. In 1919, two Seckel sisters married two Wohlgemuth brothers: on 27 May 1919, Klara Seckel, b. 1893, married Max Wohlgemuth. Six months later, Hilda, b. 1895, married Max’s elder brother, Emil (b. 1889). Two years later, Sophie Seckel, b. 1897, married Frederich Fernich, b.1893 in Klotten/North Rhine Westfalen.
The Seckels left Altenburg in the early 1920s; address and telephone directories of the 1930s help us track who lived where until the late 1930s. Bertha moved to Leipzig, home to some 700,000 citizens, of whom over 12,500 were Jews. She lived at Kohlgartenstraße 38 and Gertrud probably lived with her until her marriage to Alfred Münzer. We can also find Klara, Sophie and Heinrich in Leipzig, all living within the Waldstraßenviertel: Sophie and Frederich Fernich at Brühl 29; Heinrich Seckel, his wife, Edith, and two children first lived at Körnerstraße 7 and then moved to Waldstraße 72; Klara and Max Wohlgemuth first lived at Richard Wagner Straße 8 and then moved home and business to Leplaystraße 10. Only Hilda’s whereabouts in the 1930s are elusive: the few Leipzig address books available online do not include an entry for Emil Wohlgemuth, suggesting that they lived elsewhere. Hilda and Emil divorced in 1932 and Emil was living in Berlin in the late 1930s. Where Hilda was is not yet known.
By mid-June 1938, if not before, the Seckels were seeking to emigrate: according to his daughter’s account, Heinrich travelled alone to the United States in June 1938 to seek sponsors for his family. He returned and was in Leipzig on the night of the Reichspogromnacht/‘Kristallnacht’. The clothing store was burned and Heinrich was imprisoned in Dachau. On 23 December 1938, after his release from Dachau, Heinrich Seckel registered the addition of the names ‘Israel’ and ‘Sara’ for family members directly connected to Themar living in Leipzig: for himself and his two sisters, Klara and Sophie, who had been born in Themar, and for his mother, who had been married in Themar.
Between fall 1938 and July 1941, eight members of Bertha’s immediate family escaped: three daughters, son Heinrich, two sons-in-law and three grandchildren. First to leave was Alfred Muenzer, husband of Gertrud; he left in early November 1938 — that is, before Kristallnacht/Reichspogromnacht — for the United States and lived in New York City, awaiting Gertrud and daughter Ingrid Dorothea. In August 1939 — just before WWII began — Heinrich Seckel left for England. His task, like that of so many other Jewish Germans who immigrated into England at that time, was to make arrangements for family members to follow and then to move on elsewhere. Although successful in the long run, Heinrich was unable to bring his family to England before war started. However, he also avoided being sent to Canada or Australia as an ‘enemy alien’ in May 1940, and immigrated into the United States on 29 July 1940.
Wartime conditions made escape doubly difficult but not impossible: Sophie and Frederich Fernich were able to leave Europe from St. Nazaire, France, on 19 May 1940, just weeks before Germany took control of France. A year later, on 22 May 1941, when time was running out fast, Gertrud Münzer, née Seckel, left Lisbon with her six-year-old daughter, Ingrid Dorothea. Only in early July 1941 did Heinrich Seckel’s wife, Edith (née Glassmann) and their two children, Joachim-Philipp and Ilse, sail on one of the very last ships to carry refugees from Europe. They joined Heinrich in Dainesville, Ohio.
Again, as with her whereabouts during the 1930s, Hilda Seckel’s story is unclear. Almost by chance, we learn that she was in Leipzig in May 1940; her sister Sophie identifies Hilda Wohlgemuth as the point of contact on the Fernichs’s immigration papers into the United States. Then official California records tell us that Hilda lived in California and died there in 1988. But what happened between 1940 and 1988 is still a mystery.
Records in the Jewish Transmigration Bureau files tell us that Sophie Fernich tried to bring her mother, Bertha Seckel, to the United States. By February 1941, Sophie had paid the monies required to guarantee financial security for her mother’s immigration into the United States. But Bertha’s number in the American quota list was too high; in October 1941, when the German authorities banned emigration from Germany, Bertha’s number had not come up and she was trapped in Germany. On 20 March 1942, the Americans closed the file and refunded Sophie her deposit.
Klara too was trapped in Germany but the circumstances remain cloudy. We know that her husband, Max Wohlgemuth, either escaped before the Holocaust or survived it in some manner. In her book, Menschen ohne Grabstein (2001), Ellen Bertram states that he emigrated. It is possible that, like his brother-in-law Heinrich, Max left Germany first, hoping to be able to arrange for Klara to follow and the effort failed. He too came to the United States where he died in 1983.
Bertha’s and Klara’s situation deteriorated steadily: in early 1939, Leipzig Jews were forced to move into Judenhäuser, and records such as Bertha’s file with the Jewish Transmigration Bureau (above) indicate that for the Seckels, the first of these was Humboldtstraße 10. The immigration records tell us that Hilda, Gertrud, and Heinrich Seckel’s family also lived at Humboldtstrasse 10 in the time leading up to their emigration.
Deportations from Leipzig began on 21 January 1942, but neither Bertha nor Klara was on the first transports. Klara probably avoided the 10 May 1942 transport to Belzyce Ghetto because her forced labour as a seamstress (furrier) was deemed essential. Bertha, age 75, was held back in May to be deported to the so-called ‘retirement ghetto’ at Theresienstadt in fall 1942.
As the combination of emigration and deportation reduced the number of Jews in Leipzig, those remaining were forced to move yet again. The available information is that Klara’s last address was Gustav-Adolf-Strasse 7 and Bertha’s Keilstrasse 5. Bertha was the first to be transported: on 20 September 1942, a transport — of over 800 Jews — took her to Theresienstadt Ghetto. Grünbaum relatives were on the same transport — Hugo Grünbaum and his wife, Klara, from Themar; Minna Rosenthal, née Grünbaum, from Apolda; and Karl Grünbaum and his wife, Hulda née Schlesinger, from Erfurt. From Bertha’s death certificate, we also learn that she arrived in Theresienstadt just as her brother-in-law, Hugo Seckel and his wife, Else, were being deported from Theresienstadt to Treblinka. Her sister-in-law, Rosa Herzberg, née Seckel, was also in Theresienstadt when Bertha arrived; whether Bertha knew any of this is impossible to know.
Bertha Seckel, née Grünbaum, died on 30 November 1942, just over two months after her arrival. It is possible that Klara learned of her death.
On 17 February 1943, Klara was rounded up with at least 116 other Leipziger Jews and taken to Berlin; on 26 February 1943, she was deported to Auschwitz. The German National Archives Memorial Book does not give an official date of death.
In the United States, the Seckels initially lived in various states: the Fernichs lived in New York, the Heinrich Seckels in Michigan, Hilda and Gertrud in California. Heinrich died in 1981. Sophie joined her sisters Hilda and Gertrud to live in California and died there in 1983. Hilda, who married Siegbert Lippschutz (b. 1901 in Berlin), died in 1988; and Gertrud, the youngest of Bertha’s and Jacoby’s children, died in 2001.
One of the many things the story of the Seckel family teaches us is that, despite subsequent moves and the passage of time, some members retained a strong attachment to the city. They did not forget Themar. Themar is now committed to ensuring that the memory of this family is honoured. Luckily, traces have been found in the City of Themar archives and Themar newspaper archives. Further critical information has come from the Pages of Testimony that Bertha’s daughter and son-in-law, and grand-daughter, have contributed to the Yad Vashem Central Database of Shoah Victims’ Names. Other researchers have provided additional information, and so the story acquires layers and nuance.
The Nachkommenliste/Descendants List below identifies the members of Jacoby and Bertha Seckel who were born in Germany (or Europe) prior to 1945.
Please see the page about Klara Seckel and her husband Max Wohlgemuth.
- Jacoby SECKEL, b. 1862 Gross Munzel, d. 1911 Altenburg
- ∞ Bertha GRÜNBAUM, b. 1867 Walldorf, murdered 30 Nov 1942 Theresienstadt
- 1. Alfred SECKEL, b. 10 Nov 1891 Themar
- 1. Klara SECKEL, b. 26 Aug 1893 Themar, murdered Auschwitz
- ∞ Max WOHLGEMUTH, b. 1894 Pakość Poland, d. 1983 Los Angeles/Ca.
- 1. Hilda SECKEL, b. 19 Apr 1895 Themar, d. 1988 Alameda/Ca
- ∞ Emil WOHLGEMUTH, b. 1889 Pakość, Poland, murdered 1945 Auschwitz [n2]
- ∞ Siegbert LIPPSCHÜTZ, b. 1901, d. 1987 Alameda/Ca.
- 1. Sophie SECKEL, b. 03 Mar 1897 Themar, d. 1983 Los Angeles/Ca
- ∞ Frederich Jacob FERNICH, b. 1893 Klotten, d. Los Angeles/Ca
- 1. Heinrich SECKEL, b. 23 Feb 1902 Themar, d. 1981 Oakland/Mich.
- ∞ Edith GLASSMANN, b. 1903, d. 2002 Oakland/Mich. [n3]
- 2. Ilse SECKEL, b. 1929 Leipzig
- 2. Joachim SECKEL, b. 1930 Leipzig
- 1. Lottchen SECKEL, b. 30 Aug 1904 Themar, d. 1906 Zeitz
- 1. Gertrud SECKEL, b. 1907 Zeitz, d. 2001 Los Angeles/Ca.
- ∞ Alfred Siegfried MÜNZER, b. 1904 Brandenburg, d. 1992 Los Angeles/Ca.
- 2. Ingrid Dorothea MÜNZER, b. 1935 Leipzig, d. 2010 USA
Many thanks to Christian Repkewitz of Altenburg for the information he has provided about the Seckel family from their time in Altenburg. His research on the Jewish community of Altenburg, together with a Stadtplan, appears here.
- 1. The spelling of the family names — for example, Klara and Bertha — is based on documents which they themselves prepared.
- 2. Klara Wohlgemuth’s in-laws perished in the Holocaust. Her father- and mother-in-law, Nathan and Fredericke (née Peritz) Wohlgemuth, were deported from Berlin on 31 July 1942, first to Theresienstadt and subsequently, on 26 September 1942, to Treblinka. Her brother-in-law (and Hilda Seckel’s first husband), Emil Wohlgemuth, was deported from Berlin to Auschwitz on 29 January 1943.
- The online biography accompanying the Stolperstein for Arno Glassmann, Edith Glassmann’s brother, in Hamburg, has provided much of the information about the Heinrich Seckel family. Edith Glassmann Seckel lost all three of her brothers in the Holocaust. (Click image at right to enlarge.)
- Primary Sources include:
Geburtsregister Themar 1876-1937, Staatsarchiv Thüringen Meiningen
German National Archives. Memorial Book online version.
Leipzig Adressbücher, 1920, 1930, 1932, and 1936.
Themar City Archives
Yad Vashem. Database of Shoah Victims’ Names. Pages of Testimony for Bertha Seckel, Klara Wohlgemuth, and Emil Wohlgemuth.
Zeitung für Themar, 1900-1903. Themar City Archives.
Zeitung für Themar und Umgegend, 1904-1934. Kirchenarchiv Themar, with many thanks to Pastor and Mrs. Winfried Wolff
Ancestry.com. Public Family Trees.
Ancestry.com. Jewish Transmigration Bureau Deposit Cards, 1939-1954 (JDC) [database on-line]. Provo, UT, USA: Ancestry.com Operations Inc, 2008.
Ancestry.com. New York Passenger Lists, 1820-1957 [database on-line]. Provo, UT, USA: Ancestry.com Operations, Inc., 2010.
Bertram, Ellen. Menschen ohne Grabstein: Die aus Leipzig deportierten und ermordeten Juden. Leipzig: Passage-Verlag, 2001.
Human, Rudolf Armin. Geschichte der Juden im Herzogtum Sachsen-Meiningen-Hildburghausen. Hildburghausen: Kesselring, 1898/reprinted Weimar: F. Fink, 1939.
Kowalzik, Barbara. Wir waren eure Nachbarn: Die Juden in Leipziger Waldstraßenviertel. Leipzig: Pro Leipzig, 1996.
Saturday, June 30, 2012
FREE-FLOATING PLANETS MAY BE MORE COMMON THAN STARS
May 18, 2011
"Although free-floating planets have been predicted, they finally have been detected, holding major implications for planetary formation and evolution models," said Mario Perez, exoplanet program scientist at NASA Headquarters in Washington. The discovery indicates there are many more free-floating Jupiter-mass planets that can't be seen. The team estimates there are about twice as many of them as stars. In addition, these worlds are thought to be at least as common as planets that orbit stars. This adds up to hundreds of billions of lone planets in our Milky Way galaxy alone.
"Our survey is like a population census," said David Bennett, a NASA and National Science Foundation-funded co-author of the study from the University of Notre Dame in South Bend, Ind. "We sampled a portion of the galaxy, and based on these data, can estimate overall numbers in the galaxy." The study, led by Takahiro Sumi from Osaka University in Japan, appears in the May 19 issue of the journal Nature. The survey is not sensitive to planets smaller than Jupiter and Saturn, but theories suggest lower-mass planets like Earth should be ejected from their stars more often. As a result, they are thought to be more common than free-floating Jupiters.
Previous observations spotted a handful of free-floating planet-like objects within star-forming clusters, with masses three times that of Jupiter. But scientists suspect the gaseous bodies form more like stars than planets. These small, dim orbs, called brown dwarfs, grow from collapsing balls of gas and dust, but lack the mass to ignite their nuclear fuel and shine with starlight. It is thought the smallest brown dwarfs are approximately the size of large planets. On the other hand, it is likely that some planets are ejected from their early, turbulent solar systems, due to close gravitational encounters with other planets or stars. Without a star to circle, these planets would move through the galaxy as our sun and others stars do, in stable orbits around the galaxy's center. The discovery of 10 free-floating Jupiters supports the ejection scenario, though it's possible both mechanisms are at play.
"If free-floating planets formed like stars, then we would have expected to see only one or two of them in our survey instead of 10," Bennett said. "Our results suggest that planetary systems often become unstable, with planets being kicked out from their places of birth." The observations cannot rule out the possibility that some of these planets may have very distant orbits around stars, but other research indicates Jupiter-mass planets in such distant orbits are rare.
The survey, the Microlensing Observations in Astrophysics (MOA), is named in part after a giant wingless, extinct bird family from New Zealand called the moa. A 5.9-foot (1.8-meter) telescope at Mount John University Observatory in New Zealand is used to regularly scan the copious stars at the center of our galaxy for gravitational microlensing events. These occur when something, such as a star or planet, passes in front of another more distant star. The passing body's gravity warps the light of the background star, causing it to magnify and brighten. Heftier passing bodies, like massive stars, will warp the light of the background star to a greater extent, resulting in brightening events that can last weeks. Small planet-size bodies will cause less of a distortion, and brighten a star for only a few days or less.
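The duration contrast described above (weeks for stars, about a day for planet-mass bodies) follows from two standard microlensing relations: the point-lens (Paczyński) magnification curve, and the Einstein-ring crossing time, which scales with the square root of the lens mass. A minimal Python sketch, not from the press release; the 20-day baseline for a stellar event is an illustrative assumption:

```python
import math

def magnification(u):
    """Point-lens microlensing magnification for a source at impact
    parameter u (in units of the Einstein radius): the standard
    Paczynski light-curve formula A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def einstein_timescale_days(lens_mass_solar, stellar_event_days=20.0):
    """Event duration relative to a 1-solar-mass lens, holding the
    distances and transverse velocity fixed, so t_E scales as sqrt(M).
    The 20-day stellar baseline is an assumed, typical value."""
    return stellar_event_days * math.sqrt(lens_mass_solar)

# Close alignment brightens the background star strongly ...
assert magnification(0.5) > 2.0
# ... while a distant pass barely perturbs it.
assert magnification(3.0) < 1.05

# A Jupiter-mass lens (~0.001 solar masses) yields events roughly
# 30x shorter than a stellar lens: days or less instead of weeks.
print(einstein_timescale_days(1.0))    # stellar lens, weeks-scale
print(einstein_timescale_days(0.001))  # Jupiter-mass lens, under a day
```

With everything but mass held fixed, a lens of one-thousandth of a solar mass shortens the event by a factor of sqrt(1000), roughly 32, which is why the survey's Jupiter-mass candidates brighten stars for only about a day.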
Friday, June 29, 2012
Thursday, June 28, 2012
Terrorism – A Very Short Introduction by Charles Townshend
I guess that I can date my interest in terrorism back to the early 1970s, during the last big upsurge of activity. With the IRA bombing cities across the UK, the various Red Brigades operating in Italy, Germany and Japan, Action Directe in France, the Weathermen in the US and our very own home-grown Angry Brigade in England, hardly a day went by without some mention in the press or on the TV. Then, of course, those of us who are old enough will remember the birth of Palestinian terrorism, hijackings and various other attacks designed to bring attention to their cause.
All of this, and more, was covered in this excellent little volume by the author of ‘Easter 1916 – The Irish Rebellion’, which I reviewed here back in September 2010. Odd as it may initially seem, the author began by struggling to define terrorism (as distinct from acts of terrorism) and found it – just like many before him – to be a difficult process indeed. Most definitions used to date, he suggests, are either too inclusive or too exclusive to be of much use. Moving on, the author discussed the different types of terrorism, drawing on the rich historical record for examples – the Terror of the French and Russian revolutions, the 19th-century revolutionary terrorists in Europe and the USA, their more contemporary followers in Latin America in the 20th century, and the nationalistic terror of Ireland and the Basque region of Spain – ending with a brief overview of religious terror, which has been around a lot longer than we generally think.
Finally, the author recounts some of the ideas and methods nations have used to combat terrorism, and offers a very interesting analysis of how most terrorist campaigns end – between 1968 and 2006 only 10% could reasonably claim victory, whilst a similar percentage had been successfully crushed by direct military force. Contrast this with around 40% being terminated by police investigation and a slightly larger percentage (43%) ending in political settlement. These figures certainly make a mockery of the present ‘war on terror’, which should have been focused on police action leading towards some kind of political understanding. After all, when all is said and done, terrorism is a crime – normally encompassing murder and property damage. Existing laws may need periodic ‘tweaking’ to keep pace with developments but, I contend, most terrorist activity can be controlled (though never wholly eliminated) by the police, the courts and, in exceptional circumstances, military special forces under the direction of civilian authorities.
As weapons technology progresses (if you can use such a word), more and more deadly devices will fall into the hands of people who are willing to use them for their own political objectives, which they think will be advanced by killing civilians and making the world take them seriously. This, I think, is inevitable. What we must not do in response to this threat is either abandon our liberal democratic ways or fall for the apparently seductive charm of perpetual war. What we can do is treat terrorism as crime and respond accordingly. Bombs will always go off from time to time and innocents will die, but by keeping our response reasonable and proportionate we can prevent or at least reduce a great deal of future damage and even more casualties: the opposite, in fact, of what we are doing right now. A highly recommended book that puts the ‘war on terror’ into perspective.
Wednesday, June 27, 2012
Tuesday, June 26, 2012
Monday, June 25, 2012
Never Let Me Go by Kazuo Ishiguro
Kathy, Ruth and Tommy have always known that they’re special. Growing up in the exclusive English school at Hailsham, they have been taught from an early age that they must take the greatest care of themselves and each other. Only slowly do they find out what is in store for them when they leave their safe harbour and make their own way in the world. Looking back on those days, 31-year-old Kathy (played by Carey Mulligan in the 2010 movie adaptation) must come to terms with her knowledge of all of their fates as she cares for Ruth (Keira Knightley) and her long love Tommy (Andrew Garfield) as they play their part in society.
[SPOILER – For those who haven’t read the book or seen the movie and want to I’d advise you to stop reading here – SPOILER]
OK, has everyone logged off who doesn’t want to hear the rest of it [pauses, looks around]. Right……
I found this book to be quite a struggle but not because it was difficult to understand or because it was badly written per se. I was irritated by its style first and foremost. Told in flashbacks it meandered all over the place as the narrator Kathy related stories of her youth and her relationship with Ruth and Tommy. Quite quickly we find out the secret of their existence (OK you were warned that there would be spoilers). They’re all clones. Now this didn’t come as a huge shock to me as I’d heard about this particular angle before I read the book. What did surprise me much more was that the clones themselves didn’t seem particularly bothered (or interested overmuch) by that news. There was some childish nonsense about finding the real-world person they were a copy of but only to discover how things might have turned out if they didn’t have that other thing hanging over their heads – because being clones was only part of it. They were actually being specifically bred to provide the larger society with organ donations which would eventually, and inevitably, kill them. What did they do with this news? Absolutely nothing except hold on to the vain hope that if they could prove they were in love that they’d get some kind of stay of execution for a few years before calmly being led off to slaughter.
A theme throughout the book was the idea that the children were encouraged to produce works of art and that the best of these – poetry, paintings, sculpture – would be taken away each year for reasons unknown. Near the end of the book Kathy and Tommy find out what really happened to them. They were exhibited to patrons who were interested in the welfare of the clones and wanted to prove that they were practically human and so should be treated in a humane fashion, not, apparently as they were elsewhere, like cattle. This whole theme made me more than a little angry. To me it was bloody obvious that these clones were less than human because, after being informed that they would be killed at the whim of a largely uncaring society, they did not even conceive the idea of rebelling against it. They calmly went on with their lives and right up to the moment of their inevitable death on an operating table remained proud of their sacrifice for the greater good. Where was a Clone Resistance, I asked myself? Why no suicides as acts of aggression against the system that bred them? But of course the novel had no political aspect and only a minimal sociological one. With those elements it would have been a completely different book and would have actually deserved the name of Science-Fiction.
Reading some of the reviews on the back I was struck by one from the Sunday Times which described it as “A novel with piercing questions about humanity and humaneness”. I think they missed the point. I don’t think it was about how we treat people at all. I think it was about how we treat the animals that provide us with food and clothing. Do we treat them well right up until the moment we kill them and eat them, or do we treat them like things bred to be eaten and, therefore, hardly to be thought of? Or do we actually treat our fellow creatures with somewhat more consideration and not eat them in the first place? Kathy, Ruth, Tommy and the rest were cattle and behaved like cattle, even assisting the State in their own slow executions. Like cattle they were rounded up, herded and killed whenever someone needed a heart or a kidney or a few feet of intestines. Maybe the book did exactly what it meant to do – it got an emotional response out of me. It certainly did that! It also means that I will never read anything else by this author.
Sunday, June 24, 2012
Saturday, June 23, 2012
The Earth Cannot Be Saved by Hope and Billionaires
by George Monbiot for The Guardian
Tuesday, June 19, 2012
Worn down by hope. That's the predicament of those who have sought to defend the earth's living systems. Every time governments meet to discuss the environmental crisis, we are told that this is the "make or break summit", on which the future of the world depends. The talks might have failed before, but this time the light of reason will descend upon the world.
We know it's rubbish, but we allow our hopes to be raised, only to witness 190 nations arguing through the night over the use of the subjunctive in paragraph 286. We know that at the end of this process the UN secretary general, whose job obliges him to talk nonsense in an impressive number of languages, will explain that the unresolved issues (namely all of them) will be settled at next year's summit. Yet still we hope for something better. This week's earth summit in Rio de Janeiro is a ghost of the glad, confident meeting 20 years ago. By now, the leaders who gathered in the same city in 1992 told us, the world's environmental problems were to have been solved. But all they have generated is more meetings, which will continue until the delegates, surrounded by rising waters, have eaten the last rare dove, exquisitely presented with an olive leaf roulade. The biosphere that world leaders promised to protect is in a far worse state than it was 20 years ago. Is it not time to recognize that they have failed?
These summits have failed for the same reason that the banks have failed. Political systems that were supposed to represent everyone now return governments of millionaires, financed by and acting on behalf of billionaires. The past 20 years have been a billionaires' banquet. At the behest of corporations and the ultra-rich, governments have removed the constraining decencies – the laws and regulations – which prevent one person from destroying another. To expect governments funded and appointed by this class to protect the biosphere and defend the poor is like expecting a lion to live on gazpacho. You have only to see the way the
United States has savaged the Earth summit's draft declaration to grasp the scale of this problem. The word "equitable", the US insists, must be cleansed from the text. So must any mention of the right to food, water, health, the rule of law, gender equality and women's empowerment. So must a clear target of preventing two degrees of global warming. So must a commitment to change "unsustainable consumption and production patterns", and to decouple economic growth from the use of natural resources. Most significantly, the US delegation demands the removal of many of the foundations agreed by a Republican president in Rio in 1992. In particular, it has set out to purge all mention of the core principle of that Earth summit: common but differentiated responsibilities. This means that while all countries should strive to protect the world's resources, those with the most money and who have done the most damage should play a greater part. This is the government, remember, not of George W Bush but of Barack Obama. The paranoid, petty, unilateralist sabotage of international agreements continues uninterrupted. To see Obama backtracking on the commitments made by Bush the elder 20 years ago is to see the extent to which a tiny group of plutocrats has asserted its grip on policy.
While the destructive impact of the US in Rio is greater than that of any other nation, this does not excuse our own failures. The British government prepared for the Earth summit by wrecking both our own Climate Change Act and the European energy efficiency directive. David Cameron will not be attending the Earth summit. Nor will Ed Davey, the energy and climate change secretary (which is probably a blessing, as he's totally useless). Needless to say, Cameron, with other absentees such as Obama and Angela Merkel, are attending the G20 summit in Mexico, which takes place immediately before Rio. Another tenet of the 1992 summit – that economic and environmental issues should not be treated in isolation – goes up in smoke.

The environmental crisis cannot be addressed by the emissaries of billionaires. It is the system that needs to be challenged, not the individual decisions it makes. In this respect the struggle to protect the biosphere is the same as the struggle for redistribution, for the protection of workers' rights, for an enabling state, for equality before the law. So this is the great question of our age: where is everyone? The monster social movements of the 19th century and first 80 years of the 20th have gone, and nothing has replaced them. Those of us who still contest unwarranted power find our footsteps echoing through cavernous halls once thronged by multitudes. When a few hundred people do make a stand – as the Occupy campers have done – the rest of the nation just waits for them to achieve the kind of change that requires the sustained work of millions. Without mass movements, without the kind of confrontation required to revitalize democracy, everything of value is deleted from the political text. But we do not mobilize, perhaps because we are endlessly seduced by hope. Hope is the rope from which we all hang.
[We are SO screwed……..]
Friday, June 22, 2012
Thursday, June 21, 2012
Stoic Warriors – The Ancient Philosophy behind the Military Mind by Nancy Sherman
I bought this book some time ago because I thought it could help me with my last Masters dissertation. It certainly looked the part. So I skim-read it for an hour or so and came up blank. Slightly disappointed, I put it back on my Philosophy bookshelf and moved on to the next volume in my reading list. Finally I picked it back off the shelf and gave it a good cover-to-cover going over.
On the face of it this seemed like a good read in waiting. I’ve been interested in the military mind for quite a while now and have an almost as long interest in Stoic Philosophy. Unfortunately this turned out to be a very disappointing book indeed (and not completely because of my initial high hopes). Some parts of the book did actually interest me. The author related several stories, in particular about US Navy pilot James B Stockdale who was shot down over North Vietnam, captured, imprisoned and tortured before finally being released much later. Part of what kept him both physically and mentally alive was his strong Stoic sensibilities. So far so good, I thought. But when she moved onto other militaristic themes I grew less and less enamoured by her arguments and her general portrayal of applying ancient philosophy to modern combat situations. I lost count of the number of times Sherman outlined the Stoic stance on, say, Grief, attempted to apply it to the real-world situation of military men and women put in that position and then said how basically inappropriate it was. Time and again, despite the fact that she was a supposed expert on the subject, I found myself strongly disagreeing with her interpretation, or even her understanding, of what the various Stoic authors meant. Now I would hardly call myself an expert on the subject but I found myself continually reading between the lines of the text and uncovering much more about the author's beliefs rather than, as I expected, her understanding of the Stoic mindset. Two examples I think will suffice. Firstly she built up the idea of the Stoic Sage – not too dissimilar, it seemed, to the Nietzschean Superman being beyond Good and Evil – being detached from the world and yet still interacting with it. This Sage-like figure could literally soak up the slings and arrows of outrageous fortune without so much as a raised eyebrow – in other words Mr Spock. Such distance, the author maintains, is both impossible and would make the person for all intents and purposes inhuman. I disagreed. Secondly she harped on (and on) about the need to cultivate moral indignation, and indeed righteous anger, as a healthy motivator to go out into the world and right its wrongs. I really didn’t know whether to laugh out loud at this point or simply stop reading.
Wednesday, June 20, 2012
Tuesday, June 19, 2012
Monday, June 18, 2012
Thinking About: Life in the Galaxy
If you have been reading this Blog for any length of time you’ll know that I periodically post articles about Extraterrestrial life. You will no doubt have realised that I am of the opinion that the probability of such life existing is high (if not actually certain) and that it is only a matter of time before we stumble over it or it stumbles upon us. After all, our Galaxy is certainly old enough for life to have emerged in it – we even have a confirmed example of it: Earth. Our Galaxy (one of many, many Galaxies) contains billions of stars around which probably orbit billions of planets. Any one of them could be the home of life, so the odds against Earth being the only such home are literally astronomical. So the question remains: Where is everyone (else)? It’s a very good question and is usually referred to as the Fermi Paradox. If the Galaxy is really old enough, diverse enough and capable of producing life in multiple locations why haven’t we found it yet? Let me consider some of the facts and some speculations to try to answer that.
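This kind of back-of-the-envelope reasoning is usually formalised as the Drake equation, which multiplies together a chain of probabilities to estimate how many broadcasting civilisations the Galaxy might hold right now. Here is a minimal sketch in Python – every parameter value below is purely an illustrative guess, not an established figure:

```python
# Drake equation sketch: N = R* x fp x ne x fl x fi x fc x L
# N = number of civilisations in the Galaxy whose signals we might detect.
# All parameter values are illustrative guesses only.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Multiply the seven Drake factors together."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# One moderately optimistic set of guesses:
n = drake(
    r_star=7,       # new stars formed per year in the Galaxy
    f_p=0.5,        # fraction of stars with planets
    n_e=2,          # habitable planets per planetary system
    f_l=0.3,        # fraction of those on which life emerges
    f_i=0.01,       # fraction of those that evolve intelligence
    f_c=0.1,        # fraction of those that ever broadcast
    lifetime=1000,  # years a civilisation keeps broadcasting
)
print(round(n))  # a mere handful of civilisations, on these numbers
```

Notice how sensitive the answer is to the last factor: the same guesses with a million-year broadcasting lifetime give thousands of civilisations, which is exactly why the "they snuff themselves out" speculation below matters so much.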
The first thing we need to consider is the size of the Galaxy. It’s big, really big. The distances between the stars are vast. Light from even our nearest star takes a little over 4 years to get here and light, as you may know, moves at a fairly decent speed. To send a probe there using present technology would take thousands of years. If the speed of light is indeed the universal speed limit – putting all of the various SF propulsion systems to one side – it’s hardly surprising that no one has come calling. But what about sending signals? After all, radio waves move at the speed of light, right? So why haven’t we received any signals either? There was a comment from the head of NASA in one of those asteroid movies when he tried to explain why no one had seen it coming until it was almost upon us. He said that they only scanned a small percentage of the sky and it was a big-ass sky. We’ve only been listening for signals for about 50 years (though we’ve been leaking signals for somewhat longer) and it’s certainly a big-ass sky. Presumably the discovery of planets around a host of ‘nearby’ stars can narrow the search a bit but there’s still an enormous amount of ground to cover. It’s possible that a signal is on its way right now from a star 100 or 200 light years away which will get to us in 50 or 100 years. It may simply be the case that we haven’t listened long enough or we’re searching in the wrong places rather than the sky being empty of life.
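To put rough numbers on those distances, here is a quick sketch (the figures are rounded: Proxima Centauri at about 4.25 light years, and a probe speed in the ballpark of Voyager 1's roughly 17 km/s):

```python
# Rough travel-time arithmetic for the nearest star system.
LIGHT_YEAR_KM = 9.46e12   # kilometres in one light year (rounded)
DIST_LY = 4.25            # distance to Proxima Centauri, light years
PROBE_SPEED_KMS = 17      # km/s, roughly Voyager 1's cruise speed
SECONDS_PER_YEAR = 3.156e7

# A radio signal travels at light speed, so by definition the trip
# takes as many years as the distance in light years:
signal_years = DIST_LY

# A present-technology probe is vastly slower:
probe_years = (DIST_LY * LIGHT_YEAR_KM) / (PROBE_SPEED_KMS * SECONDS_PER_YEAR)

print(f"signal: {signal_years} yr, probe: {probe_years:,.0f} yr")
```

On these assumptions the probe takes on the order of 75,000 years, which is why "thousands of years" above, if anything, understates the problem.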
We know for a fact that life exists on one world: Earth. We also know that our star isn’t particularly unique. We suspect that the same forces that produced our solar system are likely to operate universally which means that planetary systems just like ours exist orbiting stars just like ours – and that some of those planets will be in the so-called ‘Goldilocks zone’ where conditions allow liquid water on the surface and are suitable for the emergence and evolution of life. I have long contended that where conditions are conducive to the emergence of life it will indeed emerge. After that has occurred evolution will kick in and things will start getting interesting. But it should be remembered that for the vast majority of the history of life on Earth it was the domain of single-celled or simple multi-cellular organisms. It’s quite possible that even if life is prolific in the Galaxy it mostly exists at this simple level. Of course intelligent life has only existed on Earth for about a million years or so (depending on your definition of intelligent). It’s only in the last 100 years or so that we’ve begun broadcasting signals into space. It’s possible that we are the first species to do so in this part of the Galaxy so there’s no one to listen to (or to listen to us) yet. Likewise intelligent life could have flourished within 100 light years of us but may have died out 500 years ago due to either a natural or home-made catastrophe. Intelligence that can build radio transmitters and receivers capable of interstellar communication may also, inevitably maybe, create atomic bombs and bio-weapons and be stupid enough to use them. We certainly are. Maybe whatever intelligent life emerges in the Galaxy quickly snuffs itself out before anyone else is around to hear it? Or maybe it is snuffed out by wandering fleets of machines bent on the destruction of all organic life?
It’s just as possible that one (or more) of the emergent civilisations destroyed itself by creating intelligent machines that see all organic life as a threat and have spent the last 100 million years hunting down radio signals and wiping out their producers. With a Galaxy this big, this diverse and this old such an idea might not just belong between the covers of science-fiction novels or in summer blockbusters at the multiplex.
Of course it’s quite possible that the Galaxy is indeed as empty as it appears to be. We might be the first intelligent (and I use this word advisedly) species to have evolved or simply the only one to be around at this time – others having become extinct or not evolved far enough yet. But I think the odds are against this. If intelligent life is a fairly late product of evolution, which seems likely given its obvious advantages, it’s likely that intelligent life will have evolved many times in this Galaxy. Maybe those that do exist are far above us in evolutionary terms and simply don’t regard us as worthy of communicating with. Would you spend much time trying to speak to ants? I think not. Maybe any nearby alien life is simply too different from us and can’t see the point in dropping by to say hello. Maybe they’ve tried and failed – thereby proving that we’re not worth communicating with?
We could certainly speculate all day about why ET isn’t calling us. Presently we just have too little data to work with. My gut feeling is that it isn’t because intelligent life simply does not exist anywhere else but here (with the usual caveat). My best guess on the subject is that the vast distances involved make communication very difficult. Together with the fact that we really haven’t been listening for that long and until very recently really didn’t know exactly where to look it’s hardly surprising that we haven’t heard any alien chatter. We may be receiving messages within days of me posting this or we might have to wait hundreds of years. I really have no idea. After all…. it’s a big-ass sky.
Sunday, June 17, 2012
Saturday, June 16, 2012
Top 10 greatest Science Fiction detective novels
From Wired Magazine
30 April 2010
China Miéville's detective story The City And The City is well on its way to being the award-winningest novel of the year. But it's not the only great novel about science fiction/fantasy sleuths. Here are 10 other SF detective classics. Speculative fiction and detective fiction have a lot in common -- they're both about digging down to the truth of matters. Fictional scientists and explorers, like detectives, follow clues and act on hunches. The truth is enshrouded in an ocean of red herrings and false trails. Plus, a lot of great science fiction authors, like Ray Bradbury and Robert Silverberg, also wrote detective novels, for money or as a change of pace.
A Philosophical Investigation by Philip Kerr
I loved this book when it came out in the early 1990s, but I see it has tons of mixed reviews online. In a nutshell, it's the future - the year 2013 - and we've replaced executions with punitive comas as a method of punishment for extreme criminals. And a neurologist has discovered that men with a particular brain configuration are much more likely to become sociopaths and serial killers. Everybody gets tested, and the list of men with this deficiency is kept on file, with each man given a code name from the Penguin Book of Great Thinkers. One of the men, codenamed Wittgenstein, finds out about his diagnosis - so he hacks into the confidential database and erases his information, then goes around killing the other men on the list. And the serial killer begins to see his murders through the lens of Wittgenstein's philosophy. It's up to police officer Isadora "Jake" Jakowicz to find out who Wittgenstein is and stop his murder spree. Like I said, I loved it.
The Retrieval Artist novels by Kristine Kathryn Rusch
This series, which started with the short story The Retrieval Artist, takes place in the future, when the Moon has been colonized for centuries and humans are in contact with lots of alien races. And when humans inadvertently break the laws of alien cultures, they have to face those aliens' punishments - no matter how bizarre or severe. And people sometimes try to disappear, or change their identities, to avoid this harsh alien justice. Detective Miles Flint and his partner Noelle DeRicci wind up solving murders whose solutions are often startling - like the cleaning robots were reprogrammed to rearrange the crime scene, or the murder wasn't what it first appears - and at the same time must avoid offending the strange customs of the alien races.
When Gravity Fails by George Alec Effinger
It's the 22nd century, and the Arab world has advanced far beyond the West, into a cyberpunk marvel. Marid Audran is a cocky, wisecracking hero who's forced to solve a series of brutal murders - the killer is using "moddies" to download the personalities and skills of some of history's most bestial serial killers into his brain, making him more than a match for the non-upgraded Audran. Audran finally discovers and overpowers the killer, but his problems are just beginning.
Tea From An Empty Cup by Pat Cadigan
Detective Dore Konstantin is called upon to investigate the murder of a young man inside an Artificial Reality chamber, and discovers that he died the exact same way inside the game as in reality. Her investigations into AR worlds lead her into the VR gamescape of post-apocalyptic Noo Yawk Sitty, and she begins to discover that other people have died while wired into the game. The murders turn out to be part of something much more complex, and startling.
The Automatic Detective by A. Lee Martinez
Mack Megaton is a nearly indestructible robot, built by a scientist bent on world domination. But he's gained free will, and decided to give up the world-domination racket in favor of assimilating with society and driving a cab. So far so good - until his neighbors are kidnapped and he decides to find them. His quest takes him into the secrets of Empire City, aka Technotopia, and he confronts talking gorillas, mutant villains and robot thugs, eventually going on a rampage of destruction that might just save Empire City.
Altered Carbon by Richard K. Morgan
Another cyberpunk-esque noir future, in which people can be "shelved" and then later "resleeved" into new bodies. For the super-rich, known as Meths (or Methuselahs), it's possible to remain young and healthy for hundreds of years, just regrowing a new body whenever you want one. So when someone apparently murders wealthy asshole Laurens Bancroft, he just gets resleeved in a new body soon afterwards. But he still wants to know who killed him, so he hires/enslaves former soldier and current convict Takeshi Kovacs, giving Kovacs a new body, which happens to have a nicotine addiction and a few other annoying quirks. Possibly the greatest classic of the "future noir" genre. James McTeigue (Ninja Assassin, V For Vendetta) wants to make the movie version.
Gun, With Occasional Music by Jonathan Lethem
Lethem's trippiest novel, this book follows Conrad Metcalf, a detective in a world where asking questions is considered shockingly rude, and guns have a violin soundtrack. He's investigating the murder of a prominent urologist, and this takes him through a futuristic version of Oakland and San Francisco, in a world full of weird drugs, uplifted animals, babies with adult consciousness and erotic nerve-swapping. The mob has a kangaroo enforcer. And psychology is now considered a weird cult. Lethem writes the whole thing in a wise-acre Chandler pastiche, which makes it just so bizarrely awesome. "The sky was clean and blue. I tried to concentrate on it, to keep my mind off what I'd just held in my arms and pressed against my body, as well as the fact that I made my living picking the scabs off other people's lives. But the day I can't shrug off a twinge of self-pity is the day I'm washed up for keeps."
Dirk Gently's Holistic Detective Agency by Douglas Adams
The creator of The Hitchhiker's Guide To The Galaxy series turns his twisted mind to detective fiction, and creates a story so convoluted, it will turn your brain into haggis. The plot revolves around a ghost possessing a guy to kill another guy, and also embedding clues into the poems of Samuel Taylor Coleridge that will allow him to use a secret time machine to prevent his spaceship from blowing up four billion years in the past. It's sort of a mash-up of the Doctor Who stories "Shada" and "City Of Death," but the genius is in the telling of it and the way in which the titular "holistic detective" infers stuff based on the fundamental inter-connectedness of all things.
The Yiddish Policemen's Union by Michael Chabon
One of the great meldings of detective fiction with alternate history - the other one being Robert Harris' Fatherland, which is in the list of "other notable titles" below - Chabon's Hugo Award-winning novel takes place in an alternate world where the Jews settled a patch of Alaska and Israel was never founded. Mayer Landsman, an alcoholic homicide cop, is called to investigate the execution-style murder of a man in a residence hotel. But the chess-playing victim turns out to be more than he first appears. Chabon's prose pays homage to Chandler, as well as Ross MacDonald and Dashiell Hammett, but his alternate-history world building elevates the story beyond the pure detective genre, and creates something much stranger.
The Caves Of Steel by Isaac Asimov
As Asimov writes in his introduction to one edition, "[John] Campbell had often said that a science fiction mystery story was a contradiction in terms; that advances in technology could be used to get detectives out of their difficulties unfairly, and that the readers would therefore be cheated. I sat down to write a story that would be a classic mystery and that would not cheat the reader - and yet would be a true science-fiction story. The result was The Caves Of Steel." In a nutshell, in this novel and The Naked Sun, Asimov pioneers the human-robot "buddy cop" genre, with policeman Elijah Baley paired with robot detective R. Daneel Olivaw.
Other notable titles:
The Andrea Cort novels by Adam-Troy Castro, the KOP novels by Warren Hammond, the October Daye novels by Seanan McGuire, Daymare by Frederic Brown, Zombies Of The Gene Pool by Sharyn McCrumb, the Johnson and HARV novels by John Zakour, The Elysium Commission by L.E. Modesitt, Jr., Dark Heart by Margaret Weis and David Baldwin, the Victory Nelson series by Tanya Huff, The Dresden Files by Jim Butcher, Sacred Ground by Mercedes Lackey, The Demolished Man by Alfred Bester, Fatherland by Robert Harris, and the Arabesk novels by Jon Courtenay Grimwood. There are also several anthologies of SF detective stories, including Isaac Asimov's Detectives, a collection of mystery stories from the pages of Isaac Asimov's Science Fiction Magazine, Mike Resnick's Down These Dark Spaceways, and the Asimov-edited 13 Crimes Of Science Fiction.
[As a fan of both detective novels and SF I had to share this list. I’ve already read a few of them and will be checking out a few more in the future – pun intended.]
Friday, June 15, 2012
Thursday, June 14, 2012
The Ipcress File by Len Deighton
The story starts when the unnamed protagonist (called Harry Palmer and played by Michael Caine in the 1965 movie adaptation) is transferred from his old post in Military Intelligence – probably part of what is now called MI5 – into a counter-espionage unit led by the enigmatic character Dalby. Here he learns that British scientists have been disappearing over the past few months and it’s their job to find out what is happening and stop it. Of course things are simply not that easy. The chief suspect is thought to be a double agent – although there’s no proof that he is – and the one scientist they get back has large chunks of his memory missing and is now useless to HM Government. If that wasn’t bad enough the Americans suspect that MI5 has been penetrated by the Russians and ‘Harry’ is their main suspect.
When it was published in 1962 this novel was hailed as a breakthrough in the espionage genre. For the first time spying was shown as just another job with meetings, file keeping, arguments over expenses and heavy layers of bureaucracy. It showed, or at least appeared to show, the more down-to-earth side of things. So much so that it drips with the details and minutiae that embed it firmly in its time and place, thereby dating it very badly. Probably a good thing at the time but 50 years later maybe not – except perhaps for the social historians amongst its readership. That may have been part of what helped confuse me for ¾ of the book. Although it was eminently readable I really didn’t have much of a clue what was going on. Memories of the film didn’t help much as (IIRC) the plot was significantly different – with enough similarities to make it even more confusing! I’ve read several Deighton books in the past – most recently XPD – and have pretty much enjoyed all of them (in particular SS-GB). But I can’t honestly say the same about this offering. One for dedicated Deighton fans only I think.
Wednesday, June 13, 2012
Tuesday, June 12, 2012
Monday, June 11, 2012
Sharpe’s Prey by Bernard Cornwell
Sunday, June 10, 2012
Saturday, June 09, 2012
NASA FINDS EARTH-SIZE PLANET CANDIDATES IN HABITABLE ZONE, SIX PLANET SYSTEM
Feb. 02, 2011
Candidates require follow-up observations to verify they are actual planets. Kepler also found six confirmed planets orbiting a sun-like star, Kepler-11. This is the largest group of transiting planets orbiting a single star yet discovered outside our solar system. "In one generation we have gone from extraterrestrial planets being a mainstay of science fiction, to the present, where Kepler has helped turn science fiction into today's reality," said NASA Administrator Charles Bolden. "These discoveries underscore the importance of NASA's science missions, which consistently increase understanding of our place in the cosmos."
The discoveries are part of several hundred new planet candidates identified in new Kepler mission science data, released on Tuesday, Feb. 1. The findings increase the number of planet candidates identified by Kepler to-date to 1,235. Of these, 68 are approximately Earth-size; 288 are super-Earth-size; 662 are Neptune-size; 165 are the size of Jupiter and 19 are larger than Jupiter. Of the 54 new planet candidates found in the habitable zone, five are near Earth-sized. The remaining 49 habitable zone candidates range from super-Earth size -- up to twice the size of Earth -- to larger than Jupiter. The findings are based on the results of observations conducted May 12 to Sept. 17, 2009, of more than 156,000 stars in Kepler's field of view, which covers approximately 1/400 of the sky.
"The fact that we've found so many planet candidates in such a tiny fraction of the sky suggests there are countless planets orbiting sun-like stars in our galaxy," said William Borucki of NASA's Ames Research Center in Moffett Field, Calif., the mission's science principal investigator. "We went from zero to 68 Earth-sized planet candidates and zero to 54 candidates in the habitable zone, some of which could have moons with liquid water." Among the stars with planetary candidates, 170 show evidence of multiple planetary candidates. Kepler-11, located approximately 2,000 light years from Earth, is the most tightly packed planetary system yet discovered. All six of its confirmed planets have orbits smaller than Venus, and five of the six have orbits smaller than Mercury's. The only other star with more than one confirmed transiting planet is Kepler-9, which has three. The Kepler-11 findings will be published in the Feb. 3 issue of the journal Nature.
"Kepler-11 is a remarkable system whose architecture and dynamics provide clues about its formation," said Jack Lissauer, a planetary scientist and Kepler science team member at Ames. "These six planets are mixtures of rock and gases, possibly including water. The rocky material accounts for most of the planets' mass, while the gas takes up most of their volume. By measuring the sizes and masses of the five inner planets, we determined they are among the lowest mass confirmed planets beyond our solar system." All of the planets orbiting Kepler-11 are larger than Earth, with the largest ones being comparable in size to Uranus and Neptune. The innermost planet, Kepler-11b, is ten times closer to its star than Earth is to the sun. Moving outward, the other planets are Kepler-11c, Kepler-11d, Kepler-11e, Kepler-11f, and the outermost planet, Kepler-11g, which is half as far from its star as Earth is from the sun.
The planets Kepler-11d, Kepler-11e and Kepler-11f have a significant amount of light gas, which indicates that they formed within a few million years of the system's formation. "The historic milestones Kepler makes with each new discovery will determine the course of every exoplanet mission to follow," said Douglas Hudgins, Kepler program scientist at NASA Headquarters in Washington.
Kepler, a space telescope, looks for planet signatures by measuring tiny decreases in the brightness of stars caused by planets crossing in front of them. This is known as a transit. Since transits of planets in the habitable zone of sun-like stars occur about once a year and require three transits for verification, it is expected to take three years to locate and verify Earth-size planets orbiting sun-like stars.
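The size of the dip Kepler has to detect can be illustrated with a back-of-the-envelope calculation (my own sketch, not from the press release): for a central transit, the fractional drop in brightness is roughly the planet's disk area divided by the star's.

```python
# Illustrative only: approximate transit depth as the ratio of disk areas.
R_SUN_KM = 695_700.0    # solar radius
R_EARTH_KM = 6_371.0    # Earth radius

def transit_depth(planet_radius_km, star_radius_km):
    """Fractional drop in stellar brightness during a central transit."""
    return (planet_radius_km / star_radius_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"Earth-like transit depth: {depth:.2e}")  # about 8.39e-05, i.e. ~84 ppm
```

A dip of less than one ten-thousandth of the star's light is why an Earth-size detection demands such photometric precision, and why three repeat transits (about three years for an Earth-like orbit) are needed to rule out noise.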
[Slightly old news I know but worth repeating (or bringing to your attention if you missed it). The discovery of Earth-like planets in habitable zones in star systems very much like our own gives great (if admittedly circumstantial) credence to the idea that life – even intelligent life – could very possibly exist on other worlds orbiting other stars. Those who continually dismiss even the possibility of life elsewhere need to contend with the growing number of planets discovered orbiting stars not too dissimilar to our own, in just the place they need to be to allow liquid water to exist on their surface. If, like me, you believe that life emerged on Earth as part of a natural process without the need for supernatural agency, you cannot but agree that if it can happen here in the right circumstances then it can happen elsewhere too, if those circumstances are similar enough. The odds, it increasingly appears, are being stacked in favour of finding life elsewhere in our galaxy. Now all we need to do is actually go and find it.]
Friday, June 08, 2012
Thursday, June 07, 2012
Dreaming – A Very Short Introduction by J Allan Hobson
Dreams have long fascinated mankind and our species has spent a great deal of time and effort attempting to discover what they mean: Which has all been a monumental waste of time – according to the author of this interesting little book! In the 143 closely argued pages Hobson makes the case for looking at the brain and the mind as purely material entities (which I strongly agree with) and analysing dreams as by-products of this materialism. Dreams, he contends, are not messages from the Gods nor are they shape shifted entries into the workings of the subconscious as Freud would have us believe. Freud indeed comes under special and very critical analysis for leading dream research in the wrong direction for the majority of the 20th Century.
Dreams can, the author proposes, tell us a great deal – but not about what they have long been believed to inform us about. What dreams and dream research can tell us about is the functioning (and sometimes malfunctioning) of both the brain and the mind it produces, and especially the operation of human consciousness. The contents of dreams – such as they are – are red herrings which will, to mix my metaphors, lead the unwary down various garden paths. It is the form of dreams, rather than their content, that – considered alongside various scans (CAT, MRI, etc.) – gives vital clues to how the brain/mind operates while we're asleep: basically attempting in vain to bring some order and structure out of the chaos that is our sleeping brains while the very centres dedicated to rational analysis have been decoupled and are unavailable. With these areas of the brain off-line, the remaining centres try their best to weave a narrative using the disparate images, memories and other elements we are all familiar with (at least briefly) on waking.
The author certainly makes a convincing case – OK, I was already starting from the purely materialist stand-point, but it's still a valid point – that old ideas of dream analysis are bunk and have prevented the real analysis of what's actually going on in our brains from moving much beyond modern versions of shamanism. With our increasing knowledge of how the brain works, we are beginning to understand what function dreams perform behind their often bizarre outward appearance. If you want an interesting and thought-provoking view of something we all do for a considerable amount of time during our lives, then this is a very good place to start. More on sleep (and dreams) to come.
Wednesday, June 06, 2012
Tuesday, June 05, 2012
My Favourite Movies: The Terminator Series
OK, I’m kind of cheating here but as I watched all four films back-to-back recently it seemed reasonable to review them all at the same time too.
For those of you who have just returned from another planet or woken from a particularly deep slumber, the Terminator movies (and the rather missed TV spin-off The Sarah Connor Chronicles) follow a story arc that runs as follows: At some point in the near future – the date changes because of actions in the present – a military computer system called Skynet becomes self-aware and tries to destroy mankind by launching its missiles against Russia, forcing them to retaliate against western targets. The survivors – who call the event Judgement Day – then face a new threat: machines bent on their destruction. At the point of human extinction a hero arises – John Connor – who leads the human resistance and destroys Skynet… or does he? In its dying moments Skynet manages to send a Terminator, a killer cyborg with living tissue over a metal endoskeleton, back in time to kill his mother Sarah before John is even born. The resistance sends a soldier, Kyle Reese, back to protect her, which forms the first movie in the series, The Terminator, made in 1984 with Arnold Schwarzenegger as the Terminator (in admittedly a seminal role for him), Linda Hamilton as Sarah and Michael Biehn as Kyle. I reviewed this movie here back in September 2008 so I won't repeat myself much, except to say that, apart from some dodgy SFX (which I guess were OK for the time), it was a pretty good and in many ways unique movie. I liked the killer-robots-from-the-future idea very much and thought that Arnie played his part very well indeed (I was a huge Arnie fan back then). Hamilton was OK in the role of Sarah, but I guess she was meant to be largely out of her depth – I mean, who wouldn't be if some crazy person came up to you saying that you had been targeted for termination by a killer robot! By far my favourite character in the movie was Kyle, played by the superb Michael Biehn, who stole, in my opinion, every scene he was in. My favourite bits, as in all of the movies, were the scenes set in the future.
We had to wait until 1991 for the cunningly titled sequel Terminator 2: Judgement Day. In it the young John Connor (Ed Furlong in his first ever film) lives with foster parents while Sarah (this time played superbly and in iconic fashion by Linda Hamilton) languishes in the Pescadero Mental Hospital. John does not believe his mother's ravings about killer robots until one tries to kill him in the Mall (Robert Patrick) and another saves him just in time (Arnold Schwarzenegger again, but this time as a 'good' Terminator). What follows is basically a chase movie as the liquid metal Terminator T-1000 (Patrick) tries to kill the Connors and Arnie tries to save them. There are some very exciting chases and a lovely set piece at Cyberdyne Systems where Sarah and John try to put an end to the whole Skynet issue by blowing everything up. As you might expect the SFX are much improved, although the liquid metal Terminator FX left something to be desired from time to time. Although probably the best movie in the series, it did have some things in it that I really didn't like. I hated the sentimentality between Arnie and John – the whole 'Why do you cry' business and most especially the puke-inducing thumbs-up scene in the foundry at the end. Totally nauseating.
In the 2003 movie Terminator 3: Rise of the Machines we learn that Judgement Day is inevitable despite the destruction of Cyberdyne, when a T-X Terminator (played very ably by the beautiful Kristanna Loken) starts killing John Connor's lieutenants before finding Kate Brewster (impressively played by Claire Danes), who has just stumbled upon John Connor at her veterinary practice. Just in the nick of time another T-101 (Arnie again) shows up and slows the T-X down long enough for John and Kate to escape. We are then back in chase territory which, quite honestly, gets a bit silly from time to time (complete with unnecessary and annoying sound effects). In the few pauses we learn that Sarah has died of cancer – but not before she outlived the original date of Judgement Day – and that the date has merely been postponed by the Connors' efforts in T2. The focus of those efforts was wrongly placed on Cyberdyne when it should have been on Kate's dad, who is the head of the military research facility responsible for Skynet and the early Terminator machines. Racing to get to her father and avert Judgement Day (again), they arrive just too late and Skynet goes 'live'. It's at this point that we find out that Skynet is a 'virus' which has taken control of the world's computer systems – which kind of makes the idea of smashing the machine complexes in the future moot if Skynet could in effect infect any and every computer on the planet and come right back at you from anywhere, but hey, I didn't write the fucking thing! Overall this was a pretty good movie despite it basically being a rehash of T2 with a few tweaks. At least it moved the story on to the point where the missiles flew and Judgement Day happened. Again we had nice set-pieces, with the end scenes of robots moving through the research facility killing the scientists and engineers being particularly effective.
Since 1984 I had wanted them to make a film wholly based in the future after Judgement Day. In 2009 I finally got my wish with Terminator: Salvation starring Christian Bale as John Connor. I just loved the opening where the human forces, protected by A-10 anti-tank attack planes, landed in helicopters to blow up a Skynet facility. I really liked it when the skid of one helicopter landed on a damaged Terminator and Conner steps out and shoots it repeatedly in the head. Awesome! After that it got a little patchier (inevitably considering how much I had been looking forward to this movie for 25 years). By far the best thing in the film, at least for me, was the role played by Sam Worthington. We see Marcus Wright on death row being readied for execution and then, years later, emerging from the very same place that John Conner had been trying to destroy. How did he get there and why doesn’t he know about Skynet, Judgement Day and the war with the machines? Of course things are explained during the course of the movie (and quite well considering). We are also introduced to the ‘love interest’ in the form of A-10 pilot Blair Williams (played by the very eye-catching Moon Bloodgood) who takes a shine to Marcus and his ‘strong heart’. Although I still rank this as one of my favourite movies I have to say that overall I was a little disappointed with it. Conceptually it was OK. It had, as we have come to expect, very good set-piece action sequences and robotic inventiveness. I wasn’t overly impressed with Bale as Conner who, in Christian Bale fashion, looked moody and shouted a lot. I was very impressed with Sam Worthington who stole every scene he was in. I would have liked more stand-up fighting between humans and machines and I would, eventually, like to see John Connor sending Kyle Reese back to 1984 to complete the circle (or cycle) but, as Salvation didn’t do so well at the box office we’ll never see it. Oh, and I really didn’t like the ending.
AN ARCHAEOLOGICAL AND HISTORICAL TIME LINE OF USEPPA ISLAND
Reprinted with permission from The Archaeology of Useppa Island, edited by William H. Marquardt, 1999
ca. 8000 B.C. Landform that would later become Useppa Island is visited by Paleo-Indian people.
ca. 4500 B.C. Rising sea level makes Useppa an island; oyster-shell middens begin to be deposited on the ancient dune sands by seasonal inhabitants; estuarine environment approaches conditions similar to those of today.
ca. 4000-3000 B.C. Barrier islands form to create Pine Island Sound.
4500-3000 B.C. Calusa Ridge is occupied, mostly during the spring and summer. As estuarine conditions become more pronounced, catfish, pinfish, pigfish, rays, sharks, other fish, oysters, whelks, conchs, and clams are eaten; at first, pine is used for firewood, then later mangroves, seagrape, and buttonwood are selected, as the estuary becomes more firmly developed; seeds and fruits of seasonally available plants are consumed; columellae of lightning whelks are worked into hammers and cutting-edged tools; shouldered celts made of lightning whelks show close connection to Horr’s Island; other shell artifacts include quahog clam shell anvils, net mesh gauges, and notched bivalves; bone-engraving artistry similar to that of other contemporary people elsewhere in the Florida peninsula shows evidence of wide-ranging communication and exchange of ideas; bone points are also made; chert from near Tampa is used to make bifacial stone tools.
3000-2000 B.C. Calusa Ridge and Collier Ridge are occupied, mostly in the spring and summer months. Catfish, pinfish, pigfish, rays, sharks, other fish, oysters, whelks, conchs, and clams are eaten; firewood used is a mixture of pine, mangrove, and other woods; seeds and fruits of seasonally available plants are consumed; columellae of lightning whelks are worked into hammers and cutting-edged tools; shouldered celts are made; quahog clam-shell anvils are used; chert from near Tampa is used to make tools; the dead are buried in flexed position in middens in both Collier and Calusa Ridges.
2000-1200 B.C. Steatite stone vessels and fiber-tempered pottery of the Orange series are used on Useppa Island; Sand-tempered pottery is used by 1200 B.C.; Calusa Ridge is abandoned, but Collier Ridge continues to be occupied, mostly in the spring and summer months (March-August). Diet continues to be catfish, pinfish, pigfish, sharks, rays, other fishes, and shellfish, supplemented by wild plant foods.
1200-500 B.C. Terminal Archaic occupation is limited to Collier Ridge and the south-central area (east of Calusa Ridge and west of the southeastern midden ridge), mostly in the spring and summer months (March-September). Diet continues as before, but with less emphasis on sharks and rays. Pottery is sand-tempered plain ware.
500 B.C.-A.D. 500 Extensive Caloosahatchee I-period occupation is located in the southeastern ridge area (Milanich and Chapman’s Tests 3 and 5) and the south-central area (the area east of Calusa Ridge and west of the southeastern ridge) in the summer and fall months (June-October). Diet continues to be catfish, pinfish, pigfish, sharks and rays, other fishes, and shellfish, supplemented by wild plant foods; evidence of occupation includes substantial middens ca. A.D. 400-500 in areas of Test Pit I-3, Operation D, (area of Lot II-11), and Milanich and Chapman’s Test 6. Pottery is mostly thick, sand-tempered plain ware.
A.D. 500-800 During the Caloosahatchee IIA period, Collier Ridge and Calusa Ridge are used for burial ca. A.D. 600-800; Belle Glade pottery is used by ca. A.D. 600; broken pottery is deposited with burials in Collier Ridge; occupation of south-central area diminishes, but southeastern ridge accumulates rapidly after A.D. 700, with evidence of more diverse and higher salinity shellfish than previously deposited. The food assemblage includes wild plants, fish, sea urchins, penshells, surf clams, fighting conchs, oysters, scallops, and various other whelks and conchs.
A.D. 800-1200 During the Caloosahatchee IIB period, the southeastern ridge continues to grow rapidly, with evidence of high salinity shellfish. Diverse food assemblage includes wild plants, fish, sea urchins, penshells, surf clams, fighting conchs, oysters, scallops, and various other whelks and conchs.
A.D. 1200-1700 No known habitation of Useppa Island; possibly used sporadically as a fishing camp.
1704-1750 Effective end of domination of the area by Calusa Indians; most native south Florida Indians succumb to slavery, warfare, and disease; Yamassee and Uchise (Creek) people enter Florida from the north, bearing firearms; Yamassee are bent on enslaving south Florida people for service in the Carolina colony; Uchise claim some former Calusa territory.
ca. 1780's Muspa Indians are reported to be living on Captiva, Sanibel, and other nearby Islands. The Muspa may be descended from people who formerly occupied the Ten Thousand Islands area, possibly mixed with remnants of Calusa and other native groups.
ca. 1784 Cuban Jose Caldez begins to use Useppa as a seasonal location for mullet fishing, employing both Cuban and Native American laborers (probably a mixture of native southwest Florida people--Muspa/Calusa (?)–and refugees from northern Florida missions and in-migrating Creek people). The name “Seminoles,” derived from the Spanish word “cimarrones” for wild or untamed, begins to be applied loosely to all Indian people in the Florida peninsula.
1831 Useppa is listed as “Caldez Island” in William Whitehead’s inventory of fishing rancho operations.
1832 George C. Willis is assigned to “Josefa” Island as a customs official; he builds a house on the north end of the island; Jose Caldez is still living on the island at age 90.
1833 John Lee Williams refers to the island as "Toampe," reporting that Caldez has a village of almost 20 palmetto houses on the southwest point of the island. About 60 people, Europeans and Indians, live on the island.
1833 Henry Crews replaces Willis as customs official. Caldez sells island to Joseph Ximenez for $372.
1835 Second Seminole War begins over Indian Removal issue; so called “Spanish Indians” who work in the fishing industry on Useppa and elsewhere – even those married to Cubans – are in danger of capture and removal. Caldez sails from Useppa Island to Havana for probably the last time; the name of his schooner is registered as the “Joseffa”.
1836 Henry Crews is killed, ostensibly by Indians; fishing ranchos on Useppa and other places are burned by American soldiers, who fear they are being used by Indian sympathizers; Crews’s replacement Alexander Patterson reports that there is “no living person in Charlotte Harbor.”
1850 A supply depot on Calusa Ridge called Fort Casey is established on January 3, 1850, garrisoned by 108 men; it is abandoned on November 10th of the same year.
1848-1855 U.S. Coast Survey of Charlotte Harbor produces “Sketch F” map showing “Ft. Casey” on Useppa Island (Bache 1855).
1859-1863 Topographic and hydrological survey of area results in navigation chart (Bache 1863); island’s name is printed on a map as “Useppa” for the first time.
1863 Union soldiers camp on Useppa Island during the War between the States; Charlotte Harbor is blockaded to try to prevent beef shipments to the Confederacy; the surrounding area is inhabited sparsely by hunters, fishers, and farmers. Union sympathizers find refuge on Useppa Island under the protection of the Union army. Some are active as Florida Rangers.
1870 Census reports two persons living on “Giuseppe Island.”
1875 Physician and writer Charles Kenworthy refers to “Useppi” as one of three places in the immediate area at which to obtain fresh water.
1882 M. H. Simons of the Smithsonian Institution visits the island, referring to it as “Useppa Key.” He notes that Useppa was used by “Spanish fishermen” as a source of water, but says that there is little habitation in Charlotte Harbor.
1885 Andrew Douglas reports that Useppa Island is “desolate and uninhabited,” and that no more than ten adults live in the entire Charlotte Harbor area.
1895(?) Useppa Island is purchased by A.M. McGregor.
1895 Eleanor Pearse and her family visit Useppa in February; she reports one Cuban family in residence.
1896-1898 John Roach buys Useppa Island, builds home and hotel.
1899 Sixteen-foot windmill and 35,000-gallon water tank are built on Useppa for irrigating groves and flower gardens.
1900 Archaeologist Clarence B. Moore visits “Joseffa” Island but does not excavate there.
about 1907 Name of hotel is changed from “Useppa Inn” to "Tarpon Inn.”
1911 Useppa Island is purchased by Barron Collier.
1912 Izaak Walton Club is founded on Useppa Island.
1914 Shells are brought from Captiva Island to build a road connecting the hotel, barn, and bungalows.
1915 Work begins on the golf course, June 15, 1915. More than 100 cords of oak wood are obtained from the clearing operations. Another shell road is built from the tennis courts to the south end of the island. Laundry and refrigeration plant are constructed.
1916 The nine-hole golf course opens for play. Fifteen to twenty tarpon fishing guides are employed by Useppa.
1917-1918 The hotel is enlarged and remodeled by Collier; a third floor is added, a colonial-style porch and entrance are built.
1926 Five of the stilt houses built for guides are blown away by a hurricane, September, 1926.
1927 or 1928 The hotel’s name is changed from “Tarpon Inn” back to “Useppa Inn.”
1918-1939 Useppa is a popular seasonal destination for the wealthy; Useppa Island becomes Barron Collier’s official residence as he builds a broad-based development, transportation, resort, and communication business. By the late 1930's, Collier owns more than 1,000,000 acres of land in Florida.
1939 Barron Collier dies March 13, 1939 at the age of 65.
1941-1945 Useppa Island is closed for business during World War II.
1944-1947 The Useppa Inn and other buildings are damaged by hurricanes of October 20, 1944 and October 7-9, 1946; the hotel is demolished in the late 1940's.
1947-1960 The Collier family operates Useppa Island as a seasonal resort.
1947 Archaeologists John Griffin and Hale Smith visit Useppa to examine middens and burials disturbed by tennis court construction (Griffin and Smith 1947; Griffin 1949).
1951 Useppa is recorded in the Florida archaeological site file by J. M. Goggin.
1960 The Central Intelligence Agency uses Useppa for secret training of officers for planned Cuban invasion.
1962 William A. Snow purchases Useppa Island; refurbishes buildings, installs pool, septic system, and air strip.
1966 Useppa Island sustains damages from Hurricane Alma, October 1966. The island is put up for sale by Snow.
1968 Jimmy B. Turner purchases Useppa Island; builds new docks; operates the island as year-round resort for the first time; no children under 14 are permitted on the island.
1970 Useppa Island closes.
1973 Mariner Properties Development Corporation purchases Useppa from Turner, but does not develop it.
1976 Garfield Beckstead (Useppa Inn and Dock Company) purchases Useppa from Mariner.
1979 Jerald Milanich and Jefferson Chapman undertake archaeological backhoe tests.
1980 Milanich and Chapman do test excavations.
1981 Cable connects Useppa Island to electrical power from the mainland.
1984 Milanich et al.’s (1984) report is published.
1985 William Marquardt and Michael Hansinger perform salvage excavations at Collier Ridge (Operation A).
1989 Marquardt and Corbett Torrence excavate on Calusa Ridge (Operations B and C), Lot II-11 (Operation D), and the southeastern shell ridge (Operation E), “Year of the Indian” project.
1992 Results of 1985 archaeological excavations are published (Marquardt 1992b).
1993 Marquardt and Maria Palov excavate in search of intact historic-period middens (Operations F-I).
1994 Useppa Museum opens, April 2, 1994. Its “Calusa Room” exhibits findings from Florida Museum of Natural History investigations of 1985 and 1989.
1994 Marquardt does salvage excavation of burial, Lot II-17.
1995 Karen Walker submits nomination of pre-columbian components to National Register of Historic Places. Jenna Wallace studies burial from Lot II-17 (Marquardt and Wallace 1995).
1995 Marquardt interviews Beckstead, researches golf-course topography.
1996 Useppa Island's pre-columbian components are listed in the National Register of Historic Places. Walker conducts test excavations in the twentieth-century midden (Operation J). Useppa Island observes its "Centennial," marking 100 years since John Roach began to entertain guests in his island home. Renovations to Collier Inn are begun.
1996 Renovations to Collier Inn are completed; new roof line more closely approximates original appearance; rooms are again offered for rent, making it a true inn once again. Development begins on final phase of lots on the island – those on the extreme southern end, site of the former airstrip. Walker undertakes new excavations in the southeastern midden ridge area to investigate climatic fluctuations during the Caloosahatchee IIB period. All chapters of this monograph are completed and prepared for publication.
1999 The Archaeology of Useppa Island is published.
2002 The Useppa Museum is renamed The Barbara Sumwalt Museum.
2004 Useppa gets a direct hit from Hurricane Charley, a category 4 hurricane, on Friday, August 13th and suffers enormous damage to the structures and foliage on the island.
2004 The island recovers from the hurricane, the Collier Inn and homes are rebuilt and Useppa undergoes extensive re-landscaping.
2005 The Collier Inn has a grand reopening on August 13, one year to the day after being damaged by Hurricane Charley.
List of galaxies
The following is a list of notable galaxies.
There are about 51 galaxies in the Local Group (see the list of nearest galaxies for a complete list), on the order of 100,000 in our Local Supercluster, and an estimated one to two trillion in all of the observable universe.
The discovery of the nature of galaxies as distinct from other nebulae (interstellar clouds) was made in the 1920s. The first attempts at systematic catalogues of galaxies were made in the 1960s, with the Catalogue of Galaxies and Clusters of Galaxies listing 29,418 galaxies and galaxy clusters, and with the Morphological Catalogue of Galaxies, a putatively complete list of galaxies with photographic magnitude above 15, listing 30,642. In the 1980s, the Lyons Groups of Galaxies listed 485 galaxy groups with 3,933 member galaxies. Galaxy Zoo is a project aiming at a more comprehensive list: launched in July 2007, it has classified over one million galaxy images from The Sloan Digital Sky Survey, The Hubble Space Telescope and the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey.
There is no universal naming convention for galaxies, as they are mostly catalogued before it is established whether the object is or isn't a galaxy. Mostly they are identified by their celestial coordinates together with the name of the observing project (HUDF, SDSS, 3C, CFHQS, NGC/IC, etc.)
This is a list of galaxies that are well known by something other than an entry in a catalog or list, or a set of coordinates, or a systematic designation.
| Galaxy | Constellation | Origin of name | Notes |
| --- | --- | --- | --- |
| Andromeda | Andromeda | Shortened from "Andromeda Galaxy"; named for the area of the sky in which it appears, the constellation of Andromeda. | The closest big galaxy to the Milky Way; expected to collide with the Milky Way around 4 billion years from now, the two eventually merging into a single new galaxy called Milkomeda. |
| Black Eye Galaxy | Coma Berenices | A spectacular dark band of absorbing dust in front of the galaxy's bright nucleus gives rise to its nicknames the "Black Eye" or "Evil Eye" galaxy. | |
| Bode's Galaxy | Ursa Major | Named for Johann Elert Bode, who discovered this galaxy in 1774. | |
| Cartwheel Galaxy | Sculptor | Its visual appearance is similar to that of a spoked cartwheel. | |
| Cigar Galaxy | Ursa Major | Appears similar in shape to a cigar. | |
| Comet Galaxy | Sculptor | Named after its unusual, comet-like appearance. | The comet effect is caused by tidal stripping by its galaxy cluster, Abell 2667. |
| Cosmos Redshift 7 | Sextans | Named for its redshift (z) measurement of nearly 7 (actually z = 6.604). | Reported to be the brightest of distant galaxies (z > 6) and to contain some of the earliest first-generation (Population III) stars, which produced the chemical elements needed for the later formation of planets and life as we know it. |
| Hoag's Object | Serpens Caput | Named after Art Hoag, who discovered this ring galaxy. | Of the subtype Hoag-type galaxy; may in fact be a polar-ring galaxy with the ring in the plane of rotation of the central object. |
| Large Magellanic Cloud | Dorado/Mensa | Named after Ferdinand Magellan. | The fourth largest galaxy in the Local Group; forms a pair with the SMC and, from recent research, may not be part of the Milky Way system of satellites at all. |
| Small Magellanic Cloud | Tucana | Named after Ferdinand Magellan. | Forms a pair with the LMC and, from recent research, may not be part of the Milky Way system of satellites at all. |
| Mayall's Object | Ursa Major | Named after Nicholas Mayall of the Lick Observatory, who discovered it. | Also called VV 32 and Arp 148; a very peculiar-looking object, likely not one galaxy but two galaxies undergoing a collision. It appears in images as a spindle shape and a ring shape. |
| Milky Way | Sagittarius (centre) | The appearance from Earth of the galaxy: a band of light. | The galaxy containing the Sun and its Solar System, and therefore Earth. |
| Pinwheel Galaxy | Ursa Major | Similar in appearance to a pinwheel (toy). | |
| Sombrero Galaxy | Virgo | Similar in appearance to a sombrero. | |
| Sunflower Galaxy | Canes Venatici | Similar in appearance to a sunflower. | |
| Tadpole Galaxy | Draco | From the resemblance of the galaxy to a tadpole. | This shape resulted from a tidal interaction that drew out a long tidal tail. |
| Whirlpool Galaxy | Canes Venatici | From the whirlpool appearance this gravitationally disturbed galaxy exhibits. | |
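As an aside (my own illustration, not part of the list): the redshift value that names Cosmos Redshift 7 has a direct physical meaning – emitted wavelengths arrive at the observer stretched by a factor of (1 + z), which is why such an early galaxy is observed in the near-infrared.

```python
# Illustrative only: cosmological redshift stretches wavelengths by (1 + z).
LYMAN_ALPHA_NM = 121.567  # rest-frame hydrogen Lyman-alpha line

def observed_wavelength(rest_nm, z):
    """Wavelength at which an emitted line is observed after redshift z."""
    return rest_nm * (1.0 + z)

# Cosmos Redshift 7's ultraviolet Lyman-alpha emission, seen from Earth:
print(f"{observed_wavelength(LYMAN_ALPHA_NM, 6.604):.0f} nm")  # ~924 nm
```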
This is a list of galaxies that are visible to the naked eye, for, at the very least, keen-eyed observers in a very dark-sky environment that is high in altitude, during clear and stable weather.
| Galaxy | Apparent magnitude | Distance | Constellation | Notes |
| --- | --- | --- | --- | --- |
| Milky Way | -6.5 (excluding the Sun[nb 1]) | 0 | Sagittarius (centre) | The galaxy containing the Sun and its Solar System, and therefore Earth. Most things visible to the naked eye in the sky are part of it, including the band of the Milky Way composing the Zone of Avoidance. |
| Large Magellanic Cloud | 0.9 | 160 kly (50 kpc) | Dorado/Mensa | Visible only from the southern hemisphere. Also the brightest patch of nebulosity in the sky. |
| Small Magellanic Cloud (NGC 292) | 2.7 | 200 kly (60 kpc) | Tucana | Visible only from the southern hemisphere. |
| Andromeda Galaxy (M31, NGC 224) | 3.4 | 2.5 Mly (780 kpc) | Andromeda | Once called the Great Andromeda Nebula; situated in the constellation Andromeda. |
| Triangulum Galaxy (M33, NGC 598) | 5.7 | 2.9 Mly (900 kpc) | Triangulum | Being a diffuse object, its visibility is strongly affected by even small amounts of light pollution, ranging from easily visible in direct vision in truly dark skies to a difficult averted-vision object in rural/suburban skies. |
| Centaurus A (NGC 5128) | 6.84 | 13.7 ± 0.9 Mly (4.2 ± 0.3 Mpc) | Centaurus | Has been spotted with the naked eye by Stephen James O'Meara. |
| Bode's Galaxy (M81, NGC 3031) | 6.94 | 12 Mly (3.6 Mpc) | Ursa Major | Highly experienced amateur astronomers may be able to see it under exceptional observing conditions. |
| Messier 83 (NGC 5236) | 8.2 | 14.7 Mly (4.5 Mpc) | Hydra | Has reportedly been seen with the naked eye. |
- Sagittarius Dwarf Spheroidal Galaxy is not listed, because it is not discernible as being a separate galaxy in the sky.
|First spiral galaxy||Messier 51||Canes Venatici||1845||William Parsons, 3rd Earl of Rosse, discovered the first spiral nebula by observing M51 (the spiral shape was recognized without recognizing the object as lying outside the Milky Way).|
|Notion of galaxy||Milky Way Galaxy & Messier 31||Sagittarius (centre) & Andromeda||1923||Recognition of the Milky Way and the Andromeda nebula as two separate galaxies by Edwin Hubble.|
|First Seyfert galaxy||NGC 1068 (M77)||Cetus||1943 (1908)||The characteristics of Seyfert galaxies were first observed in M77 in 1908; however, Seyferts were not defined as a class until 1943.|
|First radio galaxy||Cygnus A||Cygnus||1951/2||Among several objects then called radio stars, Cygnus A was the first to be identified with a distant galaxy, and thus the first of many radio stars to be reclassified as radio galaxies.|
|First quasar||3C 273 & 3C 48||Virgo & Triangulum||1963||3C 273 was the first quasar with its redshift determined, and is by some considered the first quasar; 3C 48 was the first "radio star" with an uninterpretable spectrum, and is by others considered the first quasar.|
|First superluminal galactic jet||3C279||Virgo||1971||The jet is emitted by a quasar|
|First low-surface-brightness galaxy||Malin 1||Coma Berenices||1986||Malin 1 was the first verified LSB galaxy. LSB galaxies had been first theorized in 1976.|
|First superluminal jet from a Seyfert||III Zw 2||Pisces||2000|||
This is a list of galaxies that became prototypes for a class of galaxies.
|BL Lac object||BL Lacertae (BL Lac)||Lacerta||This AGN was originally catalogued as a variable star, and "stars" of its type are considered BL Lac objects.|
|Hoag-type Galaxy||Hoag's Object||Serpens Caput||This is the prototype Hoag-type Ring Galaxy|
|Giant LSB galaxy||Malin 1||Coma Berenices||1986|||
|FR II radio galaxy (double-lobed radio galaxy)||Cygnus A||Cygnus|
|Starburst galaxy||Cigar Galaxy||Ursa Major|
Closest and most distant known galaxies by type
|Closest galaxy||Canis Major Dwarf||Canis Major||0.025 Mly||Discovered in 2003, a satellite of the Milky Way, slowly being cannibalised by it.|
|Most distant galaxy||GN-z11||Ursa Major||z=11.09||With an estimated comoving distance of about 32 billion light-years, it was announced as the most distant galaxy known.|
|Closest quasar||3C 273||Virgo||z=0.158||First identified quasar, this is the most commonly accepted nearest quasar.|
|Most distant quasar||ULAS J1120+0641||Leo||z=7.085||Discovered on June 29, 2011 via the UKIRT Infrared Deep Sky Survey; the first quasar discovered beyond redshift 7.|
|Closest radio galaxy||Centaurus A (NGC 5128, PKS 1322-427)||Centaurus||13.7 Mly|||
|Most distant radio galaxy||TN J0924-2201||Hydra||z=5.2|
|Closest Seyfert galaxy||Circinus Galaxy||Circinus||13 Mly||This is also the closest Seyfert 2 galaxy. The closest Seyfert 1 galaxy is NGC 4151.|
|Most distant Seyfert galaxy||z=|
|Closest blazar||Markarian 421 (Mrk 421, Mkn 421, PKS 1101+384, LEDA 33452)||Ursa Major||z=0.030||This is a BL Lac object.|
|Most distant known blazar||Q0906+6930||Ursa Major||z=5.47||This is a flat spectrum radio-loud quasar type blazar.|
|Closest BL Lac object||Markarian 421 (Mkn 421, Mrk 421, PKS 1101+384, LEDA 33452)||Ursa Major||z=0.030|||
|Most distant BL Lac object||z=|
|Most distant LINER||z=|
|Most distant LIRG||z=|
|Closest ULIRG||IC 1127 (Arp 220/APG 220)||Serpens Caput||z=0.018|||
|Most distant ULIRG||z=|
|Closest starburst galaxy||Cigar Galaxy (M82, Arp 337/APG 337, 3C 231, Ursa Major A)||Ursa Major||3.2 Mpc|||
|Most distant starburst galaxy||SPT 0243-49||z=5.698|||
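The ~32-billion-light-year figure quoted above for GN-z11 (z=11.09) is a comoving distance, much larger than the light-travel distance because space has expanded since the light was emitted. As a rough illustration, it can be reproduced by numerically integrating the flat-ΛCDM comoving-distance integral; the function name and the Planck-like parameters below (H0=67.7, Ωm=0.31, ΩΛ=0.69) are assumptions chosen for this sketch, not values stated in the article.

```python
import math

# Assumed flat-LCDM parameters (Planck-like; illustrative only)
H0 = 67.7              # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.31, 0.69
C = 299792.458         # speed of light, km/s
MPC_TO_LY = 3.2616e6   # light-years per megaparsec

def comoving_distance_ly(z, steps=100_000):
    """Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z'),
    with E(z) = sqrt(Om*(1+z)^3 + OL), via the trapezoidal rule."""
    E = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    h = z / steps
    total = 0.5 * (1 / E(0) + 1 / E(z))
    for i in range(1, steps):
        total += 1 / E(i * h)
    return (C / H0) * total * h * MPC_TO_LY

print(comoving_distance_ly(11.09) / 1e9)  # roughly 32 (billion light-years)
```

With these parameters the integral gives a comoving distance of roughly 32 billion light-years, consistent with the figure quoted for GN-z11.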
|1||Milky Way Galaxy||0||This is the galaxy containing the Sun and its Solar System, and therefore Earth.|
|2||Canis Major Dwarf||0.025 Mly|
|3||Virgo Stellar Stream||0.030 Mly|
|4||Sagittarius Dwarf Elliptical Galaxy||0.081 Mly|
|5||Large Magellanic Cloud||0.163 Mly||Largest satellite galaxy of the Milky Way|
|6||Small Magellanic Cloud||0.197 Mly|
|Nearest galaxy||Milky Way||always||0||This is the galaxy containing the Sun and its Solar System, and therefore Earth.|
|Nearest galaxy to our own||Canis Major Dwarf||2003||0.025 Mly||The closest galaxy other than the Milky Way itself|
|Nearest dwarf galaxy||Canis Major Dwarf||2003||0.025 Mly|
|Nearest major galaxy to our own||Andromeda Galaxy||always||2.54 Mly||First identified as a separate galaxy in 1923|
|Nearest giant galaxy||Centaurus A||12 Mly|
|Canis Major Dwarf||2003||0.025 Mly|
|Sagittarius Dwarf Elliptical Galaxy||1994 − 2003||0.081 Mly|
|Large Magellanic Cloud||antiquity − 1994||0.163 Mly||This is the upper bound, as it is the nearest galaxy observable with the naked eye.|
|Small Magellanic Cloud||1913–1914||0.197 Mly||This was the first intergalactic distance measured. In 1913, Ejnar Hertzsprung measured the distance to the SMC using Cepheid variables; in 1914, he did the same for the LMC.|
|Andromeda Galaxy||1923||2.5 Mly||This was the first galaxy determined to not be part of the Milky Way.|
Most distant galaxies
|Candidate most remote galaxy (photometric redshift)||UDFj-39546284||2011||z=11.9(?)||This was proposed as the remotest object known at the time of discovery. In late 2012, its estimated redshift was revised from z=10.3 to 11.9; however, recent re-analyses suggest it is likely to lie at a much lower redshift.|
|Most remote galaxy confirmed (spectroscopic redshift)||GN-z11||2016||z=11.09||As of March 2016, GN-z11 was the most distant known galaxy.|
|Most remote quasar||ULAS J1120+0641||2011||z=7.085||This is the undisputed most remote quasar of any type, and the first with a redshift beyond 7.|
|Most distant non-quasar SMG||Baby Boom Galaxy (EQ J100054+023435)||2008||z=4.547|||
|Most distant grand-design spiral galaxy||Q2343-BX442||2012||z=2.18|||
|GN-z11||2016 −||z=11.09||Announced March 2016.|
|EGSY8p7||2015 − 2016||z=8.68||This galaxy's redshift was determined from its Lyman-alpha emission; the measurement was announced in August 2015.|
|EGS-zs8-1||2015 − 2015||z=7.730||This was the most distant galaxy as of May 2015.|
|Z8 GND 5296||2013 − 2015||z=7.51|||
|SXDF-NB1006-2||2012 − 2013||z=7.215|||
|GN-108036||2012 − 2012||z=7.213|||
|BDF-3299||2012 − 2013||z=7.109|||
|IOK-1||2006 − 2010||z=6.96||This was the remotest object known at the time of discovery. In 2009, gamma-ray burst GRB 090423 was discovered at z=8.2, taking the title of most distant object. The next galaxy to hold the title, UDFy-38135539, likewise succeeded GRB 090423.|
|SDF J132522.3+273520||2005 − 2006||z=6.597||This was the remotest object known at time of discovery.|
|SDF J132418.3+271455||2003 − 2005||z=6.578||This was the remotest object known at time of discovery.|
|HCM-6A||2002 − 2003||z=6.56||This was the remotest object known at time of discovery. The galaxy is lensed by galaxy cluster Abell 370. This was the first galaxy, as opposed to quasar, found to exceed redshift 6. It exceeded the redshift of quasar SDSSp J103027.10+052455.0 of z=6.28|
|SSA22−HCM1||1999 − 2002||z=5.74||This was the remotest object known at time of discovery. In 2000, the quasar SDSSp J104433.04-012502.2 was discovered at z=5.82, becoming the most remote object in the universe known. This was followed by another quasar, SDSSp J103027.10+052455.0 in 2001, the first object exceeding redshift 6, at z=6.28|
|HDF 4-473.0||1998 − 1999||z=5.60||This was the remotest object known at the time of discovery.|
|RD1 (0140+326 RD1)||1998||z=5.34||This was the remotest object known at time of discovery. This was the first object found beyond redshift 5.|
|CL 1358+62 G1 & CL 1358+62 G2||1997 − 1998||z=4.92||These were the remotest objects known at the time of discovery. The pair of galaxies were found lensed by galaxy cluster CL1358+62 (z=0.33). This was the first time since 1964 that something other than a quasar held the record for being the most distant object in the universe. It exceeded the mark set by quasar PC 1247-3406 at z=4.897|
|8C 1435+63||1994 − 1997||z=4.25||This is a radio galaxy. At the time of its discovery, quasar PC 1247-3406, discovered in 1991 at z=4.897, was the most remote object known. This was the last radio galaxy to hold the title of most distant galaxy, and the first galaxy, as opposed to quasar, found beyond redshift 4.|
|4C 41.17||1990 − 1994||z=3.792||This is a radio galaxy. At the time of its discovery, quasar PC 1158+4635, discovered in 1989, was the most remote object known, at z=4.73. In 1991, quasar PC 1247-3406 became the most remote object known, at z=4.897.|
|1 Jy 0902+343 (GB6 B0902+3419, B2 0902+34)||1988 − 1990||z=3.395||This is a radio galaxy. At the time of discovery, quasar Q0051-279 at z=4.43, discovered in 1987, was the most remote object known. In 1989, quasar PC 1158+4635 was discovered at z=4.73, making it the most remote object known. This was the first galaxy discovered above redshift 3 (and hence also the first above redshift 2).|
|3C 256||1984 − 1988||z=1.819||This is a radio galaxy. At the time, the most remote object was quasar PKS 2000-330, at z=3.78, found in 1982.|
|3C 241||1984||z=1.617||This is a radio galaxy. At the time, the most remote object was quasar PKS 2000-330, at z=3.78, found in 1982.|
|3C 324||1983 − 1984||z=1.206||This is a radio galaxy. At the time, the most remote object was quasar PKS 2000-330, at z=3.78, found in 1982.|
|3C 65||1982 − 1983||z=1.176||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974. In 1982, quasar PKS 2000-330 at z=3.78 became the most remote object.|
|3C 368||1982||z=1.132||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974.|
|3C 252||1981 − 1982||z=1.105||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974.|
|3C 6.1||1979 -||z=0.840||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974.|
|3C 318||1976 -||z=0.752||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974.|
|3C 411||1975 -||z=0.469||This is a radio galaxy. At the time, the most remote object was quasar OQ172, at z=3.53, found in 1974.|
|3C 295||1960 -||z=0.461||This is a radio galaxy. This was the remotest object known at time of discovery of its redshift. This was the last non-quasar to hold the title of most distant object known until 1997. In 1964, quasar 3C 147 became the most distant object in the universe known.|
|LEDA 25177 (MCG+01-23-008)||1951 − 1960||z=0.2||This galaxy lies in the Hydra Supercluster. It is located at B1950.0 08h 55m 4s +03° 21′ and is the BCG of the fainter Hydra Cluster Cl 0855+0321 (ACO 732).|
|LEDA 51975 (MCG+05-34-069)||1936 -||z=0.13||The brightest cluster galaxy of the Bootes cluster (ACO 1930), an elliptical galaxy at B1950.0 14h 30m 6s +31° 46′, apparent magnitude 17.8, was found by Milton L. Humason in 1936 to have a 40,000 km/s recessional redshift velocity.|
|LEDA 20221 (MCG+06-16-021)||1932 -||z=0.075||This is the BCG of the Gemini Cluster (ACO 568), located at B1950.0 07h 05m 0s +35° 04′.|
|BCG of WMH Christie's Leo Cluster||1931 − 1932||z=|
|BCG of Baade's Ursa Major Cluster||1930 − 1931||z=|
|NGC 4860||1929 − 1930||z=0.026||Using redshift measurements, NGC 7619 had the highest value at the time of measurement. At the time of the announcement, redshift was not yet accepted as a general guide to distance; later that year, Edwin Hubble described the redshift–distance relation, leading to a sea change in which redshift became accepted as an indicator of distance.|
|NGC 584 (Dreyer nebula 584)||1921 − 1929||z=0.006||At the time, nebulae had yet to be accepted as independent galaxies; in 1923, galaxies were generally recognized as external to the Milky Way.|
|M104 (NGC 4594)||1913 − 1921||z=0.004||This was the second galaxy whose redshift was determined, the first being Andromeda, which is approaching us and thus cannot have its redshift used to infer distance. Both were measured by Vesto Melvin Slipher. At this time, nebulae had yet to be accepted as independent galaxies. NGC 4594's velocity was originally measured as 1000 km/s, then refined to 1100, and to 1180 in 1916.|
|M81||antiquity - 20th century||11.8 Mly (blueshifted, z≈−0.0001)||This is the lower bound, as it is the remotest galaxy observable with the naked eye, at about 12 million light-years. Redshift cannot be used to infer its distance, because its peculiar motion toward us exceeds the cosmological expansion.|
|Messier 101||1930 -||Using the pre-1950s Cepheid measurements, M101 was one of the most distant so measured.|
|Triangulum Galaxy||1924–1930||In 1924, Edwin Hubble announced the distance to M33 Triangulum.|
|Andromeda Galaxy||1923–1924||In 1923, Edwin Hubble measured the distance to Andromeda, settling the question of whether there were other galaxies or whether everything lay within the Milky Way.|
|Small Magellanic Cloud||1913–1923||This was the first intergalactic distance measured. In 1913, Ejnar Hertzsprung measured the distance to the SMC using Cepheid variables.|
- MACS0647-JD, discovered in 2012, with z=10.7, does not appear on this list because it has not been confirmed with a spectroscopic redshift.
- UDFy-38135539, discovered in 2009, with z=8.6, does not appear on this list because its claimed redshift is disputed. Follow-up observations have failed to replicate the cited redshift measurement.
- A1689-zD1, discovered in 2008, with z=7.6, does not appear on this list because it has not been confirmed with a spectroscopic redshift.
- Abell 68 c1 and Abell 2219 c1, discovered in 2007, with z=9, do not appear on this list because they have not been confirmed.
- IOK4 and IOK5, discovered in 2007, with z=7, do not appear on this list because they have not been confirmed with a spectroscopic redshift.
- Abell 1835 IR1916, discovered in 2004, with z=10.0, does not appear on this list because its claimed redshift is disputed. Some follow-up observations have failed to find the object at all.
- STIS 123627+621755, discovered in 1999, with z=6.68, does not appear on this list because its redshift was based on an erroneous interpretation of an oxygen emission line as a hydrogen emission line.
- BR1202-0725 LAE, discovered in 1998 at z=5.64, does not appear on the list because it was not definitively pinned down. BR1202-0725 (QSO 1202-07) refers to a quasar near which the Lyman-alpha-emitting galaxy lies. The quasar itself lies at z=4.6947.
- BR2237-0607 LA1 and BR2237-0607 LA2 were found at z=4.55 while investigating the field around the quasar BR2237-0607 in 1996. Neither appears on the list because they were not definitively pinned down at the time. The quasar itself lies at z=4.558.
- Two absorption dropouts in the spectrum of quasar BR 1202-07 (QSO 1202-0725, BRI 1202-0725, BRI1202-07) were found, one in early 1996, another later in 1996. Neither of these appear on the list because they were not definitively pinned down at the time. The early one was at z=4.38, the later one at z=4.687, the quasar itself lies at z=4.695
- In 1986, a gravitationally lensed galaxy forming a blue arc was found lensed by galaxy cluster CL 2224-02 (C12224 in some references). However, its redshift was only determined in 1991, at z=2.237, by which time, it would no longer be the most distant galaxy known.
- An absorption drop was discovered in 1985 in the light spectrum of quasar PKS 1614+051 at z=3.21. This does not appear on the list because it was not definitively pinned down. At the time, it was claimed to be the first non-QSO galaxy found beyond redshift 3. The quasar itself is at z=3.197.
- From 1964 to 1997, the title of most distant object in the universe was held by a succession of quasars. That list is available at list of quasars.
- In 1958, cluster Cl 0024+1654 and Cl 1447+2619 were estimated to have redshifts of z=0.29 and z=0.35 respectively. However, no galaxy was spectroscopically determined.
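Several historical entries above quote recessional velocities rather than redshifts (for example, Humason's 40,000 km/s measurement for the Bootes cluster BCG listed at z=0.13). For small z the two are related by z ≈ v/c; a relativistic radial-Doppler form is shown alongside for comparison. This is a minimal sketch with illustrative function names, not a method from the article.

```python
import math

C = 299792.458  # speed of light, km/s

def z_from_velocity(v_kms):
    """Low-redshift approximation: z = v / c."""
    return v_kms / C

def velocity_relativistic(z):
    """Relativistic radial Doppler velocity (km/s) for a given redshift z."""
    return C * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

# Humason's 1936 measurement of 40,000 km/s corresponds to z of about 0.13
print(round(z_from_velocity(40_000.0), 3))  # 0.133
```

Note that at cosmological distances neither formula is a substitute for a proper cosmological model; the historical entries used redshift as an empirical distance proxy.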
Galaxies by brightness and power
|Intrinsically brightest galaxy||Baby Boom Galaxy||[verification needed]||Starburst galaxy located 12 billion light years away|
|Brightest galaxy to the naked eye||Large Magellanic Cloud||Apparent magnitude 0.6||This galaxy has high surface brightness combined with high apparent brightness.|
|Intrinsically faintest galaxy||Boötes Dwarf Galaxy (Boo dSph)||Absolute magnitude -6.75||This does not include dark galaxies.|
|Lowest surface brightness galaxy||Andromeda IX|
|Most luminous galaxy||WISE J224607.57-052635.0||As of May 21, 2015, WISE J224607.57-052635.0 is the most luminous galaxy discovered, releasing 10,000 times more energy than the Milky Way despite being smaller. Nearly 100 percent of the light escaping from this dusty galaxy is infrared radiation.|
|Brightest distant galaxy (z > 6)||Cosmos Redshift 7||Galaxy Cosmos Redshift 7 is reported to be the brightest of distant galaxies (z > 6) and to contain some of the earliest first stars (first generation; Population III) that produced the chemical elements needed for the later formation of planets and life as we know it.|
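The absolute magnitudes in this section relate to the apparent magnitudes quoted elsewhere through the distance modulus, m − M = 5 log10(d / 10 pc). As a worked example, combining the LMC's apparent magnitude of 0.6 from the table above with its ~50 kpc distance from the naked-eye list gives its absolute magnitude; the function name here is illustrative.

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: M = m - 5*log10(d / 10 pc)."""
    return apparent_mag - 5 * math.log10(distance_pc / 10.0)

# LMC: apparent magnitude 0.6 at roughly 50 kpc (50,000 pc)
print(round(absolute_magnitude(0.6, 50_000), 1))  # -17.9
```

The result, about −17.9, shows why the LMC dominates the naked-eye list: it is intrinsically about eleven magnitudes (some 25,000 times) brighter than the faintest entry, the Boötes Dwarf at −6.75.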
Galaxies by mass and density
|Least massive galaxy||Segue 2||~550,000 MSun||This is not considered a star cluster, as it is held together by the gravitational effects of dark matter rather than just the mutual attraction of the constituent stars, gas and black holes.|
|Most massive galaxy||ESO 146-IG 005||~30×10^12 MSun||Central galaxy in Abell 3827, 1.4 Gly distant.|
|Densest galaxy||M85-HCC1||This is an ultra-compact dwarf galaxy.|
|Least dense galaxy|
|Most massive spiral galaxy||ISOHDFS 27||1.04×10^12 MSun||The preceding most massive known spiral was UGC 12591.|
|Least massive galaxy with globular cluster(s)||Andromeda I|||
A field galaxy is a galaxy that does not belong to a larger cluster of galaxies and hence is gravitationally alone.
|The Magellanic Clouds are being tidally disrupted by the Milky Way Galaxy, resulting in the Magellanic Stream drawing a tidal tail away from the LMC and SMC, and the Magellanic Bridge drawing material from the clouds to our galaxy.|
|The smaller galaxy NGC 5195 is tidally interacting with the larger Whirlpool Galaxy, creating its grand design spiral galaxy architecture.|
|These three galaxies interact with each other and draw out tidal tails, which are dense enough to form star clusters. The bridge of gas between these galaxies is known as Arp's Loop.|
|NGC 6872 is a barred spiral galaxy with a grand design spiral nucleus, and distinct well-formed outer barred-spiral architecture, caused by tidal interaction with satellite galaxy IC 4970.|
|Tadpole Galaxy||The Tadpole Galaxy tidally interacted with another galaxy in a close encounter, and remains slightly disrupted, with a long tidal tail.|
|Arp 299 (NGC 3690 & IC 694)||These two galaxies have recently collided and are now both barred irregular galaxies.|
|Mayall's Object||This is a pair of galaxies, one which punched through the other, resulting in a ring galaxy.|
|Antennae Galaxies (Ringtail Galaxy, NGC 4038 & NGC 4039, Arp 244)||2 galaxies||Two spiral galaxies currently starting a collision, tidally interacting, and in the process of merger.|
|Butterfly Galaxies (Siamese Twins Galaxies, NGC 4567 & NGC 4568)||2 galaxies||Two spiral galaxies in the process of starting to merge.|
|Mice Galaxies (NGC 4676, NGC 4676A & NGC 4676B, IC 819 & IC 820, Arp 242)||2 galaxies||Two spiral galaxies currently tidally interacting and in the process of merger.|
|NGC 520||2 galaxies||Two spiral galaxies undergoing collision, in the process of merger.|
|NGC 2207 and IC 2163 (NGC 2207 & IC 2163)||2 galaxies||These are two spiral galaxies starting to collide, in the process of merger.|
|NGC 5090 and NGC 5091 (NGC 5090 & NGC 5091)||2 galaxies||These two galaxies are in the process of colliding and merging.|
|NGC 7318 (Arp 319, NGC 7318A & NGC 7318B)||2 galaxies||These are two galaxies starting to collide.|
|Four galaxies in CL0958+4702||4 galaxies||These four near-equals at the core of galaxy cluster CL 0958+4702 are in the process of merging.|
|Galaxy protocluster LBG-2377||z=3.03||This was announced as the most distant galaxy merger ever discovered. It is expected that this proto-cluster of galaxies will merge to form a brightest cluster galaxy, and become the core of a larger galaxy cluster.|
|Starfish Galaxy (NGC 6240, IC 4625)||This recently coalesced galaxy still has two prominent nuclei.|
|Disintegrating Galaxy||Consuming Galaxy||Notes|
|Canis Major Dwarf Galaxy||Milky Way Galaxy||The Monoceros Ring is thought to be the tidal tail of the disrupted Canis Major dwarf galaxy.|
|Virgo Stellar Stream||Milky Way Galaxy||This is thought to be a completely disrupted dwarf galaxy.|
|Sagittarius Dwarf Elliptical Galaxy||Milky Way Galaxy||M54 is thought to be the core of this dwarf galaxy.|
|Omega Centauri||Milky Way Galaxy||This is now categorized as a globular cluster of the Milky Way; however, it is considered the core of a dwarf galaxy that the Milky Way cannibalized.|
|Mayall II||Andromeda Galaxy||This is now categorized as a globular cluster of Andromeda; however, it is considered the core of a dwarf galaxy that Andromeda cannibalized.|
Galaxies with some other notable feature
|M87||Virgo||This is the central galaxy of the Virgo Cluster, the central cluster of the Local Supercluster|
|M102||Draco (Ursa Major)||[clarification needed]||This galaxy cannot be definitively identified, with the most likely candidate being NGC 5866, and a good chance of it being a misidentification of M101. Other candidates have also been suggested.|
|NGC 2770||Lynx||"Supernova Factory"||NGC 2770 is referred to as the "Supernova Factory" due to three recent supernovae occurring within it.|
|NGC 3314 (NGC 3314a and NGC 3314b)||Hydra||exact visual alignment||This is a pair of spiral galaxies, one superimposed on another, at two separate and distinct ranges, and unrelated to each other. It is a rare chance visual alignment.|
|ESO 137-001||Triangulum Australe||"tail" feature||Lying in the galaxy cluster Abell 3627, this galaxy is being stripped of its gas by the pressure of the intracluster medium (ICM), due to its high speed traversal through the cluster, and is leaving a high density tail with large amounts of star formation. The tail features the largest amount of star formation outside of a galaxy seen so far. The galaxy has the appearance of a comet, with the head being the galaxy, and a tail of gas and stars.|
|Comet Galaxy||Sculptor||interacting with a galaxy cluster||Lying in galaxy cluster Abell 2667, this spiral galaxy is being tidally stripped of stars and gas through its high speed traversal through the cluster, having the appearance of a comet.|
|4C 37.11||230 Mpc||Perseus||Least separation between binary central black holes, at 24 ly (7.3 pc)||OJ 287 has an inferred pair with a 12-year orbital period, and thus would be much closer than 4C 37.11's pair.|
|SDSS J150636.30+540220.9 ("SDSS J1506+54")||z=0.608||Boötes 15h 06m 36.30s +54° 02′ 20.9″||Most efficient star production||Most extreme example in the list of moderate-redshift galaxies with the highest-density starbursts yet observed, found in Wide-field Infrared Survey Explorer data (Diamond-Stanic et al. 2012).|
|Cosmos Redshift 7||z = 6.604 (12.9 billion light-years)||Sextans||Brightest distant galaxy (z > 6)||Galaxy Cosmos Redshift 7 is reported to be the brightest of distant galaxies (z > 6) and to contain some of the earliest first stars (first generation; Population III) that produced the chemical elements needed for the later formation of planets and life as we know it.|
Lists of galaxies
- Using the formula for addition of apparent magnitudes, combining the magnitude of all stars in the Milky Way except the Sun (−6.50) with that of the Sun (−26.74) differs from the apparent magnitude of the Sun alone by less than 10^−8.
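The footnote above can be checked directly: apparent magnitudes combine via their fluxes, m = −2.5 log10(Σ 10^(−0.4 m_i)), since magnitudes are logarithmic. A minimal sketch (the function name is illustrative):

```python
import math

def combined_magnitude(*mags):
    """Combine apparent magnitudes by summing fluxes:
    m = -2.5 * log10(sum of 10^(-0.4 * m_i))."""
    total_flux = sum(10 ** (-0.4 * m) for m in mags)
    return -2.5 * math.log10(total_flux)

# All Milky Way stars except the Sun (-6.50) combined with the Sun (-26.74):
delta = combined_magnitude(-6.50, -26.74) - (-26.74)
print(abs(delta) < 1e-8)  # True
```

The Sun outshines the rest of the Milky Way's stars by about 20 magnitudes (a flux ratio near 10^8), so adding them shifts the combined magnitude by only a few times 10^−9.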
- Hu, Esther M.; McMahon, Richard G.; Cowie, Lennox L. (1999). "An Extremely Luminous Galaxy at [CLC][ITAL]z[/ITAL][/CLC] = 5.74". The Astrophysical Journal. 522: L9–L12. arXiv: . Bibcode:1999ApJ...522L...9H. doi:10.1086/312205.
- Publications of the Astronomical Society of the Pacific, 111: 1475–1502, 1999 December; SEARCH TECHNIQUES FOR DISTANT GALAXIES; INTRODUCTION
- New York Times, Peering Back in Time, Astronomers Glimpse Galaxies Aborning, October 20, 1998
- Astronomy Picture of the Day, A Baby Galaxy, March 24, 1998
- Arjun Dey; Hyron Spinrad; Daniel Stern; Graham; Chaffee (1998). "A Galaxy at z=5.34". The Astrophysical Journal. 498: L93–L97. arXiv: [astro-ph]. Bibcode:1998ApJ...498L..93D. doi:10.1086/311331.
- A New Most Distant Object: z = 5.34
- Franx, Marijn; Illingworth, Garth D.; Kelson, Daniel D.; Van Dokkum, Pieter G.; Tran, Kim-Vy (1997). "A Pair of Lensed Galaxies at [CLC][ITAL]z[/ITAL][/CLC] = 4.92 in the Field of CL 1358+62". The Astrophysical Journal. 486 (2): L75–L78. arXiv: . Bibcode:1997ApJ...486L..75F. doi:10.1086/310844.
- Astronomy Picture of the Day, Behind CL1358+62: A New Farthest Object, July 31, 1997
- "Astrophysics and Space Science" 1999, 269/270, 165-181 ; GALAXIES AT HIGH REDSHIFT - 8. Z > 5 GALAXIES ; Garth Illingworth
- Wil van Breugel; Carlos De Breuck; Adam Stanford; Huub Röttgering; George Miley; Daniel Stern; Dante Minniti; Chris Carilli (1999). "Ultra-Steep Spectrum Radio Galaxies at Hy Redshifts". arXiv: [astro-ph].
- Hyron Spinrad; Arjun Dey; Graham (1995). "Keck Observations of the Most Distant Galaxy: 8C1435+63 at z=4.25". The Astrophysical Journal. 438: L51. arXiv: . Bibcode:1995ApJ...438L..51S. doi:10.1086/187713.
- New Scientist, Galaxy hunters close to the edge, 5 November 1994
- Miley, G. K.; Chambers, K. C.; van Breugel, W. J. M.; Macchetto, F. (1992). "Hubble Space Telescope imaging of distant galaxies - 4C 41.17 at Z = 3.8". Astrophysical Journal. 401: L69. Bibcode:1992ApJ...401L..69M. doi:10.1086/186673.
- Chambers, K. C.; Miley, G. K.; van Breugel, W. J. M. (1990). "4C 41.17 - A radio galaxy at a redshift of 3.8". Astrophysical Journal. 363: 21. Bibcode:1990ApJ...363...21C. doi:10.1086/169316.
- Science News, Farthest galaxy is cosmic question - 0902+34 April 23, 1988
- Science News, Two distant galaxies provide new puzzles - 4c 41.17, B2 09021+34, November 14, 1992
- Paola Mazzei; Gianfranco De Zotti (1995). "Dust in High Redshift Radio Galaxies and the Early Evolution of Spheroidal Galaxies". Monthly Notices of the Royal Astronomical Society. 279: 535–544. arXiv: [astro-ph]. Bibcode:1996MNRAS.279..535M. doi:10.1093/mnras/279.2.535.
- Le Fevre, O.; Hammer, F.; Nottale, L.; Mazure, A.; Christian, C. (1988). "Peculiar morphology of the high-redshift radio galaxies 3C 13 and 3C 256 in subarcsecond seeing". Astrophysical Journal. 324: L1. Bibcode:1988ApJ...324L...1L. doi:10.1086/185078.
- Lilly, S. J.; Longair, M. S. (1984). "Stellar populations in distant radio galaxies". Royal Astronomical Society. 211: 833–855. Bibcode:1984MNRAS.211..833L. doi:10.1093/mnras/211.4.833.
- Longair, M. S. (1984). "The Most Distant Galaxies". Journal of the British Astronomical Association. 94: 97. Bibcode:1984JBAA...94...97L.
- "3C324 - Most Distant Galaxy". Bibcode:1983S&T....65..321S. Retrieved 14 August 2013.
- Smith, H. E.; Junkkarinen, V. T.; Spinrad, H.; Grueff, G.; Vigotti, M. (1979). "Spectrophotometry of three high-redshift radio galaxies - 3C 6.1, 3C 265, and 3C 352". The Astrophysical Journal. 231: 307. Bibcode:1979ApJ...231..307S. doi:10.1086/157194.
- The Discovery of Radio Galaxies and Quasars
- McCarthy, P J (1993). "High Redshift Radio Galaxies". Annual Review of Astronomy and Astrophysics. 31: 639–688. Bibcode:1993ARA&A..31..639M. doi:10.1146/annurev.aa.31.090193.003231.
- Sandage, Allan (1961). "The Ability of the 200-INCH Telescope to Discriminate Between Selected World Models". Astrophysical Journal. 133: 355. Bibcode:1961ApJ...133..355S. doi:10.1086/147041.
- Hubble, E. P. (1953). "The law of red shifts (George Darwin Lecture)". Monthly Notices of the Royal Astronomical Society. 113: 658–666. Bibcode:1953MNRAS.113..658H. doi:10.1093/mnras/113.6.658.
- OBSERVATIONAL TESTS OF WORLD MODELS; 6.1. Local Tests for Linearity of the Redshift-Distance Relation ; Annu. Rev. Astron. Astrophys. 1988. 26: 561-630
- Humason, M. L.; Mayall, N. U.; Sandage, A. R. (1956). "Redshifts and magnitudes of extragalactic nebulae". Astron. J. 61: 97. Bibcode:1956AJ.....61...97H. doi:10.1086/107297.
- "1053 May 8 meeting of the Royal Astronomical Society". The Observatory. 73: 97. 1953. Bibcode:1953Obs....73...97.
- Merrill, Paul W. (1958). "From Atoms to Galaxies". Astronomical Society of the Pacific Leaflets. 7: 393. Bibcode:1958ASPL....7..393M.
- Bolton, J. G. (1969). "Extragalactic Radio Sources". Astronomical Journal. 74: 131. Bibcode:1969AJ.....74..131B. doi:10.1086/110786. A&AAid:AAA001.141.093
- Humason, M. L. (1936). "The Apparent Radial Velocities of 100 Extra-Galactic Nebulae". Astrophysical Journal. 83: 10. Bibcode:1936ApJ....83...10H. doi:10.1086/143696.
- THE FIRST 50 YEARS AT PALOMAR: 1949–1999 ; The Early Years of Stellar Evolution, Cosmology, and High-Energy Astrophysics; 5.2.1. The Mount Wilson Years ; Annu. Rev. Astron. Astrophys. 1999. 37: 445-486
- Chant, C. A. (1932). "Notes and Queries (Doings at Mount Wilson-Ritchey's Photographic Telescope-Infra-red Photographic Plates)". Journal of the Royal Astronomical Society of Canada. 26: 180. Bibcode:1932JRASC..26..180C.
- Humason, Milton L. (1931). "Apparent Velocity-Shifts in the Spectra of Faint Nebulae". Astrophysical Journal. 74: 35. Bibcode:1931ApJ....74...35H. doi:10.1086/143287.
- Hubble, Edwin; Humason, Milton L. (1931). "The Velocity-Distance Relation among Extra-Galactic Nebulae". Astrophysical Journal. 74: 43. Bibcode:1931ApJ....74...43H. doi:10.1086/143323.
- Humason, M. L. (1931). "The Large Apparent Velocities of Extra-Galactic Nebulae". Astronomical Society of the Pacific Leaflets. 1: 149. Bibcode:1931ASPL....1..149H.
- Humason, M. L. (1930). "The Rayton short-focus spectrographic objective". Astrophys. J. 71: 351. Bibcode:1930ApJ....71..351H. doi:10.1086/143255.
- Trimble, Virginia (1996). "H_0: The Incredible Shrinking Constant, 1925–1975". Publications of the Astronomical Society of the Pacific. 108: 1073. Bibcode:1996PASP..108.1073T. doi:10.1086/133837.
- "The Berkeley Meeting of the Astronomical Society of the Pacific, June 20–21, 1929". Publications of the Astronomical Society of the Pacific. 41: 244. 1929. Bibcode:1929PASP...41..244.. doi:10.1086/123945.
- From the Proceedings of the National Academy of Sciences; Volume 15 : March 15, 1929 : Number 3 ; THE LARGE RADIAL VELOCITY OF N. G. C. 7619 ; January 17, 1929
- THE JOURNAL OF THE ROYAL ASTRONOMICAL SOCIETY OF CANADA / JOURNAL DE LA SOCIÉTÉ ROYALE D'ASTRONOMIE DU CANADA; Vol. 83, No.6 December 1989 Whole No. 621 ; EDWIN HUBBLE 1889–1953
- National Academy of Sciences; Biographical Memoirs: V. 52 - VESTO MELVIN SLIPHER; ISBN 0-309-03099-4
- Bailey, S. I. (1920). "Comet Skjellerup". Harvard College Observatory Bulletin No. 739. 739: 1. Bibcode:1920BHarO.739....1B.
- New York Times, DREYER NEBULA NO. 584 INCONCEIVABLY DISTANT; Dr. Slipher Says the Celestial Speed Champion Is 'Many Millions of Light Years' Away. ; January 19, 1921, Wednesday
- New York Times, NEBULA DREYER BREAKS ALL SKY SPEED RECORDS; Portion of the Constellation of Cetus Is Rushing Along at Rate of 1,240 Miles a Second. ; January 18, 1921, Tuesday
- Coe, Dan; Zitrin, Adi; Carrasco, Mauricio; Shu, Xinwen; Zheng, Wei; Postman, Marc; Bradley, Larry; Koekemoer, Anton; Bouwens, Rychard; Broadhurst, Tom; Monna, Anna; Host, Ole; Moustakas, Leonidas A.; Ford, Holland; Moustakas, John; van der Wel, Arjen; Donahue, Megan; Rodney, Steven A.; Benítez, Narciso; Jouvel, Stephanie; Seitz, Stella; Kelson, Daniel D.; Rosati, Piero (2013). "CLASH: Three Strongly Lensed Images of a Candidate z ≈ 11 Galaxy". The Astrophysical Journal. 762: 32. arXiv: . Bibcode:2013ApJ...762...32C. doi:10.1088/0004-637x/762/1/32.
- Lehnert, M. D.; Nesvadba, N. P. H.; Cuby, J.-G.; Swinbank, A. M.; Morris, S.; Clément, B.; Evans, C. J.; Bremer, M. N.; Basa, S. (2010). "Spectroscopic confirmation of a galaxy at redshift z = 8.6". Nature. 467 (7318): 940–942. arXiv: . Bibcode:2010Natur.467..940L. doi:10.1038/nature09462. PMID 20962840.
- New Scientist, Baby galaxies sighted at dawn of universe, 22:34 10 July 2007
- Lawrence Livermore National Laboratory, Lab scientists revoke status of space object
- Hsiao-Wen Chen; Lanzetta; Sebastian Pascarelle; Noriaki Yahata (2000). "The Unusual Spectral Energy Distribution of a Galaxy Previously Reported to be at Redshift 6.68". Nature. 408: 562–564. arXiv: [astro-ph]. Bibcode:2000Natur.408..562C. doi:10.1038/35046031.
- BBC News, Hubble spies most distant object, Thursday, April 15, 1999
- Hu; McMahon (1996). "Detection of Lyman-alpha Emitting Galaxies at Redshift z=4.55". Nature. 382 (6588): 231–233. arXiv: . Bibcode:1996Natur.382..231H. doi:10.1038/382231a0.
- 31/01/02 ; "DAZLE NEAR IR NARROW BAND IMAGER" (PDF). Archived from the original (PDF) on 2008-07-27. (570 KB) ; DAZLE-IoA-Doc-0002
- ESO Press Release 11/95, ESO Astronomers Detect a Galaxy at the Edge of the Universe[permanent dead link], 15 September 1995
- New Scientist, Trouble at the edge of time, 21 October 1995
- Wampler, E. J.; et al. (1996). "High resolution observations of the QSO BR 1202-0725: deuterium and ionic abundances at redshifts above z=4". Astronomy & Astrophysics. 316: 33. arXiv: . Bibcode:1996A&A...316...33W.
- Elston, Richard; Bechtold, Jill; Hill, Gary J.; Ge, Jian (1996). "A Redshift 4.38 MG II Absorber toward BR 1202-0725". Astrophysical Journal Letters. 456: L13. Bibcode:1996ApJ...456L..13E. doi:10.1086/309853.
- Smail, I.; Ellis, R. S.; Aragon-Salamanca, A.; Soucail, G.; Mellier, Y.; Giraud, E. (1993). "The Nature of Star Formation in Lensed Galaxies at High Redshift". R.a.s. Monthly Notices V.263. 263: 628–640. Bibcode:1993MNRAS.263..628S. doi:10.1093/mnras/263.3.628.
- Gravitational Lenses II: Galaxy Clusters as Lenses
- Djorgovski, S.; Strauss, Michael A.; Spinrad, Hyron; McCarthy, Patrick; Perley, R. A. (1987). "A galaxy at a redshift of 3.215 - Further studies of the PKS 1614+051 system". Astronomical Journal. 93: 1318. Bibcode:1987AJ.....93.1318D. doi:10.1086/114414. ISSN 0004-6256.
- NED, Searching NED for object "3C 123"
- Spinrad, H. (1975). "3C 123: a distant first-ranked cluster galaxy at z = 0.637". Astrophys. J. 199: L3. Bibcode:1975ApJ...199L...3S. doi:10.1086/181835.
- Staff (May 21, 2015). "PIA19339: Dusty 'Sunrise' at Core of Galaxy (Artist's Concept)". NASA. Retrieved May 21, 2015.
- Staff (21 May 2015). "WISE spacecraft discovers most luminous galaxy in universe". PhysOrg. Retrieved 22 May 2015.
- Overbye, Dennis (17 June 2015). "Astronomers Report Finding Earliest Stars That Enriched Cosmos". New York Times. Retrieved 17 June 2015.
- Sci-News.com, "Segue 2: Most Lightweight Galaxy in Universe", Natali Anderson, 11 June 2013 (accessed 11 June 2013)
- "SEGUE 2: THE LEAST MASSIVE GALAXY". The Astrophysical Journal. 770: 16. arXiv: . Bibcode:2013ApJ...770...16K. doi:10.1088/0004-637X/770/1/16.
- Astronomy Now, "Heavyweight galaxy is king of its cluster", Keith Cooper, 13 May 2010 (accessed 9 March 2013)
- Research.gov, "Astronomers Discover Most Massive Galaxy Yet, Formed by 'Galactic Cannibalism'" (accessed 9 March 2013)
- "Undergraduates discover the densest galaxies known". Space Daily. 29 July 2015.
- ESO Press Release 25/00 , Most Massive Spiral Galaxy Known in the Universe Archived 2008-06-16 at the Wayback Machine. , 8 December 2000
- Grebel (2000). "Star Clusters in Local Group Galaxies". ASP Conference Series. 211: 262–269. arXiv: . Bibcode:2000ASPC..211..262G.
- Chandra X-Ray Observatory at Havard, "Abell 644 and SDSS J1021+1312: How Often do Giant Black Holes Become Hyperactive?", 20 December 2010 (accessed 7 July 2012)
- Sky and Telescope, Stars in the Middle of Nowhere, 10 January 2008
- Sky and Telescope, Galaxy Monster Mash, 9 August 2007
- ABC News, Found! Oldest galaxy pile-up, Wednesday, 9 April 2008
- Cooke, Jeff; Barton, Elizabeth J.; Bullock, James S.; Stewart, Kyle R.; Wolfe, Arthur M. (2008). "A Candidate Brightest Protocluster Galaxy atz= 3.03". The Astrophysical Journal. 681 (2): L57–L60. arXiv: . Bibcode:2008ApJ...681L..57C. doi:10.1086/590406.
- "Black hole found in Omega Centauri". UPI.com. 10 April 2008.
- "Local Large-Scale Structure". Hayden Planetarium. 15 September 2008. Archived from the original on 28 August 2008.
- Goldman, Stuart (28 September 2007). "New Stars in a Galaxy's Wake". Sky & Telescope.
- "Orphan' Stars Found in Long Galaxy Tail" (Press release). NASA. 20 September 2007.
- Sun; Donahue; Voit (2007). "H-alpha tail, intracluster HII regions and star-formation: ESO137-001 in Abell 3627". arXiv: [astro-ph].
- Fraser Cain (20 September 2007). "Galaxy Leaves New Stars Behind in its Death Plunge". Universe Today.
- Geach et al., A Redline Starburst: CO(2–1) Observations of an Eddington-Limited Galaxy Reveal Star Formation At Its Most Extreme Draft version 27 February 2013
- Wolfram Research: Scientific Astronomer Documentations - Brightest Galaxies
- 1956 Catalogue of Galaxy Redshifts: Redshifts and magnitudes of extragalactic nebulae by Milton L. Humason, Nicholas U. Mayall, Allan Sandage
- 1936 Catalogue of Galaxy Redshifts: The Apparent Radial Velocities of 100 Extra-Galactic Nebulae by Milton L. Humason
- 1925 Catalogue of Galaxy Redshifts: [ ] by Vesto Slipher
- (1917) First Catalogue of Galaxy Redshifts: Nebulae by Vesto Slipher
- Interactive Map of the Visible Universe with Galaxies: Deep Space Map
The 1916 Proclamation, the manifesto of the 1916 rebels, states:
“The Republic guarantees religious and civil liberty, equal rights and equal opportunities to all its citizens, and declares its resolve to pursue the happiness and prosperity of the whole nation and of all its parts, cherishing all the children of the nation equally, and oblivious of the differences carefully fostered by an alien government, which have divided a minority from the majority in the past.”
These noble aspirations would become almost a bible of Irish Republican ideals, but little did the authors know that within six years the Irish people would have a chance to implement them, after the War of Independence ended in 1922. However, the society established after the War of Independence, the Irish Free State, was a pale shadow of even the most modest interpretation of this document.
Civil liberties were almost non-existent and citizens were not equal: women became second-class citizens while the poor were plunged further into destitution. The history of early Irish independence is often passed over with a less than critical eye that glorifies state-building at any cost. Behind this veneer, however, lies the story of a dark authoritarian regime based on repression, discrimination and censorship, enforced by deeply authoritarian attitudes underscored by a severe Catholic morality which stifled culture and allowed no political debate or opposition of any kind. By 1937 the Irish Free State had created a society that betrayed the ideals many had set out to achieve two decades earlier.
War of Independence and Revolution.
Within a few years of the 1916 rebellion the Irish Republican movement found itself transformed from a relatively marginal group into one of the key political forces in early 20th-century Ireland. In 1918, when the British Army faced a manpower crisis in World War I, conscription was threatened in Ireland. This was deeply unpopular, and the Republican movement grew quickly, as it had consistently and militantly opposed World War I since its outbreak in 1914.
The movement went from strength to strength, and by 1919 a full-scale War of Independence was under way. Over the following two years the basis of British power in Ireland collapsed, and groups traditionally frozen out of society began to assert their power, most notably women and workers.
In the decade before independence women had made great strides in their struggle for equality. After years of struggle, albeit against opposition, women were forcing their way into politics, best symbolised by the republican socialist Constance Markievicz, who in the 1918 election became the first woman elected to the House of Commons. Markievicz's formal role as a military leader during the 1916 rebellion would have been unthinkable in the previous century. This surge of activity by women was reflected through the ranks of the republican movement in women's organisations like Cumann na mBan and Inghinidhe na hÉireann.
Although not feminist in any sense of the word, their very existence showed a marked change from the last period of radicalism in Ireland, in the 1880s, when women had struggled to get any acknowledgement for their participation in the Land War of 1879–1882. The Ladies' Land League was castigated by nearly all sections of society and only received limited acknowledgement when the Land League itself was proscribed.
While women's liberation still had a long way to go, through the second decade of the 20th century change seemed imminent. This mood was reflected by the fact that equality of the sexes was enshrined in both the 1919 Democratic Programme of the First Dáil and the 1922 constitution.
The other group in society to surge forward was Ireland's working class, organised through its trade unions. Although resoundingly defeated during the Dublin Lockout of 1913, by 1919 the trade union movement in Ireland had been reorganised and was immensely powerful. Aside from the well-known IRA activity, organised labour played a prominent role during the War of Independence.
Along with numerous general strikes, including one in support of IRA hunger strikers in 1920, there were 233 other strikes that same year, and even the establishment of an albeit brief workers' soviet in Limerick in 1919. Unions also played a crucial role in the war itself when transport workers refused to move war supplies or soldiers for the British Army.
The Birth of The Free State
After two years of conflict, strikes and assassinations, a truce was called in 1921 between the IRA and the British government. This was followed by negotiations which produced the famous Anglo-Irish Treaty. The Treaty was without doubt the most controversial document in 20th-century Irish history, and it clearly fell short of the aims of the Republican movement. The six counties that today form Northern Ireland were to remain part of the United Kingdom, while the rest of Ireland was to become not a republic but a "Free State" within the British Empire.
When the document was debated in Ireland it created huge division. The Dáil (the Irish parliament) eventually passed the Treaty narrowly, 64–57, and a few days later the final meeting of all Irish members of the House of Commons, held at the Mansion House, Dublin, formally brought the Irish Free State into existence by voting for the Treaty.
Post Independence Hopes
After independence both women and workers had high hopes that the society being forged in Ireland would protect their new-found power, but over the following decade these groups were harshly suppressed by the new Irish government. Ireland's new political elite effectively hoped to turn the clock back and restore the status quo that had existed in Ireland years, if not decades, before the War of Independence.
However, the first to learn of the authoritarian nature of the new state were the former comrades of the new government who opposed the Treaty. A few months after independence a civil war broke out between the pro- and anti-Treaty sides, which the new government fought in the most ferocious manner.
Tensions and the build-up to civil war
As soon as the Dáil ratified the Treaty, the President, Eamon de Valera, resigned and walked out uttering the words "I am not going to connive at setting up in Ireland another Government for England". He was soon joined by many other republican TDs who opposed the Treaty, including Harry Boland, Constance Markievicz and Cathal Brugha. In their absence, those who supported the Treaty set about establishing a government. Among the key figures were W. T. Cosgrave, Kevin O'Higgins, Richard Mulcahy, Arthur Griffith and Michael Collins.
The first major challenge for the new government was how to deal with opponents of the Treaty. These opponents, while in a minority, significantly held a majority of support within the army – the IRA. When senior anti-Treaty members of the IRA called a convention on March 26th 1922, in spite of a government ban, 52 out of 73 brigades attended and rejected the Treaty, proclaiming that parliament had betrayed the republican ideal by ratifying it.
Over the next few months the Free State reacted by establishing a new army – the National Army – to end its dependence on an organisation that did not support it and which it could not control.
In June an election was held in which the anti-Treaty side received 21% of the vote and the pro-Treaty side almost 40%. While this was interpreted as a mandate by those in favour of the Treaty, those opposed were unmoved. Liam Mellows, an opponent of the Treaty, remarked that it was not "the will of the people" but "the fear of the people", in reference to the British threat to wreak a terrible war if the Treaty were rejected.
For reasons beyond the scope of this article, and which are highly debated among historians, the opposing sides ended up in conflict within a few days of the election, precipitated by the Free State's National Army shelling the Four Courts – which the IRA had occupied for the previous three months – on June 28th 1922. This came after three months of effort by groups within both camps to avoid conflict.
Through the course of this civil war a deeply authoritarian, brutal streak in the new government was exposed and cultivated. It would permeate the political culture of the Free State for decades afterwards.
The Civil War
It became evident very quickly that the pro-Treaty forces were going to emerge victorious. The anti-Treaty IRA's sole point of unity was opposition to the Treaty; identifying any other goal that unified them is impossible, as they encompassed republicans of both left and right. This lack of unity hamstrung their ability to act. While the pro-Treaty side was also politically very diverse, it had a unity originating not least from the fact that it could claim a mandate from the 1922 election.
Within a few weeks the IRA forces were decisively defeated in Dublin, and Cork city was captured on August 10th. With every urban area already lost, the overall threat posed by the anti-Treaty IRA was diminishing, and Liam Lynch, the IRA Chief of Staff, gave the order to resort to guerrilla warfare that same day.
A few days later Michael Collins, the key figure in the Free State government and by then a soldier in the National Army, was killed in an ambush at Béal na mBláth in West Cork on August 22nd 1922. His death unleashed and unmasked the true authoritarianism that lay behind the Free State government. Instead of trying to de-escalate a conflict they were clearly winning, the government's politicians demanded the absolute annihilation of the IRA.
Following Collins' death, nearly a year of terrifying brutality saw the Free State's National Army breach several articles of the Hague Convention of 1907, the era's equivalent of the Geneva Convention. Far from the lofty heights of ensuring civil liberties for the people of Ireland, it engaged in a campaign of brutal repression.
At Oriel House in Dublin the Free State set up the Criminal Investigation Department, where ex-IRA members waged a campaign of torture and killings against anti-Treaty republicans. After the killing of Collins they killed four republicans in Dublin and dumped their bodies; by the end of the war such activities would account for 21 deaths in Dublin alone. These were not the acts of a few men who had gone off the edge, but of a 250-strong force operating in Dublin city centre.
During the second half of 1922 the National Army made several naval landings in Munster, where the IRA remained strongest. In a ruthless campaign, prisoners were frequently executed. Again, this cannot be explained away as merely the conduct of soldiers hardened by war – indeed, far from it.
By September 18th 1922 reports of the executions of prisoners had been forwarded to cabinet, but nothing was done save Richard Mulcahy agreeing to help remove soldiers who had a problem with such activity. The practice was in effect condoned by Patrick Hogan, Minister for Land and Agriculture, when he said that the "national army are a little too ready to take prisoners".
Furthermore, the government itself passed legislation which effectively legalised similar executions. On September 28th the sitting members of the Dáil overwhelmingly (48–18) endorsed legislation that removed jury trials for numerous activities and allowed military courts to try civilians, with death sentences handed down to those carrying weapons. On October 3rd the government offered an amnesty lasting only two weeks before the military courts, endorsed by cabinet, began a killing spree that saw dozens of people executed.
On November 10th Erskine Childers, who had been secretary to the Treaty delegation but opposed the Treaty itself, was arrested, tried and executed for being in possession of an ornamental pistol given to him as a wedding present by Michael Collins himself. Worse was yet to come.
The IRA responded in kind: on November 27th Liam Lynch issued an order that any TD who had voted for this legislation – dubbed the "Murder Bill" – was to be executed on sight. Two weeks later two government TDs, Sean Hales and Padraig O Máille, were shot; Hales died of his wounds.
In response the government decided to execute four prominent republicans being held in Mountjoy Jail in Dublin: Liam Mellows (IRA quartermaster), Joe McKelvey (former IRA Chief of Staff), Rory O'Connor (IRA Director of Engineering) and Dick Barrett. The sentiment behind government policy was outlined by W. T. Cosgrave in the statement that "terror will be met with terror". Indeed, nothing else could explain killing four men who could not possibly have had any involvement, given that they had been in prison since the first weeks of the war. It has been argued that the times provoked desperate measures, but even contemporaries thought the act unjustifiable. Thomas Johnson, leader of the Labour Party, which was neutral in the civil war, described the enormity of what had happened:
Murder most foul as in the best it is — but this most foul, bloody and unnatural. The four men in Mountjoy have been in your charge for five months … the Government of this country — the Government of Saorstát Eireann — announces apparently with pride that they have taken out four men, who were in their charge as prisoners, and as a reprisal for that assassination murdered them. … I wonder whether any member of the Government who has any regard for the honour of Ireland, or has any regard for the good name of the State, or has any regard for the safety of the State, will stand over an act of this kind.
In a particularly shocking detail, the Minister for Justice, Kevin O'Higgins, in signing Rory O'Connor's execution order, was signing away the life of the man who had been best man at his wedding the previous year.
By March 1923, as the Free State stood unquestionably on the verge of victory, it began to commit atrocities on an unprecedented scale in reaction to anti-Treaty assassinations and attacks on property. In Kerry, at Cahirciveen, Killarney and Countess Bridge, horrific massacres of IRA prisoners were committed. The most notorious atrocity was at Ballyseedy, Co. Kerry, where the National Army tied nine IRA prisoners to a bridge before detonating a landmine, killing all except one – Stephen Fuller, who later testified to the events.
The civil war drew to a close in the early summer of 1923, and it was clear the Irish Free State had fallen far short of the aims of the 1916 Proclamation, or even of far more timid aspirations. It has been argued that exceptional times called for exceptional measures; however, it is hard to see how such measures could ever be justifiable or excusable. Even if they were, it is difficult to see how the IRA posed such a threat to the state after Michael Collins' death (the period that saw the worst persecution) as to warrant so brutal a response.
The anti-Treaty forces had always been seriously disunited and poorly armed, with an arguably non-existent strategy. One of the events that heightened tensions in the run-up to war illustrates this: when an IRA unit occupied the Four Courts, they were so disunited that when the IRA Chief of Staff, Liam Lynch, attempted to gain entry on June 19th, he was locked out. Although Lynch was eventually able to repair links with the Four Courts garrison, such squabbling within days of the civil war breaking out was indicative of wider problems.
Their disunity over the following months stopped them utilising their numerical strength. This was compounded by the fact that several key figures within the anti-Treaty movement – including Rory O'Connor, Liam Mellows, Joe McKelvey, Cathal Brugha and Paddy O'Brien – were captured or killed within a few days of the conflict starting.
In essence they were strategically reactive. Their sole innovative move was the Four Courts occupation in spring 1922, after which they largely responded to Free State activity: the war started when the Free State attacked the Four Courts garrison; they reverted to guerrilla warfare only after they had lost all urban centres; and, logically enough in this pattern, they responded to state terror with terror. In this situation the Free State dictated the pace and course of the war.
Using terror was clearly the worst path, as the I.R.A. would respond in kind – illustrated by Liam Lynch issuing assassination orders against all T.D.s who had voted for what they called “The Murder Bill”, or by the ferocious brutality of the IRA killing Kevin O'Higgins's elderly father on February 10th 1923 in reprisal for the execution of 33 prisoners in January.
Indeed it was arguably this repression and brutality that allowed what was a disunited, factious movement to hold men as disparate as the communist Peadar O'Donnell and the conservative catholic Liam Lynch together. Had the Free State executed the war in a less authoritarian manner they could surely have undermined the basis of the IRA leadership. Aside from two brief amnesties in late 1922 and February 1923, which seem to have been more tokenistic than a real gesture to end the war, they fought in a manner which backed the anti-Treaty side into a corner. The brutality, if anything, played into the hands of militarists like Liam Lynch, who argued for carrying on the war until they were utterly annihilated.
Why did the Free State choose this strategy?
To see this, however, as a series of flawed decisions is incorrect. When Thomas Johnson, the Labour leader, vented his fury over the execution of Mellows, Barrett, O'Connor and McKelvey in December 1922, saying “I am almost forced to say you have killed the new State at its birth”, he missed the point. They had not killed the state; quite the opposite.
They knew how weak the anti-Treaty forces were; indeed the secretary of the Free State government, Diarmuid O'Hegarty, said “The Government was, however, satisfied, that those forces contained within themselves elements of disruption that given time would accomplish their own disintegration”. Yet they still ruthlessly crushed them. The Free State were well aware of what they were doing. The next ten years would show that in the civil war they had successfully laid the groundwork for a deeply authoritarian state.
In this light their execution of the war did not augur well for the future. Over the following ten years they would apply an equally authoritarian outlook in enforcing their view of society. Far from creating a stable society, they forced well over half the population into an oppressive existence.
Free State in Power
By early 1923 victory was inevitable and the pro-Treaty forces began to look to the future. Since December the formation of a new party had been discussed, and in April they reorganised themselves into a new political party – Cumann na nGaedheal. This new party was supposedly formed to transcend War of Independence politics, appealing to all sections of society, including those who had been opposed to independence. Whilst theoretically a nice idea, in reality it was a rallying point for the conservative elite in Irish society who wanted to re-establish their authority after a decade of social radicalism. In office it would introduce a plethora of authoritarian reforms based on excluding various groups from society.
In May the I.R.A. all but accepted defeat when chief of staff Frank Aiken (Liam Lynch had been killed in April) issued the order to dump arms on May 24th. Over the next few months state executions and torture tailed off – although Noel Lemass was executed and dumped by Free State forces in the summer of 1923, his body not found in the Wicklow Mountains until October. Comfortable in their power, having annihilated the opposition, the government held elections in August 1923.
The results were only mediocre for Cumann na nGaedheal. Given that many anti-Treaty republican candidates were in prison, on the run or, in the case of Eamon de Valera, arrested when he tried to electioneer, the fact that Cumann na nGaedheal only returned with 39% was a poor showing. Lacking a majority, they could rule only because the anti-Treaty republicans refused to sit in a parliament they saw as lacking legitimacy.
Cumann na nGaedheal in Government
Although the president of the administration was W.T. Cosgrave, the Cumann na nGaedheal government was increasingly under the influence of the highly conservative faction centred around the authoritarian Kevin O'Higgins, who famously quipped that Cumann na nGaedheal were the “most conservative-minded revolutionaries that ever put through a successful revolution”. If anyone had any hope they would fulfil the 1916 ideal to “pursue the happiness and prosperity of the whole nation and of all its parts”, they were about to be sorely disappointed. The authoritarianism that governed their policy in the Civil War was now to be turned on society at large.
Mary McSwiney Demonstration
Their willingness to use authoritarian measures on the civilian population had been displayed as early as November 1922, when the internment of the anti-treaty activist Mary McSwiney caused public anger. The 50-year-old McSwiney was one of the most famous female republican activists, hailing from the same family as the republican martyr, former Lord Mayor of Cork Terence McSwiney, who had died on hunger strike during the War of Independence in 1920.
When McSwiney went on hunger strike in prison on November 4th, a demonstration was called to protest against her incarceration. On November 9th a large demonstration of women gathered in Dublin city centre. With no apparent provocation the National Army arrived and fired shots at the demonstration. Although no one was killed, 14 were injured in the ensuing stampede.
Post office strike
The state's use of authoritarian measures was increasingly evident not just through its prosecution of the civil war but also in the way it dealt with internal dissent. In September 1922, 10,000 postal workers went on strike, provoked by a government wage cut. The reaction of the government was all too predictable: the army were sent in to break the strike, with armed guards threatening strikers on picket lines.
The rural poor were also an early victim of Cumann na nGaedheal in power. Hoping to cultivate a support base among larger farmers in Ireland, the government supported these farmers in their ongoing attempts to drive down the wages of landless agricultural labourers. These labourers formed around 23% of the rural workforce. As a class they had been the big losers of the land war of the 1880s, as they could not benefit from reforms that allowed farmers to buy land, having none themselves. Their attempts to gain a stake in Irish rural society by organising themselves in the ITGWU (The Irish Transport and General Workers Union) in the early 20th century were fiercely resisted by farmers.
In 1923, farmers, emboldened by the knowledge that the Free State would support them, locked out thousands of unionised labourers in attempts to drive down wages. In Athy, Co. Kildare, when farmers locked out 350 labourers, the National Army arrested the local ITGWU branch secretary. When a farmer was attacked and a threshing machine damaged, 8 trade unionists were arrested and held for 3 months without trial or charge.
Later in the year, when 1,500 labourers were locked out in Waterford, the response was similar. The state sent in 600 soldiers and the whole of East Waterford was put under a curfew between 11 p.m. and 5.30 a.m. Meanwhile nothing was done to stop vigilantes organised by farmers, called “White Guards”, attacking union organisers across the county. The farmers, backed by the state, emerged victorious and crushed the union.
This, accompanied by high unemployment, broke the power of organised rural labour; ITGWU membership halved in the following three years. This was reflected in the fact that within 5 years days lost to strike action had been reduced by 95%. In the absence of unions the labourers had no one to argue their corner, and the government clearly had no interest in their welfare. Their living standards plummeted: there was a 10% fall in agricultural labourers' wages between 1922 and 1926 and a further 10% in the following 5 years. These policies saw a whole section of the rural population – the labourers – disappear through emigration, little wonder given their income had fallen by 20% between 1923 and 1931.
The Urban Poor
If their despicable attitude toward the rural poor was devastating, their ambivalence to the urban poor proved fatal. The desperate living standards of the urban poor were the greatest single social issue facing the Free State in 1923. The tenement population in Dublin lived in crushing poverty. However, instead of helping the poorest of the poor, the government focused on building houses for the middle classes, which saw the expansion of the suburbs on the fringes of Dublin. Housing construction was largely privatised, and thus little was done to alleviate the desperate squalor in which people lived, as they could never afford housing. Shockingly, Dublin Corporation built an average of only 483 houses a year between 1923 and 1933. This led to the deterioration of housing conditions. When a census was conducted in 1926, over a third of the population of Dublin lived in housing with an average of 4 people per room.
This disregard for overcrowding was worsened by their tax approach. Appealing to the rich in society, the Free State, though short of money, unbelievably reduced income tax from 27% to 15%, raising finances instead through indirect taxation, which had a greater impact on the poor.
The outcome of these policies was revealed in 1926, when the shocking statistic of a 12% infant mortality rate among children younger than one in urban areas came to light. The callous, authoritarian attitude of Free State politicians and their indifference would allow this to continue unaddressed, with devastating consequences. While their inaction and indifference to the poor was shocking, the group they attacked most actively was women. Over ten years in power Cumann na nGaedheal started a process that would see women effectively forced out of any public role in society.
The Free State, The Catholic Church and Women
While Kevin O'Higgins may have been the key political influence on Cumann na nGaedheal, the key to understanding their social policy and the Free State's attitude to women was their connection to the Catholic Church. The Catholic Church effectively formed the social policy of the Free State.
This had little to do specifically with Cumann na nGaedheal and more to do with the fact that the Catholic Church was arguably the most powerful institution in Ireland in 1923, even more powerful than the state itself. Cumann na nGaedheal were in no position to stand up to the church, but they also had no inclination to do so. Indeed the opposite was true. The Catholic Church had been the key influence on Irish society since before the famine, and the entire nationalist movement, on all sides, had been inculcated with its moral and cultural attitudes, as were large sections of the population.
In this context the social values of the church were effectively the values of Cumann na nGaedheal, highlighted best by the president, W.T. Cosgrave, who suggested that the upper house of the Free State could be a “theological board which would decide whether any enactments of the Dáil were contrary to [Roman Catholic] faith and morals or not”. Indeed Kevin O'Higgins himself had failed in an attempt to become a priest. Rather than one influencing the other, church and state became almost inseparable, and at times indistinguishable, on social policy.
Once in power, Cumann na nGaedheal soon set about implementing catholic social values. There was no debate on these issues; they were enforced regardless of their impact. This was to have disastrous consequences, particularly for women: when fused with Cumann na nGaedheal's authoritarianism, catholic views of women would see them slowly but surely excluded and denuded of power, usually through legislative change but on occasion by more forceful methods when deemed necessary.
Attitudes to Women
The Catholic Church had a deeply sexist view of women in society. As the sociologist Tom Inglis (1998) points out, they portrayed women as “fragile, weak beings” and held that “for women to attain and maintain moral power it was necessary that they retain their virtue and chastity”. In order to enforce these attitudes the church portrayed sex as unclean and immoral, and ultimately women's bodies as something to be ashamed of.
This led to deep embarrassment and guilt over sex. Where the church had substantial influence it could effectively control women's knowledge of sex, as the only place they could talk about it was in confession, where they were berated on the topic by their priest. Outside of this, the catholic view of women's role in society was that they were to rear children and take care of the family, and nothing else.
The nationalist movement in Ireland had been heavily influenced by this, and its formula for an ideal Irish woman was almost identical. Arthur Griffith, who had died in 1922, had stated that in any Irish house “You will meet the ideal mother, modest, hospitable, religious, absorbed in her children and motherly duties”, clearly reflecting the ethos of the church.
The reality of 1920’s Ireland
In spite of the significant influence of the church, the reality of life in Ireland in 1922 was quite different. Prior to independence the church had used its not inconsiderable social and cultural weight to enforce these ideas. However, Ireland, like many countries across Europe in the period between 1914 and 1923, witnessed great social change which undermined the church's control and authority. While women were by no means equal citizens, significant progress had been made.
After independence, however, the church no longer had to rely only on its moral, social and cultural influence; now, in unison with the authoritarian Cumann na nGaedheal government, it could use the apparatus of the state to enforce its authority over women, particularly when it came to sex.
It was around the issue of sex that the church was most vocal and outraged. It viewed sex as a dirty subject and a sphere in which women were largely a corrupting influence. However, by 1923 Irish women may not have been as ashamed and prudish about sex as the church believed they should have been, or as many assume Irish women were in the 1920s.
In 1924 an Inter-Departmental Committee of Inquiry regarding Venereal Disease was tasked to ‘make inquiries as to the steps necessary, if any, which are desirable to secure that the extent of venereal disease may be diminished’. Its unpublished report found that ‘venereal disease was widespread throughout the country, and that it was disseminated largely by a class of girl who could not be regarded as a prostitute’. The report also illustrated that the spread of disease was relatively evenly distributed across the country and not limited, as anticipated, to former garrison towns and cities.
Aside from the blatant sexism of the report, which attributed the spread of venereal disease to women, it clearly indicated a higher level of sexual activity than often imagined. For the state and its moral watchdog, the church, this was seen as a great danger to the church's authority and control and to the nationalist vision of womanhood, i.e. the home-maker.
The authoritarianism of the state went into overdrive in the 1920s to suppress sexual activity. In 1923 strict film censorship was introduced, and films deemed ‘indecent, obscene or blasphemous or contrary to..or subversive of public morality’ were banned. 1924 saw restrictions placed on the sale of alcohol, not least as it was seen as one of the causes of slipping morality.
By 1929 censorship bills enabled the government to ban even the dissemination of material on birth control. Aside from their moral view of birth control, it was clearly something that allowed women to gain greater control over sex, while society in general would gain a greater understanding of the sexual process – something that was anathema to the Catholic Church's teaching and practice. The attitude toward contraception articulated just how authoritarian the Free State was: even discussion of the topic was not going to be tolerated. The Minister for Justice, James FitzGerald-Kenney (Kevin O'Higgins had been assassinated in 1927), stated in 1928 when the censorship bill was debated in the Dáil:
“In our [the government] views on [contraception] we are perfectly clear and perfectly definite. We will not allow … the free discussion of this question … We have made up our minds that it is wrong. That conclusion is for us unalterable … We consider it to be a matter of grave importance. We have decided, call it dogmatically if you like—and I believe almost all persons in this country are in agreement with us—that that question shall not be freely and openly discussed. That question shall not be advocated in any book or in any periodical which circulates in this country”
This attitude towards sex, and the setting of unattainable standards for women, also led to horrific abuse of women on a level only being fully understood in the last decade. This culture allowed women who had children outside of marriage, women who were raped and spoke of their experience, or even just assertive women, to be committed to what were effectively prisons run by catholic nuns: the brutal Magdalene Laundries. The state's attitude to this was more than supportive. In 1927 the State Commission on the Destitute Poor referred to women who had children outside of marriage as either “first time offenders” or those “who had fallen more than once”. This catholic morality made for a very hard life for single mothers who managed to hold on to their children (often they were forced to give them up for adoption). They were more often than not impoverished, and this led to a shameful infant mortality rate of 33% among the children of single mothers.
Perhaps the most direct attack on women over the issue of sex came in 1925, when the state cracked down on prostitution. Prostitutes were the opposite of both the Catholic Church's teaching and the nationalist view of women. Before independence Dublin had had a world-famous red light district in the north inner city known as the “Monto”, based around Montgomery Street. Although it went into decline after the withdrawal of the British Army, hundreds of women still worked there as prostitutes. Everything about the Monto horrified the church: not only was it “immoral”, but the church had little or no control over the sex lives of the women working there.
The Monto was also, to a certain extent, outside the patriarchal structure of Irish society, given that many of the brothels were run by women; if anything it was the polar opposite of the catholic view of the world. For the women working there it was a very tough life in which they were controlled by madams or pimps. Unfortunately, when the church and state attacked the area in the 1920s they did not have these women's interests at heart. They were concerned with ridding Dublin of what they saw as a moral scourge rather than with helping people who were being exploited.
Campaigning against the Monto had begun in the early 1920s, led at first by church organisations. Led by a group who would form the Legion of Mary in 1925, catholic activists targeted the area, attempting literally to force the prostitutes to convert from prostitution to home-making. They operated hostels where former prostitutes could stay, although these were run under strict moral guidelines, including the stipulation that “every entrant is made the object of a special and individual attention, directed in the first place to the creation of moral fibre.” To ensure that the prostitutes would stay in the hostel, once they got a brothel closed they moved a family into the building, effectively making the prostitutes homeless unless they stayed in the church-run hostels.
It was clear that the interests of these women were not being taken into account, but rather more abstract notions of moral fibre. Frank Duff, the man most synonymous with this campaign against prostitution and often lauded as a great social reformer, illustrated how deeply sexist the thinking behind this “moral fibre” was. Duff said that “The only cause of Syphilis … is the prostitute lying in wait in cities to tempt men” and “Behind all Venereal Disease, the prostitute lies hidden somewhere.” Both statements were blatantly untrue according to the findings of the 1926 Committee of Inquiry regarding Venereal Disease, but indicative of Duff's prejudices and his disregard for the prostitutes.
To “save” these women they were inculcated with the state's and church's idea of what they should be: essentially wives and mothers. The move from prostitution gave these women no more power, as it was a simple process of replacing the brothel madam with a husband. Through the hostels the catholic activists married the women off as quickly as possible; between 1922 and 1923, 61 women were married off.
This campaign, in which these supposedly “saved” women were bystanders in their own “liberation” from prostitution, was heavily supported by the state. The first hostel was opened at 76 Harcourt Street, a building given to the activists in 1922 by the future president, then Minister for Local Government, W.T. Cosgrave.
After a few years of campaigning, in 1925 the campaign against the prostitutes of the Monto was stepped up a notch. Several arms of the church, including the Jesuits and the Legion of Mary, worked with the police to drive prostitutes out of the Monto. After the church organisations' moderate success early in the year, the police launched a series of raids on the area. In March over one hundred people were arrested, and one woman was imprisoned for 6 weeks for allowing a house to be used as a brothel. Needless to say, while the church and state succeeded in closing the Monto they did not end prostitution; that was a secondary concern. The campaign was mainly about moral aesthetics, no doubt prompted by the fact that as catholics left the Pro-Cathedral on Marlborough Street in Dublin they were on the fringe of a red light district.
Child Abuse and The Carrigan Report
The long-term ramifications of authoritarian attitudes fused with the church's morality, which created an environment where sex was something unspeakable, had horrendous consequences. When a report into sexual crime in Ireland was carried out – the Carrigan Report (1930) – it uncovered widespread sexual abuse of children.
In the report Eoin O'Duffy, the chief of police, stated there had been 6,000 cases of abuse of people under 18 (some under 11) between 1927 and 1929, of which only 15% had been prosecuted. One is immediately reminded of the 1916 proclamation's most modest of demands: “cherishing all the children of the nation equally”. These notions were long dead by 1930 – the report was never published or acted upon. When it was circulated to politicians on December 2nd 1931, the Department of Justice attached a cover note arguing against publication because
‘it might not be wise to give currency to the damaging allegations made in Carrigan regarding the standard of morality in the country’.
This policy was continued when Fianna Fail came to power the following year, and the report was buried. The long-term implications of this are really only being understood today, as the true extent of child sex abuse emerges. As Fiona Kennedy (2000) pointed out, had this report been published it might not have stopped all sex abuse, but the culture of silence that allowed perpetrators to abuse children for decades would certainly have been lessened.
Women and Wider Society
Alongside the campaigning around the issue of sex, the church and state through the 1920s brought in several pieces of legislation designed to force women from the workplace into the home and keep them there.
In 1925 divorce, already very difficult to attain, was banned for women. Technically it remained possible for men if they moved to a country where divorce was legal, but this provision was not open to women. The only option available to them was legal separation, with no remarriage. When the bill was debated in the Senate, the Countess of Desart noted its implications for women who could be legally separated but not able to remarry:
“You condemn her to a life of misery or isolation, for a woman in so false a position must be ten times more circumspect than any other, if she would safeguard her good name. If guilty, she must spend the rest of her days as an example of the wicked, flourishing like a bay tree or as an eyesore in a land hitherto famed for its high ideals of purity.”
Countess Desart was right, but unfortunately this was one of the intentions of the bill: in order to preserve the family, women would be prevented from taking independent action in terms of divorce or separation. This legislation, reflecting the desire to confine women to the role of home-makers, was reinforced by the provision in the bill which made a woman's legal residence that of her husband, even if he lived on a different continent.
A crucial aspect of controlling women and enforcing the catholic view of the family was the exclusion of women from public life. In 1924 Kevin O'Higgins first attempted to exclude women totally from jury duty. This was clearly unconstitutional, as the 1922 constitution enshrined the idea that all citizens were equal. When the measure was finally brought in in 1927, O'Higgins, a few months from his assassination, had found a way around equality: women would have to register for jury duty.
In the course of the debate in the Seanad, O'Higgins outlined how he saw women: “I think we take the line that it was proper to confer on women citizens all the privileges of citizenship and such of the duties of citizenship as we thought it reasonable to impose upon them.” This idea that women had limited capabilities and were unable to bear the weight of citizenship was very much to the fore of their thinking and directed policy. It shaped the overriding aim: the removal of women from the public sphere.
Women working outside the home was something the Catholic Church loathed. In 1925 the government attempted to limit posts in the senior civil service to men, but this was rejected in the Senate. A few years later the bill was forced through, as the Senate could only delay legislation for a certain period. Women were thus banned from progressing past a certain grade, making a successful career in the civil service impossible. In time a marriage bar would be introduced, forcing women to retire from the civil service when they married.
By the late twenties the alliance of the Catholic Church and the Free State had almost total control over the social life of the vast majority of people. Any threat to this, no matter how inconsequential, was treated in the harshest of terms. The level of authoritarianism ruling Irish society was illustrated in Leitrim in the early 1930s.
Leitrim in the early 1920s had been like much of the country: the site of much republican activity and class struggle. In 1921 an Irish emigrant, Jimmy Gralton, returned from New York and got involved in local organising of tenants taking over landlords' farms. In the 1920s he was seen as very much to the left of the political spectrum, making enemies amongst the establishment in the area. In 1922 Gralton led the building of a local community hall – the Pearse-Connolly Hall – where educational classes and dances were held. This immediately irked the local catholic church, as Gralton was challenging its control over social activities normally held in a church-run parish hall. Through the 1920s the Catholic Church vented much of its moral indignation at such dance halls, accusing them of being sites of debauchery which caused alcoholism and sex outside marriage. In 1930 the local priest began a sustained campaign against Gralton's Pearse-Connolly Hall. This led to physical attacks on the hall, which was eventually burned down in December 1932, most likely by the local IRA.
Not content with this, the church, just as in the attack on the Monto in 1925, was able to rely on the state for support, and the state's reaction was almost incredible. For what was comparatively low-level activity, Jimmy Gralton, a man born in rural Leitrim, was deported to America and exiled from Ireland. There is little doubt that Gralton could have been dispensed with in more brutal ways – for example, in 1931 the republican James Vaugh died in very mysterious circumstances in a police cell in Ballinamore, Co. Leitrim – but there can be little doubt that the deportation of Gralton was to serve as a lesson to others.
Indeed Gralton's case highlighted just how much control the church-state alliance had over all aspects of society, including the media. The Irish Times, reporting on Gralton's deportation, emphasised that Gralton was an “Irish American”, which he was not – he had spent some time in America as an emigrant, where he also became a US citizen. This masked the fact that the Irish state was deporting someone who was born in the state.
This lie was repeated in several articles in the Irish Times during March, when Gralton's deportation order was delivered. Finally, in August 1933, when Gralton was deported to the USA, he was called “a returned American”, and the only crime cited was that he supposedly held “extreme communistic views”. No article in the Irish Times raised any issue about the right to deport him; indeed it clearly shirked from challenging the state by frequently and erroneously saying that Gralton was an Irish-American.
It reflects the authoritarian nature of the Free State, which was increasingly identifying what it was to be Irish with the moral, ethical and social values of its political and religious elite. As Gralton's case illustrated, they would ruthlessly persecute anyone who questioned this.
The authoritarianism that shaped the first ten years deeply shaped Ireland into the future. In 1932 a faction of the republicans defeated in the Civil War won the election and replaced Cumann na nGaedheal in government. (Five years earlier, led by Eamon de Valera, they had broken with the IRA over the issue of taking seats in parliament and had formed a new party – Fianna Fail.) The transition was largely seamless, with Fianna Fail largely continuing in a similar vein to Cumann na nGaedheal.
It is hard to tell how far they naturally shared the authoritarian views of Cumann na nGaedheal or simply replicated what they saw as a successful model of taking and keeping power, but they proved more than able to build on Cumann na nGaedheal's authoritarian foundation.
Indeed it was Fianna Fail who ensured the Carrigan report detailing child abuse was not published or acted upon; it was they who deported Jimmy Gralton at the behest of the Catholic Church; and most of all it was they who delivered the coup de grâce of 15 years of conservative laws, formally incorporating the attacks on women in a deeply chauvinistic document that was supposed to outline what it meant to be Irish: the 1937 constitution.
The culture created by this all-encompassing authoritarianism became endemic in Irish politics for decades, leading many of Ireland's most creative people into self-imposed exile. Publishing anything not in keeping with the catholic nationalist ethos was next to impossible. This produced what can only be described as a stifling, monolithic culture in which nothing in any way challenging was tolerated. As early as 1923, W.B. Yeats, who won the Nobel Prize for Literature that year, faced stinging criticism: the award was derided in the Catholic magazine “The Catholic Bulletin” as “A substantial sum provided by a deceased anti-christian manufacturer of dynamite”.
It is little surprise, then, that the more creatively minded followed the urban and rural poor into what was often miserable emigration. This would prompt Samuel Beckett, in his 1956 play “All that Fall”, to reflect: “It is suicide to be abroad but what is it to be at home?… A lingering dissolution”.
Over forty years later, in 1988, Philip Chevron could still write in his emigration song "Thousands are Sailing": "Where e'er we go, we celebrate / The land that makes us refugees / From fear of priests with empty plates / From guilt and weeping effigies".
When looking at the Free State there is little to take from its first ten years, or indeed from subsequent governments. Most praise comes when historians apply "the litmus test" of "the survival of the state", as Thomas Bartlett did as recently as 2010. While they did ensure the state survived (whatever that actually means, given they merely replicated the administrative practices of the British Empire), for the vast majority – women, the rural and urban poor and political opponents – it meant effective removal from an active role in society, a role they had fought hard to achieve between 1913 and 1922.
From legislation making public life for women impossible, to the summary executions of the civil war, to the deportation of Jimmy Gralton, the achievements of "The Free State" were limited to the restoration of the pre-World War I social and economic order. They succeeded in preserving a state for the rich and powerful in a symbiotic relationship with the Catholic Church. In this context, those who laud the "achievements" of the founders of the Irish State as great men, for no obvious reason other than the preservation of this state, should reflect on the words of Mikhail Bakunin, the 19th-century Russian anarchist:
Thus, to offend, to oppress, to despoil, to plunder, to assassinate or enslave one's fellow man is ordinarily regarded as a crime. In public life, on the other hand, from the standpoint of patriotism, when these things are done for the greater glory of the State, for the preservation or the extension of its power, it is all transformed into duty and virtue… There is no horror, no cruelty, sacrilege, or perjury, no imposture, no infamous transaction, no cynical robbery, no bold plunder or shabby betrayal that has not been or is not daily being perpetrated by the representatives of the states, under no other pretext than those elastic words, so convenient and yet so terrible: "for reasons of state."
Bartlett, T. (2010) Ireland: A History, Cambridge University Press, Cambridge
Coogan, T.P. (1998) The Irish Civil War
Ferriter, D. (2005) The Transformation of Ireland: 1900-2000, Profile Books, London
Garvin, T. (2005) Preventing the Future: Why Was Ireland Poor for So Long?, Gill & Macmillan, Dublin
Gillis, L. The Fall of Dublin, Mercier, Dublin
Hill, J. (2003) A New History of Ireland Volume VII: Ireland, 1921-84, Oxford University Press, Oxford
Inglis, T. (1998) Moral Monopoly: The Rise and Fall of the Catholic Church in Modern Ireland, University College Dublin Press, Dublin
Kostick, C. (1996) Revolution in Ireland: Popular Militancy 1917 to 1923, Pluto Press, London
Lee, J. (1989) Ireland, 1912-1985: Politics and Society, Cambridge University Press, Cambridge
Regan, J. (2001) The Irish Counter-Revolution, 1921-1936: Treatyite Politics and Settlement in Independent Ireland, Gill & Macmillan, Dublin
Regan, J. 'Strangers in Our Midst: Middling People, Revolution and Counter-Revolution in Twentieth-Century Ireland', Radharc, Vol. 2 (Nov. 2001), pp. 35-50
Dolan, A. 'Killing and Bloody Sunday, November 1920', The Historical Journal, 49, 3 (2006), pp. 789-810
Jackson, A. (1999) Ireland 1798-1998: Politics and War
Drudy, P.J. (1982) Irish Studies II: Ireland – Land, Politics, and People, CUP Archive
Hogan, G.W. 'Law and Religion: Church-State Relations in Ireland from Independence to the Present Day', The American Journal of Comparative Law, Vol. 35, No. 1 (Winter 1987), pp. 47-96
Gibbon, L. 'Labour and Local History: The Case against Jim Gralton 1886-1945', Saothar, p. 91
Howell, P. 'Venereal Disease and the Politics of Prostitution in the Irish Free State', Irish Historical Studies, Vol. 33, No. 131 (May 2003), pp. 320-341
Kennedy, F. 'Frank Duff's Search for the Neglected and Rejected', Studies: An Irish Quarterly Review, Vol. 91, No. 364 (Winter 2002), pp. 381-389
Kennedy, F. 'The Suppression of the Carrigan Report: A Historical Perspective on Child Abuse', Studies: An Irish Quarterly Review, Vol. 89, No. 356 (Winter 2000), pp. 354-363
Luddy, M. 'Sex and the Single Girl in 1920s and 1930s Ireland', The Irish Review, No. 35, Irish Feminisms (Summer 2007), pp. 79-91
Lydon, J. (1998) The Making of Ireland: From Ancient Times to the Present, Routledge, London
Regan, J. 'Irish Public Histories as a Historiographical Problem', Irish Historical Studies, xxxvii, no. 146 (Nov. 2010), pp. 88-115
1. She did not take her seat. Sinn Fein at this point were implementing an abstentionist policy.
2. Kostick, C. (1996) p. 108
3. Kostick, C. (1996) p. 130
4. Gillis, L., p. 24
5. O Regan (1997) p. 551
9. Regan, J. (2001) p. 105
10. Irish Times, December 9th 1922
11. Dail Debate, 8th December 1922, http://debates.oireachtas.ie/dail/1922/12/08/00007.asp
12. Several other buildings were occupied but heavy fighting only took place at the Four Courts and the Gresham Hotel.
13. Dail Debate, 8th December 1922, http://debates.oireachtas.ie/dail/1922/12/08/00007.asp
15. Jackson (1999) p. 272
16. Lydon, J. (2001) p. 363
17. O Regan, J. (2010) p. 560
18. Irish Times, November 9th 1922
19. Kostick, C. (1996) p. 184
20. Kostick, C. (1996) p. 185
21. Peasant Models and Rural Ireland, p. 145
22. Kostick, C. (1996) p. 191
23. Kostick, C. (1996) p. 191
24. Kostick, C. (1996) pp. 191-2
25. Lee, J., p. 128
26. Lee, J., p. 115
27. Ferriter (2010) p. 318
28. Lee, J. (1989) p. 124
29. Regan, J. (2001) p. 83
30. Inglis, T. (1996) p. 210
31. Luddy, M. (2007) p. 85
32 Dáil Éireann, 18 October, 1928 http://www.oireachtas-debates.gov.ie/plweb-cgi/fastweb?state_id=1329072053&view=oho-view&docrank=17&numhitsfound=22&query=That%20question%20shall%20not%20be%20advocated%20in%20any%20book%20or%20in%20any%20periodical%20which%20circulates%20in%20this%20country&query_rule=%28%28$query1%29%3C%3DDATE%3C%3D%28$query2%29%29%20AND%20%28%28$query4%29%29%3ASPEAKER%20AND%20%28%28$query5%29%29%3Aheading%20AND%20%28%28$query6%29%29%3ACATEGORY%20AND%20%28%28$query3%29%29%3Ahouse%20AND%20%28%28$query7%29%29%3Avolume%20AND%20%28%28$query8%29%29%3Acolnumber%20AND%20%28%28$query%29%29&query4=James%20FitzGerald-Kenney&docid=38898&docdb=Debates&dbname=Debates&sorting=none&operator=and&TemplateName=predoc.tmpl&setCookie=1
33. Ferriter, D. (2005) p. 542
34. Ferriter, D. (2005) p. 323
35. Ferriter, D. (2005) p. 323
36. Luddy (2007) p. 88
37. Howell, P. (2003) p. 330
38. Howell, P. (2003) p. 330
39. Luddy, M. (2007) p. 88
40. Luddy, M. (2007) p. 88
41. Kennedy, F. (2010) p. 383
42. Irish Times, Saturday, March 28, 1925
44. Kennedy, F. (2000)
45. Seanad debate on the Divorce Bill, http://historical-debates.oireachtas.ie/S/0005/S.0005.192506110009.html
46. Hill, J. (2003)
47. 2nd reading of the Jury Bill in the Senate
48. 'Labour and Local History: The Case against Jim Gralton 1886-1945', p. 91
49. Irish Times, February 18th 1933
50. Irish Times, August 19th 1933
52. Bartlett (2010) p. 434
What Does "IQ" Stand For, and What Does It Mean?
Alfred Binet, 1857-1911
It is a matter of everyday experience that some people are more intelligent than others. But what is "intelligence"? And how do we measure it?
In 1905, a French psychologist named Alfred Binet, working with a physician associate, Theodore Simon, developed the Binet-Simon Test, designed to measure the intelligence of retarded children. It was based upon their observations that:
(1) Just as children grow taller as they grow older, they grow more mentally capable as they grow older; and
(2) Some children can perform at age and equivalent-grade levels above their chronological ages, while other children perform at age and equivalent-grade levels below their chronological ages. For example, a few 6-year-olds could perform as well on the Binet-Simon mental tests as the average 8-year-old, while a few 6-year-olds could only perform as well as the average 4-year-old.
It was also observed that the gaps between children's mental ages and their chronological ages widened as the children got older. The 6-year-old with the mental age of 8 had a mental age of 12 by the time he was 9, and a mental age of 16 by the time he was 12. Similarly, the 6-year-old with a mental age of 4 had a mental age of 6 when he was 9 and a mental age of 8 when he was 12. In 1912, the German psychologist William Stern noticed that even though the gap between mental age and chronological age widens as a child matures, the ratio of mental age to chronological age remains constant (and, as we will see, remains essentially constant throughout life). This constant ratio of mental age divided by chronological age was given the name "Intelligence Quotient". Actually, the intelligence quotient is defined as 100 times the Mental Age (MA) divided by the Chronological Age (CA):
IQ = 100 MA/CA.
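Stern's ratio is simple enough to compute directly. The helper below is an illustrative sketch (the function name is ours, not from any standard library):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: 100 times mental age divided by chronological age."""
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return 100 * mental_age / chronological_age

# The 6-year-old from the example above who performs at an 8-year-old level:
print(ratio_iq(8, 6))   # ~133.3
# The same child at age 9 with a mental age of 12 keeps the same ratio:
print(ratio_iq(12, 9))  # ~133.3
```

Note how the widening gap (8 vs. 6, then 12 vs. 9) leaves the quotient unchanged, which is exactly Stern's observation.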
Mental Age for Adults
At approximately the age of 16, mental age, like height, stops increasing. Until 1960, it was customary to use 16 as the divisor for mental age among adults. Actually, certain mental functions increase slowly and slightly after the age of 16, peaking in the 20's, with others remaining stable or even rising slightly up to the age of 60 or so. With some individuals, vocabulary may increase over time.
Practical Significance of IQ
The average IQ of the population as a whole is, by definition, 100. IQs range from 0 to above 200, and among children, to above 250. However, about 50% of the population have IQs between 89 and 111, and about 80% of the population have IQs ranging between 80 and 120, with 10% lying below 80, and 10% falling above 120.
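These percentages are consistent with modeling IQ as normally distributed around 100. As a sketch (assuming a standard deviation of 16, the older Stanford-Binet convention; modern tests often use 15), Python's `statistics.NormalDist` reproduces them approximately:

```python
from statistics import NormalDist

# Assumed model: IQ ~ Normal(mean=100, sd=16); the sd is our assumption.
iq = NormalDist(mu=100, sigma=16)

middle_50 = iq.cdf(111) - iq.cdf(89)   # fraction between 89 and 111: about 0.51
middle_80 = iq.cdf(120) - iq.cdf(80)   # fraction between 80 and 120: about 0.79
below_80 = iq.cdf(80)                  # fraction below 80: about 0.11

print(f"{middle_50:.2f}, {middle_80:.2f}, {below_80:.2f}")
```

The same two-line model can check any of the interval claims in this section; with sigma=15 the numbers shift only slightly.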
For IQs below 120, IQ is the best predictor of socioeconomic status of any psychometric measurement. In more complex jobs, IQ is better than even education or experience at predicting job performance. In her article "The General Intelligence Factor", Scientific American Presents "Exploring Intelligence", pg. 24, 1999, Linda Gottfredson states,
"Adults in the bottom 5% of the IQ distribution (below 75) are very difficult to train and are not competitive for any occupation on the basis of ability. Serious problems in training low-IQ military recruits during World War II led Congress to ban enlistment from the lowest 10% (below 80) of the population, and no civilian occupation in modern economies routinely recruits its workers from that below-80 range. Current military enlistment standards exclude any individual whose IQ is below about 85."
"Persons of average IQ (between 90 and 100) are not competitive for most professional and executive-level work but are easily trained for the bulk of jobs in the American economy. By contrast, individuals in the top 5 percent of the adult population can essentially train themselves, and few occupations are beyond their reach mentally."
"People with IQs between 75 and 90 are 88 times more likely to drop out of high school, seven times more likely to be jailed, and five times more likely as adults to live in poverty than people with IQs between 110 and 125. The 75-to-90 IQ woman is eight times more likely to become a chronic welfare recipient, and four times as likely to bear an illegitimate child than the 110-to-125-IQ woman."
In his book "Straight Talk About Mental Tests" (The Free Press, A Division of the Macmillan Publishing Co., Inc., New York, 1981, pg. 12), Dr. Arthur Jensen cites the following IQ thresholds:
(1) An IQ of 50 or below. This is the threshold below which most adults cannot cope outside of an institution. They can typically be taught to read at a 3rd or 4th grade level. However, they cannot normally function in the customary classroom setting, and they require special training programs.
(2) An IQ between 50 and 75. At this level of intelligence, people generally cannot complete elementary school. Most adults in this range will need help from smarter friends or family in coping with the world.
(3) An IQ between 75 and 105. Children in this IQ range are not generally able to complete a college prep course in high school.
(4) An IQ between 105 and 115. May graduate from college but generally, not with grades that would qualify them for graduate school.
(5) An IQ above 115. No restrictions.
For IQs in these ranges, the influence of IQ upon socioeconomic status is dramatic. 31% of those with IQs below 75 were on welfare, compared with 8% of those in the 90 to 110 IQ interval, and 0% of those with IQs above 125. 55% of mothers with IQs below 75 went on welfare after the birth of their first child, compared with 12% of those with IQs between 90 and 110, and 1% of those with IQs above 125. Income is highly dependent upon IQ up to an IQ level of about 125.
Table 1 - Practical Significance of IQ

| IQ Range | Frequency | Educational Level | Typical Work |
| Below 30 | >1% below 30 | Illiterate | Unemployable. Institutionalized. |
| 30 to 50 | >1% below 50 | 1st-Grade to 3rd-Grade | Simple, non-critical household chores. |
| 50 to 60 | 1.5% below 60 | 3rd-Grade to 6th-Grade | Very simple tasks, close supervision. |
| 60 to 74 | 5% below 74 | 6th-Grade to 8th-Grade | "Slow, simple, supervised." |
| 74 to 89 | 25% below 89 | 8th-Grade to 12th-Grade | Assembler, food service, nurse's aide |
| 89 to 100 | 50% below 100 | 8th-Grade to 1-2 years of College | Clerk, teller, Walmart |
| 100 to 111 | 1 in 2 above 100 | 12th-Grade to College Degree | Police officer, machinist, sales |
| 111 to 120 | 1 in 4 above 111 | College to Master's Level | Manager, teacher, accountant |
| 120 to 125 | 1 in 10 above 120 | College to Non-Technical Ph.D.'s | Manager, professor, accountant |
| 125 to 132 | 1 in 20 above 125 | Any Ph.D. at 3rd-Tier Schools | Attorney, editor, executive |
| 132 to 137 | 1 in 50 above 132 | No limitations | Eminent professor, editor |
| 137 to 150 | 1 in 100 above 137 | No limitations | Leading math, physics professor |
| 150 to 160 | 1 in 1,100 above 150 | No limitations | Lincoln, Copernicus, Jefferson |
| 160 to 174 | 1 in 11,000 above 160 | No limitations | Descartes, Einstein, Spinoza |
| 174 to 200 | 1 in 1,000,000 above 174 | No limitations | Shakespeare, Goethe, Newton |
Wandering Down to Walmart
To gain a clearer perspective regarding what this means in terms of our daily contacts with people, let's take a trip down to a local Walmart. Let's suppose we're visiting the only Walmart in a small, rural town, so that neighborhood inhomogeneities don't affect the cohort of shoppers we'll find at the store. That way, we'll be seeing a nearly random cross-section of the public on our trip.
OK. Here we are at Walmart. I can already see quite a few people out here in the parking lot.
Let's suppose that we're going to see 100 other customers while we're here shopping, and then consider their breakdown by IQ. On the basis of the law of averages, we'd expect to see one person here with an IQ below 64! There'd be someone else with an IQ between 64 and 68. There should be 3 more with IQs between 69 and 75. In other words, if this is a random crowd, 1 out of 20 people we're going to meet will have IQs below 75, and will be seriously retarded! (I guess we're lucky the world works as well as it does.) Keep your eyes peeled. See if you can spot 'em. About 1 out of 10 people we'll walk past here at Walmart has an IQ below 80, or about 10 of the 100 people who cross our paths here in the store! Hey, look! Does she look kind of sagaciously-challenged to you? One out of 5, or 20 of the 100 people we're seeing have IQs below 87, with about 1 in 10 in the 80 to 87 IQ range. Half the crowd, or 50 out of the 100, has below-average intelligence! And of course, the other half has above-average intelligence. Twenty of them (1 out of 5) have IQs above 113. Ten of them, or 1 in 10, have IQs above 120. Five of them have IQs above 125, and have the potential to become university professors with Ph. D's. Two of them have IQs of 132 or above, and are potential members of Mensa. One of them has an IQ above 136.
Did you spot them? I saw one or two possible candidates, but I suppose we'd better not walk up and say,
"Pardon me, ma'am. You look mentally challenged. Are you?"
She might hit us with her purse.
If we spent time at a large urban mall, we might rub elbows with 1,000 shoppers. In an average, unenriched setting, where we saw 1,000 other shoppers at Christmas-time, IQs might typically be expected to range between 50 and 150. In a blue-stocking suburb like Norcross or Corte Madera, we might expect to find one or more folk with IQs above 150, and perhaps, an individual or two with an IQ above 160. This is a huge range of IQs.
I think that the range of intellects that we walk past in the world is awesome. The span between top and bottom among 100 people chosen at random would be about 75 points of deviation IQ, or more than 80 points of ratio IQ. And we've been walking past them every day.
This isn't the whole story. It's mentioned below that even on culture-fair tests, the average IQ of our African-American population falls about one standard deviation below those of the other components of our population. This means that 1 out of 10 African-Americans has an IQ below 59, and only about 2 African-Americans in 1,000 can qualify for Mensa. So most probably, on our trip to Walmart, we're going to see an African-American with an IQ of 60 or below (mental age of 10).
Until I wrote this up this afternoon, I had never stopped to think just what intellectual diversity awaits us at our local shopping centers. Half the people we meet in cars on the road have below-average intelligence, and 1 in 20 must be seriously retarded, with a mental age of 12 or below. Ouch! I think I'll ride my bike on back streets to the store.
Race, Ethnicity, and Gender
There are significant variations in the distributions of IQ as we switch among races, ethnic groups, and gender. In discussing this area, I'm presently skating on thin ice because I'm relying on recalled information. I'll try to pin this down within the next few weeks, so if you would, please regard what I'm about to say as an unconfirmed "placeholder" for what I hope will be more-reliable information a week or two from now.
Fifty years ago, it was thought that American Indians had an average IQ of about 69, but I have the impression that this is now considered to be a vile canard. American Indians appear to have an average IQ of approximately 100. Sinic people reared in the United States have an average IQ of about 103(?), with an average IQ on the order of 106 when reared in Japan. They consistently show relatively higher mathematical and spatial-visualization scores and relatively lower verbal scores than their Caucasian counterparts. African-Americans have a population-average IQ that has remained consistently about one standard deviation, or 16 points, below the U.S. Caucasian mean of 100 (for an average IQ of 84).
Having condemned African-Americans to this racially inferior estate, let me bring up the good news.
(1) If the Flynn Effect is real, African-Americans in 2000 have IQs as high as Caucasians in 1950, and hey! we didn't consider ourselves to be slow learners. Today's African-Americans have IQs as much as 15 points above the average Caucasian in 1900.
(2) A year ago, a Johns Hopkins spokeswoman, speaking to the parents of profoundly gifted children, told them that, based upon the nootropic ("smart pill") pharmaceuticals that are now entering the FDA pipeline, it should be possible to boost children's IQs by as much as 50 points by 2010. A memory enhancement pill is on its way to market over the next few years which would allow total memorization in long-term memory over a 3-to-4-hour period.
(3) Genetically engineered boosts in intelligence should be technically feasible within the next decade or two.
As mentioned previously, individuals with IQs of 132 or above may join Mensa upon presentation of qualifying test results. Individuals with IQs of 137+ are eligible to join organizations such as TOPS (Top One Percent Society). Those with IQs of 150+ qualify for membership in the Triple-Nine Society and the One-in-a-Thousand (OATH) Society, those with IQs of 164 or above are potential candidates for the Prometheus Society and the Ultranet, and those rare specimens with IQs above 176 are welcomed into the Mega and Pi Societies. (There is even a Giga--one-in-a-billion--Society, with two members, plus its founder.) Needless to say, at the one-in-a-million level, the membership roster is somewhat exiguous. These organizations are also open to subscribers. Subscribers are not allowed to vote, but they may participate in the fascinating dialogues that take place within these societies.
Deviations from a Bell-Curve
IQs near the center of the range, between about 75 and 125, are well-represented by a bell curve like the one shown below. However, IQs below about 75 don't fit a bell curve well at all. The reason is that there are some individuals who suffer brain damage and who increase the pool of the seriously retarded. Similarly, it was discovered in 1921, when 250,000 California schoolchildren were screened with IQ tests to determine whether they should be included in the Terman Study of gifted children, that there are a lot more very high IQ scores than would be predicted by the bell-curve. For example, the Terman Study found 77 children with IQs of 170 or above, where they would only have expected to find 1 or 2. They found 26 children with IQs of 180, where theory would have predicted only one child with an IQ above 180 in 3,000,000 children. They found one child with an IQ of 201, where the bell-curve predicts only one such child out of every 5,000,000,000 children. Part of this is thought to be a result of uneven rates of mental growth. Some children experience temporary spurts of mental growth that are later offset by temporary slackenings of mental development--like children who physically mature relatively early. Part of it is also a function of the fact that, if there are 4 or 5 children with IQs in the mid-190's (because of "growth spurts"), one of them may have an especially good day and score 5 or 6 points higher than he would normally score, while another of them on that same day might score 5 or 6 points lower than she would usually score. The one that scores higher is the one that catches our attention.
Because of these effects, beginning around 1960, psychometrists defined adult scores in terms of percentiles, and then translated those percentiles into the IQ scores that the bell-curve predicts. These percentile-derived scores are called "deviation IQs", and the older (mental age)/(chronological age) IQs are called "ratio IQs". (For a more complete description of deviation IQs versus ratio IQs, click here.) This had the effect of reducing IQ scores, since ratio IQs tend to run quite a bit higher at the higher levels than do deviation IQs. (The highest probable deviation IQ is about 200, since a deviation IQ of 200 would be expected, as mentioned above, to occur only once in every 5,000,000,000 people--the approximate current population of the earth.) The scale shown below the plot presents one approach--a log-normal conversion--to estimating the ratio IQs that correspond to given deviation IQs.
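A deviation IQ is simply a percentile pushed back through the normal curve. The sketch below (again assuming a mean of 100 and an SD of 16; both parameters are our assumptions) uses `statistics.NormalDist.inv_cdf` for the percentile-to-IQ direction and the upper-tail probability for rarity:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=16)  # assumed parameters, not from the article

def deviation_iq(percentile: float) -> float:
    """Map a percentile in (0, 1) to a deviation IQ."""
    return iq.inv_cdf(percentile)

def rarity(score: float) -> float:
    """Average number of people sampled to find one at or above this score."""
    return 1.0 / (1.0 - iq.cdf(score))

print(round(deviation_iq(0.98)))  # 98th percentile: roughly the Mensa cutoff of 132
print(round(rarity(150)))         # on the order of 1 in 1,100, as the article states
```

Running `iq.cdf` in the other direction recovers the percentile for a given score, which is how the "1 in N" figures quoted in this article can be spot-checked.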
The figure below shows the upper half of the "bell-curve" distribution (Gaussian normal distribution) of human intelligence. As the plot shows, 50% of the population has below-average intelligence. As the bell-curve below indicates, 1 person in 10 has an IQ of 120 or above, 1 in 20 boasts an IQ of 126 or above, 1 in 50 is Mensa level, with an IQ of 132 or above, 1 in 100 possesses an IQ of 137 or above, 1 in 1,100 is characterized by an IQ of 150 or above, 1 in 11,000 sports an IQ of 160 or above, 1 in 1,000,000 owns an IQ of 176 or above, and so forth.
How Is IQ Measured?
There are a number of IQ tests available. Some IQ tests are untimed, individually administered tests such as the Stanford-Binet and the Wechsler tests. (The five Wechsler Performance subtests are timed.) Other tests are timed, proctored group tests, such as the Raven Progressive Matrices, the California Test of Mental Maturity (CTMM) and the Cattell Culture-Fair Test, which are easier to administer but are narrower in scope. (Included in this group would be the Scholastic Aptitude Test, the Graduate Record Exam, and the Miller Analogies Test.) Still a third class of test is the power test, such as the Mega Test, the Titan Test, and the Test for Genius. These are unproctored, open-book tests in which the test-taker lays protracted siege to difficult problems that emulate the kinds of problems encountered in actual research. These tests are not universally recognized as true IQ tests because it is felt that they are susceptible to cheating, and that their scores depend upon collateral factors such as persistence and library skills as well as sheer intelligence.
IQ tests have been under attack since their inception. It is, perhaps, counter-intuitive and unpopular that a test requiring an hour or two can establish the upper bounds of one's intellect for a lifetime. However, although they're not infallible, they do a remarkably good job of generating a score that will remain more or less constant throughout life.
Can Intelligence Be Measured With a Single Number?
Yes and no. One of the most serious criticisms of using a single number to assess intelligence is that people may be stronger in certain areas such as verbal skills, logical aptitude or spatial visualization than in others. Drs. Richard Feynman and Albert Einstein would be examples of geniuses who were extremely strong mathematically while being relatively weak verbally. More commonly, though, purely intellectual abilities tend to be uniformly high or uniformly low in a given individual, leading to the concept of an underlying "g" or "general intelligence" that powers all the specialized intellectual aptitudes. Still, this doesn't happen with everyone, and the exceptions, like Richard Feynman and Albert Einstein, are very important. Tests like the Wechsler Adult Intelligence Scale (WAIS) consist of a number of subtests that are scored separately and can measure the profile for an individual. (Dr. Howard Gardner has defined seven types of intelligence, while Dr. Robert Sternberg has identified three.)
It's also easier to make an IQ score that's lower than your true IQ than it is to make a score that's higher. Taking a test on a bad day, or spending too much time on a few difficult items could artificially lower one's score. The best results are obtained when more than one test is administered.
What Does Adult IQ Mean?
Generally, one's mental age stops rising rapidly when one reaches the latter teens--e.g., 16. Consequently, on some IQ tests, "16" was taken as the chronological-age divisor in an IQ calculation for adults. The Wechsler Adult Intelligence Scale is calibrated for all ages up to 70, with chronological-age divisors appropriate to every age 70 or below.
The average IQ is, by definition, 100. To get an idea what this means, someone with an IQ of 80 or below is considered to be marginally able to cope with the adult world. People with IQ's of 80 or below typically work in unskilled jobs such as lawn maintenance and trash pickup. They generally need help from friends or family to manage life's complications. About 10% of the population has an IQ of 80 or below.
People with IQ's of 80-90 are a little on the slow side but may be found in fast-food restaurants, day-care centers, etc. They may also be found in unskilled jobs. About 16% of the population has IQ's in this range.
People with IQ's of 90-110 generally occupy semi-skilled positions, including typists, receptionists, assembly line workers, and checkout clerks. They are able to keep up with the world, and comprise about 46% of the public.
People with IQ's in the 110 to 120 range fill the skilled trades and include some tool and die makers, teachers, and Ph. D.'s among their ranks. They also make up 16% of the population.
People with IQ's of 120 and above tend to staff the professions as doctors, dentists, lawyers, teachers, and college professors. They fall in the upper 10% of the population.
The average IQ of all college professors is 130, which lies within the upper 3% of the general public.
Water Workshop 2002
A CHALLENGE FOR THE 27th WATER WORKSHOP
George Sibley, Workshop Coordinator
The concept of 'reclamation,' for Western Civilization, began in Europe where it mostly involved making land fit for cultivation by removing water from it, as was done for much of The Netherlands, which is about 40 percent below sea level. For people from those humid climates, and later from the humid eastern part of North America, the idea of aridity over much of the western half of the continent was so alien that they simply refused to believe it, until aridity had driven thousands of homesteaders off the land in Western America.
So the evolution of the idea of reclamation in America began with the realization that, to make land fit for cultivation, it was necessary to put water on it. This began primarily as a local agrarian phenomenon: a farmer would lead water out of a stream to irrigate a piece of bottomland, and other farmers downstream might enlarge and extend that ditch. Then groups of settlers established ditch companies to bring water from ever greater distances to irrigate mesas and other uplands that were fertile but dry. Sometimes these companies bit off a little more than they could chew, and their projects languished.
The federal Reclamation Service came into being in 1902 in large part as a progressive effort to encourage the settlement of small farmers on western lands as a deliberate effort to counter the growing power of ever larger corporations in an urbanizing and industrializing society, and most of the Service's early projects reflected that, picking up some troubled projects like the Gunnison Tunnel just downstream from here, and creating other local projects. This 'agrarian thrust' has remained an important thread in the weave of western reclamation.
But the urbanizing, industrializing society also had needs - needs less about spreading the water out onto the land than about concentrating water, energy and food resources in centers. And by the time the 'Reclamation Service' had become the 'Bureau of Reclamation', this work also became a federal matter as the Bureau enlarged its scope to meet those needs, beginning with the Boulder Canyon Project in 1928 that, by the beginning of World War II, had established the regional infrastructure for the phenomenon of Southern California.
After World War II, it became evident that large-scale reclamation work was also providing recreational 'byproducts' for the growing urban masses as well as the infrastructure for a working society. Many Americans began to look at the remaining 'unreclaimed' West more for its natural qualities than for whatever resources remained to be developed there, and both protecting and restoring natural systems became the reclamation challenge of the last quarter of the 20th century - in some places, like the Grand Canyon, the Bureau has been challenged to 'reclaim nature' from the earlier exuberance of reclamation.
The challenge for this Water Workshop is to try to imagine and envision what reclamation will be in the future of the West. We have learned too much about the consequences of engineering streams and rivers for a relatively narrow set of human needs and desires to ever proceed again with the naïve exuberance of the first two-thirds of the past century. But it seems equally naïve to think that a still-growing West, whose population grew from around 10 million to 90 million over the century just past, can step away from the idea of reclamation and 'the engineered environment.'
One thing we might all try to take out of this conference is a more comprehensive and 'evolved' definition of reclamation that truly reflects the challenge of keeping a society of 90 million westerners healthy without consuming the ecological and aesthetic attributes that make the West a desirable place to live.
RECLAIMING THE SPIRIT OF RECLAMATION
by Ed Marston
Exec. Director, High Country News, Paonia, Colorado
It is astounding to me, watching the divided society we live in, that an earlier society situated on the same land could have come together to build Hoover, Glen Canyon, Flaming Gorge and scores of other major dams. We today are like barbarians left with something built by a higher order, or at least by a more organized and cohesive society. The society that built those machines agreed on what they were for, and put them to work to produce food, fiber, electricity and water for urban areas, with flat-water recreation thrown in.
Now, decades later, we have 50 ideas about what they're for. Some of us want them to be used exclusively for their original purposes. But others want them to be used to create floods to build beaches, and to provide water for rafters, raptors, or fish that are barely hanging onto their changed environments. And always, there is the tug of war between rural uses of water and urban uses of water. That rural-urban conflict does not include only the diversion of water away from irrigation and into cities' water treatment plants, but also includes the environmental uses of water.
So the dams and the Reclamation Era, which opened with the last century and declined well before the 20th century ended, are both a rebuke and a challenge to us: a rebuke for being so quarrelsome, without even having the excuse of being liquored up; and a challenge to come together and use these machines to serve our collective needs.
We are at the moment like the tribe in the movie "The Gods Must Be Crazy." The tribe found a Coca-Cola bottle, which proved endlessly useful -- so useful that they fell to quarreling with each other over how to use it and who was to use it. Should it be a container to carry water? To store grain? To pound stakes in the ground?
We have found dozens of wonderful Coke bottles, left to us by a civilization that has all but disappeared, and whose vision and drive have certainly disappeared. We are fighting each other over those bottles. In case you didn't see the movie, at its end, the tribe's leader took the bottle, traveled a long way to a city, and returned this gift to whence it had come.
There are those who suggest that we, too, return the gift, which they see as a curse: that we breach the dams and let the rivers run through them. The most organized, cohesive and middle-of-the-road of these groups, the Glen Canyon Institute, has this as a mission statement:
The Glen Canyon Institute's mission is to provide leadership toward restoration of a free-flowing Colorado River through Glen Canyon and Grand Canyon.
So far as I can tell from its web site, the keeper of the traditional vision, the U.S. Bureau of Reclamation, has this for a mission statement:
Through leadership, use of technical expertise, efficient operations, responsive customer services and the creativity of its employees, Reclamation continues to manage, develop, and protect the water resources of the West for economic, social, and environmental purposes. Over the past 95 years, the Reclamation program has emphasized development of safe and dependable water supplies and hydropower to foster settlement and economic growth in the West.
Reclamation will continue to increase productivity to carry out its mission more efficiently. This requires Reclamation to provide the opportunity and means for its employees to excel in their work, thereby ensuring that Reclamation can effectively and efficiently carry out its mission and provide high quality customer services at the lowest possible cost. Reclamation intends to achieve a diverse workforce to promote excellence, innovation and responsiveness to the needs of our various constituencies.
The Glen Canyon Institute may or may not succeed in implementing its audacious vision, but there is no doubt what its vision is. By comparison, it is clear that the US Bureau of Reclamation has no vision.
In a few places, dams have been dismantled, or steps toward such dismantling are well underway, as in Olympic National Park on the Elwha River in the State of Washington. I don't want to take sides on the question of wholesale dismantling of dams, because I don't think that's the core issue. I don't think the West would become a wonderful place if all of our dams disappeared tomorrow. Nor do I think our world would collapse. What we're up against is how to change our Hatfield and McCoy approach to water matters. Our challenge is how to achieve the unity of purpose that allowed the Reclamation Era to be an era.
I don't like everything the Reclamation Era achieved. I think it overshot, but I do admire its unity. I do admire the fact that the people of that time came together with a purpose they believed in, and they did it democratically, for that time. The Reclamation Era, I believe, was not a product of despotic forces. I think there was as much democracy in Reclamation as we can reasonably expect in this world. I think the evidence of that democracy came in the 1960s and 1970s, when the building of dams in places that the nation held sacred - like Dinosaur National Monument and the Grand Canyon - was stopped. The nation's values changed, and dam building was stopped even though the top levels of government and most organized economic interests wanted to continue building dams.
The trouble is, we stopped Reclamation without replacing its vision with another. We were against, but we weren't clearly for something. What was Reclamation's vision? Initially, it was an agrarian, Jeffersonian vision: to make the desert bloom by putting water and tens of thousands of small farmers on the land. In places like these west-central valleys, that vision can still be seen in place today. It is what makes our areas special, I believe.
But far more typical is a place like California's Imperial Valley, which uses something like 3 million acre-feet of water a year to raise a huge percentage of the nation's vegetables, as well as huge quantities of sudan grass, alfalfa and cotton. The Imperial Valley is being squeezed today, like a sponge, as California tries to figure out how to water its 33 million people while skinnying down to its 4.4 million acre-foot/year quota out of the Colorado River.
Imperial Valley agriculture has created as close to a feudal society as you can find in the United States today. The valley has a few large growers, tens of thousands of workers, 25 percent of its population living under the poverty level, and many, many workers migrating daily from the Mexican city of Mexicali to work in the fields. This poverty, these immense land holdings, and the drying up of the Colorado River Delta are all a result of the Reclamation vision gone awry. We built the Hoover Dam and the All American Canal so that the people who produce our food can live as if they were vassals of some knight in England or France. The desert is blooming in the Imperial Valley, but the society is not.
Reclamation completely abandoned the vision of small farmers creating a Jeffersonian society in the West after World War II. That vision was replaced by a vision of growth, progress, and technological mastery. It is the vision that is at work in Southern California as that region tries to meet its Colorado River Compact quota. California and the entire seven-state basin are proceeding as if they face only a technical problem of reallocating water. I think we face a deep social problem, which is easiest to express by pointing out that we have never replaced the lost visions of making the desert bloom, settling small farmers on the land, and, finally, creating growth and progress.
What we have today, if we have anything, is the latter vision: a vision of a smoothly running, ever-growing machine. I think people expect more from their society and even from their government than simply efficiency. America is a wonderful place because, periodically, we think and dream with large, impractical strokes. If we did not do this, we could not have built the Hoover Dam in the midst of the Great Depression. We could not have built Glen Canyon Dam, Flaming Gorge, or Blue Mesa. The West had a vision for itself, and the nation bought into that vision.
But that vision has played itself out, and we are living among monuments whose technical workings we understand, but whose spirit we do not understand. And so we divide into different camps: those who still want to keep the deserts and mountain valleys blooming; those who want to divert those waters to metropolitan areas to grow houses and malls; and those who want to tear down the dams and make the rivers live again.
I would like to see us recapture the Reclamation Era not by building more dams - where would we put them? and what would we put in them? - but by recapturing the spirit of Reclamation: a vision that would unite us in pursuit of a more fulfilling future. Much as I admire the simplicity of the mission statement of the Glen Canyon Institute - to breach Glen Canyon Dam - I don't think it's a sufficient vision for the society. We need and deserve more.
The future will require the merging of two large forces: environmentalism - which I define as a desire for a more natural and less paved world, and sprawl - which accepts as inevitable a paved world, but which demands a bit of fenced and private green space within that paved world. Both are intent on natural space, but they are after that space in different sizes.
The immediate tragedy - and you can see it here in the Gunnison area - is that caught between these two pincers are people who depend on large expanses of cheap land: ranchers, loggers, farmers, oil and gas drillers, and miners. They are people who depend on nature for their livings; people who experience nature in a much different way than environmentalists or suburbanites.
I should say here that if we Americans had a lick of sense we'd be perfectly happy with our material state, happy with our politics, and that we'd thank the Lord each day that we live here and not elsewhere. We'd bless our dams and dammed rivers, and we'd bless our undammed rivers, and we'd kiss our children and relax and cut our work weeks to 10 hours or so.
But we don't have a lick of sense. I know I don't. We live as if saber tooth tigers were still at our heels, and adrenaline still courses into our systems at the slightest provocation. And individually and as a society we're addicted to adrenaline, so we will keep on churning. We will keep busy. We will keep organizing. For whatever reason, we can't stop. I accept that. The only question is: in what direction should we try to direct our churning?
At my age, and at this point in my career, I feel like the Nez Perce Chief Joseph: I am tired of fighting ... from where the sun now stands, I will fight no more.
What I want instead of fighting are colleagues and allies, especially if they look at the world very differently. I am no longer a very good ideologue. I don't believe in large, overarching ideas or in the charismatic characters who preach those ideas. I don't believe in big technological fixes. I don't believe wind energy, or the hydrogen economy, or the fuel cell, or even the dismantling of dams will save us.
I believe instead in pragmatism. I believe in working away at a knot in many different ways, with many different hands and minds and approaches, until it finally unravels. I want to be involved with people who have the patience and temperament to work away at the many knots that confront the western United States: the cattle-and-public land knot; the dam and rivers knot; the logging and old growth forest knot. Those are my people. Those are my soul mates.
Chief Joseph came to his decision to fight no more out of honorable defeat. My war was against rural, extractive uses of the Interior West. I run an environmental newspaper, and for most of the 1980s, I ran that newspaper as if only the environmental movement could save the West from ranching, mining, logging and dam building. I consider that we, the green folks, have won that war. After all, we live in a state and in a region where urban uses now trump rural uses everywhere, including the most remote county.
But for me at least, the victory is proving hollow, for much of what I loved about the West was in rural nature. This isn't a new conclusion. For much of the 1990s, I tried to run the paper as a vehicle of reform rather than of revolution. I became especially attached to the idea that ranching, properly done, could lead the way to a New West, and I've been appalled for years at the efforts some of my fellow environmentalists make to drive ranchers off the public land.
Where did this war within the West come from? I can describe it in terms of a personal evolution. We city people came here out of an alienation with how urban America was being run. We idealized the rural West, and we ran head on into the people who were living here, and who did not idealize the rural West. They understood it was a great place to live. But they knew it was also a tough place to make a living, and that it was a left-behind part of America, with everything stacked against it. They knew the rural West was living off the crumbs of the American economy, producing commodities at rock-bottom prices for relatively well-off city people.
Of course, they were enraged when the newcomers, and city people working through national environmental groups, interfered with the production of those commodities, and also interfered with the subsidies that larger economy chose to send to the rural West. Led politically by the environmental movement, and squeezed economically by free trade, by a reaction against subsidies and regulation, and by the increasing price of land and labor in rural areas, natural-resource based economies have come under increasing pressures.
What does this have to do with Reclamation? We should see Reclamation as a spirit rather than as a set of dams. The West came together - it buried enough of its differences to get a job done. Unless we can now adopt that spirit, we will be locked in endless warfare. Nothing will work well, and those things we care about: the land, wildlife, the economy and the things a healthy economy enables us to do will all deteriorate.
The following books are helpful in understanding the spirit, if not the purpose, of the Reclamation Era:
High and Dry: The Texas-New Mexico Struggle for the Pecos River, by Emlen Hall. A University of New Mexico law professor describes how Reclamation really works in the Southwest.
Against the Current: Essays in the History of Ideas, and The Hedgehog and the Fox: An Essay on Tolstoy's View of History, by Isaiah Berlin. What does a now dead Oxford philosopher have to tell us about the West? Plenty. Berlin is the apostle of a society which uses seemingly clashing ideas to find a workable middle.
Cadillac Desert, by Marc Reisner. A wonderful, from-the-heart book about the failures of reclamation. The wonderful thing about Reisner is that he went on to work with rice farmers and others to enhance rural economies. His death was a tragedy, for this was that rarity: a thinker and activist capable of growth.
Big Trouble: A Murder in a Small Western Town Sets off a Struggle for the Soul of America, by J. Anthony Lukas. If you like your history to be well plotted, this story of the murder of the former governor of Idaho, around 1900, is for you.
Storey, Brit Allan
EVOLUTION OF THE BUREAU OF RECLAMATION:
AN INSIDER HISTORIAN'S PERSPECTIVE ON THE LEGACY AND THE CHALLENGE
Brit Allan Storey, Senior Historian
BUREAU OF RECLAMATION
THE MOVEMENT FOR RECLAMATION
As the Nineteenth Century ended and the Twentieth Century began, a number of events meshed to create the correct political, economic, and technological setting for the creation of a Federal irrigation service. Westerners had long known that the largely arid American West receives a distinctly small share of the earth's fresh water supply. As a result, because it is essential for occupation, settlement, agriculture, and industry, water has always been a dominating factor in the arid West's prehistory and history.
The snowmelt and gush of spring and early summer runoff frustrated early Western settlers. They watched helplessly as water they wanted to use in the dry days of late summer disappeared down Western watercourses. In response to this problem, settlers developed water projects and created complicated Western water law systems, which varied in detail among the various states and territories but generally allocated a sort of property right in available water based on the concept of prior appropriation (first in time, first in right) for beneficial use.
At first, water development projects were relatively simple. Settlers diverted water from a stream or river and used it nearby; but, in many areas, the demand for water outstripped the supply. As demands for water increased, settlers wanted to store "wasted" runoff for later use. Storage projects would help maximize water use and make more water available for use. Unfortunately, private and state-sponsored irrigation ventures often failed because of lack of money and/or lack of engineering skill. This resulted in mounting pressure for the Federal Government to develop water resources.
In the jargon of the day, irrigation projects were known as "reclamation" projects. The concept was that irrigation would "reclaim" or "subjugate" arid lands for human use. John Wesley Powell's western explorations and his published articles and reports; private pressures through publications, irrigation organizations, and irrigation "congresses"; nonpartisan Western political pressures; and Federal Government studies, conducted by the U. S. Army Corps of Engineers and U. S. Geological Survey (USGS), contributed to the discussions and cogitations that influenced American public opinion, Congress, and the executive branch in support of "reclamation."
During their period of dominion, the Spanish and Mexican governments in the American Southwest supported settlement and irrigation through their land grant systems. Before 1900, the United States Congress had already invested heavily in America's infrastructure. Roads, river navigation, harbors, canals, and railroads had all received major subsidies. A tradition of government subsidization of settlement of the "West" was longstanding when the Congress in 1866 passed "An Act Granting the Right-of-Way to Ditch and Canal Owners over the Public Lands, and for other Purposes." A sampling of subsequent congressional actions promoting irrigation reveals passage of the Desert Land Act in 1877 and the Carey Act in 1894, both intended to encourage irrigation projects in the West. In addition, beginning in 1888, Congress appropriated money to the USGS to study irrigation potential in the West. Then, in 1890 and 1891, while that irrigation study continued, the Congress passed legislation reserving rights-of-way for reservoirs, canals, and ditches on lands then in the public domain. However, westerners wanted more; they wanted the Federal Government to invest directly in irrigation projects. Western interest in Federal investment in irrigation was intensified by the Depressions of 1873, 1883, and 1893, which successively dried up private investment money in the West for irrigation and other projects. The "reclamation" movement demonstrated its strength when pro-irrigation planks found their way into both Democratic and Republican platforms in 1900. Then, in 1901, "reclamation" gained a powerful supporter in Theodore Roosevelt when he became President after the assassination of William McKinley.
"RECLAMATION" BECOMES A FEDERAL PROGRAM
President Roosevelt supported the "reclamation" movement because of his personal experience in the West, and because of his "conservation" ethic. He later wrote in his autobiography that
The first work I took up when I became President was the work of reclamation. Immediately after I had come to Washington, after the assassination of President McKinley. . . before going into the White House, [Frederick] Newell and [Gifford] Pinchot called upon me and laid before me their plans for National irrigation of the arid lands of the West. . .
To Roosevelt and others of that time, "conservation" meant a movement for sustained exploitation of natural resources by man for the good of the many through careful management -- a very different ethic than what "conservation" means today. Roosevelt also believed "reclamation" would permit "homemaking" in support of the agrarian Jeffersonian Ideal. Reclamation supporters believed the program would make homes on subsistence family farms for Americans. After some political horse trading over rivers and harbors legislation, the Reclamation Act passed in both Houses of the Congress by wide margins, and President Roosevelt signed the Reclamation Act in June of 1902.
In July of 1902, Secretary of the Interior Ethan Allen Hitchcock established the United States Reclamation Service (USRS) within the Division of Hydrography in the USGS. Charles D. Walcott, director of the USGS, also became the first "director" of the USRS, and Frederick Newell became the first "Chief Engineer" while continuing his responsibilities as chief of the Division of Hydrography.
The Reclamation Act required that
Nothing in this act shall be construed as affecting or intended to affect or in any way interfere with the laws of any State or Territory relating to the control, appropriation, use, or distribution of water . . . or any vested right acquired thereunder, and the Secretary of the Interior . . . shall proceed in conformity with such laws . . .
That meant implementation of the act required that Reclamation comply with numerous and often widely varying state and territorial legal codes. The development and ratification over the years of numerous interstate compacts governing the sharing of streamflows between states, of several international treaties governing the sharing of streams by the United States with Mexico or Canada, and numerous court decisions made Reclamation's efforts to comply with state or territorial water law even more complex. Colorado was party to the most famous of Western compacts, the Colorado River Compact, signed in 1922 and ratified by Congress in 1928. However, quite a number of other compacts affected Colorado: the South Platte River Compact (March 8, 1926); the Rio Grande Compact (1930 [temporary] and 1939), signed by the commissioners of the states of Colorado, New Mexico, and Texas on March 18, 1938, in Santa Fe, New Mexico, subsequently ratified by the legislatures of each state, and approved by Congress on May 31, 1939; the Republican River Compact (May 26, 1943); the Upper Colorado River Basin Compact (April 6, 1949); and the Arkansas River Compact (May 31, 1949). Examples of court decisions include Wyoming v. Colorado [259 U.S. 419], decided in 1922, and Nebraska v. Wyoming [325 U.S. 589], decided in 1945.
In its early years, the Reclamation Service relied heavily on the USGS Division of Hydrography's previous studies of potential projects in each western state. Between 1903 and 1906, about 25 projects were authorized throughout the West. Because Texas had no Federal lands, it was not one of the original "reclamation" states. It became a reclamation state only in 1906.
PRINCIPLES OF THE RECLAMATION PROGRAM
Using revenues from the sales of public lands, Reclamation implemented a program underlain by several basic principles. The details have changed over the years, but the general principles remain: (1) Federal monies spent on reclamation water development projects which benefitted water users would be repaid by the water users; (2) projects remain Federal property even when the water users repay Federal construction costs, though the Congress could, of course, choose to dispose of title to a project; (3) Reclamation generally contracts with the private sector for construction work; (4) Reclamation employees administer contracts to assure that contractors' work meets Government specifications; (5) in the absence of acceptable bids on a contract, Reclamation, especially in its early years, would complete a project by "force account" (that is, would use Reclamation employees to do the construction work); and (6) hydroelectric power revenues could be used to repay project construction charges.
EARLY HISTORY OF RECLAMATION
In 1907, the USRS separated from the USGS to become an independent bureau within the Department of the Interior. The Congress and the Executive Branch, including USRS, were then just beginning a learning period during which the economic and technical needs of Reclamation projects became clearer. Initially overly optimistic about the ability of water users to repay construction costs, Congress set a 10-year repayment period. Subsequently, the repayment period was increased to 20 years, then to 40 years, and ultimately to an indefinite period based on "ability to pay." Other issues that arose included: soil science problems related both to construction and to arability (ability of soils to grow good crops); economic viability of projects (repayment potential), including climatic limitations on the value of crops; waterlogging of irrigated lands on projects, resulting in the need for expensive drainage projects; and the need for practical farming experience for people successfully to take up project farms. Many projects were far behind their repayment schedules, and settlers were vocally discontented.
The learning period for Reclamation and the Congress resulted in substantial changes when the USRS was renamed the Bureau of Reclamation in 1923 and, in 1924, the Fact Finder's Act began major adjustments to the basic Reclamation program. Those adjustments were suggested by the Fact Finder's Report which resulted from an in-depth study of the economic problems and settler unrest on Reclamation's twenty-plus projects. Elwood Mead, one of the members of the Fact Finder's Commission, was appointed Commissioner of Reclamation in 1924 as the reshaping of Reclamation continued. A signal of the changes came in 1928, for instance, when the Congress authorized the Boulder Canyon Project (Hoover Dam), and, for the first time, large appropriations began to flow to Reclamation from the general funds of the United States instead of from public land revenues and other specific sources. This was at least partially a response to the fact that many projects were not economically viable. The Congress chose to continue to invest in the West through subsidization of projects from general funds and through hydroelectric revenues.
In 1928, the Boulder Canyon Act ratified the Colorado River Compact and authorized construction of Hoover Dam which was a key element in implementation of the compact. Subsequently, during the Depression, Congress authorized almost 40 projects for the dual purposes of promoting infrastructure development and providing public works jobs. Among these projects were the beginnings of the Central Valley Project in California, the Colorado-Big Thompson Project in Colorado, and the Columbia Basin Project in Washington. With the addition of the Boulder Canyon Project which included both Hoover Dam and the All-American Canal System, these four Depression-era projects represent between forty and fifty percent of Reclamation's irrigated acreage.
Ultimately, of Reclamation's more than 180 projects, about 70 were authorized before World War II, but the remainder were authorized during and after World War II in both small authorizations and major authorizations, such as the Pick-Sloan Missouri Basin Program (1944), the Colorado River Storage Project (1956), and the Third Powerplant at Grand Coulee Dam (1966). The last really big project construction authorization occurred in 1968 when Congress approved the Colorado River Basin Project Act which included the Central Arizona Project, the Dolores Project, the Animas-La Plata Project, the Central Utah Project, and several other smaller projects.
One problem confronted by Reclamation was laboratory testing of special problems. Testing was carried out in various locations such as Montrose and Estes Park, Colorado, Colorado State University, and Reclamation offices in the old Customs House in Denver until Reclamation located its primary laboratory at the Denver Federal Center in 1946. These research laboratories study modeling and designs for hydraulic structures, concrete technology, electrical problems, construction design innovations, groundwater, weed control in canals and reservoirs, various environmental issues, water quality, ecology, drainage, control of evaporation and other water losses, and other technical subjects.
The earliest hydroelectric plant on a Reclamation project was in place in 1908, and it was soon followed by hydroelectric generation on two other Reclamation projects in 1909. However, it was only during the 1930s that generation of hydroelectric power became a principal benefit of Reclamation projects. Reclamation built the major hydroelectric plant at Hoover Dam only after a hard public debate about whether the Federal Government should become involved in public power production or whether private power production should be the rule. It was the Hoover Dam precedent which ultimately allowed Reclamation to become a major hydroelectric producer. Once the issues received public airing at Hoover Dam, major hydroelectric plants became a feature of many Reclamation projects. Hydroelectric revenues have subsequently proved an important source for funding repayment of Reclamation project costs. In 1993, Reclamation had 56 power plants online and generated 34.7 billion kilowatt hours of electricity. In 1999, revenues from Grand Coulee hydroelectric generation alone returned to the U. S. Treasury about two-thirds of Reclamation's entire appropriated budget.
RECLAMATION AND INTERSTATE WATERS
Allocation of the waters of the Colorado River was addressed in 1922 in Santa Fe when Secretary of Commerce Herbert Hoover moderated a meeting of commissioners representing Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming. The meeting developed and signed the Colorado River Compact (Compact) to divide and allocate the waters of the Colorado River. For Reclamation, this is the most complex and difficult of the interstate compacts, and it was ratified by the Congress in 1928 without the concurrence of Arizona. California and Arizona argued for years over how to calculate Arizona's share of the waters of the lower Colorado River. The Arizona legislature ratified the Compact only in 1944 and then later sued California over its interpretation of the Compact. The lawsuit lasted from 1952 until issuance of the Supreme Court decree in 1964. Concern over the Compact has only heightened over the years as it became increasingly apparent that there isn't consistently as much water in the Colorado River as was presumed by the signers and ratifiers of the Compact. In addition, the Compact did not anticipate provision for 1.5 million acre-feet of water promised to Mexico in a 1944 treaty. Reclamation is deeply involved in these complicated Colorado River issues because Reclamation reservoirs largely store and regulate the flow of the Colorado River. Reclamation dams in the Upper Colorado River Basin deliver water to Glen Canyon Dam which then stores the water in Lake Powell. From Lake Powell, the water is delivered in accordance with the terms of the Colorado River Compact to the Lower Colorado River Basin states. Once delivered to the Lower Colorado River Basin, Hoover Dam stores the water in Lake Mead.
As already noted, the Colorado River Compact is the most complex and difficult of the interstate compacts, but Reclamation is also affected by other compacts and court decisions all over the West where the waters of interstate streams are shared among states.
Reclamation's traditional area of operation is the 17 arid, continental states of the West. Reclamation has, however, at times been assigned work outside that traditional operational area. For instance, during the late 1920s Reclamation studied "planned group settlement" in cutover areas and swamps in the South. This project was supposed to create new farms, but it ultimately died as the impacts of the farm depression of the 1920s and 1930s were recognized. Other projects in the eastern United States were also undertaken, and Reclamation's photograph collection includes hundreds of photographs from areas outside the arid West. Beginning in the 1930s Reclamation studied possible projects in Hawaii, and in 1954 the Congress authorized investigations on Oahu, Hawaii, and Molokai among the Hawaiian Islands. In the 1940s and 1950s Reclamation studied many water development projects in Alaska and ultimately built the Eklutna Project outside Anchorage. The Eklutna Project has since been transferred out of Reclamation.
In the early years of its history, Reclamation was actively involved, in conjunction with the Indian Service, in irrigation projects for Indian tribes including the San Carlos, Blackfeet, Flathead, Crow, and Yuma. However, the majority of Reclamation project water went to non-Indians. In the early years, Reclamation's mission to develop water supplies appeared to carry the potential for injuring the rights of tribes. If non-Indians began using Reclamation-provided water, it was feared they would establish a senior right under the appropriation doctrine, leaving little or no water for the tribes when they were ready to develop their reservation lands.
In the landmark 1908 decision, Winters v. United States, the Supreme Court attempted to reconcile this potential conflict through the "Winters Doctrine." This case concerned the Milk River in Montana, and actually delayed development of Reclamation's Milk River Project. The Winters Doctrine established the principle of reserved rights - Indian tribes with reservations have reserved water rights in sufficient quantities to fulfill the purposes for which the reservation was established, and the date of the reserved right is the date of the treaty or Executive Order setting aside the land. The dates of reserved rights generally are very early in relation to non-Indian settlement and, thus, establish very high priority for Indian water rights. Further, unlike appropriative water rights, a reserved water right does not have to have been used to remain in effect. A reserved right remains in effect regardless of how many years have passed. A congressionally authorized and funded Reclamation project could not take precedence over senior water rights. Thus, if a tribe had senior reserved water rights, its right to the future development of reserved rights should not be affected legally by Reclamation project development. Nevertheless, there are situations in which tribes have encountered difficulties in attempting to develop their senior reserved water rights for various reasons - situations the United States, with Reclamation's participation, is trying to address through the Indian water rights settlement program and other initiatives.
In recent years the Federal Government has become much more sensitive to Indian tribal water issues. Many Reclamation projects include provision for honoring the Secretary of the Interior's trust responsibility for Indian water rights. Among notable examples are the Central Arizona Project, the Dolores Project, and the Animas-La Plata Project. Reclamation is also involved in water-related activities such as the Mni Wiconi water distribution system in South Dakota which provides rural culinary water supply in a large area that includes several reservations. Reclamation personnel often serve on negotiating teams or provide technical expertise to negotiating teams working for the Secretary of the Interior to develop equitable water solutions for Native American tribes. Reclamation has amended its procedures so that before any new actions are undertaken, Reclamation first determines if the action could adversely impact Indian trust resources. When it appears that adverse impacts are possible, Reclamation will work with the tribe to seek to avoid the impacts, or when unavoidable, to determine appropriate mitigation.
RECLAMATION PROJECTS AND THE ENVIRONMENT
Conservation and environmental issues are not as new to Reclamation as many think. The nature of conservation and environmental issues and how they have affected Reclamation, however, has changed considerably. Very early in Reclamation's history between 1908 and 1912, for instance, there was a public outcry about conservation of Lake Tahoe's natural lake level and scenic beauty when Reclamation proposed to build a dam both to increase storage capacity and to sometimes lower the existing lake level to benefit the Newlands Project. In a distinctly different direction, Reclamation's Belle Fourche Project in South Dakota was specifically designed to avoid mixing hazardous industrial mining wastes in Whitewood Creek with its irrigation water.
Subsequently, proposals for Reclamation projects raised public consciousness about major dams and their impacts on various resources. Reclamation, by the mid-1930s, was looking at fishery issues as it addressed construction of Grand Coulee and other dams. On another front, in the mid- to late-1930s, Coloradoans and their congressional representatives pushed Reclamation to build the Colorado-Big Thompson Project which would require construction on the fringe of and under Rocky Mountain National Park. The project was ultimately built because Rocky Mountain National Park was created with a provision in the enabling law that specifically authorized a water development project infringing on the National Park. In the 1950s, the controversy over construction of Echo Park Dam in Dinosaur National Monument heightened public awareness of issues surrounding construction of a dam in a National Park Service-managed area. Ultimately, public opinion forced cancellation of plans for Echo Park Dam and resulted in construction of the alternative, Glen Canyon Dam. By the 1960s, Marble Canyon and Bridge Canyon dams were proposed, but Secretary of the Interior Stewart Udall canceled those dams because of public pressure in support of preserving parts of the Grand Canyon. Ironically, opposition was based at least partly on the public's belief that nuclear power generation was a viable alternative for meeting growing electric power needs in the West.
Although effects on the environment were always, to a limited extent, a part of Reclamation's work, during the 1960s, Reclamation's work began to change substantially as public awareness reached new heights. There was a sea change in America and the way Americans looked at natural resources exploitation. This change resulted, in part, from improved communication which meant that the average American's news came not from newsreels, radio, and newspapers, but from television, with same-day information and images which visually reinforced issues. It also came, in part, from transportation changes which meant that the average American could travel to the "West" on airliners or in powerful cars on much improved highways. Americans were coming to understand issues about the West better and to consider the West "theirs." Thus, expanded knowledge and accessibility resulted in an increasingly proprietary feeling on the part of large new groups of Americans toward public lands and public works.
At the same time, communities across the country began to pay increasing attention to water and air pollution issues. This new situation combined with far more sophisticated science and resultant understandings of the complex interactions of the communities of nature as well as of water and air pollution issues. Among other items, the effects of wetlands loss on fisheries and bird populations were better recognized. Improved understanding of the natural world and its issues combined with a shifting political power which moved away from the rural and agrarian population and components of the economy to the urban population and components of the economy. The change was signaled in many ways. Wide-open, little-regulated exploitation of historic and natural resources, even on private property, lost support in America as effects on animals, birds, fishes, plants, water, air, archaeological sites, and historic sites were better recognized.
Rachel Carson's Silent Spring appeared in 1962 and increased public support for more environmentally sensitive project development. While even popular music expressed growing environmental concerns, increased public consciousness and support manifested itself in political action when the Congress passed the Wilderness Act in 1964, the Fish and Wildlife Coordination Act in 1965, the National Historic Preservation Act in 1966, the Wild and Scenic Rivers Act of 1968, the National Environmental Policy Act (NEPA) of 1969, and many other subsequent laws. Accompanying and buttressing these Federal laws were presidential Executive Orders, Federal regulations; and state and local laws, orders, and regulations.
The specific effects of Reclamation projects were also better identified in this period. Dam construction adversely affected some native fish populations while also often creating blue ribbon fishing waters below dams. Dams often altered the flow characteristics and ecology of rivers and streams. Land "reclamation" and construction projects affected plant, animal, fish, and bird populations through displacement or destruction because of ecological changes. In addition, land development made possible by water development often destroyed historic or archeological resources. Destruction of non-arable wetlands was a special environmental problem. Hydroelectric production, often considered pollution-free, was recognized as carrying environmental effects because of altered water temperatures, effects on native fish populations, effects on migratory fish, and water fluctuations. Environmental issues that conflicted with traditional bureau missions were not unique to Reclamation. Americans identified long menus of environmental effects throughout construction and natural resources exploitation programs in both the government and private sectors in American society.
After a period of adjustment to the new laws and regulations, and as a result of increasing public and political pressure, Reclamation developed staffs to deal with environmental and historic preservation issues. Reclamation invests a great deal of time and money in issues such as: Endangered Species Act compliance; instream flows; the preservation and enhancement of quality freshwater fisheries below dams; preserving wetlands; conserving and enhancing fish and wildlife habitat; controlling water salinity and sources of pollution; ground water contamination; and the recovery of salmon populations on both the Columbia/Snake and the San Joaquin/Sacramento River systems. Reclamation implemented "reoperation" (revision of the way hydroelectric power generation is scheduled and carried out) of hydroelectric facilities at Glen Canyon Dam on the Colorado River to better achieve environmental objectives. Reclamation has made costly modifications to dams such as Shasta and Flaming Gorge to achieve environmental goals. There is a major effort underway among Federal and state agencies and other interest groups to improve environmental and water quality in the delta at the mouth of the Central Valley of California where the San Joaquin and Sacramento Rivers join and flow into San Francisco Bay.
Ironically, Reclamation's attempts to use drainage water to support environmental objectives at the Kesterson National Wildlife Refuge in the Central Valley of California resulted in unexpected and difficult environmental problems. The drainage water mobilized selenium and concentrated it in water of the refuge causing death and deformity among the affected animal populations. The selenium issue was a problem neither Reclamation nor the Fish and Wildlife Service foresaw, and it has been dealt with.
Reclamation reservoirs provide flat water recreation opportunities all over the West. From the very beginning of Reclamation's history, westerners were quick to identify and enjoy recreation opportunities on and in the water captured behind dams on Reclamation projects. However, recreation was not recognized legally as a project use until 1937. Reclamation transferred Lake Mead, behind Hoover Dam, to the National Park Service for recreation management in 1936 and initiated the still-existing pattern of seeking other agencies to manage recreation at Reclamation facilities. That pattern means that today Reclamation manages only about one-sixth of the recreation areas on its projects. From the 1930s to the early 1960s, authorizations by Congress for recreation identified specific projects; but in the mid-1960s, the Congress began to give Reclamation more generalized authorities for funding recreation on all projects. Fishing, hunting, boating, picnicking, swimming, and other recreational opportunities developed over the years.
In 1992, Reclamation had over 300 recreation areas on its projects with almost 5 million acres of land (a little less than five-eighths of Reclamation-controlled Federal lands) open to various recreational uses. In recent years, Reclamation has "reoperated" some facilities seeking to improve recreational fishing, commercial fishing, and white water recreational opportunities. Three recreation areas managed by the National Park Service - Lake Roosevelt behind Grand Coulee Dam, Lake Mead behind Hoover Dam, and Lake Powell behind Glen Canyon Dam - as well as the U. S. Forest Service's Shasta Lake behind Shasta Dam, are among the most prominent recreation areas on Reclamation projects. Other managing partners for recreation areas include other Federal agencies, state agencies, counties, and cities. These partnerships result annually in millions of recreation days of use on Reclamation projects and raise numerous issues in terms of interagency coordination, water quality, public safety, public access, cost-sharing, law enforcement, etc. As water is converted from rural to urban uses in the West, resulting in urban population increases, recreation visits to Reclamation projects are expected to increase.
FLOOD CONTROL/DROUGHT BENEFITS
Flood control is one of the benefits provided on many Reclamation projects. Reclamation's facilities are operated in a way that annually prevents millions of dollars of flood damage. In the 42 years between 1950 and 1992, Reclamation projects with the most flood control benefits prevented in excess of 8.3 billion dollars in flood damage.
Flood control is needed in very wet years. In drought periods, Reclamation becomes involved in drought management activities. In some cases, Reclamation projects fare better than other water users because many Reclamation projects have carryover storage which can provide water during a few consecutive years of drought. In some areas, however, growing demand stresses the water supply even in normal water years. Water shortages, often drought-influenced, will probably increase in the Reclamation West, thus forcing more effective and efficient use of the water supply. Reclamation's drought activities are quite varied: assisting water users with planning during drought periods for the use and allocation of limited water supplies, participating in cooperative contingency planning for future droughts, water conservation, loans, involvement in water banking, deepening wells, and water purchases are among the many possibilities.
INTERNATIONAL AND OTHER ASSISTANCE
International assistance is an important aspect of Reclamation's program. Reclamation employees have worked in more than 80 countries providing technical assistance on a wide range of water resources issues, and Reclamation has welcomed more than 10,000 visitors from nearly every country in the world to its facilities. Reclamation routinely provides training programs for foreign visitors. All this activity is done in accordance with United States policy and in cooperation with the U. S. State Department.
In addition, Reclamation provides technical water assistance within the United States to various public and private entities through a variety of programs.
Reclamation currently has more than 180 projects in the 17 Western States which are managed out of over twenty area offices. The area offices are within five regions which are organized around western watersheds. Many projects are actually operated and maintained by the water users on the projects. Reclamation's projects provide agricultural, municipal, and industrial water to about one-third of the population of the West. Farmers on Reclamation projects produce about 13 percent of the value of all crops in the United States, including about 65 percent of all vegetables and 24 percent of all fruits and nuts. As a result of initiatives under the presidency of Bill Clinton, Reclamation's staffing level is about one-fifth smaller than it was in 1993; and as Reclamation enters into additional partnerships with the beneficiaries of the water and electricity produced on its projects, Reclamation's staffing levels are expected to shrink even further in the Twenty-first Century.
Nevertheless, in Colorado alone, Reclamation has twenty-four projects - notable among these are the Uncompahgre Project of 1903, the Grand Valley Project of 1911, the Colorado-Big Thompson Project of 1937, the Wayne Aspinall Unit of the Colorado River Storage Project of 1956, and the Animas-La Plata Project, authorized in 1968 and currently under construction. In a normal year, Reclamation serves some 1.1 million acres of irrigated land in Colorado from a net water supply of well over 2 million acre-feet. In addition, Reclamation water serves more than 1,200,000 of Colorado's non-agricultural residents.
As we move into the Twenty-first Century in Colorado and the West, Reclamation is the largest single supplier of water and one of the largest suppliers of electricity in the region. Because of that, Reclamation undoubtedly will continue to be an important player as the drama that is Western water is played out on the stage of the arid West.
SELECTED BIBLIOGRAPHY, BUREAU OF RECLAMATION
Armstrong, Ellis L., ed. "Irrigation," Chapter in History of Public Works in the United States, 1776-1976. Chicago: American Public Works Association, 1976.
Cannon, Brian Q., Remaking the Agrarian Dream: New Deal Rural Resettlement in the Mountain West. Albuquerque: University of New Mexico Press, 1996.
__________. "'We Are Now Entering a New Era': Federal Reclamation and the Fact Finding Commission of 1923-1924." Pacific Historical Review 66 (May 1997): 185-211.
Dawdy, Doris Ostrander. Congress in Its Wisdom: The Bureau of Reclamation and the Public Interest. Boulder, San Francisco, London: Westview Press, 1989.
Dean, Robert. "'Dam Building Still Had Some Magic Then': Stewart Udall, the Central Arizona Project, and the Evolution of the Pacific Southwest Water Plan, 1963-1968." Pacific Historical Review 66 (Feb 1997): 81-98.
Dunar, Andrew J. and Dennis McBride, Building Hoover Dam: An Oral History of the Great Depression. New York: Twayne Publishers, 1993.
Gottlieb, Robert, and Margaret FitzSimmons. Thirst for Growth: Water Agencies as Hidden Government in California. Tucson: University of Arizona Press, 1991.
Gressley, Gene M. "Arthur Powell Davis, Reclamation, and the West." Agricultural History 42 (July 1968): 241-57.
Harvey, Mark W. T. A Symbol of Wilderness: Echo Park and the American Conservation Movement. Albuquerque: University of New Mexico Press, 1994.
Hess, Jeffrey A. "A Mile High in the Mountains: The Planning, Design, and Construction of Deadwood Dam." Idaho Yesterdays 36 (Fall 1993): 2-17.
__________. "Inventions and Patents for the Public Good: The Needle-Valve Program of the Bureau of Reclamation." Journal of the Society For Industrial Archeology 22 (1996): 35-51.
Hundley, Norris, Jr. Dividing the Waters: A Century of Controversy between the United States and Mexico. Berkeley and Los Angeles: University of California Press, 1966.
__________. The Great Thirst: Californians and Water, 1770s-1990s. Berkeley, Los Angeles, Oxford: University of California Press, 1992.
Jackson, Donald C. "Engineering in the Progressive Era: A New Look at Frederick Haynes Newell and the U. S. Reclamation Service." Technology and Culture 34 (July 1993): 539-74.
Johnson, Rich. The Central Arizona Project, 1918-1968. Tucson: The University of Arizona Press, 1977.
Kluger, James R. Turning Water with a Shovel: The Career of Elwood Mead. Albuquerque: University of New Mexico Press, 1992.
Kollgaard, Eric B., and Wallace L. Chadwick, eds. Development of Dam Engineering in the United States. New York: Pergamon Press, 1988.
Martin, Russell. A Story that Stands Like a Dam: Glen Canyon and the Struggle for the Soul of the West. New York: Henry Holt and Company, 1989.
McCool, Daniel. Command of the Waters: Iron Triangles, Federal Water Development, and Indian Water. Berkeley: University of California Press, 1987.
Miller, M. Catherine. Flooding the Courtrooms: Law and Water in the Far West. Lincoln, London: University of Nebraska Press, 1993.
Morgan, Robert M. Water and the Land: A History of American Irrigation. Fairfax, VA: The Irrigation Association, 1993.
Pisani, Donald J. "Conflict Over Conservation: The Reclamation Service and the Tahoe Contract." Western Historical Quarterly 10 (April 1979): 167-190.
__________. From the Family Farm to Agribusiness: The Irrigation Crusade in California and the West, 1850-1931. Berkeley, Los Angeles, London: University of California Press, 1984.
__________. To Reclaim a Divided West: Water, Law, and Public Policy, 1848-1902. Albuquerque: University of New Mexico Press, 1992.
__________. Water, Land, and Law in the West. Lawrence: University of Kansas Press, 1996.
Pitzer, Paul C. Grand Coulee: Harnessing a Dream. Pullman: Washington State University Press, 1994.
Reisner, Marc P. Cadillac Desert: The American West and Its Disappearing Water. New York: Viking, 1986.
__________, and Sarah Bates. Overtapped Oasis: Reform or Revolution for Western Waters. Washington, D.C.: Island Press, 1990.
Robinson, Michael C. Water for the West: The Bureau of Reclamation, 1902-1977. Chicago: Public Works Historical Society, 1979.
Rowley, William D. Reclaiming the Arid West: The Career of Francis G. Newlands. Bloomington, Indianapolis: Indiana University Press, 1996.
Smith, Karen L. The Magnificent Experiment: Building the Salt River Reclamation Project, 1890-1917. Tucson: The University of Arizona Press, 1986.
Terrell, John Upton. War for the Colorado River: The California-Arizona Controversy. 2 volumes. Glendale, California: The Arthur H. Clark Company, 1965.
Tyler, Daniel. The Last Water Hole in the West: The Colorado-Big Thompson Project and the Northern Colorado Water Conservancy District. Niwot: University Press of Colorado, 1992.
Walton, John. Western Times and Western Wars: State, Culture, and Rebellion in California. Berkeley, Los Angeles, Oxford: University of California Press, 1992.
Warne, William E. The Bureau of Reclamation. Praeger Publishers, Inc., 1973; reprint, Boulder, Colorado, and London: Westview Press, 1985.
Wilkinson, Charles F. Crossing the Next Meridian: Land, Water, and the Future of the West. Washington, D.C.: Island Press, 1992.
Worster, Donald. Rivers of Empire: Water, Aridity, and the Growth of the American West. New York: Pantheon Books, 1985.
Revised May 2000.
THE COLORADO RIVER STORAGE PROJECT
IN THE 21ST CENTURY
by Randall Peterson, Manager
Adaptive Management and Environmental Resources Division
(also Program Manager, Glen Canyon Dam Adaptive Management Program)
Upper Colorado Region, Bureau of Reclamation
It seems such a simple question: Why have dams on the Colorado River? They are viewed by some as life-givers, and by others as intruders. Some perceive that we can't live without them; others perceive that we have somehow outgrown them, their necessity faded away. The past debated their existence. The present debates their operation: how to divide the surplus between traditional water and power benefits and instream flows. Like most societal issues, there can be no segregation of humans, their values, and their surroundings. As the West continues to press the boundaries of population growth, the future will debate our use of limited resources, particularly water. We will have to address the hard questions of why, how, and what's next.
There can be no getting around it: we live in a desert. It took early settlers just one year to realize that this wasn't Ohio. Streams dried to a trickle. It would take some type of water storage to supply human needs during the parched summers. Early attempts were humorous; buckets, vats, and tubs were pressed into service. For a settlement of just a few, small efforts might have worked. But for our current population, we speak in a language of water demands that the early settlers could never have understood. And the demands are still growing.
In the Colorado Basin, Congress provided the Boulder Canyon Project and the Colorado River Storage Project (CRSP) as water resources to satisfy these demands - about 30 million acre-feet of storage in the Upper and Lower Basins combined. For the Lower Basin, the purpose was storage delivered directly to the thirsty states of Arizona, Nevada, and California.
But upstream the purpose seems less clear. In truth, CRSP was a giant exchange agreement. Compact and potential treaty requirements would be delivered from the lower end of the Upper Basin, while depletions were allowed to develop upstream. Absent the storage to fulfill our Lower Basin commitments, upstream users would be forced to abandon, as did the Anasazi, their water use during cyclic periods of drought. With CRSP, those threats were subdued. The Colorado is a system of extremes, with annual flows varying historically by a factor of five. Reservoirs smooth the extremes, and society has benefited from this certainty.
So the answer to "Why?" is simple: CRSP exists because we have chosen to live in this part of the West. Absent our existence in this basin, there would be no need for reservoir storage. We could point to others and their excessive water demands, but in truth the answer to "Why?" will be found in the mirror.
Not only was CRSP designed to provide water; it also was a power generation project. Revenues from the sale of power not only were to repay the construction costs of the project (with interest), but also provided financial assistance for the development of irrigation projects in the basin. The irrigation subsidies designed to support farmers and keep food prices competitive came not from the federal government, but from the basin's power users. Initially, the projected power rates to accomplish all this were higher than the open market, and non-profit public power municipalities took some risk in signing contracts for CRSP power. In recent years this situation has reversed, and public power customers now enjoy CRSP rates lower than the open market.
The development and financing scheme developed during the 1950s has worked flawlessly. Much of the original construction cost has been repaid, and numerous water development projects are providing upstream water supplies. What wasn't completely foreseen was the change in society's expectations or the resource implications of constructing CRSP. River restoration and endangered species are now part of the demands that are placed on the reservoir system, necessitated by human demands on the water resources of the West.
The regulating nature of reservoirs reduced sediment load, spring peak flows and river temperatures, while increasing base flows during the summer, fall and winter months. The natural functioning of watersheds and river systems has been altered, with declining native species the result.
It seems fair to ask the value of these natural resources; indeed, this question often frames the debate over the Endangered Species Act. What is sometimes lost in the debate is the recognition that there is something about the Intermountain West that either drew us away from, or keeps us out of, either coastal metropolis. We choose to live here. There is a premium that we place on the quality of life in the Colorado Basin. That premium is the currency that bridges human demands and human surroundings.
It's no surprise that there is a multitude of beliefs and positions on this issue, but perhaps it will be a surprise how we address these differences of opinion in the future. One emerging technique that may assist in this discussion is adaptive management. Adaptive management can be viewed as an admission of incomplete knowledge, which leads us to experiment to find solutions to current challenges. This incompleteness results from the extraordinary complexity of both ecosystems and our relationship to them. When CRSP is viewed through this filter, the debates over operational issues can change from polarization to solution-finding. It is inaccurate to assume that the only possible solutions are ones that produce winners and losers. Clearly we stand at a point in time when the possible universe of solutions has been only partially explored.
Future exploration depends on commitments to scientific rigor, respect for all needs, and a willingness to try. Litigation seems a failure of all three. The greatest creativity we can muster will be required, nurtured by trust. CRSP and its original purposes will continue to endure, but it will adapt as water use pressures continue to increase. That adaptation will bear the same marks of ingenuity as the early settlers, who not surprisingly were drawn here by the quality of life. Surely, that deserves our best efforts.
WHAT WILL BE THE NATURE OF PUBLIC RECLAMATION WORK
IN COLORADO'S FUTURE?
Carol DeAngelis, Area Manager
Colorado West Area, Bureau of Reclamation
Grand Junction, Colorado
Thank you for the opportunity to speak. For those of you who do not know me, I am Carol DeAngelis, the Area Manager of Reclamation's Western Colorado Area Office, with offices in Grand Junction and Durango. We oversee projects in western Colorado, northwestern New Mexico, and northeastern Arizona.
First I'd like to read you a short quote from a June 17, 2002 Sacramento Bee Editorial "What's Left to Reclaim? Bureau of Reclamation must reassess its mission."
"As this proud agency celebrates its centennial, its role for the next 100 years isn't as clear as the first. Is it still the master plumber and dam builder? Or is it the diplomat that solves water conflicts between people and fish? Or is it the rebuilder of habitat, the demolisher of dams, the conservationist demanding water meters?
The correct answer is likely to vary state by state, watershed by watershed, tributary by tributary. If the Bureau continually reassesses its role to match the challenge of the moment, it could loom larger in its second century than in its first. If the bureau stands still, paralyzed by a revolving door inside the bureaucracy and the partisan politics of Washington, it diminishes into an agency that operates some valves. That would be a shame."
I began with this quote because I think it is thought-provoking and its conclusions are accurate. I believe Reclamation's role in the future will be similar to our role in the past. This may come as a surprise to you, but here's why I think it's true.
Throughout this conference, you have heard our history. Our original mission was to reclaim the west. If you look at a map with all of Reclamation's projects on it, you will find that these projects are at the heart of most cities in the west. Our water projects allowed settlers to inhabit what otherwise would have been uninhabitable areas; getting water where it was needed, when it was needed.
Throughout our history, Reclamation has changed with the public's needs and desires. We were originally authorized to build irrigation projects to settle the West. Then small towns began to grow up around our facilities, and we were further authorized to deliver domestic and industrial water to these towns. As the towns grew, our authorities continued to grow: we were authorized for flood control purposes to protect the citizens, for hydropower purposes to generate energy, and for fish and wildlife protection and recreation development, as these issues became more important to the public.
Many people would say we've accomplished our mission. The west is developing so fast that there doesn't appear to be enough water for the growth we are experiencing. Especially during this time of drought, there appears to be a need for new water projects, enlarged projects, new ways to use water more than once, water banking, water transfers, water conservation and management, and drought planning. Reclamation is still involved in all of these issues.
As we have changed to meet the needs of the public in the past, we will continue to change in the future. We are currently working together with our partners to ensure that we can recover endangered fish and continue water development in several basins. We continue to protect the original purposes of our facilities while complying with laws that were passed long after our facilities were built.
But we cannot do it alone. Our partnerships with many of you are the key to the innovative solutions that we all share. We have been involved in several innovative partnerships during this drought. This is what Secretary Norton likes to call the 4 C's - conservation through communication, cooperation, and consultation. We currently operate in this fashion and will continue in the future.
Reclamation's role in our existing projects will continue as we ensure our dams are safe and are operated and maintained as intended. We have had a heightened awareness of security at all of our facilities since Sept. 11. Again, we continue to do our jobs while taking on new or changing roles.
We will continue to ensure delivery of water and power benefits consistent with environmental and other requirements. We will continue to honor state water rights, interstate compacts, and contracts with our users. In the future, we will continue to play an important role in meeting increasing demands for finite water resources and enhance effectiveness in addressing complex water management issues in the West.
Specifically with regard to the role of Reclamation in Colorado in the future, the first and most important thing that would have to happen is that the State would have to request our involvement. If that request is for an action that we already have the authority to do, like providing funding for studies under the Technical Assistance to States program, contracting for Reclamation to do design work, participating in a demonstration program, or providing assistance in any of our existing programs, it should be a simple matter of working out the details and providing the assistance. If the request is for feasibility studies or implementation of a large project or environmental assistance not related to one of our projects, new authority would be needed from Congress.
Many people do not realize that the Bureau of Reclamation has only the authority to do what Congress specifically tells us to do, through individual laws. The Bureau of Reclamation does not have an organic act that gives us broad powers like some other agencies. In order for us to build every existing project, Congress had to pass a specific law. So, depending on what the request from the state may turn out to be, Reclamation would need to make sure that the authority already existed for us to do the work, or the State and other project sponsors would have to go to Congress to get us the authority to do the work.
In summary, I'd like to answer a few questions that I will pose. Have we accomplished our original mission? Yes. Is there more that we can do? Yes. Is the Bureau of Reclamation the right organization to do it? We are ready to use our expertise in any way we can, when requested and when authorized by Congress.
Rita Schmidt Sudman
What is the Foundation?
This year the Water Education Foundation is celebrating its 25th anniversary as a nonprofit, impartial, tax-exempt organization. Its mission is to create a better understanding of Western water issues and help resolve water resource problems through educational programs. A 25 member Board of Directors representing a true cross-section of the water issue community: environmental, business, agricultural and public interest communities and a variety of public agencies, private foundations and stakeholder groups, sets general policy goals for the Foundation. A staff of ten develops and maintains an extensive menu of educational products: 3-day water tours, conferences and briefings, television documentaries and educational videos, school curricula, and a wide range of publications, including the well-known Western Water magazine.
Educating key policy-makers and members of the public and bringing stakeholders together are the main elements of the Foundation's program to improve water management in California and the Southwestern states.
Earning our Reputation
For many years major authorities - including the press - have recognized the Foundation for publishing factual information on Western water issues. This is the most important part of the Foundation's education program and the basis for our reputation.
The Foundation has become the leading disseminator of impartial, timely, balanced and easy to understand educational materials about water issues in California and the Western states. Such materials are especially critical today as the region faces the twin pressures of continued economic growth and the desire to preserve and protect the environment.
Experts and stakeholders review material produced by the Foundation in draft form. Comments that provide factual information are accepted by the Foundation. This thoughtful review process has increased the shelf life and accuracy of the published information.
The Foundation focuses its educational efforts on three main audiences:
- Policy-makers in government, and leading stakeholders in the agricultural, environmental and urban water communities;
- Members of the media, who assist our efforts to educate the general public; and
- School children - and their families - in grades K-14 (kindergarten through college sophomores). Recognizing the need to educate students about water quantity and water quality issues, the Foundation has created games to explore the types of common activities that contribute to pollution.
The Foundation's primary objective in all of these efforts is not to advance one particular viewpoint or solution, but to explain the complexities of various opinions and ideas so that people can make better-informed decisions.
Funding our program
In the early days of the Foundation, funding sources were small contributions, subscriptions to Western Water magazine and briefings. As a way of maintaining its highly valued independence, the small Foundation staff began to develop products, including many publications, to help fund programs. Staff then added water tours and school programs. The Foundation is a leading disseminator of the Project WET (Water Education for Teachers) program. Teachers reaching over half a million students have been trained in the use of the Foundation's school programs.
As its reputation grew, the Foundation was able to begin receiving state, federal, local and private grants. In 2001 the breakdown of the sources of revenue for the Foundation was: 40% grants, 22% contributions, 16% tours and briefings, 13% product sales, and 1% interest. The Foundation's budget is about $1.7 million. Western Water magazine now goes to about 17,000 people.
About 10 years after the Foundation was established, the Foundation's impartial reputation was strong enough to satisfy PBS stations throughout the country and public television documentaries on water were added to the Foundation's program. (In 2002, the Foundation won a regional Emmy for Fate of the Jewel - a documentary on Lake Tahoe hosted by actor Bruce Dern. These public programs are seen by millions of people all over the country.) A popular Water Leaders program to mentor young professionals was added to the program five years ago.
Expanding our program
In 1995 the Foundation began a focus on the states and interests that share the Colorado River. Since that time the Foundation has held three invitation-only stakeholder symposiums and published and widely disseminated the proceedings of these symposiums. River Report, a biannual newsletter on Colorado River issues, and other specific Colorado River products have been published and sent to about 3,000 stakeholders. A bi-national conference proceeding on Mexican-U.S. border issues was edited by Foundation staff and is currently being published by the Foundation.
Final Thoughts: Making a difference
With water at a premium in the West, there is increasing interest in better coordinating the use of surface water and groundwater to further stretch the total supply of good quality water.
Although the impacts of Foundation programs are sometimes difficult to quantify, the Foundation can tell through attendance at conferences and tours, the sale of its low-cost materials, and the letters and phone calls received that it has played an important role in helping people better understand the complexities of water resources. The Foundation's success also has been recognized through state and national awards and the partnerships forged with stakeholders on all sides of these issues. The Foundation has changed the very nature of how the main competing stakeholder groups communicate. Many have thanked the Foundation for helping them better understand other points of view and recognize important areas where there are common goals. Formal partnerships between these groups and informal exchanges of ideas have often resulted.
The lessons learned by the Foundation could be valuable for governmental and nongovernmental organizations working in other parts of the U.S. and the world. The core principles of honest reporting, education and responsiveness to key audiences' search for knowledge and information address a universal need.
The Honorable Greg Hobbs, Jr.,
Justice, Colorado Supreme Court
The Hon. Gregory Hobbs, in his closing presentation to the workshop, provided the following ten observations about the history and experience of men and women in the Americas regarding water:
(1) Water is a public resource. Speculation and waste at the expense of community deserve no respect;
(2) The construction and use of waterworks is a required adaptation to living in the Americas. Always has been, always will be;
(3) The role of law in water resource policy is to allocate and administer the water by means of a fair system that promotes water planning and serves human and environmental needs;
(4) Public debate about water law and policy must be free and open. The rights of individuals and the community must be respected in the discussion. The discussion must be reflected in decisions that are implemented certainly and have flexibility for further adaptation, based on experience;
(5) At its core, prior appropriation is one of the most fundamental adaptations humans have made to living in the Americas. Prior appropriation is a drought-planning system. By study of the historic water data, planners and decision makers can determine what is available to a proposed community need, taking into account the use of others who have established their uses previously;
(6) In the third year of a drought, the summer of 2002 demonstrates how reservoirs are fundamental to life in the West. Saving in the ample time for the lean time is civilization at its best and most necessary. When the snow pack diminishes and storage water is not available to be released into the streams, so that water might run through the river channels to its place of use, humans and the environment suffer greatly;
(7) Our over-appropriated western and Colorado watersheds reveal the limits of our settlement. Now we must live with settling in. Local and state governments in all land use decisions must consider water use and its efficient availability. If not, the people will hold officials accountable for default in their elected and appointed community roles;
(8) We must allow our water officials to make sound decisions that involve curtailment of uses in priority and that forward efficiency of use. A system of fair allocation demands fair enforcement and respect for the enforcers;
(9) We must allow the market to function to redistribute water. We must employ reservoirs, including the storage opportunities available in our groundwater systems. We must negotiate and reach agreements that make Colorado's interstate water allocations available to as many needs for as many benefits, locally and statewide in Colorado, as possible. Ducks and people need water;
(10) We must pray for the blessing of insight, patience, and common sense-for what we must and must not do-as individuals in community. In scarcity is the opportunity for community. Civilized sacrifice is a sacrament.
Arsenic: A Tragedy for Millions
Arsenic poisoning has emerged as a fresh blow to Bangladesh, a country of 130 million people known as a land of frequent natural calamities. Recent surveys showed that about 80 million people of the country are living under the risk of Arsenic poisoning, as the groundwater of a vast region is contaminated with Arsenic. The Arsenic pollution is not only causing serious health hazards to the people, but also affecting the environment and creating social problems.
Arsenic poisoning was first detected in Bangladesh in 1993 by the Department of Public Health Engineering (DPHE). But the fact remained behind the screen till 1996. According to the latest surveys, conducted at both Government and non-Government levels, at least 53 out of the total 64 districts of Bangladesh are affected with Arsenic pollution.
More than 2 million Tube-wells are presently being used as the source of drinking water in Bangladesh. Out of those, only 50,000 have so far been brought under investigation by various Government and non-Government agencies. The rest are still beyond the survey. The actual picture of the severity of Arsenic pollution is yet to be revealed as the entire country could not be surveyed till now.
The Arsenic poisoning has mainly been detected through testing samples of tube-well water and human tissues: hair, nail, skin and urine. Regular intake of Arsenic at a high level through food and drinking water causes various diseases, especially skin diseases. Arsenic causes both physical and intellectual damage to human beings.
The World Health Organization (WHO) has fixed a recommended value of 0.01 milligrams of Arsenic per liter of water. It has also set a maximum permissible limit of 0.05 mg/l. Under the survey conducted by the Dhaka Community Hospital (DCH) and the School of Environmental Studies (SOES), tube-well water from suspected areas of Bangladesh was tested in the laboratory. It is a matter of great concern that in many cases the Arsenic concentration in the groundwater crosses the WHO recommended value and the maximum permissible limit.
The West Bengal State of India, which lies along Bangladesh's west and north borders, is also an Arsenic-affected zone because of the geological similarity. But the situation in Bangladesh is more alarming compared to West Bengal, according to experts. In West Bengal, SOES tested water samples of 40,000 tube-wells in the affected area and found 58 tube-wells containing above 1.0 milligram of Arsenic per liter of water. On the other hand, some 6,101 tube-wells were examined in the affected areas of Bangladesh by DCH-SOES, and 75 tube-wells were found with such a high level of Arsenic concentration. Luxmipur, Nawabganj and Faridpur districts were identified as the most affected areas of the country, where a large number of people have already been affected with various diseases caused by Arsenic poisoning.
The Arsenic pollution has been creating serious social problems for the affected people. They virtually become isolated from the society, as nobody wants to keep any social contact with them. Nobody wants to marry an Arsenic-affected man or woman. Some affected housewives were even divorced by their husbands. Affected school children become victims of avoidance by their teachers and classmates and are not allowed to attend their classes. Due to ignorance, the villagers consider Arsenical diseases the curse of nature. They do not allow the Arsenic patients at social functions. The Arsenic-affected villages also become isolated zones.
After conducting extensive surveys and research in the affected areas, experts suggested undertaking a motivational awareness-building program on Arsenic pollution, reducing the use of groundwater for drinking purposes and increasing the use of safe surface water to avert diseases caused by Arsenic poisoning. They identified indiscriminate withdrawal of groundwater as one of the major causes of Arsenic pollution and suggested finding alternative sources of safe drinking water. Regular testing of tube-well water at intervals and examination of suspected patients in the affected areas are also included in their suggestions. A concerted government-NGO effort is essential to combat the problem, they observe. Moreover, Bangladesh does not have enough resources to implement such a huge task. The country does not have any modern laboratory capable of testing water and examining the samples collected from Arsenic patients. Without continued assistance from the donor community and international organizations, it is impossible for the country to resolve the problem alone.
Arsenic pollution is now considered a great threat to the future generation of the country. Bangladesh has emerged as the most vulnerable place with regard to Arsenic pollution, as the extent and spread of the problem have taken a serious turn. We have already become the victims of Arsenic poisoning and are pushing our next generation into a more dangerous situation. So, this is high time to be aware of the problem and take steps to combat the spread of Arsenic pollution. Otherwise, nothing could stop this silent killer.
Arsenic affected Areas of Bangladesh
Source: Dainichi Consultant Inc, Japan
Some 80 million people of Bangladesh are now at risk of Arsenic contamination. This was revealed by the latest survey jointly conducted by DCH and SOES. After analyzing the data, experts opined that the groundwater Arsenic contamination and the suffering of the people in Bangladesh may be the biggest Arsenic calamity in the world.
The survey was carried out in 230 thanas in 64 out of the 64 districts of the country. Arsenic at an alarming level was found in the water of 54 districts. Some 80 million people reside in these districts, which have a total area of 97,390 square kilometers. Perhaps not all of the 80 million people are drinking Arsenic-contaminated water regularly, but it can be said that they are always at risk of being affected with Arsenic. Arsenic contamination in groundwater has been found in various countries of the world. But there is no instance of such a huge population facing the risk of Arsenic pollution.
Till September 2000, water samples from 30,000 tube-wells, 7,000 human tissue samples (hair, nail) and urine samples from 2,830 people were tested in the laboratory under the DCH-SOES survey. The presence of high Arsenic was detected in 55 percent of the water samples and 94 percent of the urine samples. This reveals a disastrous situation regarding Arsenic pollution in Bangladesh.
Some 210 samples of skin-scales were also tested in the SOES laboratory. As there is no fixed recommended value of Arsenic for skin-scales, it is difficult to say anything about the nature of Arsenic poisoning on the human body through the skin.
It should be mentioned here that four districts are still outside the survey on Arsenic pollution. These are Rangamati, Bandarban, Khagrachhari and Cox's Bazar.
The WHO recommended value of Arsenic concentration in water is 0.01 mg/l, while the maximum permissible limit for Bangladesh and India has been fixed at 0.05 mg/l. In the DCH-SOES survey, Arsenic concentration of less than 0.01 mg/l was detected in 46 percent (2,803 out of 6,101) of the water samples, while the remaining 54 percent (3,298 samples) were above the WHO recommended value. On the other hand, Arsenic concentration at less than the maximum permissible limit of 0.05 mg/l was detected in 62 percent (3,783 samples), while the remaining 38 percent (2,318 water samples) were above the limit.
Arsenic concentration at a higher level than the WHO recommended value was found in the tube-wells of 53 districts out of the 64 surveyed. Of these, the level of Arsenic presence exceeds the maximum permissible limit in the tube-well water of 41 districts. In 11 districts, the level of Arsenic concentration was found to be more than the WHO recommended value but less than the maximum permissible limit; that is, at most 0.05 mg/l Arsenic exists in the tube-well water of these districts. These areas can be considered moderately safe. These 11 districts are Kurigram, Lalmonirhat, Rangpur, Bogra, Dhaka, Joypurhat, Gazipur, Borguna, Bhola, Sylhet and Habiganj. A total of 695 tube-wells were brought under investigation in these districts. Less than 0.01 mg/l Arsenic was found in 582 tube-wells (84 percent). The level of Arsenic concentration between 0.01 mg/l and 0.05 mg/l was found in the rest, 113 tube-wells (16 percent).
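The three-tier classification used throughout the survey can be sketched in a few lines of code. This is a minimal illustration, not part of the original survey methodology; the sample readings below are hypothetical, and only the two thresholds (the WHO recommended value of 0.01 mg/l and the maximum permissible limit of 0.05 mg/l) come from the text above.

```python
# Thresholds from the survey discussion (mg/l).
WHO_RECOMMENDED = 0.01   # WHO recommended value
MAX_PERMISSIBLE = 0.05   # maximum permissible limit for Bangladesh and India

def classify_sample(arsenic_mg_per_l):
    """Classify a tube-well water sample by its Arsenic concentration."""
    if arsenic_mg_per_l < WHO_RECOMMENDED:
        return "safe"             # below the WHO recommended value
    elif arsenic_mg_per_l <= MAX_PERMISSIBLE:
        return "moderately safe"  # between the two limits
    else:
        return "dangerous"        # above the maximum permissible limit

# Hypothetical sample readings (mg/l), tallied per category.
readings = [0.005, 0.02, 0.05, 0.12, 1.1]
counts = {}
for r in readings:
    label = classify_sample(r)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'safe': 1, 'moderately safe': 2, 'dangerous': 2}
```

The same thresholds applied per district (rather than per sample) reproduce the survey's grouping of districts into safe, moderately safe, and dangerous zones.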
Besides, after testing the water of 328 tube-wells in 8 districts, the survey did not find alarming levels of Arsenic: less than 0.01 mg/l Arsenic was found in these samples. These districts can be considered completely safe from Arsenic pollution. These districts are Panchagar, Dinajpur, Gaibandha, Naogaon, Patuakhali and Moulavibazar.
A dangerous level (above 0.05 mg/l) of Arsenic was found in the water of 41 districts. These are: Nawabganj, Rajshahi, Pabna, Kushtia, Meherpur, Chuadanga, Jhenidah, Jessore, Sathkhira, Khulna, Bagerhat, Pirojpur, Rajbari, Magura, Chandpur, Noakhali, Luxmipur, Madaripur, Shariatpur, Narail, Barishal, Jhalakati, Gopalganj, Natore, Comilla, Manikganj, Munshiganj, Feni, Narsingdi, Chittagong, Sherpur, Netrokona, Mymensingh, Jamalpur, Tangail, Kishoreganj, Sunamganj, Sirajganj and Brahmanbaria.
The water of 5,036 tube-wells was tested in these 41 districts. Of those, 38 percent (1,893 tube-wells) were found to have Arsenic at less than the WHO recommended value. Arsenic at a level of up to 0.049 mg/l, i.e. less than the maximum permissible limit, was found in 55 percent (2,760 samples). And the rest, 2,286 samples (45 percent), were found to have above 0.05 mg/l Arsenic.
The survey in these 41 districts revealed a more dangerous fact: the presence of very high levels of Arsenic in the polluted water. The detected Arsenic level ranged from 0.01 mg/l to 1.0 mg/l, or even more, in 1,743 samples. Arsenic concentration of more than 1.0 mg/l was found in 75 samples. Such a high concentration was not found even in the worst affected districts of West Bengal.
According to the World Health Organization, more than 1.0 mg/l Arsenic in water may create a disastrous situation. This concentration is 100 times higher than the WHO recommended value and 20 times higher than the maximum permissible limit.
Arsenic at a high level was found not only in the water samples but also in 89 percent of the total 1,758 hair samples tested in these districts. The normal amount of Arsenic in hair is less than 1 mg/kg. Of 1,760 nail samples tested, 98 percent were found to contain Arsenic above the normal value. The normal Arsenic content in nails is 0.43-1.08 mg/kg.
A high level of Arsenic presence was also found in 95 percent of the total 830 urine samples tested. The normal level of Arsenic presence in urine is between 0.01 mg/l and 0.05 mg/l. There is no recommended value of Arsenic in skin-scales, but while testing 210 skin samples, the DCH-SOES survey detected on average 7.41 mg/kg Arsenic.
Out of the 41 districts where Arsenic has been found above 0.05 mg/l, the DCH and SOES have so far surveyed 22 districts for Arsenic patients. In 21 districts, they identified people suffering from Arsenic-induced skin lesions such as melanosis, leuco-melanosis, keratosis, hyper-keratosis of the dorsum, non-pitting oedema, gangrene and skin cancer. During the preliminary field survey conducted over the last one and a half years in 96 groundwater Arsenic-contaminated villages in 44 thanas of 22 districts, they found Arsenic patients in 93 villages in 21 districts. They examined 5,664 people at random, including children, and of them, 33.6 percent were found to have Arsenical skin lesions.
Statistics of Arsenic Calamity
The Arsenic contamination in the groundwater of Bangladesh is not only a cause of serious health hazards for the people, but also the source of a widespread social problem. The rural people, due to their ignorance, superstitions and lack of information, consider the diseases caused by Arsenic a "Gazab" of Allah or a curse of nature. They maintain a safe distance from the Arsenic-affected people, as they think that the disease is like leprosy or another contagious disease. For this reason, the villages affected with Arsenic contamination have become almost isolated from the others.
Nobody wants to come into contact with the Arsenic-affected people. The affected people are barred from coming out of their houses. One such victim, Narayan Shill of Faridpur district, has become isolated from others, as all his social activities have virtually come to an end. For the last 15 years, he has been suffering from Arsenicosis. His hands and feet are full of ulcers. Some of his toes were amputated due to gangrene caused by Arsenic poisoning. Narayan Shill cannot move freely in his village, as the villagers do not allow him to enter social places. He does not even have the right to enter the tea stall in the village. If he wants to take tea, he has to bring a cup of his own and take the tea to his home; he cannot sit there with other people. Two young daughters of Narayan Shill are also affected with Arsenic. Thus, the whole Shill family has become separated from the society. The villagers think that all of them were attacked by leprosy or something like that.
Narayan Shill did not commit any offence, but he has fallen victim to isolation from the society, like other Arsenic patients in the country. Neighbors do not even allow the Arsenic patients to use the water of their tube-wells. The affected children are barred from entering their schools. The adults are not allowed to move freely at markets, workplaces, or even at the chambers of doctors. With the deterioration of their condition, the patients gradually lose their ability to work, and thus they fall into poverty.
Arsenic pollution creates serious social problems in family relations in the rural areas. It is difficult to arrange a marriage for a young girl affected by Arsenic. Some affected housewives are divorced by their husbands, and even forcibly sent to their paternal homes with their children.
Most of the people of the village Samta in Jessore district are affected by Arsenicosis. The village has now turned into an isolated place. Nobody wants to marry any man or woman of this village. A large number of families were broken up due to Arsenic poisoning. Nobody tries to realize the fact that the disease is not caused by nature; this is nothing but a man-made disaster.
A lot of examples may be mentioned here of the social problems created by Arsenic contamination in water. Anjuara Khatun of the village Khokshapur in Kushtia district is concerned about her skin condition, as it is becoming rough like the skin of a snake. She feels embarrassed while mixing in society and tries to keep herself away from others.
The husband of Sajeda of the village Khimirdiar in the same district wanted to divorce her, as her skin was pigmented from the effect of Arsenic poisoning. But ironically her husband was also affected by Arsenic, and their family survived a possible break-up. Ambia, Shefali, Rupban, Rupali, Fuljan, Shilpi, Biliis and some other girls of the same village were affected with Arsenic. Their parents or guardians are concerned about their marriage and future.
Generally, the women of Bangladesh are a backward class of the society and victims of discrimination. Moreover, the Arsenic poisoning has emerged as a major problem for them, creating social problems mainly for the rural women.
In fear of social problems, some affected people hesitate to disclose their disease; they do not even want to tell it to doctors. In some cases, the patients preferred to remain unidentified. The problem is more serious in the case of children. The entrance of affected children into schools becomes restricted. Some of them may at last get the opportunity to go to school, but they too fall victim to avoidance by their friends and classmates.
Due to illiteracy, superstitions and the lack of proper motivational programs at both government and non-governmental levels, the Arsenic problem has been creating a serious crisis in rural Bangladesh. The affected people are becoming the victims of cruelty in the society. If there had been a proper motivational campaign, the rural people would not have thought that Arsenic is a curse of nature or a contagious disease, as they think about leprosy. At the same time, such superstition would not have gripped the common villagers, and the Arsenic victims would not have had to face such a social crisis.
» Dhaka Community Hospital.
» Dainichi Consultant, Inc., Gifu, Japan
» A Preliminary Status Report on Arsenic Problems in Groundwater of Bangladesh, SOES, Jadavpur University, Calcutta, India, May 1996.
» The Arsenic Disaster and Dhaka Community Hospital, DCH Report to the seminar on "Arsenic Disaster in Bangladesh Environment", January 6, 1997
1634 – Massachusetts Bay colony annexed the Maine colony.
1775 – The Mecklenburg Resolves are allegedly adopted in the Province of North Carolina. The Mecklenburg Resolves, or Charlotte Town Resolves, was a list of statements drafted in the month following the fighting at Lexington and Concord. Similar lists of resolves were issued by other local colonial governments at that time, none of which called for independence from Great Britain.
1854 – Kansas-Nebraska Act was passed by U.S. Congress.
1861 – Gen. PGT Beauregard was given command of Confederate Alexandria Line.
1862 – Confederate forces strike Union troops in the Peninsular campaign. During May 1862, the Army of the Potomac, under the command of George B. McClellan, slowly advanced up the James Peninsula after sailing down the Chesapeake Bay by boat. Confederate commander Joseph Johnston had been cautiously backing his troops up the peninsula in the face of the larger Union force, giving ground until he was in the Richmond perimeter. When the Rebels had backed up to the capital, Johnston sought an opportunity to attack McClellan and halt his advance. That chance came when McClellan’s forces were straddling the Chickahominy River. The swampy ground around the river was difficult to maneuver, and the river was now a raging torrent from the spring rains. A major storm on May 31 threatened to cut the only bridge links between the two wings of the Union army. Johnston attacked one of McClellan’s corps south of the river on May 31 in a promising assault. The plan called for three divisions to hammer the Federal corps from three sides, but the inexperienced Confederates were delayed and confused. By the time the attack came, McClellan had time to muster reinforcements and drive the Rebels back. A Confederate attack the next day also produced no tangible results. The Yankees lost 5,000 casualties to the Rebels’ 6,000. But the battle had two important consequences. McClellan was horrified by the sight of his dead and wounded soldiers, and became much more cautious and timid in battle—actions that would eventually doom the campaign. And since Johnston was wounded during the battle’s first day, Robert E. Lee replaced him. Lee had been serving as Confederate President Jefferson Davis’ military advisor since his undistinguished service in western Virginia during the war’s first year. The history of the war in the eastern theater drastically changed as Lee ascended the ranks. His leadership and exploits soon became legend.
1863 – U.S.S. Carondelet, Lieutenant Murphy, patrolling the Mississippi River below Vicksburg, proceeded to Perkins Landing, Louisiana, where Army troops were found cut off from the Union headquarters. Murphy “shelled the woods and thus prevented the enemy from advancing and throwing an enfilading fire on the troops ashore,” while awaiting the arrival of a transport which could rescue the soldiers. As Forest Queen arrived and the Union troops began to board her, a large force of Confederates pressed an attack. Carondelet’s guns laid down a heavy fire, saving the troops and forcing the Southerners eventually to break off the assault. Carondelet remained at Perkins’ Landing after Forest Queen departed, saved those stores and material which it was possible to take on board, and destroyed the rest to prevent its capture by Confederates.
1863 – Rear Admiral Porter, accompanied by some of the fleet officers, went ashore, mounted horses and rode to Major General W. T. Sherman’s headquarters before Vicksburg. Sherman reported that the Admiral, referring to the loss of U.S.S. Cincinnati on 27 May, was “willing to lose all the boats if he could do any good.” Porter also volunteered to place a battery ashore. To that end, Lieutenant Commander Selfridge visited Sherman on the first of June and reported that he was prepared to land two 8-inch howitzers and to man and work them if the Army would haul the guns into position and build a parapet for them. On 5 June Selfridge told Porter that one gun was in position and “I shall have the other gun mounted tonight.” Frequent joint efforts of this nature hastened the end of Vicksburg.
1863 – U.S.S. Pawnee, Commander Balch, and U.S.S. E.B. Hale, Acting Lieutenant Edgar Brodhead, supported an Army reconnaissance to James Island, South Carolina, and covered the troop landing. Balch reported: “The landing was successfully accomplished and the reconnaissance made, our forces meeting with no opposition, and they were embarked at 9 a.m. and returned to their camps without a casualty of any kind.” Colonel Charles H. Simonton, CSA, commanding at James Island, warned: “This expedition of the enemy removes all [their] fear of our supposed batteries on the Stono, and no doubt we will have visits from them often.”
1864 – The Army of Northern Virginia under Robert E. Lee engages the Army of the Potomac under Ulysses S. Grant and George Meade.
1866 – In the Fenian Invasion of Canada, John O’Neill leads 850 Fenian raiders across the Niagara River at Buffalo, New York/Fort Erie, Ontario, as part of an effort to free Ireland from the United Kingdom. Canadian militia and British regulars repulse the invaders over the next three days, at a cost of 9 dead and 38 wounded to the Fenians’ 19 dead and about 17 wounded.
1868 – The 1st Memorial Day parade was held in Ironton, Ohio.
1894 – The US Senate passed a resolution encouraging Hawaii to establish its own form of government without interference from the US.
1900 – Sailors and Marines from USS Newark and USS Oregon arrive at Peking, China with other Sailors and Marines from Britain, France, Russia, Italy and Japan to protect U.S. and foreign diplomatic legations from the Boxers.
1913 – The 17th Amendment to the Constitution, providing for the popular election of U.S. senators, was declared in effect.
1921 – A major race riot broke out in Tulsa, Oklahoma. Greenwood, the black section of town, was burned. As many as 10,000 white men and boys attacked the black community, and 35 blocks of the black business district were burned with participation by police officers. The National Guard was called out and martial law enforced, but not until the day after the violence, when most of the conflict had already ended. Some 200-300 people were believed to have been killed.
1940 – President Roosevelt introduces a “billion-dollar defense program” which is designed to boost the United States military strength significantly.
1942 – In an attempt to reinforce the Pacific Fleet, battleships Colorado and Maryland sail from San Francisco.
1944 – USS England sank a record 6th Japanese submarine in 13 days.
1944 – The Canadian 1st Corps captures Frosinone; the British 10th Corps takes Sora. Around Anzio, forces of the US 6th Corps capture Velletri and Monte Artemiso while other elements attack Albano. The German loss of Velletri unhinges their defenses of the Caesar Line.
1944 – US forces reduce their perimeter near Arare. All the American beachheads on the north coast experience significant Japanese attacks. Meanwhile, to the east, Australian forces capture Bunabum.
1945 – On Okinawa, the US 6th Marine Division (part of US 3rd Amphibious Corps) encounters Japanese rearguards near Hill 46. Japanese forces pull out of Shuri.
1945 – On Negros, organized Japanese resistance ends. On Luzon, a regiment of the US 37th Division begins moving northward from Santa Fe through the Cagayan valley.
1947 – Authority of the U.S. Coast Guard for the establishment and disestablishment of prohibited, restricted, and anchorage areas, conferred by the Espionage Act (50 U.S.C. 191) and Proclamation No. 2412 of 27 June 1940 was terminated by Proclamation No. 2732, signed by the President on this date.
1948 – The Coast Guard assumed command of the former Navy base at Cape May, New Jersey, and formally established its east coast recruit training center there the next day.
1956 – In violation of the Geneva Agreements, the United States sends 350 additional military men designated Temporary Equipment Recovery Team (TERM) to Saigon under the pretext of helping to recover and redistribute equipment abandoned by the French. They will stay on as a permanent part of MAAG.
1959 – US Advisors are assigned to the regimental level of the South Vietnamese armed forces.
1959 – At the 15th plenum of the Central Committee, North Vietnam’s leaders decide to formally take control of the growing insurgency in the South. The tempo of war speeds up as more southern cadre members infiltrate back to the South along an improved Ho Chi Minh Trail. Although infiltration from the North began in 1955, not until 1959 does the CIA pick up evidence of large-scale infiltration. Hanoi’s decisions of this month along with the troop movements in preparation for an October offensive are viewed by intelligence in Washington as the beginnings of the North Vietnamese intervention.
1962 – Around 5,000 troops (including US Special Forces, or Green Berets) are serving in South Vietnam, and there are a total of 124 US aircraft including two USAF C-123 squadrons and four helicopter companies. The Communists are forming battalion-sized units in several parts of central Vietnam.
1965 – U.S. planes bomb an ammunition depot at Hoi Jan, west of Hanoi, and try again to drop the Thanh Hoa highway bridge. These raids were part of Operation Rolling Thunder, which had begun in March 1965. President Lyndon B. Johnson had ordered the sustained bombing of North Vietnam to interdict North Vietnamese transportation routes in the southern part of North Vietnam and slow infiltration of personnel and supplies into South Vietnam. In July 1966, Rolling Thunder was expanded to include North Vietnamese ammunition dumps and oil storage facilities as targets. In the spring of 1967, it was further expanded to include power plants, factories, and airfields in the Hanoi-Haiphong area. The White House closely controlled Operation Rolling Thunder, and President Johnson occasionally selected the targets himself. From 1965 to 1968, about 643,000 tons of bombs were dropped on North Vietnam. A total of nearly 900 U.S. aircraft were lost during Operation Rolling Thunder. The operation continued, with occasional suspensions, until President Johnson halted it on October 31, 1968, under increasing domestic political pressure.
1971 – In accordance with the Uniform Monday Holiday Act passed by the U.S. Congress in 1968, observation of Memorial Day occurs on the last Monday in May for the first time, rather than on the traditional Memorial Day of May 30.
1973 – The United States Senate votes to cut off funding for the bombing of Khmer Rouge targets within Cambodia, hastening the end of the Cambodian Civil War.
1988 – President Ronald Reagan ends his first trip to Moscow, and his fourth summit meeting with Soviet leader Mikhail Gorbachev, on notes of both frustration and triumph. Although there were no breakthroughs or agreements on substantive issues, the “Great Communicator,” as Reagan was known in the United States, was a hit with Soviet audiences. The May 1988 summit between Gorbachev and Reagan was billed as a celebratory follow-up to their breakthrough summit of October 1987. At that meeting in Washington, D.C., the two leaders had signed the groundbreaking Intermediate-Range Nuclear Forces (INF) Treaty, which eliminated an entire class of nuclear missiles from Europe. The May meeting, however, got off to a rocky start as Reagan lectured Gorbachev about the need to improve the Soviet Union’s human rights record. From that inauspicious start, the summit went downhill and ended with no further progress on arms control. Gorbachev’s frustration boiled over as he declared to Reagan, “Maybe now is again a time to bang our fists on the table” in order to hammer out an arms agreement. During his final day in Moscow, Reagan turned away from strictly political issues and spoke before a group of students and Russian intellectuals and then took a walking tour of some old churches. He praised Russian cultural achievements, particularly the nation’s great literary tradition and disarmed his audiences with his usual self-effacing humor. The May 1988 summit meeting was a victory of style over substance. Both Reagan and Gorbachev kept up positive fronts in their public statements, but in fact, the meeting had been a great disappointment for both sides. No further progress on arms limitation was made, and Reagan’s efforts to push the human rights issue met a frosty response from Gorbachev. The summit indicated that despite the progress made in improving U.S.-Soviet relations in the past years, serious differences still existed.
1988 – The first search and rescue agreement with the Soviet Union was signed at a summit in Moscow. The agreement set a general line, or boundary, separating SAR regions and provided for exchange visits to SAR coordination centers in both countries, joint SAR exercises, and regular communication checks.
1988 – The CGC Fir became the oldest cutter in commission after the CGC Ingham was decommissioned this day.
1990 – President Bush and his wife, Barbara, welcomed Soviet President Mikhail S. Gorbachev in a ceremony on South Lawn of the White House. The two leaders and their aides then held talks on German reunification.
1994 – The United States announced it was no longer aiming long-range nuclear missiles at targets in the former Soviet Union.
1995 – President Clinton declared he was ready to permit the temporary use of American ground forces in Bosnia to help UN peacekeepers move to safer positions if necessary.
1997 – Rose Will Monroe (76), WWII icon (Rosie the Riveter), died.
1999 – During a Memorial Day visit to Arlington National Cemetery, President Clinton asked Americans to reconsider their ambivalence about Kosovo, calling it “a very small province in a small country. But it is a big test of what we believe in.”
2000 – Iraqi Oil Minister Amir Muhammad Rashid states that Iraq does not intend to sign more contracts with foreign oil companies to develop its oilfields until contracts previously awarded are implemented.
2001 – Veteran FBI agent Robert Hanssen pleaded innocent to charges of spying for Moscow. He later changed his plea to guilty and was sentenced to life in prison.
2001 – The United States and Britain win Security Council approval of a one-month extension of the United Nations oil-for-food program. A vote on the new “smart sanctions” on Iraq proposed by the United States and Britain is delayed at least one month.
2002 – The US State Dept. urged some 60,000 Americans in India to leave over concerns of war between India and Pakistan.
2002 – Bulgaria signed an agreement with the US to destroy its Cold War-era missiles. The US planned to pay the costs of destruction.
2003 – Eric Rudolph, the longtime fugitive charged in the 1996 Olympic Park bombing and in attacks at an abortion clinic and a gay nightclub, was arrested in the mountains of North Carolina.
2003 – American forces arrested 15 members of Saddam Hussein’s banned Baath Party as they met at a police college in Baghdad.
2004 – U.S. troops clashed with Shiite militiamen in the holy city of Kufa for a second day in fighting that killed two Americans. In Baghdad, a car bomb exploded near the headquarters of the U.S. coalition, killing at least two people and injuring more than 20.
2005 – Vanity Fair reveals that former Federal Bureau of Investigation Associate Director Mark Felt was Deep Throat. Deep Throat is the pseudonym given to the secret informant who provided information to Bob Woodward and Carl Bernstein of The Washington Post in 1972 about the involvement of United States President Richard Nixon’s administration in what came to be known as the Watergate scandal.
2008 – STS-124, a Space Shuttle mission, flown by Space Shuttle Discovery to the International Space Station, is launched with a crew of seven and the main module of the Japanese laboratory Kibō.
2009 – CGC Boutwell arrived in the port of Tubruq, Libya, during her around-the-world cruise, becoming the first U.S. military ship to visit Libya in more than 40 years.
2012 – SpaceX’s unmanned Dragon capsule successfully returns to Earth following its demo mission to the International Space Station, landing intact in the Pacific Ocean. It is later recovered and shipped back to the United States.
2014 – Sergeant Bowe Bergdahl, previously the only United States military prisoner held captive in Afghanistan, is released in exchange for five Taliban prisoners held at Guantanamo Bay.
The primary strategic objective of the Union in the western theater of the
Civil War was to obtain full control of the entire course of the
Mississippi River, thus making it available for Northern commerce. Also,
Union control of the Mississippi would geographically cut the
Confederacy in two. By the winter of 1862-63, Union control had been
established as far south as Vicksburg, and as far north as Baton Rouge.
However, the Confederacy had retained control of the Mississippi between
those points by holding powerful fortresses at Vicksburg and Port Hudson.
Lt. Gen. John C. Pemberton commanded the Confederate Department
of Mississippi and East Louisiana. Maj. Gen. Ulysses S. Grant commanded
the Union Army of the Tennessee. Both assumed command during October
1862 and both were West Pointers. Grant’s initial offensive to gain
control of the Mississippi using the railroads of western Mississippi as
a main supply line failed on 20 December 1862 when Confederate cavalry
destroyed his base of supply. This forced Grant to return to Memphis,
and sealed the fate of Maj. Gen. William T. Sherman’s cooperating
amphibious expedition at Chickasaw Bayou on 27-29 December 1862. Early
in 1863, Grant moved the bulk of his army from Memphis to three camps in
Louisiana opposite Vicksburg: Lake Providence, Milliken’s Bend, and Young’s Point.
During a miserably wet winter, Grant’s attempts to
bypass Vicksburg by digging canals at Lake Providence, DeSoto Point, and
Duckport all failed. Other Bayou Expeditions also failed: The Yazoo Pass
Expedition at Fort Pemberton on 20 March, and the Steele’s Bayou
Expedition on Rolling Fork Creek in late March. The Vicksburg defenses
remained intact. However, Grant never lost sight of his objective:
"To secure footing upon dry ground on the east side of the river
from which the troops could operate against Vicksburg." On 31
March, Grant marched his army southward through Louisiana, corduroying
roads and building bridges as he went. He hoped to find a
lightly-defended point on the Mississippi shore south of Vicksburg.
Grant’s first plan was to cross the Mississippi
River at Confederate occupied Grand Gulf. At Grant’s request, on the
night of 16 April, Flag Officer David D. Porter ran the Vicksburg
batteries. Porter’s seven ironclads and four transports were to
provide gunnery support and transport for Grant’s troops. By 28 April,
the bulk of Grant’s army had assembled at Hard Times Plantation,
Louisiana, with plans to land at Grand Gulf, Mississippi. The next day, a determined effort
by Porter’s ironclad gunboats failed to knock out the Grand Gulf guns. Undaunted, Grant moved his army further south to Disharoon’s plantation.
On 30 April his men, transported by Porter’s boats (which
had run the Grand Gulf batteries the previous night), landed unopposed
at Bruinsburg. Moving inland, on 1 May the Union force encountered Brig.
Gen. John Bowen’s Confederates five miles west of Port Gibson.
Although the Confederates were greatly outnumbered, they fought so tenaciously
that an entire day was required to drive them back across Bayou Pierre.
Grant then outflanked Bowen by a river crossing of Bayou Pierre at
Grindstone Ford and advanced to Hankinson’s Ferry on the Big Black
River. This forced Bowen to evacuate Grand Gulf. Grant immediately
converted Grand Gulf to a forward supply depot. Grant decided not to
advance directly on Vicksburg from Hankinson’s Ferry because of
considerations of terrain and tactics.
He boldly turned northeast toward
Edwards to cut the railroad. He planned to cut off Pemberton’s
supplies, as well as to draw the Confederates out of their
fortifications. Grant’s plan changed after the battle of Raymond on 12
May, when Maj. Gen. James McPherson’s corps was attacked by Confederate
Brig. Gen. John Gregg’s brigade. While at Dillon’s farm Grant was
informed of the Union victory at Raymond.
He daringly decided to turn
his army toward Jackson, assuming that a large Confederate force was
assembling there. Gen. Joseph E. Johnston had
arrived at Jackson with 5,000 Confederate
troops. He abandoned Jackson on 14 May after a brief fight with
Grant’s soldiers. The next day the Union army turned toward Vicksburg,
leaving Sherman’s corps behind to destroy the city. Pemberton had
moved 23,000 men eastward out of Vicksburg to defend his railroad supply line.
On 15 May, he marched to interdict the Union
supply line at Dillon’s farm. The Union and Confederate armies clashed
at Champion Hill on 16 May, where a decisive Confederate defeat forced
Pemberton to withdraw toward Vicksburg. Pemberton withdrew the bulk of
his army across the Big Black Bridge, leaving Bowen with a force of
7,000 men to defend a fortified bridgehead. Bowen’s defenses collapsed
under Union assault early on 17 May, turning an orderly retreat into the
Vicksburg defenses into a rout. By nightfall, Sherman had bridged the
Big Black River at Bridgeport, and was on the road to Vicksburg.
Pemberton was able to rally his disorganized and demoralized troops in
the trenches of Vicksburg. On 19 May they repulsed an assault,
primarily by Sherman’s corps. On 22 May a second assault by Grant’s
entire army was also repulsed.
Unwilling to expend more lives in
attempts to take the city by storm, Grant began siege operations. By the
end of June, with all communication by either land or river cut off,
Pemberton realized that he could neither break out nor hope for rescue
by Johnston’s Army of Relief. After 47 days of siege, Pemberton
accepted Grant’s terms, including the parole of all Confederate soldiers.
Fortress Vicksburg was officially surrendered at 10:00 a.m. on 4
July 1863. Port Hudson on the Mississippi River was now flanked and
rendered inconsequential due to the surrender of Vicksburg. The river
fortress was surrendered on 9 July 1863. Union control of the
Mississippi was complete, and the strategic objective in the west had
been achieved. Grant would write, “The fate of the Confederacy was
sealed when Vicksburg fell.”
National Park Service
1. Grant’s Canal
In Delta, LA. From I-20, take Exit 186 to US-80. A segment of the
Williams/Grant canal still exists. The canal was started by Brig. Gen.
Thomas Williams and Rear Adm. David Farragut in late June, 1862. The
effort was abandoned in late July, 1862. Grant resumed work on the
project in the winter of 1863, but abandoned it when floods forced
evacuation of the area.
2. Duckport Canal
On the Thomastown Road, 2.7 miles north of US-80. The site of a Union
attempt to create a water route for supplies from the Mississippi River
to New Carthage via Walnut and Roundaway Bayous. An unusual drop in the
river stage in early May of 1863 forced abandonment of the canal.
3. Milliken’s Bend
At the end of Thomastown Road, 10.5 miles north of US-80. This was the
camp of Maj. Gen. John McClernand’s XIII Corps before 1 April 1863,
and site of the Battle of Milliken’s Bend, 7 June 1863. Maj. Gen.
Richard Taylor attacked the post with Brig. Gen. H. E. McCulloch’s
Texas brigade. The defense of the post was the first major action
involving African-American soldiers. They suffered the highest casualty
rate of any Union garrison that successfully defended a post during the war.
4. Historic Richmond
Two miles south of the center of Tallulah. Now gone without a trace, in
1863 Richmond was the largest town in Madison Parish. Here, on 31 March
1863, the advance guard of the Union army forced a crossing of Roundaway
Bayou, compelling Confederate Maj. Isaac F. Harrison’s Fifteenth
Louisiana Cavalry to withdraw to the south. Richmond was used as a
forward supply depot by the Union army from 1 April to 16 May 1863. It
was later used as a base by Confederate Maj. Gen. John Walker’s Texas
Division from 5-15 June 1863. A Union task force led by Brig. Gen.
Joseph A. Mower forced Confederate evacuation after a sharp skirmish on
15 June 1863.
5. Winter Quarters
On LA-608, 6.5 miles southeast of Newellton. Owned by Dr. Haller Nutt,
this was one of the largest plantation homes on Lake St. Joseph, and the
only one not burned in 1863. Used on 27 April 1863 as a bivouac by Union
soldiers en route to Hard Times Plantation on the Mississippi River 3
miles to the east. Entrance fee.
6. Grand Gulf Military Park
Seven miles northwest of Port Gibson. Grand Gulf was once an important
port on the Mississippi River. By 1862 the river had washed away much of
the town. Union Flag Officer David D. Porter attacked the newly
constructed batteries on 29 April 1863, hoping to silence them in
preparation for a landing by Grant’s army. Defeated in his attempt,
Porter then regrouped at Hard Times Plantation, 4 miles up-river. Grand
Gulf State Park features a Civil War museum, an antebellum Catholic
church and houses, a section of the original parapet of Fort Cobun, one
of the 13-inch mortars used to bombard Vicksburg, and other attractions.
7. Ruins of Windsor
west of Port Gibson on the Rodney Road. On 30 April 1863 Grant and
McClernand conferred briefly at this site after landing unopposed at
Bruinsburg Plantation two miles to the west. Built by Smith Coffee
Daniell III, the 5-story mansion burned in 1890, leaving only the 22
magnificent Corinthian columns as a reminder of its former grandeur.
8. Bethel Presbyterian Church
south of Windsor on MS-552. After marching from Windsor on the afternoon
of 30 April 1863, the Union soldiers of Grant’s army reached the road
junction at Bethel Church. At the junction a Union officer directed the
column into the historic Rodney Road leading east toward Port Gibson.
Heavily damaged by a tornado in 1943, the present structure is a
restoration of the 1863 building.
9. Old Rodney Road
Now known as
the Russum-Westside Road and the Shaifer Road, this road was the Rodney
Road in 1863. The original width of the road is preserved in the
abandoned section north of Bethel church. The road served as the main
axis of advance for the Union army to Port Gibson. A Union soldier
described his experience: “The moon is shining above us and the
road is romantic in the extreme. The artillery wagons rattle forward and
the heavy tramp of many men gives a dull but impressive sound.”
Today, the old road appears much as it did in 1863.
10. Shaifer House
Four miles west of Port Gibson on the Shaifer Road (the historic Rodney
Road). The house was used by Maj. Gen. John McClernand as headquarters
during the Battle of Port Gibson. It was later used as a hospital by
both Union and Confederate troops. The Battle of Port Gibson began at
this site when, near midnight on 30 April 1863, Confederate pickets
fired on the Union advance guard as it marched eastward toward Port
Gibson. Much of the battle was fought on the ridges immediately to the
east as well as along the road 2 miles to the north. The site is now
owned by the State of Mississippi.
11. Wintergreen Cemetery
One mile southwest of the Claiborne County Court House in Port Gibson.
Wintergreen Cemetery began in 1807 as the family burial plot of Samuel
Gibson. The cemetery is noted for its enormous Eastern red cedar trees
and cast-iron ornamental fences. It is the final resting place of Brig.
Gen. Benjamin G. Humphreys, first post-war governor of Mississippi, Maj.
Gen. Earl Van Dorn, and many of the soldiers killed in the Battle of Port Gibson.
12. Grindstone Ford
Accessible only from the Natchez Trace Parkway, this historic river
crossing is 4.5 miles northeast of the junction of MS-18 and the Trace.
On the evening of 2 May 1863, Confederate troops retreating after the
Battle of Port Gibson set fire to the wooden decking of the suspension
bridge. Union troops extinguished the blaze and repaired the damage.
They crossed early the following morning and flanked Grand Gulf. Ruins
of the stone foundations can still be seen by walking the Old Natchez Trace.
13. Rocky Springs
miles northeast of Port Gibson on the Natchez Trace. Union General
McClernand arrived here on 6 May 1863 from Willow Springs. One of his
soldiers wrote, “came to… Rocky Springs several stores and fair
buildings. I called at one, where a crowd was gathering up the articles
and got a couple of books.” Grant arrived here with Union General
McPherson on 7 May from Hankinson’s Ferry. Another soldier noted, “here
we have good, cold spring water, fresh from the bosom of the hills.”
The only remnant of the 1863 town is an old cistern, an abandoned bank
safe, and the old red-brick Methodist church and its cemetery.
14. Utica Cemetery
Located near the town center. The cemetery is the final resting place of
many of the town’s founding citizens. Maj. Gen. James McPherson’s
XVII Corps passed through Utica on 10 May 1863 and encamped at the A. B.
Weeks and later the Roach Plantations north of town.
15. Lebanon Presbyterian Church and Cemetery
Eight miles northeast of Utica on MS-18. Lebanon Church, one of the
oldest churches in the state, was passed by Maj. Gen. James McPherson’s
XVII Corps on its way from Utica to the Battle of Raymond. The old
roadbed may be seen in front of the church. MS-18 closely follows the
route of McPherson’s march.
16. Hinds County Courthouse in Raymond
in Raymond. The Courthouse was constructed by the famous Weldon Brothers
of Natchez between 1857-1859 using skilled slave labor. One of the most
elegant examples of Classic Revival architecture in Mississippi. It
served as a Confederate hospital following the Battle of Raymond, 12 May 1863.
17. St. Mark’s Episcopal Church
Next to the
Raymond Courthouse. Built in 1854, St. Mark’s is the only antebellum
church in Raymond and is still in use. The church was used as a hospital
to treat Union soldiers following the Battle of Raymond. Bloodstains are
still visible on the old wooden floors.
18. Confederate Cemetery
In the Old Raymond Cemetery on Port Gibson Street, 0.4 miles from the town
center. The Confederate Cemetery is the final resting place for 140 men
who were killed during the Battle of Raymond. Most of the dead are from
the Third Tennessee and Seventh Texas Infantries.
19. Raymond Civil War Battlefield
On the MS-18, 2 miles southwest of town center. Confederate Brig. Gen.
John Gregg’s brigade of 3,000 men attacked Union Maj. Gen. James
McPherson’s 11,500-man XVII Corps late on the morning of 12 May 1863.
After an all-day battle, Gregg’s brigade was forced to withdraw
through Raymond and retreat toward Jackson. A monument honoring the
Seventh Texas Infantry can be seen beside MS-18 at Fourteenmile Creek.
The Union victory at the Battle of Raymond caused Grant to change his
offensive plan and attack Jackson on 14 May 1863.
20. Old Capitol Museum
Located near the center of Jackson at 100 South State Street. One of three
public buildings in the city not destroyed by Maj. Gen. William T.
Sherman’s army when it occupied the city on 17-23 July 1863. The
historic building, built in 1836 by William Nichols, architect from
England and a resident of Raymond, is now a museum. Free. Open Friday
8-5, Saturday 9:30-4:30 and Sunday 12:30-4:30.
21. Governor’s Mansion
In the city center at 300 E. Capitol St. Designed in 1842 by William Nichols, who was
also the architect of the Old Capitol. It is an excellent example of
Greek Revival architecture. It is the oldest occupied governor’s
mansion in the United States. Tours available Fridays on the half hour.
22. Manship House
420 E. Fortification Street. Built in 1857, the restored house is a rare
example of the Gothic Revival residential style of architecture. The
house survived the destruction of Jackson during the Union occupations
of 14-15 May, and 17-23 July 1863. Entrance fee.
23. Greenwood Cemetery
Located at 324 George Street. Established in 1823, Greenwood’s burials
include seven of Mississippi’s governors. A Confederate Cemetery is
located within the oldest public cemetery in the city of Jackson.
24. The Oaks House Museum
Located at 823 North Jefferson Street. The museum interprets the life of
the Boyd family from the 1840s to 1860s. It is one of the few houses
to survive the burning of Jackson during the Union occupation of 17-23
July 1863. Fee charged.
25. Historic Middle and Jackson Roads
Now known as the Billy Fields Road, this road joins the Champion Hill
Road 4 miles east of Edwards. The Crossroads, a strategic junction of
the Jackson and Middle Roads was a focal point of heavy fighting during
the Battle of Champion Hill. It is located 1.5 miles east of the
junction of the Champion Hill Road. In 1977, Champion Hill was
designated a National Historic Landmark.
26. Coker House
Four miles southeast of Edwards on MS-467. It was used as a hospital
following the decisive Union victory at the Battle of Champion Hill on
16 May 1863. The house fronts on modern MS-467, which very closely
follows the alignment of the historic Raymond Road, one of three axes of
advance of the Union army.
27. General Lloyd Tilghman Monument
On MS-467, 3.5 miles southeast of Edwards. Confederate Brig. Gen. Lloyd
Tilghman was killed at this spot by Union artillery near the close of
the Battle of Champion Hill as his men were delaying the Union advance
along the Raymond Road. The Tilghman monument north of the road was
placed by his sons in 1907.
28. Pemberton’s Headquarters
1018 Crawford Street, near city center. Confederate Lt. Gen. John
Pemberton used this house as his headquarters. Here, on the night of 2
July 1863, Pemberton met with his commanders to discuss surrender, and
on the following day, sent a message to Grant to “arrange terms of
capitulation of Vicksburg.” Vicksburg and the Confederate army were
surrendered on 4 July.
29. Vicksburg Military Park
Located just off I-20. Established by Congress on February 21, 1899, to
commemorate the most decisive campaign of the Civil War. The park
includes 1,325 historic markers and monuments, a 16-mile tour road, the
antebellum Shirley House, one hundred and forty-four cannons, the USS
Cairo Museum, and the Vicksburg National Cemetery. Entrance fee.
30. Old Vicksburg Courthouse
One of the most famous buildings in the South and certainly Vicksburg’s
most imposing structure. Construction began in 1858 according to designs
developed by the Weldon Brothers of Natchez. Today, the historic
building is maintained as a museum with emphasis on Civil War history.
Prague
Prague (Czech: Praha) is the capital and largest city of the Czech Republic. It is one of the larger cities of Central Europe and has served as the capital of the historic region of Bohemia for centuries.
Confusingly, several incompatible district systems are used in Prague. In part, the different systems date from different historic periods, but at least three are still used today for different purposes. To make things even worse, a single district name can be used in all the systems, but with different meanings.
For purposes of this guide, the "old" district system is used. In this "old" system, Prague is divided into ten numbered districts: Praha 1 through to Praha 10. If you encounter a higher district number, a different system is being used. For example, Praha 13 is part of the "old" Praha 5 district. The advantage of the "old" system of ten districts is that it is used on street signs and house numbers throughout the city, so you can always easily determine the "old" system district you are located in.
Praha 1 is the oldest part of the city, the original 'Town of Prague', and has by far the densest concentration of attractions. Praha 2 also contains important historic areas. In this central area, the "old" district system (or any of the newer systems) is too crude to be practical; a finer division is needed. Traditional city "quarters" provide such a division. Their disadvantage is that they are somewhat incompatible with the modern district systems - although "quarters" are smaller than the "old" system districts, a single quarter can belong to two or even more districts. The advantage is that these central quarters are well known, widely used, and identical with the homonymous cadastral areas shown on street and house number signs alongside the "old" district designation, allowing easy orientation.
Buildings in the Czech Republic have two numbers, one blue and one red. The blue ones are the orientation numbers - the ordinal number of the building on its street. Historically, these numbers always started from the end of the street closer to a river. As is normal in Europe, odd numbers belong on one side of the street and even numbers on the other. This lets you quickly find the house you are looking for. The red numbers are related to the house register of the entire quarter (for example, Staré Město), and thus usually correspond to the order in which the buildings in that district were constructed. Most people do not remember them; if somebody says, e.g., that a house is at Dlouha str. number 8, they will usually mean the blue number. Red numbers usually have 3 or more digits.
The most important quarters in the historic city centre are:
For the rest of the city, the "old" district system is used in this guide:
Links to the articles using the former division, until rewritten:
Regarded by many as one of Europe's most charming and beautiful cities, Prague has become the most popular travel destination in Central Europe along with Budapest and Krakow. Millions of tourists visit the city every year.
Prague was founded in the later 9th century and soon became the seat of Bohemian kings, some of whom ruled as emperors of the Holy Roman Empire. The city thrived under the rule of Charles IV, who ordered the building of the New Town in the 14th century - many of the city's most important attractions date back to that age. The city later came under Habsburg rule and became the capital of a province of the Austro-Hungarian Empire. In 1918, after World War I, the city became the capital of Czechoslovakia. After 1989 many foreigners, especially young people, moved to Prague. In 1992, its historic centre was inscribed on the UNESCO World Heritage List. In 1993, Czechoslovakia split into two countries and Prague became the capital of the new Czech Republic.
The Vltava River runs through Prague, which is home to about 1.2 million people. The capital may be beautiful, but pollution often hovers over the city due to its location in the Vltava River basin.
Many Praguers have a small cottage (which can range from a shack barely large enough for garden utensils to an elaborate, multi-story dwelling) outside the city. There they can escape for some fresh air and country pursuits such as mushroom hunting and gardening. These cottages, called chata (plural chaty; the ch is pronounced as in Bach), are treasured both as getaways and as ongoing projects. Each reflects its owners' character, as most of them were built by unorthodox methods. There were no Home Depots under communism. Chata owners used the typically Czech "it's who you know" chain of supply to scrounge materials and services. This barter system worked extremely well, and still does today. Chaty are also sometimes used as primary residences by Czechs who rent out their city-centre apartments for enormous profit to foreigners who can afford to pay inflated rent.
Ruzyně International Airport, (IATA: PRG), +420 220 111 111, +420 296 661 111 . Located 20km northwest of the city centre, it generally takes about 30 minutes to reach the city centre by car. The airport is served by a number of airlines:
Getting into the city from the airport
All international trains arrive at Praha hlavní nádraží (the central station, abbreviated to Praha hl.n.) which has connections with Metro Line C.
The park in front of the main train station is a haunt for some of the city's undesirable elements and should be avoided after dark. If you do have to come through on foot, it's best to avoid the park and approach from the southeast along Washingtonova. As you reach the corner of the park there's a police station, so the likelihood of running into problems from this direction is minimised. The station is currently undergoing a major refurbishment; alas, the 70s style will be lost, but the toilets might be cleaned up once in a while. Beware of the taxi drivers operating from the (official-looking) taxi rank alongside Praha hl.n.; they will attempt to charge a fixed price of CZK 1760 (~USD 100) for a trip within the city centre zone, or more than this if you want to travel further.
Eurocity trains connect Prague to Berlin, Vienna and Budapest. It is a very comfortable way to travel, but not as quick as in other countries - Eurocity trains average about 120 km/h, as the Czech railway network is not suitable for higher speeds. From Berlin, a train reaches Prague in just under five hours, from Vienna in 4-4.5 hours and from Budapest in 6.5 hours. The train line from Berlin to Prague passes through the Erzgebirge mountains, and for a couple of hours the passengers are treated to a series of beautiful alpine river valleys, surrounded by rocky escarpments and mountains. Between Nuremberg and Prague, there is a direct express bus service run by the German and Czech railway companies, which takes only 3:45 hours (while the trains take 5 hours or more).
Since 2005, faster Super City Pendolino trains operate from Ostrava (3.5 hours), Olomouc (just over two hours), and Vienna (4 hours) to Prague. Reservation is necessary on these trains. If you come to Prague by SC Pendolino, you can use Airport Express to Prague Airport without any additional fee. These buses operate every 30 minutes (5:15AM to 9:45PM). Without a SC Pendolino ticket, you will have to pay 45 CZK to the driver.
Train connections from western countries such as France and the United Kingdom are complicated and slow because of the layout of German railways, which run mainly from north to south, with no direct east-west connections. The route with the fewest connections is Prague-Berlin-Paris, but you can shave a few hours off your journey if you're willing to transfer several times; e.g. Prague-Nurnberg-Stuttgart-Paris can be done in 12 hours. Trains from within Germany can best be scheduled through the 'Deutsche Bahn' website . Direct trains run several times a week from Prague to the Netherlands, reaching Amsterdam in about 14 hours.
Prague has highway connections from five major directions. Unfortunately, the highway network in the Czech Republic is quite incomplete and some highways are old and in poor condition. Thus, a complete highway connection from Prague to the border of the Czech Republic is available in only two directions - southeast and southwest. The south-western highway (D5; international E50) leads through Plzeň to Germany, where it continues as the A6. Driving from the state border to Prague takes about an hour and a half (160 km). The south-eastern highway (D1) is the Czech Republic's oldest and most used highway, and as such is in rather poor condition. It leads through Brno to Bratislava in Slovakia and offers a good connection to Vienna, Budapest and all traffic from the east. It runs for 250 km and usually takes over two hours. To the northwest you can take highway D8 (E55), but it is not complete to the German border: it currently ends at Lovosice (about 60 km from Prague) and starts again in Usti nad Labem, continuing to northern Germany via the A17 (Dresden, Berlin, Leipzig). To the northeast you can take highway R10 (E65). It is strictly speaking a motorway, not a highway, but it has four lanes and differs little from a highway. It leads from Liberec to Turnov. It isn't regarded as an important access route, as there are no major cities in this direction (Zittau in Germany, some cities in Poland), but it offers a good connection to the Czech mountains Jizerské hory and Krkonoše (Riesengebirge), with the best Czech skiing resorts. To the east you can take the newly completed D11 (E67), which goes to Hradec Kralove and leads on toward Poland.
Czech highways are under development (D8 and D11 are being extended, and D3 to Ceske Budejovice and Linz is supposed to be completed in 2020), so it is hoped that things will get better. Unless there are road works, traffic jams are rare on Czech highways, with the exception of the D1 near Prague (and near Mirosovice, in the direction of Ceske Budejovice, Linz and Brno).
Prague suffers from heavy traffic and on week days the main streets are one big traffic jam. Moreover, Prague still doesn't have a complete highway outer circuit. It is a really good idea to use the P+R (park and ride) parking places, where you can park your car for a very small fee and use public transport. The P+Rs are situated near all highways and are well marked. Note that traffic wardens are rife and parking in most residential streets in and around Prague city centre (even after dark) without a valid permit will result in a parking fine. In particular, avoid blue-marked areas which are parking-restricted area if you don't want your car to get towed away within the hour.
The main bus station for international buses in Prague is Florenc, in Praha 8 (metro lines B and C). It is located east of the city centre. In June 2009 a new terminal building was opened.
Eurolines and Student Agency connect Prague to major European cities. Other, less frequently used bus stations are at Nádraží Holešovice (metro C), Dejvická (A), Zličín (B) and Černý most (B).
Public transportation is very convenient in most of the areas visitors are likely to frequent.
Prague is renowned as a very "walkable" city. For those who enjoy seeing the old and new city on foot, one can easily walk from Wenceslas Square to the Old Town Square, or from the Old Town to Charles Bridge and the Castle District. However, almost all of the streets are cobbled, making it very difficult for disabled or elderly travellers to get around effectively. Also, pedestrians should enter crosswalks carefully in Prague, as drivers are not as likely to yield as they are in other European cities.
Remember that in the Czech Republic, it is illegal to cross at a pedestrian crossing on a red man; if caught, this incurs a fine of 1000 Kč.
Shared minibus airport services are a cheaper alternative to regular door-to-door private transfers; several operators advertise online with easy-to-follow websites.
Try to avoid hailing a taxi on the street (public transportation is always the better option in Prague), and if you must, try to negotiate the price in advance. It's advisable to call one of the major Prague taxi services:
Deceptive taxi drivers are another trap that can badly surprise a tourist. Mostly they charge more than they should. The municipal council has been trying to solve this problem since the Prague mayor dressed up as an Italian tourist and was repeatedly overcharged. The most frequent cases of cheating happen between the railway station or airport and hotel. If you must take a taxi, and cannot call one directly or call your hotel for a referral, the best way to find a reputable one may be to look for a hotel and ask them to call a taxi.
Always insist on having the taxi-meter turned on, and ask for a receipt when you leave the taxi. The receipt should include the driver's name, address and tax identification number. Even if you ask for a receipt, the taxi-meter could be tampered with (a so-called "turbo"), which makes the metered price go sky-high.
If you do wave down a taxi on the street, make sure you stop a car with the logo of one of the major companies. It's not a bulletproof solution, but at least you have some chance of getting satisfaction from the taxi dispatching company.
About two years ago, an information desk was set up at most taxi stands in the city, with orientation prices to the most popular destinations from that stand. However, a loophole in the local law actually allows some of the taxi companies renting the taxi stands (specifically around Old Town Square) to charge VERY high prices (about 99 Kč/km). There is an ongoing lawsuit regarding this, but the practice still hasn't stopped. The most infamous company in this regard is the recently created AAA Taxi s.r.o., which deliberately chose its name to resemble the regulated and popular AAA Radiotaxi Praha; AAA Taxi cabs charge up to four times more for a ride, and they do not even provide services to Czech customers. Visitors are advised to use the services of proven phone-order taxis, as there are even reports of robberies involving street-cruising taxis.
If you don't speak Czech, be prepared: there is roughly a 50% chance of being cheated by a taxi driver when hailing one in the city centre. So always be on watch - that is a standard warning in any guide book about Prague.
If you are convinced you got overcharged by the taxi driver, mark the car ID numbers (license plate, taxi license number on the car door, driver name etc.) and contact the company, which the driver is working for (if any) or police. The problem is that you have to testify against the driver, which is kind of hard when you're on the other side of the world. Try to avoid suspicious taxis and if you find even a grain of suspicion, then walk away catching another taxi.
Another alternative is to use one of the chauffeured service companies, such as Prague Airport Transfers s.r.o., FEBA Trade Limousine Car Service, or the cheaper but equally reliable HFS s.r.o. - 123-Prague-Airport-Transfer.com .
Some hotels offer taxi services. Make sure to compare the price with other companies. Some hotel taxis are cheap but others are more than twice the price and the car is not always identified as being a taxi.
Tram and metro
There are three main metro (subway) lines, and numerous bus and tram (streetcar) lines. The tram and bus schedules are posted on the stops, and the metro operates from very early in the morning (around 5:00AM) until approximately midnight. The schedules and connections may also be checked online from the website of Prague Public Transit . You can purchase a limited ticket (30 minutes or 5 stops on the metro, or 20 minutes on buses/trams with no transfers at all) for 18 CZK or a 75-minute transfer ticket for 26 CZK at any dispenser using coins (they give change), or in a tobacco shop or convenience store. It's best to always carry some coins, because often the only way to buy a ticket is via the yellow or red ticket machines. Discounted tickets for children up to 15 years are also available.
You may purchase 24-hour, 3-day or 5-day tickets at ticket offices in some metro stations. A 24-hour ticket costs 100 CZK, and may be both cheaper and more convenient than buying separate tickets for each journey. Tickets for 3 or 5 days allow for free accompaniment of one child between the age of 6 and 14 (inclusive). The same ticket may be used on metro, tram or bus, including transfer from one to the other, during its period of validity.
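As a rough worked example of the fare arithmetic above (the 26 CZK 75-minute ticket versus the 100 CZK 24-hour ticket; `break_even_trips` is a hypothetical helper for illustration, not part of any official fare tool):

```python
import math

# Prices quoted in this guide (CZK); adjust if fares change.
SINGLE_75_MIN = 26   # 75-minute transfer ticket
DAY_TICKET = 100     # 24-hour ticket

def break_even_trips(single=SINGLE_75_MIN, day=DAY_TICKET):
    """Smallest number of single rides whose total cost reaches the day-ticket price."""
    return math.ceil(day / single)

print(break_even_trips())  # 4
```

In other words, with four or more journeys in a day, the 24-hour ticket is the cheaper option.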
Validate your ticket by slipping it into one of the yellow boxes in the tram or bus, as soon as you board. In the metro, validation boxes are located inside the stations before the stairs. Be sure to keep it handy until it expires.
Tickets are not checked upon boarding, but uniformed and plain-clothes ticket inspectors often make the rounds asking to see your ticket. One problem is false inspectors, who most often ride the trams between "Malostranske Namesti" and Prague Castle - these deceivers can be detected by asking for the identity card that every genuine inspector must carry. An unstamped ticket is invalid - it will be confiscated, and you will incur a 700 CZK fine. Even though "riding black" seems easy in Prague, you should invest in the cheap ticket for the simple reason that Prague's transportation works perfectly and functions on the honor system - help it stay that way.
Public transport continues at night. Night trams or night buses (00:00 to 5:00AM) usually come every 30 minutes. Every 15 minutes during this time, trams leave the central exchange stop of Lazarská in the centre of Prague. All night trams go through this stop. You can easily change tram lines here if nowhere else.
Do not underestimate how close to the footpath the trams will be when they reach the stop. It's safer to take a few steps back before the tram arrives, as wing mirrors could cause injury for taller people. When you use public transport in Prague, keep in mind that it is good etiquette to let elderly people, pregnant women or disabled people sit down.
You can travel down the famous Vltava River (Moldau, in German), which inspired writers and composers such as Smetana and Dvorak.
A collection of Asian art is exhibited at the Zbraslav Castle.
As with many major European cities, you can get a good deal by buying a tourist card. Be discerning when choosing based on your needs (for example, cards may list free entry to locations that are normally free anyway - this is the case with the Prague Pass). Here are your options:
Free Attractions
Of note is that the card grants free admission to the full Prague Castle short tour, which normally costs 250 Kč. Many of the town's museums and galleries - including all branches of the National Gallery and the National Museum - are also included, and over four days you can easily see three times the card's value. As such, this is an excellent choice if you're planning on visiting a lot of museums. The only major attractions not included are the Old New Synagogue and the Jewish Museum.
With the Prague Card you can visit Prague Castle (350 CZK); the historical towers of the Old Town, Malá Strana and Charles Bridge, among other attractions; the Observatory (20 CZK); the small copy of the Eiffel Tower (100 CZK) and the Mirror Maze at Petrin Hill; the whole Vysehrad complex, including its casemates and gallery; many New Town museums and galleries; and several castles outside the centre of Prague.
Free Attractions
There is something for everyone: Vysehrad with its casemates (catacombs) and basilica; a boat trip through Prague on the river Vltava (Moldau); an effortless ride up the TV tower, with the best panorama of Prague; or a trip on the Petrin hill cable railway. See the whole city in one hall (a perfect model at 1:480 scale) - a time travel to the past in Prague's most significant historical museum. Don't fear the sharks and marvel at the blaze of colors in the Sea World Aquarium, enjoy a magical Black-Light-Theater performance, or let your soul swing at a concert in a church. The River Navigation Museum, Army Museum, Aviation Museum and the UNESCO-certified auto museum "PRAGA"... all for free! (Some of them, however, have free entry anyway!)
Also in your pack is a free map of Prague and a program guide booklet as well as a free welcome present. You will also receive discount coupons for several discounts of up to 50% for guided sightseeing- and city-walking tours, Mozart museum, galleries, concerts, internet use, computer games, real laser game or for Rent a Car (25%).
There are many opera and Black Light Theatre companies in Prague. There are several performance groups that cater to tourists. They aren't strictly to be avoided, but common sense should tell you that the opera advertised by costumed pamphleteers is not going to be up to truly professional standards.
List of Concerts, Theatres, Museums, Galleries, Monasteries, Antiques, Trade Fairs, History in Prague:
River cruises are both popular and varied, from one hour cruises to long evening cruises with dinner or music.
The streets around Old Town are full of gift shops geared towards tourists, selling Bohemian crystal, soccer shirts and other mass-produced memorabilia. The thoroughfare between Charles Bridge and Old Town Square is particularly bad; turn off into one of the laneways and you can find the exact same merchandise for half the price. If you are looking for some decent souvenirs, try to get off the beaten path. Street vendors can have some unexpected treasures, and there are plenty in the Charles Bridge area. Prints of paintings and good-quality photos are very popular, and a really good way to remember Prague. Don't bother buying overpriced furry hats and Matryoshka dolls, though, because they have nothing to do with Prague - they are Russian in origin, and their sellers are just trying to capitalize on unknowing tourists.
In December, the squares host Christmas Markets selling a mix of arts, craft, food, drink and Prague memorabilia. The markets are an attraction in their own right and a great place to pick up a more unique memento of the city.
There are several large shopping malls in Prague. For shopping, take "Na Prikope" street - the 18th most expensive street in the world (measured by the price of property) - with the famous shopping arcades "Cerna ruze" (Black Rose) and "Palac Myslbek" and many shops. If you are looking for souvenir shops, you will find them in the city's historical centre - mostly around Old Town Square, Wenceslas Square and Prague Castle. There are many other shops offering Bohemian crystal - especially in the centre near the lower end of Wenceslas Square. Another typical (if rather expensive) Czech product is garnet jewellery - typical Czech garnet stones (gathered near the town of Turnov) are dark red and nowadays are produced by a single company, Granat Turnov; if you buy genuine traditional Czech garnet, you should get a certificate of authenticity. "Pařížská" street runs from Old Town Square towards the river and includes some of the most luxurious (and expensive) boutiques in Prague.
Popular shopping malls:
Palladium - situated directly in the city centre, it's the newest and perhaps most luxurious shopping mall. No cheap options to eat, unless you buy some food in Albert supermarket on the lowest floor (-2). On the top level (+2) are some moderate to expensive restaurants. Tram/metro station Namesti Republiky.
OC Chodov - a huge shopping mall with hypermarket located slightly further away from the centre at metro station Chodov.
Šestka - new shopping mall just 1 station from the Prague Airport. Very far away from the center but ideal for last minute shopping before your departure. Take bus 119 from Dejvicka metro station.
Palác Flora - medium-sized shopping mall with IMAX cinema in the top floor. Tram/metro station Flora.
OC Nový Smíchov - big shopping mall with 2-floor Tesco hypermarket, a cinema, bunch of fastfoods on the top floor and very close to metro/tram station Anděl
Metropole Zličín - medium-sized mall with a cinema, hypermarket Interspar, fast food outlets and a huge parking lot, near the metro/bus station Zličín. If you are hungry after your flight, take bus 100 from the airport to Zličín and then just walk a few meters to this mall to buy something to eat.
The official currency of the Czech Republic is the Czech Crown (koruna), abbreviated as Kč, with the international abbreviation CZK. The current exchange rate can be found at the official website of the Czech National Bank
Sometimes it is also possible to pay with euros (hotels in the centre of Prague, McDonald's, etc.), but be prepared to suffer an unfavourable exchange rate.
Lunch is traditionally the main meal in Prague. Czech cuisine is typically based around pork or beef with starchy side dishes such as dumplings, potatoes, or fries. Fish is not as popular, though these days it is widely available. Popular Czech desserts include fruit dumplings (ovocné knedlíky), crêpes or ice cream. Most restaurants become very crowded during lunch and dinner, so consider making a reservation or eating earlier than the locals.
The tip should be about 10 to 15% - in cheaper restaurants or pubs you can get away with rounding up the bill or leaving a few extra coins. Otherwise it's customary to leave at least 20-40 Kč or €1-2. Taxes are always included in the price by law. Many restaurants in heavily-touristed areas (along the river, or with views near the castle) will charge a cover ("couvert") in addition to your meal charge. If this is printed in the menu, you have no recourse. But a restaurant will often add this charge to your bill in a less up-front manner, sometimes after printing in the menu that there is no cover. Anything brought to your table will have a charge associated with it (bread, ketchup, etc.). If you are presented with a hand-scrawled bill at the end of the meal, take a moment to clarify the charges with your server. This sort of questioning will usually shame the server into removing anything that was incorrectly added. Note that some waiters are impolite, especially to people from the eastern part of Europe. Pay no attention to this, and simply find another restaurant.
If you're on the lookout for fast food, you won't be able to move without tripping over street vendors serving Czech-style hot dogs and mulled wine in the Old Town Square and Wenceslas Square in New Town. If you're after Western-style fast food, the major chains also have a large presence in Wenceslas Square and the area immediately around it. Most beer halls also serve light snacks or meals. Definitely try the hot dogs - they're far superior to the greasy, messy version you get in the West. Small, hollowed-out French baguettes are used for the bread, filled with mustard and ketchup, and then the frankfurter is inserted afterwards. This turns the bread into a convenient carry-case and means you don't get ketchup all over your hands. Make sure you get mustard, even if you don't normally like it - unfortunately the hot dogs are somewhat flavorless and need that extra bit of kick. Prices range from around 15 crowns for a small one to 45 crowns for the terrifying-looking 'gigant'. Note that the size of the hot dog refers to girth rather than length. Also try the trdelnik, a traditional tube-shaped pastry, which can be found at street vendors in Old Town for 50 crowns.
While Czech is the official language of Prague and the Czech Republic, Slovak is also acceptable, as Czechs and Slovaks have historically understood each other without the need for a translator. Both languages are very similar and mutually intelligible to a very wide extent, leading foreigners to assume, incorrectly, that they are dialects of each other.
Russian is widely understood by people who attended school before the Velvet Revolution in 1989, but the language is too different from Czech to be understood without study. In addition, some people may dislike using Russian even if they know it, because of the Soviet occupation of Czechoslovakia in 1968 and the Communist history in general. Some Czechs also have a knowledge of German. People who studied after 1989, and even some older people, can speak English. However, learning a little Czech or Slovak (even just a few phrases such as greetings and thanks) will surely endear you to the locals.
Pubs (in Czech "hospoda") abound throughout Prague, and indeed are an important part of local culture. The brands of beer on offer vary from pub to pub, and recommendations are difficult to give, as natives are usually willing to argue at length about their preferences. The most internationally recognized beers are Pilsner Urquell (Plzeňský Prazdroj) and Budweiser Budvar (Budějovický Budvar). There are other brands famous among Czechs, like Gambrinus. If you are looking for a beer brewed in Prague, go for Staropramen. Usual prices for a half-liter glass are between 20 and 35 Kč, depending on the brand and locality, though certain restaurants in tourist areas like the Old Town Square are known to charge more than 100 Kč for a euro-sized glass. Don't be afraid to experiment with different beer brands, even if they are not mentioned in this article.
In Prague it is customary, especially at beer halls, to sit with a group of people if there are no free tables, so go ahead and ask if you can join. Prague also has many excellent tearooms (in Czech čajovna), which serve different kinds of teas from around the world.
Prague has a wealth of accommodation options, many of them within walking distance of the town centre. Peak season generally runs from April to October and a major influx of visitors can be expected during New Year as well. Prices for accommodation can be up to twice as high in the peak season and reservations are advised. Otherwise, the main train station, Hlavní nádraží, has an accommodation booking service for hotels and hostels upstairs. Normally, tax and breakfast are included in the room rate.
Even during peak season, dorm rooms in hostels close to the city center can be had for around 350Kč per person per night. Prague has its share of rough and ready youth hostels with a party vibe, but there are many with a more relaxed atmosphere and some housed in beautifully restored buildings as fancy as any hotel. Many hostels also offer private rooms, with or without shared bathrooms, for much cheaper than a pension or hotel room. Around Hlavni Nadrazi, the main train station, there are many touts offering cheap accommodation. Many are Czech residents renting part of their apartment for extra cash. Prices don't vary much between them, but some may not be trustworthy so be cautious.
A fun alternative is a 'botel': usually relatively well placed, with gorgeous views. Prices vary from €20 to €120 per person per night. Botel Florentina offers a view of the castle while remaining affordable.
For those travelers to Prague who aren't looking just to save money, but to stay and tour the town in style, there are a few luxury hotels, including one housed in a historic 16th-century building.
For those looking for something a little different, a 'botel' (boat hotel) may be an appealing option. Most are moored on the south of the river in Praha 4 and 5.
Many hostels and hotels offer free internet on shared computers or over a wireless network, so ask before you shell out extra at one of Prague's many internet cafes.
Also, almost all KFC fast-food restaurants offer free Wi-Fi. When you enter, just buy something small and connect your laptop or phone to the wireless network named "KFC". No login is required.
There's an internet cafe at Spálená 49 (Metro B & Tram: Národní třída) which is open until midnight every day. It also has printing facilities, which can be invaluable if you miss a flight and need to book another and print boarding passes.
The most common crimes in Prague by far are car theft and pickpocketing: the prevalence of car theft and vandalism pushes up the city's crime statistics. That doesn't mean you're safe if you don't drive, though. Pickpocketing is common in Prague, and some violent crime does occur. In particular, do not provoke drunken people, as doing so can put you in serious danger.
Be wary of Russian and Eastern European gangs who hang around; they are dangerous, and a young British student was once severely beaten by one.
Begging is a serious problem in this city, and you can see beggars even at the top tourist attractions. Don't carry a wallet or purse in the back pocket of your pants; always keep an eye on your belongings; don't put all your money in one place; don't show your money or valuables to anybody; and don't walk alone into deserted areas, even if you think you can look after yourself. Better safe than sorry, so take sensible precautions. Prague might seem attractive for its relatively low prices, but keep in mind there's no free lunch in this world. The police here generally don't speak English, so don't expect them to help you; among tourists they are known as some of the rudest in the world.
Possession of drugs has historically been a grey area under Czech jurisdiction. Since early 2010, though, the dubious term "an amount less than small" has finally been transformed into absolute values based on actual judicial practice, and it is no longer an offense to carry less than 15 g of marijuana, 5 patches of LSD, 1 g of cocaine, etc. It is still a criminal offense to possess more than the allowed amount of drugs. Please also note that most bars will expect you to go outside if you intend to smoke a joint.
Be aware of teams of pickpockets that lurk outside metro stations, on overcrowded trams, and around Charles Bridge, Wenceslas Square and the Old Town Square. They usually work in teams of 3-5 and look for lost or distracted tourists. Backpacks are especially interesting to them. Many of these groups use underage children as pickpockets because they can't be prosecuted under Czech law.
Due to the low incidence of violent crime, the threat of pickpockets tends to be played up as a great problem. In reality, common sense and basic precautions will keep most people safe. If you have a camera, try not to wear it openly. Always close and secure your backpack and try to keep an eye on it. Be especially careful not to fall asleep on the tram or metro. Keep your wallet in a safe place (such as an inner pocket of your coat); never put it in your rear pocket or any other place from which it can be easily stolen.
Be alert on sleeper trains, as bag robberies are on the increase between major stations. Ask for ID from anyone who asks to take your ticket or passport, and lock backpacks to the luggage racks. Keep valuables on you and maintain common sense.
If you enter the metro (usually at night), you may encounter teams of con artists at the stations claiming to be metro clerks; after examining your ticket for some time, they will declare it invalid and demand a fine of 500 CZK (1000 CZK if you argue with them). If you happen to meet them and you're sure your ticket is valid, tell them to call the police, or call the police yourself. Remember that Prague Metro ticket inspectors must produce their badge in order to check your ticket and issue a fine; if they don't do this as soon as they approach you, they are almost certainly fakes.
Be careful with taxi drivers, particularly from the train station. Taxis that are legally registered may still be mafia-run affairs that do their best to overcharge. It is illegal for a taxi driver to refuse you a receipt in Prague. Agree on a price before putting yourself or your luggage in the taxi. The risk of overcharging is often overplayed; just take the usual sensible precautions of using only taxi firms affiliated with the station or your hotel, or call a reputable company and wait. Finally, if presented with a wrong bill by a taxi driver, call the police on your mobile phone: your driver will quickly change his tune.
If you'd rather not haggle with cab drivers, you can always use public transit. The network is extensive and can take you almost anywhere in Prague.
Be careful with money exchanges. Exchange your money in banks or official tourist information offices, and avoid exchange offices. Never deal with a street money-dealer: they offer better rates but frequently try to swindle you by giving you money from another country, such as Russian roubles or old Bulgarian leva.
Most exchange offices are fair, but some, especially at the busiest tourist sites, may try to cheat customers with various tricks. One of them is offering favourable exchange rates with fine print below, such as a condition that you exchange more than 1000 EUR. Another trick is putting a huge board with "we sell" exchange rates in the shop window, which gives an impression of good rates, whereas the actual rate for buying CZK is much more unfavourable.
When the customer finds this out at the counter and wants to cancel the transaction, the money-dealer refuses with an excuse "I have already printed the bill", implying it is too late. The police won't help you, typically referring you to the Czech National Bank, which supervises exchange offices, to file a complaint (which does not help you either).
Czech law is weak here and only requires exchange offices to display the actual rates, which you might find somewhere in the office in small print. Therefore, if you decide to use an exchange office, always ask for the actual rate you will get before handing over any money.
If you find yourself in emergency, dial 158 for police, 155 for ambulance or 150 for firefighters. You can also dial 112 for a general emergency call.
If you need medication at weekends or evenings, you can go to Lékárna Palackého, (Tel +420 224 946 982) the 24-hour pharmacy on Palackého 5 in the new town.
Buses and trains are frequent and quite inexpensive and can get you to even the smallest village.
Practically every major European city can be reached by bus or train from Prague.
Regular buses are available to the following Czech towns, travel times in brackets:
For just a small selection of further places off the beaten path:
Varicella (chickenpox) is a universal, highly infectious disease characterised by a pruritic vesicular eruption associated with fever and malaise, caused by varicella zoster virus (VZV). In children, the illness is usually self limiting, lasting four to five days, but at least 1% of children under 15 years experience a complication.1 2 These include secondary bacterial infection (particularly with group A beta haemolytic streptococcus),3 pneumonia, encephalitis, haemorrhagic complications, hepatitis, arthritis, and Reye syndrome.4 Furthermore, 10–50% of all children will visit a physician with the infection.5-7 The mortality rate of varicella in children under 14 years in the United States is estimated at 2 per 100 000 cases,8 and 90% of these have no risk factors for severe disease.9
Adults account for only 5% of all varicella cases, but experience more severe disease (hospitalisations 18 per 1000) and more deaths (50 per 100 000).10 Herpes zoster (shingles), a painful, dermatomal, vesicular rash, occurs with reactivation of the virus in approximately 15% of the population.11 The likelihood of developing herpes zoster increases with advancing age: the incidence is approximately 74 per 100 000 children aged under 10 years,11 300 per 100 000 adults aged 35–44 years,12 and 1200 per 100 000 adults over 75 years.12
In temperate climates, 95% of varicella cases occur among persons less than 20 years of age.13 14 Seropositivity is lower in adults from tropical and subtropical areas.15 16Seronegativity in adults may be increasing in temperate populations, as shown by a significant upward trend in age distribution of chickenpox cases in England and Wales,17 and increasing varicella susceptibility in young US adults.18
A live attenuated varicella vaccine was first developed in 1974 in Japan by Takahashi and colleagues.19 As this Oka strain virus is heat sensitive, Biken/Oka vaccine (Japan) and Varivax (Oka/Merck) require storage at −15°C and administration within 30 minutes of reconstitution to retain potency (product monograph). Oka strain vaccines were first licensed for use in high risk children in Europe in 1984 and Japan in 1986. Licensure for use in healthy children commenced in 1986 in Japan, 1988 in Korea, and most recently in the USA, Sweden, and Germany (1995),20 21 and Canada (December 1998).22 Many millions of doses have been given in total.
Aims of review
The purpose of this review was to evaluate the evidence that bears on the various options for use of vaccine to prevent varicella in healthy individuals. These include universal vaccination of healthy infants, catch up vaccination of older children, and vaccination of susceptible adolescents and adults. Models of cost effectiveness and epidemiological change suggest that implementation of routine varicella vaccination for infants and children could reduce total number of cases and case severity, and generate cost savings.23 Potential harm that may occur as a result of vaccination includes immediate adverse reactions, transmission of varicella from vaccinees, an increased risk of zoster, and a shift in varicella cases to an older age group (and hence more severe disease).24 In evaluating varicella vaccine it is important that these issues are considered in addition to vaccine effectiveness.
Methodology of search
MEDLINE was searched from 1966 to December 2000 using the MeSH subheadings chickenpox, vaccination, and human (search date 19 January 2001). There was no language restriction. Methodological search terms included: random allocation, placebo, double-blind method, comparative study, epidemiologic methods, research design, clinical trials, controlled clinical trials, meta-analysis, drug evaluation, prospective studies, and evaluation studies. EMBASE was searched using a similar strategy. To identify other studies, we searched reference lists of located studies; the Internet for position papers and summaries from health organisations such as the World Health Organisation and the Centers for Disease Control and Prevention; vaccine product information; and the Cochrane Library.
Published studies were included if they: (1) considered healthy, human subjects vaccinated with VZV vaccine; and (2) were controlled trials addressing the incidence of varicella, zoster, or adverse outcomes. Prospective cohort studies were considered only for longer term outcomes of varicella and zoster following vaccination. To limit the analysis to studies with the highest methodological quality, prospective cohort studies were excluded if: (a) they contained less than 50 subjects; (b) loss to follow up was not described; or (c) duration of follow up was less than one year. All eligible studies were systematically reviewed using the methodology of the Canadian Task Force on Preventive Health Care.25 The quality of evidence in each study was rated from I (well designed randomised controlled trials (RCTs)) to III (descriptive studies or consensus reports) using the Task Force's established methodological hierarchy (see ).
Identified studies meeting inclusion criteria
A total of 26 controlled trials and 50 cohort studies were identified using the described search strategy. After application of exclusion criteria, 24 controlled trials and 18 cohort studies remained for review. For each of the criteria evaluated, we describe the best available level of evidence along with key supporting studies. Summaries of RCTs are presented in the tables.
Two randomised, placebo controlled trials in children (aged 10 months to 14 years) provide level I evidence that a single dose of VZV vaccine is effective in preventing varicella for up to seven years (table 1),26-28 although data beyond three years are subject to a large loss to follow up of study subjects.27 Supportive evidence is provided by three RCTs randomising to different vaccine doses21 29 30 and 12 prospective cohort studies with follow up of 1–19.6 years.31-42 Three of these trials (each with over 2000 subjects) also studied adolescents (aged 13–17 years, followed for 1–8 years).36-38 Some methodological issues were noted in these studies: an increasing loss of subjects occurred with longer follow up (up to 62%), and self reported illness was used to determine effectiveness.31 36-38
In adults, effectiveness is shown by one non-randomised controlled trial43 and two prospective cohort studies,44 45 with maximum duration of follow up of six years. Further level II-2 evidence is provided by one RCT providing combined data from both arms of a two dose adult trial.46 All but one adult study43 calculated effectiveness based on self reporting of disease. Adult and child vaccinees experiencing close contact with varicella are also protected.21 26 27 46 47
Although controlled trials confirm approximately 100% relative risk reduction for severe disease, no deaths have been reported for subjects in either vaccine or placebo groups. No trial to date has had sufficient power to examine this outcome. A post-licensure report (level III evidence) found 14 deaths temporally related to 9.7 million doses of varicella vaccine; of the five presented case reports, none had proven vaccine strain VZV.48 There is therefore no direct evidence to support or refute a risk reduction in varicella mortality consequent to use of varicella vaccine, although available evidence suggests a reduction is likely. Data for differences in hospitalisation rates are similarly lacking.
The protective efficacy of varicella vaccine has been determined in two placebo controlled RCTs in children. Weibel et al estimated a protective efficacy of 100% over nine months and 98% over seven years,26 27 while Varis et al found a protective efficacy of 72% over a mean of 29 months.28 A cohort study of vaccinated and unvaccinated children under 5 years found a vaccine effectiveness of 83%.42 For the RCTs, attack rates were 0–3% per year compared with 7–11% per year in placebo recipients, giving the number needed to treat (the number needed to vaccinate to prevent one case of varicella) as 5.5–11.8. Assuming complications occur in 1% of varicella cases,1 the number needed to vaccinate to prevent one complicated case of varicella is therefore 550–1180. Supportive evidence of a low annual attack rate in vaccinees is provided by other RCTs to four years (0.3–3.6%),29 30 49 and prospective cohort studies to 19.6 years (0.3–2.8%),31 32 34-41 44 45 including adolescents and adults to eight and six years respectively.36 37 44 45 Breakthrough disease may be more common in individuals who are seronegative prior to vaccination.50 51 Exposure to varicella and age less than 14 months at time of vaccination have also been shown to be risk factors for breakthrough disease.30
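The arithmetic behind these numbers is simple: the number needed to treat is the reciprocal of the absolute risk reduction, and preventing one complicated case requires 1/complication_rate times as many vaccinations as preventing one case of any severity. A minimal sketch, using the NNT range and the assumed 1% complication rate quoted above:

```python
def nnt(absolute_risk_reduction):
    """Number needed to vaccinate to prevent one case of varicella:
    the reciprocal of the absolute risk reduction."""
    return 1.0 / absolute_risk_reduction

def nnt_complicated(nnt_any_case, complication_rate):
    """Preventing one complicated case takes 1/complication_rate times
    as many vaccinations as preventing one case of any severity."""
    return nnt_any_case / complication_rate

# The review's NNT range of 5.5-11.8, combined with the assumed 1%
# complication rate, yields the quoted 550-1180 vaccinations needed
# to prevent one complicated case.
for n in (5.5, 11.8):
    print(round(nnt_complicated(n, 0.01)))  # prints 550, then 1180
```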
Tetravalent vaccines for prevention of measles, mumps, rubella, and varicella appear to have similar effectiveness against varicella to varicella vaccine given separately from measles/mumps/rubella vaccine (MMR) at 12–15 months (level of evidence: I,49 II-1,52-54 and II-2 47).
A wide range of vaccine doses have been utilised in studies examining vaccine effectiveness (table 1). One RCT showed no difference in vaccine effectiveness between doses varying from 439 to 3625 PFU,29 while another showed decreased effectiveness below 1260 PFU.28 The study showing no difference had a longer duration of follow up (mean 4.3 years compared to 29 and 35 months), but relied on self reporting of disease.29 Lim et al more recently showed that doses of 501–631 PFU resulted in breakthrough disease more commonly than doses of 7943–10 000 PFU.30
Protection against chickenpox is provided by a single injection in children, without further increase in protection with more doses (table 1). A direct comparison of vaccine effectiveness for one versus two injection regimens has not been performed in adolescents or adults. Available data in adolescents come from three prospective cohort studies using a single injection,36-38 and one RCT using two injections in all participants (at different intervals and doses).46 All three studies found evidence of protection (all level II-2 evidence). Similarly in adults, one small controlled trial indicates that a single injection offers protection (level II-1 evidence),43 while three prospective studies providing level II-1 and II-2 evidence suggest two injections given four or eight weeks apart are effective.44-46
The level of VZV antibody six weeks after vaccination appears to be correlated with effectiveness in preventing subsequent varicella to 10 years in children and adolescents (level II-2 evidence).32 38 High seroconversion rates of 94–100% have been shown six to eight weeks after a single VZV vaccination in children26 28 and two doses in adolescents and adults (level I evidence).46 55 A trial by Ndumbe et al suggests a single vaccination may result in less frequent seroconversion in adults (level II-2 evidence).43 This is supported by two prospective cohort studies which found 79–82% seroconversion after one dose in subjects older than 12 years compared with 94–100% after two doses.37 44 Duration of seroconversion has been shown to approach 100% for up to six years in children following a single dose of vaccine,27 29 and for two years in adolescents and adults following two doses (level I evidence).46
ADVERSE REACTIONS TO VACCINATION
RCTs in children show no increase in rates of fever or varicella like rash with varicella vaccination over placebo (table 2).26 28 56 One RCT found an increase in local reactions (mild and well tolerated) in vaccine recipients,26 while another smaller trial found no difference.56 Rates of fever varied from 0% to 36% depending on the definition of fever and the duration of follow up. Injection site reactions occurred in 7–30%, and less than 5% of vaccine and placebo recipients experienced a mild, varicella like rash. RCTs in adults give similar results.46 55 57 A higher dose in PFU appears not to result in a greater frequency of adverse reactions.21 29 58 Controlled trials comparing VZV vaccine alone with tetravalent MMR-VZV also show no increase in adverse reactions.47 49 52 56 Finally, a second dose of vaccine appears to cause fewer reactions than the first.31 46 57 No serious adverse reactions have been reported in controlled trials. Post licensure level III evidence is conflicting, with one review of 89 000 vaccinees belonging to a health maintenance organisation finding no serious reactions,59 while Wise et al found a temporally related serious adverse event rate of 2.9/100 000 doses.48
TRANSMISSION OF VARICELLA FROM VACCINATED INDIVIDUALS TO OTHERS
No clinical trials have shown transmission of vaccine related VZV between immunocompetent individuals. One placebo controlled RCT found seroconversion, but no disease in 3/439 placebo vaccinated siblings of 465 VZV vaccine recipients.26 Natural infection or subclinical spread of vaccine virus may have occurred. In a small controlled trial, Asano et al found no evidence of transmission or boosting in unvaccinated seronegative and seropositive close contacts.60 Finally, a prospective study of 37 vaccinated siblings of 30 cancer patients also found no evidence of varicella transmission.61 However, case reports of transmission have been reported rarely from adults and children with varicella like rash following vaccination.62-64 Brunell and Argaw recently reported transmission of vaccine strain virus from a vaccinated child with zoster to their vaccinated sibling, resulting in mild chickenpox.65 A post-licensure report using passive surveillance methods has also found very few cases of possible vaccine strain transmission (“mostly unconfirmed by PCR”) (level III evidence).48 While not a complication of vaccination, transmission of wild type virus (non-vaccine related) breakthrough disease has been reported between vaccinated siblings (rate 12.2%).36 Disease was mild in both primary and secondary cases.
There have been no clinical trials of VZV vaccination during pregnancy. One report of inadvertent administration in seven pregnant women (6–31 weeks gestation) describes delivery of two healthy infants of two completed pregnancies.66 As of March 2000, the Varivax in pregnancy registry had reports of 21 occurrences of inadvertent vaccination during pregnancy including these seven women. Of the 20 prospectively enrolled pregnancies, 16 have had birth outcomes: 14 pregnancies have resulted in normal infants and two have had spontaneous abortions (personal communication, Dr J Seward, Centers for Disease Control and Prevention, March 2000). Wise et al reported no cases of congenital varicella among infants of 87 women inadvertently vaccinated during pregnancy using a passive surveillance system (level III evidence).48 Although it is likely that the rate of vaccine VZV transmission in pregnancy is lower than that for wild type VZV, there are insufficient clinical data at this time to confirm whether the risks of vaccination are less than those of congenital varicella syndrome, zoster, and varicella from wild type VZV infection in pregnancy.
RISK OF HERPES ZOSTER FOLLOWING VACCINATION
Only one placebo controlled RCT has commented on the risk of zoster following vaccination: no cases were noted in either placebo or vaccine recipients after nine months (732 person years).26 A single prospective cohort study of children has reported a mild case of zoster in one of 854 children (duration of follow up unknown).67 Other cohort studies report no zoster for as much as 19 years 7 months, or 3277 person years after vaccination.33-35 39 41 68 69 However, isolated case reports in children have occurred. Two mild cases of zoster (no virus isolated) were reported in healthy children (aged 2 and 4 years) following vaccination with Oka/Merck vaccine,70 and a rate of 21 cases per 100 000 person-years was estimated for Oka/Merck recipients to that time, compared with an expected rate of 77 per 100 000 person-years in school aged children following natural chickenpox. In 1992, White estimated that 14 cases per 100 000 vaccinees (all mild) had occurred over nine years of Oka/Merck vaccination in the USA.71 A population based study over a longer period found a rate of 42 per 100 000 in unvaccinated children (20 per 100 000 in children under 5 years).72 Most recently, the US post-licensure Vaccine Adverse Event Reporting System suggests a rate of 2.6/100 000 vaccine doses distributed.73
Two adult cohort studies have described the occurrence of zoster six years after vaccination. Gershon et al vaccinated 187 varicella susceptible adults and reported one case of zoster caused by wild type virus after six years (1/1122 person years).44 74 Levin et al reported a rate similar to that expected in an unvaccinated population for persons over 55 years of age who had previously had varicella and received varicella immunisation (10/130 vaccinees or 1/100 person years).75 In all cases the disease was mild.
Of interest, a recent paper using mathematical modelling predicted a short to medium term increase in zoster after vaccination if exposure to varicella is important for preventing reactivation, although a reduction was likely in the longer term (level III evidence).76
Thus, there is fair evidence to suggest that there is a reduced incidence of herpes zoster in vaccinees. Evidence from studies of leukaemic vaccinees supports this statement.77-79
SHIFT IN AGE OF VARICELLA
There has been a trend towards increasing age of varicella infections over the 20 years preceding use of VZV vaccine.17 80 A theoretical risk of varicella vaccination is that routine VZV vaccination in children may increase this trend; that is, an upward shift in remaining varicella cases resulting in more adult varicella with higher complication rates, particularly if immunity in vaccinees is not long lasting. Mathematical models that assume exposure to varicella plays a role in maintaining immunity and preventing reactivation of VZV suggest that, under certain conditions, widespread vaccination of children could result in increased zoster in adults.81 Although the model of Halloran et al predicted a shift in age of remaining varicella cases towards older individuals (with higher complication rates), an overall reduction in the number of adult cases with decreased total morbidity and hospitalisations was predicted.23 A more extended model developed by Brisson et al also predicted a reduction in incidence and morbidity of varicella.76 However, clinical evidence is currently lacking to support some of the assumptions of these models, including the role of exposure to wild type varicella and of varicella vaccination in maintenance of long term protection against varicella and zoster in adults. Furthermore, several studies have shown that administration of varicella vaccine boosts cell mediated immunity to varicella in the elderly, including a recent RCT by Berger and colleagues.55 82-84 If widespread vaccine use results in decreased risk of exposure to varicella, vaccination of adults could be useful by boosting immunity. This view is supported by Krause and Klinman, who showed reactivation with a decrease in falling antibody titres after vaccination.51
COST EFFECTIVENESS DATA FOR VARICELLA VACCINE
No clinical trials have examined the cost effectiveness of VZV vaccination in healthy populations. Simulation studies examining both societal and health care costs associated with varicella have all found net cost savings with programmes for routine VZV vaccination directed at children aged 15 months.85-90 Lieu and colleagues,87 in a cost effectiveness study using morbidity and mortality data as well as projected data for vaccine impact,23 found a saving of $US5.40 for every dollar spent on routine vaccination of preschool children. Scuffham et al found a return of NZ$2.67 and $0.67 for each dollar invested, with and without inclusion of societal costs respectively.89 Simultaneous administration with MMR vaccine85 86 and additional catch up vaccination in children under 12 years may be even more cost effective.88 91
Accuracy of history in those with uncertain or negative history for varicella is an important determinant of cost effectiveness for VZV vaccination in older subjects.91 92 In a cross sectional survey of children whose clinicians had ordered varicella serotesting, Lieu et al found that for all children aged 7–8 years, and for 9–12 year olds with a negative or probable negative history of varicella (determined by parental telephone interview), presumptive vaccination was the most cost effective approach.93 However, for 9–12 year olds with an uncertain history of varicella, serotesting followed by vaccination of those negative for VZV was the most cost effective approach. Serotesting regardless of history was also found to be the most cost effective strategy for adolescents, although clinical effectiveness was somewhat less than with a presumptive vaccination strategy.91 Evidence of rising seronegativity in adults independent of country of origin suggests potential cost benefit from adult vaccination programmes in susceptible populations.18 Gray et al found serotesting of adult health care workers with a negative or uncertain history of varicella was the most cost effective approach to vaccination.94 This approach is also supported by mathematical models95 96 and a 1998 cohort study of American soldiers.92 Routine prenatal screening with postpartum vaccination of susceptible women may also be cost saving.97
METHODOLOGICAL QUALITY OF STUDIES
The quality of evidence in studies included in this analysis was generally good. However, a number of methodological issues were identified. Loss of subjects from analysis was sometimes considerable, particularly where the duration of follow up was seven years or more. This occurred in one RCT27 and several prospective cohort studies.34 35 68 69 Other trials relied on self reporting of VZV disease to investigators,29 46 49 52 while occasional studies followed only vaccinees who initially seroconverted.27 The only RCT examining the rate of herpes zoster in vaccinees was based on a very short period of follow up.26 These biases could potentially result in an overestimation of vaccine effectiveness by underestimating the true number of cases. However, outcomes across studies were consistent regardless of study design or duration of follow up, suggesting a true effect.
Study subjects were generally from upper middle class socioeconomic backgrounds. As varicella affects approximately 95% of individuals under 20 years living in a temperate climate,14 the generalisability of results is unlikely to be affected.
All cost effectiveness studies were based on simulations. Collection of data from clinical trials and from centres where vaccine use is now licensed would be needed to confirm basic assumptions of proposed models for vaccine and wild type VZV epidemiology and estimated costs of vaccination programmes. No clinical trials have examined hospitalisation rates or mortality as outcomes.
Because of the universality of infection, despite a relatively low complication rate, varicella is an important contributor to hospitalisations and mortality. This critical review has found strong evidence for the effectiveness of VZV vaccination in the prevention of varicella in children. Furthermore, vaccination appears to be cost effective, particularly when taken from a societal perspective. The quality of evidence in support of vaccination in adults is weaker, but in sum is also supportive of two injection regimens in susceptible individuals, who may be identified after confirmatory serological testing. Effectiveness data are required in adolescents and adults to clarify the optimal number of doses. The results of studies do not support the theoretical concerns that immunisation may lead to an increased incidence of herpes zoster or an unacceptable rate of transmission of infection from vaccinees. Although vaccination may increase the mean age of varicella, the overall reduction in the numbers of cases of adult varicella will probably offset this phenomenon. However, it will be important to monitor the epidemiology of varicella infection after introduction of widespread vaccination.
Our findings support current recommendations from the United States, Canada, and the World Health Organisation (WHO) (see table 3). The American Academy of Pediatrics and Immunization Practices Advisory Committee (ACIP) of the Centers for Disease Control and Prevention recommends that all children should be routinely vaccinated at 12–18 months of age; that children under 13 years should receive one vaccination; and that older individuals susceptible to varicella should be offered two vaccinations 4–8 weeks apart.98 99 The National Advisory Committee on Immunisation (Canada) recommends immunisation of all susceptible persons aged 12 months or greater, with similar dose regimens.22 A 1998 WHO position paper recommends that routine childhood immunisation against varicella be considered in countries where the disease is a relatively important public health and socioeconomic problem, where the vaccine is affordable, and where high (85–95%) sustained vaccine coverage can be achieved. Additionally, vaccine may be offered to adolescents and adults without a history of varicella.100
We acknowledge the contribution of the members of the Canadian Task Force on Preventive Health Care in providing guidance and feedback during the evidence review process. We thank also the following independent experts for reviewing a draft form of this report: Dr Anne Gershon, Division of Pediatric Infectious Diseases, Columbia Medical Center, New York; Dr Barbara Law, Department of Medical Microbiology, University of Manitoba, Winnipeg, MB; and Dr Tracy Lieu, Department of Ambulatory Care and Prevention, Harvard Pilgrim Health Centre and Harvard Medical School, Boston, Massachusetts. The views expressed in this report are those of the authors and do not necessarily reflect the position of the Canadian Task Force, nor those of the independent reviewers. Dr Wang now works at Aventis Pasteur, a vaccine manufacturer. The work was completed when Dr Wang's primary appointment was at The Hospital for Sick Children. The discussed vaccines are not produced by Dr Wang's employer.
Levels of evidence
Quality of published evidence:
I—Evidence from at least one well designed, randomised controlled trial
II-1—Evidence from well designed, controlled trials without randomisation
II-2—Evidence from well designed, cohort or case–control analytical studies, preferably from more than one centre or research group
II-3—Evidence from comparisons between times and places with or without the intervention; dramatic results from uncontrolled studies could also be included here
III—Opinions of respected authorities, based on clinical experience; descriptive studies or reports of expert committees.
2 Science starts with curiosity ...something that is born in all of us. The starting point is to find patterns in the natural world.
3 Seeing the Universe: Visible light is a half-tone range, where the full EM spectrum is the whole piano.
4 Sensitivity Improvement over the Eye [Chart: sensitivity improvement over the eye (factors of 10^2 to 10^12) versus year of observation, 1600–2000; milestones run from Galileo and the Huygens eyepiece through Short's 21.5", Herschel's 48", Rosse's 72", Mount Wilson 100", Mount Palomar 200", and the Soviet 6-m, with photographic and electronic detection leading to the Hubble Space Telescope.]
5 A Factor of Ten Billion: The largest telescope can see 10^11 times (100 billion x) fainter than the naked eye. WHERE DOES THIS GAIN COME FROM? The first factor is light gathering power: the ratio of a 10 m aperture to the eye's 1 cm (0.01 m) pupil, squared, a factor of 10^6.
6 The second factor is the efficiency of detecting photons: a gain of nearly 100% efficiency over the eye's 1% or so, a factor of 100. WHAT IS THE LAST FACTOR OF 1000? The eye must "read out" every 1/10 of a second, like a movie camera, to give the illusion of motion. A CCD, on the other hand, can integrate for hours before the image is read out.
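The three gain factors on these slides multiply together. A quick order-of-magnitude sketch (the aperture, efficiency, and integration-time values below are illustrative assumptions consistent with the slides, not measurements):

```python
# Order-of-magnitude check of the ~10^11 sensitivity gain of a large
# telescope over the naked eye, as claimed in the slides.

def sensitivity_gain(aperture_m=10.0, pupil_m=0.01,
                     qe_ccd=1.0, qe_eye=0.01,
                     t_ccd_s=100.0, t_eye_s=0.1):
    light_gathering = (aperture_m / pupil_m) ** 2  # area ratio: 10^6
    efficiency = qe_ccd / qe_eye                   # detection: 10^2
    integration = t_ccd_s / t_eye_s                # exposure:  10^3
    return light_gathering * efficiency * integration

print(f"{sensitivity_gain():.0e}")  # ~1e+11
```

Even a modest 100-second exposure already supplies the last factor of 1000 over the eye's 0.1-second "readout"; hours of integration push the gain further still.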
7 The Copernican Revolution The history of astronomy displaces us from cosmic importance
8 Estimation: Scientists often use estimation or order of magnitude calculations in their work. Often it is not possible, or necessary, to derive very accurate numbers. This is particularly true in astronomy, where the objects under consideration are usually very faint and very far away. x10 accuracy suffices for most exploratory calculations; x2 accuracy for most numbers in cosmology; 10% accuracy for the best-measured parameters.
9 DEDUCTION: Deduction takes statements or premises and combines them to reach a conclusion. The conclusion is valid only if the premises are justified and the logical construction is correct. Deduction preserves truth but doesn't always expand knowledge. i.e. symbolic logic, arithmetic, algebra: 2 + 2 = 4.
10 INDUCTION: Induction involves a generalization from a limited amount of data to a broad conclusion. Induction cannot yield certainty, but backed by a lot of data, it gives reliable conclusions. Induction can expand knowledge, so it is a basic tool of science. i.e. data is always finite, so theories are always subject to verification.
11 Science Limitations: Uncertainty, imprecision, and error arise three different ways. CONCEPTUAL: making a false premise, confusing correlation with causation, inferring a pattern where none is present. MACROSCOPIC: there is no such thing as perfect data; every data set is limited and every instrument has limitations. MICROSCOPIC: Heisenberg's uncertainty principle sets a fundamental limit to precision for measurement of particle position and velocity, or energy and time.
12 Science is Evidence. Evidence is: based on data, reproducible, quantitative, not subjective, never perfect.
13 The Importance of Evidence: There is no science without evidence. All assertions must be supported by data. Every claim in science is subject to verification. Science is data-driven, so progress is made by: 1. Gathering more data (GOOD!), 2. Repeating the experiment (BETTER!!), 3. Someone else repeating the experiment (BEST!!!).
15 Theory: a model which survives repeated testing. Science seeks robust explanations for observed phenomena that rely solely on natural causes. Science progresses by creating and testing models of nature that explain the observations as simply as possible, guided by Occam's Razor (there may be more than one explanation for any particular data set). A scientific model must make testable predictions that may force us to revise or abandon the model. Plus, the role of luck and persistence: science is a very human enterprise!
16 For % of the universe, including all stars and all galaxies, the evidence is indirect.
18 Distance Units: The typical distance between stars is 1 pc = 3.26 light years, or about 31 trillion km. The typical distance between galaxies is 1 Mpc = 10^6 pc, or 3 million light years; an incredible 10^19 km. The size of the observable universe is about 10 Gpc = 10^10 pc, or 30 billion light years; an unimaginable 10^23 km.
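These distances can be sanity-checked from the standard conversion factors (1 pc = 3.26 light years = 3.086e13 km); the loop below is just such a check, not part of the original slides:

```python
# Verify the distance-unit ladder: pc -> light years and km.
KM_PER_PC = 3.086e13   # kilometers per parsec
LY_PER_PC = 3.26       # light years per parsec

for name, pc in [("1 pc", 1.0), ("1 Mpc", 1e6), ("10 Gpc", 1e10)]:
    print(f"{name}: {pc * LY_PER_PC:.3g} ly = {pc * KM_PER_PC:.3g} km")
# 1 Mpc comes out at ~3e6 ly and ~3e19 km; 10 Gpc at ~3e10 ly and ~3e23 km,
# matching the slide's round numbers.
```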
19 THE UNIVERSE AND US: Us, Earth, Solar System, Milky Way, Universe, Multiverse?
20 A Scale Model: Set the Earth to the size of a walnut, a 1:10,000,000 scale model. The Moon is a pea at arm's length. The Sun is a 3 m ball 100 m away. Neptune is another pea 2 km away. The nearest star is 50,000 km away.
21 And at this scale, light is reduced to very slow walking speed. There's no way information in the universe can travel any faster. The Moon is a second's walk away. The Sun is 8 minutes' walk away. It takes 10 hours to walk the Solar System, and a year to walk to the nearest stars.
22 Reduce the scale by a factor of 100,000,000: the Solar System is a grain of sand; the distance between stars is 10 m; the Milky Way is the size of India. The Milky Way has 100,000,000,000 stars.
23 Now reduce by another factor of 100,000,000: the Milky Way is the size of a plate; the nearest galaxy is 10 m away; the universe is the size of India, with billions of galaxies within this space.
24 How Empty is Space? A one-inch cube of the air you're breathing holds 10^20 atoms. The average density of the universe is 10^22 times lower, about 1 atom per cubic meter.
25 Lookback Time: If the speed of light were infinite, light from everywhere in the universe would reach us at exactly the same time, and we would see the entire universe as it is now. But it is not, so we see distant regions as they were in the past. Distant Light = Old Light.
26 How can we know what the universe was like in the past? Light travels at a finite speed (300,000 km/s). Thus, we see objects as they were in the past: the farther away we look in distance, the further back we also look in time. Light travel times: Moon, 1 second; Sun, 8 minutes; Sirius, 8 years; Andromeda (M31), 2.5 million years. Point out how fast the speed of light is: it could circle Earth 8 times in one second. Also note that the speed of light is always the same.
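The light travel times in the table can be recomputed directly from distance over speed; the distances below are standard approximate values (Moon 384,400 km; Sun 1.496e8 km), not taken from the slides:

```python
# Light travel time = distance / c, reproducing the lookback-time table.
C_KM_S = 299_792.458  # speed of light in km/s

def travel_time_s(distance_km):
    """Seconds for light to cross the given distance."""
    return distance_km / C_KM_S

print(f"Moon: {travel_time_s(3.844e5):.1f} s")       # ~1.3 s
print(f"Sun:  {travel_time_s(1.496e8)/60:.1f} min")  # ~8.3 min
```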
35 How do our lifetimes compare to the age of the universe? The Cosmic Calendar: a scale on which we compress the 13.7 billion year history of the universe into 1 year. This is a time scale model that uses a scale factor of about 14,000,000,000:1. Our lives scale similarly, so 80 years goes down by a factor of 14 billion too: in the scale model, a human life lasts about 2 tenths of a second! Our favorite way to present the scale of time: a modified version of Carl Sagan's Cosmic Calendar. Worth noting: since we are compressing the 14 billion-year history of the universe into one calendar year, 1 month represents about 1.2 billion real years, 1 day about 40 million years, and 1 second about 440 years. The universe was already 2/3 of the way through its history before our solar system even formed. Dinosaurs arose the day after Christmas and died yesterday. All of (recorded) human history is in the last 30 seconds. You and I were born about 0.05 seconds before midnight, Dec. 31.
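The Cosmic Calendar arithmetic is a single ratio, and the "2 tenths of a second" figure falls straight out of it:

```python
# Cosmic Calendar: compress 13.7 billion years into one calendar year
# and see how long other intervals last on that scale.
AGE_YR = 13.7e9          # age of the universe, years
S_PER_YR = 3.156e7       # seconds in a year

def scaled_seconds(real_years):
    """Duration on the cosmic calendar, in seconds."""
    return real_years / AGE_YR * S_PER_YR

print(f"80-year human life: {scaled_seconds(80):.2f} s")  # ~0.18 s
print(f"recorded history (5000 yr): {scaled_seconds(5000):.0f} s")
```

One calendar second corresponds to AGE_YR / S_PER_YR, a bit over 400 real years, matching the slide's "1 second represents about 440 years".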
36 TIME SENSE [Diagram spanning scales: Black Holes, Arrow of Time, Sentient Life, Lots of Atoms, Single Atoms, No Arrow of Time.]
37 The early universe expanded much faster than the speed of light, so there are objects and large regions of space we have never seen. [Plot: size of the universe versus time, compared with the speed of light.] This violates no law of physics, since the cosmic expansion is governed by general relativity, which sets no limit on the speed of expanding space.
38 Hubble Expansion: Galaxy spectra show redshifts, where all the spectral features shift to longer wavelengths. The amount of the shift increases with growing distance: more distant galaxies are moving away faster. This linear relation was discovered by Edwin Hubble back in 1929.
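Hubble's linear relation is v = H0 x d. The value of H0 below (70 km/s per Mpc) is an assumed, commonly quoted modern figure, not one given on the slide (Hubble's own 1929 estimate was several times higher):

```python
# Hubble's law: recession velocity grows linearly with distance.
H0 = 70.0  # assumed Hubble constant, km/s per Mpc

def recession_velocity(d_mpc):
    """Recession velocity in km/s for a galaxy at distance d_mpc (Mpc)."""
    return H0 * d_mpc

for d in (10, 100, 1000):
    print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")
```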
39 The redshift is not a Doppler shift; it is due to the expansion of space itself. Photons are stretched.
40 Galaxies are all moving away from each other, so every galaxy sees the same Hubble expansion, i.e. there is no center. The cosmic expansion is the unfolding of all space since the big bang, i.e. there is no edge. We are limited in our view by the time it takes distant light to reach us, i.e. the universe has an edge in time, not space.
41 Nature of the Expansion: Space really does expand, like the material of a balloon. The balloon surface area is finite but unbounded. The universe is close to flat, so imagine a large balloon with little curvature. Galaxies are held together by gravity and do not expand, so imagine coins glued to the balloon. Photons in this 2D space have their wavelengths stretched, or redshifted, by the expansion as they travel.
42 Dark matter binds galaxies and dark energy drives cosmic acceleration.
43 Nature of the Expansion Early expansion is rapid, driven by radiation. It slows as dark matter begins to dominate and more recently has begun to accelerate due to dark energy.
47 A sand grain of diameter 0.5 mm weighs about 0.2 milligrams. The sand is SiO2, molecules 60 times the hydrogen mass. How many? About 10^19 atoms.
48 A normal monk, one who does not like "momos" too much, weighs about 50 kg. Monks are made of water, H2O, molecules 18 times the hydrogen mass. NOTE: EVERYTHING THAT HAS MASS EXPERIENCES THE GRAVITY FORCE, INCLUDING ATOMS. HOWEVER, THE BEHAVIOR OF ALL SMALL OBJECTS, SUCH AS MONKS AND MOUNTAINS, IS GOVERNED BY THE FAR STRONGER ELECTRIC FORCE BETWEEN ATOMS. FOR ANY OBJECT WITH MORE THAN ABOUT 10^45 ATOMS, OR A SIZE OF ABOUT 100 KILOMETERS, GRAVITY BECOMES DOMINANT. SO GRAVITY DRIVES THE BEHAVIOR OF PLANETS, STARS, GALAXIES, AND THE UNIVERSE. How many? About 10^28 atoms.
49 One solar mass is 2 x 10^30 kg, an enormous factor larger than a hydrogen atom at about 1.7 x 10^-27 kg. Earth is 330,000 times less massive. How many? About 10^57 atoms.
50 How Many? The typical galaxy contains 10^12 stars: about 10^69 atoms. The whole universe contains 10^11 galaxies: about 10^80 atoms.
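The atom-counting ladder on slides 48–50 is just mass divided by particle mass, scaled up. A sketch using round assumed values (hydrogen mass 1.67e-27 kg; a 50 kg monk as pure water; the Sun as pure hydrogen):

```python
# Reproduce the "how many atoms?" ladder from monk to universe.
import math

M_H = 1.67e-27  # hydrogen atom mass, kg

monk_atoms = 50.0 / (18 * M_H) * 3       # 50 kg of H2O, 3 atoms/molecule
sun_atoms = 2e30 / M_H                   # one solar mass of hydrogen
galaxy_atoms = sun_atoms * 1e12          # 10^12 stars per galaxy
universe_atoms = galaxy_atoms * 1e11     # 10^11 galaxies

for label, n in [("monk", monk_atoms), ("Sun", sun_atoms),
                 ("galaxy", galaxy_atoms), ("universe", universe_atoms)]:
    print(f"{label}: 10^{math.log10(n):.0f} atoms")
# monk ~10^28, Sun ~10^57, galaxy ~10^69, universe ~10^80
```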
51 What is Dark Matter? THE SHORT ANSWER IS: WE DON'T KNOW. BUT SEVERAL LINES OF EVIDENCE INDICATE 10X MORE INVISIBLE THAN VISIBLE MATTER. The rotation speed of galaxies does not decline with radius, violating Kepler's law unless there is a halo of unseen matter.
52 Light from distant galaxies is bent by an intervening cluster to form little arcs. The amount of bending indicates a lot of unseen matter in the cluster. Light from all distant galaxies is very slightly distorted and bent as it travels through the "sea" of dark matter. With the best images, these distortions of 0.1% in shape can be seen.
53 Why are astronomers so confident that dark matter really exists? Because the law of gravity has passed so many tests, and if we put dark matter into computer simulations, we evolve structure that looks just like the universe. So far, we can only rule items out. Stars (normal matter): a census of stars does not allow it. MACHOs (sub-stars and planets): gravitational lensing rules it out. Black holes (dark, collapsed stars): no sign of preceding supernovae. Dust (dust up to rocks): re-radiation in infrared not seen. Which leaves: weakly interacting particles, a supersymmetric extension to the standard model.
54 Experiments in the 1960’s and 1970’s showed that, just as atoms are not simple and fundamental, so protons and neutrons are made of much smaller particles that were named quarks.
55 This scheme has multiple generations of particles and their anti-particles, so it is not very elegant or simple. This has led physicists to suppose that there may be an even deeper level of sub-atomic structure
56 TOP DOWN [Diagram: Universe, Objects, Molecules, Atoms, Dark Matter.]
57 String Theory: String theory postulates dynamic 1-dimensional entities that are only noticeable on scales of 10^-35 meters, some 25 orders of magnitude smaller than atoms!
58 In string theory, the smoothness and the emptiness of space are illusions. If we could imagine ourselves at the incredibly tiny Planck scale, 10^-35 meters, we would see a chaotic version of space-time. At every point, the six hidden dimensions that are not apparent in the everyday world would be manifested...
60 Four Forces. Relative strength and range: gravity 10^-38 (long range), weak 10^-19 (subatomic range), electromagnetic 0.0073 (long range), strong 1 (subatomic range).
61 The forces are associated with particular families of particles. But just as these particles are secondary manifestations of strings, the individual forces are manifestations of a single underlying "superforce".
63 Energy is a very broad concept. It is anything that can make matter move or change. Energy changes forms constantly but is not created or destroyed: this is a law of physics. Use this figure to define the nucleus; protons, neutrons, electrons; scale of atom and "electron cloud."
64 Energy can be kinetic, the overall motion of an object. Energy can be radiant, light or other electromagnetic waves. Energy can be potential, stored in a number of ways: chemical bonds, electric fields, magnetic fields, gravity fields, elastic (materials).
65 Light is an electromagnetic wave Use this slide to define wavelength, frequency, speed of light.
66 Light is a Particle. Photons are "pieces" of light, each with a precise wavelength, frequency, and energy. Think of photons as tiny bullets, localized in space. Photon energy is proportional to the frequency of the wave. Within the visible spectrum, blue light has higher energy than red light. Within the electromagnetic spectrum, X-rays have the highest energy, followed by UV, visible light, IR, and radio. Remember: light is just one form of electromagnetic wave of energy, the kind we can detect with our eyes.
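The proportionality between photon energy and frequency is E = h*f, or equivalently E = h*c/lambda; the wavelengths below (700 nm red, 450 nm blue) are typical assumed values:

```python
# Photon energy E = h*c/lambda: blue photons carry more energy than red.
H = 6.626e-34  # Planck constant, J*s
C = 3.0e8      # speed of light, m/s

def photon_energy_j(wavelength_nm):
    """Energy in joules of a single photon of the given wavelength."""
    return H * C / (wavelength_nm * 1e-9)

red = photon_energy_j(700)
blue = photon_energy_j(450)
print(f"red:  {red:.2e} J")
print(f"blue: {blue:.2e} J")
print(f"blue/red energy ratio: {blue/red:.2f}")  # = 700/450 ~ 1.56
```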
67 Our first key idea is that visible light is only a small part of the complete spectrum of light. You may wish to spend some time explaining the various things shown in this figure… You may also want to repeat this slide at various points to summarize other ideas.
68 If you pass white light through a prism, it separates into its component colors, from long wavelengths to short: ROY G BIV (the spectrum).
69 Light Interacts with Matter: emission, absorption, transmission, reflection or scattering. Everything we know about the universe is a result of these effects. Briefly explain the 4 major interaction processes. Terminology: transparent = transmits light; opaque = blocks (absorbs) light.
70 Atomic Energy Levels: Electrons in every atom have distinct energy levels. Each chemical element, ion, or molecule has a unique set of energy levels. This slide represents the first introduction to quantized energy levels.
71 Distinct energy levels lead to distinct emission or absorption lines (Hydrogen Energy Levels). Emission: atom loses energy. Absorption: atom gains energy.
72 Chemical Fingerprints: Atoms, ions, and molecules have unique spectral "fingerprints". We identify chemicals in a gas by their spectral fingerprints. With additional physics, we can figure out abundances of the chemicals, and often temperature, pressure, and much more.
73 Types of Spectra: a hot, dense energy source viewed through a prism gives a continuous spectrum; a hot, low density cloud of gas gives an emission line spectrum; a hot, dense energy source seen through a cooler, low density cloud of gas gives an absorption line spectrum.
74 Anywhere in the universe, atoms and molecules are always in constant, microscopic motion. Temperature is a measure of the average kinetic energy of the particles in a substance (cooler = slower, hotter = faster). Students sometimes get confused when we've said there are 3 basic types of energy (kinetic, potential, radiative) and then start talking about subtypes, so be sure they understand that we are now dealing with subcategories.
75 All the atoms and molecules in the universe are in constant (invisible) microscopic motion or vibration: thermal energy. As a result, every substance emits a smooth spectrum of radiation, mostly at invisible infrared wavelengths: thermal radiation.
77 Mass-Energy: Another way to think about this is that the energy that holds the helium nucleus together has a tiny amount of equivalent mass, and that energy gets released in fusing hydrogen to helium. E = mc^2: a small number (m) times a huge number (c^2) gives a big number (E).
78 When 0.7% of the mass of a hydrogen atom is converted to radiant energy, it is a huge amount relative to the mass involved. The mass-energy in the ink in the dot at the end of a sentence in a book could power a typical family home for an entire year.
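Putting numbers to the 0.7% figure with E = mc^2 (the 1 kg example mass is an illustrative assumption, not from the slides):

```python
# E = mc^2 for the 0.7% of hydrogen mass released by fusion to helium.
C = 3.0e8  # speed of light, m/s

def fusion_energy_j(mass_kg, fraction=0.007):
    """Energy released when `fraction` of the mass is converted."""
    return fraction * mass_kg * C ** 2

e = fusion_energy_j(1.0)
print(f"1 kg of hydrogen fused: {e:.1e} J")  # 6.3e+14 J
```

That is hundreds of terajoules per kilogram, which is why a tiny mass deficit powers the Sun for billions of years.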
79 What is Dark Energy? THE SHORT ANSWER IS: WE DON'T KNOW. BUT ONE OBSERVATION OF DISTANT SUPERNOVAE POINTED TO A COSMIC ACCELERATION.
80 Expansion History of the Universe [Hubble diagram: redshift cz (km/s, 100 to 300,000) versus distance (Mpc, 1 to 10,000); "constant or faster in past" was expected, but the data show the expansion was slower in the past (a big surprise). Riess, Press & Kirshner (1996); Riess et al. (1998); Perlmutter et al. (1999).]
81 Einstein’s Theory: General Relativity 3Riess et al. 1998Perlmutter et al. 1999No Big Bang2Strength of cosmological constant, LRiess et al. 2004Tonry et al. 20038 HST SN Ia z > 1If the acceleration is caused by Einstein’s cosmological constant, HST data on 8 SN Ia have increased our cosmology knowledge by a factor of 71AcceleratingI also had a chance to recalculate constraints in the omega_M,omega_l spaceThe new constraints are about 5 times more precise than the ones either team published in 1998There is a lot more we can do with this data and more like it in the future to understand the nature of dark energyxDeceleratingClosedOpen12Strength of matter
82 Dark energy is much more mysterious than even dark matter. Its existence rests on the unexpectedly faint distant supernovae, and a few less direct arguments. The direct detection of dark energy is very challenging. Dark energy is a repulsive force that counters gravity. It does not change its strength with time (Einstein's cosmological constant "blunder"). Physics provides no assistance: the vacuum of space could have energy in quantum theory, but it would be 10^80 times larger than is observed! The densities of dark energy and dark matter are roughly equal, and this is the only time in the history of the universe that this is true: is this a coincidence?
90 The underlying unity suggested by string theory and the unification of forces is only realized in the big bang itself. [Timeline: the first instant after the big bang event versus most of the history of the universe.]
91 A few dimensionless parameters govern the behavior of the universe: matter density, energy density, fine structure constant, entropy per baryon, dielectric constant, number of space dimensions. A few pure numbers occur over and over throughout mathematics.
92 The 92 stable elements in the periodic table lead to almost infinite complexity. Life uses only about 20.
93 Cosmological: The universe was initially very smooth; over time, complex structures grew by the action of gravity.
94 Quantum fluctuations are a mechanism for multiple realizations of the universe …leading to the concept of the “multiverse”
95 More than just this… LEVEL 1: regions we cannot see in the big bang model. LEVEL 2: many bubbles of space-time, unobservable by us, with different properties. LEVEL 3: indeterminacy and quantum variation. LEVEL 4: mathematical forms, multi-dimensional space-times, 10 preferred.
96 String Theory Landscape: Perhaps 10^500 different vacua. de Sitter expansion in these vacua creates quantum fluctuations and provides the initial conditions for inflation. String theory provides context for the "multiverse".
98 [Diagram: Knowing and Meaning, with Space, Life, Time, Structure, Matter, and Energy arranged in a cycle numbered 1–8.]
99 As creatures who occupy a tiny portion of time and space, we have learned much about our universe. But many important questions are still unanswered: WHAT IS TIME? WHAT IS SPACE? WHAT IS MATTER? WHAT CAUSED THE BIG BANG? IS THE UNIVERSE UNIQUE? ARE WE ALONE?
100 In a universe with ten thousand billion billion stars, and a likely myriad of life forms, we're special in some ways, yet we are not in a cosmic sense. This leads to another big question: WHY ARE WE HERE?
101 Anthropic Principle: Brandon Carter presented the "anthropic principle" in 1973 in Poland during the 500th birthday of Nicolaus Copernicus. The idea seems to subvert the sense that we are not special, by elevating the role of intelligent observers in the universe to central importance. The weak form of the anthropic principle states that we can only observe a universe with properties such that intelligent observers exist. This is self-evident and little more than a tautology. The strong form of the anthropic principle states that the universe has to be the way it is because intelligent observers exist. This is much more audacious because it implies a special role for life.
102 Conditions for Life: Stars of the right type for sustaining life-supportable planets can only occur during a certain range of ages for the universe, and stars of the right type can only form for a narrow range of values of the gravitational constant. Living cells consist of light and heavy elements (hydrogen, carbon, oxygen, and metals such as iron, copper, etc.). To make both the light and heavy elements in the correct proportions, the strengths of the various fundamental forces must lie within a very narrow range of values. But does this place too specific a requirement on life? Perhaps life just needs disequilibrium chemistry and an energy source, not necessarily carbon and a star.
103 Fundamental Forces. Gravitational force: attractive force between all objects with mass; weakest, long range. Electromagnetic force: attractive and repulsive; long range, 10^39 times stronger than gravity. Nuclear weak force: causes neutrons to decay into protons; range < 10^-17 m, 10^28 times stronger than gravity. Nuclear strong force: holds the nucleus together; range < 10^-15 m, 10^41 times stronger than gravity.
104 Coincidences: Some physical coincidences are noteworthy and so beg for an explanation. All the seemingly arbitrary, unrelated constants in physics have one strange thing in common: they have just the values that would create a universe capable of sustaining life. In other words, our universe could have quite different values of the fundamental forces and it would be physically sensible, but it would contain no carbon-based life forms.
105 Fine-Tuning of Forces. Gravitational force: a bit stronger, and stars have rapid, unstable lives; a bit weaker, no supernovae, so no heavy elements. Electromagnetic force: a bit stronger, no shared electrons, no chemistry; a bit weaker, atoms cannot hold their electrons. Nuclear weak force: a bit stronger, neutrons all decay, no heavy elements; a bit weaker, all hydrogen converted to inert helium. Nuclear strong force: a bit stronger, nuclear reactions too efficient, H to Fe; a bit weaker, electrical repulsion splits apart nuclei.
106 Cosmological Fine-Tuning: The following incredibly precise tweaking of the universe is known as the flatness-oldness problem. The critical density is the matter density just required to eventually overcome the expansion of the big bang. If X is the critical density, what is the actual density? It could have any value, but the matter density has a huge impact on the evolution of the universe. Only a value relatively close to the critical value leads to an old and flat universe.
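The critical density itself can be computed from the standard general-relativity formula rho_c = 3*H0^2 / (8*pi*G), which is not given on the slide; the H0 value below is an assumed 70 km/s/Mpc:

```python
# Critical density of the universe: rho_c = 3 H0^2 / (8 pi G).
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22    # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"critical density: {rho_c:.2e} kg/m^3")           # ~9e-27
print(f"~{rho_c / 1.67e-27:.1f} hydrogen atoms per m^3")
```

The answer, a few hydrogen atoms per cubic meter, is the yardstick against which "below critical" (open) and "above critical" (closed) are measured.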
107 OPEN, FLAT, CLOSED: If the density is much below critical, early expansion is too rapid for stars and galaxies to form, so no life. If the density is much above critical, the universe will recollapse quickly, with not enough time for stellar evolution to create carbon, and once again, no life. The matter density is only about 1/4 critical; the other major component affecting the expansion is dark energy, which leads to another issue related to fine-tuning….
108 Multiverse Redux (Anthropic Principle): Our universe emerged from a quantum space-time foam at the Planck epoch. Other universes may have been spawned this way, with physical properties that are randomly different. Most of the past, present, and future universes in the multiverse would be inhospitable to life. Ours is just a mediocre member of the ensemble.
109 Applying Logic: Is there really a logical basis for anthropic arguments about life? 1. We shouldn't be surprised to see features of the universe that are compatible with our existence. 2. We should be surprised not to see features of the universe that are incompatible with our existence. 1 is true, but 2 does not follow from it. This universe has special features, like a double six thrown with dice. The multiverse hypothesis is akin to speculating that there are many possible outcomes, and ours is "double six". A double six will occur eventually in a long sequence of throws, sequential or parallel. This is the "inverse gambler's fallacy": the odds of double six are always 1 in 36, so the supposition above doesn't explain it.
110 Epistemology: Let's look at the strange conceptual journey we have just followed. We can only observe a universe that is capable of creating observers like us. Some features of the universe are very finely-tuned around the existence of life. But is this an unduly anthropocentric view of life, based on stars and carbon? Fine-tuning might be due to happenstance, providence, or self-selection in a multiverse. Quantum creation and string theory give the context for the multiverse ensemble. But these theories are not yet well-tested, and other universes are unobservable. And how do we assign likelihood or probability on an infinite set of hypothetical universes? Scientific method?
111 Sentience and mortality define the human condition. The physical parameters of nature and the universe are tuned to values that allow carbon-based life. The big bang allows for other universes and other realities, but most of these might be devoid of life. The universe is "built for life" in a profound way, but this begs the question of the definition again. Is it self-selection, coincidence, or evidence of design? We share a planet with other sentient life forms, and it's very likely there is sentience elsewhere. Our power carries moral responsibility and obligation.
Book of Mormon
The Book of Mormon is a sacred text of the Latter Day Saint movement, which adherents believe contains writings of ancient prophets who lived on the American continent from approximately 2200 BC to AD 421. It was first published in March 1830 by Joseph Smith as The Book of Mormon: An Account Written by the Hand of Mormon upon Plates Taken from the Plates of Nephi.
According to Smith's account and the book's narrative, the Book of Mormon was originally written in otherwise unknown characters referred to as "reformed Egyptian" engraved on golden plates. Smith said that the last prophet to contribute to the book, a man named Moroni, buried it in Cumorah Hill in present-day New York, then returned to Earth in 1827 as an angel, revealing the location of the plates to Smith, and instructing him to translate it into English for use in the restoration of Christ's true church in the latter days. Critics claim that it was fabricated by Smith, drawing on material and ideas from contemporary 19th-century works rather than translating an ancient record.
The Book of Mormon has a number of original and distinctive doctrinal discussions on subjects such as the fall of Adam and Eve, the nature of the Atonement, eschatology, redemption from physical and spiritual death, and the organization of the latter-day church. The pivotal event of the book is an appearance of Jesus Christ in the Americas shortly after his resurrection.
The Book of Mormon is the earliest of the unique writings of the Latter Day Saint movement, the denominations of which typically regard the text primarily as scripture, and secondarily as a historical record of God's dealings with the ancient inhabitants of the Americas. The Book of Mormon is divided into smaller books, titled after the individuals named as primary authors and, in most versions, divided into chapters and verses. It is written in English very similar to the Early Modern English linguistic style of the King James Version of the Bible, and has since been fully or partially translated into 108 languages. As of 2011, more than 150 million copies of the Book of Mormon had been published.
According to Joseph Smith, he was seventeen years of age when an angel of God named Moroni appeared to him and said that a collection of ancient writings was buried in a nearby hill in present-day Wayne County, New York, engraved on golden plates by ancient prophets. The writings were said to describe a people whom God had led from Jerusalem to the Western hemisphere 600 years before Jesus' birth. According to the narrative, Moroni was the last prophet among these people and had buried the record, which God had promised to bring forth in the latter days. Smith stated that this vision occurred on the evening of September 21, 1823 and that on the following day, via divine guidance, he located the burial location of the plates on this hill; was instructed by Moroni to meet him at the same hill on September 22 of the following year to receive further instructions; and that, in four years from this date, the time would arrive for "bringing them forth", i.e., translating them. Smith's description of these events recounts that he was allowed to take the plates on September 22, 1827, exactly four years from that date, and was directed to translate them into English.
Accounts vary of the way in which Smith dictated the Book of Mormon. Smith himself implied that he read the plates directly using spectacles prepared for the purpose of translating. Other accounts variously state that he used one or more seer stones placed in a top hat. Both the special spectacles and the seer stone were at times referred to as the "Urim and Thummim". During the translating process itself, Smith sometimes separated himself from his scribe with a blanket between them. Additionally, the plates were not always present during the translating process, and when present, they were always covered up.
Smith's first published description of the plates said that the plates "had the appearance of gold". They were described by Martin Harris, one of Smith's early scribes, as "fastened together in the shape of a book by wires." Smith called the engraved writing on the plates "reformed Egyptian". A portion of the text on the plates was also "sealed" according to his account, so its content was not included in the Book of Mormon.
In addition to Smith's account regarding the plates, eleven others stated that they saw the golden plates and, in some cases, handled them. Their written testimonies are known as the Testimony of Three Witnesses and the Testimony of Eight Witnesses. These statements have been published in most editions of the Book of Mormon.
Smith enlisted his neighbor Martin Harris as a scribe during his initial work on the text. (Harris later mortgaged his farm to underwrite the printing of the Book of Mormon.) In 1828, Harris, prompted by his wife Lucy Harris, repeatedly requested that Smith lend him the current pages that had been translated. Smith reluctantly acceded to Harris's requests. Lucy Harris is thought to have stolen the first 116 pages. After the loss, Smith recorded that he had lost the ability to translate, and that Moroni had taken back the plates to be returned only after Smith repented. Smith later stated that God allowed him to resume translation, but directed that he begin translating another part of the plates (in what is now called the Book of Mosiah). In 1829, work resumed on the Book of Mormon, with the assistance of Oliver Cowdery, and was completed in a short period (April–June 1829). Smith said that he then returned the plates to Moroni upon the publication of the book. The Book of Mormon went on sale at the bookstore of E. B. Grandin in Palmyra, New York on March 26, 1830. Today, the building in which the Book of Mormon was first published and sold is known as the Book of Mormon Historic Publication Site.
Since its first publication and distribution, critics of the Book of Mormon have claimed that it was fabricated by Smith and that he drew material and ideas from various sources rather than translating an ancient record. Works that have been suggested as sources include the King James Bible, The Wonders of Nature, View of the Hebrews, and an unpublished manuscript written by Solomon Spalding. FairMormon maintains that all of these theories have been disproved and discredited, arguing that both Mormon and non-Mormon historians have found serious flaws in their research. The position of most adherents of the Latter Day Saint movement and the official position of The Church of Jesus Christ of Latter-day Saints (LDS Church) is that the book is an accurate historical record.
Smith stated that the title page, and presumably the actual title of the 1830 edition, came from the translation of "the very last leaf" of the golden plates, and was written by the prophet-historian Moroni. The title page states that the purpose of the Book of Mormon is "to [show] unto the remnant of the house of Israel what great things the Lord hath done for their fathers; ... and also to the convincing of the Jew and Gentile that Jesus is the Christ, the eternal God, manifesting himself unto all nations."
The Book of Mormon is organized as a compilation of smaller books, each named after its main named narrator or a prominent leader, beginning with the First Book of Nephi (1 Nephi) and ending with the Book of Moroni.
The book's sequence is primarily chronological based on the narrative content of the book. Exceptions include the Words of Mormon and the Book of Ether. The Words of Mormon contains editorial commentary by Mormon. The Book of Ether is presented as the narrative of an earlier group of people who had come to America before the immigration described in 1 Nephi. First Nephi through Omni are written in first-person narrative, as are Mormon and Moroni. The remainder of the Book of Mormon is written in third-person historical narrative, said to be compiled and abridged by Mormon (with Moroni abridging the Book of Ether).
Most modern editions of the book have been divided into chapters and verses. Most editions of the book also contain supplementary material, including the "Testimony of Three Witnesses" and the "Testimony of Eight Witnesses".
The books from First Nephi to Omni are described as being from "the small plates of Nephi". This account begins in ancient Jerusalem around 600 BC. It tells the story of a man named Lehi, his family, and several others as they are led by God from Jerusalem shortly before the fall of that city to the Babylonians in 586 BC. The book describes their journey across the Arabian peninsula, and then to the promised land, the Americas, by ship. These books recount the group's dealings from approximately 600 BC to about 130 BC, during which time the community grew and split into two main groups, which are called the Nephites and the Lamanites, that frequently warred with each other.
Following this section is the Words of Mormon. This small book, said to be written in AD 385 by Mormon, is a short introduction to the books of Mosiah, Alma, Helaman, Third Nephi, and Fourth Nephi. These books are described as being abridged from a large quantity of existing records called "the large plates of Nephi" that detailed the people's history from the time of Omni to Mormon's own life. The Book of Third Nephi is of particular importance within the Book of Mormon because it contains an account of a visit by Jesus from heaven to the Americas sometime after his resurrection and ascension. The text says that during this American visit, he repeated much of the same doctrine and instruction given in the Gospels of the Bible and he established an enlightened, peaceful society which endured for several generations, but which eventually broke into warring factions again.
The portion of the greater Book of Mormon called the Book of Mormon is an account of the events during Mormon's life. Mormon is said to have received the charge of taking care of the records that had been hidden, once he was old enough. The book includes an account of the wars, Mormon's leading of portions of the Nephite army, and his retrieving and caring for the records. Mormon is eventually killed after having handed down the records to his son Moroni.
According to the text, Moroni then made an abridgment (called the Book of Ether) of a record from a previous people called the Jaredites. The account describes a group of families led from the Tower of Babel to the Americas, headed by a man named Jared and his brother. The Jaredite civilization is presented as existing on the American continent beginning about 2500 BC, long before Lehi's family arrived shortly after 600 BC, and as being much larger and more developed.
The Book of Moroni then details the final destruction of the Nephites and the idolatrous state of the remaining society. It also includes significant doctrinal teachings and closes with Moroni's testimony and an invitation to pray to God for a confirmation of the truthfulness of the account.
Doctrinal and philosophical teachings
The Book of Mormon contains doctrinal and philosophical teachings on a wide range of topics, from basic themes of Christianity and Judaism to political and ideological teachings. Jesus is mentioned on average once every 1.7 verses and is referred to by one hundred different names.
Stated on the title page, the Book of Mormon's central purpose is for the "convincing of the Jew and Gentile that Jesus is the Christ, the Eternal God, manifesting himself unto all nations."
The book describes Jesus, prior to his birth, as a spirit "without flesh and blood", although with a spirit "body" that looked similar to how Jesus would appear during his physical life. Jesus is described as "the Father and the Son". He is said to be: "God himself [who] shall come down among the children of men, and shall redeem his people ... [b]eing the Father and the Son—the Father, because he was conceived by the power of God; and the Son, because of the flesh; thus becoming the Father and Son—and they are one God, yea, the very Eternal Father of heaven and of earth." Other parts of the book portray the Father, the Son, and the Holy Ghost as "one." As a result, beliefs among the churches of the Latter Day Saint movement encompass nontrinitarianism (in The Church of Jesus Christ of Latter-day Saints) to trinitarianism (particularly among the Community of Christ). See Godhead (Latter Day Saints).
In furtherance of its theme of reconciling Jews and Gentiles to Jesus, the book describes a variety of visions or visitations to some early inhabitants in the Americas involving Jesus. Most notable among these is a described visit of Jesus to a group of early inhabitants shortly after his resurrection. Many of the book's contributors described other visions of Jesus, including one by the Brother of Jared who, according to the book, lived before Jesus, and saw the "body" of Jesus' spirit thousands of years prior to his birth. According to the book, a narrator named Nephi described a vision of the birth, ministry, and death of Jesus, including a prophecy of Jesus' name, said to have taken place nearly 600 years prior to Jesus' birth.
In the narrative, at the time of King Benjamin (about 130 BC), the Nephite believers were called "the children of Christ". At another place, the faithful members of the church at the time of Captain Moroni (73 BC) were called "Christians" by their enemies, because of their belief in Jesus Christ. The book also states that for nearly 200 years after Jesus' appearance at the temple in the Americas the land was filled with peace and prosperity because of the people's obedience to his commandments. Later, the prophet Mormon worked to convince the faithless people of his time (AD 360) of Christ. Many other prophets in the book write of the reality of the Messiah, Jesus Christ.
In the Bible, Jesus spoke to the Jews in Jerusalem of "other sheep" who would hear his voice. The Book of Mormon claims this meant that the Nephites and other remnants of the lost tribes of Israel throughout the world were to be visited by Jesus after his resurrection.
Other distinctive religious teachings
On most religious issues, Book of Mormon doctrines are similar to those found in the Bible and among other Christian denominations. Among its somewhat distinctive theological interpretations are the following:
- When the Old Testament prophet Isaiah wrote of voices that would "whisper out of the dust," he was referring to the publication of the Book of Mormon.
- The fall of man is a prerequisite for procreation, and a necessary requirement for the return to God: "Adam fell that men might be, and men are, that they might have joy."
- The church should be named after Christ.
- The atonement of Christ may save unbaptized people who die without a knowledge of the gospel, including children who die without baptism.
- Because of the death and resurrection of Jesus, all humanity ("both old and young, both bond and free, both male and female, both the wicked and the righteous") will be resurrected with an immortal physical body sometime after their death.
- During Jesus' suffering in the Garden of Gethsemane prior to his crucifixion, blood exudes from every pore on his skin. The description of Jesus' sweat in Luke 22:44 is therefore not figurative.
Teachings about political theology
The book delves into political theology within a Christian or Jewish context. Among these themes is American exceptionalism: the Americas are portrayed as a "land of promise", the world's most exceptional land of the time. The book states that any righteous society possessing the land would be protected, whereas if it became wicked it would be destroyed and replaced with a more righteous civilization.
On the issue of war and violence, the book teaches that war is justified for people to "defend themselves against their enemies". However, they were never to "give an offense," or to "raise their sword ... except it were to preserve their lives." The book praises the faith of a group of former warriors who took an oath of complete pacifism, refusing to take arms even to defend themselves and their people. However, 2,000 of their descendants, who had not taken the oath of their parents not to take up arms against their enemies, chose to go to battle against the Lamanites, and it states that in their battles the 2,000 men were protected by God through their faith and, though many were injured, none of them died.
The book recommends monarchy as an ideal form of government, but only when the monarch is righteous. The book warns of the evil that occurs when the king is wicked, and therefore suggests that it is not generally good to have a king. The book further records the decision of the people to be ruled no longer by kings, choosing instead a form of democracy led by elected judges. When citizens referred to as "king-men" attempted to overthrow a democratically elected government and establish an unrighteous king, the book praises a military commander who executed pro-monarchy citizens who had vowed to destroy the church of God and were unwilling to defend their country from hostile invading forces. The book also speaks favorably of a particular instance of what appears to be a peaceful Christ-centered theocracy, which lasted approximately 194 years before contentions began again.
The book supports notions of economic justice, achieved through voluntary donation of "substance, every man according to that which he had, to the poor." In one case, all the citizens held their property in common. When individuals within a society began to disdain and ignore the poor, to "wear costly apparel", and otherwise engage in wickedness for personal gain, such societies are repeatedly portrayed in the book as being ripe for destruction.
Joseph Smith characterized the Book of Mormon as the "keystone" of Mormonism, and claimed that it was "the most correct of any book on earth". Smith produced a written revelation in 1832 that condemned the "whole church" for treating the Book of Mormon lightly.
The Church of Jesus Christ of Latter-day Saints
The Book of Mormon is one of four sacred texts or standard works of the LDS Church. Church leaders have frequently restated Smith's claims of the book's significance to the faith. Church members believe that the Book of Mormon is more correct than the Bible because the Bible was the result of a multiple-generation translation process and the Book of Mormon was not.
For most of the history of the LDS Church, the Book of Mormon was not used as much as other books of scripture such as the New Testament and the Doctrine and Covenants. This changed in the 1980s when efforts were made to reemphasize the Book of Mormon. As part of this effort, a new edition was printed with the added subtitle "Another Testament of Jesus Christ".
The importance of the Book of Mormon was a focus of Ezra Taft Benson, the church's thirteenth president. Benson stated that the church was still under condemnation for treating the Book of Mormon lightly. In an August 2005 message, LDS Church president Gordon B. Hinckley challenged each member of the church to re-read the Book of Mormon before the year's end. The book's importance is commonly stressed at the twice-yearly general conference, at special devotionals by general authorities, and in the church's teaching publications. Since the late 1980s, church members have been encouraged to read from the Book of Mormon daily.
The LDS Church encourages discovery of the book's truth by following the suggestion in its final chapter to study, ponder, and pray to God concerning its veracity. This passage is sometimes referred to as "Moroni's Promise". As of April 2011, the LDS Church has published more than 150 million copies of the Book of Mormon.
Community of Christ
The Community of Christ, formerly known as the Reorganized Church of Jesus Christ of Latter Day Saints, views the Book of Mormon as an additional witness of Jesus Christ and publishes two versions of the book through its official publishing arm, Herald House: the Authorized Edition, which is based on the original printer's manuscript, and the 1837 Second Edition (or "Kirtland Edition") of the Book of Mormon. Its content is similar to the Book of Mormon published by the LDS Church, but the versification is different. The Community of Christ also publishes a 1966 "Revised Authorized Edition", which attempts to modernize some language.
In 2001, Community of Christ President W. Grant McMurray reflected on increasing questions about the Book of Mormon: "The proper use of The Book of Mormon as sacred scripture has been under wide discussion in the 1970s and beyond, in part because of long-standing questions about its historical authenticity and in part because of perceived theological inadequacies, including matters of race and ethnicity."
At the 2007 Community of Christ World Conference, President Stephen M. Veazey ruled out-of-order a resolution to "reaffirm the Book of Mormon as a divinely inspired record." He stated that "while the Church affirms the Book of Mormon as scripture, and makes it available for study and use in various languages, we do not attempt to mandate the degree of belief or use. This position is in keeping with our longstanding tradition that belief in the Book of Mormon is not to be used as a test of fellowship or membership in the church."
Greater Latter Day Saint movement
There are a number of other churches that are part of the Latter Day Saint movement. Most of these churches were created as a result of issues ranging from differing doctrinal interpretations and acceptance of the movement's scriptures, including the Book of Mormon, to disagreements as to who was the divinely chosen successor to Joseph Smith. These groups all have in common the acceptance of the Book of Mormon as scripture. It is this acceptance which distinguishes the churches of the Latter Day Saint movement from other Christian denominations. Separate editions of the Book of Mormon have been published by a number of churches in the Latter Day Saint movement, along with private individuals and foundations not endorsed by any specific denomination.
The archaeological, historical and scientific communities are generally skeptical of the claims that the Book of Mormon is an ancient record of actual historical events. This skepticism tends to focus on four main areas:
- The lack of correlation between locations described in the Book of Mormon and known, intact American archaeological sites.
- References to animals, plants, metals and technologies in the Book of Mormon that archaeological or scientific studies have found no evidence of in post-Pleistocene, pre-Columbian America, frequently referred to as anachronisms. Items typically listed include cattle, horses, asses, oxen, sheep, swine, goats, elephants, wheat, steel, brass, chains, iron, scimitars, and chariots.
- The lack of widely accepted linguistic connections between any Native American languages and Near Eastern languages.
- The lack of DNA evidence linking any Native American group to the ancient Near East.
Most adherents of the Latter Day Saint movement consider the Book of Mormon to generally be a historically accurate account. Within the Latter Day Saint movement there are several apologetic groups that disagree with the skeptics and seek to reconcile the discrepancies in diverse ways. Among these apologetic groups, much work has been published by Foundation for Ancient Research and Mormon Studies (FARMS), and Foundation for Apologetic Information & Research (FAIR), defending the Book of Mormon as a literal history, countering arguments critical of its historical authenticity, or reconciling historical and scientific evidence with the text. One of the more common recent arguments is the limited geography model, which states that the people of the Book of Mormon covered only a limited geographical region in either Mesoamerica, South America, or the Great Lakes area. The LDS Church has published material indicating that science will support the historical authenticity of the Book of Mormon.
The first completed manuscript, called the original manuscript, was produced by a variety of scribes. Portions of the original manuscript were also used for typesetting. In October 1841, the entire original manuscript was placed into the cornerstone of the Nauvoo House and sealed up until nearly forty years later, when the cornerstone was reopened. It was then discovered that much of the original manuscript had been destroyed by water seepage and mold. Surviving manuscript pages were handed out to various families and individuals in the 1880s.
Only 28 percent of the original manuscript now survives, including a remarkable find of fragments from 58 pages in 1991. The majority of what remains of the original manuscript is now kept in the LDS Church's Archives.
The second completed manuscript, called the printer's manuscript, was a copy of the original manuscript produced by Oliver Cowdery and two other scribes. It is at this point that initial copyediting of the Book of Mormon was completed. Observations of the original manuscript show little evidence of corrections to the text. Shortly before his death in 1850, Cowdery gave the printer's manuscript to David Whitmer, another of the Three Witnesses. In 1903, the manuscript was bought from Whitmer's grandson by the Community of Christ, known at the time as the Reorganized Church of Jesus Christ of Latter Day Saints. On September 20, 2017, the LDS Church purchased the manuscript from the Community of Christ at a reported price of $35 million. The printer's manuscript is now the earliest surviving complete copy of the Book of Mormon, being nearly 100 percent extant. The manuscript was imaged in 1923 and was recently made available for viewing online.
Critical comparisons between surviving portions of the manuscripts show an average of two to three changes per page from the original manuscript to the printer's manuscript, with most changes being corrections of scribal errors such as misspellings or the correction, or standardization, of grammar inconsequential to the meaning of the text. The printer's manuscript was further edited, adding paragraphing and punctuation to the first third of the text.
The printer's manuscript was not used fully in the typesetting of the 1830 version of the Book of Mormon; portions of the original manuscript were also used for typesetting. The original manuscript was used by Smith to correct errors printed in the 1830 and 1837 versions for the 1840 printing of the book.
Ownership history: Book of Mormon printer's manuscript
In the late 19th century the extant portion of the printer's manuscript remained with the family of David Whitmer, who had been a principal founder of the Latter Day Saints and who, by the 1870s, led the Church of Christ (Whitmerite). During the 1870s, according to the Chicago Tribune, the LDS Church unsuccessfully attempted to buy it from Whitmer for a record price. LDS president Joseph F. Smith disputed this assertion in a 1901 letter, believing such a manuscript "possesses no value whatever." In 1895, David Whitmer's grandson George Schweich inherited the manuscript. By 1903 Schweich had mortgaged the manuscript for $1,800 and, needing to raise at least that sum, sold a collection including 72 percent of the original printer's manuscript of the Book of Mormon (along with John Whitmer's manuscript history, parts of Joseph Smith's translation of the Bible, manuscript copies of several revelations, and a piece of paper containing copied Book of Mormon characters) to the RLDS church (now the Community of Christ) for $2,450, with $2,300 of this amount for the printer's manuscript. The LDS Church had not sought to purchase the manuscript.
In 2015 this remaining portion was published by the Church Historian's Press in its Joseph Smith Papers series, in Volume Three of "Revelations and Translations"; and, in 2017, the LDS Church bought the printer's manuscript for US$35,000,000.
Chapter and verse notation systems
The original 1830 publication did not have verse markers, although the individual books were divided into relatively long chapters. Just as the Bible's present chapter and verse notation system is a later addition of Bible publishers to books that were originally solid blocks of undivided text, the chapter and verse markers within the books of the Book of Mormon are conventions, not part of the original text.
Publishers from different factions of the Latter Day Saint movement have published different chapter and verse notation systems. The two most significant are the LDS system, introduced in 1879, and the RLDS system, which is based on the original 1830 chapter divisions.
The RLDS 1908 edition, RLDS 1966 edition, the Church of Christ (Temple Lot) edition, and Restored Covenant editions use the RLDS system while most other current editions use the LDS system.
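Because the LDS and RLDS systems divide the same text differently, a citation such as "1 Nephi 3:7" only identifies a passage relative to one notation system, and cross-referencing between editions requires a concordance. The sketch below illustrates the idea; the mapping table is purely hypothetical (invented for illustration), since the real correspondence would have to come from a published concordance between the two versifications.

```python
def parse_citation(ref):
    """Split a citation like '1 Nephi 3:7' into (book, chapter, verse)."""
    book, chap_verse = ref.rsplit(" ", 1)   # book names may contain spaces
    chapter, verse = chap_verse.split(":")
    return book, int(chapter), int(verse)

# Hypothetical cross-reference entries: (book, LDS chapter) -> RLDS chapter.
# These numbers are placeholders, NOT actual versification data.
HYPOTHETICAL_MAP = {
    ("1 Nephi", 3): 1,
    ("1 Nephi", 4): 1,
}

def lds_to_rlds_chapter(book, lds_chapter):
    """Look up the RLDS chapter containing a given LDS chapter (sketch only)."""
    return HYPOTHETICAL_MAP.get((book, lds_chapter))
```

A real conversion tool would also need verse-level offsets, since the 1879 LDS edition introduced shorter chapters and new verse breaks that do not align one-to-one with the 1830-based RLDS divisions.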
The Book of Mormon is currently printed by the following publishers:
| Church publishers | Year | Titles and notes | Link |
| --- | --- | --- | --- |
| The Church of Jesus Christ of Latter-day Saints | 1981 | The Book of Mormon: Another Testament of Jesus Christ. New introductions, chapter summaries, and footnotes. 1920 edition errors corrected based on original manuscript and 1840 edition. Updated in a revised edition in 2013. | link |
| Community of Christ | 1966 | "Revised Authorized Version", based on 1908 Authorized Version, 1837 edition and original manuscript. Notable for the omission of repetitive "it came to pass" phrases. | |
| The Church of Jesus Christ (Bickertonite) | 2001 | Compiled by a committee of Apostles. Uses the chapter and verse designations from the 1879 LDS version. | |
| Richard Drew | 1992 | Photo-enlarged facsimile of the 1840 edition. | |
| Church of Christ (Temple Lot) | 1990 | Based on 1908 RLDS edition, 1830 edition, printer's manuscript, and corrections by church leaders. | link |
| Church of Christ with the Elijah Message | 1957 | The Record of the Nephites, "Restored Palmyra Edition". 1830 text with 1879 LDS chapters and verses. | link |
| Other publishers | Year | Titles and notes | Link |
| --- | --- | --- | --- |
| Herald Heritage | 1970 | Facsimile of the 1830 edition. | |
| Zarahemla Research Foundation | 1999 | The Book of Mormon: Restored Covenant Edition. Text from original and printer's manuscripts, in poetic layout. | link |
| Bookcraft | 1999 | The Book of Mormon for Latter-day Saint Families. Large print with numerous visuals and explanatory notes. | |
| University of Illinois Press | 2003 | The Book of Mormon: A Reader's Edition. Based on the 1920 LDS edition. | link |
| Doubleday | 2006 | The Book of Mormon: Another Testament of Jesus Christ. Text from the current LDS edition without footnotes. First Doubleday edition was in 2004. | |
| Experience Press | 2006 | Reset type matching the original 1830 edition in word, line and page. Fixed typographical errors. | |
| Stratford Books | 2006 | Facsimile reprint of the 1830 edition. | |
| Penguin Classics | 2008 | Paperback with 1840 text. | link |
| Yale University Press | 2009 | The Book of Mormon: The Earliest Text. First edition text with hundreds of corrections from Royal Skousen's study of the original manuscripts. | link |
The following non-current editions marked major developments in the text or reader's helps printed in the Book of Mormon.
| Publisher | Year | Titles and notes | Link |
| --- | --- | --- | --- |
| E. B. Grandin | 1830 | "First edition" in Palmyra. Based on the printer's manuscript copied from the original manuscript. | link |
| Pratt and Goodson | 1837 | "Second edition" in Kirtland. Revision of the first edition, using the printer's manuscript with emendations and grammatical corrections. | |
| Robinson and Smith | 1840 | "Third edition" in Nauvoo. Revised by Joseph Smith against the original manuscript. | link |
| Young, Kimball and Pratt | 1841 | "First European edition". 1837 reprint with British spellings. Future LDS Church editions descended from this, not the 1840 edition. | |
| Franklin D. Richards | 1852 | "Third European edition". Edited by Richards. Introduced primitive verses (numbered paragraphs). | link |
| James O. Wright | 1858 | Unauthorized reprint of the 1840 edition. Used by the early RLDS Church in the 1860s. | link |
| Reorganized Church of Jesus Christ of Latter Day Saints | 1874 | First RLDS edition. 1840 text with verses. | link |
| Deseret News | 1879 | Edited by Orson Pratt. Introduced footnotes, new verses, and shorter chapters. | link |
| Reorganized Church of Jesus Christ of Latter Day Saints | 1908 | "Authorized Version". New verses and corrections based on the printer's manuscript. | link |
| The Church of Jesus Christ of Latter-day Saints | 1920 | Edited by James E. Talmage. Added introductions, double columns, chapter summaries, new footnotes, and a pronunciation guide. | link |
The following versions are published online:
|Online editions||Year||Description and notes||Link|
|LDS Church internet edition||2013||Official Internet edition of the Book of Mormon for the LDS Church.||link|
|LDS Church audio edition||1994||Official LDS version of the Book of Mormon in mp3 audio format, 32 kbit/s||link|
In 1989, scholars at Brigham Young University began work on a critical text edition of the Book of Mormon. Volumes 1 and 2, published in 2001, contain transcriptions of all the text variants of the English editions of the Book of Mormon, from the original manuscript to the newest editions. Volume 4, which is being published in parts, is a critical analysis of all the text variants. Volume 3, which is not yet published, will describe the history of all the English-language texts from Joseph Smith to today.
Differences among the original manuscript, the printer's manuscript, the 1830 printed edition, and modern versions of the Book of Mormon have led some critics to claim that evidence which could have proven Smith fabricated the book was systematically removed, or that the changes were attempts to hide embarrassing aspects of the church's past. Mormon apologists view the changes as superficial, made to clarify the meaning of the text.
The LDS version of the Book of Mormon has been translated into 83 languages and selections have been translated into an additional 25 languages. In 2001, the LDS Church reported that all or part of the Book of Mormon was available in the native language of 99 percent of Latter-day Saints and 87 percent of the world's total population.
Translations into languages without a tradition of writing (e.g., Kaqchikel, Tzotzil) are available on audio cassette. Translations into American Sign Language are available on videocassette and DVD.
Typically, translators are members of the LDS Church who are employed by the church and translate the text from the original English. Each manuscript is reviewed several times before it is approved and published.
In 1998, the LDS Church stopped translating selections from the Book of Mormon and instead announced that each new translation it approved would be a full edition.
Representations in media
Events of the Book of Mormon are the focus of several LDS Church films, including The Life of Nephi (1915), How Rare a Possession (1987) and The Testaments of One Fold and One Shepherd (2000). Films in LDS cinema (i.e., films not officially commissioned by the LDS Church) include The Book of Mormon Movie, Vol. 1: The Journey (2003) and Passage to Zarahemla (2007).
In 2011, a long-running religious satire musical titled The Book of Mormon, by the South Park creators, premiered on Broadway, winning 9 Tony Awards, including best musical. Its London production won the Olivier Award for best musical.
The LDS Church, which distributes free copies of the Book of Mormon, reported in 2011 that 150 million copies of the book had been printed since its initial publication.
- 1 Sixteenth Century
- 2 Urban Mennonites in North America
- 3 Urban Mennonite Beliefs and Attitudes
- 4 Bibliography
During the 16th century Christopher Columbus and many others explored new worlds. Feudalism was declining and nationalism was on the rise. New ideas were spawned; the old structures could no longer hold the new ideas, discoveries and religious ferment. It was a changing economic, political and religious environment, and new canopies had to be built to integrate new discoveries and traditional values. Urbanization was an important part of this process. The Anabaptist-Mennonite movement started primarily in cities such as Zürich, Bern, Strasbourg, Emden, Amsterdam, Leeuwarden, Groningen, Leyden, Rotterdam, Antwerp, Brussels, Münster and Cologne. In the Swiss, South German, and Austrian cities, the Anabaptist movement was crushed and survived only in remote areas. It was different in The Netherlands. Of the thirteen cities listed by Cornelius Krahn, only two were Swiss; the majority were north European, often members of the Hanseatic commercial league. While Anabaptists in central Europe fled the cities, in the northern cities they survived first as an underground movement, later as a tolerated minority and finally as a recognized religious group. (It must be remembered that these cities were relatively small and non-industrialized. The largest of them had 100,000-200,000 inhabitants; many had between 20,000 and 50,000. They were dominated by commerce and artisan crafts rather than large industries and factories. Many urban dwellers maintained small livestock; city neighborhoods retained some elements of rural life.)
Paul Peachey's study of 762 Swiss individuals connected with the Anabaptist movement in central Europe, published as Die Soziale Herkunft der Schweizerischen Täufer (The Social Origins of the Swiss Anabaptists), shows that 150 of these (20 percent) were urban. The remaining 612 (80 percent) were villagers and peasants, whom he classified as rural. Of the 150 who were urban, 20 had been clergy (14 priests and 6 monks), 20 more were urban lay intellectuals (including Grebel, Manz, Denck and Hugwald), 10 came from the nobility, and 100 were citizens, often urban artisans; among the artisans, tailors and bakers were most common. Peasants (460) constituted about three-fifths of the total number of persons listed. Combining them with the villagers, we conclude that four-fifths of the people appearing in court records belonged to the non-urban population. Most of the urban Anabaptist leaders disappeared within two years (1525-27) through martyrdom, early death, recantation, exile or other unknown destiny. Thus, the Swiss Anabaptist movement was only one-fifth urban to begin with, and almost completely rural two years later. Severe persecution made an urban foothold impossible.
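Peachey's tallies and the fractions quoted above can be verified with a short arithmetic sketch (all figures are taken from the study as reported here; the category labels are informal shorthand, not Peachey's own headings):

```python
# Peachey's breakdown of 762 persons in Swiss Anabaptist court records.
total = 762
urban = {
    "clergy": 20,              # 14 priests and 6 monks
    "lay intellectuals": 20,   # e.g., Grebel, Manz, Denck, Hugwald
    "nobility": 10,
    "citizens/artisans": 100,  # often tailors and bakers
}
rural = 612      # villagers and peasants combined
peasants = 460   # peasants alone

# The urban categories sum to the 150 urban persons, and urban + rural
# gives the full 762.
assert sum(urban.values()) == 150
assert sum(urban.values()) + rural == total

urban_share = sum(urban.values()) / total  # ~0.197, i.e. about one-fifth
peasant_share = peasants / total           # ~0.604, about three-fifths
rural_share = rural / total                # ~0.803, about four-fifths
```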
Urbanism among Mennonites of the northern Low Countries is as old as Mennonitism itself. There are some 1,500 Mennonites in Amsterdam, and some 1,300 in Haarlem in 1986 as well as more than 1,000 in a number of other cities. In the 16th century Amsterdam and Rotterdam were part of the Hanseatic League, whose ships plied the Baltic sea between such ports as Bergen, Oslo, Stockholm, Copenhagen, Danzig, Amsterdam, Rotterdam, and London. While Menno Simons himself emerged out of rural Friesland, he nevertheless served Mennonites in many urban centers of the 16th century.
W. L. C. Coenen made a study of the Anabaptist martyrs in The Netherlands and found that not one of the 161 martyrs was a farmer. Among the 58 occupations were weavers (27), tailors (17), shoemakers (13), sailors (6), carpenters (5), goldsmiths (5), hatmakers (5), bricklayers (4), bakers (3), leather dealers (3), teachers (3), saddlers (3), and potters (3). There were also Mennonites in rural areas in North Holland, Friesland, and Groningen. Persecution also drove some east into Prussia, mostly into the countryside as well as the suburbs of cities such as Altona, Hamburg, Danzig, Marienburg, Elbing and Koenigsberg. Many moved upward into the middle class.
Thus, two major Mennonite branches emerged in Europe: the Swiss and South German rural farmers, and the Dutch, North German, and Russian entrepreneurs with roots in the commerce and artisan manufacturing of northern Europe. While many Dutch Anabaptists have always remained urban, most others turned to safer rural environs and became farmers because of persecution. Thus, for hundreds of years, these rural Mennonites were known as the "Stillen im Lande" (peaceful country folk). However, in the 20th century Mennonites in some parts of the world are moving to cities.
World Mennonite Urbanization
A Mennonite World Conference map shows that in 1984 there were 724,000 Mennonite members in 57 countries. Almost half (46.1 percent) resided in the two countries of North America (333,704 members), and the other half (53.9 percent) were located in roughly equal numbers on the four continents of Asia (7 countries, 113,504 members), Africa (11 countries, 107,221 members), Europe (13 countries, 92,368 members), and Latin America (23 countries, 76,938 members). Only one eighth (12.7 percent) lived in Europe, the place of Anabaptist beginnings. About two-thirds (most in Europe and North America, and some in South America) were descendants of European Caucasians, and one third were mostly of Asian and African origins (demography).
In 1984, 90 percent of all Mennonites in the world lived in eleven countries. In Table I we see that one third (32 percent) live in the United States; only about 5 percent live in the original countries of The Netherlands (2.8 percent), Germany (1.6 percent) and Switzerland (.4 percent). The range of urbanization of these countries varies enormously from a high of 82 percent in The Netherlands, to a low of 14 percent in Tanzania.
Table 1. Mennonites in Eleven Countries and Degree of Urbanization of These Countries, 1974-83
|Countries||% of Nation Urban||Number of Mennonite Members||% of Total World Mennonites|
|Source: United Nations Demographic Yearbook, 1983, and Mennonite World Conference map, 1984|
While statistics on Mennonite urbanization for the United States and Canada are available, it is very difficult to assemble data on urban Mennonites in the Soviet Union and most of the other countries. Estimates by Mennonites who live in these countries show that Mennonites are usually more rural than the respective national urban figures; in no case were Mennonites more urban than the respective national averages. Mennonites are still urban in The Netherlands, where they have always lived in cities and where they remain the largest original Anabaptist group (20,000). Mennonites also moved heavily into cities in the Soviet Union after World War II, and Mennonite urbanization is escalating in North America as well, so that about half of Canadian Mennonites are urban.
Urban Mennonites in North America
Since almost half (46.1 percent) of all Mennonites live in the United States and Canada, and since the best urban data are available from there, we shall examine North American Mennonite urbanization in more detail. Howard Kauffman and Leland Harder made the most extensive survey of North American Mennonites in 1972; Driedger and Kauffman published a paper on urbanization using some of these data in 1982. They found that two thirds of the total sample of Mennonites taken in North America were rural, and one third were urban, living in cities of more than 2,500 people. They found that Canadian Mennonites were significantly more urban (44 percent) than American Mennonites (32 percent). However, there are many interesting variations by region and by size of community.
We find Canadian and American Mennonites are similar in the farm and village/town categories. However, there are twice as many rural non-farm Mennonites in the United States (18 percent) as in Canada (8 percent). This greater proportion of non-farm American Mennonites accounts for the higher total rural proportion.
Table 2. Comparison of American and Canadian Mennonites by Rural and Rural Differentiations
|Country and Region||Size of Community|
|Farm||Rural Non-Farm||Village/Town (under 2,500)||2,500-25,000||25,000-250,000||Over 250,000||Total %||N|
|Total USA %||34||18||16||18||9||6||100|
|Total Canada %||34||8||14||10||16||18||100||763|
|Source: J. Howard Kauffman and Leland Harder, eds., Anabaptists four centuries later: a profile of five Mennonite and Brethren in Christ denominations (Scottdale, Pa.: Herald Press, 1975); Driedger and Kauffman in Mennonite Quarterly Review, 56 (1982).|
The basic difference between the two countries in the three urban categories (small city, medium, and large metropolitan centers) is that American Mennonites reside twice as frequently in small cities, while Canadian Mennonites are two to three times more likely to reside in larger metropolitan centers. There were also important regional variations. Only 20 percent of the Mennonites in the American East, 23 percent in the Midwest, and 32 percent in the American prairie states were urban, compared with 74 percent of Pacific Coast Mennonites. Mennonites in the western United States, who were largely of Dutch-Russian background, were roughly twice as urban as the Mennonites in eastern America, who were largely of Swiss (Pennsylvania-German) background. These urban distinctions by region are not apparent in Canada. Roughly 40 to 45 percent of the Mennonites were urban in all parts of Canada in 1972. Since then urbanization has increased.
A closer examination of Mennonites in some of the major metropolitan centers of North America shows that in 1985 there were 1,000 or more members in six Canadian centers. The largest numbers were located in Winnipeg (9,400), Vancouver (4,800), Saskatoon (2,300), and Kitchener-Waterloo (2,300). These Mennonites worship in more than a dozen churches in each of five of the centers (three dozen in Winnipeg). Canadian urban Mennonites are mostly of European heritage, having moved to the cities from rural hinterlands or entered the cities as immigrants, especially after World War II. Each of the eight cities listed in Table 3 has a substantial rural Mennonite hinterland which feeds into the city. While Mennonites of Asian backgrounds are also starting urban churches, they still represent a small proportion of urban Mennonites in Canada; Mennonites of African origin hardly exist in Canada.
Table 3. Mennonites Located in Selected Metropolitan Centers of Canada and the USA, 1985
|Metropolitan Centers (100,000 plus)||Size of Metropolitan Population (Census 1980/81)||Number of Mennonites (all ages, 1981 Census)||Number of Mennonite Churches||Mennonite Adult Membership|
|Los Angeles, Cal.||7,477,657||10||534|
|New York, N.Y||9,119,737||14||406|
|Source: 1980 USA Census, 1981 Canadian Census, and Mennonite conference yearbooks|
The patterns of Mennonite urbanization tend to be different in the United States. Rural Mennonites have not so much moved from hinterlands into large metropolitan centers as they have been attracted to small cities (Lancaster, Harrisonburg, Elkhart). Mennonite congregations in larger cities are more often the result of mission and church-planting efforts and represent a greater variety of ethnic and racial backgrounds. Table 3 indicates that there are relatively small numbers of Mennonites in the very large American metropolitan centers of three million or more. The Mennonites (2,754) who worship in 60 congregations located in Chicago (1,006 members), Los Angeles (534), Washington (437), New York City (406), and Philadelphia (371) comprise groups averaging fewer than 50 members, compared to Canadian urban churches with average memberships of 325. Eastern American Swiss Mennonites are attracted more to smaller urban centers of 50,000 or less: Lancaster, Pa. (9 churches, plus others outside the city itself), Goshen, Ind. (12 churches, with 9 more in adjoining rural areas), and Harrisonburg, Va. (10 churches, plus several located in adjoining rural areas). Western Russian-background Mennonites in Fresno, Cal. (7 churches), and Wichita, Kan. (6 churches), follow the Canadian pattern more closely.
Amsterdam was the world's urban Mennonite center for more than 400 years, with Mennonite membership as high as 10,000. By 1986 this had declined to 1,500 members worshiping in five places, part of a general decline in membership in The Netherlands from 31,000 in 1972 to 20,200 in 1984. Since World War II, Winnipeg has emerged as the largest urban Mennonite center in the world, with 19,100 Mennonites (1981 census), representing about 9,400 adult members who worshiped in 44 churches in the city in 1988.
There are many Mennonite institutions in Winnipeg, including two colleges, two high schools, the Mennonite Central Committee headquarters for Canada and for Manitoba, two offender-ministry halfway houses for former prisoners, a hospital, many homes for the elderly, several credit unions, 44 churches, six newspapers, several musical and drama societies, two national conference offices (GCM and Manitoba), and scores of Mennonite businesses and companies. A variety of conferences, associations, corporations, organizations, and societies keep information flowing between Winnipeg Mennonites and other Mennonite communities.
Leadership of Mennonite churches in Winnipeg has been entirely Mennonite. About 150 ministers have served the 44 Mennonite churches in Winnipeg over the past fifty years, and all of them (except four to six) were Mennonite. Many were well educated and were heavily involved in provincial, national, and international Mennonite conference activities. Most of the leaders in the 44 churches are graduates of Mennonite Bible schools, high schools, colleges, and seminaries. Thus they come in constant contact with networks of leaders from all over Canada, the United States, and the world. Mennonite leadership also extends to editors of Mennonite and non-Mennonite Winnipeg papers; businessmen in influential places; teachers and professors at elementary, secondary, and university levels; social workers; medical professionals; and virtually all other professions and occupations. These positions have given them the means to inform and promote their identity at all levels of society. More importantly, they are active in their Mennonite churches, they are committed to their heritage, and they are perceived by their fellow Mennonites as committed members. The degree of integration between Winnipeg Mennonites' church structures and their everyday occupations is considerable. It is a natural outflow of their faith, life, and work. Similar activities are happening in many other cities, but usually not on the same scale.
Urban Mennonite Beliefs and Attitudes
To what extent do the beliefs and attitudes of Mennonites change as they urbanize? The early Anabaptists believed in adult baptism, and they could not take part in war. They also believed in the priesthood of all believers, a disciplined church, and the importance of evangelism. They did not swear the oath, and they could not serve in governments. Studies show that these beliefs are still held by urban and rural North American Mennonites alike. However, Driedger and Kauffman found more rural-urban differences when they examined social issues of the day. There was a great deal of consensus against practices such as use of hard drugs and becoming drunk. However, many more rural than metropolitan Mennonites thought that it was wrong to gamble (80 to 69 percent), smoke tobacco (67 to 56 percent), remarry when the first spouse is still living (66 to 49 percent), drink alcohol moderately (57 to 34 percent), divorce when the cause is not adultery (55 to 39 percent), attend for-adults-only movies (54 to 32 percent), engage in social dancing (50 to 30 percent), masturbate (49 to 37 percent), and divorce when the cause is adultery (39 to 24 percent).
Fewer metropolitan Mennonites hold to some present and past norms of personal morality, but there is somewhat more urban flexibility on family breakdown. More research is required to document the quality of urban Mennonite beliefs, attitudes and behavior especially in other parts of the world.
Coenen, W. L. C. Bijdrage tot de Kennis van de Maatschappelijke Verhoudingen van de Zestiende-eeuwsche Doopers. Amsterdam, 1920: 1-90.
Driedger, Leo and J. Howard Kauffman. "Urbanization of Mennonites: Canadian and American Comparisons." Mennonite Quarterly Review 56 (1982): 269-90.
Driedger, Leo. "Canadian Mennonite Urbanism: Ethnic Villagers or Metropolitan Remnant?" Mennonite Quarterly Review 49 (1975): 150-62.
Driedger, Leo. "Post-war Canadian Mennonites: From Rural to Urban Dominance." Journal of Mennonite Studies 6 (1988): 70-88.
Krahn, Cornelius. Dutch Anabaptism: Origin, Spread, Life and Thought. Scottdale, PA, 1981: 90-100.
Kauffman, J. Howard and Leland Harder, eds. Anabaptists Four Centuries Later: a Profile of Five Mennonite and Brethren in Christ Denominations. Scottdale, PA: Herald Press, 1975.
Peachey, Paul. Die Soziale Herkunft der Schweizerischen Täufer in der Reformationszeit. Karlsruhe, 1954: 102-27.
Peachey, Paul. The Church in the City. Newton, KS: Faith and Life, 1963.
Cite This Article
Driedger, Leo. "Urbanization." Global Anabaptist Mennonite Encyclopedia Online. 1989. Web. 17 Feb 2018. http://gameo.org/index.php?title=Urbanization&oldid=143780.
Adapted by permission of Herald Press, Harrisonburg, Virginia, from Mennonite Encyclopedia, Vol. 5, p. 903-907. All rights reserved.
Presentation on theme: "Atoms Unit 3 Introduction to the Atom History of the Atom"— Presentation transcript:
1 Atoms Unit 3: Introduction to the Atom; History of the Atom
2 Some of the First Discussions about the Atom… Democritus (400 BC) named the atom, which means indivisible, and said it was the smallest particle and could not be broken down. He was pretty much correct, in that if you did break the atom down, it wouldn't be that "element" any more. But he wasn't aware that the atom was made up of particles. IN OTHER WORDS: Gold is an element. The smallest part of gold that is STILL gold is an atom.
3 Greek Philosophers Continue the Discussion and Disagree. Aristotle (350 BC) believed matter could always be broken down into smaller and smaller parts. And this is true to a certain extent… but eventually you come to the small basic particles that can no longer be broken apart. In other words: you CAN break an atom of gold down into its parts (protons, electrons, and neutrons); however, it wouldn't be gold anymore.
4 Dalton's Atomic Theory. In 1808, John Dalton proposed that elements were composed of atoms and that only whole numbers of atoms can combine to form compounds. His ideas are now called the Atomic Theory of Matter. Dalton's Atomic Theory was widely accepted but not totally correct.
5 Dalton's Atomic Theory. All matter is composed of atoms. THIS IS TRUE. Atoms of the same kind of element are identical; atoms of different elements are different from each other. THIS IS PARTLY TRUE.
6 Atoms can't be changed, created, or destroyed. THIS IS PARTLY TRUE. You can make compounds out of combinations of different atoms. THIS IS TRUE. Chemical reactions are rearranging or recombining atoms. THIS IS TRUE.
7 Atomic Theory. Not all of Dalton's claims were true. Atoms CAN be divided into even smaller particles (protons, electrons, neutrons). Some elements have atoms that have different masses (ISOTOPES).
8 Atomic Theory. Dalton's Atomic Theory of Matter has been modified. What remains is: All matter is composed of atoms. Atoms of any one element differ in properties from atoms of another element.
9 In the 1800s it was determined that atoms are actually composed of several basic types of smaller particles. It's the number and arrangement of these particles that determine the atom's chemical properties. A new definition of an atom is the one we use today: the smallest particle of an element that retains the chemical properties of that original element.
10 Plum Pudding Model. JJ Thomson's cathode ray tube experiment in the late 1800s showed that atoms had smaller parts, which he called negative corpuscles (later named electrons); he developed the "plum pudding model." The plum pudding model showed electrons (the plums) embedded throughout a diffuse sphere of positive charge (the pudding); protons had not yet been discovered.
11 Negative particles embedded in a sphere of positive plasma-like matter. THINK…Chocolate Chip Cookie
12 Atomic Structure. Scientists still didn't really understand how the particles were put together in an atom. This was a difficult question to resolve, given how tiny atoms are. They didn't have GOOGLE to find out the answer! Most thought it likely that the atom resembled Thomson's model.
13 Rutherford’s gold foil experiment In 1911, Ernest Rutherford showed:1) atoms had a hard, dense, positively charged nucleus where most of the mass resided.2) negatively charged electrons outside the nucleus, and that the atom was actually mostly empty space.
15 Bohr Model of the Atom. Niels Bohr put electrons into different energy levels or shells. (This model is not correct either… electrons do not travel in orbits or paths like the model suggests.)
16 Modern Day Theory (Electron Cloud Theory) The Modern Theory suggests that electrons are located somewhere in a cloud.
17 Basic and important facts to remember: 1) All atoms contain the same basic parts (protons, neutrons, electrons). 2) Atoms of different elements have different numbers of protons. The Periodic Table lists atoms in consecutive order by their atomic number. The atomic number is directly related to the number of protons in the nucleus of each atom of that element.
18 Atoms have: (1) a nucleus, the small, dense part of the atom that consists of protons and neutrons; and (2) an electron cloud, the large part of the atom that is empty space except for the electrons, which are moving very fast and very randomly around the nucleus.
19 The total number of protons and neutrons determines the mass of the atom, called the "mass number." (Atomic mass is the averaged mass of the isotopes and is given on the periodic table; simply round the atomic mass to get the mass number.) A carbon atom has 6 protons and 6 neutrons, so its mass number is 12. If you know the atomic number and mass number of an atom of any element, you can determine the atom's composition and the number of neutrons.
20 The Proton. Protons are what give the atom its positive (+) charge. They add mass to the atom as well: each proton is equal to one AMU (atomic mass unit). The number of protons in an atom determines what element it is. The atomic number signifies the number of protons. Protons are held together in the nucleus by the "strong force"; otherwise they would repel each other.
21 The Neutron. The neutron adds mass (1 amu) to an atom but has NO charge. Atoms of the same element are identical in their number of protons, but there can be different numbers of neutrons (we call those isotopes). To find the number of neutrons, subtract the atomic number from the mass number.
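The rule on this slide (neutrons = mass number minus atomic number) can be sketched in a few lines of code; the function name is just illustrative, and the examples reuse the sodium-23 and carbon-12 figures from these slides.

```python
# Sketch of the slide's rule: neutrons = mass number - atomic number.
def neutrons(atomic_number: int, mass_number: int) -> int:
    """Number of neutrons in an atom, given its atomic and mass numbers."""
    return mass_number - atomic_number

# Sodium-23: atomic number 11, so 23 - 11 = 12 neutrons.
print(neutrons(11, 23))  # 12
# Carbon-12: atomic number 6, so 12 - 6 = 6 neutrons.
print(neutrons(6, 12))   # 6
```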
22 Electrons. Electrons have almost no mass, and we DO NOT count their mass. They are located outside the nucleus in the electron cloud (aka shells, orbitals, energy levels), moving at incredibly high speeds. Electrons have a negative (-) charge. Electrons found in the outermost shells of the atom are responsible for chemical reactions. Electrons have different amounts of energy depending on what energy level they are at. Electrons can be removed from and added to atoms quite easily, unlike protons.
23 Subatomic Particles
|Particle||Symbol||Charge||Relative Mass|
|Electron||e-||-1||~0 (not counted)|
|Proton||p+||+1||1|
|Neutron||n||0||1|
24 Location of Subatomic Particles. The nucleus (protons and neutrons) is about 10^-13 cm across; the electrons occupy the surrounding cloud, about 10^-8 cm across.
25 Atomic Number. Counts the number of protons in an atom and determines what element it is.
26 Atomic Number on the Periodic Table. Example: 11 Na (11 = atomic number, Na = symbol). The symbol represents the element. RULE: The first letter is always capitalized, and IF there is a second letter, it is lower case.
27 All atoms of an element have the same number of protons. Example: 11 Na (sodium, 11 protons).
28 Atomic Mass on the Periodic Table. Example: 11 Na 22.99 (11 = atomic number, Na = symbol, 22.99 = atomic mass). Atomic mass is the weighted average mass of all the atomic masses of the isotopes of that atom. That is why there is a decimal.
29 Mass Number. Counts the number of protons and neutrons in an atom. (Note: atomic mass is different from mass number. On your periodic table of elements, the atomic mass is usually given, and you need to round it to the nearest whole number to use when figuring neutrons.)
30 Atomic Notation. Shows the mass number and atomic number and gives the symbol of the element. Example: sodium-23 is written with the mass number 23 above the atomic number 11 beside the symbol (23/11 Na).
31 Number of Electrons. An atom is neutral when no charge is indicated; the net charge is zero. Remember: atomic number = number of protons, and therefore number of protons = number of electrons when the atom is neutral.
32 Subatomic Particles: Showing the P, E, N
|Atom||Protons||Electrons||Neutrons|
|O||8 p+||8 e-||8 n|
|P||15 p+||15 e-||16 n|
|Zn||30 p+||30 e-||35 n|
33 Isotopes. Atoms with the same number of protons but different numbers of neutrons; atoms of the same element (same atomic number) with different mass numbers. Isotopes of chlorine: 35Cl (chlorine-35) and 37Cl (chlorine-37).
34 Learning Check. Naturally occurring carbon consists of three isotopes: 12C, 13C, and 14C. State the number of protons, neutrons, and electrons in each of these carbon atoms.
| ||12C||13C||14C|
|#p||_______||_______||_______|
|#e||_______||_______||_______|
|#n||_______||_______||_______|
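As a sketch of how the three rules from these slides fit together (protons = atomic number; electrons = protons in a neutral atom; neutrons = mass number minus atomic number), the learning check for carbon can be worked out in code; the function name and output layout are just illustrative.

```python
def composition(atomic_number: int, mass_number: int) -> dict:
    """Protons, electrons, and neutrons of a neutral atom."""
    return {
        "protons": atomic_number,                 # atomic number counts protons
        "electrons": atomic_number,               # neutral atom: electrons = protons
        "neutrons": mass_number - atomic_number,  # mass number minus atomic number
    }

# Carbon (atomic number 6) has isotopes 12C, 13C, and 14C.
for mass in (12, 13, 14):
    print(f"C-{mass}:", composition(6, mass))
# Each isotope has 6 protons and 6 electrons; the neutrons are 6, 7, and 8.
```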
William Warner was the first European to settle in West Philadelphia. He arrived in 1677, five years before William Penn founded his utopian city. Warner came to a vast and beautiful land filled with rolling hills, wetlands, and lush trees. The northern portions of the area were wooded with forests of oak, green pine, evergreen, chestnut, walnut, ash, buttonwood, magnolia, and hickory trees, while the meadowlands south and east were covered with moss, grass, and weeds ideal for grazing livestock. Wild berries, water lilies, strawberries, cattail, mushrooms, and corn grew in the fields.12
The Lenape resided in bands along various rivers and creeks. They lived on hunting and growing foodstuffs and depended on the fertility of the land. Due to their heavy tillage of the land, the soils they farmed gradually lost their productivity. As a result, Lenape frequently relocated.3 Generally, an occupied area lost its usefulness in two decades' time. Thus, the natives constantly set up, abandoned, and resettled communities throughout Pennsylvania.
Archeological evidence indicates that the Lenape inhabited the area centuries before the Europeans arrived. They established various villages along the Schuylkill River and its tributaries. Recent excavations in West Philadelphia reveal evidence of settlements along the west bank of the Schuylkill River along today’s Civic Center Boulevard.4 In 2001, a team of archeologists excavated the area prior to the building of a parking garage. During the excavation, numerous prehistoric artifacts were found, providing evidence of a fairly large and stable indigenous community occupying the area during the late archaic and early woodland periods, six thousand years ago.
The Lenape utilized natural resources to build their homes. They lived in single doorway wooden huts called wigwams, which were situated along rivers and creeks. The size of their wigwams depended on the region they inhabited. In the southern region, the Unalachtigo’s homes were created for single-family dwellings while in the northern region larger multi-family buildings were constructed.5 Both men and women used bear grease to dress their hair, and decorated their bodies, face, and arms with designs painted in various colors.6 The women were of medium stature. For clothing, men wore breechcloths during the summer and fur robes during the winter. Likewise, women wore wrap-around-skirts during the summer and fur robes with leggings during the winter.7 Both women and girls adorned their bodies with tribal jewelry made from shells, stones, beads, and animal teeth and claws.8
Due to their short life expectancy, men and women married young.9 Girls commonly married at the ages of thirteen and fourteen while young men married at ages of seventeen and eighteen. For some marriage lasted a lifetime, but for others this union ended in divorce. A woman wishing to divorce her husband placed all of his personal possessions outside of the wigwam. A man wishing to divorce his wife left the home.10
Once couples had children, fathers with the help of other male elders bore responsibility for teaching male children to hunt for wild game. Women taught daughters how to gather edible plants and tend to the children. In late fall, the men left their homes to hunt white-tailed deer, wild fowl, muskrat, rabbits, and foxes. Men were responsible for the heavy work around the village, making tools, weapons, mortars, frames for the wigwams, dugouts, and fishing spears.11 Tools were made from the bones of animals, wood, stone, as well as various types of grasses. Birds such as herons, pigeons, eagles, hawks, and turkeys were hunted. Once a bird was captured, it would either be prepared for direct consumption or dried. When the weather was favorable, men would use spears, harpoons, nets, and dams to catch fish. The women would clean and prepare the fish, which were either eaten raw or dried and saved for later.12
Women’s work included tanning hides, sewing, cooking, as well as gathering fruits and berries when they were in season.13 Mothers would show their daughters how to gather roots, nuts, eggs, clams, and edible plants. As they grew older, young girls learned how to garden, care for the children, and cook.14 Although corn was the main crop, several varieties of beans, squash, pumpkins, tobacco, and sunflowers were cultivated.15 When fruit and nuts were in season, children would accompany their mothers and aunts into the forests to gather apples, persimmons, water lilies, and butternuts.16
The Dutch and Swedes had episodic relations with the Lenape. William Penn would have more enduring and impacting interactions. In 1682, William Penn came to the Delaware River valley to claim lands granted to him on a proprietary basis by King Charles II of England and to establish a haven in the New World for fellow members of the persecuted Quaker sect. He came to take possession of lands that reached throughout southeast Pennsylvania where the Lenape resided.17 The Quakers believed strongly in the principles of goodwill and friendship and Penn practiced these principles with the Lenape. Penn was determined to treat them as brothers and sisters as he believed they too were children of God. He entered into purchase agreements with the Lenape that brought lands ceded in his proprietorship under his absolute title.18 Although he took ownership rights, he still recognized and reserved certain lands where Lenape villages were located, not allowing them to be sold. Peaceful relations between the European settlers and the Lenape would disintegrate, however, not long after Penn’s death in 1718.1920
Disease and warfare further eclipsed the presence of the Lenape in eastern Pennsylvania. A series of smallpox epidemics—the smallpox microbe brought by European settlers to the New World—reduced the numbers of Lenape by an estimated 80 percent. Violent attacks sanctioned by Penn authorities took a toll as well. The remaining Lenape retreated westward into Ohio and beyond leaving but a tiny presence in Pennsylvania of the land’s centuries-long original inhabitants.
With the departure of the Lenape, British farmers and craftsmen and later, Americans with substantial resources developed West Philadelphia. The first settlers typically built modest log and frame houses, but by the end of the 18th century large estates—with two-story brick and stone houses—had been developed along the banks of the Schuylkill River. William Warner (ca. 1627-1707), a native of the parish of Blockley in Worcestershire, England, led the way.21
Warner is an important figure in the history of West Philadelphia. He arrived in the Delaware Valley in the mid-1670s and by 1677 had settled on the west bank of the Schuylkill River. It is said that he negotiated with the Lenape and purchased rights from them to 1,500 acres of land.22 In any case, in the spring of 1678 he obtained from the Upland Court (located at present-day Chester, Pennsylvania) a formal order confirming his rights to 100 acres of land in West Philadelphia. Two years later he obtained from the same court legal recognition of his rights to a contiguous tract of 200 acres, and in 1681, still another 400 acres. When William Penn took control of Pennsylvania, Warner (and his family) patented a total of 588 acres with the new government. The original farm of 300 acres fronted on the Schuylkill River in present-day Fairmount Park (at the site of Samuel Breck’s 1797 mansion house, "Sweetbriar") and stretched west, between narrow boundaries, as far as 60th and Media Streets. Warner named his farm "Blockley," after his birth place in England.
Warner was also a community leader in the first years of William Penn's new colony. He sought election to public office and was rewarded with two terms in the Pennsylvania Assembly, the first in 1683 and the second in 1691. He was also a justice of the peace for Philadelphia County in 1685 and 1686. His influence proved lasting, for in 1705, when the entire 14.2 square mile area we know of as West Philadelphia today was first organized as a political entity, it was officially named Blockley Township. The Blockley name identified West Philadelphia for nearly 150 years. Not until 1854, when the City of Philadelphia expanded and incorporated all the townships in Philadelphia County, did a ward number substitute for Blockley as the designation for West Philadelphia.23

Another notable early settler was the botanist John Bartram (1699-1777). Bartram was born on a farm just west of Philadelphia, in Chester County, Pennsylvania. In 1728 he purchased a farm of 102 acres on the west bank of the Schuylkill and soon thereafter constructed a two-story stone house for himself and his family. He prospered. The origins of Bartram's interest in flowers are unknown; he perhaps became fascinated by the medicinal powers of plants. In 1736 he traveled upstream on the Schuylkill, collecting plants for exchange with Peter Collinson, his British correspondent. Two years later he undertook the first of his North American explorations, traveling more than 1,000 miles up the James River in Virginia, studying the natural history of that region. He discovered several native plants, of which the Franklinia alatamaha tree is his most famous. John Bartram's fame grew throughout the American colonies and in Europe (Bartram maintained correspondence with European botanists and regularly sent them clippings of plants and seeds; all rhododendrons in Europe, for example, date back to Bartram's shipments). In total, it is believed that Bartram helped to identify, cultivate, and preserve more than 200 American plant species.24
John Bartram’s son, William Bartram (1739-1823), would come into his own as a famous and respected botanist through the documentation of his travels. In 1773, William Bartram began a tour of the southern colonies that lasted four years, and he recorded his experience; a published version of his journals with his drawings drew acclaim as an "American natural history classic." Another of John Bartram’s sons, John Bartram Jr., turned his father’s gardens into a commercial venture. He published one of the first catalogs for the mail-order sale of plants and seeds.25 Such famous American founding fathers as George Washington and Thomas Jefferson purchased flowers and seeds from the Bartrams for their gardens, adding to the renown of John Bartram and his sons. Today, one can tour Bartram’s Garden and see various plants Bartram helped to cultivate as well as his home, which serves as a museum. The Garden now occupies forty-five acres, much smaller than the original 102-acre site. The preservation of the Bartrams’ contributions to botany is due to the philanthropy of Andrew Eastwick, a successful railroad executive, who purchased the estate and ceded the grounds to the city of Philadelphia in 1891 for public use. Those interested in the history of West Philadelphia may visit Bartram’s Garden from its entry at 54th Street and Lindbergh Boulevard in southwest Philadelphia.
Nearly a century after William Warner, another man, William Hamilton (1745-1813), came to exemplify estate building along the western banks of the Schuylkill River. He was the grandson of the famous Philadelphia lawyer, Andrew Hamilton (ca. 1676-1741), a major legal and political figure in Philadelphia in the first half of the 18th century. Andrew Hamilton, whose British origins are obscure, arrived in Virginia in 1700 and rapidly became a prominent attorney in both Virginia and Maryland.26 He moved to Philadelphia in 1715 and just two years later was named Attorney General of the Province of Pennsylvania. He was also a major investor in Pennsylvania land, including 300 acres on the west bank of the Schuylkill River in Blockley Township, which he purchased in 1735. In that same year Andrew Hamilton became the most famous lawyer in colonial America when he successfully defended John Peter Zenger, publisher of the New York Weekly Journal, in a landmark free-speech case. New York authorities had accused Zenger of "seditious libels." Andrew Hamilton won an acquittal for Zenger by challenging the law rather than proving his client’s innocence. The case ultimately contributed to the adoption of the principles of the First Amendment of the United States Constitution. Hamilton’s abilities in litigation at the trial gave rise to the expression, "Philadelphia lawyer." Andrew Hamilton’s Blockley lands passed down through his family: William Hamilton received 356 acres from his father, Andrew Hamilton II, while William’s uncle, James Hamilton (ca. 1710-1783), purchased an additional 179 acres in West Philadelphia, a tract of land which was contiguous with William Hamilton’s estate. When James Hamilton died, in 1783, he willed all 179 acres to William. In this manner—356 acres from his father, Andrew the second, and 179 acres from his uncle, James—William Hamilton, by 1785, came into ownership of approximately 535 acres of West Philadelphia land.
William Hamilton’s estate stretched from the Schuylkill River on the east to present-day 43rd Street on the west, and from present-day Market Street on the north to the present-day Woodlands Cemetery on the south.
By 1785, other wealthy Philadelphians were purchasing open land and building grand mansion houses along both the east and the west banks of the Schuylkill River. Many of these "country estates" were later purchased by the City of Philadelphia and consolidated within the present-day boundaries of Fairmount Park. On the east side of the river were, for example, John Macpherson’s "Mount Pleasant" (1765) and the Rawle family’s "Laurel Hill" (1767). On the west side were William Peters’ "Belmont" (1744) and on the old Warner farm, the imposing club house of the gentlemen members of the "Schuylkill Fishing Company" (ca. 1747, but now demolished). William Hamilton decided to follow their lead.
In 1788, near the southwest corner of his inherited estate, William Hamilton built a fashionable, two-story, stone mansion house which featured oval rooms and sweeping views of the Schuylkill River and southwest Philadelphia. He decorated the house with fancy English furniture and fine art. Perhaps most notably, he developed extensive gardens and filled them with native and exotic plants from across America and around the world. In 1798, at the time of that year’s U.S. Direct Tax, the tax assessors described an estate which included not only the 7,100 square foot main house, but also a hot house, greenhouse, seed house, tea house, ice house, coach house and stable, and two porter houses. William Hamilton called his place the "Woodlands" and for a quarter century—until his death in 1813—travelers throughout the English-speaking world came to see it. Over several years and in several transactions Hamilton’s heirs sold the Woodlands, but the mansion house itself has survived to the present day as the office and administration building of the Woodlands Cemetery, formed from part of the estate in 1840. Those interested in the history of West Philadelphia may visit the Woodlands through a magnificent gate to the cemetery grounds at 4000 Woodland Avenue.
North of the Woodlands, that is, north of present-day Market Street, there were, in the late 18th and early 19th centuries, a series of grand mansion houses and country estates on the west bank of the Schuylkill. The first of these was the Powel family’s ninety-seven-acre "Powelton," followed to the north by: a ninety-five-acre property owned by Anne Willing Bingham; John Britton’s ferry house and ferry, known as the "Upper Ferry on Schuylkill," which included fifty-eight acres of land (in 1798 Britton sold the ferry house, ferry, and fourteen acres to Adam Siter); David Beveridge’s property of twenty-nine acres; and Judge Richard Peters’ place, "Mantua Farm," which extended to 227½ acres. By the late 19th century, however, all five of these houses had been demolished and their grounds developed into the present-day neighborhoods of Powelton Village and Mantua. In addition, the land directly on the Schuylkill was taken up by the Pennsylvania Railroad for its rail lines and freight depot. Still farther north along the river stood other notable estates, among them Samuel Breck’s "Sweetbriar;" William Bingham’s showplace, "Lansdowne;" and the Peters family’s "Belmont." Several of these estates were later incorporated into Philadelphia’s present-day Fairmount Park. The houses known as "Belmont" and "Sweetbriar" have survived to the present time and are open for tours at regular hours.
The story of "Powelton" illustrates the eventual denser, residential development of West Philadelphia as well as any of these country estates. It was home to one of the greatest Greek Revival mansions in the entire Delaware Valley; its ninety-seven acres extended, in present-day terms, from the Schuylkill River on the east, 36th Street on the west, Lancaster Avenue on the southwest, and Powelton or Pearl Street on the north. Samuel Powel (1738-1793) and his wife, Elizabeth Willing Powel (1742-1830) purchased the land in 1775 and built a country house there sometime before 1784.
Samuel Powel died there in the midst of the yellow fever epidemic of 1793. His widow began building the grand mansion in May 1800. "Powelton House," as she called it, was completed in December 1801. It stood on ground bounded by present-day Race Street on the south, 32nd Street on the east, Powelton Street on the north, and Natrona Street on the west. Elizabeth Willing Powel adopted her nephew, John Powel Hare, who, in 1808, in accordance with his aunt’s wishes, changed his name to John Hare Powel (1786-1856). As a result, when she died, he inherited her enormous wealth. In the 1830s and 1840s John Hare Powel enlarged the house until it was an enormous Greek Revival palace, but in 1851, just as the Pennsylvania Railroad began to develop its 30th Street rail center, Powel sold his house and land to the Railroad and never returned to West Philadelphia. The Railroad partitioned the property, keeping thirty acres of low land along the Schuylkill River and selling sixty-three acres of higher ground for residential development. Elihu Spencer Miller (1817-1879)—whose wife was Anna Emlen Hare (1833-unk.)—purchased the great house and two acres in 1860. The Miller family occupied the place until 1883, when they sold it to Evert Janson Wendell, whose building firm of Wendell and Smith demolished the house in January and February 1885. Within a year, Wendell and Smith had cut two streets through the property and constructed no fewer than sixty houses on the two-acre site. From open country estate to crowded urban streets took only thirty-five years.
The Bingham property to the north of "Powelton" was also rapidly developed. William Bingham (1752-1804)—a member of the Continental Congress, Trustee of the University of Pennsylvania, Director of the Bank of North America, and from March 1795 to March 1801, U.S. Senator from Pennsylvania—was said to be the richest American of his time.27 Bingham was also influential in West Philadelphia. Bingham was President of the Philadelphia and Lancaster Turnpike and it was he, "between 1792 and 1796, [who] oversaw the construction and operation of [this] major commercial artery…The turnpike was modern in its engineering and crushed-stone surface and proved highly profitable as a business."28 Bingham’s daughters married sons of "Sir Francis Baring, head of the British House of Baring…"29 and it was the Barings who gave Baring and Hamilton streets their names and developed the Bingham land between 1850 and 1890.
William Hamilton died in 1813 and his heirs began selling the Woodlands in large acreages. The two most important sales were those of 1829 and 1840. In the first of those years Hamilton’s heirs sold 187 acres to "The Guardians for the Relief and Employment of the Poor of the City of Philadelphia," thereby creating in West Philadelphia a great public institution commonly known as the Philadelphia Almshouse, which was transformed in the early 20th century into the Philadelphia General Hospital and continued on the site until its closing in 1977. In the second instance, real estate lawyer Eli K. Price and his brother, Philip, a surveyor, created the Woodlands Cemetery Company, whose grounds included the Woodlands mansion house itself and ninety-one acres of surrounding land. By 1850, at about the same time "Powelton" was sold and subdivided, the "Woodlands" as it had been was no more. Nevertheless, the mansion house has survived and is open for tours at regular hours.
From a countryside of sizeable estates and fashionable mansion houses the eastern portion of West Philadelphia would emerge by the mid-19th century as a suburb, built up on streets which represented an extension of the City of Philadelphia to the east. As early as 1802, at least one person imagined this more cosmopolitan place.
That was Charles P. Varle, a distinguished cartographer, who, in that year, published a map for his fellow Philadelphians, entitled, To The Citizens of Philadelphia This New Plan Of The City And Its Environs. Varle portrayed a fully developed version of William Penn’s rectangular city, but also a developed Blockley Township, with the street grid system of the city extended across the Schuylkill to the eastern part of Blockley and Market Street forming a boulevard bridging the river, with an esplanade at its western end in the township. However, Varle’s dream aside, Blockley Township remained—with exceptions like Hamilton Village and Powelton Village—a rural and sparsely populated district for more than a hundred years after William Warner purchased land from the Lenape and established his "Willow Grove." This is evident in the first tax assessments and censuses conducted in the last decades of the eighteenth century.
Four sources provide another window on life in Blockley Township in the late 18th century. They are the records of the 1783 tax assessment, when the assessors counted the number of residents, buildings, and domesticated animals; the 1790 U.S. census; the 1798 U.S. Direct Tax; and the 1800 U.S. census. In 1783 the tax assessors counted 644 people in Blockley Township.30 They also counted 85 houses, 40 barns, 119 horses, 253 horn cattle and sheep. There were two ferries, two grist mills, and one tannery in the Township of Blockley. In 1790 the U.S. Census takers counted a 40% increase, to 883 residents (423 white males; 434 white females; twenty-two "other persons," that is, free African Americans; and four slaves). In 1798 the tax assessors counted 150 houses. In 1800 the U.S. Census takers documented another 20% growth in the general population to 1091 (549 white males; 507 white females; thirty-three free African Americans; and two slaves). Blockley was only sparsely populated, but was growing rapidly. People lived, on average, seven to a household. Men slightly outnumbered women in these early surveys, but they dominated as property holders. In 1798 only ten of the 150 household owners were women. Official documents thus reveal Blockley Township at the turn of the nineteenth century as a hinterland rather than a suburb to the famed city of Philadelphia to the east that had recently figured so importantly in the American Revolution and creation of a new republic.
In the nineteenth century, West Philadelphia transformed from a countryside of family farms and "gentlemen’s" estates to a set of residential communities. The building of bridges across the Schuylkill River promoted development. Traffic of goods and people between Philadelphia and the city’s hinterland west of the Schuylkill had greatly increased over the course of the eighteenth century with the growth of agricultural production, but until the first decade of the nineteenth century, traffic across the river was chiefly handled by three ferries: "Gray’s Ferry" on the south; "Middle Ferry" at present-day Market Street; and the "Upper Ferry" at present-day Spring Garden Street. But even before a more efficient means of bridging the river became an issue, the greater settlement of Philadelphia’s western hinterland rested on the building and improvement of roads and wagon ways.
As early as 1683, work had begun on what was variously called "Blockley and Merion Turnpike" or "Plank Road," a route used by Welsh Friends of Merion to go from their meeting house to the Upper Ferry on the Schuylkill. Part of this road evolved into present-day 54th Street and Lancaster Avenue.31 In 1722, a committee was appointed to plan further road connections from points in Blockley Township to the river.32 In 1786, a large project was initiated to construct the "First Long Turnpike in the United States."33 This "first important public improvement in the state" was completed by the Philadelphia and Lancaster Turnpike Road Company. The turnpike was completed for public use in 1795 and extended from Philadelphia to Lancaster.34
Improvement of roads and increased wagon traffic did not force innovations in bridging the Schuylkill River. War did. In 1776 at the outset of the American Revolution, George Washington, military leader of the colonial uprising, ordered the erection of a bridge connecting Philadelphia to the west and General Israel Putnam supervised its construction. The "bridge" was a primitive construction involving floating logs and scows; with the later British occupation of Philadelphia, the bridge was destroyed and the scows were hidden in the marshes.35 The British then oversaw a second bridge project. In 1777, Lord William Howe deputed Captain John Montressor to construct a bridge across the Schuylkill River. However, the bridge was hastily built and was destroyed by the rushing water shortly after its completion. Montressor and a team of engineers then gathered the debris and built a third bridge in the same location. The bridge remained after the British evacuated the area. The English traveler Henry Wansey described the bridge as:
[two iron chains] strained across the river parallel to each other, about six feet distance; on it are placed flat planks, fastened to each chain; and in this the horses and carriages pass over. As the horses stepped on the boards they sank under the pressure and the water rose between them; no railing on either side, it really looked very frightful and dangerous.36
In the 1780s, Thomas Paine, the famed author of the revolutionary tract Common Sense and leading figure in the American Revolution, rendered plans for the construction of an iron bridge across the Schuylkill River. However, the project lay idle and in 1798 a design by Timothy Palmer for a wooden bridge was selected over Thomas Paine’s model. On October 18, 1800, construction began on Palmer’s Permanent Bridge at Market Street. The Permanent Bridge opened to the public in 1805. The covered bridge spanned the 550-foot length of the river with a series of arches and was 1300 feet long in total construction.37 The Permanent Bridge facilitated expanding traffic across the Schuylkill until 1850, when it was engulfed and destroyed by fire. The bridge was then reconstructed and widened to incorporate tracks of the Pennsylvania Railroad (the mainline tracks of the Pennsylvania Railroad and its successors connecting Philadelphia and Pittsburgh and points further west run through a wide swathe of West Philadelphia to this very day).38 Fire would also destroy this bridge in 1875; a truly permanent Market Street Bridge constructed with iron spans and stone fortifications would open in 1887.39
No sooner was the first standing bridge across the Schuylkill completed in 1805 than pressure mounted for another span—this time with residential community-building west of the Schuylkill in mind. In 1809, Judge Richard Peters, owner of the Belmont estate along the northwest banks of the Schuylkill in Blockley Township, announced a plan to subdivide his lands and build suburban homes.40 To ease access to the community from Philadelphia, petitions were soon filed with the state legislature for public construction of a bridge at Spring Garden Street (replacing private ferry service at that location). The German engineer Lewis Wernwag promoted a design for the bridge. The state legislature then contracted with Wernwag’s company, and the wooden Wernwag Bridge opened in 1812. The bridge was the longest single-span bridge in the world. It had a single arch that spanned 343 feet, ninety feet longer than any other bridge.
The Spring Garden Bridge spurred immediate real estate development in Mantua. Following Judge Peters’ lead, John Britton, Jr., a local developer, in April of 1813 announced his plan to sell lots in Mantua for suburban home building and Philadelphians quickly responded.41 By the time the doors of the newly constructed First Presbyterian Church of Mantua opened in 1846, Mantua was a budding neighborhood of streets and homes with little evidence of the estates that marked the area a few decades before.42
As the first pockets of residential neighborhoods emerged in Blockley Township in the first decades of the nineteenth century with the bridging of the Schuylkill River, another kind of development occurred at the same time—the product of social disorders in William Penn’s utopian city of Philadelphia to the east. With growing concern for the numbers of ill, homeless, and unstable people roaming the streets of Philadelphia in the mid-1700s, Benjamin Franklin and other founders of Pennsylvania Hospital made provision for the admission of psychiatric patients when the hospital opened in February 1751 at 8th and Spruce Streets (the first hospital established in North America).43 Benjamin Rush, a physician at the hospital and a leading man of science of his age, instituted humane treatments of the insane, believing that they could be cured in bright surroundings and through recreational and occupational therapies. In keeping with this philosophy, the hospital eventually moved its psychiatric patients to the open countryside west of the Schuylkill: the Pennsylvania Hospital for the Insane opened there in 1841 under the progressive superintendency of Thomas Story Kirkbride, who oversaw its expansion and the construction of a formidable building complex in the late 1850s that stands to this day.44
During his tenure as superintendent, Kirkbride gained international recognition for his approaches to the treatment of the insane. Patients at the hospital resided unchained in private, sanitary, and well-lit rooms, worked outdoors, enjoyed recreational activities including lectures and the use of a library, and received medical attention. The psychiatric hospital established in West Philadelphia in the 1840s, eventually renamed as the Institute of the Pennsylvania Hospital, remained in operation until 1997, when declining revenues from insurance providers forced the closing and sale of the facility and the re-opening of a treatment center at Pennsylvania Hospital’s 8th Street location.
The Pennsylvania Hospital for the Insane served private patients, those who could afford to pay the costs of hospitalization. The number of indigent, ill, and unstable people in William Penn’s Philadelphia demanded a public response as well. As early as 1684, Penn had designated two lots in his new city for the building of an almshouse for the care of what he termed the "distressed." The first almshouse established there only served Quakers. In 1731, the city opened an inclusive facility at 4th and Pine Streets—the first of its kind in North America—affording shelter, employment, and medical attention to the poor, sick, and insane who had no means of support.45
Philadelphia’s public almshouse developed as a multifunctional institution, part shelter, workhouse, orphanage, and hospital (the facility, not coincidentally, had various names associated with it—as the Philadelphia Almshouse and the Philadelphia General Hospital). Housing growing numbers of public wards, the cramped original and expanded buildings could not meet the need. In similar fashion to the Pennsylvania Hospital for the Insane, guardians of the almshouse looked to establish a larger facility in the open, bucolic setting of the city’s hinterland. Accordingly, city officials in 1832 purchased 187 acres of the remaining Woodlands estate of the heirs of William Hamilton (an area stretching from today’s 34th Street to University Avenue and Spruce Street to Civic Center Boulevard, terrain now encompassing the University of Pennsylvania).46
Philadelphia General Hospital, or Blockley Almshouse as it was more commonly called, grew in its new West Philadelphia location to a massive four-building complex, each edifice three stories high and 500 feet long; they housed 3000 public charges by the 1870s, including 200 orphans, 600 insane, and the rest, indigents and vagrants, many in poor health. The physician-in-chief at the time, Dr. J. Chalmers Da Costa, caustically described the facility as follows:
Blockley is the microcosm of the city. Within these gray walls we find all sorts of physical and mental diseases, and also a multitude of those social maladies that degrade manhood, undermine national strength and threaten civilization itself. Here is drunkenness; here is pauperism; here is illegitimacy; here is madness; here are the eternal priestesses of prostitution who sacrifice for the sins of man; here is crime in all its protean aspects, and here is vice in all its monstrous forms.47
The presence of Blockley Almshouse in West Philadelphia slowly diminished. When trustees of the University of Pennsylvania determined to move the university from its original location on 9th and Chestnut Streets to West Philadelphia in the early 1870s, they purchased and were ceded land by the city that included parts of the Almshouse. With new understandings of mental illness, city officials in the first decades of the twentieth century created a separate public psychiatric hospital in northeast Philadelphia; later other facilities of the Blockley Almshouse were emptied or relocated. In 1977, the last remaining services of the Philadelphia General Hospital were closed and the institution ceased its operations; the care of needy patients was now handled by area private hospitals through Medicaid.48 However, the history of the institution did not end there. In the spring of 2002, excavators preparing for the construction of parking garages unearthed the graves of 437 individuals and eleven mass interment sites at the spot that had been the burial grounds for residents of the Blockley Almshouse.49
Blockley Township in the first half of the nineteenth century not only received the marginal peoples of the city of Philadelphia, but also the city’s dead. In addition to the Potter’s Field of Blockley Almshouse, a cemetery for the upper crust of Philadelphia was also established. In 1840, a group of investors purchased the remaining acres of the Woodlands estate of William Hamilton and his heirs, including the mansion and carriage house, and then created a bucolic burial ground. The elite of Philadelphia could pay their respects to and commune with their dearly departed at graveside visitations in the beautifully landscaped, rolling hill estate of the Hamilton family. Among the famous Philadelphians buried at Woodlands Cemetery are: Anthony Drexel, the great Philadelphia financier; John Edgar Thompson and Thomas Alexander Scott, powerful executives of the Pennsylvania Railroad; and Thomas Eakins, the artist.
Social disorder in the city of Philadelphia led to institution building in Blockley Township in the first decades of the nineteenth century. Social unrest and violence in districts due northeast and southeast of Philadelphia would also lead to the political incorporation of these areas and Blockley into a municipality vastly larger than William Penn’s original City of Brotherly Love.
The first calls for the creation of a greater Philadelphia occurred after the city was rocked by rioting in 1844 between native-born Protestants and newly arriving Irish Catholic immigrants. Labor unrest and fighting between volunteer fire brigades and other gangs in working-class communities such as Northern Liberties and Kensington to the northeast of the city and Southwark and Moyamensing to the south had raised apprehensions in the 1830s, but the riots of 1844, largely in Kensington, demanded a response, and a small chorus of civic leaders in Philadelphia suggested that greater political and policing control of the patchwork quilt of townships and boroughs surrounding the city was necessary.50
The advocates of consolidation gained little support at first. Their plan involved heavy public investment in infrastructure, and the higher taxes it required had little appeal. Many elite Philadelphians also feared annexing immigrant communities on the outskirts, especially since Democratic Party voters there would threaten the Whig Party dominance in the city. Incorporation and control by city elites found little resonance in working-class enclaves as well.51
However, by the early 1850s, opinions had shifted. First, advocates of consolidation developed new justifications. Annexation, they argued and predicted, would establish Philadelphia as the commercial center of the United States. With railroads extending through Pennsylvania to western territories, the vast proportion of the goods of the West and even products from East Asia would arrive and be merchandized from Philadelphia (with Philadelphia’s manufactured products finding markets in return). Establishing appropriate transshipment facilities required the organizing of the city’s then surrounding environs (here, Blockley Township was critical in the plans of the boosters of an enlarged city). Second, increased need for coordinated police, fire, water supply, and street construction, maintenance, and sanitation services in the face of social unrest and health crises convinced once wary elite groups of the benefits of consolidation. Many of them now further understood the profits to be made in real estate development in outlying districts with planned growth. Finally, with the prospects of huge public works projects—from the building of train stations and yards and boulevards and proposed edifices that would symbolize the rising cosmopolitan stature of Philadelphia—representatives from working-class districts joined the movement on behalf of consolidation with the prospect of vast new employment opportunities. With consensus and strong advocacy, a bill to consolidate Philadelphia easily passed the state legislature in Harrisburg in 1854. William Penn’s 1200-acre city became a 122 square mile metropolis.
William Warner’s "Blockley" thus became identified as West Philadelphia as of 1854. Blockley Township had not figured in the concerns for social unrest that initially drove the movement for consolidation. But, as transportation and real estate development became an element in the call for the organizing of outlying districts of Philadelphia, Blockley loomed large. Not coincidentally, a major advocate in the political drive toward annexation was an individual who had definite property interests in the area. Eli Kirk Price (1797-1884) shepherded the consolidation bill of 1854 through Harrisburg as a state senator. Price was a descendent of one of the original Quaker families to settle in Philadelphia and he was a noted lawyer and political figure. Although he lived in the city, Price accumulated substantial land holdings in Blockley Township and he was one of the founding members of the Woodlands Cemetery Company.52 The extent to which Price personally profited from the consolidation is difficult to determine, but he influentially stood for a new West Philadelphia, as a highly developed residential community within a major city.
- 1. Herbert Kraft, The Lenape or Delaware Indians (New Jersey: Lenape Lifeways, Inc., 2005), 35.
- 2. Ibid., 29.
- 3. Daniel Richter, Native Americans’ Pennsylvania (Pennsylvania: Pennsylvania Historical Association, 2005), 28.
- 4. "Native American Sites in the City of Philadelphia," Philadelphia Archaeological Forum (accessed May 20, 2008).
- 5. Clinton Alfred Weslager, The Delaware Indians: A History (New Brunswick, New Jersey: Rutgers University Press, 1972), 59.
- 6. Kraft 2005, 11-12.
- 7. Ibid., 13.
- 8. Kraft 1986, 129.
- 9. Ibid., 131.
- 10. Kraft 2005, 16.
- 11. Ibid., 22.
- 12. Ibid., 30-33.
- 13. Weslager 1972, 58-62.
- 14. Kraft 2005, 20.
- 15. Kraft 1986, 138.
- 16. Kraft 2005, 37.
- 17. Weslager 1972, 155.
- 18. Ibid., 164.
- 19. Ibid., 170.
- 20. George R. Fisher, "The Walking Purchase," Philadelphia Reflections (accessed May 24, 2008).
- 21. Craig W. Horle, Lawmaking and Legislators in Pennsylvania: A Biographical Dictionary, Vol. 1 (1682-1709) (Philadelphia: University of Pennsylvania Press, 1991).
- 22. Ibid.
- 23. Phillip Drennon Thomas, "Bartram, John," American National Biography Online (accessed July 11, 2008).
- 24. Bartram’s Garden (accessed February 15, 2008).
- 25. Ibid.
- 26. Craig W. Horle, Joseph S. Foster, and Jeffrey L. Scheib, eds., Lawmaking and Legislators in Pennsylvania: A Biographical Dictionary, Vol. 2 (1710-1756) (Philadelphia: University of Pennsylvania Press, 1997), 416-49.
- 27. Robert J. Gough, "Bingham, William," American National Biography Online (accessed July 14, 2008).
- 28. Ibid.
- 29. Ibid.
- 30. Tello J. d’Apery, Overbrook Farms: Its Historical Background, Growth and Community Life (Philadelphia: The Magee Press, 1936), 38 (accessed July 16, 2008).
- 31. Ibid., 38, 49-50.
- 32. M. Laffitte Vieira, West Philadelphia Illustrated (Philadelphia: Avil Printing Co., 1903), 28; and D’Apery 1936, 37 (accessed February 15, 2008).
- 33. Ibid., 47.
- 34. Ibid.
- 35. Bennett Nolan, The Schuylkill (New Brunswick: Rutgers University Press, 1951), 249-250.
- 36. Nolan 1951, 251.
- 37. John Lewis, The Reputation of the Lower Schuylkill (Philadelphia: Enterprise Publishing Company, 1924), 3.
- 38. Vieira 1903, 36.
- 39. Ibid., 36.
- 40. Rosenthal, "Mantua: The Real Estate Promotion that Grew and Grew," A History of Philadelphia’s University City (Philadelphia, PA.: Printing Office of the University of Pennsylvania, 1963) (accessed June 14, 2008).
- 41. Ibid.
- 42. Preston Thayer and Jed Porter, Workshop of the World (Oliver Evans Press, 1990) (accessed February 20, 2008).
- 43. Howard Sudak, "A Remarkable Legacy: Pennsylvania Hospital’s Influence on the Field of Psychiatry," University of Pennsylvania Health System (accessed February 20, 2008).
- 44. David J. Rothman, The Discovery of the Asylum: Social Order and Disorder in the New Republic (Boston: Little, Brown and Company, 1971), 141-45.
- 45. John Welsh Croskey, History of Blockley: A History of the Philadelphia Hospital from Its Inception, 1731-1928 (Philadelphia: F.A. Davis Company, 1929), 11-13.
- 46. Ibid., 64.
- 47. Rosenthal 1963, 2.
- 48. "'Old Blockley': Philadelphia General Hospital," City of Philadelphia (accessed June 15, 2008; no longer available).
- 49. Kise Straw and Kolodmer, "Blockley Almshouse Cemetery," KSK Architects, Planners, Historians (accessed June 15, 2008; no longer available).
- 50. David Montgomery, "The Shuttle and the Cross: Weavers and Artisans in the Kensington Riots of 1844," Journal of Social History, 5 (Summer 1972): 411-446.
- 51. Andrew Heath, The Manifest Destiny of Philadelphia: Imperialism, Republicanism, and the Remaking of a City and Its People, 1837-1877 (Ph.D. dissertation, University of Pennsylvania, 2008).
- 52. "Eli K. Papers," University of Delaware Library Special Collections Department (Processed August 1993, by Rhonda R. Newton) (accessed July 10, 2008).
Naval Battle of Guadalcanal
The Naval Battle of Guadalcanal, sometimes referred to as the Third and Fourth Battles of Savo Island, the Battle of the Solomons, the Battle of Friday the 13th, or, in Japanese sources, the Third Battle of the Solomon Sea (第三次ソロモン海戦 Dai-san-ji Soromon Kaisen), took place from 12–15 November 1942, and was the decisive engagement in a series of naval battles between Allied (primarily American) and Imperial Japanese forces during the months-long Guadalcanal Campaign in the Solomon Islands during World War II. The action consisted of combined air and sea engagements over four days, most near Guadalcanal and all related to a Japanese effort to reinforce land forces on the island. The only two U.S. Navy admirals to be killed in a surface engagement in the war were lost in this battle.
|Naval Battle of Guadalcanal|
|Part of the Pacific Theater of World War II|
[Photograph: Smoke rises from two Japanese aircraft shot down off Guadalcanal on 12 November 1942. Photographed from USS President Adams; the ship at right is USS Betelgeuse.]
|Commanders and leaders|
Allied: William Halsey, Jr.; Daniel Callaghan †; Norman Scott †; Willis A. Lee
Japanese: Isoroku Yamamoto
|Strength|
U.S.: 2 heavy cruisers, 3 light cruisers
Japanese: 6 heavy cruisers, 4 light cruisers
|Casualties and losses|
U.S. (13–15 Nov): 1,732 killed
Japanese: 1 battleship heavily damaged in the first phase (13 Nov); 1 heavy cruiser and 4 transports (beached first) lost; about 1,900 killed (exclusive of transport losses)
Allied forces landed on Guadalcanal on 7 August 1942 and seized an airfield, later called Henderson Field, that was under construction by the Japanese military. The Imperial Japanese Army and Navy made several subsequent attempts to recapture the airfield, using reinforcements delivered to Guadalcanal by ship, all of which failed. In early November 1942, the Japanese organized a transport convoy to take 7,000 infantry troops and their equipment to Guadalcanal in another attempt to retake the airfield. Several Japanese warship forces were assigned to bombard Henderson Field with the goal of destroying Allied aircraft that posed a threat to the convoy. Learning of the Japanese reinforcement effort, U.S. forces launched aircraft and warship attacks to defend Henderson Field and prevent the Japanese ground troops from reaching Guadalcanal.
In the resulting battle, both sides lost numerous warships in two extremely destructive surface engagements at night. Nevertheless, the U.S. succeeded in turning back attempts by the Japanese to bombard Henderson Field with battleships. Allied aircraft also sank most of the Japanese troop transports and prevented the majority of the Japanese troops and equipment from reaching Guadalcanal. Thus, the battle turned back Japan's last major attempt to dislodge Allied forces from Guadalcanal and nearby Tulagi, resulting in a strategic victory for the U.S. and its allies and deciding the ultimate outcome of the Guadalcanal campaign in their favor.
The six-month Guadalcanal campaign began on 7 August 1942, when Allied (primarily U.S.) forces landed on Guadalcanal, Tulagi, and the Florida Islands in the Solomon Islands, a pre-war colonial possession of Great Britain. The landings were meant to prevent the Japanese from using the islands as bases from which to threaten the supply routes between the U.S. and Australia, to secure them as starting points for a campaign to neutralize the major Imperial Japanese military base at Rabaul, and to support the Allied New Guinea campaign. The Japanese had occupied Tulagi in May 1942 and began constructing an airfield on Guadalcanal in June 1942.
By nightfall on 8 August, the 11,000 Allied troops had secured Tulagi, the nearby small islands, and a Japanese airfield under construction at Lunga Point on Guadalcanal (later renamed Henderson Field). Allied aircraft operating out of Henderson were called the "Cactus Air Force" (CAF) after the Allied code name for Guadalcanal. To protect the airfield, the U.S. Marines established a perimeter defense around Lunga Point. Additional reinforcements over the next two months increased the number of U.S. troops at Lunga Point to more than 20,000 men.
In response, the Japanese Imperial General Headquarters assigned the Imperial Japanese Army's 17th Army, a corps-sized command based at Rabaul under Lieutenant-General Harukichi Hyakutake, the task of retaking Guadalcanal. Units of the 17th Army began to arrive on Guadalcanal on 19 August to drive Allied forces from the island.
Because of the threat by CAF aircraft based at Henderson Field, the Japanese were unable to use large, slow transport ships to deliver troops and supplies to the island. Instead, they used warships based at Rabaul and the Shortland Islands. The Japanese warships—mainly light cruisers or destroyers from the Eighth Fleet under the command of Vice Admiral Gunichi Mikawa—were usually able to make the round trip down "The Slot" to Guadalcanal and back in a single night, thereby minimizing their exposure to air attack. Delivering the troops in this manner, however, prevented most of the soldiers' heavy equipment and supplies—such as heavy artillery, vehicles, and much food and ammunition—from being carried to Guadalcanal with them. These high-speed warship runs to Guadalcanal occurred throughout the campaign and came to be known as the "Tokyo Express" by Allied forces and "Rat Transportation" by the Japanese.
The first Japanese attempt to recapture Henderson Field failed when a 917-man force was defeated on 21 August in the Battle of the Tenaru. The next attempt took place from 12–14 September, ending in the defeat of the 6,000 men under the command of Major General Kiyotake Kawaguchi at the Battle of Edson's Ridge.
In October, the Japanese again tried to recapture Henderson Field by delivering 15,000 more men—mainly from the Army's 2nd Infantry Division—to Guadalcanal. In addition to delivering the troops and their equipment by Tokyo Express runs, the Japanese also successfully pushed through one large convoy of slower transport ships. Enabling the approach of the transport convoy was a nighttime bombardment of Henderson Field by two battleships on 14 October that heavily damaged the airfield's runways, destroyed half of the CAF's aircraft, and burned most of the available aviation fuel. In spite of the damage, Henderson personnel were able to restore the two runways to service and replacement aircraft and fuel were delivered, gradually restoring the CAF to its pre-bombardment level over the next few weeks.
The next Imperial attempt to retake the island with the newly arrived troops occurred from 20–26 October and was defeated with heavy losses in the Battle for Henderson Field. At the same time, Admiral Isoroku Yamamoto (the commander of the Japanese Combined Fleet) defeated U.S. naval forces in the Battle of the Santa Cruz Islands, driving them away from the area. The Japanese carriers, however, were also forced to retreat because of losses to carrier aircraft and aircrews. Thereafter, Yamamoto's ships returned to their main bases at Truk in Micronesia, where he had his headquarters, and Rabaul while three carriers returned to Japan for repairs and refitting.
The Japanese Army planned another attack on Guadalcanal in November 1942, but further reinforcements were needed before the operation could proceed. The Army requested assistance from Yamamoto to deliver the needed reinforcements to the island and to support its planned offensive against the Allied forces guarding Henderson Field. Yamamoto provided 11 large transport ships to carry 7,000 army troops from the 38th Infantry Division, along with their ammunition, food, and heavy equipment, from Rabaul to Guadalcanal. He also sent a warship support force from Truk on 9 November that included the battleships Hiei and Kirishima. Equipped with special fragmentation shells, they were to bombard Henderson Field on the night of 12–13 November, destroying the airfield and the aircraft stationed there so that the slow, heavy transports could reach Guadalcanal and unload safely the next day. The warship force was commanded from Hiei by recently promoted Vice Admiral Hiroaki Abe.

Because of the constant threat from Japanese aircraft and warships, it was difficult for the Allies to resupply their forces on Guadalcanal, which were often under attack from Imperial land and sea forces in the area. In early November 1942, Allied intelligence learned that the Japanese were preparing another attempt to retake Henderson Field. The U.S. therefore sent Task Force 67 (TF 67)—a large reinforcement and re-supply convoy, split into two groups and commanded by Rear Admiral Richmond K. Turner—to Guadalcanal on 11 November. The supply ships were protected by two task groups—commanded by Rear Admirals Daniel J. Callaghan and Norman Scott—and by aircraft from Henderson Field on Guadalcanal. The transport ships were attacked several times on 11 and 12 November near Guadalcanal by Japanese aircraft based at Buin, Bougainville, in the Solomons, but most were unloaded without serious damage. Twelve Japanese aircraft were shot down by anti-aircraft fire from the U.S. ships or by fighter aircraft flying from Henderson Field.
Abe's warship force assembled 70 nmi (81 mi; 130 km) north of Indispensable Strait and proceeded towards Guadalcanal on 12 November, with an estimated arrival in the early morning of 13 November. The convoy of slower transport ships and 12 escorting destroyers, under the command of Raizo Tanaka, began its run down "The Slot" (New Georgia Sound) from the Shortlands with an estimated arrival at Guadalcanal during the night of 13 November. In addition to the battleships Hiei (Abe's flagship) and Kirishima, Abe's force included the light cruiser Nagara and 11 destroyers (Samidare, Murasame, Asagumo, Teruzuki, Amatsukaze, Yukikaze, Ikazuchi, Inazuma, Akatsuki, Harusame, and Yūdachi). Three more destroyers (Shigure, Shiratsuyu, and Yūgure) would provide a rear guard in the Russell Islands during Abe's foray into the waters of "Savo Sound" around Savo Island off the north coast of Guadalcanal—waters that would soon be nicknamed "Ironbottom Sound" as a result of this succession of battles and skirmishes.

U.S. reconnaissance aircraft spotted the approach of the Japanese ships and passed a warning to the Allied command. Thus warned, Turner detached all usable combat ships to protect the troops ashore from the expected Japanese naval attack and troop landing, and ordered the supply ships at Guadalcanal to depart by the early evening of 12 November. Callaghan was a few days senior to the more experienced Scott, and therefore was placed in overall command.
Callaghan prepared his force to meet the Japanese that night in the sound. His force consisted of two heavy cruisers (San Francisco and Portland), three light cruisers (Helena, Juneau, and Atlanta), and eight destroyers: Cushing, Laffey, Sterett, O'Bannon, Aaron Ward, Barton, Monssen, and Fletcher. Admiral Callaghan commanded from San Francisco.
During their approach to Guadalcanal, the Japanese force passed through a large and intense rain squall which, along with a complex formation plus some confusing orders from Abe, split the formation into several groups. The U.S. force steamed in a single column in Ironbottom Sound, with destroyers in the lead and rear of the column, and the cruisers in the center. Five ships had the new, far-superior SG radar, but Callaghan's deployment put none of them in the forward part of the column, nor did he choose one for his flagship. Callaghan did not issue a battle plan to his ship commanders.
At about 01:25 on 13 November, in near-complete darkness due to the bad weather and dark moon, the ships of the Imperial Japanese force entered the sound between Savo Island and Guadalcanal and prepared to bombard Henderson Field with the special ammunition loaded for the purpose. The ships arrived from an unexpected direction, coming not down the slot but from the west side of Savo Island, thus entering the sound from the northwest rather than the north. Unlike their American counterparts, the Japanese sailors had drilled and practiced night fighting extensively, conducting frequent live-fire night gunnery drills and exercises. This experience would be telling in not only the pending encounter, but in several other fleet actions off Guadalcanal in the months to come.
Several of the U.S. ships detected the approaching Japanese on radar, beginning at about 01:24, but had trouble communicating the information to Callaghan due to problems with radio equipment, lack of discipline regarding communications procedures, and general inexperience in operating as a cohesive naval unit. Messages were sent and received but did not reach the commander in time to be processed and used. With his limited understanding of the new technology, Admiral Callaghan wasted further time trying to reconcile the range and bearing information reported by radar with his limited sight picture, to no avail. Lacking a modern Combat Information Center (CIC), where incoming information could be quickly processed and co-ordinated, the radar operator was reporting on vessels that were not in sight, while Callaghan was trying to coordinate the battle visually, from the bridge. (Post battle analysis of this and other early surface actions would lead directly to the introduction of modern CICs early in 1943.)
Several minutes after initial radar contact, the two forces sighted each other at about the same time, but both Abe and Callaghan hesitated to order their ships into action. Abe was apparently surprised by the proximity of the U.S. ships, and with his battleships' decks stacked with high-explosive (rather than armor-piercing) munitions, was momentarily uncertain whether to withdraw to give his battleships time to rearm or to continue onward. He decided to continue onward. Callaghan apparently intended to cross the T of the Japanese, as Scott had done at Cape Esperance, but—confused by the incomplete information he was receiving, plus the fact that the Japanese formation consisted of several scattered groups—he gave several confusing orders on ship movements and delayed too long in acting.
The U.S. ship formation began to fall apart, apparently further delaying Callaghan's order to commence firing as he first tried to ascertain and align his ships' positions. Meanwhile, the two forces' formations began to overlap as individual ship commanders on both sides anxiously awaited permission to open fire.
At 01:48, Akatsuki and Hiei turned on large searchlights and illuminated Atlanta only 3,000 yd (2,700 m) away—almost point-blank range for the battleship's main guns. Several ships on both sides spontaneously began firing, and the formations of the two adversaries quickly disintegrated. Realizing that his force was almost surrounded by Japanese ships, Callaghan issued the confusing order, "Odd ships fire to starboard, even ships fire to port", though no pre-battle planning had assigned any such identity numbers to reference, and the ships were no longer in coherent formation. Most of the remaining U.S. ships then opened fire, although several had to quickly change their targets to attempt to comply with Callaghan's order. As the ships from the two sides intermingled, they battled each other in an utterly confused and chaotic short-range mêlée in which superior Japanese optic sights and well-practiced night battle drill proved deadly effective. An officer on Monssen likened it afterwards to "a barroom brawl after the lights had been shot out".
At least six of the U.S. ships—including Laffey, O'Bannon, Atlanta, San Francisco, Portland, and Helena—fired at Akatsuki, which drew attention to herself with her illuminated searchlight. The Japanese destroyer was hit repeatedly and blew up and sank within a few minutes.
Perhaps because it was the lead cruiser in the U.S. formation, Atlanta was the target of fire and torpedoes from several Japanese ships—probably including Nagara, Inazuma, and Ikazuchi—in addition to Akatsuki. The gunfire caused heavy damage to Atlanta, and a type 93 torpedo strike cut all of her engineering power. The disabled cruiser drifted into the line of fire of San Francisco, which accidentally fired on her, causing even greater damage. Admiral Scott and many of the bridge crew were killed. Without power and unable to fire her guns, Atlanta drifted out of control and out of the battle as the Japanese ships passed her by. The lead U.S. destroyer, Cushing, was also caught in a crossfire between several Japanese destroyers and perhaps Nagara. She too was hit heavily and stopped dead in the water.
Hiei, with her nine lit searchlights, huge size, and course taking her directly through the U.S. formation, became the focus of gunfire from many of the U.S. ships. Laffey passed so close to Hiei that they missed colliding by 20 ft (6 m). Hiei was unable to depress her main or secondary batteries low enough to hit Laffey, but Laffey was able to rake the Japanese battleship with 5 in (127.0 mm) shells and machine gun fire, causing heavy damage to the superstructure and bridge, wounding Admiral Abe and killing his chief of staff. Abe was thus limited in his ability to direct his ships for the rest of the battle. Sterett and O'Bannon likewise fired several salvos into Hiei's superstructure from close range, and perhaps one or two torpedoes into her hull, causing further damage before both destroyers escaped into the darkness.
Unable to fire her main or secondary batteries at the three destroyers causing her so much trouble, Hiei instead concentrated on San Francisco, which was passing by only 2,500 yd (2,300 m) away. Along with Kirishima, Inazuma, and Ikazuchi, the four ships made repeated hits on San Francisco, disabling her steering control and killing Admiral Callaghan, Captain Cassin Young, and most of the bridge staff. The first few salvos from Hiei and Kirishima consisted of the special fragmentation bombardment shells, which reduced damage to the interior of San Francisco and may have saved her from being sunk outright. Not expecting a ship-to-ship confrontation, it took the crews of the two enemy battleships several minutes to switch to armor-piercing ammunition, and San Francisco, almost helpless to defend herself, managed to momentarily sail clear of the melee. She had landed at least one shell in Hiei's steering gear room during the exchange, flooding it with water, shorting out her power steering generators, and severely inhibiting Hiei's steering capability. Helena followed San Francisco to try to protect her from further harm.
Two of the U.S. destroyers met a sudden demise. Either Nagara or the destroyers Teruzuki and Yukikaze came upon the drifting Cushing and pounded her with gunfire, knocking out all of her systems. Unable to fight back, Cushing's crew abandoned ship. Cushing sank several hours later. Laffey, having escaped from her engagement with Hiei, encountered Asagumo, Murasame, Samidare, and, perhaps, Teruzuki. The Japanese destroyers pounded Laffey with gunfire and then hit her with a torpedo which broke her keel. A few minutes later fires reached her ammunition magazines and she blew up and sank.
Portland—after helping sink Akatsuki—was hit by a torpedo from Inazuma or Ikazuchi, causing heavy damage to her stern and forcing her to steer in a circle. After completing her first loop, she was able to fire four salvos at Hiei but otherwise took little further part in the battle.
Yūdachi and Amatsukaze independently charged the rear five ships of the U.S. formation. Two torpedoes from Amatsukaze hit Barton, immediately sinking her with heavy loss of life. Amatsukaze turned back north and later also hit Juneau with a torpedo while the cruiser was exchanging fire with Yūdachi, stopping her dead in the water, breaking her keel, and knocking out most of her systems. Juneau then turned east and slowly crept out of the battle area.
Monssen avoided the wreck of Barton and steamed onward looking for targets. She was noticed by Asagumo, Murasame, and Samidare who had just finished blasting Laffey. They smothered Monssen with gunfire, damaging her severely and forcing the crew to abandon ship. The ship sank some time later.
Amatsukaze approached San Francisco with the intention of finishing her off. While concentrating on San Francisco, Amatsukaze did not notice the approach of Helena, which fired several full broadsides at Amatsukaze from close range and knocked her out of the action. The heavily damaged Amatsukaze escaped under cover of a smoke screen while Helena was distracted by an attack by Asagumo, Murasame, and Samidare.
Aaron Ward and Sterett, independently searching for targets, both sighted Yūdachi, who appeared unaware of the approach of the two U.S. destroyers. Both U.S. ships hit Yūdachi simultaneously with gunfire and torpedoes, heavily damaging the destroyer and forcing her crew to abandon ship. The ship did not sink right away, however. Continuing on her way, Sterett was suddenly ambushed by Teruzuki, heavily damaged, and forced to withdraw from the battle area to the east. Aaron Ward wound up in a one-on-one duel with Kirishima, which the destroyer lost with heavy damage. She also tried to retire from the battle area to the east but soon stopped dead in the water because the engines were damaged.
The star shells rose, terrible and red. Giant tracers flashed across the night in orange arches. ... the sea seemed a sheet of polished obsidian on which the warships seemed to have been dropped and were immobilized, centered amid concentric circles like shock waves that form around a stone dropped in mud.
Ira Wolfert, an American war correspondent, was with the Marines on shore and wrote of the engagement:
The action was illuminated in brief, blinding flashes by Jap searchlights which were shot out as soon as they were turned on, by muzzle flashes from big guns, by fantastic streams of tracers, and by huge orange-colored explosions as two Jap destroyers and one of our destroyers blew up... From the beach it resembled a door to hell opening and closing... over and over.
After nearly 40 minutes of brutal, close-quarters fighting, the two sides broke contact and ceased fire at 02:26, after Abe and Captain Gilbert Hoover (the captain of Helena and senior surviving U.S. officer) ordered their respective forces to disengage. Admiral Abe had one battleship (Kirishima), one light cruiser (Nagara), and four destroyers (Asagumo, Teruzuki, Yukikaze, and Harusame) with only light damage and four destroyers (Inazuma, Ikazuchi, Murasame, and Samidare) with moderate damage. The U.S. had only one light cruiser (Helena) and one destroyer (Fletcher) that were still capable of effective resistance. Although perhaps unclear to Abe, the way was now clear for him to bombard Henderson Field and finish off the U.S. naval forces in the area, thus allowing the troops and supplies to be landed safely on Guadalcanal.
At this crucial juncture, Abe chose to abandon the mission and depart the area. Several reasons are conjectured as to why he made this decision. Much of the special bombardment ammunition had been expended in the battle. If the bombardment failed to destroy the airfield, then his warships would be vulnerable to CAF air attack at dawn. His own injuries and the deaths of some of his staff from battle action may have affected Abe's judgement. Perhaps he was also unsure as to how many of his or the U.S. ships were still combat-capable because of communication problems with the damaged Hiei. Furthermore, his own ships were scattered and would have taken some time to reassemble for a coordinated resumption of the mission to attack Henderson Field and the remnants of the U.S. warship force. For whatever reason, Abe called for a disengagement and general retreat of his warships, although Yukikaze and Teruzuki remained behind to assist Hiei. Samidare picked up survivors from Yūdachi at 03:00 before joining the other Japanese ships in the retirement northwards.
At 03:00 on 13 November, Admiral Yamamoto postponed the planned landings of the transports, which returned to the Shortlands to await further orders. Dawn revealed three crippled Japanese (Hiei, Yūdachi, and Amatsukaze), and three crippled U.S. ships (Portland, Atlanta, and Aaron Ward) in the general vicinity of Savo Island. Amatsukaze was attacked by U.S. dive bombers but escaped further damage as she headed to Truk, and eventually returned to action several months later. The abandoned hulk of Yūdachi was sunk by Portland, whose guns were still functioning despite other damage to the ship. The tugboat Bobolink motored around Ironbottom Sound throughout the day of 13 November, assisting the damaged U.S. ships and rescuing U.S. survivors from the water.
Hiei was attacked repeatedly by Marine Grumman TBF Avenger torpedo planes from Henderson Field, Navy TBFs and Douglas SBD Dauntless dive-bombers from Enterprise, which had departed Nouméa on 11 November, and Boeing B-17 Flying Fortress bombers of the United States Army Air Forces' 11th Bombardment Group from Espiritu Santo. Abe and his staff transferred to Yukikaze at 08:15. Kirishima was ordered by Abe to take Hiei under tow, escorted by Nagara and its destroyers, but the attempt was cancelled because of the threat of submarine attack and Hiei's increasing unseaworthiness. After sustaining more damage from air attacks, Hiei sank northwest of Savo Island, perhaps after being scuttled by her remaining crew, in the late evening of 13 November.
Portland, San Francisco, Aaron Ward, and Sterett were eventually able to make their way to rear-area ports for repairs. Atlanta, however, sank near Guadalcanal at 20:00 on 13 November. Departing from the Solomon Islands area with San Francisco, Helena, Sterett, and O'Bannon later that day, Juneau was torpedoed and sunk by the Japanese submarine I-26. Juneau's 100+ survivors (out of a total complement of 697) were left to fend for themselves in the open ocean for eight days before rescue aircraft belatedly arrived. While awaiting rescue, all but ten of Juneau's crew died from their injuries, the elements, or shark attacks. The dead included the five Sullivan brothers.
Most historians appear to agree that Abe's decision to retreat represented a strategic victory for the United States. Henderson Field remained operational, with attack aircraft ready to deter the slow Imperial transports from approaching Guadalcanal with their precious cargoes. Moreover, the Japanese had lost an opportunity to eliminate the U.S. naval forces in the area, a result from which even the comparatively resource-rich U.S. would have taken some time to recover. Reportedly furious, Admiral Yamamoto relieved Abe of command and later directed his forced retirement from the military. However, it appears that Yamamoto may have been more angry over the loss of one of his battleships (Hiei) than over the abandonment of the supply mission and the failure to completely destroy the U.S. force. Shortly before noon, Yamamoto ordered Vice Admiral Nobutake Kondō, commanding the Second Fleet at Truk, to form a new bombardment unit around Kirishima and attack Henderson Field on the night of 14–15 November.
Including the sinking of Juneau, total U.S. losses in the battle were 1,439 dead. The Japanese suffered between 550 and 800 dead. Analyzing the impact of this engagement, historian Richard B. Frank states:
This action stands without peer for furious, close-range, and confused fighting during the war. But the result was not decisive. The self-sacrifice of Callaghan and his task force had purchased one night's respite for Henderson Field. It had postponed, not stopped, the landing of major Japanese reinforcements, nor had the greater portion of the (Japanese) Combined Fleet yet been heard from.
Other actions, 13–14 November
Although the reinforcement effort to Guadalcanal was delayed, the Japanese did not give up trying to complete the original mission, albeit a day later than originally planned. On the afternoon of 13 November, Tanaka and the 11 transports resumed their journey toward Guadalcanal. A Japanese force of cruisers and destroyers from the 8th Fleet (based primarily at Rabaul and originally assigned to cover the unloading of the transports on the evening of 13 November) was given the mission that Abe's force had failed to carry out—the bombardment of Henderson Field. The battleship Kirishima, after abandoning its rescue effort of Hiei on the morning of 13 November, steamed north between Santa Isabel and Malaita Islands with her accompanying warships to rendezvous with Kondo's Second Fleet, inbound from Truk, to form the new bombardment unit.
The 8th Fleet cruiser force, under the command of Vice Admiral Gunichi Mikawa, included the heavy cruisers Chōkai, Kinugasa, Maya, and Suzuya, the light cruisers Isuzu and Tenryū, and six destroyers. Mikawa's force was able to slip into the Guadalcanal area uncontested, the battered U.S. naval force having withdrawn. Suzuya and Maya, under the command of Shōji Nishimura, bombarded Henderson Field while the rest of Mikawa's force cruised around Savo Island, guarding against any U.S. surface attack (which in the event did not occur). The 35-minute bombardment caused some damage to various aircraft and facilities on the airfield but did not put it out of operation. The cruiser force ended the bombardment around 02:30 on 14 November and cleared the area to head towards Rabaul on a course south of the New Georgia island group.
At daybreak, aircraft from Henderson Field, Espiritu Santo, and Enterprise—stationed 200 nmi (230 mi; 370 km) south of Guadalcanal—began their attacks, first on Mikawa's force heading away from Guadalcanal, and then on the transport force heading towards the island. The attacks on Mikawa's force sank Kinugasa, killing 511 of her crew, and damaged Maya, forcing her to return to Japan for repairs. Repeated air attacks on the transport force overwhelmed the escorting Japanese fighter aircraft, sank six of the transports, and forced one more to turn back with heavy damage (it later sank). Survivors from the transports were rescued by the convoy's escorting destroyers and returned to the Shortlands. A total of 450 army troops were reported to have perished. The remaining four transports and four destroyers continued towards Guadalcanal after nightfall of 14 November, but stopped west of Guadalcanal to await the outcome of a warship surface action developing nearby (below) before continuing.
Kondo's ad hoc force rendezvoused at Ontong Java on the evening of 13 November, then reversed course and refueled out of range of Henderson Field's bombers on the morning of 14 November. The U.S. submarine Trout stalked but was unable to attack Kirishima during refueling. The bombardment force continued south and came under air attack late in the afternoon of 14 November, during which they were also attacked by the submarine Flying Fish, which launched five torpedoes (but scored no hits) before reporting its contact by radio.
Kondo's force approached Guadalcanal via Indispensable Strait around midnight on 14 November, and a quarter moon provided moderate visibility of about 7 km (3.8 nmi; 4.3 mi). The force included Kirishima, heavy cruisers Atago and Takao, light cruisers Nagara and Sendai, and nine destroyers, some of the destroyers being survivors (along with Kirishima and Nagara) of the first night engagement two days prior. Kondo flew his flag in the cruiser Atago.
With few undamaged ships remaining, Admiral William Halsey, Jr. detached the new battleships Washington and South Dakota from Enterprise's support group, together with four destroyers, as TF 64 under Admiral Willis A. Lee to defend Guadalcanal and Henderson Field. It was a scratch force; the battleships had operated together for only a few days, and their four escorts were from four different divisions—chosen simply because, of the available destroyers, they had the most fuel. The U.S. force arrived in Ironbottom Sound on the evening of 14 November and began patrolling around Savo Island. The U.S. warships were in column formation with the four destroyers in the lead, followed by Washington, with South Dakota bringing up the rear. At 22:55 on 14 November, radar on South Dakota and Washington began picking up Kondo's approaching ships near Savo Island, at a distance of around 18,000 m (20,000 yd).
Kondo split his force into several groups, with one group—commanded by Shintaro Hashimoto and consisting of Sendai and destroyers Shikinami and Uranami ("C" on the maps)—sweeping along the east side of Savo Island, and destroyer Ayanami ("B" on the maps) sweeping counterclockwise around the southwest side of Savo Island to check for the presence of Allied ships. The Japanese ships spotted Lee's force around 23:00, though Kondo misidentified the battleships as cruisers. Kondo ordered the Sendai group of ships—plus Nagara and four destroyers ("D" on the maps)—to engage and destroy the U.S. force before he brought the bombardment force of Kirishima and heavy cruisers ("E" on the maps) into Ironbottom Sound. The U.S. ships ("A" on the maps) detected the Sendai force on radar but did not detect the other groups of Japanese ships. Using radar targeting, the two U.S. battleships opened fire on the Sendai group at 23:17. Admiral Lee ordered a cease fire about five minutes later after the northern group disappeared from his ship's radar. However, Sendai, Uranami, and Shikinami were undamaged and circled out of the danger area.
Meanwhile, the four U.S. destroyers in the vanguard of the U.S. formation began engaging both Ayanami and the Nagara group of ships at 23:22. Nagara and her escorting destroyers responded effectively with accurate gunfire and torpedoes, and destroyers Walke and Preston were hit and sunk within 10 minutes with heavy loss of life. The destroyer Benham had part of her bow blown off by a torpedo and had to retreat (she sank the next day), and destroyer Gwin was hit in her engine room and put out of the fight. However, the U.S. destroyers had completed their mission as screens for the battleships, absorbing the initial impact of contact with the enemy, although at great cost. Lee ordered the retirement of Benham and Gwin at 23:48.
Washington passed through the area still occupied by the damaged and sinking U.S. destroyers and fired on Ayanami with her secondary batteries, setting her afire. Following close behind, South Dakota suddenly suffered a series of electrical failures, reportedly during repairs when her chief engineer locked down a circuit breaker in violation of safety procedures, causing her circuits repeatedly to go into series, making her radar, radios, and most of her gun batteries inoperable. However, she continued to follow Washington towards the western side of Savo Island until 23:35, when Washington changed course left to pass to the southward behind the burning destroyers. South Dakota tried to follow but had to turn to starboard to avoid Benham, which resulted in the ship being silhouetted by the fires of the burning destroyers and made her a closer and easier target for the Japanese.
Receiving reports of the destruction of the U.S. destroyers from Ayanami and his other ships, Kondo pointed his bombardment force towards Guadalcanal, believing that the U.S. warship force had been defeated. His force and the two U.S. battleships were now heading towards each other.
Almost blind and unable to effectively fire her main and secondary armament, South Dakota was illuminated by searchlights and targeted by gunfire and torpedoes from most of the ships of the Japanese force, including Kirishima, beginning around midnight on 15 November. Although able to score a few hits on Kirishima, South Dakota took 26 hits—some of which did not explode—that completely knocked out her communications and remaining gunfire control operations, set portions of her upper decks on fire, and forced her to try to steer away from the engagement. All of the Japanese torpedoes missed. Admiral Lee later described the cumulative effect of the gunfire damage as rendering "one of our new battleships deaf, dumb, blind, and impotent." South Dakota's crew casualties were 39 killed and 59 wounded, and she turned away from the battle at 00:17 without informing Admiral Lee, though she was observed by Kondo's lookouts.
The Japanese ships continued to concentrate their fire on South Dakota and none detected Washington approaching to within 9,000 yd (8,200 m). Washington was tracking a large target (Kirishima) for some time but refrained from firing since there was a chance it could be South Dakota. Washington had not been able to track South Dakota's movements because she was in a blind spot in Washington's radar and Lee could not raise her on the radio to confirm her position. When the Japanese illuminated and fired on South Dakota, all doubts were removed as to which ships were friend or foe. From this close range, Washington opened fire and quickly hit Kirishima with at least nine (and possibly up to 20) main battery shells and at least seventeen secondary ones, disabling all of Kirishima's main gun turrets, causing major flooding, and setting her aflame.[N 1] Kirishima was hit below the waterline and suffered a jammed rudder, causing her to circle uncontrollably to port.
At 00:25, Kondo ordered all of his ships that were able to converge and destroy any remaining U.S. ships. However, the Japanese ships still did not know where Washington was located, and the other surviving U.S. ships had already departed the battle area. Washington steered a northwesterly course toward the Russell Islands to draw the Japanese force away from Guadalcanal and the presumably damaged South Dakota. The Imperial ships finally sighted Washington and launched several torpedo attacks, but by the skilled seamanship of her captain she avoided all of them and also avoided running aground in shallow waters. At length, believing that the way was clear for the transport convoy to proceed to Guadalcanal (but apparently disregarding the threat of air attack in the morning), Kondo ordered his remaining ships to break contact and retire from the area about 01:04, which most of the Japanese warships complied with by 01:30.
Ayanami was scuttled by Uranami at 02:00, while Kirishima capsized and sank by 03:25 on 15 November. Uranami rescued survivors from Ayanami, and destroyers Asagumo, Teruzuki, and Samidare rescued the remaining crew from Kirishima. In the engagement, 242 U.S. and 249 Japanese sailors died. The engagement was one of only two battleship-against-battleship surface battles in the entire Pacific campaign of World War II, the other being at Surigao Strait during the Battle of Leyte Gulf.
The four Japanese transports beached themselves at Tassafaronga on Guadalcanal by 04:00 on 15 November, and Tanaka and the escort destroyers departed and raced back up the Slot toward safer waters. The transports were attacked, beginning at 05:55, by U.S. aircraft from Henderson Field and elsewhere, and by field artillery from U.S. ground forces on Guadalcanal. Later, destroyer Meade approached and opened fire on the beached transports and surrounding area. These attacks set the transports afire and destroyed any equipment on them that the Japanese had not yet managed to unload. Only 2,000 to 3,000 of the embarked troops made it to Guadalcanal, and most of their ammunition and food were lost.
Yamamoto's reaction to Kondo's failure to accomplish his mission of neutralizing Henderson Field and ensuring the safe landing of troops and supplies was milder than his earlier reaction to Abe's withdrawal, perhaps because of Imperial Navy culture and politics. Kondo, who also held the position of second in command of the Combined Fleet, was a member of the upper staff and battleship "clique" of the Imperial Navy while Abe was a career destroyer specialist. Admiral Kondo was not reprimanded or reassigned but instead was left in command of one of the large ship fleets based at Truk.
This battle was the only time in history that an American battleship engaged another battleship in direct combat and sank it. The only other battleship-versus-battleship engagement of the Pacific theater, at Surigao Strait, did not immediately destroy the opposing battleships by gunfire, while in the European theater the Scharnhorst was sunk mainly by torpedoes from destroyers.
The failure to deliver to Guadalcanal most of the troops and especially supplies in the convoy prevented the Japanese from launching another offensive to retake Henderson Field. Thereafter, the Imperial Navy was only able to deliver subsistence supplies and a few replacement troops to Japanese Army forces on Guadalcanal. Because of the continuing threat from Allied aircraft based at Henderson Field, plus nearby U.S. aircraft carriers, the Japanese had to continue to rely on Tokyo Express warship deliveries to their forces on Guadalcanal. However, these supplies and replacements were not enough to sustain Japanese troops on the island, who – by 7 December 1942 – were losing about 50 men each day from malnutrition, disease, and Allied ground and air attacks. On 12 December, the Japanese Navy proposed that Guadalcanal be abandoned. Despite opposition from Japanese Army leaders, who still hoped that Guadalcanal could be retaken from the Allies, Japan's Imperial General Headquarters—with approval from the Emperor—agreed on 31 December to the evacuation of all Japanese forces from the island and establishment of a new line of defense for the Solomons on New Georgia.
Thus, the Naval Battle of Guadalcanal was the last major attempt by the Japanese to seize control of the seas around Guadalcanal or to retake the island. In contrast, the U.S. Navy was thereafter able to resupply the U.S. forces at Guadalcanal at will, including the delivery of two fresh divisions by late December 1942. The inability to neutralize Henderson Field doomed the Japanese effort to successfully combat the Allied conquest of Guadalcanal. The last Japanese resistance in the Guadalcanal campaign ended on 9 February 1943, with the successful evacuation of most of the surviving Japanese troops from the island by the Japanese Navy in Operation Ke. Building on their success at Guadalcanal and elsewhere, the Allies continued their campaign against Japan, which culminated in Japan's defeat and the end of World War II. U.S. President Franklin Roosevelt, upon learning of the results of the battle, commented, "It would seem that the turning point in this war has at last been reached."
Historian Eric Hammel sums up the significance of the Naval Battle of Guadalcanal this way:
On November 12, 1942, the (Japanese) Imperial Navy had the better ships and the better tactics. After November 15, 1942, its leaders lost heart and it lacked the strategic depth to face the burgeoning U.S. Navy and its vastly improving weapons and tactics. The Japanese never got better while, after November 1942, the U.S. Navy never stopped getting better.
General Alexander Vandegrift, the commander of the troops on Guadalcanal, paid tribute to the sailors who fought the battle:
We believe the enemy has undoubtedly suffered a crushing defeat. We thank Admiral Kinkaid for his intervention yesterday. We thank Lee for his sturdy effort last night. Our own aircraft has been grand in its relentless hammering of the foe. All those efforts are appreciated but our greatest homage goes to Callaghan, Scott and their men who with magnificent courage against seemingly hopeless odds drove back the first hostile attack and paved the way for the success to follow. To them the men of Cactus lift their battered helmets in deepest admiration.
- The number of actual hits is a matter of conjecture. USS Washington observed eight main battery hits. The U.S. Strategic Bombing Survey estimated nine major-caliber and 40 secondary battery hits based on one postwar interview with a junior officer. Kirishima's damage control officer identified twenty main battery hits and 17 five-inch hits on a schematic drawing, including several underwater hits which would have been invisible to Washington. Examination of the wreck has confirmed the location of three of these underwater hits, lending credence to his account.
- Frank, Guadalcanal, p. 490; and Lundstrom, Guadalcanal Campaign, p. 523.
- Frank, Guadalcanal, p. 490. Frank's breakdown of Japanese losses includes only 450 soldiers on the transports, "a figure no American flier would have believed", p. 462, but cites Japanese records for this number.
Miller, in Guadalcanal: The First Offensive (1948), cites "USAFISPA, Japanese Campaign in the Guadalcanal Area, 29–30, estimates that 7,700 troops had been aboard, of whom 3,000 drowned, 3,000 landed on Guadalcanal, and 1,700 were rescued." Frank's number is used here instead of Miller. Aircraft losses from Lundstrom, Guadalcanal Campaign, p. 522.
- Hough, Pearl Harbor to Guadalcanal, p. 235–236.
- Morison, Struggle for Guadalcanal, p. 14–15; Miller, Guadalcanal: The First Offensive, p. 143; Frank, Guadalcanal, p. 338; and Shaw, First Offensive, p. 18.
- Griffith, Battle for Guadalcanal, p. 96–99; Dull, Imperial Japanese Navy, p. 225; Miller, Guadalcanal: The First Offensive, pp. 137–138.
- Frank, Guadalcanal, p. 202, 210–211.
- Frank, Guadalcanal, p. 141–143, 156–158, 228–246, & 681.
- Frank, Guadalcanal, p. 315–316; Morison, Struggle for Guadalcanal, p. 171–175; Hough, Pearl Harbor to Guadalcanal, p. 327–328.
- Frank, Guadalcanal, 337–367.
- Hara, Japanese Destroyer Captain, 134–135.
- Hammel, Guadalcanal: Decision at Sea, p. 44–45.
- Morison, Struggle for Guadalcanal, p. 225–238; Hammel, Guadalcanal: Decision at Sea, p. 41–46. The 11 transport ships provided to carry the troops, equipment, and provisions included Arizona Maru, Kumagawa Maru, Sado Maru, Nagara Maru, Nako Maru, Canberra Maru, Brisbane Maru, Kinugawa Maru, Hirokawa Maru, Yamaura Maru, and Yamatsuki Maru.
- Hammel, Guadalcanal: Decision at Sea, p. 93.
- Hammel, Guadalcanal: Decision at Sea, p. 28.
- Hammel, Guadalcanal: Decision at Sea, p. 37.
- Kilpatrick, Naval Night Battles, p. 79–80; Hammel, Guadalcanal: Decision at Sea, p. 38–39; Morison, Struggle for Guadalcanal, p. 227–233, 231–233; Frank, Guadalcanal, p. 429–430. The American reinforcements totaled 5,500 men and included the 1st Marine Aviation Engineer Battalion, replacements for ground and air units, the 4th Marine Replacement Battalion, two battalions of the U.S. Army's 182nd Infantry Regiment, and ammunition and supplies. The first transport group, TF 67.1, was commanded by Captain Ingolf N. Kiland and included McCawley, Crescent City, President Adams, and President Jackson. The second transport group, part of Task Group 62.4 (TG 62.4), consisted of Betelgeuse, Libra, and Zeilin.
- Frank, Guadalcanal, p. 432; Hammel, Guadalcanal: Decision at Sea, p. 50–90; Morison, Struggle for Guadalcanal, p. 229–230.
- Morison, Struggle for Guadalcanal, p. 234; Frank, Guadalcanal, p. 428; Hammel, Guadalcanal: Decision at Sea, p. 92–93. Morison lists only 11 destroyers in Tanaka's convoy escort group, namely: Hayashio, Oyashio, Kagerō, Umikaze, Kawakaze, Suzukaze, Takanami, Makinami, Naganami, Amagiri, and Mochizuki. Tanaka states that there were 12 destroyers (Evans, Japanese Navy, p. 188).
- Morison, Struggle for Guadalcanal, p. 233–234; Hammel, Guadalcanal: Decision at Sea, p. 103–105. Rear Admiral Susumu Kimura commanded Destroyer Squadron 10, including Amatsukaze, Yukikaze, Akatsuki, Ikazuchi, Inazuma, and Teruzuki from Nagara. Rear Admiral Tamotsu Takama commanded Destroyer Squadron 4 which included Asagumo, Murasame, Samidare, Yūdachi, and Harusame.
- Frank, Guadalcanal, p. 429.
- Morison, Struggle for Guadalcanal, p. 235; Hara, Japanese Destroyer Captain, p. 137.
- Kilpatrick, Naval Night Battles, p. 83–85; Morison, Struggle for Guadalcanal, p. 236–237; Hammel, Guadalcanal: Decision at Sea, p. 92. Turner and the transport ships safely reached Espiritu Santo on 15 November.
- Hammel, Guadalcanal: Decision at Sea, p. 99–107.
- Hara, Japanese Destroyer Captain, p. 137–140; Morison, Struggle for Guadalcanal, p. 238–239.
- Kilpatrick, Naval Night Battles, p. 85; Morison, Struggle for Guadalcanal, p. 237; Hammel, Guadalcanal: Decision at Sea, p. 106–108. In Callaghan's column the distance between the destroyers and cruisers was 800 yd (730 m); between cruisers, 700 yd (640 m); between destroyers, 500 yd (460 m).
- Calendar-12.com, moon phases, 1942. http://www.calendar-12.com/moon_phases/1942. Retrieved 26 October 2015.
- Frank, Guadalcanal, p. 437–438.
- Kilpatrick, Naval Night Battles, p. 86–89; Hammel, Guadalcanal: Decision at Sea, p. 124–126; Morison, Struggle for Guadalcanal, p. 239–240.
- Frank, Guadalcanal, p. 438.
- Hara, Japanese Destroyer Captain, p. 140.
- Kilpatrick, Naval Night Battles, p. 89–90; Morison, Struggle for Guadalcanal, p. 239–242; Hammel, Guadalcanal: Decision at Sea, p. 129.
- Frank, Guadalcanal, p. 439.
- Kilpatrick, Naval Night Battles, p. 90–91; Hammel, Guadalcanal: Decision at Sea, p. 132–137; Morison, Struggle for Guadalcanal, p. 242–243.
- Frank, Guadalcanal, p. 441.
- Morison, Struggle for Guadalcanal, p. 242–243; Hammel, Guadalcanal: Decision at Sea, p. 137–183, and Frank, Guadalcanal, p. 449. Only eighteen crewmen out of a total complement of 197 (combinedfleet.com) survived the sinking of Akatsuki and were later captured by U.S. forces. One of Akatsuki's survivors, Michiharu Shinya, wrote a book called The Path From Guadalcanal which states that his ship did not fire a torpedo before sinking. Shinya's book has not been translated into English from Japanese.
- Hammel, Guadalcanal: Decision at Sea, p. 150–159.
- Kilpatrick, Naval Night Battles, p. 96–97, 103; Morison, Struggle for Guadalcanal, p. 246–247; Frank, Guadalcanal, p. 443.
- Morison, Struggle for Guadalcanal, p. 244; Hammel, Guadalcanal: Decision at Sea, p. 132–136.
- Morison, Struggle for Guadalcanal, p. 244; Hammel, Guadalcanal: Decision at Sea, p. 137–141. Jameson, The Battle of Guadalcanal, p. 22 says, "Only by speeding up did the Laffey manage to cross the enemy's bows with a few feet to spare."
- Morison, Struggle for Guadalcanal, p. 244; Hara, Japanese Destroyer Captain, p. 146.
- Hara, Japanese Destroyer Captain, p. 148.
- Hammel, Guadalcanal: Decision at Sea, p. 142–149; Morison, Struggle for Guadalcanal, p. 244–245.
- Frank, Guadalcanal, p. 444.
- Hammel, Guadalcanal: Decision at Sea, p. 160–171; Morison, Struggle for Guadalcanal, p. 247.
- Hammel, Guadalcanal: Decision at Sea, p. 234.
- Hammel, Guadalcanal: Decision at Sea, p. 246; and Hara, Japanese Destroyer Captain, p. 146.
- Hammel, Guadalcanal: Decision at Sea, p. 180–190.
- Hammel, Guadalcanal: Decision at Sea.
- Hara, Japanese Destroyer Captain, p. 146–147.
- Morison, Struggle for Guadalcanal, p. 244; Hammel, Guadalcanal: Decision at Sea, p. 191–201.
- Morison, Struggle for Guadalcanal, p. 247–248; Hammel, Guadalcanal: Decision at Sea, p. 172–178.
- Hara, Japanese Destroyer Captain, p. 144–146; Morison, Struggle for Guadalcanal, p. 249.
- Kilpatrick, Naval Night Battles, p. 94; Morison, Struggle for Guadalcanal, p. 248; Hammel, Guadalcanal: Decision at Sea, p. 204–212.
- Kilpatrick, Naval Night Battles, p. 95; Morison, Struggle for Guadalcanal, p. 249–250; Hammel, Guadalcanal: Decision at Sea, p. 213–225, 286.
- Frank, Guadalcanal, p. 449.
- Hara, Japanese Destroyer Captain, p. 149.
- Hara, Japanese Destroyer Captain, p. 147.
- Hammel, Guadalcanal: Decision at Sea, p. 246–249.
- Hammel, Guadalcanal: Decision at Sea, p. 250–256.
- Frank, Guadalcanal p. 451, quoting Leckie's Helmet for my Pillow.
- Miller, The Story of World War II p. 134–135.
- Frank, Guadalcanal, p. 451.
- Frank, Guadalcanal, p. 449–450.
- Hara, Japanese Destroyer Captain, p. 153.
- Frank, Guadalcanal, p. 452.
- Hammel, Guadalcanal: Decision at Sea, p. 270.
- Hammel, Guadalcanal: Decision at Sea, p. 272.
- Kilpatrick, Naval Night Battles, p. 98; Frank, Guadalcanal, p. 454.
- Kilpatrick, Naval Night Battles, p. 79 and 97–100; Hammel, Guadalcanal: Decision at Sea, p. 298–308.
- Hammel, Guadalcanal: Decision at Sea, p. 298–308; Morison, Struggle for Guadalcanal, p. 259–260. Enterprise and her escorting warships were designated Task Force 16 (TF 16), commanded by Rear Admiral Thomas C. Kinkaid. TF 16 consisted of Enterprise plus battleships Washington and South Dakota, cruisers Northampton and San Diego, and ten destroyers.
- Hammel, Guadalcanal: Decision at Sea, p. 274–275.
- Kurzman, Left to Die, Frank, Guadalcanal, p. 456; Morison, Struggle for Guadalcanal, p. 257; Kilpatrick, Naval Night Battles, p. 101–103.
- Hammel, Guadalcanal: Decision at Sea, p. 400.
- Morison, Struggle for Guadalcanal, p. 258.
- Hara, Japanese Destroyer Captain, p. 156.
- Hammel, Guadalcanal: Decision at Sea, p. 401; Hara, Japanese Destroyer Captain, p. 156.
- Frank, Guadalcanal, p. 459–460.
- Frank, Guadalcanal, p. 461.
- Evans, Japanese Navy, p. 190; Frank, Guadalcanal, p. 465; Hammel, Guadalcanal: Decision at Sea, p. 298–308, 312; Morison, Struggle for Guadalcanal, p. 259.
- Kilpatrick, Naval Night Battles, p. 108–109; Morison, Struggle for Guadalcanal, p. 234, 262; Hammel, Guadalcanal: Decision at Sea, p. 313, combinedfleet.com.
- Hammel, Guadalcanal: Decision at Sea, p. 316; Morison, Struggle for Guadalcanal, p. 263. One dive-bomber and 17 fighter aircraft were destroyed on Henderson Field by the bombardment.
- Kilpatrick, Naval Night Battles, p. 109; Hammel, Guadalcanal: Decision at Sea, p. 318.
- Frank, Guadalcanal, p. 465–474; Hammel, Guadalcanal: Decision at Sea, p. 298–345.
- Kilpatrick, Naval Night Battles, p. 110; Morison, Struggle for Guadalcanal, p. 264–266; Frank, Guadalcanal, p. 465, Hammel, Guadalcanal: Decision at Sea, p. 327; combinedfleet.com. An SBD Dauntless accidentally crashed into Maya, killing 37 of her crewmen and causing heavy damage. Maya was under repair in Japan until 16 January 1943. Kinugasa sank 15 nmi (17 mi; 28 km) south of Rendova Island.
- Evans, Japanese Navy, p. 191–192; Hammel, Guadalcanal: Decision at Sea, p. 345; Frank, Guadalcanal, p. 467–468; Morison, Struggle for Guadalcanal, p. 266–269; Jersey, Hell's Islands, p. 446. In the attacks on the transports the U.S. lost five dive bombers and two fighters and the Japanese lost 13 fighters. The transports sunk were Arizona, Shinanogawa, Sado, Canberra, Nako, Nagara, and Brisbane. Canberra and Nagara were sunk first, with Sado forced to turn back for the Shortlands escorted by Amagiri and Mochizuki. Next, Brisbane was sunk, followed by Shinanogawa, Arizona and Nako. The seven transports totaled 44,855 tons and carried a total of 20 anti-aircraft guns.
- "Senkan! IJN Kirishima: Tabular Record of Movement". combined fleet.com. Retrieved 27 November 2006.
- Morison, Struggle for Guadalcanal, p. 271; Frank, Guadalcanal, p. 469, and footnote to Chapter 18, p. 735. Frank states that Morison attributed both submarine contacts to Trout but was in error.
- Frank, Guadalcanal, p. 474.
- Evans, Japanese Navy, p. 193; Hammel, Guadalcanal: Decision at Sea, p. 351, 361.
- Morison, Struggle for Guadalcanal, p. 234; Hammel, Guadalcanal: Decision at Sea, p. 349–350, 415. The complete Imperial order of battle: battleship Kirishima, heavy cruisers Atago and Takao, light cruisers Nagara and Sendai, and destroyers Hatsuyuki, Asagumo, Teruzuki, Shirayuki, Inazuma, Samidare, Shikinami, Uranami, and Ayanami. Rear Admiral Shintaro Hashimoto commanded Destroyer Squadron 3, consisting of Uranami, Shikinami, and Ayanami from Sendai.
- Morison, Struggle for Guadalcanal, p. 270–272; Hammel, Guadalcanal: Decision at Sea, p. 351–352; Frank, Guadalcanal, p. 470.
- Hammel, Guadalcanal: Decision at Sea, p. 352, 363; Morison, Struggle for Guadalcanal, p. 270–272.
- Morison, Struggle for Guadalcanal, p. 234, 273–274; Frank, Guadalcanal, p. 473.
- Kilpatrick, Naval Night Battles, p. 116–117; Morison, Struggle for Guadalcanal, p. 274; Hammel, Guadalcanal: Decision at Sea, p. 362–364; Frank, Guadalcanal, p. 475.
- Frank, Guadalcanal, p. 480.
- Kilpatrick, Naval Night Battles, p. 118–121; Frank, Guadalcanal, p. 475–477; Morison, Struggle for Guadalcanal, p. 274–275; Hammel, Guadalcanal: Decision at Sea, p. 368–383.
- Frank, Guadalcanal, p. 478.
- Lippman, Second Naval Battle of Guadalcanal, Frank, Guadalcanal, p. 477–478; Hammel, Guadalcanal: Decision at Sea, p. 384–385; Morison, The Struggle for Guadalcanal, p. 275–277.
- Frank, Guadalcanal, p. 479.
- Morison, The Struggle for Guadalcanal, p. 277–279; scan of original report archived 26 March 2009 at the Wayback Machine. The "Gunfire Damage Report" made by the Bureau of Ships showed 26 damaging hits and can be found in the 6th and succeeding photos. Hammel, Guadalcanal: Decision at Sea, p. 385–389.
- Frank, Guadalcanal, p. 482.
- Lippman, Second Naval Battle of Guadalcanal, p. 9. Lee stated he felt "relief", but Capt. Davis of Washington said South Dakota "pulled out" without a word.
- Lundgren, Robert. "Kirishima Damage Analysis" (PDF). www.navweapons.com. The Naval Technical Board. pp. 5–8. Retrieved 20 September 2015.
- Kilpatrick, Naval Night Battles, p. 123–124; Morison, The Struggle for Guadalcanal, p. 278; Hammel, Guadalcanal: Decision at Sea, p. 388–389; Frank, Guadalcanal, p. 481.
- Frank, Guadalcanal, p. 483–484.
- Morison, The Struggle for Guadalcanal, p. 281; Hammel, Guadalcanal: Decision at Sea, p. 391.
- Frank, Guadalcanal, p. 484; Atago, Takao, and Nagara returned to Japan for repairs, with all three being out of action for about one month. Chōkai was repaired at Truk and returned to Rabaul on 2 December 1942. (combinedfleet.com). Gwin and South Dakota were repaired and returned to action a few months later: Gwin in April 1943, and South Dakota in February 1943.
- Frank, Guadalcanal, p. 486.
- Evans, Japanese Navy, p. 195–197; Morison, The Struggle for Guadalcanal, p. 282–284; Hammel, Guadalcanal: Decision at Sea, p. 394–395; Frank, Guadalcanal, p. 488–490; Jersey, Hell's Islands, p. 307–308. Morison and Jersey state that 2,000 Japanese soldiers landed with 260 cases of ammunition and 1,500 bags of rice. Lost were provisions for 30,000 men for 20 days, 22,000 artillery shells, thousands of cases of small-arms ammunition, and 76 large and seven small landing craft. Realizing that the transports would not have enough time to unload before daybreak, Tanaka asked permission to run them aground. Mikawa rejected his request, but Kondo accepted it, so Tanaka ordered the transport captains to run their ships aground. The American artillery that shelled the beached transports was from the 244th Coast Artillery Battalion and 3rd Defense Battalion, including two 155 mm (6.1 in) guns and several 5-inch guns.
- Hara, Japanese Destroyer Captain, p. 157.
- Hara, Japanese Destroyer Captain, p. 157, 171.
- Dull, Imperial Japanese Navy, p. 261; Frank, Guadalcanal, p. 527; Morison, The Struggle for Guadalcanal, p. 286–287.
- Frank, Guadalcanal, p. 428–92; Dull, Imperial Japanese Navy, p. 245–69; Morison, The Struggle for Guadalcanal, p. 286–287.
- Hammel, Guadalcanal: Decision at Sea, p. 402.
- The wording varies slightly from source to source: USS Cushing, Late November 1942 to February 1943: The endgame; Commendations for the Men who fought in the Naval Battle for Guadalcanal on November 13th, 1942; Communiqués.
- Dull, Paul S. (1978). A Battle History of the Imperial Japanese Navy, 1941–1945. Naval Institute Press. ISBN 0-87021-097-1.
- Evans, David C. (Editor); Raizo Tanaka (1986). "The Struggle for Guadalcanal". The Japanese Navy in World War II: In the Words of Former Japanese Naval Officers (2nd ed.). Annapolis, Maryland: Naval Institute Press. ISBN 0-87021-316-4.
- Frank, Richard B. (1990). Guadalcanal: The Definitive Account of the Landmark Battle. New York: Penguin Group. ISBN 0-14-016561-4.
- Griffith, Samuel B. (1963). The Battle for Guadalcanal. Champaign, Illinois, USA: University of Illinois Press. ISBN 0-252-06891-2.
- Hammel, Eric (1988). Guadalcanal: Decision at Sea: The Naval Battle of Guadalcanal, November 13–15, 1942. (CA): Pacifica Press. ISBN 0-517-56952-3.
- Hara, Tameichi (1961). "Part Three, The 'Tokyo Express'". Japanese Destroyer Captain. New York & Toronto: Ballantine Books. ISBN 0-345-27894-1. Firsthand account of the first engagement of the battle by the captain of the Japanese destroyer Amatsukaze.
- Jameson, Colin G. (1944). "The Battle of Guadalcanal, 11–15 November 1942". Publications Branch, Office of Naval Intelligence, United States Navy (Somewhat inaccurate on the details of actual damage done to and actions by Japanese ships). Retrieved 8 April 2006.
- Jersey, Stanley Coleman. Hell's Islands: The Untold Story of Guadalcanal. College Station, Texas: Texas A&M University Press. ISBN 1-58544-616-5.
- Kilpatrick, C. W. (1987). Naval Night Battles of the Solomons. Exposition Press. ISBN 0-682-40333-4.
- Kurzman, Dan (1994). Left to Die: The Tragedy of the USS Juneau. New York: Pocket Books. ISBN 0-671-74874-2.
- Lippman, David H. (2006). "Second Naval Battle of Guadalcanal: Turning Point in the Pacific War". The HistoryNet.com. World War II magazine. Retrieved 26 November 2006.
- Lundstrom, John B. (2005). First Team And the Guadalcanal Campaign: Naval Fighter Combat from August to November 1942 (New ed.). Naval Institute Press. ISBN 1-59114-472-8.
- Miller, Donald L.; Commager, Henry Steele (2001). The Story of World War II. New York: Simon and Schuster. ISBN 9780743227186.
- Morison, Samuel Eliot (1958). "The Naval Battle of Guadalcanal, 12–15 November 1942". The Struggle for Guadalcanal, August 1942 – February 1943, vol. 5 of History of United States Naval Operations in World War II. Boston: Little, Brown and Company. ISBN 0-316-58305-7.
- Barham, Eugene Alexander (1988). The 228 days of the United States Destroyer Laffey, DD-459. OCLC 17616581.
- Calhoun, C. Raymond (2000). Tin Can Sailor: Life Aboard the USS Sterett, 1939–1945. Naval Institute Press. ISBN 1-55750-228-5.
- Coombe, Jack D. (1991). Derailing the Tokyo Express. Harrisburg, Pennsylvania: Stackpole. ISBN 0-8117-3030-1.
- D'Albas, Andrieu (1965). Death of a Navy: Japanese Naval Action in World War II. Devin-Adair Pub. ISBN 0-8159-5302-X.
- Fuquea, David C. (18 June 2004). "Commanders and Command Decisions: The Impact on Naval Combat in the Solomon Islands, November 1942" (Academic report). Center for Naval Warfare Studies, Naval War College. Retrieved 4 August 2009.
- Generous, William Thomas, Jr. (2003). Sweet Pea at War: A History of USS Portland (CA-33). University Press of Kentucky. ISBN 0-8131-2286-4.
- Grace, James W. (1999). Naval Battle of Guadalcanal: Night Action, 13 November 1942. Annapolis, Maryland: Naval Institute Press. ISBN 1-55750-327-3.
- Hone, Thomas C. (1981). "The Similarity of Past and Present Standoff Threats". Proceedings of the U.S. Naval Institute (Vol. 107, No. 9, September 1981). Annapolis, Maryland. pp. 113–116. ISSN 0041-798X
- Hornfischer, James D. (2011). Neptune's Inferno: The U.S. Navy at Guadalcanal. New York: Bantam Books. ISBN 978-0-553-80670-0.
- Lacroix, Eric; Linton Wells (1997). Japanese Cruisers of the Pacific War. Naval Institute Press. ISBN 0-87021-311-3.
- McGee, William L. (2002). The Solomons Campaigns, 1942–1943: From Guadalcanal to Bougainville—Pacific War Turning Point, Volume 2 (Amphibious Operations in the South Pacific in WWII). BMC Publications. ISBN 0-9701678-7-3.
- Parkin, Robert Sinclair (1995). Blood on the Sea: American Destroyers Lost in World War II. Da Capo Press. ISBN 0-306-81069-7.
- Stafford, Edward P.; Paul Stillwell (Introduction) (2002). The Big E: The Story of the USS Enterprise (reissue ed.). Naval Institute Press. ISBN 1-55750-998-0.
|Wikimedia Commons has media related to Naval Battle of Guadalcanal.|
- Chen, C. Peter (2006). "Guadalcanal Campaign". World War II Database. Archived from the original on 11 December 2008. Retrieved 27 October 2008.
- Hough, Frank O.; Ludwig, Verle E., and Shaw, Henry I., Jr. "Pearl Harbor to Guadalcanal". History of U.S. Marine Corps Operations in World War II. Retrieved 27 October 2008.
- Lippman, David H. (2006). "Battle of Guadalcanal: First Naval Battle in the Ironbottom Sound". HistoryNet.com. World War II magazine. Archived from the original on 16 March 2008. Retrieved 27 October 2008.
- Miller, Jr., John (1949). "Chapter 7. Decision at Sea". Guadalcanal: The First Offensive. United States Army in World War II: The War in the Pacific. United States Army Center of Military History. CMH Pub 5-3. Retrieved 27 October 2008.
- Mohl, Michael (1996–2008). "BB-57 USS South Dakota 1942". NavSource Online Photo Archive. NavSource Naval History. Retrieved 27 October 2008.
- Tully, Anthony P. (1997). "Death of Battleship Hiei: Sunk by Gunfire or Air Attack?". Retrieved 27 October 2008. Article on the battle of Friday the 13th that gives additional details on the demise of Hiei.
2 Arithmetic
Where we've been: performance (seconds, cycles, instructions); abstractions: instruction set architecture, assembly language and machine language.
What's up ahead: implementing the architecture.
[slide figure: a 32-bit ALU with inputs a and b, an operation select, and a result output]
3 Numbers
Bits are just bits (no inherent meaning); conventions define the relationship between bits and numbers.
Binary numbers (base 2): an n-bit pattern d(n-1) ... d1 d0 represents d(n-1)*2^(n-1) + ... + d1*2 + d0 in decimal.
Of course it gets more complicated: numbers are finite (overflow); fractions and real numbers; negative numbers (e.g., there is no MIPS subi instruction; addi can add a negative immediate instead).
How do we represent negative numbers? That is, which bit patterns will represent which numbers?
4 Possible Representations
Three candidates, shown for 3-bit values:

  bits   sign magnitude   one's complement   two's complement
  000         +0                +0                  0
  001         +1                +1                 +1
  010         +2                +2                 +2
  011         +3                +3                 +3
  100         -0                -3                 -4
  101         -1                -2                 -3
  110         -2                -1                 -2
  111         -3                -0                 -1

Issues: balance, number of zeros, ease of operations. Which one is best? Why?
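The table above can be checked with a short sketch. The helper names here are invented for illustration; each function decodes a bit string under one of the three candidate representations.

```python
# Decode a bit string under the three candidate signed representations.

def sign_magnitude(bits: str) -> int:
    # leading bit is the sign; remaining bits are the magnitude
    mag = int(bits[1:], 2)
    return -mag if bits[0] == '1' else mag

def ones_complement(bits: str) -> int:
    if bits[0] == '0':
        return int(bits, 2)
    # negative: invert all bits to get the magnitude
    inverted = ''.join('1' if b == '0' else '0' for b in bits)
    return -int(inverted, 2)

def twos_complement(bits: str) -> int:
    # the leading bit carries weight -2^(n-1)
    val = int(bits, 2)
    return val - (1 << len(bits)) if bits[0] == '1' else val

for i in range(8):
    b = format(i, '03b')
    print(b, sign_magnitude(b), ones_complement(b), twos_complement(b))
```

Running the loop reproduces the table, including the two zeros of sign magnitude and one's complement and the extra negative value (-4) of two's complement.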
5 MIPS
32-bit signed numbers (two's complement):
0000 0000 ... 0000 0000two = 0ten
0000 0000 ... 0000 0001two = +1ten
0000 0000 ... 0000 0010two = +2ten
...
0111 1111 ... 1111 1110two = +2,147,483,646ten
0111 1111 ... 1111 1111two = +2,147,483,647ten (maxint)
1000 0000 ... 0000 0000two = -2,147,483,648ten (minint)
1000 0000 ... 0000 0001two = -2,147,483,647ten
1000 0000 ... 0000 0010two = -2,147,483,646ten
...
1111 1111 ... 1111 1101two = -3ten
1111 1111 ... 1111 1110two = -2ten
1111 1111 ... 1111 1111two = -1ten
6 Two's Complement Operations
Negating a two's complement number: invert all bits and add 1. Remember: "negate" and "invert" are quite different!
Converting n-bit numbers into numbers with more than n bits: the MIPS 16-bit immediate gets converted to 32 bits for arithmetic by copying the most significant bit (the sign bit) into the new high-order bits:
0010 -> 0000 0010
1010 -> 1111 1010
This is called "sign extension" (compare lbu, which zero-extends, with lb, which sign-extends).
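Both rules can be sketched on fixed-width bit strings. The helper names and the 4-bit/8-bit widths are chosen for readability, not taken from MIPS.

```python
# "Invert and add 1" negation, and sign extension, on bit strings.

def negate(bits: str) -> str:
    width = len(bits)
    inverted = ''.join('1' if b == '0' else '0' for b in bits)
    # add 1 and wrap modulo 2^width, as n-bit hardware would
    return format((int(inverted, 2) + 1) % (1 << width), f'0{width}b')

def sign_extend(bits: str, new_width: int) -> str:
    # replicate the sign bit into the new high-order positions
    return bits[0] * (new_width - len(bits)) + bits

print(negate('0010'))          # +2 becomes 1110 (-2)
print(sign_extend('1110', 8))  # 1110 -> 11111110, still -2
```

Negating twice returns the original pattern, which is a quick sanity check that the rule is its own inverse.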
7 New instructions
"Unsigned" instructions (example application: address arithmetic):
sltu $t1, $t2, $t3   # the comparison treats the difference as unsigned
slti and sltiu       # immediate operand, signed and unsigned
Example (p. 215): suppose $s0 = FF FF FF FF and $s1 = 00 00 00 01.
slt  $t0, $s0, $s1   # since $s0 < 0 and $s1 > 0, $s0 < $s1, so $t0 = 1
sltu $t0, $s0, $s1   # treated as unsigned, $s0 > $s1, so $t0 = 0
8 Care with 16-bit extension
beq $s0, $s1, nnn   # branch to PC + nnn if the test succeeds
nnn has 16 bits while the PC has 32 bits, so nnn must be extended from 16 to 32 bits before the address arithmetic:
if nnn > 0, pad with zeros on the left;
if nnn < 0, CAREFUL: pad with 1s on the left.
For this reason the operation is called SIGN EXTENSION.
9 Addition & Subtraction
Just like in grade school (carry/borrow 1s), e.g. 0101 + 0001 = 0110.
Two's complement makes the operations easy: subtraction becomes addition of a negative number, e.g. 0101 - 0110 = 0101 + 1010 = 1111 (-1).
Overflow (result too large for the finite computer word): adding two n-bit numbers does not always yield an n-bit number. Note that the term "overflow" is somewhat misleading: it does not mean that a carry "overflowed".
10 Detecting Overflow
No overflow can occur when adding a positive and a negative number, and none when the signs are the same for subtraction.
Conversely, overflow occurs when adding two operands of the same sign produces a result of the opposite sign.
OVERFLOW CONDITION in hardware: compare the carry into the sign bit with the carry out of the sign bit; overflow occurred exactly when they differ.
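The sign-based rule can be checked against the arithmetic definition with a small sketch. The function names are invented; `n` is the word width, and the hardware-style version wraps the sum modulo 2^n the way an n-bit adder would.

```python
# Two formulations of signed-add overflow detection for n-bit words.

def add_overflows(a: int, b: int, n: int = 32) -> bool:
    # arithmetic definition: the true sum falls outside the n-bit range
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    return not (lo <= a + b <= hi)

def add_overflows_by_sign(a: int, b: int, n: int = 32) -> bool:
    # hardware-style rule: same operand signs, different result sign
    mask = (1 << n) - 1
    s = (a + b) & mask                     # n-bit adder wraps modulo 2^n
    sign = lambda x: (x >> (n - 1)) & 1
    return sign(a & mask) == sign(b & mask) and sign(s) != sign(a & mask)

# Both formulations agree, e.g. for 8-bit 100 + 100 (true sum 200 > 127):
print(add_overflows(100, 100, 8), add_overflows_by_sign(100, 100, 8))
```

Exhaustively comparing the two over all 4-bit operand pairs confirms they are equivalent, which is why real adders only need the sign (or carry) comparison.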
11 Effects of Overflow
An exception (interrupt) occurs: control jumps to a predefined address for the exception handler, and the interrupted address is saved in the EPC (Exception Program Counter) for possible resumption.
mfc0 (move from system control): copies the EPC address into any general register.
We don't always want to detect overflow; hence the new MIPS instructions addu, addiu, subu.
Note: addiu still sign-extends its immediate! Note: sltu and sltiu are for unsigned comparisons.
13 Review: Boolean Algebra & Gates
Problem: consider a logic function with three inputs: A, B, and C.
Output D is true if at least one input is true.
Output E is true if exactly two inputs are true.
Output F is true only if all three inputs are true.
Show the truth table for these three functions.
Show the Boolean equations for these three functions.
Show an implementation consisting of inverters, AND, and OR gates.
14 An ALU (arithmetic logic unit)
Let's build an ALU to support the andi and ori instructions.
We'll just build a 1-bit ALU, and use 32 of them.
Possible implementation (sum-of-products).
[slide figure: a 1-bit ALU with inputs a and b, an operation select op, and a result output]
15 Review: The Multiplexor
Selects one of the inputs to be the output, based on a control input S.
Let's build our ALU using a MUX.
Note: we call this a 2-input mux even though it has 3 inputs (A, B, and the select S)!
16 Different Implementations
It is not easy to decide the "best" way to build something.
We don't want too many inputs to a single gate, and we don't want to have to go through too many gates; for our purposes, ease of comprehension is important.
Let's look at a 1-bit ALU for addition:
cout = a*b + a*cin + b*cin
sum = a xor b xor cin
How could we build a 1-bit ALU for add, and, and or? How could we build a 32-bit ALU?
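The two equations above answer the 32-bit question directly: replicate the 1-bit cell and feed each carry-out into the next carry-in. A sketch (function names invented, bits given LSB-first for convenience):

```python
# Direct transcription of the slide's 1-bit adder equations, rippled across n bits.

def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin                       # sum = a xor b xor cin
    cout = (a & b) | (a & cin) | (b & cin)  # cout = ab + a*cin + b*cin
    return s, cout

def ripple_add(a_bits, b_bits):
    """LSB-first lists of 0/1; returns (sum_bits, carry_out)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # 0101 + 0011 = 1000
```

Note that each `full_adder` call waits on the previous carry, which is exactly the ripple delay the next slide complains about.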
20 Tailoring the ALU to the MIPS
We need to support the set-on-less-than instruction (slt). Remember: slt is an arithmetic instruction; it produces a 1 if rs < rt and 0 otherwise. Use subtraction: (a - b) < 0 implies a < b.
We also need to support the test for equality (e.g., beq $t5, $t6, Label). Use subtraction: (a - b) = 0 implies a = b.
21 Supporting slt
Can we figure out the idea?
[slide figure: the sign bit of the subtraction Rs - Rt is routed into bit 0 of the result Rd]
23 Test for equality
Notice the control lines:
000 = and
001 = or
010 = add
110 = subtract
111 = slt
Note: the Zero output is a 1 when the result is zero!
24 ALU
[slide figure: ALU symbol with 32-bit inputs A and B, a 32-bit result, 1-bit Zero and Overflow outputs, and a 3-bit ALUop control]
25 Conclusion
We can build an ALU to support the MIPS instruction set. The key idea: use a multiplexor to select the output we want. We can efficiently perform subtraction using two's complement, and we can replicate a 1-bit ALU to produce a 32-bit ALU.
Important points about hardware: all of the gates are always working; the speed of a gate is affected by the number of inputs to the gate; the speed of a circuit is affected by the number of gates in series (on the "critical path" or the "deepest level of logic").
Our primary focus is comprehension; however, clever changes to organization can improve performance (similar to using better algorithms in software). We'll look at two examples, for addition and multiplication.
26 Problem: the ripple carry adder is slow
Is a 32-bit ALU as fast as a 1-bit ALU? Delay: inputs to sum or carry is 2 gate delays, so n stages take 2n gate delays.
Is there more than one way to do addition? Two extremes: ripple carry (2n gate delays) versus full sum-of-products (2 gate delays).
Can you see the ripple? How could you get rid of it?
c1 = b0*c0 + a0*c0 + a0*b0
c2 = b1*c1 + a1*c1 + a1*b1
c3 = b2*c2 + a2*c2 + a2*b2
c4 = b3*c3 + a3*c3 + a3*b3
Substituting each ci into the next would give a two-level sum-of-products, but the terms grow rapidly with bit position. Not feasible! Why?
27 Carry-lookahead adder
An approach in between our two extremes.
Motivation: if we didn't know the value of the carry-in, what could we do? When would we always generate a carry? gi = ai*bi. When would we propagate the carry? pi = ai + bi.
c1 = g0 + p0*c0
c2 = g1 + p1*c1 = g1 + p1*g0 + p1*p0*c0
c3 = g2 + p2*c2 = g2 + p2*g1 + p2*p1*g0 + p2*p1*p0*c0
c4 = g3 + p3*c3 = g3 + p3*g2 + p3*p2*g1 + p3*p2*p1*g0 + p3*p2*p1*p0*c0
Did we get rid of the ripple? Feasible! Why?
Delay: inputs to gi, pi is 1 gate delay; gi, pi to the carries is 2 gate delays; carries to the outputs is 2 gate delays. Total: 5 gate delays, independent of n.
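The g/p equations can be checked with a sketch. For clarity the loop below computes each carry from the previous one, but since every c(i+1) = g(i) + p(i)*c(i) can be expanded into g and p terms only (as on the slide), hardware evaluates them all in parallel. Function name and LSB-first convention are invented for illustration.

```python
# Carry-lookahead signals for an n-bit add: g_i = a_i*b_i, p_i = a_i + b_i,
# then c_{i+1} = g_i + p_i*c_i.

def cla_carries(a_bits, b_bits, c0=0):
    """LSB-first bit lists; returns the carries [c0, c1, ..., cn]."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate
    p = [a | b for a, b in zip(a_bits, b_bits)]   # propagate
    c = [c0]
    for i in range(len(a_bits)):
        c.append(g[i] | (p[i] & c[i]))            # c_{i+1} = g_i + p_i*c_i
    return c

# 0111 + 0001 (LSB first): bit 0 generates, bits 1-2 propagate, bit 3 kills.
print(cla_carries([1, 1, 1, 0], [1, 0, 0, 0]))
```

The printed carry chain shows the generate at bit 0 being propagated through bits 1 and 2 and stopped at bit 3, with no sequential ripple required in the expanded form.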
28 Use the principle to build bigger adders
We can't build a 16-bit adder this way: the gates get too big.
We could use ripple carry between 4-bit CLA adders. Better: use the CLA principle again!
"Super" propagate (see p. 243) and "super" generate (see p. 245) signals treat each 4-bit block like a single bit.
See exercises 4.44, 4.45, and 4.46 (not covered on the exam).
29 Multiplication
More complicated than addition: accomplished via shifting and addition, so it takes more time and more area.
Let's look at 3 versions based on the grade-school algorithm.
Negative numbers: convert to positive and multiply. There are better techniques, which we won't look at in detail.
32 Final Version
In MIPS: two new dedicated registers for multiplication, Hi and Lo (32 bits each).
mult $t1, $t2   # Hi, Lo <= $t1 * $t2
mfhi $t1        # $t1 <= Hi
mflo $t1        # $t1 <= Lo
33 Booth's algorithm (overview)
Idea: "speed up" multiplication when the multiplier contains a run of 1s. For example,
multiplicand * 0111 1110 = multiplicand * (1000 0000 - 0000 0010),
replacing a string of additions with one addition and one subtraction.
Scan the multiplier bits two at a time (current bit and the bit to its right):
00  do nothing
01  add (end of a run of 1s)
10  subtract (beginning of a run)
11  do nothing (middle of a run of 1s)
It also works for negative numbers. For this course: only the basic concepts.
The extended Booth algorithm scans the multiplier bits two positions per step (radix 4). Advantages: it was once thought that a shift is faster than an add, and it generates half the partial products, hence half the cycles.
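The bit-pair table above can be turned into a working sketch of radix-2 Booth multiplication. The function name and the exact-integer accumulator are illustrative assumptions; real hardware would accumulate in a double-width register.

```python
# Radix-2 Booth multiplication: scan (bit_i, bit_{i-1}) pairs of the
# n-bit two's-complement multiplier r, adding or subtracting shifted
# copies of the multiplicand m at run boundaries.

def booth_multiply(m: int, r: int, n: int = 8) -> int:
    mask = (1 << n) - 1
    r_u = r & mask            # view the multiplier as its n-bit pattern
    prod, prev = 0, 0         # prev is the implicit bit to the right of bit 0
    for i in range(n):
        bit = (r_u >> i) & 1
        if (bit, prev) == (1, 0):    # 10: beginning of a run of 1s, subtract
            prod -= m << i
        elif (bit, prev) == (0, 1):  # 01: end of a run of 1s, add
            prod += m << i
        # 00 and 11: middle of a run or of a gap, do nothing
        prev = bit
    return prod

print(booth_multiply(6, 7))   # 6 * 7 = one subtract and one add, not three adds
```

For r = 7 = 0000 0111 the scan performs only `prod -= m` (at bit 0) and `prod += m << 3` (at bit 3), illustrating the run-of-1s shortcut; negative multipliers fall out correctly because the final run of 1s is never "closed" by an add.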
36 Division
29 / 3: dividend = divisor * quotient + remainder, here 29 = 3 * 9 + 2.
In binary: 29ten = 11101two, divisor = 11two, quotient Q = 1001two, remainder R = 10two.
How do we implement this in hardware?
37 Alternative 1: restoring division
The hardware does not know in advance whether the divisor "will fit". A register holds the partial remainder; after each trial subtraction the sign of the partial remainder is checked; if it is negative, the divisor is added back (restoration).
For 29 / 3: q4q3q2q1q0 = 01001two = 9, R = 10two = 2.
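The restore-on-negative loop can be sketched directly for unsigned operands. The function name and the bit-at-a-time formulation are illustrative; hardware would shift a combined remainder/quotient register instead of indexing bits.

```python
# Restoring division: shift the partial remainder left, bring down the next
# dividend bit, try subtracting the divisor, and restore if it went negative.

def restoring_divide(dividend: int, divisor: int, n: int = 5):
    q, r = 0, 0
    for i in range(n - 1, -1, -1):
        r = (r << 1) | ((dividend >> i) & 1)  # bring down the next bit
        r -= divisor                          # trial subtraction
        if r < 0:
            r += divisor                      # didn't fit: restore
            q = (q << 1) | 0                  # quotient bit 0
        else:
            q = (q << 1) | 1                  # quotient bit 1
    return q, r

print(restoring_divide(29, 3))  # the slide's example: quotient 9, remainder 2
```

Tracing 29 / 3 shows the restoration firing whenever the shifted remainder is still smaller than the divisor, exactly the "doesn't fit" case the slide describes.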
42 Instructions
In MIPS: two dedicated registers for multiplication, Hi and Lo (32 bits each).
mult $t1, $t2   # Hi, Lo <= $t1 * $t2
mfhi $t1        # $t1 <= Hi
mflo $t1        # $t1 <= Lo
For division:
div  $s2, $s3   # Lo <= $s2 / $s3; Hi <= $s2 mod $s3
divu $s2, $s3   # the same, for unsigned operands
43 Floating Point
Goals: represent non-integer numbers and increase the range of representation (much larger and much smaller magnitudes).
Standardized format: 1.xxxxxxxxx * 2^yyy (in the general case, base B^yyy).
In MIPS (single precision): a sign bit S, an 8-bit exponent, and a 23-bit mantissa (significand). The exponent field sets the range; the mantissa sets the precision.
Sign-magnitude interpretation: value = (-1)^S * F * 2^E.
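The 1/8/23 field layout can be pulled apart with a short sketch. The function name is invented, and the exponent bias of 127 is an IEEE 754 detail not shown on the slide; only the normalized case is handled here (no zeros, denormals, infinities, or NaNs).

```python
# Decode a normalized IEEE 754 single into its S / exponent / fraction fields
# and recombine them as (-1)^S * (1 + F/2^23) * 2^(E - 127).
import struct

def decode_single(x: float):
    (bits,) = struct.unpack('>I', struct.pack('>f', x))  # raw 32-bit pattern
    s = bits >> 31              # 1 sign bit
    e = (bits >> 23) & 0xFF     # 8 exponent bits (biased by 127)
    f = bits & 0x7FFFFF         # 23 fraction bits (hidden leading 1)
    value = (-1) ** s * (1 + f / 2**23) * 2 ** (e - 127)
    return s, e, f, value

print(decode_single(-6.5))  # -6.5 = -1.625 * 2^2, so the exponent field is 129
```

Recomputing the value from the fields and getting the original number back confirms that the sign, exponent, and significand really are independent sub-words of the 32-bit pattern.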
50 The MIPS floating-point instruction set
See the figure on p. 291.
51 Floating Point Complexities
Operations are somewhat more complicated (see text). In addition to overflow we can have "underflow". Accuracy can be a big problem: IEEE 754 keeps two extra bits, guard and round, and defines four rounding modes.
A positive number divided by zero yields "infinity"; zero divided by zero yields "not a number". There are other complexities as well.
Implementing the standard can be tricky, and not using the standard can be even worse; see the text for a description of the 80x86 and the Pentium bug!
52 Chapter Four Summary
Computer arithmetic is constrained by limited precision. Bit patterns have no inherent meaning, but standards do exist: two's complement and IEEE 754 floating point. Computer instructions determine the "meaning" of the bit patterns.
Performance and accuracy are important, so there are many complexities in real machines (i.e., in algorithms and implementation).
We are ready to move on (and implement the processor). You may want to look back (Section 4.12 is great reading!).
Factors of Production: Per capita expansion was what had to be considered. Some historians believed that the causes of economic growth were the various "factors of production" - the "inputs" of labor, natural resources, skills, capital, and technology. Some thought the most dynamic elements in economic expansion were the increasing "demand" for goods and services that comes with population growth, changing consumption patterns, and government tax and spending policies. Some were impressed by the cultural elements that encourage societies to alter their investment and consumption patterns and their attitudes toward profit, wealth, and private property. In reality, to understand how the U.S. increased its output enough to create relative abundance for its people, we must look at each of these, for all seem to have contributed to the outcome: resources, technology, growing markets, capital, banks and banking, growth of the banking system, government actions, labor, and public schools and economic growth.

Resources: From the outset the U.S. was richly endowed by nature. The nation in 1815 stretched over one billion acres. Wood was a major source of fuel, used by steamboats, locomotives, and factory steam engines, and by most householders to cook their food and heat their homes. America was also rich in minerals. The ores of the Appalachian region from central Vermont to the Carolinas formed an "iron belt" that as early as 1800 was dotted with forges, smelters, and mines. Copper and lead deposits were found in Michigan and Missouri. Pennsylvania and Ohio had excellent coal and - though as yet unused - the country possessed vast petroleum reserves. In water power, too, the U.S. was blessed.
The Appalachian chain was the source of many rivers that emptied into the Atlantic. Along the fall line, where the Piedmont plateau drops abruptly to the Atlantic coastal plain, scores of swift cascading streams offered a vast reserve of water power to turn mill wheels.

In 1815 relatively little of the country's land was being farmed. In the West white farmers were found only in the regions adjacent to the Ohio Valley and a few other pockets. Almost all of the Mississippi valley was forest, except some tracts in present-day Indiana, Illinois, and Iowa covered with tall prairie grass. Even in the Atlantic coast states forests and unused farm woodlots covered the landscape, especially in northern New England, New York, western Pennsylvania, and the mountain regions of the southern states.

Nor were the nation's power resources much used. Aside from some gristmills for grinding wheat and corn into flour and meal, and sawmills for slicing logs into boards, the water power of the fall line went largely to waste. Also neglected was the coal of eastern Pennsylvania; as long as wood was cheap, people had no incentive to exploit the unfamiliar black stone for fuel. As for petroleum, though people knew "rock oil" would burn, they did not know how to guarantee a steady supply, so it remained a curiosity sold by quacks and hucksters as medicine.

In the forty-five years following the War of 1812, the accessible and usable resources within the country's 1830 boundaries were greatly expanded. Growing population and easier access to consumers induced farmers to expand their cultivated acreage. Increasingly, similar incentives moved businessmen to build water-powered mills and exploit coal deposits. In 1859 Edwin L. Drake, backed by New Haven capitalists, found that by drilling into the ground, an abundant supply of petroleum could be assured.
Drake's well at Titusville, Pennsylvania, set off a "rush" to the oil regions that resembled the earlier gold rush to California. But besides learning to exploit its existing resources, the country added to these resources by enlarging its boundaries. Between the Louisiana Purchase in 1803 and the Gadsden Purchase fifty years later, the U.S. grew by 840 million acres. A large part of the new territory was arid, but it also included great tracts of fertile land in the Central Valley of California, in east Texas, and in Gulf Coast Florida; vast deposits of copper, silver, gold, lead, and zinc in the Rocky Mountain area; and unique timber resources along the coasts of California and Oregon.

Labor: The U.S. was a sparsely populated country in 1815. With 8.5 million people spread over 1.7 million square miles, it had under 5 inhabitants for every square mile of land, compared with over 90 per square mile today. In 1820 the population reached 9.6 million, including some 1.8 million blacks. Though legal importation of slaves had ceased in 1808 (when Congress implemented the constitutional provision allowing it to end the Atlantic slave trade), African Americans remained about 19 percent of the country's population. With so few people spread over so much land, the United States suffered from a chronic labor shortage. The shortage was alleviated somewhat by the youth of the population. In 1817 the median age was 17. In an era when people began work for a living at 13 or 14, such a young population was a distinct economic asset. Offsetting this demographic advantage, however, were the problems of disease and ill health.
There were major cholera epidemics in 1832 and again in 1849-1850 that killed thousands and disrupted economic life. In low-lying swampy areas many people suffered each summer from "fevers" or "agues," probably mosquito-borne malaria. In addition, typhus, typhoid, whooping cough, and tuberculosis killed or disabled vast numbers of working people every year.

After 1815 the potential labor force was further reduced by individual efforts to limit family size. The American birth rate dropped sharply, so that by 1850 it was below that of many countries in Europe. As in the past, the Old World helped to offset the New World's labor shortage. Between independence and 1808 the South's labor force was augmented by a large number of slave imports. Then, on January 1, 1808, the African slave trade became illegal. Some smuggling of captive Africans continued, but the number of slaves who arrived in the U.S. from abroad was drastically cut. Europe added to America's population, however, with each passing year. In the period 1776-1815 no more than 10,000 Europeans had entered the U.S.
annually. In the next 25 years the number rose to over 30,000 each year. Then, in the 1840s and 1850s, economic dislocations in Germany and Scandinavia and the potato blight in Ireland made life hard, in some cases intolerable, for hundreds of thousands of European peasants. During the 1840s and 1850s a staggering average of 200,000 Europeans arrived each year at American Atlantic and Gulf Coast ports. Many of these immigrants were in their most productive early adult years. Europe had nurtured them through their dependent childhood period, and they added their brawn and their skill to the American labor pool at scarcely any cost to their adopted nation. Almost all of these additions accrued to the North. European newcomers perceived the South as an alien place where slaves competed with free labor and the chances of economic success were limited. They avoided Dixie. All told, by 1860, the nation's labor force, as a result of both natural increase and transatlantic immigration, had grown to over 11 million people.

Public Schools and Economic Growth: Modern economic development has depended as much on the improvement of labor force quality - the enhancement of "human capital," as economists call it - as on the sheer growth of workers' numbers. In America the upgrading of labor force skills, literacy, and discipline was the result of the system of public education. Education standards had been relatively high in colonial America, especially in New England, but they had declined during the half-century following the Revolution. In 1835 Professor Francis Bowen of Harvard complained that New England's once-celebrated school system "had degenerated into routine ...
[and] was starved by parsimony." But even as Bowen - and the "scholars" - wrote, labor leaders, philanthropists, businessmen, and concerned citizens were struggling to improve the country's educational system. The most effective worker for better schools after 1835 was Horace Mann, a lawyer who gave up a successful legal practice to become secretary of the Massachusetts Board of Education in 1837. Mann believed that an educated body of citizens was essential for a healthy democratic society. His colleagues also believed, as one of his successors on the Board of Education noted, that "the prosperity of the mills and shops is based quite as much upon the intellectual vigor as the physical power of the laborers." During Mann's twelve years as secretary of the board, Massachusetts doubled teachers' salaries, built and repaired scores of school buildings, opened fifty public high schools, and established a minimum school year of six months. Other states, especially in the North, soon followed the lead of the Bay State.

The new school systems taught useful values as well as useful skills. Children learned punctuality, good hygiene, industriousness, sobriety, and honesty - all valuable qualities for an emerging industrial society. Many gaps remained in the country's educational system even after the advent of the state-supported primary school. Secondary education, except in Massachusetts, remained the privilege of the rich, who could afford the tuition of private "academies" for their children. One of the most serious deficiencies was in the education of girls and young women.
At the elementary level young girls were treated the same as boys. Beyond the first few grades, however, female education was often inferior. American women could not attend college until Oberlin admitted its first female student in 1833. The typical secondary school or academy for young women in 1815 was a "finishing school" where the daughters of businessmen, professionals, and wealthy farmers or planters were taught French, music, drawing, dancing, and a little "polite" literature.

Then, in the period 1820-1840, educational reformers, both men and women, began to conceive of a new sort of secondary schooling for women. These reformers attacked the idea that women should be mere ornaments or drudges. In a bustling, progressive society, they said, women had a vital role to play as mothers and teachers, educating the leaders of the nation in all areas of life. This "cult of domesticity" did not assert women's equality with men. But it did insist that in their own "spheres" women were an immense neglected resource and that this waste must not continue. The new idea that women's role was important transformed female education, especially in the Northeast. Under the leadership of Emma Willard, Mary Lyon, Joseph Emerson, and Catharine Beecher, female "seminaries" were established throughout the region. Schools such as Willard's Troy Female Seminary (1821) and Lyon's Mount Holyoke Female Seminary (1836), unlike the earlier finishing schools, taught algebra, geometry, history, geography, and several of the sciences. These more "muscular" subjects were now thought appropriate for the mothers-to-be of statesmen, soldiers, and captains of industry. The most important role of these schools, however, was to provide a flood of trained women to fill the ranks of the burgeoning teaching profession.

Though the educational system still had many failings, by 1860 the United States had a highly skilled and literate labor force.
It was ahead of every nation in the world except Denmark in the ratio of students to total population, and New England was even ahead of the advanced Danes. Literacy made it possible for workers to read plans and compose written reports, and gave them access to new ideas and new ways of doing things. It is no accident that the ingenious Yankee tinkerer became a legendary figure, or that New England, with the best educational system in the country, became a beehive of shops, mills, and factories, producing cloth, clocks, shoes, hardware, and machinery for the rest of the nation.

Technology: During the years between independence and the Civil War, the U.S. became a world leader in useful invention. In the 1780s Oliver Evans invented a new flour mill that introduced grain at the top and automatically cleaned, ground, cooled, sifted, and barreled it as it descended to the bottom of the structure. In the 1790s Eli Whitney perfected the "gin," a machine that cleaned the sticky seeds from the cotton boll and revolutionized cotton growing in the U.S. In 1787 John Fitch first hitched steam power to navigation, creating the first steamboat. Twenty years later, Robert Fulton's paddle-wheeled steamboat, the Clermont, made the trip from New York to Albany in a record-breaking 32 hours. In the 1840s a NYU professor, Samuel F.B.
Morse, developed a practical telegraph system to transmit information instantaneously over long distances. While the men who advanced technology in these years were well educated in the arts and humanities, few had formal technical education. Most of the country's first civil engineers, for example, learned their trade by working on the early turnpikes and canals. Gradually, however, more formal means to train technicians and scientists were developed. West Point (founded in 1802), Norwich University (1820), Rensselaer Polytechnic Institute (1825), and the Lawrence Scientific School at Harvard (1847) eventually established engineering schools to train men to build the canals, bridges, and railroads that would knit the country together.

Growing Markets: We can treat the expanding and ever-more-skillful population of the country as an addition to the supply side of the economic growth equation. It was also a factor on the demand side, however. As population increased, so did the market for everything from babies' cribs to old folks' canes. Americans were already well supplied with food, clothing, and shelter, and each addition to family income provided additional money for modest luxuries. Before the Civil War the finer industrial goods were commonly obtained from Britain or France, but with each passing year American industry expanded to meet the growing home market for jewelry, furniture, carriages, carpets, writing paper, clocks, fine cloth, and a thousand other sophisticated manufactured articles.

Capital: The growing labor force of the United States was matched by a growing supply of physical capital. Capital, as economists use the term, is not money as such, but money transformed into machines, barns, factories, railroads, mines - that is, money invested in "tools" that produce other commodities. It comes ultimately from the savings of the society, what it sets aside out of its total income. When employed productively, capital becomes the basis for increasing the output of goods
and services and the rate of economic growth. During the colonial period, most capital came from abroad in the form of implements, credits, and cash brought by immigrants or lent to Americans by European promoters and merchants. After independence the United States continued to rely on foreign sources of capital. The increasing flood of immigrants brought some capital to America, but loans extended by British, French, Dutch, and German bankers and businesspeople were a larger source of foreign capital. The total amount of the nation's outstanding foreign loans went from under $100 million in 1815 to $400 million by the eve of the Civil War.

Foreign trade was yet another source of capital. The U.S. in this period exported vast quantities of raw materials and farm products to foreign nations. Cotton from the South alone represented almost half the value of the country's total annual exports in the mid-1850s. Profits from these sales enabled the country to buy not only European consumer goods, but also machinery, iron rails, locomotives, and other tools. At the same time the American merchant marine earned income for the U.S. Europeans often preferred to hire America's swift clipper ships to send their exports to Australia, South America, and the Far East. Foreign trade created fortunes for American merchants, particularly in the middle states and New England, much of which was reinvested in domestic industry. And finally, as a source of capital, after 1849 there was gold from California.
The millions of dollars of precious metals extracted from the streams and hills of the gold rush country also helped pay for the capital goods imported from the advanced industrial nations of Europe.

Banks and Banking: The country's commercial banking system also contributed to the growth of private capital in these years. Banks did so by creating money or credit and lending it to business borrowers. Commercial banks keep only a small reserve of money against the debts they owe to their depositors and the loans they make to borrowers. On a small amount of paid-in capital or deposited savings, they can lend a large amount to investors. In effect, commercial banks are money machines that transmit the cash or credit they create to businesspeople who need it and can put it to productive use.

This system depended on prudence to work successfully. If bankers lent to unreliable borrowers or made loans far beyond what a cautious reserve policy required, they jeopardized their firms and often the economy as a whole. Depositors, or other creditors, fearing for the safety of their savings, might demand immediate repayment. If enough of a bank's creditors simultaneously asked for their money back, the bank might be forced to "suspend payments" and close its doors. That in turn could touch off a broad "panic," with everyone demanding cash and insisting their creditors pay their debts. Serious national panics occurred in 1819, 1837, and 1857, and each ushered in a long economic downturn. For a time businesspeople would not invest and consumers would not buy. Economic activity slowed, and workers lost their jobs.

In the pre-Civil War period, banks also provided the paper money that people used in their daily buying and selling. The U.S.
Treasury issued gold, silver, and copper coins, but this was not enough to do the people's business. Instead, in all but minor transactions, the "bank note," issued by some banking corporation, served the public as money. By law these notes were usually backed by a reserve of gold to redeem each note when presented, but the requirement was often laxly enforced. The Second Bank of the United States, chartered in 1816, had little trouble keeping its circulation of paper notes "as good as gold." Many of the state-chartered banks, however, issued excessive amounts to maximize their profits. When a bank could not redeem its notes - as when it could not pay its depositors - it was forced to suspend operations. Those who held the bank's notes now found themselves with worthless paper, much as depositors in defaulted banks found themselves with worthless bank accounts.

Growth of the Banking System: Despite these failings, the country's banking system proved adequate to the job of increasing the nation's pool of capital. The first modern American commercial bank was the Bank of North America, chartered by Congress in 1781 and located in Philadelphia. In 1784 the legislatures of New York and Massachusetts chartered two additional banks. Congress, acting on Hamilton's financial program, chartered the first Bank of the U.S.
(or BUS). Like any other commercial bank, the BUS lent money, but before its demise in 1811, it took on some of the functions of a central bank. That is, it sought to control and stabilize the entire economy by providing extra funds to state bank lenders when credit was scarce and by limiting their loans when credit was excessive. The Second Bank of the United States, chartered in 1816, was even larger than the First, with $35 million in capital compared to the $10 million of its predecessor. It, too, sought to provide a balance wheel for the economy. At times, however, it blundered badly. Under its first president, it initially followed an easy-credit policy, lending freely to businessmen and speculators. This practice helped fuel a western land boom after 1815. Then, when it tightened credit in 1819, the Bank triggered a major panic and depression. Meanwhile, a large state banking system was growing up alongside the BUS. In 1820 there were 300 state banks; in 1860, almost 1,600. At first most state banks were established by charters granted individually by state legislatures. By the 1840s, however, banks could secure charters by applying to designated state officials and meeting general legal requirements (free banking). In some states, especially in the Northeast, these requirements were strict. In the new parts of the country they were often slack. There the need for capital to clear land, build barns, construct railroads, and lay out towns was most acute, and interest rates - the price of money - were therefore high. Under the circumstances, it is not surprising that many western states' banking laws were lax and enforcement even more lenient. This led to large issues of "wildcats," paper money backed by hope and faith rather than "specie" (gold and silver). The practices of western banks encouraged a boom-and-bust pattern, but their free-and-easy policies undoubtedly facilitated rapid capital growth in the emerging parts of the country. All told, economic historians conclude, the
banking system of this period, for all its faults, worked well for an enterprising people. Government Actions: Americans disagreed about the role of the government in the American economy. Jeffersonians continued to fear federal and state intrusion into private affairs as a danger to political freedom. Citizens influenced by the laissez-faire ideas of Adam Smith believed that government intervention would only hamper economic progress. And private capital was in fact the predominant source of economic growth during the pre-Civil War period, but we must not ignore the role of government in the country's pre-Civil War expansion. Through laws favorable to the easy chartering of banks and corporations, the states encouraged private capitalists to pool their savings for investment purposes. The federal tariff system, proposed by Hamilton and implemented by the Republicans in 1816, by making imports more expensive, protected American manufacturers against foreign competition and so encouraged capitalists to risk their money in factories and mills. The legal system, buttressed by lawyers, contributed to the growth surge that marked these years. Never far removed from the commercial realm, lawyers came to identify ever more closely with the entrepreneurial spirit. Increasingly, judges and lawyer-dominated legislatures proved more attentive to the right to earn a profit than to individual rights under the common law. Governments also contributed to capital formation more directly. Many investments, such as canals, required so much capital and posed so many risks that private investors hesitated to undertake them. Yet they promised to confer economic benefits on many people or whole regions. Profit on a railroad through a wilderness area, for example, might take years to realize, though the road might open an underdeveloped region for settlers and eventually benefit the whole nation. To encourage growth in these instances, state and local governments in the years before 1860 joined with
private promoters to build roads, canals, and railroads. Sometimes the states lent money to private capitalists; in the case of the canals, they often financed projects directly. NY State put up the $7 million for the Erie Canal after efforts to secure federal funds failed. Federal revenues built the National Road, begun in 1811 and completed in 1850, from Cumberland, Maryland, to Vandalia, Illinois, a distance of 700 miles. The federal government also financed the St. Mary's Falls ship canal linking Lake Huron and Lake Superior, built coastal lighthouses, dredged rivers and harbors, and, in the 1850s, contributed millions of acres of land to promoters of the Illinois Central Railroad connecting the Great Lakes with the Gulf of Mexico. All told, the government contribution to pre-Civil War investment was enormous. One scholar has estimated that by 1860 states, counties, and municipalities had spent about $400 million toward building the country's transportation network alone. And the federal government spent at least as much. If we add to this sum the millions expended by governments on schools, hospitals, and other vital public facilities, and the value of the tariff and land grants, we can see that we must qualify strongly the myth of private enterprise as the sole engine of economic growth in pre-Civil War America. The Course of American Economic Growth: America, then, was endowed with stupendous natural resources; a skilled, acquisitive, and disciplined population; and values and institutions conducive to hard work, saving, and capital growth. How did these elements combine to produce an economic miracle? The Birth of King Cotton: Most people associate nineteenth-century economic growth with factories, forges, and mines.
But agricultural progress was a vital part of the process. The outstanding advance in American agriculture before the Civil War was the opening of the "cotton kingdom." Toward the end of the eighteenth century several ingenious Englishmen developed machines to spin cotton yarn and weave it into fabric. By the 1790s the mills of Lancashire in northwest England were producing cheap cotton cloth for an ever-expanding world market. But where was the raw cotton to come from for the hungry mills? A small amount of cotton was grown on the Sea Islands off the South Carolina and Georgia coasts. Sea Island cotton had smooth fibers; its seeds could easily be removed by hand. But the region where it flourished was limited. Short-staple cotton would grow throughout the South's vast upland interior, but it had burr-like green seeds that required much hand labor to remove. It was not economical to grow, even in the slave South. Cotton cultivation remained confined to the narrow band of Carolina-Georgia coast. Yet the South badly needed a new cash crop. Tobacco, rice, and indigo had all suffered declining markets after independence. What could be done to make short-fiber cotton a practical replacement for the slumping older staples? The answer was provided by the Yankee Eli Whitney. In 1793, while visiting the Georgia plantation of Mrs.
Nathanael Greene, widow of the Revolutionary War general, Whitney learned about the problem confronting southern planters. As a gesture of gratitude to his gracious hostess, he put together a simple machine that would efficiently remove the sticky seeds from the upland cotton boll. Now a single laborer, using Whitney's new "gin" (from "engine"), could do the work of fifty hand cleaners. The gin, and cotton culture, quickly spread throughout the lower South. Thousands of planters, white farmers, and slaves migrated into western Georgia, Alabama, Mississippi, Louisiana, Arkansas, and east Texas to clear fields and plant cotton. From about 2 million pounds in 1793, short-fiber cotton output shot up to 80 million pounds by 1811. In 1859 the U.S. produced 5 million bales of 400 pounds each and had become the world's leading supplier of raw cotton. On the eve of the Civil War cotton was "king," and its realm spanned the region from North Carolina on the Atlantic coast, 1,300 miles westward to central Texas, and from the Gulf of Mexico to Tennessee. The North and West: If cotton was king in the South, wheat was king in the agricultural North. Grown since colonial times in almost every part of North America except New England and the deep South, it continued to be important in the Middle Atlantic states and the upper South after 1815. Thereafter, as canals and railroads made the prairies accessible, wheat growing moved westward. By 1859 Illinois, Indiana, Ohio, and Wisconsin had become the chief wheat-producing states. The soils of the new wheat region were especially fertile, and the prairies that covered large parts of several northwestern states were practically treeless; farmers did not have to clear forest cover, an occupation that consumed much of their time in the middle states. The shift of wheat growing to the Midwest accordingly increased the output per capita of American agriculture and helped to supply expanding national markets at ever-lower costs. Labor was a problem in
northern agriculture. There were generally enough hands for plowing, planting, and cultivating. But at harvest time, when the crop had to be gathered quickly, there was not enough labor to go around. In the 1830s Obed Hussey and Cyrus McCormick invented horse-drawn mechanical reapers to speed the process. A man with a hand-operated "cradle" could cut from three to four acres of ripe wheat a day; with the new machines he could harvest more than four times as much. By 1860 there were some 80,000 reapers, worth $246 million, at work on the fields of the North and West, more than in the rest of the contemporary world. Land Policy: Land policies too encouraged agricultural productivity. Congress in these years was under constant pressure to provide family farms for the growing population by accelerating the conversion of public lands to private use. In 1800 it allowed settlers to buy land in tracts of 320 acres, or half the smallest parcel previously permitted, at a minimum of $2 an acre. The same law also gave the buyer four years to pay and provided a discount of 8 percent for cash. The Land Act of 1804 lowered the minimum price to $1.64 an acre and reduced the smallest amount purchasable to 160 acres. Federal land policies, however, were not consistent. The states and the federal government occasionally sold public land in large blocks, some of 100,000 acres or more. But these were not worked as great estates. Rather, they were bought by speculators, often on credit, and resold in small parcels to settlers. The system allowed free-wheeling businesspeople to make large profits, but even that did not prevent widespread ownership of land by people of small and middling means. Low land prices and easy credit combined to set off periodic waves of speculation in the West. Buyers with little capital placed claims to much larger amounts of land than they could ever expect to farm themselves in hopes of selling most of it for profit later. Meanwhile, they met their payments to the government
by borrowing from the banks. To prevent widespread default Congress passed periodic relief acts that delayed collection of overdue payments. Such measures did not always help, however. When speculation got out of hand in 1819, the country experienced a major depression set off by panicky speculators trying to unload their land at a time when no one wanted to buy. The Panic of 1837 also stemmed in part from western land speculation. Not every would-be farmer waited for land to be surveyed and put up for sale. Many cleared some unsurveyed acres and farmed illegally. Such "squatters" risked losing fences, barns, houses, and the land itself when the tract they had settled and "improved" was finally offered for sale by the government. In 1830 champions of the squatters, led by Senator Thomas Hart Benton of Missouri, convinced Congress to pass the Pre-emption Act to allow those who had illegally occupied portions of the public domain on or before 1829 to buy up to 160 acres of land at the minimum price of $1.25 an acre before others were allowed to bid. In 1841 the time restrictions on the Pre-emption Act were removed. Land policy remained relatively unchanged for more than a decade after 1841. Benton and his colleagues, joined at times by working-class leaders, continued to fight for a "homestead act" that would give land free to all bona fide settlers. But many easterners feared that free western lands would drain off eastern labor; southerners feared it would give the government an excuse to raise the tariff to offset the loss of land-sale revenues and also that it would encourage the growth of free states. The continued opposition of the South blocked a homestead law until 1862. Still, federal land policies overall accelerated the geographical expansion of the economy's agricultural sector. Farm Productivity: If agriculture had remained stagnant, rapid overall economic growth would not have been possible. While the reaper made farm labor more efficient and newly opened "virgin" lands
yielded far more for each outlay of labor and capital than the older lands of the East, there was a serious downside to this agricultural expansion. It wastefully depleted centuries-long accumulations of topsoil nutrients. But it also churned out ever-cheaper wheat, pork, beef, fruits, vegetables, and fiber in a profusion seldom attained anywhere, anytime. Without this development there would not have been an "economic miracle." Steamboats and Roads: In some ways American geography favored the efficient, cheap transportation necessary for growth. The Mississippi River system combined with the Great Lakes made it possible for ships to penetrate deep into the vital interior of North America. But the lakes lacked lighthouses and port facilities and at several points were connected only by unnavigable rapids. As for the Mississippi system, flatboats and rafts could easily be floated down to New Orleans propelled by the current, but the trip upstream by poled keelboats required backbreaking labor and took far longer. Capitalists and inventors had been working on schemes to apply steam power to river navigation for some time, but not until Robert Fulton took up the quest, backed by the powerful Livingston family, did it become economically feasible. Soon steamboats were operating on schedule up and down the Hudson River. In 1811 the Fulton-Livingston interests, having already secured a legal monopoly of steamboat traffic in New York State waters, received an exclusive charter from the Louisiana territorial legislature to operate steamboats on the lower Mississippi. If unchecked, the Fulton group might have monopolized steamboat navigation on all the inland waters. However, the Supreme Court struck down these monopoly privileges in the case of Gibbons v.
Ogden and opened up steamboat navigation to all investors. Entrepreneurs were not long in seizing the opportunity. By 1855 there were 727 steamboats on the western rivers with a combined capacity of 170,000 tons; many more plied the Great Lakes as well as the streams and coastal waters of the Gulf and the Atlantic. It had taken four months to pole a boat upstream from New Orleans to Louisville; by 1853 steamboats made it in under four and a half days. Freight rates on the same route in this period fell from an average of $5 per hundred pounds to under 15 cents. Impressive as the advances in inland navigation were, there still remained the problem of transportation where there were no natural waterways. Overland travelers during the colonial period had been forced to use narrow, muddy, circuitous trails to move themselves and commodities. After 1800 a network of surfaced, all-weather roads for horses, carriages, and wagons began to appear, financed by tolls on users. The first major "turnpike" in the country was the Philadelphia-Lancaster Road in Pennsylvania, opened in 1794. The entire country soon caught the road-building fever. In the Northeast private capital built most of the turnpikes; in the South and West state governments built the roads directly or bought stock in private turnpike companies. The federal government also joined in the rush, investing $7 million in the construction of the National Road. Canals: Though turnpikes reduced the cost and time of moving people and goods, transportation by land remained more expensive than by water. Where there were no navigable streams or lakes, the solution was canals. A few miles of artificial waterway were constructed in the Northeast just before the War of 1812. The real boom got under way in 1817, when the NY State legislature appropriated funds for constructing an enormously long canal between the Hudson River and Lake Erie, bypassing the Appalachian barrier to connect the Great Lakes with the Atlantic Ocean. The project was an
impressive technical achievement. The state engineers learned on the job and improvised a score of new tools and techniques. In the end they moved millions of cubic yards of earth and constructed 83 locks, scores of stone aqueducts, and 363 miles of "ditch" 4 feet deep and 40 feet wide. The completed Erie Canal, opened by a colorful ceremony in 1825, was an engineering marvel that astounded the world. Power for the canal boats was provided by horses and mules that treaded towpaths on either side of the waterway. A man or boy led the animals; another man at the tiller kept the boat in mid-channel and signaled passengers seated on top of the cabin to duck by blowing a horn when the vessel approached a low bridge. The canal was also an immense economic success. In 1817 the cost of shipping freight between NYC and Buffalo on Lake Erie was 19.2 cents a ton. By 1830 it was down to 3.4 cents. Freight rates to and from the upper Mississippi Valley also plummeted. By 1832 the canal was earning the state well over one million dollars yearly in tolls. The canal deflected much of the interior trade that had gone down the Mississippi and its tributaries and redirected it eastward to NYC, reinforcing its existing economic advantage over the nation's other business centers. New York's experience inevitably aroused the envy of merchants in the other Atlantic ports. Baltimore, Boston, Philadelphia, and Charleston businessmen now demanded that their states follow New York's lead. At the same time, promoters, speculators, farmers, and merchants in the Northwest saw that their region's prosperity depended on constructing canals to link up with the waterways built or proposed. The pressure on state governments soon got results.
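The scale of the Erie Canal's cost reduction is easy to miss in passing; a quick arithmetic sketch using only the figures given above makes it concrete:

```python
# Cost of shipping freight between NYC and Buffalo, cents per ton,
# using the figures given in the text.
before = 19.2  # 1817, before the canal
after = 3.4    # 1830, on the canal

decline = (before - after) / before
print(f"{decline:.0%}")            # an 82% drop in freight cost

# Put another way, a shipper's dollar went more than five times as far.
print(round(before / after, 1))    # ≈ 5.6
```

That roughly sixfold improvement is what made it profitable to send bulky western produce east rather than down the Mississippi.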
By the 1830s the dirt was flying all over the Northeast and Northwest as construction crews raced to create a great network of canals. In 1816 there were 100 miles of canals in the U.S.; by 1840 over 3,300 miles of artificial waterways crisscrossed the Middle Atlantic states, southern New England, and the Old Northwest. Few canals built after 1825 were as successful as the Erie. Some never overcame difficult terrain and other engineering problems; others never attracted sufficient business to repay investors. Still others were built too late and were overtaken by the railroads, which provided quicker and less easily interrupted service. Nevertheless, the sharp decline in freight and passenger rates was a great boon to interregional trade. Western farmers found new outlets in the East for their wheat, corn, pork, beef, and other commodities. With transportation costs lower, the price of manufactured goods in the West fell, enabling eastern manufacturers to sell more to western customers. Everyone benefited. The Railroads Arrive: The railroads, too, encouraged growth. The early steam railroads were plagued by technical problems. Engines frequently broke down; boilers exploded. And even on a normal trip, passengers emerged from the cars nearly suffocated by smoke or with holes burned in their clothes from flying sparks. Rails were at first flat iron straps nailed to wooden beams. When these came loose, they sometimes curled up through the floors of moving passenger cars, maiming or killing the occupants. Cattle that got in the way of trains caused derailments. Trains moving rapidly over lightly ballasted rails and around sharp curves did not always stay on the track. Some of these problems were inevitable in so new a system, but accidents were also the result of makeshift construction imposed by the shortage of capital and the desire to build quickly. Gradually, railroad technology improved.
All-iron rails, more substantial passenger cars, the "cow catcher" in front of the locomotive to push aside obstructions, more dependable boilers, and enlarged smokestacks to contain the hot sparks all made the railroads more efficient and more comfortable. To deal with the hairpin curves characteristic of American railroads, engineers developed loose-jointed engines and cars with wheels that swiveled to guide trains around turns. The first major American railroad was the Baltimore and Ohio, chartered in 1828. In 1833 the Charleston and Hamburg in South Carolina reached its terminus 136 miles from its starting point, making it the longest railroad in the world. By 1860 the country boasted some 30,000 miles of track, and passengers and freight could travel by rail from the Atlantic coast as far west as St. Joseph, Missouri, and from Portland, Maine, to New Orleans. The system was far from complete, and many communities remained without rail connections. Nevertheless, the accomplishment was impressive. The Factory System: Advances in agriculture and transportation contributed immensely to the pre-Civil War economic surge. But the most dynamic development of the antebellum economy was the rise of the factory system, initially in southern New England. There had been large workshops here and there in the colonial period, but none of these had brought together hundreds of "operatives" and expensive power-driven machinery under one roof to produce a single uniform product. The modern factory, copied from eighteenth-century English inventors and entrepreneurs in cotton textile manufactures, arrived in the U.S.
soon after independence and in a rudimentary form. The first mill using water power to spin cotton yarn was probably the Beverly Cotton Manufactory of Massachusetts, incorporated in 1789. In 1790 a skilled English mechanic, Samuel Slater, linked up with Almy and Brown, a concern with capital to invest descended from colonial candle-makers and West Indies traders. In 1790-1791 the firm opened the nation's first cotton spinning mill at Pawtucket, Rhode Island. Before long, the small state was covered with spinning mills that employed whole families, including women and young children, to tend the water-powered spindles. The Rhode Island mills produced only cotton yarn. For finished cloth skilled hand weaving was still needed, a labor-intensive and expensive process. The deficiency was made up in the second decade of the nineteenth century when Francis Cabot Lowell, a Boston merchant hard hit by Jefferson's embargo, visited Lancashire, center of the flourishing British textile industry. Lowell took careful note of the latest power looms and carried the plans home in his head, prepared to build a loom superior to the original. Joining with other merchants, Lowell secured a corporation charter for the Boston Manufacturing Company. With their combined capital the promoters built a mill at a water power site in Waltham on the Charles River. The first cotton cloth came from the company's power looms in 1815 and proved superior to British imports. Between 1816 and 1826 the Boston Manufacturing Company averaged almost 19 percent profit a year. The promoters soon found they could not produce enough cloth at the limited Waltham power site to satisfy the demand and made plans for a completely new textile community along the swift-flowing Merrimack. The new mills at Lowell, Massachusetts, were much larger than either the Waltham factory or the earlier spinning mills in Rhode Island. How could they attract enough labor for the new factories?
The promoters turned to New England farm girls drawn to Lowell by promises of good wages and cheap, attractive dormitory housing built at company expense. The company also provided a lyceum, where the literate and pious young women could hear edifying lectures, and paid for a church and a minister. By the mid-1830s Lowell was a town of 18,000 people with schools, libraries, paved streets, churches, and health facilities. The mills themselves numbered some half-dozen, each separately incorporated, arranged in quadrangles surrounded by the semidetached houses of the townsfolk and the dormitories of the female workers. The Lowell system became famous even across the Atlantic. Distinguished foreign visitors made pilgrimages to the town and were invariably impressed by what they saw. The British novelist Charles Dickens, who had encountered at home the worst evils of industrialism, noted that the girls at Lowell wore "serviceable bonnets, good warm cloaks and shawls . . . , [were] healthy in appearance, many of them remarkably so . . . , [and had] the manners and deportment of young women, not of degraded brutes." What a contrast they made with the beaten, sickly workers and child laborers of the mills of Lancashire and Birmingham! Unfortunately, the halcyon days did not last. During the "hungry forties," when the nation's economy slowed, conditions in the mills worsened. The girls' wages were cut, and when they protested, they were replaced with newly arrived Irish immigrants who were not so demanding. But for a time the Lowell system served as a showcase for the benefits of industrialization. Industrial Workers: Unequal Gains: In 1815, well before Lowell, the Erie Canal, and the Baltimore and Ohio Railroad, Americans were already a rich people by the standards of the day. During the next 35 years their average wealth and income increased impressively.
One scholar believes that between the mid-1830s and the Civil War alone, annual GNP (gross national product, a dollar measure of all goods and services produced) more than doubled. Growth in per capita GNP was also high, as much as 2.5 percent a year in the 1825-1837 period, for example. Yet it is clear that all Americans did not benefit equally from the economic surge. Clearly it enlarged the urban middle class by creating jobs not only for laborers and factory operatives but also for engineers, clerks, bookkeepers, factory managers, and others. Most of these "white-collar" workers were native-born Americans whose familiarity with the English language and American ways gave them the pick of the new jobs. The industrial leap also created a new class of rich manufacturers, bankers, and railroad promoters. Many were "new" men who used the industrial transformation to lift themselves out of poverty. Samuel Slater, for one, had come to America in 1789 with almost nothing; by 1829 he was worth $700,000. But in fact, the economic growth of the 1815-1860 period was accompanied by growing inequality of economic condition. Studies of wealth ownership between the end of the colonial era and 1860 show a considerable increase in the proportion of houses, land, slaves, bank accounts, ships, equipment, factories, and other kinds of property owned by the richest 10 percent of the American people, compared with everyone else. Wages and Working Conditions: Leaving aside the South's slaves, taken as a whole, American wage earners made real economic gains during the generation preceding 1860. But while conditions were improving overall, individual circumstances varied widely. Relatively few married women worked for wages, but those who did were badly paid.
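The growth figures quoted above are internally consistent, as a bit of compound-interest arithmetic shows. This is an illustrative sketch, not from the text; the 25-year span taken for "mid-1830s to the Civil War" is an assumed round number:

```python
import math

# Doubling time at a constant compound growth rate g: t = ln(2) / ln(1 + g).
def doubling_time(g: float) -> float:
    return math.log(2) / math.log(1 + g)

# Per capita GNP growing at 2.5 percent a year doubles in about 28 years.
print(round(doubling_time(0.025), 1))  # ≈ 28.1

# Conversely, total GNP "more than doubling" over roughly 25 years
# (mid-1830s to 1860, an assumed span) implies a compound rate of
# at least about 2.8 percent a year.
implied_rate = 2 ** (1 / 25) - 1
print(round(implied_rate * 100, 1))    # ≈ 2.8
```

Growth rates of this order, sustained for decades, are what the phrase "economic miracle" is pointing at.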
When women teachers flocked to the new public schools, teachers' average wage levels fell. For traditional "women's work" the situation was similar. Female household servants in 1850 received, typically, a little over a dollar a week plus their room and board. Manufacturers of straw hats, ready-made clothes, and shoes relied on a large pool of poorly paid female workers, many employed part-time at home and paid "by the piece." These women often earned no more than 25 cents a day. (The Lowell girls were relatively affluent at $2.50 to $3 a week.) Many men were not much richer. In 1850 common laborers - ditch diggers, stevedores, carters, and the like - received 61 cents a day with board, or 87 cents without board. Skilled labor was in shorter supply and so better rewarded. Blacksmiths earned about $1.10 a day in 1852. In 1847 a skilled iron founder in Pennsylvania could make as much as $30 per week. The Boston Manufacturing Company paid machinists up to $11 a week. To put these earnings in perspective, the New York Tribune estimated in 1851 that a minimum budget of about $10 a week was needed to support a family of five in expensive NYC. This meant that an unskilled worker needed help from other family members, and they generally got it. In many families children were put to work at ten or twelve and earned enough to push total family incomes past the bare subsistence point. One scholar estimates that just after the Civil War family heads in Massachusetts earned just 57 percent of total family income; the rest came from the employed children. Though the income picture for labor is mixed, wage earners were clearly better off in the U.S. than in Europe.
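The gap between an unskilled wage and the Tribune's minimum budget can be made explicit. In this sketch the wage and budget figures are from the text; the six-day work week is an assumption for illustration:

```python
# Budget arithmetic implied by the figures in the text (1850-1851).
daily_wage = 0.87      # unskilled laborer, per day without board (from the text)
days_per_week = 6      # assumed work week
budget = 10.00         # Tribune's minimum weekly budget, family of five

weekly_earnings = daily_wage * days_per_week
shortfall = budget - weekly_earnings

print(f"earned ${weekly_earnings:.2f}, ${shortfall:.2f} short of ${budget:.2f}")
```

On these assumptions an unskilled laborer covered only about half the estimated minimum, which makes the text's point concrete: wives and children had to earn the rest.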
We know of one Irish immigrant construction worker who received wages of 75 cents a day plus board, including meat three times a day. Writing to his family in Ireland, however, he told them he ate meat three times a week. When asked why he hid the truth, the man replied, "If I told them that, they'd never believe me." In fact, the abundance of cheap food, especially items seldom part of the working-class diet in Europe, invariably astonished people accustomed to foreign practice. One immigrant expressed amazement at what his New York boardinghouse offered its patrons. Breakfast included "beef steaks, fish, ham, ginger cakes, buckwheat cakes etc such a profusion as I never saw before at the breakfast tables." And at dinner there was even "a greater profusion than breakfast." But wages and income were not the whole story. Wage earners' lives were not easy. Work hours were long. The Lowell girls spent 12 hours a day at work, fewer in winter, more in summer. Foreigners, seeking to explain superior American wages, believed that Americans worked harder than their own compatriots. And they probably did. Though the pace of factory labor was more leisurely than today, it was difficult for people used to the slow rhythms of the nineteenth-century farm to adjust to the remorseless pace of the factory machines. Pre-Civil War workers and their families also experienced great insecurity.
Occupational accidents were common, and when workers were injured, they usually lost their jobs. Men killed in the mines or factories left behind families who had to turn to meager private charities or begrudging public support. Besides industrial disaster, there was the uncertainty of employment. A bad harvest or a particularly hard winter often left agricultural workers destitute. Severe periodic depressions produced acute hardship among laborers and factory workers. During the hard times that began in 1819, an English traveler through the East and Northwest noted that he had "seen upwards of 1,500 men in quest of work within 11 months past." Again, following the 1837 and 1857 panics, unemployment forced many wage earners to ask for city and state relief for themselves and their families. In 1857 there were food riots in several northern cities. Still another source of distress among workers was the downgrading of skills and the loss of independence that sometimes accompanied mechanization and the factory system. The fate of the Massachusetts shoemakers is a case in point. In the opening years of the nineteenth century they had been skilled, semi-independent craftsmen. Merchants brought them cut leather and paid them a given sum for each pair of shoes they sewed and finished in their "ten-footers," the ten-by-ten sheds they worked in behind their homes. These skilled craftsmen owned their own tools and often employed their wives and grown children to help with the work. Not only were they well paid, they also enjoyed a sense of independence, since they were subcontractors, not wage earners, and were the heads of their households, not only in a social and legal sense but also in a direct economic way. Gradually, as the market for ready-made shoes, especially for southern slaves, expanded, the shoemakers' independence and incomes declined.
Merchants divided the shoemaking process into smaller and simpler parts and "put out" the simplified work to unmarried young women in New England country villages. Eventually entrepreneurs introduced power-driven machines that sewed heavy leather, enabling the merchants to establish factories where wage workers could use the expensive, capitalist-owned machines. By the eve of the Civil War the independent master craftsman working in his ten-footer had been replaced by semiskilled laborers working for weekly wages in factories.

The Labor Movement: Clearly, many wage earners were unhappy with the new aggressive capitalism and the new factory system. In 1836 the young women at Lowell went on strike to protest a wage cut. In the end the owners won and the wage cut stuck. In 1860 the shoemakers of Lynn, Massachusetts, "turned out" to protest declining wages; before the strike ended, some 20,000 Massachusetts shoemakers had left their places at the machines. All through the antebellum period workers struck for higher wages or better working conditions. Most of the strikes were unplanned uprisings in response to some unexpected blow such as a wage cut. But some grew out of long-standing grievances such as the sheer drudgery of factory life or the loss of worker independence. These grievances created a labor movement of considerable dimensions. The small community craft societies organized in the early 1800s expanded over the next 30 years into citywide labor unions, each representing a whole trade. Later, such local unions joined together into national organizations. But after the Panic of 1837 employers usually defeated the strikers by threatening to hire the many unemployed. Trade unions thereafter declined, and in the next two decades labor discontent generally was diverted from union organizing to political action and various reform movements. We must not exaggerate the extent of labor discontent during these years, however.
The school system, as well as the churches, worked hard to instill the "work ethic" into the labor force, and on the whole they were successful. By and large the American workforce cooperated with economic growth. As one pre-1860 observer noted, in New England "every workman seems to be continually devising some new thing to assist him in his work, and there [is] a strong desire both with masters and workman . . . to be 'posted up' [that is, kept informed] in every improvement." Skilled English workingmen who came to American machine shops in the 1830s and 1840s were often startled to find that their American counterparts, rather than fighting the shop owners, were "fire eaters" whose "ravenous appetites for labor" made their own performance look bad. Several eminent students of American economic development are convinced that this cooperation was one of the most important elements in creating the pre-Civil War American economic miracle.
Censorship is the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, politically incorrect or "inconvenient" as determined by government authorities or by community consensus.
Governments and private organizations may engage in censorship. Other groups or institutions may propose and petition for censorship. When an individual such as an author or other creator engages in censorship of their own works or speech, it is referred to as self-censorship.
Censorship could be direct or indirect, in which case it is referred to as soft censorship. It occurs in a variety of different media, including speech, books, music, films, and other arts, the press, radio, television, and the Internet for a variety of claimed reasons including national security, to control obscenity, child pornography, and hate speech, to protect children or other vulnerable groups, to promote or restrict political or religious views, and to prevent slander and libel.
Direct censorship may or may not be legal, depending on the type, location, and content. Many countries provide strong protections against censorship by law, but none of these protections are absolute and frequently a claim of necessity to balance conflicting rights is made, in order to determine what could and could not be censored. There are no laws against self-censorship.
In 399 BC, the Greek philosopher Socrates defied attempts by the Greek state to censor his philosophical teachings and was sentenced to death by drinking the poison hemlock. Socrates' student Plato is said to have advocated censorship in The Republic, which opposed the existence of democracy. In contrast to Plato, the Greek playwright Euripides (480–406 BC) defended the true liberty of freeborn men, including the right to speak freely. In 1766, Sweden became the first country to abolish censorship by law.
The rationale for censorship is different for various types of information censored:
- Moral censorship is the removal of materials that are obscene or otherwise considered morally questionable. Pornography, for example, is often censored under this rationale, especially child pornography, which is illegal and censored in most jurisdictions in the world.
- Military censorship is the process of keeping military intelligence and tactics confidential and away from the enemy. This is used to counter espionage, which is the process of gleaning military information.
- Political censorship occurs when governments hold back information from their citizens. This is often done to exert control over the populace and prevent free expression that might foment rebellion.
- Religious censorship is the means by which any material considered objectionable by a certain religion is removed. This often involves a dominant religion forcing limitations on less prevalent ones. Alternatively, one religion may shun the works of another when they believe the content is not appropriate for their religion.
- Corporate censorship is the process by which editors in corporate media outlets intervene to disrupt the publishing of information that portrays their business or business partners in a negative light, or intervene to prevent alternate offers from reaching public exposure.
Strict censorship existed in the Eastern Bloc. Throughout the bloc, the various ministries of culture held a tight rein on their writers. Cultural products there reflected the propaganda needs of the state. Party-approved censors exercised strict control in the early years. In the Stalinist period, even the weather forecasts were changed if they suggested that the sun might not shine on May Day. Under Nicolae Ceauşescu in Romania, weather reports were doctored so that the temperatures were not seen to rise above or fall below the levels which dictated that work must stop.
Independent journalism did not exist in the Soviet Union until Mikhail Gorbachev became its leader; all reporting was directed by the Communist Party or related organizations. Pravda, the predominant newspaper in the Soviet Union, had a monopoly. Foreign newspapers were available only if they were published by Communist Parties sympathetic to the Soviet Union.
Possession and use of copying machines was tightly controlled in order to hinder production and distribution of samizdat, illegal self-published books and magazines. Possession of even a single samizdat manuscript such as a book by Andrei Sinyavsky was a serious crime which might involve a visit from the KGB. Another outlet for works which did not find favor with the authorities was publishing abroad.
The People's Republic of China employs sophisticated censorship mechanisms, referred to as the Golden Shield Project, to monitor the internet. Popular search engines such as Baidu also remove politically sensitive search results.
Cuban media used to be operated under the supervision of the Communist Party's Department of Revolutionary Orientation, which "develops and coordinates propaganda strategies". Connection to the Internet is restricted and censored.
Censorship also takes place in capitalist nations, such as Uruguay. In 1973, a military coup took power in Uruguay, and the State practiced censorship. For example, writer Eduardo Galeano was imprisoned and later was forced to flee. His book Open Veins of Latin America was banned by the right-wing military government, not only in Uruguay, but also in Chile and Argentina.
In the United States, censorship occurs through books, film festivals, politics, and public schools. See banned books for more information. Additionally, critics of campaign finance reform in the United States say this reform imposes widespread restrictions on political speech.
In the Republic of Singapore, Section 33 of the Films Act originally banned the making, distribution and exhibition of "party political films", on pain of a fine not exceeding $100,000 or imprisonment for a term not exceeding 2 years. The Act further defines a "party political film" as any film or video
- (a) which is an advertisement made by or on behalf of any political party in Singapore or any body whose objects relate wholly or mainly to politics in Singapore, or any branch of such party or body; or
- (b) which is made by any person and directed towards any political end in Singapore
In 2001, the short documentary called A Vision of Persistence on opposition politician J. B. Jeyaretnam was also banned for being a "party political film". The makers of the documentary, all lecturers at the Ngee Ann Polytechnic, later submitted written apologies and withdrew the documentary from being screened at the 2001 Singapore International Film Festival in April, having been told they could be charged in court. Another short documentary called Singapore Rebel by Martyn See, which documented Singapore Democratic Party leader Dr Chee Soon Juan's acts of civil disobedience, was banned from the 2005 Singapore International Film Festival on the same grounds and See is being investigated for possible violations of the Films Act.
This law, however, is often disregarded when such political films are made supporting the ruling People's Action Party (PAP). Channel NewsAsia's five-part documentary series on Singapore's PAP ministers in 2005, for example, was not considered a party political film.
Since March 2009, the Films Act has been amended to allow party political films as long as they were deemed factual and objective by a consultative committee. Some months later, this committee lifted the ban on Singapore Rebel.
State secrets and prevention of attention
In wartime, explicit censorship is carried out with the intent of preventing the release of information that might be useful to an enemy. Typically it involves keeping times or locations secret, or delaying the release of information (e.g., an operational objective) until it is of no possible use to enemy forces. The moral issues here are often seen as somewhat different, as proponents of this form of censorship argue that the release of tactical information usually presents a greater risk of casualties among one's own forces and could possibly lead to loss of the overall conflict.
During World War I letters written by British soldiers would have to go through censorship. This consisted of officers going through letters with a black marker and crossing out anything which might compromise operational secrecy before the letter was sent. The World War II catchphrase "Loose lips sink ships" was used as a common justification to exercise official wartime censorship and encourage individual restraint when sharing potentially sensitive information.
An example of "sanitization" policies comes from the USSR under Joseph Stalin, where publicly used photographs were often altered to remove people whom Stalin had condemned to execution. Though past photographs may have been remembered or kept, this deliberate and systematic alteration to all of history in the public mind is seen as one of the central themes of Stalinism and totalitarianism.
Censorship is occasionally carried out to aid authorities or to protect an individual, as with some kidnappings when attention and media coverage of the victim can sometimes be seen as unhelpful.
Censorship by religion is a form of censorship where freedom of expression is controlled or limited using religious authority or on the basis of the teachings of the religion. This form of censorship has a long history and is practiced in many societies and by many religions. Examples include the Galileo affair, Edict of Compiègne, the Index Librorum Prohibitorum (list of prohibited books) and the condemnation of Salman Rushdie's novel The Satanic Verses by Iranian leader Ayatollah Ruhollah Khomeini. Images of the Islamic figure Muhammad are also regularly censored. In some secular countries, this is sometimes done to prevent hurting religious sentiments.
The content of school textbooks is often the issue of debate, since their target audience is young people, and the term "whitewashing" is the one commonly used to refer to removal of critical or conflicting events. The reporting of military atrocities in history is extremely controversial, as in the case of The Holocaust (or Holocaust denial), Bombing of Dresden, the Nanking Massacre as found with Japanese history textbook controversies, the Armenian Genocide, the Tiananmen Square protests of 1989, and the Winter Soldier Investigation of the Vietnam War.
In the context of secondary school education, the way facts and history are presented greatly influences the interpretation of contemporary thought, opinion and socialization. One argument for censoring the type of information disseminated is based on the inappropriate quality of such material for the young. The use of the "inappropriate" distinction is in itself controversial, as standards of what counts as appropriate have shifted heavily over time. A Ballantine Books version of Fahrenheit 451, the version used by most school classes, contained approximately 75 separate edits, omissions, and changes from the original Bradbury manuscript.
In February 2006 a National Geographic cover was censored by the Nashravaran Journalistic Institute. The offending cover was about the subject of love and a picture of an embracing couple was hidden beneath a white sticker.
Copy, picture, and writer approval
Copy approval is the right to read and amend an article, usually an interview, before publication. Many publications refuse to give copy approval, but it is increasingly becoming common practice when dealing with publicity-anxious celebrities. Picture approval is the right given to an individual to choose which photos will be published and which will not. Robert Redford is well known for insisting upon picture approval. Writer approval is when writers are chosen based on whether they will write flattering articles or not. Hollywood publicist Pat Kingsley is known for banning certain writers who wrote undesirably about one of her clients from interviewing any of her other clients.
Censors exhibit creativity in many ways, but one specific variant is of particular concern: censors who rewrite texts, giving these texts secret co-authors.
Self-censorship is the act of censoring or classifying one's own blog, book, film, or other forms of media. This is done out of fear of, or deference to, the sensibilities or preferences (actual or perceived) of others and without overt pressure from any specific party or institution of authority. Self-censorship is often practiced by film producers, film directors, publishers, news anchors, journalists, musicians, and other kinds of authors including individuals who use social media.
According to a Pew Research Center and the Columbia Journalism Review survey, "About one-quarter of the local and national journalists say they have purposely avoided newsworthy stories, while nearly as many acknowledge they have softened the tone of stories to benefit the interests of their news organizations. Fully four-in-ten (41%) admit they have engaged in either or both of these practices."
Threats to media freedom have shown a significant increase in Europe in recent years, according to a study published in April 2017 by the Council of Europe. This results in a fear of physical or psychological violence, and the ultimate result is self-censorship by journalists.
Book censorship can be enacted at the national or sub-national level, and can carry legal penalties for their infraction. Books may also be challenged at a local, community level. As a result, books can be removed from schools or libraries, although these bans do not extend outside of that area.
Aside from the usual justifications of pornography and obscenity, some films are censored due to changing racial attitudes or political correctness in order to avoid ethnic stereotyping and/or ethnic offense, despite their historical or artistic value. One example is the still-withdrawn "Censored Eleven" series of animated cartoons, which may have been innocent then but are "incorrect" now.
Film censorship is carried out by various countries to differing degrees. For example, only 34 foreign films a year are approved for official distribution in China's strictly controlled film market.
Music censorship has been implemented by states, religions, educational systems, families, retailers and lobbying groups – and in most cases they violate international conventions of human rights.
Censorship of maps is often employed for military purposes. For example, the technique was used in former East Germany, especially for the areas near the border to West Germany, in order to make attempts at defection more difficult. Censorship of maps is also applied by Google Maps, where certain areas are grayed out or blacked out, or purposely left outdated with old imagery.
Under subsection 48(3) and (4) of the Penang Islamic Religious Administration Enactment 2004, non-Muslims in Malaysia are penalized for using the following words, or to write or publish them, in any form, version or translation in any language or for use in any publicity material in any medium: "Allah", "Firman Allah", "Ulama", "Hadith", "Ibadah", "Kaabah", "Qadhi'", "Illahi", "Wahyu", "Mubaligh", "Syariah", "Qiblat", "Haji", "Mufti", "Rasul", "Iman", "Dakwah", "Wali", "Fatwa", "Imam", "Nabi", "Sheikh", "Khutbah", "Tabligh", "Akhirat", "Azan", "Al Quran", "As Sunnah", "Auliya'", "Karamah", "False Moon God", "Syahadah", "Baitullah", "Musolla", "Zakat Fitrah", "Hajjah", "Taqwa" and "Soleh".
Publishers of the Spanish reference dictionary of the Real Academia Española received petitions to censor the entries "Jewishness", "Gypsiness", "black work" and "weak sex", claiming that they are either offensive or not politically correct.
One elementary school's obscenity filter changed every occurrence of the string "tit" to "breast", so when a child typed "U.S. Constitution" into the school computer, the filter rendered it as "Consbreastution".
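The "Consbreastution" mishap is a classic instance of what is informally called the Scunthorpe problem: a filter that replaces raw substrings mangles innocent words that merely contain the offending letters. A minimal sketch (in Python, with hypothetical filter functions, not the school's actual software) contrasts the naive approach with a word-boundary match:

```python
import re

def naive_filter(text: str) -> str:
    """Blind substring replacement -- mangles words that merely
    contain the target letters, as the school's filter did."""
    return text.replace("tit", "breast")

def word_boundary_filter(text: str) -> str:
    """Replace only whole words, leaving words like 'Constitution'
    that merely contain the substring untouched."""
    return re.sub(r"\btit\b", "breast", text, flags=re.IGNORECASE)

print(naive_filter("U.S. Constitution"))          # U.S. Consbreastution
print(word_boundary_filter("U.S. Constitution"))  # U.S. Constitution
```

Even word-boundary matching is only a partial fix; without context a filter cannot tell an obscene use from an innocent one, which is one reason automated obscenity filters so often over- or under-block.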
British photographer and visual artist Graham Ovenden's photos and paintings were ordered to be destroyed by a London's magistrate court in 2015 for being "indecent" and their copies had been removed from the online Tate gallery.
A 1980 Israeli law banned artwork composed of the four colours of the Palestinian flag, and Palestinians were arrested for displaying such artwork or even for carrying sliced melons with the same pattern.
Internet censorship is control or suppression of the publishing or accessing of information on the Internet. It may be carried out by governments or by private organizations either at the behest of government or on their own initiative. Individuals and organizations may engage in self-censorship on their own or due to intimidation and fear.
The issues associated with Internet censorship are similar to those for offline censorship of more traditional media. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering.
Unless the censor has total control over all Internet-connected computers, such as in North Korea or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) protect free speech using technologies that guarantee material cannot be removed and prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system.
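The site-blocking technique mentioned above reduces, at its core, to a membership test against a domain blocklist. The sketch below (Python; the domain names are invented for illustration, and real national filters operate at the DNS or routing layer with far larger, constantly updated lists) shows the basic decision a filtering system makes for each request:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real deployments hold millions of entries.
BLOCKED_DOMAINS = {"example-banned.org", "forbidden.example.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or any of
    its subdomains -- the basic test behind host-based blocking."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://example-banned.org/page"))   # True
print(is_blocked("https://news.example-banned.org/"))  # True
print(is_blocked("https://example.com/"))              # False
```

Because the test keys only on the hostname, mirrors and freshly registered domains routinely evade it, which is why maintaining such a blocklist is an ongoing arms race between censors and circumventors.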
Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies:
- A 1993 Time Magazine article quotes computer scientist John Gilmore, one of the founders of the Electronic Frontier Foundation, as saying "The Net interprets censorship as damage and routes around it."
- In November 2007, "Father of the Internet" Vint Cerf stated that he sees government control of the Internet failing because the Web is almost entirely privately owned.
- A report of research conducted in 2007 and published in 2009 by the Berkman Center for Internet & Society at Harvard University stated that: "We are confident that the [censorship circumvention] tool developers will for the most part keep ahead of the governments' blocking efforts", but also that "...we believe that less than two percent of all filtered Internet users use circumvention tools".
- In contrast, a 2011 report by researchers at the Oxford Internet Institute published by UNESCO concludes "... the control of information on the Internet and Web is certainly feasible, and technological advances do not therefore guarantee greater freedom of speech."
A BBC World Service poll of 27,973 adults in 26 countries, including 14,306 Internet users, was conducted between 30 November 2009 and 7 February 2010. The head of the polling organization felt, overall, that the poll showed that:
- Despite worries about privacy and fraud, people around the world see access to the internet as their fundamental right. They think the web is a force for good, and most don’t want governments to regulate it.
The poll found that nearly four in five (78%) Internet users felt that the Internet had brought them greater freedom, that most Internet users (53%) felt that "the internet should never be regulated by any level of government anywhere", and almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right (50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion).
The rising usage of social media in many nations has led to the emergence of citizens organizing protests through social media, sometimes called "Twitter Revolutions." The most notable of these social media-led protests were part of the Arab Spring uprisings, starting in 2010. In response to the use of social media in these protests, the Tunisian government began hacking Tunisian citizens' Facebook accounts, and reports arose of accounts being deleted.
Automated systems can be used to censor social media posts, and therefore limit what citizens can say online. This most notably occurs in China, where social media posts are automatically censored depending on content. In 2013, Harvard political science professor Gary King led a study to determine what caused social media posts to be censored and found that posts mentioning the government were no more or less likely to be deleted whether they were supportive or critical of the government. Posts mentioning collective action were more likely to be deleted than those that did not. Currently, social media censorship appears primarily as a way to restrict Internet users' ability to organize protests. For the Chinese government, seeing citizens unhappy with local governance is beneficial, as state and national leaders can replace unpopular officials. King and his researchers were able to predict when certain officials would be removed based on the number of unfavorable social media posts.
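King's finding, that deletion tracked topic (collective action) rather than sentiment, can be caricatured as a keyword trigger. The toy model below (Python; the term list is hypothetical, and real systems are far more sophisticated) deletes any post touching a collective-action keyword, regardless of whether the post praises or criticizes the government:

```python
# Hypothetical trigger list; real censors use much larger, evolving lists.
COLLECTIVE_ACTION_TERMS = {"protest", "rally", "strike", "demonstration"}

def should_delete(post: str) -> bool:
    """Flag a post if it mentions collective action, regardless of
    whether it supports or criticizes the government."""
    words = set(post.lower().split())
    return bool(words & COLLECTIVE_ACTION_TERMS)

print(should_delete("the local policy is terrible"))         # False
print(should_delete("everyone join the protest tomorrow"))   # True
print(should_delete("the rally in support of our leaders"))  # True
```

Note how, matching the study's pattern, pure criticism survives while even a supportive post about a rally is flagged: the trigger is the prospect of organized action, not the opinion expressed.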
Social media sites such as Facebook are known to censor posts containing nudity and hate speech. As of November 2016, Twitter had banned numerous accounts associated with alt-right politics. Facebook is more hesitant to censor news than it is to censor nudity or hate speech. Whether social media sites should take it upon themselves to ban fake news is a more debated question; many think that banning fake news would help stop its spread, since numerous adults receive their news directly from social media. It remains a contentious topic with varying viewpoints.
Since the early 1980s, advocates of video games have emphasized their use as an expressive medium, arguing for their protection under the laws governing freedom of speech and also as an educational tool. Detractors argue that video games are harmful and therefore should be subject to legislative oversight and restrictions. Many video games have certain elements removed or edited due to regional rating standards. For example, in the Japanese and PAL versions of No More Heroes, blood splatter and gore are removed from the gameplay. Decapitation scenes are implied but not shown, and scenes showing severed body parts are replaced with the same scenes showing the bodies fully intact.
Surveillance as an aid
Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance. And even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship.
Protection of sources is no longer just a matter of journalistic ethics; it increasingly also depends on the journalist's computer skills and all journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online or storing it on a computer or mobile phone. And individuals associated with high-profile rights organizations, dissident, protest, or reform groups are urged to take extra precautions to protect their online identities.
The former Soviet Union maintained a particularly extensive program of state-imposed censorship. The main organ for official censorship in the Soviet Union was the Chief Agency for Protection of Military and State Secrets generally known as the Glavlit, its Russian acronym. The Glavlit handled censorship matters arising from domestic writings of just about any kind—even beer and vodka labels. Glavlit censorship personnel were present in every large Soviet publishing house or newspaper; the agency employed some 70,000 censors to review information before it was disseminated by publishing houses, editorial offices, and broadcasting studios. No mass medium escaped Glavlit's control. All press agencies and radio and television stations had Glavlit representatives on their editorial staffs.
Sometimes, public knowledge of the existence of a specific document is subtly suppressed, a situation resembling censorship. The authorities taking such action will justify it by declaring the work to be "subversive" or "inconvenient". An example is Michel Foucault's 1978 text Sexual Morality and the Law (later republished as The Danger of Child Sexuality), originally published as La loi de la pudeur [literally, "the law of decency"]. This work defends the decriminalization of statutory rape and the abolition of age of consent laws.
When a publisher comes under pressure to suppress a book, but has already entered into a contract with the author, they will sometimes effectively censor the book by deliberately ordering a small print run and making minimal, if any, attempts to publicize it. This practice became known in the early 2000s as privishing (private publishing).
Censorship has been criticized throughout history for being unfair and hindering progress. In a 1997 essay on Internet censorship, social commentator Michael Landier claims that censorship is counterproductive as it prevents the censored topic from being discussed. Landier expands his argument by claiming that those who impose censorship must consider what they censor to be true, as individuals believing themselves to be correct would welcome the opportunity to disprove those with opposing views.
Censorship is often used to impose moral values on society, as in the censorship of material considered obscene. English novelist E. M. Forster was a staunch opponent of censoring material on the grounds that it was obscene or immoral, raising the issue of moral subjectivity and the constant changing of moral values. When the novel Lady Chatterley's Lover was put on trial in 1960, Forster wrote:
‘Lady Chatterley’s Lover is a literary work of importance...I do not think that it could be held obscene, but am in a difficulty here, for the reason that I have never been able to follow the legal definition of obscenity. The law tells me that obscenity may deprave and corrupt, but as far as I know, it offers no definition of depravity or corruption.
Censorship by country collects information on censorship, Internet censorship, Freedom of the Press, Freedom of speech, and Human Rights by country and presents it in a sortable table, together with links to articles with more information. In addition to countries, the table includes information on former countries, disputed countries, political sub-units within countries, and regional organizations.
Titanic Size Comparison to Modern Cruise Ships
When the Titanic was launched in 1912 it was considered to be the biggest man-made object ever built to float on the waters. It was an incredible engineering achievement since modern technology was still in its infancy near the beginning of the twentieth century. The idea of the Titanic was first conceived in Lord and Lady Pirrie’s Downshire home in London, six years before its fateful maiden voyage. Bruce Ismay and Lord Pirrie wanted to build the largest luxury ship ever. How large was this ship? Was it about the size of the modern day cruise ships some of us are familiar with, or does its size pale in comparison?
Surprising Facts About the Titanic
- The Titanic was registered as a British ship despite the fact that it was owned by an American. John Pierpont Morgan was the owner of the White Star Line. In 1902, when he bought the White Star Line in Britain, the company was formally called the Oceanic Steam Navigation Company. Its office was located at 9 Broadway, New York City.
- The Titanic was not christened by breaking a champagne bottle against its hull. The movie "A Night To Remember" had it wrong: the White Star Line did not believe in this practice. The Titanic's sister ships were not christened at their launches either.
- When the Titanic sank there was no priceless jewelry aboard. However, a Renault sports car went down with her.
- There was no 300 foot long gash along the hull of the ship from the collision with the iceberg. The damaged area was only 12 square feet in size. This was determined by a 1996 expedition using a sonar device to scan the hull buried in 60 feet of sand.
- The Titanic had enough lifeboats for 1,178 passengers. The requirement at the time was only that there be enough boats to ferry people back and forth to a rescue ship, and it was assumed that the Titanic's watertight compartments would keep the ship afloat long enough to complete such a transfer. Had the Carpathia arrived in time, everyone on the ship could have been rescued before it sank. The ship took 2 hours and 40 minutes to sink after its collision with the iceberg, which was plenty of time to rescue nearly everyone aboard. Even so, 465 of the available 1,178 lifeboat seats went unfilled that fateful night.
For Its Time the Titanic Was the Biggest Ship
Let me begin with the fact that the Titanic was a large ship for its time. It was longer than the famous Lusitania by more than 100 feet; the Lusitania itself was 790 feet in length. Prior to construction, three berths at the Belfast shipyard had to be modified to accommodate the Titanic and its two sister ships, the Britannic and the Olympic. Across the Atlantic, modifications also had to be made to the pier in New York City harbor to receive these larger ships. Construction of the Titanic officially began on March 31, 1909, and continued for approximately two years until May 31, 1911, when the hull was completed. It took another 10 months of fitting-out to put the final touches on the ship before her sea trials on April 2, 1912, eight days before her maiden voyage from Southampton to New York.
Titanic Facts by the Numbers
The Titanic and its sister ships did not hold the distinction of being the largest ships for long, even though they were 883 feet long from bow to stern. By 1934 the luxury liner Queen Mary had taken the honor of being the longest and largest ship, beating the Titanic by 136 feet: her length of 1,019 feet was equivalent to more than three football fields laid end to end. It would not be until the 1990s that another cruise ship would be built longer than the Queen Mary. Many Royal Caribbean cruise ships today have lengths greater than the Queen Mary's, although, believe it or not, some are only one foot longer. The Oasis of the Seas and the Allure of the Seas, launched in 2010 and 2011 respectively, were considered the largest cruise ships in the world at 1,187 feet, about 304 feet (roughly another football field) longer than the Titanic. Even larger ships have since entered service.
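The length comparisons above can be checked with simple arithmetic. A quick sketch in Python, using only the figures quoted in this article (not authoritative ship data):

```python
# Ship lengths in feet, as quoted in the text above.
lengths_ft = {
    "Lusitania": 790,
    "Titanic": 883,
    "Queen Mary": 1019,
    "Oasis of the Seas": 1187,
}

# How much longer each later ship is than the Titanic.
for name in ("Queen Mary", "Oasis of the Seas"):
    diff = lengths_ft[name] - lengths_ft["Titanic"]
    print(f"{name}: +{diff} ft over the Titanic")

# A football field is 300 ft between the goal lines, so the Oasis's
# extra 304 ft is roughly one more football field.
extra_fields = (lengths_ft["Oasis of the Seas"] - lengths_ft["Titanic"]) / 300
print(f"about {extra_fields:.1f} football fields longer")
```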
- Beam or Width
After the Titanic was built with a beam of 93 feet, the beam of cruise ships stayed relatively unchanged until 2004, when the Queen Mary 2 was launched with a beam of 148 feet, about 55 feet wider than the Titanic's. Before that, the first Queen Mary, launched in 1934, had a beam 32 feet wider than the Titanic's. Currently the beam of the Oasis of the Seas, and of its sister ship Allure of the Seas, is almost double the width of the Titanic. Another way of looking at this is to imagine two Titanics floating side by side as a single ship. That is a big increase in width.
When the Titanic was built it had nine decks, for a total height of 175 feet, equivalent to an eleven-story building. The Oasis of the Seas has 16 decks, with the ship towering at a height of 236 feet, about 20 stories high. Another ship currently sailing the seas, the Disney Dream, has 14 decks for passengers and crew.
To move 2,500 passengers about from one deck to another, four elevators were used aboard the Titanic. Three elevators were used for the first class passengers and one for the second class passengers. The Oasis of the Seas has a total of 24 elevators aboard to move more than 6,000 passengers from one deck to another.
With all these decks came ample space for amenities such as pools, gymnasiums, spas, dining areas, and theaters. When the Titanic was first designed there was only one pool on the ship. The Oasis of the Seas has 21 pools and jacuzzis for passengers to cool down in. One of the main, and unique, features of the Oasis of the Seas is its living park of more than 12,000 living plants and trees, some as tall as 24 feet. However, there were real palm trees aboard the Titanic in the Veranda Cafe, making it the first ship to have real trees on its deck.
- Gross Tonnage
Gross tonnage is a measure of the overall internal volume of a ship, taken from keel to funnel, from stern to bow, and out to the outside of the hull. It is a unitless figure used to set port fees, safety rules, and the like. Today's cruise ships obviously have larger internal volume than the Titanic since they are much larger overall. With a gross tonnage of 225,282, the Oasis of the Seas is about five times the Titanic's mere 46,000. Even the Disney Dream is almost three times the Titanic's internal volume.
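For readers curious how a unitless figure comes out of a volume: under the modern ITC 1969 convention, gross tonnage is computed from the total enclosed volume V in cubic metres as GT = K·V, with K = 0.2 + 0.02·log10(V). (The Titanic's 46,000 figure was measured under the older gross register tonnage rules, so this illustrates the modern formula rather than how the Titanic itself was rated.) A sketch in Python:

```python
import math

def gross_tonnage(volume_m3: float) -> float:
    """ITC 1969 gross tonnage: GT = K * V, where K = 0.2 + 0.02 * log10(V)."""
    k = 0.2 + 0.02 * math.log10(volume_m3)
    return k * volume_m3

# The ratio of the quoted figures backs up the "five times larger" claim.
titanic_gt = 46_000      # quoted above; often given more precisely as 46,328
oasis_gt = 225_282
print(f"Oasis of the Seas / Titanic: {oasis_gt / titanic_gt:.1f}x")  # → 4.9x

# Example: a hull enclosing 100,000 cubic metres
print(round(gross_tonnage(100_000)))  # → 30000
```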
Everyone who has been on a cruise knows that speed is not a priority: cruise ships are not built to move through the water very fast. That is what cruising is all about, moving along slowly from one port to another in days instead of hours. The Titanic was designed by Lord Pirrie and Ismay with luxury and comfort in mind rather than speed. As a result, the Titanic's maximum speed was limited by design to about 22 knots, at a time when competing ship designers wanted to break the speed record for crossing the Atlantic.
Today's cruise ships are still designed for much the same classic reasons as the Titanic, and they cruise at roughly the same maximum speed established in 1912. In 1934 the Queen Mary had a top speed of 29 knots, and in 2004 the Queen Mary 2 had a maximum speed of 30 knots. But most cruise ships still cruise at around 22 knots for safety reasons and to minimize fuel consumption; even the Oasis of the Seas cruises at around 23 knots despite its power and size. As stated before, the cruising industry is not about speed. It is about luxury and comfort, which is what Pirrie and Ismay started 100 years ago. Ironically, the officers aboard the Titanic disregarded the very principle this magnificent ship was built on: luxury and comfort, not speed. That was one of a series of events leading to the tragic loss of more than 1,500 lives at sea after the ship struck an iceberg on the fateful, cold night of April 14, 1912.
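For readers more familiar with land units, the knot figures above convert easily (1 knot = 1 nautical mile per hour = 1.852 km/h exactly, about 1.15 statute mph). A small Python sketch:

```python
def knots_to_mph(kn: float) -> float:
    return kn * 1.15078   # 1 knot ≈ 1.15078 statute miles per hour

def knots_to_kmh(kn: float) -> float:
    return kn * 1.852     # exact, by definition of the nautical mile

# Speeds quoted in the text above.
for name, kn in [("Titanic", 22), ("Queen Mary", 29), ("Queen Mary 2", 30)]:
    print(f"{name}: {kn} kn = {knots_to_mph(kn):.1f} mph = {knots_to_kmh(kn):.1f} km/h")
```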
Summary of Titanic Facts
When was the Titanic Built?
- Construction started March 31, 1909.
Where was the Titanic built?
- It was built at Belfast, United Kingdom.
When did the Titanic sink?
- The night of April 14, 1912 into the early morning hours April 15, 1912.
How long did it take the Titanic to sink?
- It took 2 hours and 40 minutes to sink after the ship struck the iceberg at 11:40 PM.
Where did the Titanic sink?
- It sank in the North Atlantic Ocean.
What is the exact location of the Titanic wreck on the ocean floor?
- The location is 41° 43.5′ N, 49° 56.8′ W, about 370 miles south-southeast of Newfoundland.
Number of passengers on the Titanic?
- There were 2,229 passengers on board the ship.
Number of survivors rescued after the Titanic sinking?
- There were 713 survivors after the sinking.
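The wreck position given above uses degrees and decimal minutes; converting it to the decimal degrees used by most mapping software is straightforward. A sketch in Python:

```python
def dm_to_decimal(degrees: int, minutes: float, hemisphere: str) -> float:
    """Convert degrees + decimal minutes to signed decimal degrees.

    Southern and western hemispheres are negative by convention.
    """
    sign = -1 if hemisphere in ("S", "W") else 1
    return sign * (degrees + minutes / 60)

# Wreck position quoted above: 41 degrees 43.5 minutes N, 49 degrees 56.8 minutes W
lat = dm_to_decimal(41, 43.5, "N")
lon = dm_to_decimal(49, 56.8, "W")
print(f"{lat:.4f}, {lon:.4f}")  # → 41.7250, -49.9467
```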
The Titanic's Environmental Impact
- A Geological Study of The Titanic Shipwreck Site
An examination of the Titanic shipwreck site from a geological perspective: what has been happening in the area around the wreck since it hit the North Atlantic ocean floor more than 100 years ago.
© 2012 Melvin Porter
The Special Relationship is an unofficial term for the political, diplomatic, cultural, economic, military, and historical relations between the United Kingdom and the United States. It was used in a 1946 speech by Winston Churchill. The two nations have been close allies in many conflicts in the 20th and 21st centuries, including World War I, World War II, the Korean War, the Cold War, the Gulf War, and the War on Terror. Although the UK and US have close relationships with many other nations, the level of cooperation between them in economic activity, trade and commerce, military planning, execution of military operations, nuclear weapons technology, and intelligence sharing has been described as "unparalleled" among major powers.
The existence of a "special relationship" has sometimes been described as a myth. US president Barack Obama considered Germany to be his "closest international partner" and said the UK would be at the "back of the queue" in any trade deal with the US if it left the European Union.
Although the special relationship between the UK and the US was emphasized by Churchill, its existence had been recognized since the 19th century, not least by rival powers.
Relations in the mid-19th century were often strained, and even verged on war when Britain came close to supporting the Confederacy in the early part of the American Civil War. British leaders were constantly annoyed from the 1840s to the 1860s by what they saw as Washington's pandering to the democratic mob, as in the Oregon boundary dispute of 1844–46. However, British middle-class public opinion sensed a common "special relationship" between the two peoples, based on language, migration, evangelical Protestantism, liberal traditions, and extensive trade. This constituency rejected war, forcing London to appease the Americans. During the Trent Affair of late 1861, London drew the line and Washington retreated.
Prime Minister Ramsay MacDonald's visit to the US in 1930 confirmed his own belief in the "special relationship", and for this reason he looked to the Washington Treaty rather than a revival of the Anglo-Japanese alliance as the guarantee of peace in the Far East. However, as David Reynolds observes: "For most of the period since 1919, Anglo-American relations had been cool and often suspicious. America's 'betrayal' of the League of Nations was only the first in a series of US actions—over war debts, naval rivalry, the 1931–2 Manchurian crisis and the Depression—that convinced British leaders that the United States could not be relied on". Equally, as President Truman's secretary of state, Dean Acheson, recalled: "Of course a unique relation existed between Britain and America—our common language and history ensured that. But unique did not mean affectionate. We had fought England as an enemy as often as we had fought by her side as an ally".
Booknotes interview with Jon Meacham on Franklin and Winston: An Intimate Portrait of an Epic Friendship, 15 February 2004, C-SPAN
The fall of France in 1940 has been described as a decisive event in international relations, leading the special relationship to displace the Entente Cordiale as the pivot of the international system. During World War II, one observer noted that "Great Britain and the United States integrated their military efforts to a degree unprecedented among major allies in the history of warfare". "Each time I must choose between you and Roosevelt", Churchill shouted at General Charles de Gaulle, leader of the Free French, in 1945, "I shall choose Roosevelt". Between 1939 and 1945 Churchill and Roosevelt exchanged 1,700 letters and telegrams and met 11 times; Churchill estimated that they had 120 days of close personal contact. During one meeting, Roosevelt was wheeled to Churchill's room while Churchill was taking a shower. The embarrassed Roosevelt wanted to leave, but Churchill urged him to remain, saying that the British Prime Minister had nothing to conceal from the President of the United States.
Churchill's mother was a US citizen and he keenly felt the links between the English-speaking peoples. He first used the term "special relationship" on 16 February 1944, when he said it was his "deepest conviction that unless Britain and the United States are joined in a special relationship… another destructive war will come to pass". He used it again in 1945 to describe not the Anglo-American relationship alone, but the UK's relationship with both the US and Canada. The New York Times Herald quoted Churchill in November 1945:
We should not abandon our special relationship with the United States and Canada about the atomic bomb and we should aid the United States to guard this weapon as a sacred trust for the maintenance of peace.
Churchill used the phrase again a year later, at the onset of the Cold War, this time to note the special relationship between the US on the one hand, and the English-speaking nations of the British Commonwealth and Empire under the leadership of the UK on the other. The occasion was his 'Sinews of Peace Address' in Fulton, Missouri, on 5 March 1946:
Neither the sure prevention of war, nor the continuous rise of world organization will be gained without what I have called the fraternal association of the English-speaking peoples ...a special relationship between the British Commonwealth and Empire and the United States. Fraternal association requires not only the growing friendship and mutual understanding between our two vast but kindred systems of society, but the continuance of the intimate relationship between our military advisers, leading to common study of potential dangers, the similarity of weapons and manuals of instructions, and to the interchange of officers and cadets at technical colleges. It should carry with it the continuance of the present facilities for mutual security by the joint use of all Naval and Air Force bases in the possession of either country all over the world.
There is however an important question we must ask ourselves. Would a special relationship between the United States and the British Commonwealth be inconsistent with our over-riding loyalties to the World Organisation? I reply that, on the contrary, it is probably the only means by which that organisation will achieve its full stature and strength.
In the opinion of one international relations specialist: "the United Kingdom's success in obtaining US commitment to cooperation in the postwar world was a major triumph, given the isolation of the interwar period". A senior British diplomat in Moscow, Thomas Brimelow, admitted: "The one quality which most disquiets the Soviet government is the ability which they attribute to us to get others to do our fighting for us ... they respect not us, but our ability to collect friends". Conversely, "the success or failure of United States foreign economic peace aims depended almost entirely on its ability to win or extract the co-operation of Great Britain". Reflecting on the symbiosis, prime minister Margaret Thatcher in 1982 declared: "The Anglo-American relationship has done more for the defence and future of freedom than any other alliance in the world".
While most government officials on both sides have supported the special relationship, there have been sharp critics. The British journalist Guy Arnold (b. 1932) in 2014 denounced it as a "sickness in the body politic of Britain that needs to be flushed out". Arnold instead called for closer relationships with Europe and Russia, so that Britain could rid "itself of the US incubus".
The intense level of military co-operation between the UK and US began with the creation of the Combined Chiefs of Staff in December 1941, a military command with authority over all US and British operations. Following the end of the Second World War the joint command structure was disbanded, but close military cooperation between the nations resumed in the early 1950s with the start of the Cold War.
Since the Second World War and the subsequent Berlin Blockade, the US has maintained substantial forces in Great Britain. In July 1948, the first American deployment began with the stationing of B-29 bombers. Currently, an important base is the radar facility RAF Fylingdales, part of the US Ballistic Missile Early Warning System, although this base is operated under British command and has only one USAF representative for largely administrative reasons. Several bases with a significant US presence include RAF Menwith Hill (only a short distance from RAF Fylingdales), RAF Lakenheath and RAF Mildenhall.
Following the end of the Cold War, which was the main rationale for their presence, the number of US facilities in the UK has been reduced, in line with US military cutbacks worldwide. Despite this, these bases have been used extensively in support of various peacekeeping and offensive operations of the 1990s and early 21st century.
The two nations also jointly operate on the British military facilities of Diego Garcia in the British Indian Ocean Territory and on Ascension Island, a dependency of Saint Helena in the Atlantic Ocean.
Nuclear weapons development
The Quebec Agreement of 1943 paved the way for the two countries to develop atomic weapons side by side, the UK handing over vital documents from its own Tube Alloys project and sending a delegation to assist in the work of the Manhattan Project. The US later kept the results of the work to itself under the postwar McMahon Act, but after the UK developed its own thermonuclear weapons, the US agreed to supply delivery systems, designs and nuclear material for British warheads through the 1958 US-UK Mutual Defence Agreement.
The UK purchased first Polaris and then the US Trident system which remains in use today. The 1958 agreement gave the UK access to the facilities at the Nevada Test Site, and from 1963 it conducted a total of 21 underground tests there before the cessation of testing in 1991. The agreement under which this partnership operates was updated in 2004; anti-nuclear activists claimed renewal may breach the 1968 Nuclear Non-Proliferation Treaty. The US and the UK jointly conducted subcritical nuclear experiments in 2002 and 2006, to determine the effectiveness of existing stocks, as permitted under the 1998 Comprehensive Nuclear-Test-Ban Treaty.
The Reagan administration offered Britain the opportunity to purchase the F-117 Nighthawk stealth aircraft while it was still a black program. The UK is the only collaborative, or Level One, international partner in the largest US aircraft procurement project in history, the F-35 Lightning II program. The UK was involved in writing the specification and in contractor selection, and its largest defense contractor, BAE Systems, is a partner of the American prime contractor Lockheed Martin. BAE Systems is also the largest foreign supplier to the US Defense Department and has been permitted to buy important US defense companies such as Lockheed Martin Aerospace Electronic Systems and United Defense.
The US operates several British designs including Chobham Armour, the RAF Harrier GR9 or United States Marine Corps AV-8B Harrier II and the US Navy T-45 Goshawk. The UK also operates several American designs, including the Javelin anti-tank missile, M270 rocket artillery, the Apache gunship, C-130 Hercules and C-17 Globemaster transport aircraft.
Other areas of cooperation
A cornerstone of the special relationship is the collecting and sharing of intelligence. This originated during World War II with the sharing of code-breaking knowledge and led to the 1943 BRUSA Agreement, signed at Bletchley Park. After World War II the common goal of monitoring and countering the threat of communism prompted the UK-USA Security Agreement of 1948. This agreement brought together the SIGINT organizations of the US, UK, Canada, Australia, and New Zealand and is still in place today (see: Five Eyes). The head of the CIA station in London attends each weekly meeting of the British Joint Intelligence Committee.
One present-day example of such cooperation is the UKUSA Community, comprising the US National Security Agency, the UK Government Communications Headquarters, Australia's Defence Signals Directorate and Canada's Communications Security Establishment collaborating on ECHELON, a global intelligence gathering system. Under classified bilateral accords, UKUSA members do not spy on each other.
Following the discovery of the 2006 transatlantic aircraft plot, the CIA began to assist the Security Service (MI5) by running its own agent networks in the British Pakistani community. Security sources estimate 40 per cent of CIA activity to prevent a terrorist attack in the US involves operations inside the UK. One intelligence official commented on the threat against the US from British Islamists: "The fear is that something like this would not just kill people but cause a historic rift between the US and the UK".
The US is the largest source of foreign direct investment to the UK; likewise the UK is the largest single foreign direct investor in the US. British trade and capital have been important components of the American economy since its colonial inception. In trade and finance, the special relationship has been described as 'well-balanced', with London's 'light-touch' regulation in recent years attracting a massive outflow of capital from New York. The key sectors for British exporters to the US are aviation, aerospace, commercial property, chemicals and pharmaceuticals, and heavy machinery.
British ideas, classical and modern, have also exerted a profound influence on US economic policy, most notably the economist Adam Smith on free trade and John Maynard Keynes on counter-cyclical spending, while the British government has adopted workfare reforms from the US. US and British investors share entrepreneurial attitudes towards the housing market, and the fashion and music industries of each country are major influences on their counterparts. Trade ties have been strengthened by globalisation, while both governments agree on the need for currency reform in China and educational reform at home to increase their competitiveness against India's developing service industries. In 2007 the US ambassador suggested to British business leaders that the special relationship could be used 'to promote world trade and limit environmental damage as well as combating terrorism'.
In a press conference that made several references to the special relationship, US Secretary of State John Kerry, in London with UK Foreign Secretary William Hague on 9 September 2013, said
"We are not only each other’s largest investors in each of our countries, one to the other, but the fact is that every day almost one million people go to work in the United States for British companies that are in the United States, just as more than one million people go to work here in Great Britain for U.S. companies that are here. So we are enormously tied together, obviously. And we are committed to making both the U.S.-UK and the U.S.-EU relationships even stronger drivers of our prosperity."
The relationship has often depended on the personal relations between British prime ministers and US presidents. The first example was the close relationship between Winston Churchill and Franklin Roosevelt, who were in fact distantly related.
Prior to their collaboration during World War II, Anglo-American relations had been somewhat frosty. President Woodrow Wilson and Prime Minister David Lloyd George, who met in Paris, had been the only previous pair of leaders to meet face-to-face, but had enjoyed nothing that could be described as a special relationship, although Lloyd George's wartime Foreign Secretary, Arthur Balfour, got on well with Wilson during his time in the United States and helped convince the previously skeptical president to enter the war.
Churchill spent much time and effort cultivating the relationship, which paid dividends for the war effort. Two great architects of the special relationship on a practical level were Field Marshal Sir John Dill and General George Marshall, whose excellent personal relations and senior positions (Roosevelt was especially close to Marshall) oiled the wheels of the alliance considerably.
Major links were created during the war, such as the Combined Chiefs of Staff. Britain, which had started off in 1941 as somewhat the senior partner, found herself the junior. The diplomatic policy was thus two-pronged, encompassing strong personal support and equally forthright military and political aid. These two have always operated in tandem; that is to say, the best personal relationships between British prime ministers and American presidents have always been those based around shared goals. For example, Harold Wilson's government would not commit troops to Vietnam, and Wilson and Lyndon Johnson did not get on especially well.
Peaks in the special relationship include the bonds between Harold Macmillan (who like Churchill had an American mother) and John F. Kennedy, between James Callaghan and Jimmy Carter (who were close personal friends despite their differences in personality), between Margaret Thatcher and Ronald Reagan, and more recently between Tony Blair and both Bill Clinton and George W. Bush. Nadirs have included Dwight D. Eisenhower's opposition to UK operations in Suez under Anthony Eden and Harold Wilson's refusal to enter the war in Vietnam.
Churchill and Roosevelt (May 1940–April 1945)
When Winston Churchill entered the office of Prime Minister, Great Britain had already entered World War II. The Dunkirk evacuation took place at the very start of his premiership.
Before Churchill's premiership, President Roosevelt had secretly been in frequent correspondence with him. Their correspondence had begun in September 1939, at the very start of World War II. In these private communications, the two had been discussing ways in which the United States might support Britain in its war effort. However, at the time Winston Churchill assumed the office of Prime Minister, Roosevelt was nearing the end of his second term and considering seeking election to an unprecedented third term (he would make no public pronouncements about this until the Democratic National Convention that year). From the United States' experience during the First World War, Roosevelt judged that involvement in the Second World War was likely inevitable. This was a key reason for Roosevelt's decision to break from tradition and seek a third term: he wanted to be President when the United States was finally drawn into the conflict. However, in order to win a third term, Roosevelt promised the American people that he would keep them out of the war.
In November 1940, upon Roosevelt's victory in the presidential election, Churchill sent him a congratulatory letter,
"I prayed for your success…we are entering a somber phase of what must inevitably be a protracted and broadening war."
Having promised the American public to avoid entering any foreign war, Roosevelt went as far as public opinion allowed in providing financial and military aid to Britain, France and China. In a December 1940 talk dubbed the "Arsenal of Democracy" speech, Roosevelt declared, "This is not a fireside chat on war. It is a talk about national security". Roosevelt went on to declare the importance of the United States' support of Britain's war effort, framing it as a matter of national security for the United States. As the American public opposed involvement in the conflict, Roosevelt sought to emphasize that it was critical to assist the British in order to prevent the conflict from reaching American shores. He aimed to paint the British war effort as beneficial to the United States by arguing that it would contain the Nazi threat from spreading across the Atlantic.
“If Great Britain goes down, the Axis powers will be in a position to bring enormous military and naval resources against this hemisphere... We are the Arsenal of Democracy. Our national policy is to keep war away from this country.” — Franklin D. Roosevelt, fireside chat delivered on December 29, 1940
The United States ultimately joined the war effort in December 1941, under Roosevelt's leadership.
Roosevelt and Churchill had a fondness for one another. They connected over their shared passions for tobacco and liquor, and their mutual interest in history and battleships. Churchill later wrote, "I felt I was in contact with a very great man, who was also a warm-hearted friend, and the foremost champion of the high causes which we served."
One anecdote that has been told to illustrate the intimacy of Churchill and Roosevelt's bond alleges that once, while hosting Churchill at the White House, Roosevelt stopped by the bedroom in which the Prime Minister was staying to converse with him. Churchill answered his door in a state of nudity, remarking, "You see, Mr. President, I have nothing to hide from you." The president is said to have taken this in good humor, later joking with an aide that Churchill was "pink and white all over."
On Churchill's 60th birthday, Roosevelt wrote him, "It is fun to be in the same decade as you."
Roosevelt died in office in April 1945, shortly into his fourth term.
Churchill and Truman (April–July 1945)
After Roosevelt died, he was succeeded by his vice president Harry Truman. Churchill and Truman developed a strong relationship with one another. While he was saddened by the death of Roosevelt, Churchill was a strong supporter of Truman in his early presidency, calling him, "the type of leader the world needs when it needs him most." At the Potsdam Conference, Truman and Churchill, along with Joseph Stalin, made agreements for settling the boundaries of Europe.
Attlee and Truman (July 1945–October 1951)
The deputy in Churchill's wartime coalition government, Attlee had been in the United States at the time of Roosevelt's death, and thus had met with Truman immediately after he took office. The two came to like one another, but never became particularly close. During their coinciding tenures as heads of government, they met on only three occasions and did not maintain regular correspondence. Their working relationship, nonetheless, remained sturdy.
When Attlee assumed the position of Prime Minister, negotiations had not yet been completed at the Potsdam Conference, which had begun on July 17. Attlee took Churchill's place at the conference once he was named Prime Minister on July 26. Therefore, Attlee's first sixteen days as Prime Minister were spent handling negotiations at the conference.
In his time as Prime Minister, Attlee managed to convince Truman to agree to greater nuclear cooperation.
Churchill and Truman (October 1951–January 1953)
Churchill became Prime Minister again in October 1951.
Churchill had maintained his relationship with Truman during his six-year stint as Leader of the Opposition. During a 1946 trip to the United States, Churchill lost a significant amount of cash in a poker game with Harry Truman and his advisors. In 1947, Churchill had written Truman an unheeded memo recommending that the United States make a pre-emptive atomic bomb strike on Moscow before the Soviet Union could acquire nuclear weapons itself.
Churchill and Eden visited Washington in January 1952. At the time, Truman's administration was supporting plans for a European Defence Community in the hope that it would allow West Germany to rearm, in turn enabling the US to decrease the number of American troops stationed in Germany. Churchill opposed the EDC, feeling that it could not work. He also asked, unsuccessfully, for the United States to commit its forces to supporting Britain in Egypt and the Middle East; this had no appeal for Truman. Truman expected the British to assist the Americans in their fight against communist forces in Korea, but felt that supporting the British in the Middle East would be abetting their imperialist efforts, which would do nothing to thwart communism.
Truman opted not to seek reelection in 1952, and his presidency ended in January 1953.
Churchill and Eisenhower (January 1953–April 1955)
Eisenhower and Churchill were familiar with one another, as both had been leaders of the Allied effort during World War II.
Relations were strained by Eisenhower's outrage over Churchill's half-baked attempt to set up a "parley at the summit" with Joseph Stalin.
Eden and Eisenhower (April 1955–January 1957)
Similarly to his predecessor, Eden had worked closely with Eisenhower during World War II.
When Eden took office, Gamal Abdel Nasser was building up Egyptian nationalism and threatened to take control of the vital Suez Canal. In 1956 Eden made a secret agreement with France and Israel to seize control of the canal. Eisenhower had repeatedly warned that the United States would not accept British military intervention. When the invasion came anyway, the United States denounced it at the United Nations and used its financial power to force the British to withdraw completely. Britain lost its prestige and its powerful role in Middle Eastern affairs, to be replaced by the Americans. Eden, in poor health, was forced to retire.
Macmillan and Eisenhower (January 1957–January 1961)
Once he took office, Macmillan worked to undo the strain that the special relationship had incurred in the preceding years.
Macmillan famously quipped that it was Britain’s historical duty to guide the power of the United States as the ancient Greeks had the Romans. He endeavoured to broaden the special relationship beyond Churchill’s conception of an English-Speaking Union into a more inclusive "Atlantic Community". His key theme, 'of the interdependence of the nations of the Free World and the partnership which must be maintained between Europe and the United States', was one that Kennedy subsequently took up.
Macmillan and Kennedy (January 1961–October 1963)
The special relationship was tested perhaps most severely by the Skybolt crisis of 1962, when Kennedy cancelled a joint project without consultation. Skybolt was a nuclear air-to-ground missile that could penetrate Soviet airspace and would extend the life of Britain's deterrent, which consisted only of free-falling hydrogen bombs. London saw cancellation as a reduction in the British nuclear deterrent. The crisis was resolved through a series of compromises that led to the Royal Navy purchasing the American UGM-27 Polaris missile and constructing the Resolution-class submarines to launch them. The debates over Skybolt were top secret, but tensions were exacerbated when Dean Acheson, a former Secretary of State, publicly challenged the special relationship and marginalised the British contribution to the Western alliance. Acheson said:
- Great Britain has lost an empire and has not yet found a role. The attempt to play a separate power role—that is, a role apart from Europe, a role based on a 'Special Relationship' with the United States, a role based on being the head of a 'Commonwealth' which has no political structure, or unity, or strength and enjoys a fragile and precarious economic relationship—this role is about played out.
On learning of Acheson's attack, Macmillan thundered in public:
- In so far as he appeared to denigrate the resolution and will of Britain and the British people, Mr. Acheson has fallen into an error which has been made by quite a lot of people in the course of the last four hundred years, including Philip of Spain, Louis XIV, Napoleon, the Kaiser and Hitler. He also seems to misunderstand the role of the Commonwealth in world affairs. In so far as he referred to Britain's attempt to play a separate power role as about to be played out, this would be acceptable if he had extended this concept to the United States and to every other nation in the Free World. This is the doctrine of interdependence, which must be applied in the world today, if Peace and Prosperity are to be assured. I do not know whether Mr. Acheson would accept the logical sequence of his own argument. I am sure it is fully recognised by the US administration and by the American people.
The looming collapse of the alliance between the two thermonuclear powers forced Kennedy into an about-face at the Anglo-American summit in Nassau, where he agreed to sell Polaris as a replacement for the cancelled Skybolt. Richard E. Neustadt in his official investigation concluded the crisis in the special relationship had erupted because 'the president's "Chiefs" failed to make a proper strategic assessment of Great Britain's intentions and its capabilities'.
The Skybolt crisis with Kennedy came on top of Eisenhower's wrecking of Macmillan's policy of détente with the Soviet Union at the May 1960 Paris summit, and the prime minister's resulting disenchantment with the special relationship contributed to his decision to seek an alternative in British membership of the European Economic Community (EEC). According to a recent analyst: 'What the prime minister in effect adopted was a hedging strategy in which ties with Washington would be maintained while at the same time a new power base in Europe was sought.' Even so, Kennedy assured Macmillan 'that relations between the United States and the UK would be strengthened not weakened, if the UK moved towards membership.'
Douglas-Home and Kennedy (October–November 1963)
Alec Douglas-Home only entered the race to replace the resigning Macmillan as Leader of the Conservative Party after learning from the British ambassador to the United States that the Kennedy administration was uneasy at the prospect of Hailsham being Prime Minister. Douglas-Home, however, would only serve as Prime Minister for a little over a month before Kennedy was assassinated.
In England, Kennedy's assassination in November 1963 caused a profound shock and sadness expressed by many politicians, religious leaders, and luminaries of literature and the arts. The Archbishop of Canterbury led a memorial service at St Paul’s Cathedral. Sir Laurence Olivier at the end of his next performance called for a moment of silence, followed by a playing of “The Star Spangled Banner.” Prime Minister Douglas-Home led parliamentary tributes to Kennedy, whom he called, “the most loyal and faithful of allies.” Douglas-Home was visibly upset during his remarks, as he was truly saddened by Kennedy's death. He had liked Kennedy, and had begun to establish a positive working relationship with him.
After his assassination, the British government sought approval to build a memorial to President Kennedy, in part to demonstrate the strength of the special relationship. However the weak popular response to its ambitious fund-raising campaign was a surprise, and suggested a grassroots opposition to the late president, his policies and the United States.
Douglas-Home and Johnson (November 1963–October 1964)
Douglas-Home had a far tenser relationship with Kennedy's successor, Lyndon B. Johnson, and failed to develop a good rapport with him. Their governments had a serious disagreement on the question of British trade with Cuba.
Douglas-Home's Conservative Party lost the 1964 general election, and with it he lost the premiership. He had served as Prime Minister for only 363 days, the UK's second-shortest premiership of the twentieth century. Despite its unusual brevity (and due to the assassination of Kennedy), Douglas-Home's tenure overlapped with two US presidencies.
Wilson and Johnson (October 1964–January 1969)
Prime Minister Harold Wilson recast the alliance as a 'close relationship', but neither he nor President Lyndon B. Johnson had any direct experience of foreign policy, and Wilson's attempt to mediate in Vietnam, where the United Kingdom was co-chairman with the Soviet Union of the Geneva Conference, was unwelcome to the president. 'I won't tell you how to run Malaysia and you don’t tell us how to run Vietnam,' Johnson snapped in 1965. However relations were sustained by US recognition that Wilson was being criticised at home by his neutralist Labour left for not condemning American involvement in the war.
When US Defense Secretary Robert McNamara asked Britain to send troops to Vietnam as 'the unwritten terms of the Special Relationship', Wilson agreed to help in many ways but refused to commit regular forces, offering only special forces instructors. Australia and New Zealand did commit regular forces to Vietnam.
The Johnson administration’s support for IMF loans delayed devaluation of sterling until 1967. The United Kingdom's subsequent withdrawal from the Persian Gulf and East Asia surprised Washington, where it was strongly opposed because British forces were valued for their contribution. In retrospect Wilson's moves to scale back Britain's global commitments and correct its balance of payments contrasted with Johnson's overexertions which accelerated the United States' relative economic and military decline.
Wilson and Nixon (January 1969–June 1970)
In a speech delivered on January 27, 1970, at a State Dinner welcoming the Prime Minister on his visit to the United States, Nixon said,
Mr. Prime Minister, I am delighted to welcome you here today as an old friend; as an old friend not only in government, but as an old friend personally. I noted from reading the background, that this is your 21st visit to the United States, and your seventh visit as Prime Minister of your government.
And I noted, too, in looking at the relationship that we have had since I assumed office a year ago, that we met twice in London, once in February, again in August; that we have had a great deal of correspondence; we have talked several times on the telephone. But what is even more important is the substance of those conversations. The substance did not involve differences between your country and ours. The substance of those conversations was with regard to the great issues in which we have a common interest and a common purpose, the development of peace in the world, progress for your people, for our people, for all people. This is the way it should be. This is the way we both want it. And it is an indication of the way to the future.
Winston Churchill once said on one of his visits to this country that, if we are together, nothing is impossible. Perhaps in saying that nothing is impossible, that was an exaggeration. But it can be said today--we are together, and being together, a great deal is possible. And I am sure that our talks will make some of those things possible.
Heath and Nixon (June 1970–March 1974)
A Europeanist, Prime Minister Edward Heath preferred to speak of a '"natural relationship", based on shared culture and heritage', and stressed that the special relationship was 'not part of his own vocabulary'.
The Heath-Nixon era was dominated by the United Kingdom's 1973 entry into the European Economic Community (EEC). Although the two leaders' 1971 Bermuda communiqué restated that entry served the interests of the Atlantic Alliance, American observers voiced concern that the British government's membership would impair its role as an honest broker, and that, because of the European goal of political union, the special relationship would only survive if it included the whole Community.
Critics accused President Nixon of impeding the EEC's inclusion in the special relationship by his economic policy, which dismantled the postwar international monetary system and sought to force open European markets for US exports. Detractors also slated the personal relationship at the top as 'decidedly less than special'; Prime Minister Edward Heath, it was alleged, 'hardly dared put through a phone call to Richard Nixon for fear of offending his new Common Market partners.'
The special relationship was 'soured' during the Arab–Israeli War of 1973 when Nixon failed to inform Heath that US forces had been put on DEFCON 3 in a worldwide standoff with the Soviet Union, and US Secretary of State Henry Kissinger misled the British ambassador over the nuclear alert. Heath, who learned about the alert only from press reports hours later, confessed: 'I have found considerable alarm as to what use the Americans would have been able to make of their forces here without in any way consulting us or considering the British interests.' The incident marked 'a low ebb' in the special relationship.
Wilson and Nixon (March 1974–August 1974)
Wilson held Nixon in high regard. After leaving office himself, Wilson praised Nixon as America's "most able" president.
Wilson and Ford (August 1974–April 1976)
In a toast to Wilson at a January 1975 State Dinner, Gerald Ford remarked,
It gives me a very great deal of pleasure to welcome you again to the United States. You are no stranger, of course, to this city and to this house. Your visits here over the years as a staunch ally and a steadfast friend are continuing evidence of the excellence of the ties between our countries and our people.
You, Mr. Prime Minister, are the honored leader of one of America's truest allies and oldest friends. Any student of American history and American culture knows how significant is our common heritage. We have actually continued to share a wonderful common history.
Americans can never forget how the very roots of our democratic political system and of our concepts of liberty and government are to be found in Britain.
Over the years, Britain and the United States have stood together as trusting friends and allies to defend the cause of freedom on a worldwide basis. Today, the North Atlantic Alliance remains the cornerstone of our common defense.
Callaghan and Ford (April 1976–January 1977)
While President Ford never visited the United Kingdom during his presidency, the British government saw the US bicentennial in 1976 as an occasion to celebrate the special relationship. Political leaders and guests from both sides of the Atlantic gathered in May at Westminster Hall to mark the American Declaration of Independence of 1776. Prime Minister Callaghan presented a visiting Congressional delegation with a gold-embossed reproduction of Magna Carta, symbolising the common heritage of the two nations. British historian Esmond Wright noted 'a vast amount of popular identification with the American story'. A year of cultural exchanges and exhibitions culminated in July in a state visit to the United States by The Queen.
Ford lost the 1976 election, and his presidency ended in January 1977.
Callaghan and Carter (January 1977–May 1979)
After defeating the incumbent Gerald Ford in the 1976 election, Jimmy Carter was sworn-in as President of the United States in January 1977. Ties between Callaghan and Carter were cordial but, with both left-of-centre governments being preoccupied with economic malaise, diplomatic contacts remained low key. US officials characterised relations in 1978 as 'extremely good', with the main disagreement being over trans-Atlantic air routes.
The economic malaise that Callaghan was facing at home developed into the "Winter of Discontent", which ultimately led to Callaghan's Labour Party losing the May 1979 general election, thus ending his tenure as Prime Minister.
Thatcher and Carter (May 1979–January 1981)
Conservative Party leader Margaret Thatcher became Prime Minister after her party won the general election in 1979. Relations between President Carter and Prime Minister Thatcher during the year-and-a-half overlap of their leadership have often been seen as relatively cold, especially when contrasted with the kinship that Thatcher would subsequently develop with Carter's successor, Ronald Reagan. However, Carter's relationship with Thatcher never reached the levels of strain that Reagan's would during the Falklands War.
Thatcher and Carter had clear differences in political ideology, occupying opposing ends of the political spectrum. By the time she became Prime Minister, Thatcher had already met Carter on two occasions, both of which had left Carter with a negative impression of her. However, his opinion of Thatcher had reportedly softened by the time she was elected Prime Minister.
Despite the tensions between the two, historian Chris Collins (of the Margaret Thatcher Foundation) has stated, “Carter is somebody she worked hard to get along with. She had considerable success at it. Had Carter lasted two terms we might be writing about the surprising amount of common ground between the two.”
Carter congratulated Thatcher in a phone call after her party's victory in the general election (which elevated her to the office of Prime Minister), stating that the United States would "look forward to working with you on an official basis." However, his congratulations were delivered in an audibly unenthusiastic tone. In her first full letter to Carter, Thatcher voiced her assurance of full support for the ratification of the SALT II nuclear arms treaty, writing, "We will do all we can to assist you".
Shortly after her election, following her first meeting with Israeli Prime Minister Menachem Begin (which she would describe as "profoundly disheartening"), Thatcher expressed her concerns to Carter about the issue of Israeli settlements, stating, "I emphasised to Mr Begin the danger which continued expansion of Israeli settlements represents to the autonomy negotiations… but he will not listen and even resents the subject of settlements being raised at all."
Both leaders faced great pressures during the overlap of their tenures as national leaders. Both of their nations were experiencing economic crisis due to the early 1980s recession. In addition, there was international upheaval in Eastern Europe and the Middle East. Among the areas of turmoil were Afghanistan (due to the Soviet–Afghan War) and Iran (where Carter was facing a hostage crisis following the Iranian Revolution).
Both Carter and Thatcher condemned the Soviet invasion of Afghanistan, and each expressed concern to the other that other European nations were being too soft towards the Russians. Carter hoped that she could persuade other European nations to condemn the invasion. However, with a particularly tumultuous economic situation at home, and with most NATO members reluctant to cut trade ties with the USSR, Thatcher provided only weak support for Carter’s efforts to punish the USSR through economic sanctions.
Thatcher was concerned that Carter was naive about Soviet relations. Nevertheless, Thatcher played a (perhaps pivotal) role in fulfilling Carter's desire for the U.N. adoption of a resolution demanding the withdrawal of Soviet troops from Afghanistan. Thatcher also encouraged British athletes to participate in the boycott of the 1980 Summer Olympics in Moscow, which Carter initiated in response to the invasion. However, Thatcher ultimately gave the country’s Olympic Committee and individual athletes the choice to decide whether or not they would boycott the games. The United Kingdom ended up participating in the 1980 games, albeit with a smaller delegation because some individual athletes chose to join the boycott.
In their correspondence, Thatcher expressed sympathy for Carter’s troubled efforts to resolve the hostage crisis in Iran. However, she outright refused his request to reduce the presence of the British embassy in Iran.
Thatcher praised Carter's handling of the US economy, sending him a letter endorsing his measures against economic inflation and his cuts to gas consumption during the 1979 energy crisis as, “painful but necessary”.
In October 1979 Thatcher wrote Carter, "I share your concern about Cuban and Soviet intentions in the Caribbean. This danger exists more widely in the developing world. It is essential that the Soviet Union should recognise your resolve in this matter. […] I am therefore especially encouraged by your statement that you are accelerating efforts to increase the capability of the United States to use its military forces world wide."
Also in October 1979, there was a dispute over Thatcher’s government's provision of funding for the BBC’s external services. In desperation, the BBC contacted United States Ambassador Kingman Brewster Jr. to request that the US government endorse them in their fight against spending cuts. National Security Advisor Zbigniew Brzezinski discussed this request with the State Department, and even drafted a letter for Carter to send Thatcher. However, Brzezinski ultimately decided against advising Carter to involve himself in the BBC’s efforts to lobby against budget cuts.
During her December 1979 visit to the United States, Thatcher chastised Carter for not permitting the sale of arms to equip the Royal Ulster Constabulary. During this visit, she delivered a speech in which a lack of warmth towards Carter was evident.
While Thatcher likely favoured her ideological counterpart Ronald Reagan to win the 1980 election (in which he defeated Carter), she was careful not to voice any such preference, not even in private.
Thatcher and Reagan (January 1981–January 1989)
The personal friendship between President Ronald Reagan and Prime Minister Margaret Thatcher united them as 'ideological soul-mates'. They shared a commitment to the philosophy of the free market, low taxes, limited government, and a strong defence; they rejected détente and were determined to win the Cold War with the Soviet Union. They disagreed on internal social policies such as the AIDS epidemic and abortion. Thatcher summed up her understanding of the special relationship at her first meeting with Reagan as president in 1981: "Your problems will be our problems and when you look for friends we shall be there." Celebrating the 200th anniversary of diplomatic relations in 1985, she enthused: ‘There is a union of mind and purpose between our peoples which is remarkable and which makes our relationship a truly remarkable one. It is special. It just is, and that’s that.’ The president acknowledged:
‘The United States and the United Kingdom are bound together by inseparable ties of ancient history and present friendship ... There's been something very special about the friendships between the leaders of our two countries. And may I say to my friend the Prime Minister, I'd like to add two more names to this list of affection—Thatcher and Reagan.’
In 1982 Thatcher and Reagan reached an agreement to replace the British Polaris fleet with a force equipped with US-supplied Trident missiles. The confidence between the two principals was momentarily strained by Reagan's belated support in the Falklands War, but this was more than countered by the Anglophile American Defense Secretary, Caspar Weinberger, who provided strong support in intelligence and munitions.
In 1986 Washington asked permission to use British airbases in order to bomb Libya in retaliation for Libyan terrorist attacks. The British cabinet was opposed, and British public opinion was highly negative. Thatcher herself was worried it would lead to widespread attacks on British interests in the Middle East. That did not happen, and instead Libyan terrorism fell off sharply. Furthermore, Britain won widespread praise in the United States at a time when Spain and France had vetoed American requests to fly over their territories.
In 1986 the British defence secretary Michael Heseltine, a prominent critic of the special relationship and a supporter of European integration, resigned over his concern that a takeover of Britain's last helicopter manufacturer by a US firm would harm the British defence industry. Thatcher herself also saw a potential risk to Britain's deterrent and security posed by the Strategic Defense Initiative. She was alarmed at Reagan's proposal at the Reykjavík Summit to eliminate nuclear weapons, but was relieved when the proposal failed.
All in all, Britain's needs figured more prominently in American strategic thinking than those of any other ally. Peter Hennessy, a leading historian, singles out the personal dynamic of 'Ron' and 'Margaret' in this success:
At crucial moments in the late 1980s, her influence was considerable in shifting perceptions in President Reagan's Washington about the credibility of Mr Gorbachev when he repeatedly asserted his intention to end the Cold War. That mercurial, much-discussed phenomenon, 'the special relationship,' enjoyed an extraordinary revival during the 1980s, with 'slips' like the US invasion of Grenada in 1983 apart, the Thatcher-Reagan partnership outstripping all but the prototype Roosevelt-Churchill duo in its warmth and importance. ('Isn't she marvellous?' he would purr to his aides even while she berated him down the 'hot line.')
Thatcher and George H. W. Bush (January 1989–November 1990)
In his personal diary, George H. W. Bush wrote that his first impression of Thatcher was that she was principled but very difficult. Bush also wrote that Thatcher, "talks all the time when you're in a conversation. It's a one-way street."
Despite having developed a warm relationship with Reagan, under whom Bush had served as vice president, Thatcher never developed a similar sense of camaraderie with Bush. By the time Bush took office in 1989, Thatcher was politically under siege, both from the political opposition and from forces within her own party.
Bush was anxious to manage the collapse of communist regimes in Eastern Europe in a manner that would produce order and stability. Bush therefore used a 1989 trip to Brussels to demonstrate the heightened attention that his administration planned to allocate towards US-German relations. Thus, rather than giving Thatcher the precedence which Prime Ministers of the United Kingdom were accustomed to receiving from US Presidents, he met with the president of the European Commission first, leaving Thatcher, "cooling her heels". This irritated Thatcher.
In 1989, after Bush proposed a reduction in US troops stationed in Europe, Thatcher lectured Bush on the importance of freedom. Bush came out of this encounter asking, "Why does she have any doubt that we feel this way on this issue?"
Thatcher lost her premiership in November 1990. However, to Bush's displeasure, she continued attempting to involve herself in diplomacy between the West and the Soviet Union. Bush took particular offence at a speech Thatcher gave after leaving office in which she claimed that she and Ronald Reagan were responsible for ending the Cold War. Thatcher gave this speech, which snubbed the contributions that others had made, before an audience that included a number of individuals who had contributed to ending the Cold War, such as Lech Wałęsa and Václav Havel. In reaction to this speech, Helmut Kohl sent Bush a note proclaiming that Thatcher was crazy.
Major and George H. W. Bush (November 1990–January 1993)
As had started becoming apparent in Thatcher's last few years of premiership, the special relationship had begun to wane for a time with the passing of the Cold War, despite intensive co-operation in the Gulf War. Thus, while it remained the case that: 'On almost all issues, Britain and the US are on the same side of the table. You cannot say that for other important allies such as France, Germany or Japan', it was also acknowledged: ‘The disappearance of a powerful common threat, the Soviet Union, has allowed narrower disputes to emerge and given them greater weight.’
Major and Clinton (January 1993–May 1997)
Republican administrations had typically worked well with Conservative governments, and the new Democratic President Bill Clinton avowed that he intended to maintain the special relationship. But he and Major did not prove compatible. The nuclear alliance was weakened when Clinton extended a moratorium on tests in the Nevada desert in 1993, and pressed Major to agree to the Comprehensive Nuclear-Test-Ban Treaty. The freeze was described by a British defence minister as 'unfortunate and misguided', as it inhibited validation of the ‘safety, reliability and effectiveness’ of fail-safe mechanisms on upgraded warheads for the British Trident II D5 missiles, and potentially the development of a new deterrent for the 21st century, leading Major to consider a return to Pacific Ocean testing. The Ministry of Defence turned to computer simulation.
A genuine crisis in transatlantic relations blew up over Bosnia. London and Paris resisted relaxation of the UN arms embargo, and discouraged US escalation, arguing that arming the Muslims or bombing the Serbs could worsen the bloodshed and endanger their peacekeepers on the ground. US Secretary of State Warren Christopher's campaign to lift the embargo was rebuffed by Major and President Mitterrand in May 1993. After the so-called 'Copenhagen ambush' in June 1993, where Clinton 'ganged up' with Chancellor Kohl to rally the European Community against the peacekeeping states, Major was said to be contemplating the death of the special relationship. The following month the United States voted at the UN with non-aligned countries against Britain and France over lifting the embargo.
By October 1993, Warren Christopher was bristling that Washington policy makers had been too 'Eurocentric', and declared that Western Europe was 'no longer the dominant area of the world'. The US ambassador to London Raymond G. H. Seitz demurred, insisting it was far too early to put a 'tombstone' over the special relationship. A senior US State Department official described Bosnia in the spring of 1995 as the worst crisis with the British and French since Suez. By the summer US officials were doubting whether NATO had a future.
The nadir had now been reached, and, along with NATO enlargement and the Croatian offensive in 1995 that opened the way for NATO bombing, the strengthening Clinton-Major relationship was later credited as one of three developments that saved the Western alliance. The president acknowledged: 'John Major carried a lot of water for me and for the alliance over Bosnia. I know he was under a lot of political pressure at home, but he never wavered. He was a truly decent guy who never let me down. We worked really well together, and I got to like him a lot.'
A rift opened in a further area. In February 1994, Major refused to answer Clinton's telephone calls for days over his decision to grant Sinn Féin leader Gerry Adams a visa to visit the United States. Adams was listed as a terrorist by London. The US State Department, the CIA, the US Justice Department and the FBI all opposed the move on the grounds that it made the United States look 'soft on terrorism' and 'could do irreparable damage to the special relationship'. Under pressure from Congress, the president hoped the visit would encourage the IRA to renounce violence. While Adams offered nothing new, and violence escalated within weeks, the president later claimed vindication after the IRA ceasefire of August 1994. To the disappointment of the prime minister, Clinton lifted the ban on official contacts and received Adams at the White House on St. Patrick's Day 1995, despite the fact the paramilitaries had not agreed to disarm. The rows over Northern Ireland and the Adams affair reportedly 'provoked incandescent Clintonian rages'.
In November 1995, Clinton became only the second US president ever to address both Houses of Parliament, but by the end of Major's premiership disenchantment with the special relationship had deepened to the point where the incoming British ambassador Christopher Meyer banned the 'hackneyed phrase' from the embassy.
Blair and Clinton (May 1997–January 2001)
The election of British prime minister Tony Blair in 1997 brought an opportunity to revive what Clinton called the two nations' "unique partnership". At his first meeting with his new partner, the president said: "Over the last fifty years our unbreakable alliance has helped to bring unparalleled peace and prosperity and security. It's an alliance based on shared values and common aspirations." The personal relationship was seen as especially close because the leaders were "kindred spirits" in their domestic agendas. New Labour's Third Way, a moderate social-democratic position, was partly influenced by US New Democratic thinking.
Co-operation in defence and communications still had the potential to embarrass Blair, however, as he strove to balance it with his own leadership role in the European Union (EU). Enforcement of Iraqi no-fly zones and US bombing raids on Iraq dismayed EU partners. As the leading international proponent of humanitarian intervention, the "hawkish" Blair "bullied" Clinton to back diplomacy with force in Kosovo in 1999, pushing for deployment of ground troops to persuade the president "to do whatever was necessary" to win.
Blair and George W. Bush (January 2001–June 2007)
The personal diplomacy of Blair and Clinton's successor, US president George W. Bush in 2001, further served to highlight the special relationship. Despite their political differences on non-strategic matters, their shared beliefs and responses to the international situation formed a commonality of purpose following the September 11 attacks in New York and Washington, D.C.. Blair, like Bush, was convinced of the importance of moving against the perceived threat to world peace and international order, famously pledging to stand "shoulder to shoulder" with Bush:
This is not a battle between the United States of America and terrorism, but between the free and democratic world and terrorism. We therefore here in Britain stand shoulder to shoulder with our American friends in this hour of tragedy, and we, like them, will not rest until this evil is driven from our world.
Blair flew to Washington immediately after 9/11 to affirm British solidarity with the United States. In a speech to the United States Congress, nine days after the attacks, Bush declared "America has no truer friend than Great Britain." Blair, one of few world leaders to attend a presidential speech to Congress as a special guest of the First Lady, received two standing ovations from members of Congress. Blair's presence at the presidential speech remains the only time in US political history that a foreign leader was in attendance at an emergency joint session of the US congress, a testimony to the strength of the US–UK alliance under the two leaders. Following that speech, Blair embarked on two months of diplomacy rallying international support for military action. The BBC calculated that, in total, the prime minister held 54 meetings with world leaders and travelled more than 40,000 miles (60,000 km).
Blair's leadership role in the Iraq War helped him to sustain a strong relationship with Bush through to the end of his time as prime minister, but it was unpopular within his own party and lowered his public approval ratings. Some of the British Press called Blair "Bush's poodle." It also alienated some of his European partners, including the leaders of France and Germany. Russian popular artist Mikhail Nikolayevich Zadornov mused that "the position adopted by Britain towards America in the context of the Iraq War would be officially introduced into Kama Sutra." Blair felt he could defend his close personal relationship with Bush by claiming it had brought progress in the Middle East peace process, aid for Africa and climate-change diplomacy. However, it was not with Bush but with California governor Arnold Schwarzenegger that Blair ultimately succeeded in setting up a carbon-trading market, "creating a model other states will follow".
The 2006 Lebanon War also exposed some minor differences in attitudes over the Middle East. The strong support offered by Blair and the Bush administration to Israel was not wholeheartedly shared by the British cabinet or the British public. On 27 July, Foreign Secretary Margaret Beckett criticised the United States for "ignoring procedure" when using Prestwick Airport as a stop-off point for delivering laser-guided bombs to Israel.
Brown and George W. Bush (June 2007–January 2009)
Although British Prime Minister Gordon Brown stated his support for the United States on assuming office in 2007, he appointed ministers to the Foreign Office who had been critical of aspects of the relationship or of recent US policy. A Whitehall source said: 'It will be more businesslike now, with less emphasis on the meeting of personal visions you had with Bush and Blair.' British policy was that the relationship with the United States remained the United Kingdom's 'most important bilateral relationship'.
Brown and Obama (January 2009–May 2010)
Prior to his election as US president in 2008, Barack Obama, suggesting that Blair and Britain had been let down by the Bush administration, declared: 'We have a chance to recalibrate the relationship and for the United Kingdom to work with America as a full partner.'
On meeting Brown as president for the first time in March 2009, Obama reaffirmed that 'Great Britain is one of our closest and strongest allies and there is a link and bond there that will not break... This notion that somehow there is any lessening of that special relationship is misguided... The relationship is not only special and strong but will only get stronger as time goes on.' Commentators, however, noted that the recurring use of 'special partnership' by White House Press Secretary Robert Gibbs could be signaling an effort to recast terms.
The special relationship was also reported to be 'strained' after a senior US State Department official criticised a British decision to talk to the political wing of Hezbollah, complaining the United States had not been properly informed. The protest came after the Obama administration had said it was prepared to talk to Hamas and at the same time as it was making overtures to Syria and Iran. A senior Foreign Office official responded: 'This should not have come as a shock to any official who might have been in the previous administration and is now in the current one.’
In June 2009 the special relationship was reported to have 'taken another hit' after the British government was said to be 'angry' over the failure of the US to seek its approval before negotiating with Bermuda over the resettlement to the British overseas territory of four ex-Guantanamo Bay inmates wanted by the People's Republic of China. A Foreign Office spokesman said: 'It's something that we should have been consulted about.' Asked whether the men might be sent back to Cuba, he replied: 'We are looking into all possible next steps.' The move prompted an urgent security assessment by the British government. Shadow Foreign Secretary William Hague demanded an explanation from the incumbent, David Miliband, as comparisons were drawn with his previous embarrassment over the US use of Diego Garcia for extraordinary rendition without British knowledge, with one commentator describing the affair as 'a wake-up call' and 'the latest example of American governments ignoring Britain when it comes to US interests in British territories abroad'.
In August 2009 the special relationship was again reported to have 'taken another blow' with the release on compassionate grounds of Abdelbaset al-Megrahi, the man convicted of the 1988 Lockerbie Bombing. US Secretary of State Hillary Clinton said 'it was absolutely wrong to release Abdelbaset al-Megrahi', adding 'We are still encouraging the Scottish authorities not to do so and hope they will not'. Obama also commented that the release of al-Megrahi was a 'mistake' and 'highly objectionable'.
In March 2010 Hillary Clinton's support for Argentina's call for negotiations over the Falkland Islands triggered a series of diplomatic protests from Britain and renewed public scepticism about the value of the special relationship. The British government rejected Clinton's offer of mediation after renewed tensions with Argentina were triggered by a British decision to drill for oil near the Falkland Islands. The British government's long-standing position was that the Falklands were British territory, with all that this implied regarding the legitimacy of British commercial activities within its boundaries. British officials were therefore irritated by the implication that sovereignty was negotiable.
Later that month, the Foreign Affairs Select Committee of the House of Commons suggested that the British government should be 'less deferential' towards the United States and focus relations more on British interests. According to Committee Chair Mike Gapes, 'The UK and US have a close and valuable relationship not only in terms of intelligence and security but also in terms of our profound and historic cultural and trading links and commitment to freedom, democracy and the rule of law. But the use of the phrase "the special relationship" in its historical sense, to describe the totality of the ever-evolving UK-US relationship, is potentially misleading, and we recommend that its use should be avoided.' In April 2010 the Church of England added its voice to the call for a more balanced relationship between Britain and the United States.
Cameron and Obama (May 2010–July 2016)
On David Cameron being elected as Prime Minister of the United Kingdom after coalition talks between his Conservatives and the Liberal Democrats concluded on 11 May 2010, President Obama was the first foreign leader to offer his congratulations. Following the conversation Obama said:
'As I told the prime minister, the United States has no closer friend and ally than the United Kingdom, and I reiterated my deep and personal commitment to the special relationship between our two countries – a bond that has endured for generations and across party lines.'
Foreign Secretary William Hague responded to the President's overture by making Washington his first port of call, commenting: 'We're very happy to accept that description and to agree with that description. The United States is without doubt the most important ally of the United Kingdom.' Meeting Hillary Clinton, Hague hailed the special relationship as 'an unbreakable alliance', and added: 'It's not a backward-looking or nostalgic relationship. It is one looking to the future from combating violent extremism to addressing poverty and conflict around the world.' Both governments confirmed their joint commitment to the war in Afghanistan and their opposition to Iran's nuclear programme.
The Deepwater Horizon oil spill in 2010 sparked a media firestorm against BP in the United States. The Christian Science Monitor observed that a "rhetorical prickliness" had come about from escalating Obama administration criticism of BP—straining the special relationship—particularly the repeated use of the term 'British Petroleum' even though the business no longer uses that name. Cameron stated that he did not want to make the president's toughness on BP a US-UK issue, and noted that the company was balanced in terms of the number of its American and British shareholders. The validity of the special relationship was put in question as a result of the 'aggressive rhetoric'.
On 20 July, Cameron met with Obama during his first visit to the United States as prime minister. The two expressed unity in a wide range of issues, including the war in Afghanistan. During the meeting, Obama stated, "We can never say it enough. The United States and the United Kingdom enjoy a truly special relationship," then going on to say, "We celebrate a common heritage. We cherish common values. ... (And) above all, our alliance thrives because it advances our common interests." Cameron stated in an interview during the trip that he wanted to build a strong relationship with the United States, Britain's "oldest and best ally." This is in fact a historical error, as the Anglo-Portuguese Alliance is the oldest alliance that is still in force. Cameron further stated that, "from the times I've met Barack Obama before, we do have very, very close – allegiances and very close positions on all the key issues, whether that is Afghanistan or Middle East peace process or Iran. Our interests are aligned and we've got to make this partnership work."
Cameron has tried to downplay the idealism of the special relationship and called for an end to the British fixation on the status of the relationship, stating that it's a natural and mutually beneficial relationship. He said, "...I am unapologetically pro-America. But I am not some idealistic dreamer about the special relationship. I care about the depth of our partnership, not the length of our phone calls. I hope that in the coming years we can focus on the substance, not endlessly fret about the form."
In January 2011, during a White House meeting with the President of France Nicolas Sarkozy, Obama declared: "We don't have a stronger friend and stronger ally than Nicolas Sarkozy, and the French people", a statement which triggered outcry in the United Kingdom. In May, however, Obama became the fourth US President to make a state visit to the UK. For the keynote speech, he became the third US President to address both Houses of Parliament after Ronald Reagan and Bill Clinton. Considered a rare privilege for a foreign leader, only Reagan, Clinton, Charles de Gaulle, Nelson Mandela, Pope Benedict XVI and Nicolas Sarkozy had done so since the Second World War. (George W. Bush was invited to address Parliament in 2003, but declined.)
In 2013 Secretary of State John Kerry remarked "The relationship between the US and UK has often been described as special or essential and it has been described thus simply because it is. It was before a vote the other day in Parliament and it will be for long after that vote." This comment was brought about after the parliament vote to not conduct military strikes against Syria. William Hague replied: "So the United Kingdom will continue to work closely with the United States, taking a highly active role in addressing the Syria crisis and working with our closest ally over the coming weeks and months."
In March 2016, the US President criticised the British PM for becoming "distracted" over the intervention in Libya, a criticism that was also aimed at the French President. A National Security Council spokesman sent an unsolicited email to the BBC limiting the damage done by stating that "Prime Minister David Cameron has been as close a partner as the president has had."
May and Obama (July 2016–January 2017)
The brief overlap between the newly elected post-Brexit-referendum government of Theresa May and the Obama administration was marked by diplomatic friction over John Kerry's criticism of Israel in a speech. Obama maintained his stance that the UK would be a low priority for US trade talks post-Brexit, and that the UK would be at "the back of the queue".
May chose Boris Johnson to serve as her Foreign Secretary. Johnson had written an op-ed which made mention of Obama's Kenyan heritage in a manner which critics accused of being racist. He had also previously written an op-ed about Obama's potential successor, Hillary Clinton, which made derisive statements that had been criticized as sexist. A senior official in the US government suggested that Johnson's appointment would push the US further towards ties with Germany at the expense of the Special Relationship with the UK.
May and Trump (January 2017–present)
Following the election of Donald Trump, the British government has sought to establish a close alliance with the Trump administration, which it has referred to as a revival of the historical "special relationship" and which has proved to be strongly controversial in the United Kingdom.
Trump has reversed the stance of the Obama administration of moving the UK to the "back of the queue" in regard to trade negotiations, as Trump prefers bilateral trade agreements over multilateral trade agreements, such as the proposed TTIP.
Theresa May was criticised in the United Kingdom by members of all major parties, including her own, for refusing to condemn Donald Trump's Executive Order 13769, referred to as the "Muslim ban" in the UK, as well as for inviting Trump to a state visit with Queen Elizabeth II. The honour of a state visit had not traditionally been extended so early in a presidency; however, May did so in hopes of fostering a stronger trade relationship with the United States before the Brexit deadline.
More than 1.8 million signed an official parliamentary e-petition which said that "Donald Trump's well documented misogyny and vulgarity disqualifies him from being received by Her Majesty the Queen or the Prince of Wales," and Opposition leader Jeremy Corbyn of the Labour Party said in Prime Minister's Questions (PMQs) that Trump should not be welcomed to Britain "while he abuses our shared values with his shameful Muslim ban and attacks on refugees' and women's rights" and said that Trump should be banned from the UK until the ban on Muslims entering the US is lifted.
Baroness Warsi, former chair of the Conservatives, accused May of "bowing down" to Trump, who she described as "a man who has no respect for women, disdain for minorities, little value for LGBT communities, no compassion clearly for the vulnerable and whose policies are rooted in divisive rhetoric." London Mayor Sadiq Khan and the Conservative leader in Scotland, Ruth Davidson, also called for the visit to be cancelled. Trump's invitation was later downgraded to a "working visit", in which he would not be meeting with the Queen.
Despite May's efforts to establish a beneficial working relationship with Trump, their relationship has been described as "dysfunctional". It has been reported that, in their phone calls, Trump has made a habit of interrupting May.
On the morning of November 29, 2017, Trump retweeted an anti-Muslim post from the far-right group Britain First. This drew a strong backlash from leaders across the British political spectrum and was condemned by a spokesperson for May, who said that it was "wrong of the president to have done this." Trump rebutted the statement issued by May's office, tweeting, "Don't focus on me, focus on the destructive Radical Islamic Terrorism that is taking place within the United Kingdom, We are doing just fine!" Trump's response has been seen by some as damaging to May's agenda, as it weakened the perception of a strong "special relationship" under her leadership and undid her efforts to project a close relationship with the United States that could ease the passage of Brexit. Some speculated that Trump's tweet might have inflicted significant strain, if not long-term damage, on the Special Relationship.
On January 12, 2018, Trump announced that he would not travel to London for the ribbon-cutting ceremony at the United States' new embassy building in London. Trump tweeted,
Reason I canceled my trip to London is that I am not a big fan of the Obama Administration having sold perhaps the best located and finest embassy in London for “peanuts,” only to build a new one in an off location for 1.2 billion dollars. Bad deal. Wanted me to cut ribbon-NO!
However, despite Trump's suggestions otherwise, the plans and terms under which the new United States embassy was constructed were the work not of the Obama administration but of the Bush administration. The embassy was moved because of concerns that the existing American Embassy London Chancery Building was vulnerable to terrorist attack. And despite Trump's assertion that the United States sold the Chancery Building for "peanuts", the sale actually raised enough revenue to finance the construction of the new embassy building, which cost approximately $1 billion rather than the $1.2 billion Trump claimed.
On February 5, Trump again offended many in the United Kingdom with a Twitter post. In an attempt to rebuke a push by some in the United States' Democratic Party to implement universal healthcare, Trump tweeted that "thousands of people are marching in the UK because their U system is going broke and not working", a reference to the healthcare provided in the United Kingdom by the National Health Service. The tweet was factually inaccurate in its characterization of the United Kingdom's health system, and it mischaracterized the reason behind recent protests, which had been held not against universal healthcare but for an improvement in its services. Trump's attack on the United Kingdom's healthcare system is believed to have placed further strain on his relationship with May, who responded by declaring her pride in the United Kingdom's health system.
It has been noted that secret defence and intelligence links 'that [have] minimal impact on ordinary people [play] a disproportionate role in the transatlantic friendship', and perspectives on the special relationship differ.
A 1942 Gallup poll conducted after Pearl Harbor, before the arrival of US troops and Churchill's heavy promotion of the special relationship, showed that wartime ally Russia was still more popular than the United States among 62% of Britons. However, only 6% had ever visited the United States and only 35% knew any Americans personally.
In 1969 the United States was tied with the Commonwealth as the most important overseas connection for the British public, while Europe came in a distant third. By 1984, after a decade in the Common Market, Britons chose Europe as being most important to them.
British opinion polls from the Cold War revealed ambivalent feelings towards the United States. Margaret Thatcher's 1979 agreement to base US cruise missiles in Britain was approved of by only 36% of Britons, and the number with little or no trust in the ability of the US to deal wisely with world affairs had soared from 38% in 1977 to 74% in 1984, by which time 49% wanted US nuclear bases in Britain removed, and 50% would have sent US-controlled cruise missiles back to the United States. At the same time, 59% of Britons supported their own country’s nuclear deterrent, with 60% believing Britain should rely on both nuclear and conventional weapons, and 66% opposing unilateral nuclear disarmament. 53% of Britons opposed dismantling the Royal Navy's Polaris submarines. 70% of Britons still considered Americans to be very or fairly trustworthy, and in case of war the United States was the ally trusted overwhelmingly to come to Britain's aid, and to risk its own security for the sake of Britain. The United States and Britain were also the two countries most alike in basic values such as willingness to fight for their country and the importance of freedom.
In 1986, 71% of Britons, questioned in a Mori poll the day after Ronald Reagan’s bombing of Libya, disagreed with Thatcher's decision to allow the use of RAF bases, while two thirds in a Gallup survey opposed the bombing itself, the reverse of US opinion.
In a 1997 Harris poll published after Tony Blair's election, 63% of people in the United States viewed Britain as a close ally, up by one percent from 1996, 'confirming that the long-running "special relationship" with America's transatlantic cousins is still alive and well'. Britain came second behind its colonial offshoot Canada, on 73%, while another offshoot, Australia, came third, on 48%. Popular awareness of the historical link was fading in the parent country, however. In a 1997 Gallup poll, while 60% of the British public said they regretted the end of Empire and 70% expressed pride in the imperial past, 53% wrongly supposed that the United States had never been a British possession.
In 1998, 61% of Britons polled by ICM said they believed they had more in common with US citizens than they did with the rest of Europe. 64% disagreed with the statement 'Britain does what the US government tells us to do.' A majority also backed Blair's support of Bill Clinton's strategy on Iraq, 42% saying action should be taken to topple Saddam Hussein, with 24% favouring diplomatic action, and a further 24%, military action. A majority of Britons aged 24 and over said they did not like Blair supporting Clinton over the Lewinsky scandal.
A 2006 poll of the US public showed that the United Kingdom, as an 'ally in the war on terror', was viewed more positively than any other country. 76% of those polled viewed the British as an 'ally in the War on Terror', according to Rasmussen Reports. According to Harris Interactive, 74% of Americans viewed Great Britain as a 'close ally in the war in Iraq', well ahead of next-ranked Canada at 48%.
A June 2006 poll by Populus for The Times showed that the number of Britons agreeing that 'it is important for Britain’s long-term security that we have a close and special relationship with America' had fallen to 58% (from 71% in April), and that 65% believed that 'Britain's future lies more with Europe than America.' Only 44%, however, agreed that 'America is a force for good in the world.' A later poll during the Israel-Lebanon conflict found that 63% of Britons felt that the United Kingdom was tied too closely to the United States. A 2008 poll by The Economist showed that Britons' views differed considerably from Americans' views when asked about the topics of religion, values, and national interest. The Economist remarked:
For many Britons, steeped in the lore of how English-speaking democracies rallied around Britain in the second world war, [the special relationship] is something to cherish. For Winston Churchill, [...] it was a bond forged in battle. On the eve of the war in Iraq, as Britain prepared to fight alongside America, Tony Blair spoke of the 'blood price' that Britain should be prepared to pay in order to sustain the relationship. In America, it is not nearly as emotionally charged. Indeed American politicians are promiscuous with the term, trumpeting their 'special relationships' with Israel, Germany and South Korea, among others. 'Mention the special relationship to Americans and they say yes, it's a really special relationship,' notes sardonically Sir Christopher Meyer, a former British ambassador to Washington.
In January 2010 a Leflein poll conducted for Atlantic Bridge found that 57% of people in the US considered the special relationship with Britain to be the world's most important bilateral partnership, with 2% disagreeing. 60% of people in the US regarded Britain as the country most likely to support the United States in a crisis, while Canada came second on 24%, and Australia third on 4%.
In May 2010, another poll conducted in the UK by YouGov revealed that 66% of those surveyed held a favourable view of the US and 62% agreed with the assertion that America is Britain's most important ally. However, the survey also revealed that 85% of British citizens believe that the UK has little or no influence on American policies, and that 62% think that America does not consider British interests.
Following the 2003 invasion of Iraq, senior British figures criticized the refusal of the US Government to heed British advice regarding post-war plans for Iraq, specifically the Coalition Provisional Authority's de-Ba'athification policy and the critical importance of preventing the power vacuum in which the insurgency subsequently developed. British defence secretary Geoff Hoon later stated that the United Kingdom 'lost the argument' with the Bush administration over rebuilding Iraq.
Assurances made by the United States to the United Kingdom that 'extraordinary rendition' flights had never landed on British territory were later shown to be false when official US records proved that such flights had landed at Diego Garcia repeatedly. The revelation was an embarrassment for British foreign secretary David Miliband, who apologised to Parliament.
In 2003 the United States pressed the United Kingdom to agree to an extradition treaty which, proponents claimed, allowed for equal extradition requirements between the two countries. Critics argued that the United Kingdom was obligated to make a strong prima facie case to US courts before extradition would be granted, and that, by contrast, extradition from the United Kingdom to the United States was a matter of administrative decision alone, without prima facie evidence. This had been implemented as an anti-terrorist measure in the wake of the 11 September 2001 attacks. Very soon, however, it was being used by the United States to extradite and prosecute a number of high-profile London businessmen (e.g., the Natwest Three and Ian Norris) on fraud charges. Contrasts have been drawn with the United States' harboring of Provisional IRA terrorists in the 1970s through to the 1990s and repeated refusals to extradite them to the UK.
On 30 September 2006, the US Senate unanimously ratified the 2003 treaty. Ratification had been slowed by complaints from some Irish-American groups that the treaty would create new legal jeopardy for US citizens who opposed British policy in Northern Ireland. The Spectator condemned the three-year delay as 'an appalling breach in a long-treasured relationship'.
Trade disputes and attendant job fears have sometimes strained the special relationship. The United States has been accused of pursuing an aggressive trade policy, using or ignoring WTO rules; the aspects of this causing most difficulty to the United Kingdom have been a successful challenge to the protection of small family banana farmers in the West Indies from large US corporations such as the American Financial Group, and high tariffs on British steel products. In 2002, Blair denounced Bush's imposition of tariffs on steel as 'unacceptable, unjustified and wrong', but although Britain's biggest steelmaker, Corus, called for protection from dumping by developing nations, the Confederation of British Industry urged the government not to start a 'tit-for-tat'.
In popular culture
In popular culture over the decades, the varying Zeitgeist of the special relationship has been a feature, context, or subtext of works of art and phenomena as varied as A Matter of Life and Death (1946) (released as Stairway to Heaven in the U.S.), the British Invasion in 1960s music, and the unusual intergenerational pairing of David Bowie and Bing Crosby performing "Peace on Earth/Little Drummer Boy" in 1977. In the televised science fiction of the 1990s, the Centauri of Babylon 5 are for plot purposes somewhat analogous to the British Empire.
- United Kingdom–United States relations
- Foreign relations of the United Kingdom
- Foreign policy of the United States
- The Great Rapprochement
- 1943 BRUSA Agreement
- UKUSA Agreement
- ABCA Armies
- The Technical Cooperation Program (TTCP)
- Pilgrims Society
- James, Wither (March 2006). "An Endangered Partnership: The Anglo-American Defence Relationship in the Early Twenty-first Century". European Security. 15 (1): 47–65. doi:10.1080/09662830600776694. ISSN 0966-2839.
- "The UK and US: The myth of the special relationship". www.aljazeera.com.
- John Baylis, "The 'special Relationship' A Diverting British Myth?," in Cyril Buffet, Beatrice Heuser (eds.), Haunted by History: Myths in International Relations, ch. 10, Berghahn Books, 1998, ISBN 9781571819406
- "Barack Obama delivers parting snub to special relationship with Britain by naming Angela Merkel his 'closest partner'".
- Existence since the 19th century:
- "The Anglo-American Arbitration Treaty". The Times. 14 January 1897. p. 5, col. C., quoting the "semi-official organ" the North-German Gazette: "There is, therefore, not the slightest occasion for other States to adopt as their model and example a form of agreement which may, perhaps, be advantage to England and America in their special relationship".
- "The New American Ambassador". The Times. 7 June 1913. p. 9, col. C. "No Ambassador to this or any other nation is similarly honoured ... It is intended to be, we need hardly say, precisely what it is, a unique compliment, a recognition on our part that Great Britain and the United States stand to one another in a special relationship, and that between them some departure from the merely official attitude is most natural".
- "The Conference and the Far East". The Times. 21 November 1921. p. 11, col. B, C. "The answer of the [Japanese] Ambassador [Baron Kato] shows that he and his Government even then appreciated the special relationship between this country [the United Kingdom] and the United States ... That, probably, the Japanese Government understands now, as clearly as their predecessors understood in 1911 that we could never make war on the United States".
- "Limit of Navy Economies". The Times. 13 March 1923. p. 14, col. F. "After comparing the programmes of Britain, America, and Japan, the First Lord said that so far from importing into our maintenance of the one-Power standard a spirit of keen and jealous competition, we had, on the contrary, interpreted it with a latitude which could only be justified by our desire to avoid provoking competition and by our conception of the special relationship of good will and mutual understanding between ourselves and the United States".
- "Five Years Of The League". The Times. 10 January 1925. p. 13, col. C. "As was well pointed out in our columns yesterday by Professor Muirhead, Great Britain stands in a quite special relationship to that great Republic [the United States]".
- "The Walter Page Fellowships. Mr. Spender's Visit To America., Dominant Impressions". The Times. 23 February 1928. p. 16, col. B. quoting J. A. Spender: "The problem for British and Americans was to make their special relationship a good relationship, to be candid and open with each other, and to refrain from the envy and uncharitableness which too often in history had embittered the dealings of kindred peoples".
- George L. Bernstein, "Special Relationship and Appeasement: Liberal policy towards America in the age of Palmerston." Historical Journal 41#3 (1998): 725-750.
- Cowling, Maurice (1974). The Impact of Hitler: British Politics and British Policy 1933–1940. Cambridge University Press. pp. 77–78.
- Reynolds, David (April 1990). "1940: Fulcrum of the Twentieth Century?". International Affairs. 66 (2): 331. doi:10.2307/2621337.
- Acheson, Dean (1969). Present at the Creation: My Years in the State Department. New York: W. W. Norton. p. 387.
- Reynolds 1990, pp. 325, 348–50
- Lindley, Ernest K. (9 March 1946). "Churchill's Proposal". Washington Post. p. 7.
- Skidelsky, Robert (9 September 1971). "Those Were the Days". New York Times. p. 43.
- Gunther, John (1950). Roosevelt in Retrospect. Harper & Brothers. pp. 15–16.
- Richard M. Langworth, "Churchill's Naked Encounter", (May 27, 2011), https://richardlangworth.com/churchills-naked-encounter
- Reynolds, David (1985). "The Churchill government and the black American troops in Britain during World War II". Transactions of the Royal Historical Society. 35: 113–133. doi:10.2307/3679179.
- "Special relationship". Phrases.org.uk. Retrieved 14 November 2010.
- Webley, Simon (Autumn 1989). "Review: 'The Politics of the Anglo-American Economic Special Relationship', by Alan J. Dobson". International Affairs. 65 (4): 717. doi:10.2307/2622608.
- Coker, Christopher (July 1992). "Britain and the New World Order: The Special Relationship in the 1990s". International Affairs. 68 (3): 408. doi:10.2307/2622963.
- Kolko, Gabriel (1968). The Politics of War: The World and United States Foreign Policy, 1943–1945. New York: Random House. p. 488.
- Philip White (2013). Our Supreme Task: How Winston Churchill's Iron Curtain Speech Defined the Cold War Alliance. PublicAffairs. p. 220.
- Guy Arnold, America and Britain: Was There Ever a Special Relationship? (London: Hurst, 2014) pp 6, 153
- Derek E. Mix - The United Kingdom: Background and Relations with the United States - fas.org. Congressional Research Service. April 29, 2015. Retrieved 13 April 2017.
- 'Time Runs Out as Clinton Dithers over Nuclear Test', Independent On Sunday (20 June 1993), p. 13.
- Richard Norton-Taylor, Nuclear weapons treaty may be illegal, The Guardian (27 July 2004). Retrieved 15 March 2009.
- Michael Smith, Focus: Britain's secret nuclear blueprint, Sunday Times (12 March 2006). Retrieved 15 March 2009.
- Andrea Shalal-Esa, 'Update 1-US, 'Britain conduct Nevada nuclear experiment', Reuters News (15 February 2002).
- Ian Bruce, 'Britain working with US on new nuclear warheads that will replace Trident force', The Herald (10 April 2006), p. 5.
- Rogoway, Tyler (2017-01-03). "Reagan Invited Thatcher To Join The Top Secret F-117 Program". The Drive.
- Kristin Roberts, 'Italy, Netherlands, Turkey seen as possible JSF partners', Reuters News (13 March 2001).
- Douglas Barrie and Amy Butler, 'Dollars and Sense; Currency rate headache sees industry seek remedy with government', Aviation Week & Space Technology, vol. 167, iss. 23 (10 December 2007), p. 40.
- "Why no questions about the CIA?". New Statesman. September 2003.
- Bob Drogin and Greg Miller, 'Purported Spy Memo May Add to US Troubles at UN', Los Angeles Times (4 March 2003).
- Tim Shipman, 'Why the CIA has to spy on Britain', The Spectator (28 February 2009), pp. 20–1.
- "Country Profiles: United States of America" on UK Foreign & Commonwealth Office website
- Irwin Seltzer, 'Britain is not America's economic poodle', The Spectator (30 September 2006), p. 36.
- 'International Trade – The 51st State?', Midlands Business Insider (1 July 2007).
- Seltzer, 'Not America's economic poodle', p. 36.
- 'Special ties should be used for trade and the climate says US ambassador', Western Daily Press (4 April 2007), p. 36.
- "Press Conference by Kerry, British Foreign Secretary Hague". United Kingdom Foreign and Commonwealth Office, London: U.S. Department of State. September 9, 2013. Retrieved 8 December 2013.
- Spencer family
- Darryl Lundy. "Rt. Hon. Sir Winston Leonard Spencer Churchill". thePeerage.com. Retrieved 20 December 2007.
- White, Michael (March 2, 2009). "Special relationship? Good and bad times". www.theguardian.com. The Guardian. Retrieved November 30, 2017.
- Robert M. Hendershot, Family Spats: Perception, Illusion, and Sentimentality in the Anglo-American Special Relationship (2008)
- MacDonald, John (1986). Great Battles of World War II. Toronto: Strathearn Books Limited. ISBN 0-86288-116-1.
- "Roosevelt and Churchill: A Friendship That Saved The World". www.nps.org. United States National Park Service. n.d. Retrieved July 14, 2017.
- Warren F. Kimball, ed. Churchill and Roosevelt, The Complete Correspondence (3 vol Princeton UP, 1984).
- Webley, Kayla (July 20, 2010). "Churchill and FDR". www.time.com. Time Magazine. Retrieved July 14, 2017.
- "A Chronology of US Historical Documents". Archived 5 December 2006 at the Wayback Machine. Oklahoma College of Law
- Lukacs, John (Spring–Summer 2008). "Churchill Offers Toil and Tears to FDR". American Heritage. Retrieved 2 August 2012.
- Jenkins, Roy. Churchill: A Biography (2001); p. 849 ISBN 978-0-374-12354-3/ISBN 978-0-452-28352-7
- Brookshire, Jerry (December 12, 2003). "Attlee and Truman". www.historytoday.com. History Today. Retrieved November 30, 2017.
- "The Potsdam Conference, 1945". www.history.state.gov. US State Department. n.d. Retrieved November 30, 2017.
- Charmley, John (1993). Churchill, The End of Glory: A Political Biography. London: Hodder & Stoughton. p. 225. ISBN 978-0-15-117881-0. OCLC 440131865.
- Churchill On Vacation, 1946/01/21 (1946). Universal Newsreel. 1946. Retrieved 22 February 2012.
- "Interview: Clark Clifford". Archived from the original on 25 October 2007. Retrieved 2008-10-02. ; retrieved 23 March 2009.
- Maier, Thomas (2014). When Lions Roar: The Churchills and the Kennedys. Crown. pp. 412–13. ISBN 0307956792.
- Kevin Ruane, Churchill and the Bomb in War and Cold War (2016) p 156.
- Keith Kyle, Suez: Britain's End of Empire in the Middle East (2003).
- C.C. Kingseed, Eisenhower and the Suez Crisis of 1956 (1995).
- Simon C. Smith, ed. Reassessing Suez 1956: New perspectives on the crisis and its aftermath (Routledge, 2016).
- Alistair Horne, Macmillan, 1894–1956: Volume I of the Official Biography (London: Macmillan, 1988), p. 160.
- Christopher Coker, 'Britain and the New World Order: The Special Relationship in the 1990s', International Affairs, Vol. 68, No. 3 (Jul., 1992), p. 408.
- Harold Macmillan, At the End of the Day (London: Macmillan, 1973), p. 111.
- Nigel J. Ashton, 'Harold Macmillan and the "Golden Days" of Anglo-American Relations Revisited', Diplomatic History, Vol. 29, No. 4 (2005), pp. 696, 704.
- Ken Young, "The Skybolt Crisis of 1962: Muddle or Mischief?." Journal of Strategic Studies 27.4 (2004): 614-635.
- Myron A. Greenberg, 'Kennedy's Choice: The Skybolt Crisis Revisited', Naval War College Review, Autumn 2000.
- Richard E. Neustadt, Report to JFK: The Skybolt Crisis in Perspective (1999)
- Horne, Macmillan: Volume II, pp. 433–37.
- Horne, Macmillan: Volume II of the Official Biography (1989), p. 429.
- Macmillan, At the End of the Day, p. 339.
- Greenberg, 'Kennedy's Choice'.
- Ashton, 'Anglo-American Relations Revisited', p. 705.
- David Reynolds, 'A "Special Relationship"? America, Britain and the International Order Since the Second World War', International Affairs, Vol. 62, No. 1 (Winter, 1985–1986), p. 14.
- Thorpe, D R (1997). Alec Douglas-Home. London: Sinclair-Stevenson. p. 300. ISBN 1856196631.
- Robert Cook and Clive Webb. "Unraveling the special relationship: British responses to the assassination of President John F. Kennedy." The Sixties 8#2 (2015): 179-194, quote p .
- "Carried the hopes of the world", The Guardian, 23 November 1963, p. 3
- Hurd, Douglas "Home, Alexander Frederick Douglas-, fourteenth earl of Home and Baron Home of the Hirsel (1903–1995)",Oxford Dictionary of National Biography, Oxford University Press, 2004, accessed 14 April 2012 (subscription required)
- "Sir Alec Douglas-Home". www.gov.uk. Government of the United Kingdom. n.d. Retrieved June 13, 2017.
"During Sir Alec Douglas-Home's premiership, American President John F Kennedy was assassinated, and relations with Kennedy's successor Lyndon B Johnson deteriorated after the sale of British Leyland buses to Cuba. ... Sir Alec Douglas-Home was an unexpected Prime Minister and served for only 363 days, the second shortest premiership in the 20th century."
- Reynolds, 'A "Special Relationship"?', p. 1.
- Glen O'Hara, Review: A Special Relationship? Harold Wilson, Lyndon B. Johnson and Anglo-American Relations "At the Summit", 1964–1968 by Jonathan Colman, Journal of British Studies, Vol. 45, No. 2 (Apr., 2006), p. 481.
- Reynolds, 'A "Special Relationship"?', p. 14.
- O'Hara, Review, p. 482.
- Ashton, 'Anglo-American Relations Revisited', p. 694.
- Ben Macintyre, 'Blair's real special relationship is with us, not the US – Comment – Opinion', The Times (7 September 2002), p. 22.
- Rhiannon Vickers, "Harold Wilson, the British Labour Party, and the War in Vietnam." Journal of Cold War Studies 10#2 (2008): 41-70. online
- John W. Young, "Britain and'LBJ's War', 1964-68." Cold War History 2#3 (2002): 63-92
- Reynolds, pp. 14–15.
- Spelling, Alex (2013). "'A Reputation for Parsimony to Uphold': Harold Wilson, Richard Nixon and the Re-Valued 'Special Relationship' 1969–1970". Contemporary British History. 27 (2): 192–213.
- Nixon, Richard (January 27, 1970). Remarks of Welcome to Prime Minister Harold Wilson of Great Britain (Speech). The American Presidency Project. Retrieved December 11, 2017.
- Ronald Koven, 'Heath Gets Bouquets, But Few Headlines', Washington Post (5 February 1973), p. A12.
- Editorial, New York Times (24 December 1971), p. 24, col. 1.
- New York Times (24 December 1971).
- Allen J. Matusow, 'Richard Nixon and the Failed War Against the Trading World', Diplomatic History, vol. 7, no. 5 (November 2003), pp. 767–8.
- Henrik Bering-Jensen, 'Hawks of a Feather', Washington Times (8 April 1991), p. 2.
- Paul Reynolds, UK in dark over 1973 nuclear alert, BBC News (2 January 2004). Retrieved 16 March 2009.
- 'America "misled Britain" in Cold War; National archives: 1973', The Times (1 January 2004), p. 10.
- ‘Nixon nuclear alert left Heath fuming’, The Express (1 January 2004), p. 8.
- "FORMER BRITISH PRIME MINISTER SIR HAROLD WILSON PRAISES NIXON, CRITICIZES THATCHER AT DEPAUW LECTURE". www.depauw.edu. Depauw University. September 21, 2017. Retrieved December 11, 2017.
- Ford, Gerald (January 30, 1975). Toast (Speech). State Dinner. White House (Washington, D.C.). Retrieved December 19, 2017.
- 'Thatcher Hero and the Leader of Free World Basks in Glory', The Guardian (25 November 1995), p. 8.
- Robert B. Semple, Jr, 'British Government Puts on its Biggest Single Show of Year to Mark Declaration of Independence', New York Times (27 May 1976), p. 1, col. 2.
- 'Callaghan set to see Carter about recession', Globe and Mail (16 March 1978), p. 12.
- "Papers show rapport between Thatcher, Carter". www.politico.com. Politico. Associated Press. March 18, 2011. Retrieved June 11, 2017.
- Seldon, Anthony (February 6, 2010). "Thatcher and Carter: the not-so special relationship". www.telegraph.co.uk. The Telegraph. Retrieved June 11, 2017.
- Keller, Emma G. (April 8, 2013). "Thatcher in the US: prime minister and Reagan 'had almost identical beliefs'". www.theguardian.com. The Guardian. Retrieved June 11, 2017.
- Ruddin, Lee P. (May 20, 2013). "Margaret Thatcher and Jimmy Carter: Political BFFs?". www.historynewsnetwork.org. History News Network. Retrieved June 11, 2017.
- Records of the Prime Minister's Office, Correspondence & Papers; 1979-1997 at discovery.nationalarchives.gov.uk: IRAN. Internal situation in Iran; Attack on British Embassy; Hostage-taking at US Embassy; Freezing of Iranian Assets; US Mission to release hostages; Relations with US & UK following hostage taking at US Embassy. Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7; access date=June 11, 2017
- Daniel James Lahey, "The Thatcher government's response to the Soviet invasion of Afghanistan, 1979–1980," Cold War History (2013) 13#1 pp 21–42.
- Associated Press (April 23, 1980). "Governments slapped for boycott pressure". The Spokesman-Review. Spokane, Washington. p. C1. Retrieved August 8, 2012.
- Geoffrey Smith, Reagan and Thatcher (Vintage, 1990).
- Anthony Andrew Clark, "Were Margaret Thatcher and Ronald Reagan Inseparable Political Allies?." History in the Making 2#2 (2013): 21-29.
- Alan P. Dobson; Steve Marsh (2013). Anglo-American Relations: Contemporary Perspectives. Routledge. p. 71.
- Toasts of the President and Prime Minister Margaret Thatcher of the United Kingdom at a Dinner at the British Embassy, 20 February 1985. University of Texas Archive Speeches, 1985. Retrieved 15 March 2009.
- Toasts of the President and Prime Minister. Retrieved 15 March 2009.
- Carine Berbéri; Monia O’Brien Castro (2016). 30 Years After: Issues and Representations of the Falklands War. Routledge. p. 78.
- John Campbell, Margaret Thatcher: The Iron Lady vol. 2 (2003) pp 279-82. online
- Donald E. Nuechterlein (2015). America Recommitted: A Superpower Assesses Its Role in a Turbulent World. University Press of Kentucky. pp. 23–24.
- Gary Williams, "'A Matter of Regret': Britain, the 1983 Grenada Crisis, and the Special Relationship." Twentieth Century British History 12#2 (2001): 208-230.
- John Dumbrell, A Special Relationship: Anglo-American Relations in the Cold War and After (Basingstoke, Hants: Macmillan, 2001), pp. 97–99.
- Margaret Thatcher, The Downing Street Years, (London: HarperCollins, 1993), pp. 465–6.
- Charles Moore (2016). Margaret Thatcher: At Her Zenith: In London, Washington and Moscow. Knopf Doubleday. pp. 793–95.
- Coker, 'Britain and the New World Order', p. 408.
- Peter Hennessy, ‘The Last Retreat of Fame: Mrs Thatcher as History’, Modern Law Review, Vol. 54, No. 4 (Jul., 1991), p. 496.
- Meacham, John (2015). Destiny and Power: The American Odyssey of George Herbert Walker Bush. New York: Random House. ISBN 978-1-4000-6765-7.
- LaFranchi, Howard (April 8, 2017). "Margaret Thatcher: 'This is no time to go wobbly' and other memorable quotes". www.csmonitor.com. Christian Science Monitor. Retrieved July 14, 2017.
- Bush, George H. W.; Scowcroft, Brent (1998). A World Transformed. Knopf. p. 352. ISBN 978-0679432487.
- Thatcher, Margaret (1993). The Downing Street Years. HarperCollins. pp. 823–24. ISBN 0002550490.
- Martin Fletcher and Michael Binyon, ‘Special Relationship Struggles to Bridge the Generation Gap—Anglo-American’, The Times (22 December 1993).
- ‘British-American Strains’, New York Times (25 March 1995), p. 22.
- A. Holmes; J. Rofe (2016). The Embassy in Grosvenor Square: American Ambassadors to the United Kingdom, 1938-2008. Springer. pp. 302–3.
- Martin Walker, ‘President puts Britain's deterrent in melting pot’, The Guardian (24 February 1993), p. 1.
- Graham Barrett, ‘UK Eyes Nuclear Testing In Pacific’, The Age (5 July 1993), p. 8.
- Alexander MacLeod, 'Clinton's Stay of Nuclear Tests Irks Britain', Christian Science Monitor (7 July 1993), p. 3.
- Martin Walker, ‘Why Bill Won’t Give Up His Respect for Major’, The Observer (1 June 1997), p. 21.
- Robinson, ‘Clinton's Remarks Cause Upper Lips to Twitch’, p. a18.
- ‘Not so special’, Financial Times (26 February 1993), p. 19.
- Michael White and Ian Black, ‘Whitehall Plays Down Impact of Clinton Criticism of Britain’, The Guardian (19 October 1993), p. 22.
- Steve Doughty, 'Is this the end of a beautiful friendship? World Wide on why Copenhagen proved not so wonderful for Major', Daily Mail (23 June 1993), pp. 1, 12.
- Robi Dutta, 'Bridging Troubled Waters – Chronology – US Foreign Policy', The Times (19 October 1993).
- Walker, ‘Why Bill Won’t Give Up His Respect for Major’, p. 21.
- Rusbridger, Alan (21 June 2004). "'Mandela helped me survive Monicagate, Arafat could not make the leap to peace – and for days John Major wouldn't take my calls'". The Guardian. London. Retrieved 17 September 2006.
- Villa, ‘The Reagan-Thatcher "special relationship" has not weathered the years’.
- Alec Russell, 'Major's fury over US visa for Adams', Daily Telegraph (23 June 2004), p. 9.
- Joseph O'Grady, 'An Irish Policy Born in the U.S.A.: Clinton's Break with the Past', Foreign Affairs, Vol. 75, No. 3 (May/June 1996), pp. 4–5.
- O'Grady, 'An Irish Policy Born in the U.S.A.', p. 5.
- Russell, ‘Major's fury’, Daily Telegraph, p. 9.
- Walker, 'Why Bill Won’t Give Up His Respect for Major', p. 21.
- Walker, 'Why Bill Won’t Give Up His Respect for Major', p. 21
- Jasper Gerar, Ultimate insider prowls into the outside world, Sunday Times (1 June 2003). Retrieved 15 March 2009.
- John Kampfner, Blair's Wars (London: Free Press, 2004), p. 12.
- Kampfner, Blair's Wars, p. 12.
- Peter Riddell, 'Blair as Prime Minister', in Anthony Seldon (ed.), The Blair Effect: The Blair Government 1997–2001 (London: Little, Brown, 2001), p. 25
- Christopher Hill, 'Foreign Policy', in Seldon (ed.), Blair Effect, pp. 348–9
- Hill, 'Foreign Policy', p. 339
- Anne Deighton, 'European Union Policy', in Seldon (ed.), Blair Effect, p. 323.
- Ben Wright, Analysis: Anglo-American 'special relationship', BBC News (6 April 2002). Retrieved 22 March 2009.
- Anthony Seldon, Blair (London: Simon & Schuster, 2005), pp. 399–400, 401.
- Jeremy Lovell, 'Blair says "shoulder to shoulder" with US', Reuters (12 September 2001).
- Address to a Joint Session of Congress and the American People Archived 25 February 2008 at the Wayback Machine. 20 September 2001
- Herald Tribune, (November 15, 2004), p 3.
- 'The cockpit of truth.(Lance Corporal's death breaks United States-United Kingdom's relations', The Spectator (10 February 2007).
- Gonzalo Vina, Blair, Schwarzenegger Agree to Trade Carbon Emissions, Bloomberg (31 July 2006). Retrieved 21 March 2009.
- "Beckett protest at weapons flight". BBC News. 27 July 2006. Retrieved 17 August 2006.
- "Speech not critical of US – Brown". BBC News. 13 July 2007.
- "US and UK 'no longer inseparable'". BBC News. 14 July 2007.
- Reynolds, Paul (14 July 2007). "The subtle shift in British foreign policy". BBC News.
- 'A Special Relationship No More?', Today (Singapore, 14 July 2007), p. 26.
- "/ Home UK / UK – Ties that bind: Bush, Brown and a different relationship". Financial Times. 27 July 2007. Retrieved 14 November 2010.
- Julian Borger, UK's special relationship with US needs to be recalibrated, Obama tells ex-pats in Britain, The Guardian (27 May 2008). Retrieved 15 March 2009.
- "Obama hails special relationship". BBC News. BBC News. 3 March 2009. Retrieved 3 March 2009.
- The 'special relationship' Nick Robinson Blog, BBC News, 3 March 09. Retrieved 3–8–09.
- Alex Spillius, 'Special relationship' strained: US criticises UK's vow to talk to Hezbollah, Daily Telegraph (13 March 2009). Retrieved 21 March 2009.
- Mark Landler, Britain’s Contacts With Hezbollah Vex US, New York Times (12 March 2009). Retrieved 21 March 2009.
- Suzanne Goldenberg, Obama camp 'prepared to talk to Hamas', The Guardian (9 January 2009). Retrieved 21 March 2009.
- Raed Rafei and Borzou Daragahi, Senior US envoys hold talks in Syria, Los Angeles Times (8 March 2009). Retrieved 21 March 2009.
- Tom Baldwin and Catherine Philp, America angered by Britain's 'secret' talks with Hezbollah, The Times (14 March 2009). Retrieved 21 March 2009.
- Thomas Joscelyn, The Special Relationship Takes Another Hit, The Weekly Standard (11 June 2009).
- Tom Leonard, 'Britain angry after Bermuda takes Chinese freed from Guantánamo', The Daily Telegraph (12 June 2009), p. 19.
- Kunal Dutta, 'Bermuda Guantanamo deal sparks anger in UK', The Independent (12 June 2009), pp. 20,21.
- 'US consulted Britain before Uighurs went to Bermuda: official', Agence France Presse (12 June 2009).
- Zhang Xin, 'Repatriate Terrorists, China Says', China Daily (12 June 2009).
- 'Britain chides Bermuda over Guantanamo detainees', Agence France Presse (12 June 2009).
- Joe Churcher, 'Questions for Miliband over Guantanamo Bay Inmates Move', Press Association National Newswire (12 June 2009).
- Catherine Philp, 'British authority snubbed as freed Guantánamo four are welcomed; Bermuda upsets London with deal on Uighurs', The Times (12 June 2009), pp. 1, 35.
- Tim Reid, British Government's wishes are barely on the American radar, Times Online (12 June 2009).
- Kevin Hechtkopf, Obama: Pan Am Bomber's Welcome "Highly Objectionable", CBS News (21 August 2009).
- Giles Whittell, Michael Evans and Catherine Philp, Britain made string of protests to US over Falklands row, Times Online (10 March 2010).
- Con Coughlin, Falkland Islands: The Special Relationship is now starting to seem very one-sided, Telegraph.co.uk (5 March 2010).
- Charles Krauthammer, Obama's policy of slapping allies, Washington Post (2 April 2010).
- "UK rejects US help over Falklands". BBC News. 2 March 2010.
- Drury, Ian (3 March 2010). "Gordon Brown snubs Hillary Clinton's 'help' in Falkland Islands row". Daily Mail. London.
- Drury, Ian (3 March 2010). "With friends like these: Hillary Clinton wades into the Falklands row... and backs the Argentinians". Daily Mail. London.
- Beaumont, Paul (11 March 2010). "Falklands: Barack Obama under fire for failing his ally Britain". The First Post. Retrieved 14 November 2010.
- Grice, Andrew (27 June 2010). "Cameron digs in over the Falklands". The Independent. London.
- "Special relationship between UK and US is over, MPs say". BBC News. 28 March 2010. Retrieved 28 March 2010.
- "Foreign Affairs Committee: Press Notice: Global Security: UK-US relations". Press release. UK Parliament. 28 March 2010. Retrieved 28 March 2010.
The UK and US have a close and valuable relationship not only in terms of intelligence and security but also in terms of our profound and historic cultural and trading links and commitment to freedom, democracy and the rule of law. But the use of the phrase 'the special relationship' in its historical sense, to describe the totality of the ever-evolving UK-US relationship, is potentially misleading, and we recommend that its use should be avoided.
- Lucy Cockcroft, Church of England criticises 'special relationship' between Britain and US, Telegraph.co.uk, 7 April 2010.
- "AFP". Google. 11 May 2010. Retrieved 14 November 2010.
- Foreign Secretary William Hague, Washington meeting press conference, Foreign and Commonwealth Office, 14 May 2010.
- Knickerbocker, Brad (12 June 2010). "Obama, Cameron dampen US-British prickliness on BP Gulf oil spill". The Christian Science Monitor. Retrieved 12 October 2017.
- "Transcript of Diane Sawyer's Interview with the New Prime Minister". ABC. Retrieved 21 July 2010.
- Phillips, Melanie (22 July 2010). "A strain across the (oily) pond". USA Today. Retrieved 12 October 2017.
- the CNN Wire Staff (20 July 2010). "Obama, Cameron blast release of Lockerbie bomber". CNN. Retrieved 20 July 2010.
- Chapman, James (20 July 2010). "Cameron calls for end to fixation with US special relationship as he makes his White House debut". Daily Mail. London. Retrieved 21 July 2010.
- Poirier, Agnès (11 January 2011). "France, America's special friend". The Guardian. Retrieved 12 October 2017.
- Shipman, Tim (11 January 2011). "France is our biggest ally, declares Obama: President's blow to Special Relationship with Britain". Daily Mail. Retrieved 12 October 2017.
- "Queen to roll out red carpet for Obamas". AFP (via Yahoo News). 22 May 2011. Archived from the original on 24 May 2011. Retrieved 25 May 2011.
- "US President Barack Obama addressing MPs and peers". BBC News. 22 May 2011. Retrieved 25 May 2011.
- "President Obama: Now is time for US and West to lead". BBC News. 22 May 2011. Retrieved 25 May 2011.
- Sarkozy: We are stronger together, BBC, Wednesday, 26 March 2008
- Roberts, Bob. Bush Pulls Out of Speech to Parliament. Daily Mirror. 17 November 2003.
- Russell, Benjamin (9 September 2013). "Special relationship is safe... 'US has no better partner than UK', says John Kerry".
- Finamore, Emma (4 January 2015). "Obama likes to call me 'bro' sometimes, says Cameron". Independent.co.uk. Retrieved 5 January 2015.
- CNN, Allie Malloy and Catherine Treyz. "Obama admits worst mistake of his presidency". CNN. Retrieved 2016-04-16.
- Bryant, Nick. "How did Obama and Cameron fall out?". BBC News. Retrieved 2016-04-16.
- Stewart, Heather (29 December 2016). "Theresa May's criticism of John Kerry Israel speech sparks blunt US reply" – via The Guardian.
- "Theresa May is sidelined at G20 as Obama says UK is at 'the back of the queue' for trade deal". 4 September 2016.
- Ishaan, Tharoor (July 14, 2016). "Britain's new top diplomat once likened Hillary Clinton to 'a sadistic nurse in a mental hospital'". www.washingtonpost.com. Washington Post. Retrieved November 30, 2017.
- Robert Moore (14 July 2016). "Boris Johnson's appointment as Foreign Secretary has not gone down well in the United States". ITV News. Retrieved 14 July 2016.
- "Obama: Merkel was my closest ally". The Local. 15 November 2016.
- Lanktree, Graham (November 27, 2017). "OBAMA, NOT DONALD TRUMP, MAY BE INVITED TO ROYAL WEDDING OF PRINCE HARRY AND MEGHAN MARKLE". www.newsweek.com. Newsweek. Retrieved November 30, 2017.
- "Theresa May in US for President Trump talks". 27 January 2017 – via www.bbc.co.uk.
- editor, Patrick Wintour Diplomatic (2 February 2017). "Trump's focus on UK trade could sideline EU, Democrats fear" – via The Guardian.
- "Pressure grows on May as a million people sign anti-Trump petition over 'Muslim ban'". 29 January 2017.
- "Theresa May fails to condemn Donald Trump on refugees". 28 January 2017 – via bbc.com.
- "Theresa May is at heart of a political storm over her 'weak' response to Trump's Muslim ban".
- "British PM Theresa May faces tough lesson over Trump's U.S. entry ban".
- "Boris Johnson faces accusations that Theresa May was told the 'Muslim ban' was coming". 30 January 2017.
- McCann, Kate (1 February 2017). "Theresa May rejects calls to block Donald Trump's state visit in fierce exchange with Jeremy Corbyn". The Daily Telegraph. Retrieved 2 February 2017.
- Castle, Stephen; Ramzy, Austin (January 12, 2018). "Trump Won't Visit London to Open Embassy. His U.K. Critics Say He Got the Message." www.cbsnews.com. CBS News. Retrieved January 12, 2018.
- "A petition to stop Donald Trump's planned visit to the U.K. has surpassed a million signatures".
- "Trump state visit plan 'very difficult' for Queen". 31 January 2017 – via bbc.com.
- "Nationwide protests in the UK over Trump's Muslim ban".
- "Ex Cabinet minister tells Government to consider cancelling Trump state visit". 30 January 2017.
- "Theresa May will find herself as hated as Trump if she keeps sacrificing our ethics for trade deals". 30 January 2017.
- "May says Trump state visit will go ahead no matter how many people sign a petition against it".
- Munzenrieder, Kyle (October 11, 2017). "Donald Trump Won't Be Meeting Queen Elizabeth Anytime Soon". www.wmagazine.com. W Magazine. Retrieved November 30, 2017.
- Ross, Tim; Talev, Margaret (January 24, 2018). "Inside the Dysfunctional Relationship of Donald Trump and Theresa May". www.bloomberg.com. Boomberg News. Retrieved February 18, 2018.
- Staff writer (29 November 2017). "Trump wrong to share far-right videos - PM". BBC News. Retrieved 29 November 2017.
- "Trump hits out at UK PM Theresa May after far-right video tweets". www.bbc.com. BBC. November 29, 2017. Retrieved November 30, 2017.
- Borger, Julian (November 30, 2017). "Special relationship? Theresa May discovers she has no friend in Donald Trump". www.theguardian.com. The Guardian. Retrieved November 30, 2017.
- Sharman, Jon (November 30, 2017). "Donald Trump attacks Theresa May, telling her to focus on 'radical Islamic terrorism' - not his Britain First tweets". www.independent.co.uk. The Independent. Retrieved November 30, 2017.
- Lawless, Jim (November 30, 2017). "Trump tweets strain US-Britain 'special relationship'". www.abc.go.com. ABC News. Retrieved November 30, 2017.
- "How Trump-May Twitter spat will affect the special relationship". www.theweek.co.uk. The Week. November 30, 2017. Retrieved November 30, 2017.
- Korte, Gregory (November 30, 2017). "Trump's retweets of anti-Muslim videos test 'special relationship' with U.K." www.11alive.com. WXIA-TV. USA Today. Retrieved November 30, 2017.
- John, Tara (November 30, 2017). "A Trio of Trump Retweets Strains the Special Relationship". www.time.com. Time Magazine. Retrieved November 30, 2017.
- McCafferty, Ross (November 30, 2017). "Will Donald Trump's tweets affect the Special Relationship?". www.scotsman.com. The Scotsman. Retrieved November 30, 2017.
- Penny, Thomas (November 30, 2017). "Balance of Power: Trump Rattles the Special Relationship". www.bloomberg.com. Bloomberg. Retrieved November 30, 2017.
- "Now Trump Attacks May As The 'Special Relationship' Crumbles". www.esquire.co.uk. Esquire (UK). November 30, 2017. Retrieved November 30, 2017.
- @realDonaldTrump (January 18, 2018). "Reason I canceled my trip to London is that I am not a big fan of the Obama Administration having sold perhaps the best located and finest embassy in London for "peanuts," only to build a new one in an off location for 1.2 billion dollars. Bad deal. Wanted me to cut ribbon-NO!" (Tweet) – via Twitter.
- "Trump gets facts wrong as he nixes London visit". www.cbsnews.com. CBS News. January 12, 2018. Retrieved January 12, 2018.
- Langfitt, Frank (January 12, 2017). "Now Trump Attacks May As The 'Special Relationship' Crumbles". www.npr.org. NPR. Retrieved January 12, 2017.
- Watts, Joe (February 5, 2018). "Theresa May responds to Trump's NHS attack: 'I'm proud of free health service'". www.independent.co.uk. The Independent. Retrieved February 18, 2018.
- King, Laura (February 5, 2018). "Trump stirs a hornet's nest in Britain by blasting its National Health Service". www.latimes.com. Los Angeles Times. Retrieved February 18, 2018.
- @realDonaldTrump (February 5, 2018). "The Democrats are pushing for Universal HealthCare while thousands of people are marching in the UK because their U system is going broke and not working. Dems want to greatly raise taxes for really bad and non-personal medical care. No thanks!" (Tweet) – via Twitter.
- Farley, Robert. "Trump on Britain's Universal Health Care". Retrieved February 8, 2018.
- Bates, Daniel; Little, Alison (February 6, 2018). "'Proud' Theresa May BLASTS Donald Trump for attacking the NHS". www.express.co.uk. Daily Express. Retrieved February 18, 2018.
- Editorial – Bill and Tony – New Best Friends', The Guardian (30 May 1997), p. 18.
- Harry Blaney III and Julia Moore, 'Britain Doubtful of American Intentions, Poll Shows', Dallas Morning News (17 February 1986), p. 15A.
- Blaney and Moore, 'Britain Doubtful', p. 15A.
- Blaney and Moore, ‘Britain Doubtful’, p. 15A.
- Fiona Thompson, 'US Policies Breed Special Relationship Of Resentment / Increasing criticism of British Premier Thatcher's support for Reagan administration', Financial Times (11 November 1986).
- Nihal Kaneira, 'Canada still tops list of US allies – poll', Gulf News (21 September 1997).
- Tunku Varadarajan, 'Britain's place in US hearts secure', The Times (18 September 1997), p. 19.
- Kaneira, 'poll'.
- Varadarajan, 'Britain's place secure', p. 19.
- ‘(Mis)remembrances of Empire’, Wall Street Journal (29 August 1997), p. 6.
- Orya Sultan Halisdemir, ‘British deny they are US puppets’, Turkish Daily News (14 February 1998).
- "The most comprehensive public opinion coverage ever provided for a presidential election". Rasmussen Reports. Retrieved 14 November 2010.
- Populus poll 2–4 June 2006
- Stand up to US, voters tell Blair, The Guardian (25 July 2006).
- "The ties that bind". The Economist (published 26 July 2008). 24 July 2008. p. 66.
- Amanda Bowman, What Britain's changing of the guard will mean for the U.S., Washington Examiner (7 April 2010).
- Americans Overwhelmingly Support the Special Relationship Between the US and the UK, Atlantic Bridge, 2010.
- Obama and the 'Special Relationship', Wall Street Journal, 19 May 2010.
- Weaver, Matthew (5 February 2008). "Prince Andrew rebukes US over Iraq war". The Guardian. London. Retrieved 23 May 2010.
- Robbins, James (21 February 2008). "Miliband's apology over 'rendition'". BBC News. Retrieved 23 May 2010.
- O'Donoghue, Gary (21 February 2008). "Political fall-out from rendition". BBC News. Retrieved 23 May 2010.
- "In full: Miliband rendition statement". BBC News. 21 February 2008. Retrieved 23 May 2010.
- Ambassador Tuttle on the Extradition Treaty (12 July 2006) Embassy of the United States. Retrieved 22 March 2009.
- Meg Hillier, What is the US-UK Extradition Act? (24 November 2006). Retrieved 22 March 2009.
- "MPs angry at 'unfair' extradition". BBC News. 12 July 2006. Retrieved 23 May 2010.
- Silverman, Jon (22 February 2006). "Extradition 'imbalance' faces Lords' test". BBC News. Retrieved 23 May 2010.
- John Hardy, Letter: Bilateral extradition treaty is not equal The Times (22 January 2009).
- Archer, Graeme. "US should do more to tackle far-Right extremism, Theresa May suggests as she issues stinging rebuke to Donald Trump". The Daily Telegraph. London. Retrieved 23 May 2010.
- Blair, William G. (14 December 1984). "U.S. Judge Rejects Bid For Extradition Of I.R.A. Murderer". The New York Times. Retrieved 23 May 2010.
- Torres, Carlos (30 September 2006). "''Senate Unanimously Ratifies US/UK Extradition Treaty'". Bloomberg. Retrieved 14 November 2010.
- 'Suspend the treaty now', The Spectator (8 July 2006).
- The Court That Tries American's Patience, The Daily Telegraph report
- Peter Clegg, From Insiders to Outsiders: Caribbean Banana Interests in the New International Trading Framework
- EU report on steel tariffs.
- Peter Marsh and Robert Shrimsley, 'Blair condemns Bush's tariffs on steel imports', The Financial Times (7 March 2002), p. 3.
- Arnold, Guy. America and Britain: Was There Ever a Special Relationship? (London: Hurst, 2014).
- Bartlett, Christopher John. "The special relationship": a political history of Anglo-American relations since 1945 (Longman Ltd, 1992).
- Campbell, Duncan. Unlikely Allies: Britain, America and the Victorian Origins of the Special Relationship (2007). emphasizes 19th century roots. contents
- Coker, Christopher. "Britain and the new world order: the special relationship in the 1990s," International Affairs (1992): 407-421. in JSTOR
- Colman, Jonathan. A 'Special Relationship'?: Harold Wilson, Lyndon B. Johnson and Anglo-American Relations' at the Summit, 1964-8 (Manchester University Press, 2004)
- DeBres, Karen. "Burgers for Britain: A cultural geography of McDonald's UK," Journal of Cultural Geography (2005) 22#2 pp: 115-139.
- Dobson, Alan and Steve Marsh. “Anglo-American Relations: End of a Special Relationship?” International History Review 36:4 (August 2014): 673-697. DOI: 10.1080/07075332.2013.836124. online review argues it is still in effect
- Dobson, Alan J. The Politics of the Anglo-American Economic Special Relationship (1988)
- Dobson, Alan. "The special relationship and European integration." Diplomacy and Statecraft (1991) 2#1 79-102.
- Dumbrell, John. A Special Relationship: Anglo-American Relations in the Cold War and After (2001)
- Dumbrell, John. "The US–UK Special Relationship: Taking the 21st-Century Temperature." The British Journal of Politics & International Relations (2009) 11#1 pp: 64-78. online
- Edwards, Sam. Allies in Memory: World War II and the Politics ofTransatlantic Commemoration, c. 1941–2001 (Cambridge UP, 2015).
- Glancy, Mark. "Temporary American citizens? British audiences, hollywood films and the threat of Americanization in the 1920s." Historical Journal of Film, Radio and Television (2006) 26#4 pp 461–484.
- Hendershot, Robert M. Family Spats: Perception, Illusion, and Sentimentality in the Anglo-American Special Relationship (2008).
- Holt, Andrew. The Foreign Policy of the Douglas-Home Government: Britain, the United States and the End of Empire (Springer, 2014).
- Louis, William Roger, and Hedley Bull. The special relationship: Anglo-American relations since 1945 (Oxford UP, 1986).
- Lyons, John F. America in the British Imagination: 1945 to the Present (Palgrave Macmillan, 2013).
- McLaine, Ian, ed. A Korean Conflict: The Tensions Between Britain and America (IB Tauris, 2015).
- Malchow, H.L. Special Relations: The Americanization of Britain? (Stanford University Press; 2011) 400 pages; explores American influence on the culture and counterculture of metropolitan London from the 1950s to the 1970s, from "Swinging London" to black, feminist, and gay liberation. excerpt and text search
- Reynolds, David. Rich relations: the American occupation of Britain, 1942-1945 (1995)
- Reynolds, David. "A'special relationship'? America, Britain and the international order since the Second World War." International Affairs (1985): 1-20.
- Riddell, Peter. Hug them Close: Blair, Clinton, Bush and the ‘Special Relationship’ (Politicos, 2004).
- Spelling, Alex. "‘A Reputation for Parsimony to Uphold’: Harold Wilson, Richard Nixon and the Re-Valued ‘Special Relationship’ 1969–1970." Contemporary British History 27#2 (2013): 192-213.
- Vickers, Rhiannon. "Harold Wilson, the British Labour Party, and the War in Vietnam." Journal of Cold War Studies 10#2 (2008): 41-70. online
- Wevill, Richard. Diplomacy, Roger Makins and the Anglo-American Relationship (Ashgate Publishing, Ltd., 2014).
|Wikimedia Commons has media related to Anglo-American relations.|
- June 2002, Policy Review, The State of the Special Relationship
- November 2006, The Times, State Department Official disparages the relationship
- May 2007, Professor Stephen Haseler (Global Policy Institute, London Metropolitan University) has written a book examining the history of the special relationship from a British perspective entitled Sidekick: Bulldog to Lapdog, British Global Strategy from Churchill to Blair
- February 2009, The Guardian, Presidents and prime ministers: a look back at previous first meetings of US and UK leaders
Epidemiology of Trauma
From a medical and public health standpoint, trauma is a “disease” rather than simply an injury and, as such, is no different from malaria, heart disease, or cancer. It has degrees of severity and morbidity/mortality statistics just like other diseases, and it is important to understand the epidemiology of trauma as it applies to different populations.
Descriptive epidemiology looks at the distribution of the disease over time, place, and different subgroups of people. Analytical epidemiology looks at the causes of trauma, including the environment in which it occurs; that environment can be physical or sociocultural, and how these factors interact determines the epidemiology of trauma. Knowing these things can inform public policy and laws designed to protect people from traumatic injury.
The transfer of physical/mechanical energy accounts for three-fourths of all traumas. The rest are due to chemicals, heat, electricity, and ionizing radiation. Injuries become severe and life-threatening when the energy transferred falls outside the range of human tissue tolerance.
In the 1960s, William Haddon first framed trauma as a public health issue. He divided trauma into three phases: 1) pre-event; 2) event; and 3) post-event. He devised the “Haddon Matrix”, which pairs these phases with contributing factors. Pre-event factors include avoidance of alcohol, the use of proper restraints, and speed limits. Event factors include safety belts, airbag deployment, and impact-absorbing barriers. Post-event factors include bystander care, assessment of the vehicle, and access to emergency care.
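The Haddon Matrix described above is essentially a lookup table of phases and countermeasures. The sketch below shows one way to represent it in code; the factor names and entries are illustrative examples taken from the text, not an exhaustive or authoritative matrix.

```python
# Illustrative sketch of a Haddon Matrix as a nested dictionary.
# The phase/factor entries are examples drawn from the text above.
haddon_matrix = {
    "pre-event": {
        "human": ["avoid alcohol", "use proper restraints"],
        "environment": ["speed limits"],
    },
    "event": {
        "vehicle": ["safety belts", "airbag deployment"],
        "environment": ["impact-absorbing barriers"],
    },
    "post-event": {
        "human": ["bystander care"],
        "environment": ["access to emergency care"],
    },
}

def countermeasures(phase):
    """Return every countermeasure listed for one phase of the matrix."""
    return [item for factor in haddon_matrix[phase].values() for item in factor]

print(countermeasures("pre-event"))
# → ['avoid alcohol', 'use proper restraints', 'speed limits']
```

The value of the matrix form is that it forces each phase to be considered separately, which is how Haddon argued prevention opportunities are found.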
Epidemiology of Trauma in the US
Injuries account for a quarter of all deaths in the US per year across all age groups, and injury is the leading cause of death in children. About 150,000 people die per year as a result of an injury, which works out to about 54 per 100,000 individuals in the US, or roughly 400 injury deaths per day, 50 of them in children. Eighty percent of all deaths in people aged 15-24 are attributable to injuries.
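The per-100,000 and per-day figures above follow from the 150,000 annual deaths by simple arithmetic. The population value below is an assumption chosen to match the quoted rate of about 54 per 100,000.

```python
# Crude rate arithmetic behind the quoted figures.
def crude_rate_per_100k(deaths, population):
    """Crude annual death rate per 100,000 population."""
    return deaths / population * 100_000

annual_deaths = 150_000
us_population = 280_000_000  # assumed US population for the period cited

print(round(crude_rate_per_100k(annual_deaths, us_population)))  # → 54
print(round(annual_deaths / 365))  # deaths per day → 411
```

The roughly 411 deaths per day is consistent with the "400 injury deaths per day" cited in the text.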
Deaths have declined somewhat over the years because of better restraint laws and the use of airbags, and improved workplace safety has saved lives as well. Homicide rates fluctuate over time, while suicide deaths have remained relatively stable.
Intentional and unintentional injury deaths account for more than 30 percent of years of potential life lost before age 65. This means that injuries cause more premature deaths than cancer, HIV, or heart disease.
Most deaths occur within minutes of the injury, whether at the scene, en route to the hospital, or in the emergency department. Most are from central nervous system/head injuries; the immediate deaths are from massive hemorrhage, neurological injury, or both. Even the best emergency medical systems cannot prevent these deaths. Deaths from massive infection or end-organ damage have become less common because of better late-phase trauma care.
The best way to reduce deaths from trauma is prevention: airbags, seatbelts, and better car design. Improving injured patients' access to higher levels of emergency care, such as Level I and II trauma centers, also helps. Research into infection, hemorrhage, and trauma itself will eventually reduce the number of deaths, especially delayed deaths.
Deaths are only part of the injury burden. More than 1.5 million trauma victims are hospitalized in the US each year and survive to discharge. About 28 million more people are treated and discharged from emergency rooms or urgent care centers. Injuries make up 6 percent of all hospital discharges and 30 percent of all emergency department visits per year. The burden of these injuries is greater than it appears because they lead to disability and a decreased quality of life.
In dollar terms, fatal and nonfatal injuries cost the overall economy about 406 billion dollars in any given year. Deaths account for a disproportionate share: only 1 percent of injuries but 30 percent of total costs. The remaining 70 percent relates to the treatment of nonfatal injuries, including hospital costs and other injury-related healthcare. A total of 41 percent of costs stem from permanent and temporary disability. These figures do not include the losses borne by family and loved ones as part of ongoing trauma-related expenses.
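The disproportion noted above can be made explicit: if deaths are 1 percent of injuries but 30 percent of costs, then the average fatal case costs roughly 42 times the average nonfatal case. This is pure arithmetic on the shares quoted in the text, not an independent cost estimate.

```python
# Relative per-case cost implied by the quoted shares.
fatal_case_share, fatal_cost_share = 0.01, 0.30

fatal_index = fatal_cost_share / fatal_case_share                  # ≈ 30: fatal cases' cost share per unit of case share
nonfatal_index = (1 - fatal_cost_share) / (1 - fatal_case_share)   # ≈ 0.71 for nonfatal cases

print(round(fatal_index / nonfatal_index, 1))  # → 42.4
```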
Injuries and deaths from trauma are disproportionately a problem of young males and older individuals. Seventy percent of deaths and half of all nonfatal injuries occur in males. In every age group except 0-9, the rate of fatal injury in males is more than twice that in females; for nonfatal injuries, males are only 1.3 times more likely to be affected. This trend reverses in the elderly, where females have 1.3 times the incidence of nonfatal injury compared with men.
Trauma deaths peak in two groups: ages 16-40 and those over 65. People under the age of 45 account for 53 percent of all injury fatalities, half of all hospitalizations, and about 80 percent of emergency department visits. Hospitalizations and nonfatal injuries follow the same bimodal pattern, especially among males.
Elderly people are less likely to be injured but more likely to be hospitalized and to die from their injuries. The injury death rate in the 65-and-older population is 113 per 100,000 persons, and for those older than 75 it is 169 per 100,000, so the elderly are overrepresented among injury fatalities. Approximately 14 percent of their injury deaths were traffic-related, and less than one percent were from firearms. Burns account for two percent of injury-related deaths and 1.4 percent of nonfatal events reported to the Centers for Disease Control and Prevention.
In 2004 alone, there were 167,000 deaths due to trauma, 1.9 million hospital discharges secondary to trauma, 3.1 million emergency room visits due to injuries, and 35 million initial visits to private physician clinics for injury-related events.
Statistics show that 93 percent of nonfatal injuries were unintentional, compared with 68 percent of fatal injuries; thirty percent of injury deaths were related to violence. In 2007 alone, about 18,000 people were killed by homicide, and more than 34,000 deaths were due to suicide, which accounted for 66 percent of all violent deaths.
Injury in the workplace is common. About 5,000 workplace fatalities were reported in 2008, a rate of 3.6 per 100,000 full-time workers per year. Transportation-related deaths accounted for 40 percent of all workplace deaths, assaults and violence for 16 percent, contact with equipment for 18 percent, and falls for 13 percent. A total of 10 percent of workplace deaths were homicides, 80 percent of which involved firearms. Five percent of all workplace deaths were self-inflicted.
The Bureau of Labor Statistics reported about 4.6 million nonfatal work-related injuries, or 3.6 per 100 workers. A total of 71 percent of these injuries occurred in the service industry, and half of all injuries produced some kind of disability.
Distribution of Injuries
It is important to catalog injuries by nature and severity, and several systems exist for doing so. ICD-10 codes are important in cataloging the various injuries and their nature. Death certificate data are the best way to identify injury-related deaths, although there can be variation in how these injuries are recorded.
The National Trauma Data Bank is another way to categorize injuries and deaths from trauma. It uses a non-scientific sample of trauma centers that voluntarily submit data on the trauma victims they see. Like death certificates, these voluntary submissions vary in completeness and accuracy. Some information is gleaned from coroners' reports and autopsies. Autopsy results are imperfect, but they indicate that neurological injury is the cause of death in many cases: CNS-related deaths account for 40-50 percent of all fatalities in fatal accidents, and hemorrhage is the second leading cause, accounting for 30-35 percent of deaths.
According to the CDC, neurological trauma accounts for the most trauma deaths and is a serious public health issue in the US today. Traumatic brain injury, which can be mild or severe (and many mild cases are missed), causes many deaths and a great deal of disability. There are about 1.7 million emergency room visits, hospitalizations, and deaths directly related to traumatic brain injury each year. Traumatic brain injury accounts for a third of all injury-related deaths, or about 52,000 deaths per year.
The distributions of nonfatal and fatal injuries differ from one another. Many injuries involve body regions where injury is rarely lethal. Even among nonfatal injuries requiring hospitalization, only a fourth have Abbreviated Injury Scale scores of 3 or more on the 1-6 scale.
Injuries to the upper and lower extremities are the leading cause of emergency department visits and hospitalizations among injured people. They account for over half of all nonfatal injuries and 47 percent of injury hospitalizations, and more than a third of all moderately severe or severe injuries (Abbreviated Injury Scale score of 3 or more). Recovery can be long and costly, and even the best treatment can leave the individual with disability and permanent impairment.
Head injuries are the second most common type of nonfatal injury requiring hospitalization, accounting for 10-15 percent of all injury hospitalizations. Mild head injuries are usually treated on an outpatient basis and make up 2-5 percent of all emergency room visits.
About 80 percent of these patients are treated and released from the emergency department. The actual number of head injuries may be under-represented because many are treated at outpatient centers and urgent care facilities. Estimates of the total range from 152 to 367 head injuries per 100,000 individuals. Most head injuries are mild, but about 70,000 to 90,000 per year are classified as severe and can result in long-term disability. Head injuries from recreational activities are not uncommon, accounting for 300,000 injuries per year.
Spinal cord injuries represent a small proportion of injuries from trauma. They account for 10,000 to 15,000 hospitalizations per year. Motor vehicle injuries make up 30-60 percent of all spinal cord injuries. Falls account for 20-30 percent of spinal cord injuries. About 5-10 percent of all spinal cord injuries are from diving accidents. There is a huge financial cost incurred as a result of spinal cord injuries, many of which are nonfatal but result in major disability.
Distribution of Injuries as related to Geographic Location
Injury rates vary across regions of the country and between rural and urban areas. Unintentional injury rates are highest in rural populations, where homicide accounts for many times more deaths than in suburban and urban populations. Death rates for unintentional injuries are greater in the Southern and Western states; suicide rates are higher in the West, and homicide rates are higher in the South. When factors such as access to care, educational climate, and economics are accounted for, the differences in death rates among geographic locations disappear.
Things that Influence Results
A number of confounding factors play into the results of trauma assessments and epidemiology. Race, ethnicity, socioeconomic status, culture, access to healthcare, and alcohol and drug abuse cannot all be controlled for when analyzing trauma data, so caution must be observed when interpreting it.
A great deal of data is missing on pre-hospital and post-hospital care of injuries, including rehabilitation. Many pre-hospital databases have been developed over the years, but only 26 states supply data to the National Emergency Medical Services Information System (NEMSIS) database. At least 12 states are considering legislation requiring contribution to this system, which would make the data more accurate, and many professional organizations are pushing for EMS systems to provide NEMSIS-compliant data.
Long-term care data is essential for recognizing the long-term financial implications of trauma. The Uniform Data System for Medical Rehabilitation (UDSMR) evaluates the effectiveness of rehabilitation programs for trauma patients and has provided the most comprehensive data on rehabilitation from injuries. These data do not, however, translate well into injury-prevention programs.
Death certificate data does not always provide accurate information for all injury-related deaths. Medical examiner and coroner reports help augment these data, but they are not a hundred percent accurate either. Autopsy data is imperfect and tends to be skewed toward homicide-related trauma. Hospital data is also skewed and often does not include patients who were treated and released for their injuries. Trauma registries are skewed toward major trauma and exclude patients who survive but stay in the hospital less than three days. There is a great need for a single database linking prehospital care, emergency department care, hospital care, and rehabilitative care.
Injury imposes a heavy burden on society at many levels, including morbidity, mortality, and cost of care. What is not widely recognized is that many of these injuries are completely preventable using specific strategies. Yet there is no public outcry about injury deaths comparable to that over illnesses such as cancer, HIV, and heart disease, diseases that are much discussed but contribute far less to the burden on society than trauma-related injuries and deaths.
Most injuries are unintentional, and most unintentional injury deaths occur in the elderly, among whom suicide greatly outnumbers homicide. The risk of death varies greatly by occupation. The US compares poorly with other countries when it comes to firearm-related deaths. Elderly women make up the vast majority of injury hospitalizations, while teens and young adults make up the majority of emergency department visits, with most injuries occurring around the home.
There was a slight reduction in injury-related deaths between 1985 and 2004. Some causes of death are increasing while others are decreasing. Injury morbidity has dropped in every population except the elderly. Alcohol and drug use continues to be associated with several types of trauma.
Trauma remains a major public health issue. Preventing and better treating these injuries will continue to be the focus of healthcare providers across the nation, with priority given to prevention programs. Accurate data remains the focus of agencies trying to determine who gets injured and how to prevent these injuries from occurring.
Wolfgang Amadeus Mozart
Wolfgang Amadeus Mozart (German: [ˈvɔlfɡaŋ amaˈdeus ˈmoːtsaʁt]; 27 January 1756 – 5 December 1791), baptised as Johannes Chrysostomus Wolfgangus Theophilus Mozart, was a prolific and influential composer of the Classical era.
Mozart showed prodigious ability from his earliest childhood. Already competent on keyboard and violin, he composed from the age of five and performed before European royalty. At 17, he was engaged as a court musician in Salzburg, but grew restless and travelled in search of a better position, always composing abundantly. While visiting Vienna in 1781, he was dismissed from his Salzburg position. He chose to stay in the capital, where he achieved fame but little financial security. During his final years in Vienna, he composed many of his best-known symphonies, concertos, and operas, and portions of the Requiem, which was largely unfinished at the time of his death. The circumstances of his early death have been much mythologized. He was survived by his wife Constanze and two sons.
He composed over 600 works, many acknowledged as pinnacles of symphonic, concertante, chamber, operatic, and choral music. He is among the most enduringly popular of classical composers, and his influence on subsequent Western art music is profound; Beethoven composed his own early works in the shadow of Mozart, and Joseph Haydn wrote that “posterity will not see such a talent again in 100 years.”
Family and childhood
Anonymous portrait of the child Mozart, possibly by Pietro Antonio Lorenzoni; painted in 1763 on commission from Leopold Mozart
Wolfgang Amadeus Mozart was born on 27 January 1756 to Leopold Mozart (1719–1787) and Anna Maria, née Pertl (1720–1778), at 9 Getreidegasse in Salzburg. This was the capital of the Archbishopric of Salzburg, an ecclesiastic principality in what is now Austria, then part of the Holy Roman Empire. He was the youngest of seven children, five of whom died in infancy. His elder sister was Maria Anna (1751–1829), nicknamed “Nannerl”. Mozart was baptized the day after his birth at St. Rupert’s Cathedral. The baptismal record gives his name in Latinized form as Joannes Chrysostomus Wolfgangus Theophilus Mozart. He generally called himself “Wolfgang Amadè Mozart” as an adult, but his name had many variants.
Leopold Mozart, a native of Augsburg, was a minor composer and an experienced teacher. In 1743, he was appointed as fourth violinist in the musical establishment of Count Leopold Anton von Firmian, the ruling Prince-Archbishop of Salzburg. Four years later, he married Anna Maria in Salzburg. Leopold became the orchestra’s deputy Kapellmeister in 1763. During the year of his son’s birth, Leopold published a violin textbook, Versuch einer gründlichen Violinschule, which achieved success.
When Nannerl was seven, she began keyboard lessons with her father while her three-year-old brother looked on. Years later, after her brother’s death, she reminisced:
He often spent much time at the clavier, picking out thirds, which he was ever striking, and his pleasure showed that it sounded good…. In the fourth year of his age his father, for a game as it were, began to teach him a few minuets and pieces at the clavier…. He could play it faultlessly and with the greatest delicacy, and keeping exactly in time…. At the age of five, he was already composing little pieces, which he played to his father who wrote them down.
These early pieces, K. 1–5, were recorded in the Nannerl Notenbuch.
Biographer Maynard Solomon notes that, while Leopold was a devoted teacher to his children, there is evidence that Mozart was keen to progress beyond what he was taught. His first ink-spattered composition and his precocious efforts with the violin were of his own initiative and came as a surprise to his father. Leopold eventually gave up composing when his son’s musical talents became evident. In his early years, Mozart’s father was his only teacher. Along with music, he taught his children languages and academic subjects.
During Mozart’s youth, his family made several European journeys in which he and Nannerl performed as child prodigies. These began with an exhibition, in 1762, at the court of the Prince-elector Maximilian III of Bavaria in Munich, and at the Imperial Court in Vienna and Prague. A long concert tour spanning three and a half years followed, taking the family to the courts of Munich, Mannheim, Paris, London, The Hague, again to Paris, and back home via Zurich, Donaueschingen, and Munich.
During this trip, Mozart met a number of musicians and acquainted himself with the works of other composers. A particularly important influence was Johann Christian Bach, whom Mozart visited in London in 1764 and 1765. The family again went to Vienna in late 1767 and remained there until December 1768.
These trips were often difficult and travel conditions were primitive. The family had to wait for invitations and reimbursement from the nobility and they endured long, near-fatal illnesses far from home: first Leopold (London, summer 1764) then both children (The Hague, autumn 1765).
After one year in Salzburg, Leopold and Mozart set off for Italy, leaving Mozart’s mother and sister at home. This travel lasted from December 1769 to March 1771. As with earlier journeys, Leopold wanted to display his son’s abilities as a performer and a rapidly maturing composer. Mozart met Josef Mysliveček and Giovanni Battista Martini in Bologna and was accepted as a member of the famous Accademia Filarmonica. In Rome, he heard Gregorio Allegri’s Miserere twice in performance in the Sistine Chapel and wrote it out from memory, thus producing the first unauthorized copy of this closely guarded property of the Vatican.
In Milan, Mozart wrote the opera Mitridate, re di Ponto (1770), which was performed with success and led to further opera commissions. He later returned with his father twice to Milan (August–December 1771; October 1772 – March 1773) for the composition and premieres of Ascanio in Alba (1771) and Lucio Silla (1772). Leopold hoped these visits would result in a professional appointment for his son in Italy, but these hopes were never realized.
Toward the end of the final Italian journey, Mozart wrote the first of his works to be still widely performed today, the solo motet Exsultate, jubilate, K. 165.
1773–77: Employment at the Salzburg court
After finally returning with his father from Italy on 13 March 1773, Mozart was employed as a court musician by the ruler of Salzburg, Prince-Archbishop Hieronymus Colloredo. The composer had a great number of friends and admirers in Salzburg and had the opportunity to work in many genres, including symphonies, sonatas, string quartets, masses, serenades, and a few minor operas. Between April and December 1775, Mozart developed an enthusiasm for violin concertos, producing a series of five (the only ones he ever wrote), which steadily increased in their musical sophistication. The last three—K. 216, K. 218, K. 219—are now staples of the repertoire. In 1776 he turned his efforts to piano concertos, culminating in the E-flat concerto K. 271 of early 1777, considered by critics to be a breakthrough work.
Despite these artistic successes, Mozart grew increasingly discontented with Salzburg and redoubled his efforts to find a position elsewhere. One reason was his low salary, 150 florins a year; Mozart longed to compose operas, and Salzburg provided only rare occasions for these. The situation worsened in 1775 when the court theater was closed, especially since the other theater in Salzburg was largely reserved for visiting troupes.
Two long expeditions in search of work interrupted this long Salzburg stay: Mozart and his father visited Vienna from 14 July to 26 September 1773, and Munich from 6 December 1774 to March 1775. Neither visit was successful, though the Munich journey resulted in a popular success with the premiere of Mozart’s opera La finta giardiniera.
1777–78: The Paris journey
In August 1777, Mozart resigned his Salzburg position and, on 23 September, ventured out once more in search of employment, with visits to Augsburg, Mannheim, Paris, and Munich.
Mozart became acquainted with members of the famous orchestra in Mannheim, the best in Europe at the time. He also fell in love with Aloysia Weber, one of four daughters in a musical family. There were prospects of employment in Mannheim, but they came to nothing, and Mozart left for Paris on 14 March 1778 to continue his search. One of his letters from Paris hints at a possible post as an organist at Versailles, but Mozart was not interested in such an appointment. He fell into debt and took to pawning valuables. The nadir of the visit occurred when Mozart’s mother was taken ill and died on 3 July 1778. There had been delays in calling a doctor—probably, according to Halliwell, because of a lack of funds. Mozart stayed with Melchior Grimm, who, as personal secretary of the Duke d’Orléans, lived in his mansion.
Writing but a few hours after the death of his mother, Mozart slipped in this nasty comment about the recently deceased Voltaire: “Now I have a piece of news for you which you may already know, namely that the godless archrogue Voltaire, so to speak, has kicked the bucket like a dog, like a beast.” This probably stemmed from Mozart’s loyalty to the church and the royalty, both of which had often been the target of Voltaire’s pernicious remarks.
While Mozart was in Paris, his father was pursuing opportunities for his son back in Salzburg. With the support of local nobility, Mozart was offered a post as court organist and concertmaster with a yearly salary of 450 florins, but he was reluctant to accept. By that time Grimm and Mozart were no longer on good terms, and Mozart was sent away. After leaving Paris in September 1778 for Strasbourg, he tarried in Mannheim and Munich, still hoping to obtain an appointment outside Salzburg. In Munich, he again encountered Aloysia, now a very successful singer, but she was no longer interested in him. Mozart finally reached home on 15 January 1779 and took up the new position, but his discontent with Salzburg was undiminished.
Among the better known works that Mozart wrote on the Paris journey are the A minor piano sonata, K. 310/300d and the “Paris” Symphony (No. 31); these were performed in Paris on 12 and 18 June 1778.
1781: Departure for Vienna
In January 1781, Mozart’s opera Idomeneo premiered with “considerable success” in Munich. The following March, Mozart was summoned to Vienna, where his employer, Archbishop Colloredo, was attending the celebrations for the accession of Joseph II to the Austrian throne. Fresh from the adulation he had earned in Munich, Mozart was offended when Colloredo treated him as a mere servant and particularly when the archbishop forbade him to perform before the Emperor at Countess Thun’s for a fee equal to half of his yearly Salzburg salary. The resulting quarrel came to a head in May: Mozart attempted to resign and was refused. The following month, permission was granted but in a grossly insulting way: the composer was dismissed literally “with a kick in the arse”, administered by the archbishop’s steward, Count Arco. Mozart decided to settle in Vienna as a freelance performer and composer.
The quarrel with the archbishop went harder for Mozart because his father sided against him. Hoping fervently that he would obediently follow Colloredo back to Salzburg, Mozart’s father exchanged intense letters with his son, urging him to be reconciled with their employer. Mozart passionately defended his intention to pursue an independent career in Vienna. The debate ended when Mozart was dismissed by the archbishop, freeing himself both of his employer and his father’s demands to return. Solomon characterizes Mozart’s resignation as a “revolutionary step”, and it greatly altered the course of his life.
Mozart’s new career in Vienna began well. He performed often as a pianist, notably in a competition before the Emperor with Muzio Clementi on 24 December 1781, and he soon “had established himself as the finest keyboard player in Vienna”. He also prospered as a composer, and in 1782 completed the opera Die Entführung aus dem Serail (“The Abduction from the Seraglio”), which premiered on 16 July 1782 and achieved a huge success. The work was soon being performed “throughout German-speaking Europe”, and fully established Mozart’s reputation as a composer.
Near the height of his quarrels with Colloredo, Mozart moved in with the Weber family, who had moved to Vienna from Mannheim. The father, Fridolin, had died, and the Webers were now taking in lodgers to make ends meet. Aloysia, who had earlier rejected Mozart’s suit, was now married to the actor and artist Joseph Lange. Mozart’s interest shifted to the third Weber daughter, Constanze. The courtship did not go entirely smoothly; surviving correspondence indicates that Mozart and Constanze briefly separated in April 1782. Mozart faced a very difficult task in getting his father’s permission for the marriage. The couple were finally married on 4 August 1782 in St. Stephen’s Cathedral, the day before his father’s consent arrived in the mail.
The couple had six children, of whom only two survived infancy:
- Raimund Leopold (17 June – 19 August 1783)
- Karl Thomas Mozart (21 September 1784 – 31 October 1858)
- Johann Thomas Leopold (18 October – 15 November 1786)
- Theresia Constanzia Adelheid Friedericke Maria Anna (27 December 1787 – 29 June 1788)
- Anna Maria (died soon after birth, 16 November 1789)
- Franz Xaver Wolfgang Mozart (26 July 1791 – 29 July 1844)
In the course of 1782 and 1783, Mozart became intimately acquainted with the work of Johann Sebastian Bach and George Frideric Handel as a result of the influence of Gottfried van Swieten, who owned many manuscripts of the Baroque masters. Mozart’s study of these scores inspired compositions in Baroque style, and later influenced his personal musical language, for example in fugal passages in Die Zauberflöte (“The Magic Flute”) and the finale of Symphony No. 41.
In 1783, Mozart and his wife visited his family in Salzburg. His father and sister were cordially polite to Constanze, but the visit prompted the composition of one of Mozart’s great liturgical pieces, the Mass in C minor. Though not completed, it was premiered in Salzburg, with Constanze singing a solo part.
Mozart met Joseph Haydn in Vienna around 1784, and the two composers became friends. When Haydn visited Vienna, they sometimes played together in an impromptustring quartet. Mozart’s six quartets dedicated to Haydn (K. 387, K. 421, K. 428, K. 458, K. 464, and K. 465) date from the period 1782 to 1785, and are judged to be a response to Haydn’s Opus 33 set from 1781. Haydn in 1785 told Mozart’s father: “I tell you before God, and as an honest man, your son is the greatest composer known to me by person and repute, he has taste and what is more the greatest skill in composition.”
From 1782 to 1785 Mozart mounted concerts with himself as soloist, presenting three or four new piano concertos in each season. Since space in the theaters was scarce, he booked unconventional venues: a large room in the Trattnerhof (an apartment building), and the ballroom of the Mehlgrube (a restaurant). The concerts were very popular, and the concertos he premiered at them are still firm fixtures in the repertoire. Solomon writes that during this period Mozart created “a harmonious connection between an eager composer-performer and a delighted audience, which was given the opportunity of witnessing the transformation and perfection of a major musical genre”.
With substantial returns from his concerts and elsewhere, Mozart and his wife adopted a rather plush lifestyle. They moved to an expensive apartment, with a yearly rent of 460 florins. Mozart bought a fine fortepiano from Anton Walter for about 900 florins, and a billiard table for about 300. The Mozarts sent their son Karl Thomas to an expensive boarding school, and kept servants. Saving was therefore impossible, and the short period of financial success did nothing to soften the hardship the Mozarts were later to experience.
On 14 December 1784, Mozart became a Freemason, admitted to the lodge Zur Wohltätigkeit (“Beneficence”). Freemasonry played an important role in the remainder of Mozart’s life: he attended meetings, a number of his friends were Masons, and on various occasions he composed Masonic music, e.g., the Maurerische Trauermusik.
1786–87: Return to opera
Despite the great success of Die Entführung aus dem Serail, Mozart did little operatic writing for the next four years, producing only two unfinished works and the one-act Der Schauspieldirektor. He focused instead on his career as a piano soloist and writer of concertos. Around the end of 1785, Mozart moved away from keyboard writing and began his famous operatic collaboration with the librettist Lorenzo Da Ponte. 1786 saw the successful premiere of The Marriage of Figaro in Vienna. Its reception in Prague later in the year was even warmer, and this led to a second collaboration with Da Ponte: the opera Don Giovanni, which premiered in October 1787 to acclaim in Prague, but less success in Vienna in 1788. The two are among Mozart’s most important works and are mainstays of the operatic repertoire today, though at their premieres their musical complexity caused difficulty for both listeners and performers. These developments were not witnessed by Mozart’s father, who had died on 28 May 1787.
In December 1787, Mozart finally obtained a steady post under aristocratic patronage. Emperor Joseph II appointed him as his “chamber composer”, a post that had fallen vacant the previous month on the death of Gluck. It was a part-time appointment, paying just 800 florins per year, and required Mozart only to compose dances for the annual balls in the Redoutensaal. This modest income became important to Mozart when hard times arrived. Court records show that Joseph’s aim was to keep the esteemed composer from leaving Vienna in pursuit of better prospects.
In 1787 the young Ludwig van Beethoven spent several weeks in Vienna, hoping to study with Mozart. No reliable records survive to indicate whether the two composers ever met.
Later years and death
Toward the end of the decade, Mozart’s circumstances worsened. Around 1786 he had ceased to appear frequently in public concerts, and his income shrank. This was a difficult time for musicians in Vienna because of the Austro-Turkish War, and both the general level of prosperity and the ability of the aristocracy to support music had declined.
By mid-1788, Mozart and his family had moved from central Vienna to the suburb of Alsergrund. Although it has been thought that Mozart reduced his rental expenses, research shows that by moving to the suburb, Mozart had not reduced his expenses (as claimed in his letter to Puchberg), but merely increased the housing space at his disposal. Mozart began to borrow money, most often from his friend and fellow Mason Michael Puchberg; “a pitiful sequence of letters pleading for loans” survives. Maynard Solomon and others have suggested that Mozart was suffering from depression, and it seems that his output slowed. Major works of the period include the last three symphonies (Nos. 39, 40, and 41, all from 1788), and the last of the three Da Ponte operas, Così fan tutte, premiered in 1790.
Around this time, Mozart made long journeys hoping to improve his fortunes: to Leipzig, Dresden, and Berlin in the spring of 1789, and to Frankfurt, Mannheim, and other German cities in 1790. The trips produced only isolated success and did not relieve the family’s financial distress.
Mozart’s last year was, until his final illness struck, a time of great productivity—and by some accounts, one of personal recovery. He composed a great deal, including some of his most admired works: the opera The Magic Flute; the final piano concerto (K. 595 in B-flat); the Clarinet Concerto K. 622; the last in his great series of string quintets (K. 614 in E-flat); the motet Ave verum corpus K. 618; and the unfinished Requiem K. 626.
Mozart’s financial situation, a source of extreme anxiety in 1790, finally began to improve. Although the evidence is inconclusive, it appears that wealthy patrons in Hungary and Amsterdam pledged annuities to Mozart in return for the occasional composition. He is thought to have benefited from the sale of dance music written in his role as Imperial chamber composer. Mozart no longer borrowed large sums from Puchberg, and made a start on paying off his debts.
He experienced great satisfaction in the public success of some of his works, notably The Magic Flute (which was performed several times in the short period between its premiere and Mozart’s death) and the Little Masonic Cantata K. 623, premiered on 15 November 1791.
Final illness and death
Mozart fell ill while in Prague for the 6 September 1791 premiere of his opera La clemenza di Tito, written in that same year on commission for the Emperor’s coronation festivities. He continued his professional functions for some time, and conducted the premiere of The Magic Flute on 30 September. His health deteriorated on 20 November, at which point he became bedridden, suffering from swelling, pain, and vomiting.
Mozart was nursed in his final illness by his wife and her youngest sister, and was attended by the family doctor, Thomas Franz Closset. He was mentally occupied with the task of finishing his Requiem, but the evidence that he actually dictated passages to his student Franz Xaver Süssmayr is minimal.
Mozart died in his home on 5 December 1791 (aged 35) at 1:00 am. The New Grove describes his funeral:
Mozart was interred in a common grave, in accordance with contemporary Viennese custom, at the St. Marx Cemetery outside the city on 7 December. If, as later reports say, no mourners attended, that too is consistent with Viennese burial customs at the time; later Jahn (1856) wrote that Salieri, Süssmayr, van Swieten and two other musicians were present. The tale of a storm and snow is false; the day was calm and mild.
The expression “common grave” refers to neither a communal grave nor a pauper’s grave, but to an individual grave for a member of the common people (i.e., not the aristocracy). Common graves were subject to excavation after ten years; the graves of aristocrats were not.
The cause of Mozart’s death cannot be known with certainty. The official record has it as “hitziges Frieselfieber” (“severe miliary fever”, referring to a rash that looks like millet seeds), more a description of the symptoms than a diagnosis. Researchers have posited at least 118 causes of death, including acute rheumatic fever, streptococcal infection, trichinosis, influenza, mercury poisoning, and a rare kidney ailment.
Mozart’s modest funeral did not reflect his standing with the public as a composer: memorial services and concerts in Vienna and Prague were well-attended. Indeed, in the period immediately after his death, his reputation rose substantially: Solomon describes an “unprecedented wave of enthusiasm” for his work; biographies were written (first by Schlichtegroll, Niemetschek, and Nissen); and publishers vied to produce complete editions of his works.
Appearance and character
Mozart’s physical appearance was described by tenor Michael Kelly, in his Reminiscences: “a remarkably small man, very thin and pale, with a profusion of fine, fair hair of which he was rather vain”. As his early biographer Niemetschek wrote, “there was nothing special about [his] physique. […] He was small and his countenance, except for his large intense eyes, gave no signs of his genius.” His facial complexion was pitted, a reminder of his childhood case of smallpox. There is a photofit of Mozart, created from four contemporary portraits. He loved elegant clothing. Kelly remembered him at a rehearsal: “[He] was on the stage with his crimson pelisse and gold-laced cocked hat, giving the time of the music to the orchestra.” Of his voice his wife later wrote that it “was a tenor, rather soft in speaking and delicate in singing, but when anything excited him, or it became necessary to exert it, it was both powerful and energetic”.
Mozart usually worked long and hard, finishing compositions at a tremendous pace as deadlines approached. He often made sketches and drafts; unlike Beethoven’s these are mostly not preserved, as his wife sought to destroy them after his death. He was raised a Catholic and remained a loyal member of the Church throughout his life.
Mozart lived at the center of the Viennese musical world, and knew a great number and variety of people: fellow musicians, theatrical performers, fellow Salzburgers, and aristocrats, including some acquaintance with the Emperor Joseph II. Solomon considers his three closest friends to have been Gottfried von Jacquin, Count August Hatzfeld, and Sigmund Barisani; others included his older colleague Joseph Haydn, singers Franz Xaver Gerl and Benedikt Schack, and the horn player Joseph Leutgeb. Leutgeb and Mozart carried on a curious kind of friendly mockery, often with Leutgeb as the butt of Mozart’s practical jokes.
He enjoyed billiards and dancing, and kept pets: a canary, a starling, a dog, and a horse for recreational riding. He had a startling fondness for scatological humor, which is preserved in his surviving letters, notably those written to his cousin Maria Anna Thekla Mozart around 1777–1778, and in his correspondence with his sister and parents. Mozart also wrote scatological music, a series of canons that he sang with his friends.
Works, musical style, and innovations
Mozart’s music, like Haydn’s, stands as an archetype of the Classical style. At the time he began composing, European music was dominated by the style galant, a reaction against the highly evolved intricacy of the Baroque. Progressively, and in large part at the hands of Mozart himself, the contrapuntal complexities of the late Baroque emerged once more, moderated and disciplined by new forms, and adapted to a new aesthetic and social milieu. Mozart was a versatile composer, and wrote in every major genre, including symphony, opera, the solo concerto, chamber music including string quartet and string quintet, and the piano sonata. These forms were not new, but Mozart advanced their technical sophistication and emotional reach. He almost single-handedly developed and popularized the Classical piano concerto. He wrote a great deal of religious music, including large-scale masses, as well as dances, divertimenti, serenades, and other forms of light entertainment.
The central traits of the Classical style are all present in Mozart’s music. Clarity, balance, and transparency are the hallmarks of his work, but simplistic notions of its delicacy mask the exceptional power of his finest masterpieces, such as the Piano Concerto No. 24 in C minor, K. 491; the Symphony No. 40 in G minor, K. 550; and the opera Don Giovanni. Charles Rosen makes the point forcefully:
It is only through recognizing the violence and sensuality at the center of Mozart’s work that we can make a start towards a comprehension of his structures and an insight into his magnificence. In a paradoxical way, Schumann’s superficial characterization of the G minor Symphony can help us to see Mozart’s daemon more steadily. In all of Mozart’s supreme expressions of suffering and terror, there is something shockingly voluptuous.
Especially during his last decade, Mozart exploited chromatic harmony to a degree rare at the time, with remarkable assurance and to great artistic effect.
Mozart always had a gift for absorbing and adapting valuable features of others’ music. His travels helped in the forging of a unique compositional language. In London as a child, he met J. C. Bach and heard his music. In Paris, Mannheim, and Vienna he met with other compositional influences, as well as the avant-garde capabilities of the Mannheim orchestra. In Italy he encountered the Italian overture and opera buffa, both of which deeply affected the evolution of his own practice. In London and Italy, the galant style was in the ascendant: simple, light music with a mania for cadencing; an emphasis on tonic, dominant, and subdominant to the exclusion of other harmonies; symmetrical phrases; and clearly articulated partitions in the overall form of movements. Some of Mozart’s early symphonies are Italian overtures, with three movements running into each other; many are homotonal (all three movements having the same key signature, with the slow middle movement being in the relative minor). Others mimic the works of J. C. Bach, and others show the simple rounded binary forms turned out by Viennese composers.
As Mozart matured, he progressively incorporated more features adapted from the Baroque. For example, the Symphony No. 29 in A major K. 201 has a contrapuntal main theme in its first movement, and experimentation with irregular phrase lengths. Some of his quartets from 1773 have fugal finales, probably influenced by Haydn, who had included three such finales in his recently published Opus 20 set. The influence of the Sturm und Drang (“Storm and Stress”) period in music, with its brief foreshadowing of the Romantic era, is evident in the music of both composers at that time. Mozart’s Symphony No. 25 in G minor K. 183 is another excellent example.
Mozart would sometimes switch his focus between operas and instrumental music. He produced operas in each of the prevailing styles: opera buffa, such as The Marriage of Figaro, Don Giovanni, and Così fan tutte; opera seria, such as Idomeneo; and Singspiel, of which Die Zauberflöte is the most famous example by any composer. In his later operas he employed subtle changes in instrumentation, orchestral texture, and tone color, for emotional depth and to mark dramatic shifts. Here his advances in opera and instrumental composing interacted: his increasingly sophisticated use of the orchestra in the symphonies and concertos influenced his operatic orchestration, and his developing subtlety in using the orchestra to psychological effect in his operas was in turn reflected in his later non-operatic compositions.
Mozart’s most famous pupil, whom the Mozarts took into their Vienna home for two years as a child, was probably Johann Nepomuk Hummel, a transitional figure between Classical and Romantic eras. More important is the influence Mozart had on composers of later generations. Ever since the surge in his reputation after his death, studying his scores has been a standard part of the training of classical musicians.
Ludwig van Beethoven, Mozart’s junior by fifteen years, was deeply influenced by his work, with which he was acquainted as a teenager. He is thought to have performed Mozart’s operas while playing in the court orchestra at Bonn, and he traveled to Vienna in 1787 hoping to study with the older composer. Some of Beethoven’s works have direct models in comparable works by Mozart, and he wrote cadenzas (WoO 58) to Mozart’s D minor piano concerto K. 466. For further details see Mozart and Beethoven.
A number of composers have paid homage to Mozart by writing sets of variations on his themes. Beethoven wrote four such sets (Op. 66, WoO 28, WoO 40, WoO 46). Others include Fernando Sor’s Introduction and Variations on a Theme by Mozart (1821), Mikhail Glinka’s Variations on a Theme from Mozart’s Opera Die Zauberflöte (1822), Frédéric Chopin’s Variations on “Là ci darem la mano” from Don Giovanni (1827), and Max Reger’s Variations and Fugue on a Theme by Mozart (1914), based on the variation theme in the piano sonata K. 331.
Pyotr Ilyich Tchaikovsky wrote his Orchestral Suite No. 4 in G, “Mozartiana” (1887), as a tribute to Mozart.
For unambiguous identification of works by Mozart, a Köchel catalogue number is used. This is a unique number assigned, in regular chronological order, to every one of his known works. A work is referenced by the abbreviation “K.” or “KV” followed by this number. The first edition of the catalogue was completed in 1862 by Ludwig von Köchel. It has since been repeatedly updated, as scholarly research improves knowledge of the dates and authenticity of individual works.
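As an illustration only, the referencing convention described above is regular enough to extract mechanically from prose. The following regex and helper are hypothetical sketches, not part of any musicological library:

```python
import re

# Hypothetical helper: find Köchel references written as "K. 550" or
# "KV 550" in running text. The pattern is an illustration of the
# convention described above, not any standard catalogue software.
KOECHEL_RE = re.compile(r"\bK(?:\.|V)\s*(\d+)")

def koechel_numbers(text: str) -> list[str]:
    """Return every Köchel catalogue number referenced in a passage."""
    return KOECHEL_RE.findall(text)

passage = ("the Piano Concerto No. 24 in C minor, K. 491; the Symphony "
           "No. 40 in G minor, K. 550; and cadenzas to the D minor "
           "piano concerto KV 466")
print(koechel_numbers(passage))  # → ['491', '550', '466']
```

Note that scholarly updates to the catalogue sometimes append letters to numbers (e.g. revised datings); the simple pattern above deliberately ignores such suffixes.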
Researchers have identified the oldest Sun-like star found to date. It gives them not only a sense of how the Sun will look in four billion years, but may also help solve another mystery in cosmology.
It resembles our Sun more than any other star, differing in only one respect: at 8.2 billion years, it is nearly twice as old as the Sun, which formed 4.65 billion years ago. The star in question is the distant HIP 102152, which an international team of astronomers has now put under the microscope. It lies 250 light-years from Earth in the constellation Capricornus (Capricorn). For their analysis, the researchers used a special spectrograph on the Very Large Telescope (VLT) of the European Southern Observatory, located on Mount Paranal in Chile’s Atacama Desert.

The findings excite astronomers, because they can not only see how our Sun evolved and will look in four billion years, but also gain information about the chemical composition of Sun-like stars. “For decades, astronomers have searched for twins of the Sun in order to better understand our life-giving star,” says Jorge Melendez of the Universidade de São Paulo, Brazil. “Since the discovery of the first twin in 1997, only very few more have been found. Thanks to the VLT we now have spectra of exceptional quality and can examine these twins to find out whether the Sun is somehow special.” Melendez led the research group, which published its study in the journal Astrophysical Journal Letters.
In fact, our Sun is the best-studied star of all: astronomers have been observing it with scientific instruments for 400 years. Yet while they could learn much about its current state, they could learn little about its past or future. There are models of stellar evolution that permit certain conclusions, but to validate them, astronomers must study other Sun-like stars at different stages of development. Melendez and his colleagues therefore targeted the stars HIP 102152 and 18 Scorpii. 18 Scorpii was known to be younger than the Sun, while HIP 102152 was considered older.
31 August 2013 11:03
That Western intelligence agencies have access to hacking methods has long been clear to experts. New documents from Edward Snowden now show for the first time the scale of American cyber attacks: by the end of the year, special spying software is to be in place on at least 85,000 strategically selected computers worldwide.
According to a Washington Post report, U.S. intelligence agencies have equipped tens of thousands of computers worldwide with software back doors through which they can access data or entire networks. By the end of this year, at least 85,000 such machines are to be prepared, the newspaper wrote on Saturday, citing documents from the collection of the informant Edward Snowden. The NSA has also developed a system that can automatically control millions of infected computers.
In 2011, the American intelligence services carried out a total of 231 cyber attacks, the report said. The number appeared in a leaked draft budget among the Snowden documents. Almost three quarters of these “offensive operations” were directed against targets with the highest priority; according to a former official, these included actions against targets in countries such as Iran, Russia, China and North Korea, the newspaper said.
No further information about these attacks was given. Under a presidential directive of October 2012, the U.S. intelligence agencies define offensive cyber operations as the manipulation or destruction of information in computers or computer networks, or of the computers and networks themselves. Most of these actions have a direct impact only on the data and functionality of the adversary’s computers: connections, for example, are slowed down.
Code name “Genie”
A prominent example of a state-sponsored cyber attack is the Stuxnet computer worm, which sabotaged Iran’s nuclear program a few years ago. IT security experts are convinced that Western intelligence agencies were behind Stuxnet, although this has never been officially confirmed.
More often, according to the report, the intelligence hackers break into a computer in order to siphon off data. These operations run under the code name “Genie”. By the end of this year, special software is to be placed on at least 85,000 strategically selected computers worldwide as part of “Genie”. This software can, for example, record and transmit data.
In 2008, only 21,252 computers had been attacked in this way, the Washington Post wrote, citing the intelligence budget. In large computer networks, however, a single infected device can open up access to hundreds of thousands more.
The secret software often serves only as a back door for possible future requests, a former official told the Washington Post. According to the documents, only 8,448 of the nearly 69,000 infected computers were being fully exploited in 2011. This was a matter of manpower, even though 1,870 people were already employed on the project.
In the future, a system codenamed “Turbine” is to enable the automatic operation of millions of infiltrated spy programs on other computers. NSA specialists are also working on covert software that can locate and record relevant calls in computer networks. The intelligence hackers can break into connecting devices such as routers and can also get past firewall systems from various vendors, the report said.
Nor do the agencies shy away from buying information about software vulnerabilities; $25.1 million is budgeted for this purpose this year. The U.S. has for years accused China of conducting cyber espionage against the West using similar methods. A crucial difference, however, is that the American programs are not used for industrial espionage, the Washington Post wrote.
The brashly colorful, fast-paced Jump & Run “Rayman Legends” is being released, contrary to initial announcements, for all consoles at the same time, but you really want it on only one of them. Which one, the review shows.
No longer a Wii U exclusive, but still awesome: the new Jump & Run brings fans not only the most beautiful but also the most entertaining adventure in years, and even threatens to dethrone Mario as king of the genre.
When it was announced at the beginning of the year that the originally Wii U-exclusive title “Rayman Legends” would also appear for the Xbox 360 and PS3, and was therefore postponed to August, the outcry was great, and not without reason. Commercially, the decision by publisher Ubisoft is understandable given the wider installed base of those consoles. But a game concept built entirely around the touch screen of the Wii U GamePad makes a conversion to systems with conventional controllers anything but obvious.
Best on Nintendo hardware
Although “Rayman Legends”, set in an offbeat fantasy dream world, looks at first glance like a traditional platform game, designer Michel Ancel and his French team have come up with so many extra levels and special modes exploiting the special abilities of the Wii U controller that the original game concept can really only be enjoyed on Nintendo’s hardware.
The Rayman-typical combination of hopping, fist-whirling and floating over chasms feels much the same as in the predecessor Rayman Origins.

Publisher: Ubisoft | Genre: Jump & Run | Price: €50 | Difficulty: beginners and advanced | Age: 6 and up

Graphics: very good | Controls: very good | Sound: very good | Fun: very good | Overall: good
But regardless, with or without music, with touchpad or joystick, whether as Rayman or one of his oddball comic pals: thanks to its wonderfully homogeneous combination of brilliant 2D cartoon graphics and 3D elements, “Rayman Legends” is a real eye-catcher. And it owes to Ancel’s virtuoso, at times crazy game and level design that the title can even catch up with Nintendo’s ingenious “Mario” platform games. Tons of unlockable extras, a cute multiplayer mode, and the pleasantly open structure, which allows switching between the individual game and narrative worlds, each with its own difficulty, at any time, make “Rayman Legends” the best platformer in years. The only downer: the brilliant musical soundtrack and the rest of the soundscape come across impressively, but were mixed in stereo; there is no multi-channel sound. (teleschau – the media service)
Rayman Legends reviewed: Following the Wii U version, the terrific Jump & Run is now available for PS3, Xbox 360 and PC. Fortunately, Ubisoft has not “improved” anything for the worse, but has properly adapted a great game for the other platforms.
At this point, a big “Atschibätsch” with outstretched tongue to all Wii U owners: the game once announced as an exclusive for the Nintendo console is now also available for the PS3, and it’s great! Anyone who played the predecessor Rayman Origins already knows what to expect: seemingly simple platforming fare in the style of the classic Super Mario games. But while the Italian plumber has been losing his luster for years, Rayman Legends is a prime example of how modern and traditional elements can be cleverly interwoven.
In terms of game design, the hopping offers no significant changes compared to its predecessor: you sprint from left to right through fantastically drawn 2D worlds, beat up finely animated monsters and collect all sorts of things. The aim of each level is to free as many Teensies (small, knobby blue creatures) as possible. That may not sound very varied in theory, but after a few hours of play it turns out to be pure gaming gold, because the developers constantly come around the corner with new ideas: sometimes you flee from a giant dragon toward the level exit, sometimes you dive through a labyrinth of caves, sometimes you sneak through an enemy base that could easily come from a James Bond movie.
No question: if universities offered a course in “How to design ingenious levels”, the Rayman makers would pass it with honors. A particularly clever idea in this context is Rayman’s flying friend Murfie: at the press of a button, the frog-fly creature interacts with the environment, nibbling holes in the level architecture, for example, and clearing your way. Perfect timing is often required: especially in the last two of the six story worlds, even professionals will have their jumping skills put to the test.
I wanna rock with you
Each completed game section also comes with an additional, slightly modified invasion variant: you must master the level again, but this time have only 60 seconds to reach the exit. That is exciting, but at times also brutally hard. The absolute fun highlights, however, are the five musical levels! In these mostly quite short sections, a famous song plays as background music, and its rhythm dictates when you should jump, strike or run.
More people, more chaos
Sociable types might expect to have a field day with the multiplayer options of Rayman Legends: up to three gamers can roam the game world together, each with a gamepad. However, this is a nice little bonus rather than real added value, because the on-screen chaos means that some serious passages are hardly manageable together. But at least the simple football mini-game, in which you charge at a ball like crazy to somehow maneuver it into the opposing goal, makes for a fun party night in front of the PS3.
Here you can see quite clearly how the new iPad will probably look (Photo: Sonny Dickson).
Friday 30 August 2013
Could they be unveiled soon? New photos show the back and front bezel of the upcoming iPad 5 and the housing of the new iPad mini.
Sonny Dickson, who specializes in photo leaks of upcoming Apple products and has already presented numerous pictures of new iPhone models, has published photos allegedly showing the backs and front panels of the new iPads. The setting and props suggest that the shots come from the same Chinese source.
The edges of the new iPad seem to be less flattened than on the current model (Photo: Sonny Dickson).
In the pictures, the backs of the large and the small iPad are practically indistinguishable. Only by comparison with the size of a stand on which the devices are mounted can one tell which aluminum housing belongs to which model.
This confirms previous rumors and leaks that the iPad 5 largely adopts the design of the iPad mini. It is said to be much thinner and lighter than its predecessor. Among other things, this could be made possible by more efficient LED backlighting, allowing Apple to build smaller batteries into the devices.
Previous leaks confirmed
The photos of the front panel also show that the bezel around the touch panel is significantly narrower than on the iPad 4. Also visible in Sonny Dickson’s photos are the two separate volume buttons and the holes for stereo speakers, which had already appeared in other iPad 5 leaks. Little is known so far about the internals; the iPad 5 is expected to have an improved processor and an 8-megapixel camera.
These are said to be the backs of the iPad mini 2 (Photo: Sonny Dickson).
The back of the iPad mini 2 reveals little, because the new device will hardly differ externally from the current tablet. The big question here is whether it will have a Retina display or not. For a long time it was said that Apple would introduce only an improved device without a high-resolution screen this year, with a Retina display to follow in 2014. But after Google delivered with the razor-sharp new Nexus 7, there are rumors that Apple is pulling the Retina iPad mini forward.
In September, when Apple is expected to present the iPhone 5S and the iPhone 5C, the new iPads will almost certainly not be part of the event. A separate October or November date is more likely.
Among consumers, the suspicion has long smoldered that manufacturers deliberately shorten the lifespan of their devices. An evaluation by Stiftung Warentest, however, shows that this is not the case. Nevertheless, manufacturers have their tricks for boosting sales.
Appliance manufacturers do not build deliberate weak points into their products so that they break prematurely: the Stiftung Warentest’s results from years of testing provide no evidence for this so far, as the magazine “test” reported on Thursday in its September issue. Nevertheless, companies do calculate how long an electric toothbrush or a vacuum cleaner should last. In a sense, therefore, there is planned wear, the testers report. Frequently the rule is: the more expensive, the longer-lasting.
An evaluation of endurance tests carried out by Stiftung Warentest over the past decade has shown that household appliances do not break down sooner today than they used to. Nevertheless, according to “test”, there are tricks manufacturers use to boost their sales. These include high repair costs, permanently installed batteries, a lack of spare parts, printers that falsely report empty cartridges, and products that cannot be repaired.
In their devices, manufacturers consequently plan at the production stage how long they should last; the customer, however, knows nothing about it. According to Stiftung Warentest, cheap cell phones generally end up as scrap faster than expensive ones. With washing machines under 550 euros, cordless drills under 50 euros or vacuum cleaners under 80 euros, the risk is high that the joy of the new device will not last long. A high price, however, is no guarantee: the testers also list costly flops, such as an espresso machine for 985 euros or a food processor for 340 euros, which proved to be anything but durable.
A study presented in the spring on behalf of the Green parliamentary group had already suggested that industry often schedules early wear into the design and manufacture of its products. This is also known as “planned obsolescence”.
Were it not for all the ice, the sight would be breathtaking: a canyon at least 750 kilometers long, up to 800 meters deep and up to ten kilometers wide. No human has ever laid eyes on it; millions of years ago it disappeared under the ice. Today, Greenland’s glaciers tower two kilometers above the canyon. But now scientists have made it visible, using radar data collected on numerous flights.
As researchers led by Jonathan Bamber of the University of Bristol write in the journal Science, the canyon is probably older than the ice sheet that has covered Greenland for about 3.5 million years. It has the shape of a meandering riverbed and stretches from the center to the northern tip of the world’s largest island. Its dimensions are impressive: although it is only half as deep as the famous Grand Canyon at its deepest point, it is at least 300 kilometers longer than the famous gorge in the U.S. state of Arizona.
But sheer size is far from the most important point. Bamber and his colleagues suggest that the canyon plays an important role in channeling meltwater from the surface of the ice sheet to the ice margin and ultimately into the Arctic Ocean. Before the Greenland ice sheet formed about 3.5 million years ago, a river must have carved the canyon out of the rock.
It apparently has not lost its function as a water conduit; only now it carries glacial meltwater. According to the researchers, this explains, among other things, why there are no subglacial lakes under the Greenland ice pack, unlike in the Antarctic. Water also plays an important role in the behavior of the ice: if it spreads over large areas at the base, it can act as a lubricating layer between the bottom of the glacier and the rock, accelerating the slide of ice into the sea.
Greenland is one of the regions most affected by climate change. The sea ice shrinks rapidly, and also the land-based ice is locally strongly decreased. The new data may help to refine models of the movements of the Greenland ice sheet, according to Bamber. However, he does not expect a fundamentally new insights into how climate change is affecting the glaciers in the far north.
expressed a similar David Vaughan of the British Antarctic Survey: The discovery of the giant canyon will probably have “no special influence” on the calculation of Eisflussraten. The canyon lies so deep under the ice that he probably still remain untouched for many decades by the effects of warming.
The canyon was discovered with the help of radar observations. Bamber and his team evaluated large amounts of data collected, among other campaigns, during the "IceBridge" mission of the U.S. space agency NASA and by researchers in the UK and Germany. Electromagnetic waves at certain radar frequencies can penetrate the ice and bounce off the underlying rock. While systematically evaluating all the radar data, the scientists discovered the canyon and were able to reconstruct its shape.
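In radio-echo sounding of this kind, the depth to bedrock follows from the two-way travel time of the radar echo and the propagation speed of radio waves in ice, roughly 168 meters per microsecond. A minimal sketch of that conversion, with hypothetical echo times standing in for real flight data:

```python
# Radio-echo sounding: convert the two-way travel time of a radar echo
# to ice thickness. The wave speed in ice (~168 m/us) is a standard
# value; the echo delays below are hypothetical illustrations.

WAVE_SPEED_ICE = 168.0  # metres per microsecond

def ice_thickness(two_way_time_us):
    """Depth to bedrock: the echo travels down and back, so divide by 2."""
    return WAVE_SPEED_ICE * two_way_time_us / 2.0

# A hypothetical flight-line profile of echo delays (microseconds):
echo_times = [20.0, 24.0, 33.3, 24.0, 20.0]
profile = [ice_thickness(t) for t in echo_times]

# The local deepening in the middle of the profile is the kind of
# signature that would reveal a subglacial canyon.
print(profile)  # [1680.0, 2016.0, 2797.2..., 2016.0, 1680.0]
```

Mapping the canyon amounts to repeating this conversion over decades of flight lines and stitching the bedrock depths into a map.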
The canyon is not the first spectacular discovery under the ice of Greenland. In 2009, researchers discovered "ghost mountains" there that resemble the Alps. In 2012, a magnificent valley followed; some of the scientists suspect it could speed up the flow of ice into the sea. All this refutes the impression that the Earth's landscape has already been completely mapped and explored, says Bamber. "Our research shows that there is still much to discover."
Under the Greenland ice lies a canyon of gigantic proportions. As the researchers report in the journal Science, it is at least 750 kilometers long and 800 meters deep, significantly longer than the famous Grand Canyon in the southwestern United States.
The previously unknown canyon is probably older than the ice sheet that has covered Greenland for millions of years. It has the form of a meandering riverbed, stretching from the center to the northern tip of the largest island in the world.
The research team, led by Jonathan Bamber of the School of Geographical Sciences in Bristol, England, believes that the canyon plays an important role in conducting water from surface melting on the ice sheet to the ice margin and ultimately into the Arctic Ocean.
Tracked by radar
Even before the ice sheet emerged at least four million years ago, the gorge was the gigantic bed of a river system and an important path for water drainage from the island. According to Science, the presence of the canyon explains why, unlike in other Arctic regions, there are no lakes on the Greenland ice sheet.
The canyon was discovered using radar observations. Bamber and his team evaluated massive amounts of radar data that had been collected over decades by the U.S. space agency NASA and by researchers from the UK and Germany. At certain frequencies, the electromagnetic waves of radar can penetrate the ice and bounce off the underlying rock. While systematically evaluating all the radar data, the scientists discovered the canyon and were able to reconstruct its shape.
The discovery of the canyon belies the suggestion that the Earth's landscape has already been completely mapped and researched, Bamber said. "Our research shows that there is still much to discover."
An investigation by an American communications company finds that internet users' interest in the PS4 is higher than in the Xbox One. The keyword PS4 is mentioned more frequently than Xbox on Twitter, news sites, forums, and blogs. Among individual games, GTA 5 is also cited repeatedly in comparison with the new console launches; the action title would almost steal the spotlight from the PS4 and Xbox One.
After the EU Commission gave the green light to the "VDSL turbo" about a week ago, the Federal Network Agency officially announced its decision on the use of so-called vectoring on Thursday. The new technology allows transfer rates of up to 100 megabits per second on VDSL lines.
It is now up to all companies willing to invest to take advantage of the opportunities for developing modern telecommunication networks and to promote the rapid deployment of broadband, said Jochen Homann, President of the Federal Network Agency. Telekom, as operator of the largest German fixed network, was asked to amend its model contracts for competitors' access to the network accordingly.
Since vectoring technology requires access to the entire cable bundle, there is no room for two providers when it is used in street cabinets. The industry leader must now ensure that competitors can offer their products to end customers in other ways. Under "special conditions," however, Telekom may still refuse competitors access to the "last mile" at street cabinets so that it, or another company, can use vectoring there.
The Agency had already partially adjusted a first draft of the vectoring requirements in July in favor of Telekom's competitors, after network operators and the industry associations VATM and Breko took to the barricades. Industry representatives had spoken of a "re-monopolization" of the fixed network. In its current ruling, the Bonn authority has improved, among other things, the grandfathering protection for existing users of street cabinets. (With material from dpa) / (sybe)
How much has the Sun changed over billions of years? Astronomers have now for the first time discovered a star that resembles our central star like an identical twin, but is much older.
A newly discovered twin star of the Sun offers experts a preview of the future development of our solar system. Using the Very Large Telescope (VLT) of the European Southern Observatory (ESO), an international team of astronomers led from Brazil has identified the oldest solar twin found so far.
The star HIP 102152, observed at the Paranal Observatory in northern Chile, resembles our Sun like an identical twin but, with an estimated age of 8.2 billion years, formed much earlier, ESO announced on Wednesday.
This makes it possible to observe a much later stage in the evolution of stars of our Sun's type, which is only 4.6 billion years old. A first finding: the lithium content of such stars decreases with increasing age.
Lithium, the third element of the periodic table, was created during the Big Bang together with hydrogen and helium. Astronomers have wondered for many years why some stars appear to have less lithium than others.
Stars destroy their lithium over time
With the new observations of HIP 102152, astronomers are now one step closer to solving this puzzle, having found a strong correlation between the age of solar-type stars and their lithium content.
Our Sun now has only one percent of the lithium content that was present in the material from which it formed. Earlier studies of solar twins had suggested that young Sun-like stars have a much higher lithium content, but researchers had not yet been able to demonstrate a clear link between age and lithium content.
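The kind of age-lithium relationship described here can be quantified with a simple correlation coefficient. A toy sketch with invented (age, lithium) pairs, chosen only to illustrate lithium decreasing with stellar age; these are not the measured values from the study:

```python
# Illustrative age-lithium correlation. The data pairs are hypothetical,
# chosen only to show lithium decreasing with stellar age.
import math

ages_gyr = [1.0, 2.5, 4.6, 6.0, 8.2]   # stellar ages in billions of years
lithium = [2.6, 2.0, 1.1, 0.9, 0.5]    # lithium abundance (arbitrary scale)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(ages_gyr, lithium)
print(round(r, 3))  # strongly negative: older stars, less lithium
```

A value of r close to -1 is what "a strong correlation between age and lithium content" amounts to in statistical terms.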
TalaWanda Monroe, co-author of the paper, summarizes: "We have found that HIP 102152 has a very low lithium content. This shows clearly for the first time that older solar twins do indeed have less lithium than our Sun or younger solar twins. We can now be sure that stars somehow destroy their lithium as they age, and that the Sun's lithium content is normal for its age."
"For decades, astronomers have searched for twin stars of the Sun in order to better understand our own life-giving star," said the head of the research group, Jorge Melendez of the Universidade de São Paulo.
When earthly life ends
From the comparison with the Sun, the astronomers could also infer that HIP 102152, 250 light-years away in the constellation Capricornus, could host Earth-like rocky planets.
Scientists predict that our Sun will shine ten percent brighter just one billion years from now. That is enough for the ultimate climate catastrophe: mean temperatures on Earth will then be around 50 degrees Celsius.
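The ten percent figure is consistent with a common textbook approximation for the Sun's luminosity evolution, L(t) = L_now / (1 + 0.4 * (1 - t/t_now)) with t_now = 4.6 billion years. A small sketch, treating that formula as an assumption rather than something taken from the article:

```python
# Relative solar luminosity over time, using a common textbook
# approximation for main-sequence brightening:
#   L(t) = L_now / (1 + 0.4 * (1 - t / t_now)),  t_now = 4.6 Gyr.
# The formula is an assumption for illustration, not from the article.

T_SUN_GYR = 4.6  # present age of the Sun in billions of years

def relative_luminosity(t_gyr):
    """Luminosity relative to today at solar age t_gyr."""
    return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / T_SUN_GYR))

now = relative_luminosity(T_SUN_GYR)             # 1.0 by construction
in_one_gyr = relative_luminosity(T_SUN_GYR + 1.0)

print(round(now, 3), round(in_one_gyr, 3))  # ~1.0 and ~1.095: ~10% brighter
```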
In a few billion years at the latest, the Sun's hydrogen reserves will run low. Before the end of its life, however, it will swell enormously once more, first swallowing the planet Mercury, then Venus, and finally the Earth.
China is on the road to becoming a space superpower: in 2013, for the first time, the People's Republic will land an unmanned spacecraft with a rover on the Moon and explore the Earth's satellite.
By Johnny Erling
Authorities in Beijing have started the countdown for the launch of the country's ambitious lunar program. The first unmanned Moon landing by a lander is scheduled for the end of the year.
It will deploy a self-developed, ground-controlled lunar rover on a three-month exploratory drive and hoist the national flag on the Moon: "We are preparing to launch the rocket with the lander and rover on board toward the Moon," the State Administration for Science, Technology and Industry for National Defense announced.
As the official news agency Xinhua reported, the lunar exploration program, named "Chang'e" after a legendary Chinese moon fairy, will launch from the Xichang satellite launch center in southwest China. Chang'e-3 begins the second of three phases in China's journey to the Moon, summed up in three words: Rao (orbit the Moon), Luo (land on it), and Hui (return from the Moon).
China completed the first phase in 2007 and 2010 with the earlier missions Chang'e-1 and Chang'e-2. Both spacecraft orbited the Earth's satellite systematically for over a year in order to survey it for a complete topographic map of the surface and to select a specific landing site for the lunar module in the so-called Bay of Rainbows.
Lander due to touch down in just over three months
Chang'e-3 is now due to land there in just four months. "An improved and proven launch vehicle is available for its transport," Xinhua quoted the commander of the lunar program, Ma Xingrui, as saying.
New technologies have been developed for the lunar module itself, particularly to ensure a soft landing. But he warned: "The mission is extremely difficult and associated with great risks."
Chinese media speak of a "milestone" in the country's ambitious catch-up program, more than 45 years after the first manned U.S. Moon landing. After a successful landing of Chang'e-3, which will remain on the Moon, at least two further unmanned missions, Chang'e-4 and Chang'e-5, are to follow to take rock samples and, from 2017, bring them back to Earth.
Only after that could China's space program prepare to send a Chinese astronaut to the Earth's satellite for the first time, at a still undetermined date after 2020.
According to the "People's Daily", the decision on the first unmanned Moon landing before the end of the year came a day after the new leadership in Beijing had set the date for its planned major economic party congress. In November the Party Central Committee will meet to decide on China's new economic reform programs, intended to advance its economic and social development toward the symbolic date of 2020.
A prestige space program with military calculus
Beijing's goal is to turn the People's Republic from a developing country into a wealthy great power by 2020, and in a second stage into a rich world power by 2050. The prestige space program, behind which military and economic calculations also stand, follows these policy guidelines.
The expansion into space rests on three pillars: the manned Shenzhou space missions launched in 2003; the construction of the Tiangong space laboratory, from which the country's first own space station is to emerge by 2020; and the Chang'e lunar exploration program.
China remains tight-lipped when it comes to the cost of the giant state project. According to the official business newspaper "Jingji Ribao", several hundred thousand people work on it in 110 laboratories and research institutions across the country, among them employees of approximately 3,000 companies and businesses.
The real costs are anything but transparent. According to the few official figures, the manned space program with its ten Shenzhou missions to date has supposedly cost only 39 billion yuan (4.6 billion euros), and the Chang'e-1 and Chang'e-2 missions of the lunar exploration program together 2.3 billion yuan. Further details were not disclosed.
Manned Moon mission and lunar base planned
According to the chief scientist of the lunar program, Ouyang Ziyuan, there is no concrete schedule yet for a manned Moon trip, the "Shanghai Daily" reported. The program's plans do, however, include the construction of a lunar base after astronauts have landed.
From the Moon, China's scientists intend to use telescopes and telecommunications equipment built there to explore space, and Mars in particular, for future expeditions.
The immediate goals, however, are more modest. Chang'e-3 will carry China's national flag and hoist it spectacularly on the Earth's satellite, the chief engineer of the lunar program, Ye Peijian, announced at parliamentary sessions on the outskirts of Beijing last March.
Making the flag posed problems for the researchers, because it must withstand extreme temperature swings down to minus 170 degrees during the long lunar nights.
Lunar rover tested in the Gobi Desert
China's space scientists are playing it safe: after its journey of over 400,000 kilometers, Chang'e-3 will orbit the Moon down to an altitude of 15 kilometers before landing, to make sure of when and where it touches down. In addition to the predetermined site in the Bay of Rainbows (Sinus Iridum), there are four alternatives.
The project has been in preparation for more than five years, since February 2008, said Wu Weiren, one of the chief engineers and designers. The roughly 100-kilogram Moon vehicle, equipped with solar panels and tested in the Gobi Desert, will be guided by remote control from Earth.
The scientists' greatest concern: China's lunar rover, which is to operate for at least three months, must repeatedly be steered back into the shelter of the lander before the icy nights set in. And it must on no account fall into a Moon crater or a lunar trench.
Beach of Aitutaki, one of the Cook Islands: according to U.S. researchers, the cool surface water of the eastern equatorial Pacific cools the atmosphere and thus slows global warming (Photo: dpa)
Thursday 29 August 2013
Global warming currently seems to be taking a break, and climate change skeptics feel vindicated. According to U.S. researchers, however, natural temperature fluctuations in the tropical Pacific are responsible. After the cool phase, the world will get hotter again.
According to a U.S. study, the apparent pause in global warming can be explained by natural temperature variations in the tropical Pacific. The climate researchers Yu Kosaka and Shang-Ping Xie of the Scripps Institution of Oceanography at the University of California draw this conclusion from their simulations. In the British journal "Nature" they present a customized climate computer model that reproduces the observed temperature evolution well.
Despite the rapid increase of greenhouse gases in the atmosphere, global warming currently seems to be on hold. For some 15 years, the globally averaged temperature of the air near the ground has not risen appreciably. No climate computer model had foreseen this surprising development. Researchers puzzle over the reasons, and climate change skeptics feel vindicated.
Fluctuations mask climate change
Natural fluctuations in the climate system can currently mask climate change, the authors explain. The currently unusually cool surface water of the eastern equatorial Pacific cools the atmosphere and thus slows global warming. The reason for the cooling of the tropical Pacific is not yet clear, the climate researchers write. It is, however, likely a natural fluctuation. If so, global warming would continue after the end of the ocean's cool phase, the scientists predict. Similar phases are also possible in the future.
The two experts fed a climate model with the anomalous surface temperature of the eastern equatorial Pacific. Although this region covers only about eight percent of the Earth's surface, it can affect air temperatures around the world. The oceans hold about 90 percent of the additional heat, and according to data from the U.S. ocean research agency NOAA, the water temperature in the upper 2,000 meters has risen steadily over recent decades, even during the apparent pause in climate change.
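A small region enters the global mean with a weight proportional to its area, which for gridded data scales with the cosine of latitude. A toy sketch of such area weighting, with invented anomaly values, showing how a cool equatorial band can pull the global average down despite warmth elsewhere:

```python
# Area-weighted global mean of temperature anomalies per latitude band.
# Band weights are proportional to cos(latitude); the anomaly values
# are invented for illustration.
import math

lat_bands = [-60, -30, 0, 30, 60]        # band-centre latitudes (degrees)
anomalies = [0.1, 0.2, -0.4, 0.2, 0.3]   # hypothetical anomalies (K)

weights = [math.cos(math.radians(lat)) for lat in lat_bands]
global_mean = sum(w * a for w, a in zip(weights, anomalies)) / sum(weights)

# The cool equatorial band carries the largest weight, so the weighted
# mean falls well below the simple unweighted average.
print(round(global_mean, 3))
```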
Hotter summers in the northern hemisphere
The modified model not only reproduces the observed pause in global warming but also regional and seasonal phenomena, the researchers report. For instance, summers in the northern hemisphere have remained hot, while a cooling is observed from November to April. The model reproduces this as well, albeit less pronounced than in the measured data.
In addition, the observed marked slowdown of warming in northwestern North America appears in the modified climate model, as does the prolonged drought in the southern United States, the team writes. For Eurasia, however, the model calculation does not agree as well with the regional measurements, which is probably due to internal climate variability independent of developments in the tropics.
The internationally little-known Chinese smartphone maker Xiaomi has recruited one of the leading managers of Google's Android division. Hugo Barra, who was responsible for Android product management at the internet group, is to drive Xiaomi's international business.
Barra confirmed the change on Thursday in a post on Google Plus. He had recently appeared frequently on stage at presentations of Android devices; most recently he unveiled the Nexus 7 tablet.
Sundar Pichai, who is responsible for Android, Chrome, and Google Apps at Google, congratulated Barra on his "next exciting adventure": "We will miss him at Google and are thrilled that he remains connected to Android." Most Xiaomi devices run the Google system.
The smartphone business in China, the world's largest mobile phone market, is dominated by local vendors. Companies such as ZTE, Huawei, and Lenovo are thus among the world's leading smartphone vendors and are expanding internationally. Xiaomi was founded in 2010 and has attracted attention with inexpensive high-end devices. (dpa) / (anw)
29 August 2013, 00:58
New York (AFP) The website of the "New York Times", paralyzed by a hacker attack, is back online. The site could be reached again on Wednesday evening at 22:00 CEST, after it had been offline for almost a day. The newspaper's spokeswoman, Eileen Murphy, had spoken of a "malicious attack from outside." The internet service Twitter was also attacked.
Samsung will soon open up a new target group with the Galaxy Tab 3 Kids. The tablet comes in toddler-friendly colors with a rounded stand reminiscent of baby toys, and some ten-year-olds might well reject it. It will be demonstrated for the first time next week at the International Consumer Electronics Fair (IFA) in Berlin.
The technical basis is the standard Samsung Galaxy Tab 3 tablet with Android and a 7-inch screen. Apart from a protective case with a stand, Samsung's additions are software: its usual TouchWiz interface is replaced by a more child-friendly one, and educational software and children's games are pre-installed at the factory.
Instead of Google Play, the children's tablet accesses a special app store for children. Presumably this lets parents prevent their offspring from spending large sums on software there. As a further child-protection measure there is a timer: once the daily usage time set by the parents is reached, the tablet switches itself off.
The screen resolution is 1024 x 600 pixels. The children will also have to make do with a 1.2 GHz dual-core processor and Android 4.1 Jelly Bean. Thanks to the child-friendly interface layered on top, however, the difference from the current Android 4.3 should hardly be noticeable.
At Samsung's IFA stand, the Galaxy Tab 3 Kids will probably find a place alongside the Galaxy Gear smartwatch and the Galaxy Note 3 phablet. Samsung will officially present the latter two at its press conference on 4 September, two days before the exhibition opens, as company representatives have announced. The tablet will first come to market in Samsung's home country of South Korea, still this month. A release date or price for other countries is not yet known.
[With material by Andrew Hoyle, Crave UK]
It is ironic that two prominent Founding Fathers who owned slaves (Thomas Jefferson and George Washington) were both early, albeit unsuccessful, pioneers in the movement to end slavery in their State and in the nation. Both Washington and Jefferson were raised in Virginia, a geographic part of the country in which slavery had been an entrenched cultural institution. In fact, at the time of the Founders, the morality of slavery had rarely been questioned; and in the 150 years following the introduction of slavery into Virginia by Dutch traders in 1619, there had been few voices raised in objection. That began to change in 1765, for as a consequence of America’s examination of her own relationship with Great Britain, there arose for the first time a serious contemplation of the propriety of African slavery in America. As Founding Father John Jay explained, this was the period in which America’s attitude towards slavery began to change:
Prior to the great Revolution, the great majority . . . of our people had been so long accustomed to the practice and convenience of having slaves that very few among them even doubted the propriety and rectitude of it. 1
As the Colonists increasingly recognized that they themselves were slaves of the British Empire, and were experiencing the discomforting effects of such power exercised over them, their commiseration with those enslaved in America began to grow. As one early legal authority explained:
The American Revolution. . . . was undertaken for a principle, was fought upon principle, and the success of their arms was deemed by the Colonists as the triumph of the principle. That principle was. . . . an ardent love of personal liberty, and hence, the very declaration of their political liberty announced as a self-evident truth that all men were created free and equal. 2
Notwithstanding this emerging change in attitude, the response across America on how to end slavery differed widely according to geographical regions. As Thomas Jefferson explained:
Where the disease [slavery] is most deeply seated, there it will be slowest in eradication. In the northern States, it was merely superficial and easily corrected. In the southern, it is incorporated with the whole system and requires time, patience, and perseverance in the curative process. 3
As a middle colony, Virginia experienced the stress from the divergent pull of both northern and southern beliefs meeting in conflict in that State. Several northern States were moving rapidly toward ending slavery, while the deepest southern States of North Carolina, South Carolina, and Georgia largely refused even to consider such a possibility. 4 Virginia contained strong proponents of both attitudes. While many Virginia leaders sought to end slavery in that State (George Mason, George Washington, Thomas Jefferson, Richard Henry Lee, etc.), they found a very cool reception toward their ideas from many of their fellow citizens as well as from the State Legislature. As explained by a southern abolitionist, part of the reason for the unfriendly reception to their proposals proceeded from the fact that:
Virginia alone in 1790 contained 293,427 slaves, more than seven times as many as [Vermont, Massachusetts, New Hampshire, Rhode Island, Connecticut, Pennsylvania, New York, and New Jersey] combined. Her productions were almost exclusively the result of slave labor. . . . The problem was one of no easy solution, how this “great evil,” as it was then called, was to be removed with safety to the master and benefit to the slave. 5
As Jefferson and Washington sought to liberalize the State's slavery laws to make it easier to free slaves, the State Legislature went in exactly the opposite direction, passing laws making it more difficult to free slaves. (As one example, Washington was able to circumvent State laws by freeing his slaves in his will at his death in 1799; by the time of Jefferson's death in 1826, State laws had so stiffened that it had become virtually impossible for Jefferson to use the same means.) What today have become the almost unknown views and forgotten efforts of both Washington and Jefferson to end slavery in their State and in the nation should be reviewed. Consider first the views of George Washington. Born in 1732, his life demonstrates how culturally entrenched slavery was in that day. Not only was Washington born into a world in which slavery was accepted, but he himself became a slave owner at the tender age of 11 when his father died, leaving him slaves as an inheritance. As other family members died, Washington inherited even more slaves. Having grown up from his earliest youth as a slave owner, it represented a radical change for Washington to try to overthrow the very system in which he had been raised. Washington astutely recognized that a single force would be either the great champion or the great obstacle to freeing Virginia's slaves, and that force was the laws of his own State. Concerning the path Washington desired to see the State choose, he emphatically declared:
I can only say that there is not a man living who wishes more sincerely than I do to see a plan adopted for the abolition of it [slavery]; but there is only one proper and effectual mode by which it can be accomplished, and that is by Legislative authority; and this, as far as my suffrage [vote and support] will go, shall never be wanting [lacking]. 6
As Washington had pledged, he did provide his support and leadership in efforts to end the slave trade. For example, on July 18, 1774, the committee which Washington chaired in his own Fairfax County passed the following act:
Resolved, that it is the opinion of this meeting that during our present difficulties and distress, no slaves ought to be imported into any of the British colonies on this continent; and we take this opportunity of declaring our most earnest wishes to see an entire stop for ever put to such a wicked, cruel, and unnatural trade. 7
Having developed this position, Washington maintained it throughout his life and reaffirmed it often. For example, when General Marquis de Lafayette decided to buy a plantation in French Guiana for the purpose of freeing its slaves and placing them on the estate as tenants, Washington wrote Lafayette:
Your late purchase of an estate in the colony of Cayenne, with a view of emancipating the slaves on it, is a generous and noble proof of your humanity. Would to God a like spirit would diffuse itself generally into the minds of the people of this country, but I despair of seeing it. Some petitions were presented to the [Virginia] Assembly at its last session for the abolition of slavery, but they could scarcely obtain a reading. 8
And to his nephew and private secretary, Lawrence Lewis, Washington wrote:
I wish from my soul that the legislature of this State could see the policy of a gradual abolition of slavery. 9
In addition to the slaves he inherited, Washington also bought some fifty slaves prior to the Revolution, although he apparently purchased none afterward, 10 for he had reached the decision that he would no longer participate in the slave trade, and would never again buy or sell a slave. As he explained:
I never mean . . . to possess another slave by purchase; it being among my first wishes to see some plan adopted by which slavery in this country may be abolished by slow, sure, and imperceptible degrees. 11
As the laws of Virginia did not permit him to emancipate his slaves (those laws will be reviewed later in this work), the only other means for him to dispose of the slaves he held was to sell them. And had Washington not become so opposed to selling slaves, he gladly would have used that means to end his ownership of all slaves. As he explained:
Were it not that I am principled against selling Negroes . . . I would not in twelve months from this date be possessed of one as a slave. 12
Interestingly, the personal circumstances faced by Washington provide decisive proof that his convictions were indeed genuine and not merely rhetorical. The quantity of slaves which he held was economically unprofitable for Mount Vernon and caused a genuine hardship on the estate. As Washington explained:
It is demonstratively clear that on this Estate (Mount Vernon) I have more working Negroes by a full [half] than can be employed to any advantage in the farming system. 13
What, then, could Washington do to reduce his expenses and to increase profits? An obvious solution was to sell his “surplus” slaves. Washington could thereby readily accrue immediate and substantial income. As prize-winning historian James Truslow Adams correctly observed:
One good field hand was worth as much as a small city lot. By selling a single slave, Washington could have paid for two years all the taxes he so complained about. 14
Washington acknowledged the profit he could make by reducing the number of his slaves, declaring:
[H]alf the workers I keep on this estate would render me greater net profit than I now derive from the whole. 15
Yet, despite the vast economic benefits he could have reaped, Washington nevertheless adamantly refused to sell any slaves. As he explained:
To sell the overplus I cannot, because I am principled against this kind of traffic in the human species. To hire them out is almost as bad because they could not be disposed of in families to any advantage, and to disperse [break up] the families I have an aversion. 16
This stand by Washington was remarkable. In fact, refusing not only to sell slaves but also refusing to break up their families distinctly differentiates Washington from the culture around him and particularly from his State legislature. Virginia law, contrary to Washington’s personal policy, recognized neither slave marriages nor slave families. 17
Yet, not only did Washington refuse to sell slaves or to break up their families but he also felt a genuine responsibility to take care of the slaves he held until there was, according to his own words, a “plan adopted by which slavery in this country may be abolished.” One proof of his commitment to care for his slaves regardless of the cost to himself was his order that:
Negroes must be clothed and fed . . . whether anything is made or not. 18
Not only did George Washington commit himself to caring for his slaves and to seeking a legal remedy by which they might be freed in his State but he also took the leadership in doing so on the national level. In fact, the first federal racial civil rights law in America was passed on August 7, 1789, with the endorsing signature of President George Washington. That law, entitled “An Ordinance of the Territory of the United States Northwest of the River Ohio,” prohibited slavery in any new State that might seek to enter the Union. Consequently, slavery was prohibited in all the American territories held at the time; and it was because of this law, signed by President George Washington, that Ohio, Indiana, Illinois, Michigan, Minnesota, and Wisconsin all prohibited slavery. Despite the slow but steady progress made in many parts of the nation, especially in the North, the laws in Virginia were designed to discourage and prevent the emancipation of slaves. The loophole which finally allowed Washington to circumvent Virginia law was by emancipating his slaves on his death, which he did. Notice the following provisions from his will which embodied the two policies he had pursued during his life: the care and well-being of his slaves and their personal emancipation:
Upon the decease of my wife, it is my will and desire that all the slaves which I hold in my own right shall receive their freedom. -To emancipate them during her life would, though earnestly wished by me, be attended with such insuperable difficulties on account of their intermixture by marriages with the Dower [inherited] Negroes as to excite the most painful sensations, if not disagreeable consequences from the latter, while both descriptions are in the occupancy of the same proprietor; it not being in my power, under the tenure by which the Dower Negroes are held, to manumit [free] them. -And whereas among those who will receive freedom according to this devise, there may be some who from old age or bodily infirmities, and others who on account of their infancy, that will be unable to support themselves; it is my will and desire that all who come under the first and second description shall be comfortably clothed and fed by my heirs while they live; -and that such of the latter description as have no parents living, or if living are unable or unwilling to provide for them, shall be bound by the court until they shall arrive at the age of twenty five years; -and in cases where no record can be produced whereby their ages can be ascertained, the judgment of the court upon its own view of the subject, shall be adequate and final. -The Negroes thus bound are (by their masters or mistresses) to be taught to read and write and to be brought up to some useful occupation agreeably to the laws of the Commonwealth of Virginia providing for the support of orphan and other poor children. -And I do hereby expressly forbid the sale or transportation out of the said Commonwealth of any slave I may die possessed of, under any pretense whatsoever. 
-And I do moreover most pointedly and most solemnly enjoin it upon my executors hereafter named, or the survivors of them, to see that this clause respecting slaves and every part thereof be religiously fulfilled at the epoch at which it is directed to take place without evasion, neglect or delay, after the crops which may then be on the ground are harvested, particularly as it respects the aged and infirm; -Seeing that a regular and permanent fund be established for their support so long as there are subjects requiring it, not trusting to the uncertain provision to be made by individuals. -And to my mulatto man, William (calling himself William Lee), I give immediate freedom; or if he should prefer it (on account of the accidents which have rendered him incapable of walking or of any active employment) to remain in the situation he now is, it shall be optional to him to do so: In either case, however, I allow him an annuity of thirty dollars during his natural life, which shall be independent of the victuals and clothes he has been accustomed to receive, if he chooses the last alternative; but in full, with his freedom, if he prefers the first; -and this I give him as a testimony of my sense of his attachment to me, and for his faithful services during the Revolutionary War. 19
Significantly, numerous incidents in George Washington’s life provide ample proof that he suffered from no racial bigotry. Those incidents include his approving a free black, Benjamin Banneker, as a surveyor to lay out the city of Washington, D. C., and his patronage of black poet Phillis Wheatley. In fact, after Phillis wrote a poem in 1775 praising General Washington, Washington made plans to publish the piece but then feared that the public would misunderstand his publication of a poem praising himself, believing it was a sign of his own vanity rather than as an intended tribute to Phillis. As Washington told her:
I thank you most sincerely for your polite notice of me in the elegant lines you enclosed; and however undeserving I may be of such encomium and panegyric [lofty praise], the style and manner exhibit a striking proof of your great poetical talents. In honor of which, and as a tribute justly due to you, I would have published the poem had I not been apprehensive that, while I only meant to give the world this new instance of your genius, I might have incurred the imputation of vanity. This, and nothing else, determined me not to give it place in the public prints. If you should ever come to Cambridge, or near Head Quarters, I shall be happy to see a person so favored by the muses and to whom nature has been so liberal and beneficent in her dispensations. 20
Additional proof of Washington’s lack of personal bigotry is provided by numerous black authors. One, for example, was Edward Johnson, a former slave and an abolitionist who was an author of textbooks for school children, particularly for young African-American students following the Civil War. Johnson provided the following anecdote:
Washington [was] out walking one day in company with some distinguished gentlemen, and during the walk he met an old colored man, who very politely tipped his hat and spoke to the General. Washington, in turn, took off his hat to the colored man; on seeing this, one of the company, in a jesting manner, inquired of the General if he usually took off his hat to Negroes. Whereupon Washington replied: “Politeness is cheap, and I never allow any one to be more polite to me than I to him.” 21
Other anecdotes were provided by William C. Nell, a former slave who became an ardent abolitionist. Nell wrote numerous works on black history and against slavery preceding the Civil War, and in one of those works, he provided the following anecdote of Washington and Primus Hall:
Primus Hall. -Throughout the Revolutionary war, he [Primus Hall] was the body servant of Col. Pickering, of Massachusetts. He [Hall] was free and communicative, and delighted to sit down with an interested listener and pour out those stores of absorbing and exciting anecdotes with which his memory was stored.
It is well known that there was no officer in the whole American army whose friendship was dearer to Washington, and whose counsel was more esteemed by him, than that of the honest and patriotic Col. Pickering. He was on intimate terms with him, and unbosomed himself to him with as little reserve as, perhaps, to any confidant in the army. Whenever he was stationed within such a distance as to admit of it, he [Washington] passed many hours with the Colonel, consulting him upon anticipated measures and delighting in his reciprocated friendship.
Washington was, therefore, often brought into contact with the servant of Col. Pickering, the departed Primus. An opportunity was afforded to the Negro to note him [Washington] under circumstances very different from those in which he is usually brought before the public and which possess, therefore, a striking charm. I remember [one] anecdote from the mouth of Primus. . . . so peculiar as to be replete with interest. The authenticity of . . . may be fully relied upon. . . .
[T]he great General was engaged in earnest consultation with Col. Pickering in his tent until after the night had fairly set in. Head-quarters were at a considerable distance, and Washington signified his preference to staying with the Colonel over night, provided he had a spare blanket and straw.
“Oh, yes,” said Primus, who was appealed to; “plenty of straw and blankets-plenty.” Upon this assurance, Washington continued his conference with the Colonel until it was time to retire to rest. Two humble beds were spread, side by side, in the tent, and the officers laid themselves down, while Primus seemed to be busy with duties that required his attention before he himself could sleep. He worked, or appeared to work, until the breathing of the prostrate gentlemen satisfied him that they were sleeping; and then, seating himself on a box or stool, he leaned his head on his hands to obtain such repose as so inconvenient a position would allow. In the middle of the night Washington awoke. He looked about and descried the Negro as he sat. He gazed at him awhile and then spoke.
“Primus!” said he, calling; “Primus!”
Primus started up and rubbed his eyes. “What, General?” said he.
Washington rose up in his bed. “Primus,” said he, “what did you mean by saying that you had blankets and straw enough? Here you have given up your blanket and straw to me that I may sleep comfortably while you are obliged to sit through the night.”
“It’s nothing, General,” said Primus. “It’s nothing. I’m well enough. Don’t trouble yourself about me, General, but go to sleep again. No matter about me. I sleep very good.”
“But it is matter-it is matter,” said Washington, earnestly. “I cannot do it, Primus. If either is to sit up, I will. But I think there is no need of either sitting up. The blanket is wide enough for two. Come and lie down here with me.”
“Oh, no, General!” said Primus, starting, and protesting against the proposition. “No; let me sit here. I’ll do very well on the stool.”
“I say, come and lie down here!” said Washington, authoritatively. “There is room for both, and I insist upon it!”
He threw open the blanket as he spoke and moved to one side of the straw. Primus professes to have been exceedingly shocked at the idea of lying under the same covering with the commander-in-chief, but his tone was so resolute and determined that he could not hesitate. He prepared himself, therefore, and laid himself down by Washington; and on the same straw, and under the same blanket, the General and the Negro servant slept until morning. 22
Nell also provided the following story entitled “A Tribute from the Emancipated, by Washington’s Freed Men” from the Alexandria, D.C. Gazette to illustrate the respect that Washington’s former slaves had for him:
Upon a recent visit to the tomb of Washington [at Mount Vernon], I was much gratified by the alterations and improvements around it. Eleven colored men were industriously employed in leveling the earth and turfing around the sepulcher. There was an earnest expression of feeling about them that induced me to inquire if they belonged to the respected lady of the mansion. They stated they were a few of the many slaves freed by George Washington, and they had offered their services upon this last melancholy occasion as the only return in their power to make to the remains of the man who had been more than a father to them; and they should continue their labors as long as anything should be pointed out for them to do. I was so interested in this conduct that I inquired their several names, and the following were given me: -Joseph Smith, Sambo Anderson, William Anderson his son, Berldey Clark, George Lear, Dick Jasper, Morris Jasper, Levi Richardson, Joe Richardson, Wm. Moss, Wm. Hays, and Nancy Squander, cooking for the men. -Fairfax County, Va., Nov. 14, 1835. 23
Washington was truly one of the leaders in Virginia who sought to end slavery in that State (and the nation) and who worked to bring civil rights to all Americans, regardless of color. Jefferson, too, sought similar goals, but by living twenty-seven years longer than Washington, Jefferson faced additional hostile State laws which Washington had not. But before reviewing Jefferson’s words and actions regarding slavery, a brief review of the overall trend of the laws of Virginia on the subject is in order. In 1692, Virginia passed a law that placed an economic burden on any slave owner who released his slaves, thus discouraging owners from freeing their slaves. That law declared:
[N]o Negro or mulatto slave shall be set free, unless the emancipator pays for his transportation out of the country within six months. 24
(Subsequent laws imposed additional provisions that a slave could not be freed unless the slave owner guaranteed a security bond for the education, livelihood, and support of the freed slave in order to ensure that the former slave would not become a burden to the community or to the society. 25 Not only did such laws place extreme economic hardships on any slave owner who tried to free his slaves but they also provided stiff penalties for any slave owner who attempted to free slaves without abiding by these laws.) In 1723, a law was passed which forbade the emancipation of slaves under any circumstance-even by a last will and testament. The only exceptions were for cases of “meritorious service” by a slave, a determination to be made only by the State Governor and his Council on a case-by-case basis. 26 Needless to say, this law made the occasions for freeing slaves even more rare. In 1782, however, Virginia began to move in a new direction (for a short time) by passing a very liberal manumission law. As a result, “this restraint on the power of the master to emancipate his slave was removed, and since that time the master may emancipate by his last will or deed.” 27 (It was because of this law that George Washington was able to free his slaves in his last will and testament in 1799.) In 1806, unfortunately, the Virginia Legislature repealed much of that law, 28 and it became more difficult to emancipate slaves in a last will and testament:
It shall be lawful for any person, by his or her last will and testament, or by any other instrument in writing under his or her hand and seal . . . to emancipate and set free his or her slaves . . . Provided, also, that all slaves so emancipated, not being . . . of sound mind and body, or being above the age of forty-five years, or being males under the age of twenty one, or females under the age of eighteen years, shall respectively be supported and maintained by the person so liberating them, or by his or her estate. 29 (emphasis added)
That law even made it possible for a wife to reverse a portion of an emancipation made by her husband in his will:
And . . . a widow who shall, within one year from the death of her husband, declare in the manner prescribed by law that she will not take or accept the provision made for her . . . [is] entitled to one third part of the slaves whereof her husband died possessed, notwithstanding they may be emancipated by his will. 30
Furthermore, recall that Virginia law did not recognize slave families. Therefore, if a slave was freed, the law made it almost impossible for him to remain near his spouse, children, or his family members who had not been freed, for the law required that a freed slave promptly depart the State or else reenter slavery:
If any slave hereafter emancipated shall remain within this Commonwealth more than twelve months after his or her right to freedom shall have accrued, he or she shall forfeit all such right and may be apprehended and sold. 31
It was under difficult laws like these-under laws even more restrictive than those Washington had faced-that Jefferson was required to operate. Nevertheless, as a slave owner (he, like Washington, had inherited slaves), Jefferson maintained a consistent public opposition to slavery and assiduously labored to end slavery both in his State and in the nation. Jefferson’s efforts to end slavery were manifested years before the American Revolution. As he explained:
In 1769, I became a member of the legislature by the choice of the county in which I live [Albemarle County, Virginia], and so continued until it was closed by the Revolution. I made one effort in that body for the permission of the emancipation of slaves, which was rejected: and indeed, during the regal [crown] government, nothing [like this] could expect success. 32
Jefferson’s reference to the role of the British Crown in the continuance of slavery in Virginia is significant. Virginia, as a British colony, was subject to the laws of Great Britain, and those laws, executed by order of King George III, prevented every attempt to end slavery in America-or in any British colony. The specific law which the Crown invoked to strike down the attempts of the Colonies to free slaves had been passed in 1766 (three years before Jefferson’s election to office and his first efforts to end slavery), and declared:
[B]e it declared by the King’s most Excellent Majesty . . . that the said Colonies and plantations in America have been, are, and of right ought to be, subordinate unto and dependent upon the Imperial Crown and Parliament of Great Britain; and that the King’s Majesty . . . had, hath, and of right ought to have, full power and authority to make laws and statutes of sufficient force and validity to bind the Colonies and people of America, subjects of the crown of Great Britain, in all cases whatsoever. And be it further declared and enacted by the authority aforesaid that all resolution, votes, orders, and proceedings whereby the power and authority of the Parliament of Great Britain to make laws and statutes . . . is denied, or drawn into question, are, and are hereby declared to be, utterly null and void to all intents and purposes whatsoever. 33
This law gave to the Crown the unilateral and unambiguous power to strike down any and all American laws on any subject whatsoever. Significantly, prior to the American Revolution some of the Colonies had voted to end slavery in their State, but those State laws had been struck down by the King. 34 This inability of individual Colonies to abolish slavery, even when they wished to do so, had caused Thomas Jefferson to include in the Declaration of Independence a listing of this grievance as one of the reasons propelling America to separate from Great Britain:
He [King George III] has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people which never offended him, captivating and carrying them into slavery in another hemisphere, or to incur miserable death in their transportation thither. This piratical warfare, the opprobrium [disgrace] of infidel powers, is the warfare of the Christian King of Great Britain. He has prostituted his negative for suppressing every legislative attempt to prohibit or to restrain an execrable commerce [that is, he has opposed efforts to prohibit the slave trade], determined to keep open a market where men should be bought and sold. 35
Following America’s separation from Great Britain in 1776, individual States, for the first time in America’s history, were finally able to begin abolishing slavery. For example, Pennsylvania and Massachusetts abolished slavery in 1780, Connecticut and Rhode Island did so in 1784, Vermont in 1786, New Hampshire in 1792, New York in 1799, New Jersey in 1804, etc. Significantly, Thomas Jefferson helped end slavery in several States by his leadership on the Declaration of Independence, and he was also behind the first attempt to ban slavery in new territories. In 1784, as part of a committee of three, he introduced a law in the Continental Congress to ban slavery from the “western territory.” That proposal stated:
That after the year 1800 of the Christian era, there shall be neither slavery nor involuntary servitude in any of the said States, otherwise than in punishment of crimes, whereof the party shall have been duly convicted to have been personally guilty. 36
Unfortunately, that proposal fell one vote short of passage. Three years prior to that proposal, Jefferson had made known his feelings against slavery in his book, Notes on the State of Virginia (1781). That work, circulated widely across the nation, declared:
The whole commerce between master and slave is a perpetual exercise of the most boisterous passions, the most unremitting despotism on the one part, and degrading submissions on the other. Our children see this and learn to imitate it; for man is an imitative animal. This quality is the germ of all education in him. From his cradle to his grave he is learning to do what he sees others do. If a parent could find no motive either in his philanthropy or his self-love for restraining the intemperance of passion towards his slave, it should always be a sufficient one that his child is present. But generally it is not sufficient. . . . The man must be a prodigy who can retain his manners and morals undepraved by such circumstances. And with what execration should the statesman be loaded who permits one half the citizens thus to trample on the rights of the other. . . . And can the liberties of a nation be thought secure when we have removed their only firm basis, a conviction in the minds of the people that these liberties are of the gift of God? That they are not to be violated but with his wrath? Indeed, I tremble for my country when I reflect that God is just; that His justice cannot sleep for ever. . . . The Almighty has no attribute which can take side with us in such a contest. . . . [T]he way, I hope, [is] preparing under the auspices of Heaven for a total emancipation. 37
Nearly twenty-five years later, Jefferson bemoaned that ending slavery had been a task even more difficult than he had imagined. In 1805, he lamented:
I have long since given up the expectation of any early provision for the extinguishment of slavery among us. [While] there are many virtuous men who would make any sacrifices to effect it, many equally virtuous persuade themselves either that the thing is not wrong or that it cannot be remedied. 38
Jefferson eventually recognized that slavery probably would never be ended during his lifetime. However, this did not keep him from continually encouraging others in their efforts to end slavery. For example, in 1814, he wrote Edward Coles:
Dear Sir, -Your favor of July 31 [a treatise opposing slavery] was duly received and was read with peculiar pleasure. The sentiments breathed through the whole do honor to both the head and heart of the writer. Mine on the subject of slavery of Negroes have long since been in possession of the public and time has only served to give them stronger root. The love of justice and the love of country plead equally the cause of these people, and it is a moral reproach to us that they should have pleaded it so long in vain. . . . From those of the former generation who were in the fullness of age when I came into public life, which was while our controversy with England was on paper only, I soon saw that nothing was to be hoped. Nursed and educated in the daily habit of seeing the degraded condition, both bodily and mental, of those unfortunate beings, not reflecting that that degradation was very much the work of themselves and their fathers, few minds have yet doubted but that they were as legitimate subjects of property as their horses and cattle. . . . In the first or second session of the Legislature after I became a member, I drew to this subject the attention of Col. Bland, one of the oldest, ablest, and most respected members, and he undertook to move for certain moderate extensions of the protection of the laws to these people. I seconded his motion, and, as a younger member, was more spared in the debate; but he was denounced as an enemy of his country and was treated with the grossest indecorum. From an early stage of our revolution, other and more distant duties were assigned to me so that from that time till my return from Europe in 1789, and I may say till I returned to reside at home in 1809, I had little opportunity of knowing the progress of public sentiment here on this subject. 
I had always hoped that the younger generation, receiving their early impressions after the flame of liberty had been kindled in every breast and had become as it were the vital spirit of every American, that the generous temperament of youth, analogous to the motion of their blood and above the suggestions of avarice, would have sympathized with oppression wherever found and proved their love of liberty beyond their own share of it. But my intercourse with them since my return has not been sufficient to ascertain that they had made towards this point the progress I had hoped. . . . Yet the hour of emancipation is advancing in the march of time. It will come, whether brought on by the generous energy of our own minds or by the bloody process. . . . This enterprise is for the young; for those who can follow it up and bear it through to its consummation. It shall have all my prayers, and these are the only weapons of an old man. . . . The laws do not permit us to turn them [the slaves] loose. . . . I hope then, my dear sir. . . . you will come forward in the public councils, become the missionary of this doctrine truly Christian; insinuate and inculcate it softly but steadily through the medium of writing and conversation; associate others in your labors, and when the phalanx [brigade or regiment] is formed, bring on and press the proposition perseveringly until its accomplishment. It is an encouraging observation that no good measure was ever proposed which, if duly pursued, failed to prevail in the end. . . . And you will be supported by the religious precept, “be not weary in well-doing” [Galatians 6:9]. That your success may be as speedy and complete, as it will be of honorable and immortal consolation to yourself, I shall as fervently and sincerely pray. 39
The next year, 1815, Jefferson wrote David Barrow:
The particular subject of the pamphlet [against slavery] you enclosed me was one of early and tender consideration with me, and had I continued in the councils [legislatures] of my own State, it should never have been out of sight. The only practicable plan I could ever devise is stated under the 14th Query of my Notes on Virginia, and it is still the one most sound in my judgment. . . . Some progress is sensibly made in it; yet not so much as I had hoped and expected. But it will yield in time to temperate and steady pursuit, to the enlargement of the human mind, and its advancement in science. We are not in a world ungoverned by the laws and the power of a superior agent. Our efforts are in His hand and directed by it; and He will give them their effect in His own time. Where the disease is most deeply seated, there it will be slowest in eradication. In the northern States, it was merely superficial and easily corrected. In the southern, it is incorporated with the whole system and requires time, patience, and perseverance in the curative process. That it may finally be effected and its progress hastened will be the last and fondest prayer of him who now salutes you with respect and consideration. 40
In 1820, Jefferson again reaffirmed his continuing opposition to slavery, declaring:
I can say, with conscious truth, that there is not a man on earth who would sacrifice more than I would to relieve us from this heavy reproach in any practicable way. The cession of that kind of property-for so it is misnamed-is a bagatelle [trifle] which would not cost me a second thought if, in that way, a general emancipation and expatriation could be effected; and gradually, and with due sacrifices, I think it might be. But as it is, we have the wolf by the ears, and we can neither hold him nor safely let him go. 41
Then, less than a year before his death, Jefferson responded to a young enthusiast:
At the age of eighty-two, with one foot in the grave and the other uplifted to follow it, I do not permit myself to take part in any new enterprises, even for bettering the condition of man, not even in the great one which is the subject of your letter and which has been through life that of my greatest anxieties. The march of events has not been such as to render its completion practicable within the limits of time allotted to me; and I leave its accomplishment as the work of another generation. And I am cheered when I see that on which it is devolved, taking it up with so much good will and such minds engaged in its encouragement. The abolition of the evil is not impossible; it ought never therefore to be despaired of. Every plan should be adopted, every experiment tried, which may do something towards the ultimate object. 42
And just weeks before his death, Jefferson reiterated:
On the question of the lawfulness of slavery, that is of the right of one man to appropriate to himself the faculties of another without his consent, I certainly retain my early opinions. 43
Since the State laws on slavery had significantly stiffened between the death of George Washington and Thomas Jefferson twenty-seven years later (as Jefferson had observed in 1814, “the laws do not permit us to turn them loose” 44), Jefferson was unable to do what Washington had done in freeing his slaves. However, Jefferson had gone well above and beyond other slave owners in that era in that he actually paid his slaves for the vegetables they raised and for the meat they obtained while hunting and fishing. Additionally, he paid them for extra tasks they performed outside their normal working hours and even offered a revolutionary profit sharing plan for the products that his enslaved artisans produced in their shops. 45
As a final note on Jefferson’s personal views and actions, Jefferson had occasionally offered the view that blacks were an inferior race to whites. For example, in his Notes on the State of Virginia in which he had expressed his ardent desire for the emancipation of blacks, he also offered his opinion that:
Comparing them by their faculties of memory, reason, and imagination, it appears to me that in memory they are equal to the whites; in reason much inferior. 46 [T]he blacks . . . are inferior to the whites in the endowments both of body and mind. 47
Notwithstanding such opinions, Jefferson was willing to be proved wrong. In fact, when Henri Gregoire in Paris read Jefferson’s views on the intellectual capacity of blacks, he sent to Jefferson several examples of blacks for the purpose of disproving Jefferson’s thesis. Jefferson responded to him:
Be assured that no person living wishes more sincerely than I do to see a complete refutation of the doubts I have myself entertained and expressed on the grade of understanding allotted to them by nature and to find that in this respect they are on a par with ourselves. My doubts were the result of personal observation on the limited sphere of my own State, where the opportunities for the development of their genius were not favorable, and those of exercising it still less so. I expressed them therefore with great hesitation; but whatever be their degree of talent it is no measure of their rights. Because Sir Isaac Newton was superior to others in understanding, he was not therefore lord of the person or property of others. On this subject they are gaining daily in the opinions of nations, and hopeful advances are making towards their reestablishment on an equal footing with the other colors of the human family. I pray you therefore to accept my thanks for the many instances you have enabled me to observe of respectable intelligence in that race of men, which cannot fail to have effect in hastening the day of their relief. 48 (emphasis added)
And to Benjamin Banneker (a former slave distinguished for his scientific and mathematical talents, the publisher of an almanac, and one of the surveyors who laid out the city of Washington, D. C.), Jefferson wrote:
I thank you sincerely for your letter . . . and for the almanac it contained. Nobody wishes more than I do to see such proofs as you exhibit, that nature has given to our black brethren talents equal to those of the other colors of men. . . . I have taken the liberty of sending your almanac to Monsieur de Condorcet, Secretary of the Academy of Sciences at Paris, and member of the Philanthropic Society, because I considered it as a document to which your color had a right for their justification against the doubts which have been entertained of them. 49
When considering Jefferson’s views on the capacity of blacks (views apparently not stridently held), Jefferson’s actions to end slavery must be seen as even more remarkable. His efforts to achieve full freedom for a race he perhaps considered inferior indicate not only the sincerity of his belief that all men were indeed created equal but also his abiding conviction-expressed at the age of 77, only five years before his death-that “Nothing is more certainly written in the book of fate than that these people are to be free.” 50
While today both Washington and Jefferson are roundly condemned for owning slaves, it is nevertheless true that they both laid the first seeds for the abolition of slavery in the United States. One historian summarized their pioneer efforts in these words:
With the minds of thoughtful men thoroughly wakened on the subject of human rights [shortly before the American Revolution], it was impossible not to reflect on the wrongs of the slaves, incomparably worse than those against which their masters had taken up arms. As the political institutions of the young Federation were remolded, so grave a matter as slavery could not be ignored. Virginia in 1772 voted an address to the King remonstrating against the continuance of the African slave trade. The address was ignored, and Jefferson in the first draft of the Declaration alleged this as one of the wrongs suffered at the hands of the British government, but his colleagues suppressed the clause. In 1778, Virginia forbade the importation of slaves into her ports. The next year Jefferson proposed to the Legislature an elaborate plan for gradual emancipation, but it failed of consideration. Maryland followed Virginia in forbidding the importation of slaves from Africa. Virginia in 1782 passed a law by which manumission of slaves, which before had required special legislative permission, might be given at the will of the master. For the next ten years manumission went on at the rate of 8000 a year. . . . Jefferson planned nobly for the exclusion of slavery from the whole as yet unorganized domain of the nation a measure which would have belted the slave States with free territory, and so worked toward universal freedom. The sentiment of the time gave success to half his plan. His proposal in the ordinance of 1784 missed success in the Continental Congress by the vote of a single State. The principle was embodied in the ordinance of 1787. 51
Significantly it was the efforts of both Washington and Jefferson, and especially the documents which Jefferson had written, that were so heavily relied on by later abolitionists such as John Quincy Adams, Daniel Webster, and Abraham Lincoln in their efforts to end slavery. For example, John Quincy Adams, called the “Hell Hound of Abolition” for his extensive endeavors against that institution, regularly invoked the efforts of the Virginia patriots, particularly Jefferson, to justify his own crusade against slavery. In fact, in a speech in 1837, John Quincy Adams declared:
The inconsistency of the institution of domestic slavery with the principles of the Declaration of Independence was seen and lamented by all the southern patriots of the Revolution; by no one with deeper and more unalterable conviction than by the author of the Declaration himself [Jefferson]. No charge of insincerity or hypocrisy can be fairly laid to their charge. Never from their lips was heard one syllable of attempt to justify the institution of slavery. They universally considered it as a reproach fastened upon them by the unnatural step-mother country [Great Britain] and they saw that before the principles of the Declaration of Independence, slavery, in common with every other mode of oppression, was destined sooner or later to be banished from the earth. Such was the undoubting conviction of Jefferson to his dying day. In the Memoir of His Life, written at the age of seventy-seven, he gave to his countrymen the solemn and emphatic warning that the day was not distant when they must hear and adopt the general emancipation of their slaves. 52
And Daniel Webster, whose efforts in the U. S. Senate to end slavery paralleled those of John Quincy Adams in the U. S. House, also invoked the efforts of Washington and Jefferson to bolster his own position that slavery must be ended. In fact, on January 29, 1845, Webster was one of three individuals who helped frame an “‘Address to the People of the United States’ promulgated by the Anti-Texas Convention. . . . [to] lift our public sentiment to a new platform of anti-slavery.” 53 Part of that address declared:
Soon after the adoption of the Constitution, it was declared by George Washington to be “among his first wishes to see some plan adopted by which slavery might be abolished by law;” and in various forms in public and private communications, he avowed his anxious desire that “a spirit of humanity,” prompting to “the emancipation of the slaves,” “might diffuse itself generally into the minds of the people;” and he gave the assurance, that “so far as his own suffrage would go,” his influence should not be wanting to accomplish this result. By his last will and testament he provided that “all his slaves should receive their freedom,” and, in terms significant of the deep solicitude he felt upon the subject, he “most pointedly and most solemnly enjoined” it upon his executors “to see that the clause respecting slaves, and every part thereof, be religiously fulfilled, without evasion, neglect, or delay.” No language can be more explicit, more emphatic, or more solemn, than that in which Thomas Jefferson, from the beginning to the end of his life, uniformly declared his opposition to slavery. 
“I tremble for my country,” said he, “when I reflect that God is just-that His justice cannot sleep forever.” * * “The Almighty has no attribute which can take side with us in such a contest.” In reference to the state of public feeling as influenced by the Revolution, he said, “I think a change already perceptible since the origin of the Revolution;” and to show his own view of the proper influence of the spirit of the Revolution upon slavery, he proposed the searching question: “Who can endure toil, famine, stripes, imprisonment, and death itself, in vindication of his own liberty, and the next moment be deaf to all those motives whose power supported him through his trial, and inflict on his fellow men a bondage, one hour of which is fraught with more misery than ages of that which he rose in rebellion to oppose?” “We must wait,” he added, “with patience, the workings of an overruling Providence, and hope that that is preparing the deliverance of these our suffering brethren. When the measure of their tears shall be full-when their tears shall have involved Heaven itself in darkness, doubtless a God of justice will awaken to their distress, and by diffusing light and liberality among their oppressors, or at length, by his exterminating thunder, manifest his attention to things of this world, and that they be not left to the guidance of blind fatality!” Towards the close of his life, Mr. Jefferson made a renewed and final declaration of his opinion by writing thus to a friend: “My sentiments on the subject of the slavery of Negroes have long since been in possession of the public, and time has only served to give them stronger root. 
The love of justice and the love of country plead equally the cause of these people; and it is a moral reproach to us that they should have pleaded it so long in vain and should have produced not a single effort-nay, I fear, not much serious willingness to relieve them and ourselves from our present condition of moral and political reprobation.” 54
And Abraham Lincoln specifically invoked the words and efforts of Thomas Jefferson to justify his own crusade to end slavery and achieve civil rights and equality for blacks. For example, Lincoln invoked Jefferson to condemn the Kansas-Nebraska Act permitting territories that allowed slavery to become States in the Union:
Mr. Jefferson, the author of the Declaration of Independence, and otherwise a chief actor in the Revolution; then a delegate in Congress; afterwards twice President; who was, is, and perhaps will continue to be, the most distinguished politician of our history; a Virginian by birth and continued residence, and withal, a slave-holder; conceived the idea of taking that occasion to prevent slavery ever going into the northwestern territory. . . . and in the first Ordinance (which the acts of Congress were then called) for the government of the territory, provided that slavery should never be permitted therein. This is the famed ordinance of ‘87 so often spoken of. . . . Thus, with the author of the Declaration of Independence, the policy of prohibiting slavery in new territory originated. Thus, away back of the Constitution, in the pure, fresh, free breath of the Revolution, the State of Virginia and the national Congress put that policy in practice. Thus through sixty odd of the best years of the republic did that policy steadily work to its great and beneficent end. And thus, in those . . . States, and five millions of free, enterprising people, we have before us the rich fruits of this policy. But now new light breaks upon us. Now Congress declares this ought never to have been; and the like of it, must never be again. . . . We even find some men who drew their first breath, and every other breath of their lives, under this very restriction [against slavery], now live in dread of absolute suffocation if they should be restricted in the “sacred right” of taking slaves to Nebraska. That perfect liberty they sigh for-the “liberty” of making slaves of other people-Jefferson never thought of. 55
On other occasions, Lincoln quoted Jefferson’s words from the Declaration of Independence, pointing out that Jefferson had . . .
. . . established these great self-evident truths that when in the distant future some man, some faction, some interest, should set upon the doctrine that none but rich men, or none but white men, were entitled to life, liberty and the pursuit of happiness, their posterity might look up again to the Declaration of Independence and take courage to renew the battle which their fathers began. . . . Now, my countrymen, if you have been taught doctrines conflicting with the great landmarks of the Declaration of Independence; if you have listened to suggestions which would take away from its grandeur and mutilate the fair symmetry of its proportions; if you have been inclined to believe that all men are not created equal in those inalienable rights enumerated by our chart of liberty; let me entreat you to come back. . . . [C]ome back to the truths that are in the Declaration of Independence. 56
It is undebatable that the early efforts and words both of George Washington and of Thomas Jefferson provided one of the strongest platforms on which later generations of abolitionists, and some of their most notable orators, erected their arguments. While it is difficult for today’s critics of Washington and Jefferson to understand the culture of America two centuries ago, it is nevertheless true that both Washington and Jefferson were influential in slowly turning that culture in a direction which-generations later-eventually secured equal civil rights for all Americans, regardless of their color.
For more information on this issue see: The Founding Fathers and Slavery, The Bible, Slavery, and America’s Founders, Black History Issue 2003, Confronting Civil War Revisionism, and Setting the Record Straight (Book, or DVD).
1. John Jay, The Correspondence and Public Papers of John Jay, Henry P. Johnston, editor (New York: G. P. Putnam’s Sons, 1891), Vol. III, p. 342, to the English Anti-Slavery Society in June 1788.
2. Thomas R. R. Cobb, An Inquiry into the Law of Negro Slavery in the United States of America, to Which is Prefixed an Historical Sketch of Slavery (Philadelphia: T. & J. W. Johnson & Co., 1858), Vol. I, p. 169.
3. Thomas Jefferson, The Works of Thomas Jefferson, Paul Leicester Ford, editor (New York and London: G. P. Putnam’s Sons, 1905), Vol. XI, pp. 470-471, to David Barrow on May 1, 1815.
4. Jefferson, The Writings of Thomas Jefferson, Albert Ellery Bergh, editor (Washington, D. C.: Thomas Jefferson Memorial Association, 1903), Vol. I, p. 28, from his Autobiography; see also James Madison, The Papers of James Madison (Washington: Langtree and O’Sullivan, 1840), Vol. III, p. 1395, August 22, 1787; see also James Madison, The Writings of James Madison, Gaillard Hunt, editor (New York: G. P. Putnam’s Sons, 1910), Vol. IX, p. 2, to Robert Walsh on November 27, 1819.
5. Cobb, Vol. 1, p. 172.
6. George Washington, The Writings of George Washington, John C. Fitzpatrick, editor (Washington, D. C.: United States Government Printing Office, 1936), Vol. 38, p. 408, to Robert Morris on April 12, 1786.
7. George Washington, The Writings of George Washington, Jared Sparks (Boston: American Stationers’ Company, 1837), Vol. II, p. 494.
8. Washington, Writings (1936), Vol. 28, p. 424, to Marquis de Lafayette on May 10, 1786.
9. Washington, Writings (1936), Vol. 36, p. 2, to Lawrence Lewis on August 4, 1797.
10. George Washington, The Diaries of George Washington, 1748-1799, John C. Fitzpatrick, editor (Boston: Houghton Mifflin Company, published for the Mount Vernon Ladies’ Association, 1925), Vol. I, p. 117 (on January 25, 1760, Washington sought to purchase a joiner, a bricklayer, and a gardener), p. 278 (on July 25, 1768, Washington purchased a bricklayer), and p. 383 (on June 11, 1770, Washington purchased two slaves). Additional information on the total number of slaves Washington purchased, and the dates of those purchases, was provided by research specialist Mary Thompson of Mt. Vernon.
11. Washington, Writings (1939), Vol. 29, p. 5, to John Francis Mercer on September 9, 1786.
12. Washington, Writings (1939), Vol. 34, p. 47, to Alexander Spotswood on November 23, 1794.
13. Washington, Writings (1940), Vol. 37, p. 338, to Robert Lewis on August 18, 1799.
14. James Thomas Flexner, George Washington: Anguish and Farewell, 1793-1799 (Boston: Little, Brown and Company, 1972), p. 342.
15. Washington, Writings (1940), Vol. 37, p. 338, to Robert Lewis on August 18, 1799.
16. Washington, Writings (1940), Vol. 37, p. 338, to Robert Lewis on August 18, 1799.
17. Mount Vernon, “George Washington and Slavery. Slave Census, 1996,” www.mountvernon.org/education/slavery/census.html.
18. Washington, Writings (1931), Vol. III, p. 285, to Edward Montague on April 5, 1775.
19. George Washington, The Last Will and Testament of George Washington and Schedule of his Property to Which is Appended the Last Will and Testament of Martha Washington, John C. Fitzpatrick, editor (Washington, D. C.: The Mount Vernon Ladies’ Association of the Union, 1939), pp. 2-4.
20. Washington, Writings (1931), Vol. 4, pp. 360-361, to Phillis Wheatley on February 28, 1776.
21. Edward Johnson, A School History of the Negro Race in America, from 1619 to 1890, with a Short Introduction as to the Origin of the Race; Also a Short Sketch of Liberia (Raleigh: Edwards & Broughton, 1891), p. 68.
22. William C. Nell, Services of Colored Americans in the Wars of 1776 and 1812 (Boston: Robert F. Wallcut, 1852), pp. 39-40, taken from the Appendix, quoting Rev. Henry F. Harrington, “Anecdotes of Washington,” Godey’s Lady’s Book, June, 1849.
23. Nell, Services, p. 38.
24. W. O. Blake, The History of Slavery and the Slave Trade; Ancient and Modern. The Forms of Slavery that Prevailed in Ancient Nations, Particularly in Greece and Rome. The African Slave Trade and the Political History of Slavery in the United States (Ohio: J. & H. Miller, 1857), pp. 373-374.
25. Blake, The History of Slavery and the Slave Trade, p. 381.
26. George M. Stroud, A Sketch of the Laws Relating to Slavery in the Several States of the United States of America (Philadelphia: Henry Longstreth, 1856), pp. 236-237.
27. Stroud, A Sketch of the Laws Relating to Slavery, pp. 236-237.
28. Dumas Malone, Jefferson and His Time: Volume Six, The Sage of Monticello (Boston: Little Brown and Company, 1981), p. 319.
29. The Revised Code of the Laws of Virginia: Being A Collection of all Such Acts of the General Assembly, of a Public and Permanent Nature, as are Now in Force (Richmond: Printed by Thomas Ritchie, 1819), pp. 433-436.
30. The Revised Code of the Laws of Virginia, pp. 433-436.
31. The Revised Code of the Laws of Virginia, pp. 433-436; see also, Stroud, A Sketch of the Laws Relating to Slavery, pp. 236-237.
32. Jefferson, Writings (1903), Vol. I, p. 4, from his Autobiography.
33. Anno Regni Georgii III. Regis Magne Britanniæ, Franciæ, & Hiberniæ, Sexto (London: Printed by Mark Baskett, Printer to the King’s most Excellent Majesty; and by the assigns of Robert Baskett, 1766).
34. Benjamin Franklin, The Works of Benjamin Franklin, Jared Sparks, editor (Boston: Tappan, Whittemore, and Mason, 1839), Vol. VIII, p. 42, to the Rev. Dean Woodward on April 10, 1773.
35. Journals of the Continental Congress, 1774-1789 (Washington: Government Printing Office, 1906), Vol. V, 1776, June 5-October 8, p. 498, Jefferson’s draft of the Declaration of Independence.
36. Journals of the Continental Congress, Vol. XXVI, pp. 118-119, Monday, March 1, 1784.
37. Thomas Jefferson, Notes on the State of Virginia (New York: M. L. & W. A. Davis, 1794, Second Edition), pp. 240-242, Query XVIII.
38. Jefferson, Works (1905), Vol. X, p. 126, to William A. Burwell on January 28, 1805.
39. Jefferson, Works (1905), Vol. XI, pp. 416-420, to Edward Coles on August 25, 1814.
40. Jefferson, Works (1905), Vol. XI, pp. 470-471, to David Barrow on May 1, 1815.
41. Jefferson, Works (1905), Vol. XII, pp. 158-159, to John Holmes on April 22, 1820.
42. Jefferson, Writings (1904), Vol. XVI, pp. 119-120, to Miss Frances Wright on August 7, 1825.
43. Jefferson, Writings (1904), Vol. XVI, pp. 162-163, to the Hon. Edward Everett on April 8, 1826.
44. The Constitutions of the Sixteen States (Boston: Manning and Loring, 1797), p. 249, Vermont, 1786, Article I, “Declaration of Rights.”
45. Information obtained from Monticello, at www.monticello.org/jefferson/plantation/dig.html.
46. Jefferson, Writings (1903), Vol. II, p. 194, from Query XIV of Notes on Virginia.
47. Jefferson, Writings (1903), Vol. II, p. 201, from Query XIV of Notes on Virginia.
48. Jefferson, Writings (1904), Vol. XII, p. 255, to M. Henri Gregoire on February 25, 1809; see also Vol. XII, p. 322, to Joel Barlow on October 8, 1809, wherein, speaking on the same subject, he declares, “It is impossible for doubt to have been more tenderly or hesitatingly expressed than that was in the Notes of Virginia, and nothing was or is farther from my intentions than to enlist myself as the champion of a fixed opinion where I have only expressed a doubt.”
49. Jefferson, Writings (1903), Vol. VIII, pp. 241-242, to Benjamin Banneker on August 30, 1791.
50. Jefferson, Writings (1903), Vol. I, p. 72, from Jefferson’s Autobiography.
51. George S. Merriam, The Negro and the Nation: A History of American Slavery and Enfranchisement (New York: Henry Holt and Company, 1906), pp. 8-10.
52. John Quincy Adams, An Oration Delivered Before The Inhabitants Of The Town Of Newburyport at Their Request on the Sixty-First Anniversary of the Declaration of Independence, July 4, 1837 (Newburyport: Charles Whipple, 1837), p. 50.
53. Daniel Webster, The Writings and Speeches of Daniel Webster Hitherto Uncollected (Boston: Little, Brown, & Company, 1903), Vol. III, pp. 192-193, n., “Address on the Annexation of Texas,” January 29, 1845.
54. Daniel Webster, Writings . . . Hitherto Uncollected, Vol. III, pp. 204-205, “Address on the Annexation of Texas,” January 29, 1845.
55. Abraham Lincoln, The Collected Works of Abraham Lincoln, Roy P. Basler, editor (New Jersey: Rutgers University Press, 1953), Vol. II, pp. 250-251, from his speech at Peoria, Illinois, on October 16, 1854.
56. Lincoln, Works, Vol. II, p. 546, from his speech on August 17, 1858.
1656 – The 1st Quakers, Mary Fisher and Ann Austin, arrived in Boston and were promptly arrested.
1776 – The Continental Congress, sitting as a committee, met on July 1, 1776, to debate a resolution submitted by Virginia delegate Richard Henry Lee on June 7. The resolution stated that the United Colonies “are, and of right ought to be, free and independent States.” The committee voted for the motion and, on July 2 in formal session took the final vote for independence.
1777 – British troops departed from their base at the Bouquet river to head toward Ticonderoga, New York.
1797 – Congress passed “An Act providing a Naval Armament,” empowered the President to “cause the said revenue cutters to be employed to defend the seacoast and to repel any hostility to their vessels and commerce, within their jurisdiction, having due regard to the duty of said cutters in the protection of the revenue.” The act also increased the complements of the cutters from ten men to a number “not exceeding 30 marines and seamen.”
1800 – First convoy duty; USS Essex escorts convoy of merchant ships from East Indies to U.S.
1801 – U.S. squadron under Commodore Dale enters Mediterranean to strike Barbary Pirates.
1850 – Naval School at Annapolis renamed Naval Academy.
1851 – Naval Academy adopts four year course of study.
1861 – The US War Department decreed that Kansas and Tennessee were to be canvassed for volunteers.
1862 – Congress gave the green light to the tax-centric Revenue Act. The legislation, which was soon signed into law by President Abraham Lincoln, imposed a three-percent tax on people with incomes between $600 and $10,000, and also called for a five-percent levy on people with incomes over $10,000. However, the Revenue Act was perhaps more notable for creating the Bureau of Internal Revenue, a government agency charged with collecting the revenue generated by the new taxes. Though the Revenue Act and its attendant package of taxes were allowed to lapse into legislative oblivion after the Civil War, the Bureau of Internal Revenue eventually came back to haunt America’s taxpaying citizens in 1913, when the Sixteenth Amendment was added to the Constitution. Along with sanctioning the income tax, the amendment paved the path for the opening of the Internal Revenue Service, which, in its role as the official clearing house for the nation’s taxes, proved to be the bureaucratic progeny of the Bureau of Internal Revenue.
1862 – The US Congress outlawed polygamy for the 1st time. The Morrill Anti-Bigamy Act, signed by Pres. Lincoln, made polygamy illegal in American territories. It led to the prosecution of over 1300 Mormons. It also granted large tracts of public land to the states with the directive to sell them for the support of institutions teaching the mechanical and agricultural arts, and it obligated male students at those state universities to undergo military training. The education initiative resulted in 68 land-grant colleges.
1862 – In day 7 of the 7 Days Battle Union artillery stopped a Confederate attack at Malvern Hill, Virginia. Casualties totaled: US 15,249 and CS 17,583.
1863 – The largest military conflict in North American history begins this day when Union and Confederate forces collide at Gettysburg. The epic battle lasted three days and resulted in a retreat to Virginia by Robert E. Lee’s Army of Northern Virginia. Two months prior to Gettysburg, Lee had dealt a stunning defeat to the Army of the Potomac at Chancellorsville. He then made plans for a Northern invasion in order to relieve pressure on war-weary Virginia and to seize the initiative from the Yankees. His army, numbering about 80,000, began moving on June 3. The Army of the Potomac, commanded by Joseph Hooker and numbering just under 100,000, began moving shortly thereafter, staying between Lee and Washington, D.C. But on June 28, frustrated by the Lincoln administration’s restrictions on his autonomy as commander, Hooker resigned and was replaced by George G. Meade. Meade took command of the Army of the Potomac as Lee’s army moved into Pennsylvania. On the morning of July 1, advance units of the forces came into contact with one another just outside of Gettysburg. The sound of battle attracted other units, and by noon the conflict was raging. During the first hours of battle, Union General John Reynolds was killed, and the Yankees found that they were outnumbered. The battle lines ran around the northwestern rim of Gettysburg. The Confederates applied pressure all along the Union front, and they slowly drove the Yankees through the town. By evening, the Federal troops rallied on high ground on the southeastern edge of Gettysburg. As more troops arrived, Meade’s army formed a three-mile long, fishhook-shaped line running from Culp’s Hill on the right flank, along Cemetery Hill and Cemetery Ridge, to the base of Little Round Top. The Confederates held Gettysburg, and stretched along a six-mile arc around the Union position. For the next two days, Lee would batter each end of the Union position, and on July 3, he would launch Pickett’s charge against the Union center.
1863 – John Fulton Reynolds (42), Union general, died in battle at Gettysburg.
1864 – Battle of Petersburg, VA, began.
1898 – As part of their campaign to capture Spanish-held Santiago de Cuba on the southern coast of Cuba, the U.S. Army Fifth Corps engages Spanish forces at El Caney and San Juan Hill. In May 1898, one month after the outbreak of the Spanish-American War, a Spanish fleet docked in the Santiago de Cuba harbor after racing across the Atlantic from Spain. A superior U.S. naval force arrived soon after and blockaded the harbor entrance. In June, the U.S. Army Fifth Corps landed on Cuba with the aim of marching to Santiago and launching a coordinated land and sea assault on the Spanish stronghold. Included among the U.S. ground troops were the Theodore Roosevelt-led “Rough Riders,” a collection of Western cowboys and Eastern blue bloods officially known as the First U.S. Voluntary Cavalry. The U.S. Army Fifth Corps fought its way to Santiago’s outer defenses, and on July 1 U.S. General William Shafter ordered an attack on the village of El Caney and San Juan Hill. Shafter hoped to capture El Caney before besieging the fortified heights of San Juan Hill, but the 500 Spanish defenders of the village put up a fierce resistance and held off 10 times their number for most of the day. Although El Caney was not secure, some 8,000 Americans pressed forward toward San Juan Hill. Hundreds fell under Spanish gunfire before reaching the base of the heights, where the force split up into two flanks to take San Juan Hill and Kettle Hill. The Rough Riders were among the troops in the right flank attacking Kettle Hill. When the order was given by Lieutenant John Miley that “the heights must be taken at all hazards,” the Rough Riders, who had been forced to leave their horses behind because of transportation difficulties, led the charge up the hills. The Rough Riders and the black soldiers of the 9th and 10th Cavalry regiments were the first up Kettle Hill, and San Juan Hill was taken soon after. 
From the crest, the Americans found themselves overlooking Santiago, and the next day they began a siege of the city. On July 3, the Spanish fleet was destroyed off Santiago by U.S. warships under Admiral William Sampson, and on July 17 the Spanish surrendered the city–and thus Cuba–to the Americans.
1907 – World’s 1st air force was established as part of the US Army.
1911 – Trial of first Navy aircraft, Curtiss A-1. The designer, Glenn Curtiss, makes first flight in Navy’s first aircraft, A-1, at Lake Keuka, NY, then prepares LT Theodore G. Ellyson, the first naval aviator, for his two solo flights in A-1.
1916 – Establishment of informal school for officers assigned to submarines at New London, CT.
1917 – Race riots in East St. Louis, Illinois, and 40 to 200 were reported killed.
1918 – USS Covington hit without warning by two torpedoes from German Submarine U-86 and sank the next day.
1921 – The Coast Guard’s first air station, located at Morehead City, North Carolina, was closed due to a lack of funding.
1939 – Lighthouse Service of Department of Commerce transferred to Coast Guard under President Franklin Roosevelt’s Reorganization Plan No. 11. Under the President’s Reorganization Plan No. 11, made effective this date by Public Resolution No. 20, approved 7 June 1939, it was provided “that the Bureau of Lighthouses in the Department of Commerce and its functions be transferred to and consolidated with and administered as a part of the Coast Guard. This consolidation made in the interest of efficiency and economy, will result in the transfer to and consolidation with the Coast Guard of the system of approximately 30,000 aids to navigation (including light vessels and lighthouses) maintained by the Lighthouse Service on the sea and lake coasts of the United States, on the rivers of the United States, and on the coasts of all other territory under the jurisdiction of the United States with the exception of the Philippine Island and Panama Canal proper.” Plans were put into effect, “Providing for a complete integration with the Coast Guard of the personnel of the Lighthouse Service numbering about 5,200, together with the auxiliary organization of 64 buoy tenders, 30 depots, and 17 district offices.”
1940 – Roosevelt signs a further Navy bill providing for the construction of 45 more ships and providing $550,000,000 to finance these and other projects.
1941 – Aircraft from the United States Navy start antisubmarine patrols from bases in Newfoundland.
1941 – Commercial black and white television broadcasting began in the US.
1943 – “Pay-as-you-go” income tax withholding began.
1944 – Elements of the US 5th Army capture Cecina on the west coast while Pomerance falls, further inland, in the advance to Volterra.
1944 – Delegates from 44 countries began meeting at Bretton Woods, N.H., where they agreed to establish the International Monetary Fund and the World Bank. The US hosted an international conference at Bretton Woods, N.H., to deal with international monetary and financial problems. The talks resulted in the creation of the IMF, International Monetary Fund, and the World Bank in 1945. In 1997 Catherine Caufield wrote “Masters of Illusion: The World Bank and the Poverty of Nations.” The Bretton Woods institutions also include the United Nations and the General Agreement on Tariffs and Trade (renamed the World Trade Organization). The agreement was a gold exchange standard: only the US was required to convert its currency into gold at a fixed rate, and only foreign central banks were allowed the privilege of redemption.
1945 – Some 550 B-29 Superfortress bombers — the greatest number yet to be engaged — drop 4,000 tons of incendiary bombs on the Kure naval base, Shimonoseki, Ube and Kumamoto, on western Kyushu.
1946 – As a final step in the return of the Coast Guard to the Treasury Department from wartime operation under the Navy Department, the Navy directional control of the following Coast Guard functions was terminated: search and rescue functions, maintenance and operation of ocean weather stations and air-sea navigational aids in the Atlantic, continental United States, Alaska, and Pacific east of Pearl Harbor.
1946 – The United States exploded a 20-kiloton atomic bomb near Bikini Atoll in the Marshall Islands in the Pacific Ocean. The energy released by any one of the ten or so major earthquakes every year is about 1,000 times as much as the Bikini atomic bomb.
1947 – State Department official George Kennan, using the pseudonym “Mr. X,” publishes an article entitled “The Sources of Soviet Conduct” in the July edition of Foreign Affairs. The article focused on Kennan’s call for a policy of containment toward the Soviet Union and established the foundation for much of America’s early Cold War foreign policy. In February 1946, Kennan, then serving as the U.S. charge d’affaires in Moscow, wrote his famous “long telegram” to the Department of State. In the missive, he condemned the communist leadership of the Soviet Union and called on the United States to forcefully resist Russian expansion. Encouraged by friends and colleagues, Kennan refined the telegram into an article, “The Sources of Soviet Conduct,” and secured its publication in the July edition of Foreign Affairs. Kennan signed the article “Mr. X” to avoid any charge that he was presenting official U.S. government policy, but nearly everyone in the Department of State and White House recognized the piece as Kennan’s work. In the article, Kennan explained that the Soviet Union’s leaders were determined to spread the communist doctrine around the world, but were also extremely patient and pragmatic in pursuing such expansion. In the “face of superior force,” Kennan said, the Russians would retreat and wait for a more propitious moment. The West, however, should not be lulled into complacency by temporary Soviet setbacks. Soviet foreign policy, Kennan claimed, “is a fluid stream which moves constantly, wherever it is permitted to move, toward a given goal.” In terms of U.S. foreign policy, Kennan’s advice was clear: “The main element of any United States policy toward the Soviet Union must be that of a long-term, patient but firm and vigilant containment of Russian expansive tendencies.” Kennan’s article created a sensation in the United States, and the term “containment” instantly entered the Cold War lexicon. The administration of President Harry S. 
Truman embraced Kennan’s philosophy, and in the next few years attempted to “contain” Soviet expansion through a variety of programs, including the establishment of the North Atlantic Treaty Organization (NATO) in 1949. Kennan’s star rose quickly in the Department of State and in 1952 he was named U.S. ambassador to Russia. By the 1960s, with the United States hopelessly mired in the Vietnam War, Kennan began to question some of his own basic assumptions in the “Mr. X” article and became a vocal critic of U.S. involvement in Vietnam. In particular, he criticized U.S. policymakers during the 1950s and 1960s for putting too much emphasis on the military containment of the Soviet Union, rather than on political and economic programs.
1950 – Task Force Smith, two companies of the 24th Infantry Division’s 21st Infantry Regiment, commanded by Lieutenant Colonel Charles B. Smith and the first U.S. combat unit in Korea, arrived at Pusan. Major General William F. Dean, the 24th Infantry Division commander, was named commander of all U.S. forces in Korea.
1951 – North Korean leader Kim Il Sung and Peng Teh-huai, commander of the Chinese “Volunteers,” agreed to begin armistice discussions.
1956 – The Highway Revenue Act of 1956 was put into effect by Congress, outlining a policy of taxation with the aim of creating a fund for the construction of over 42,500 miles of interstate highways over a period of 13 years. The push for a national highway system began many years earlier, when privately funded construction of the Lincoln Highway began in 1919. President Franklin D. Roosevelt (1933-1945) did much to set into motion plans for a federally funded highway system, but his efforts were halted by the outbreak of World War II. With the end of the war came America’s industrial boom and a massive increase in automobile registration. Dwight D. Eisenhower, elected president in 1952, had been a supporter of a federally funded highway system ever since, as an Army Lieutenant in 1919, he led a military convoy from San Francisco to New York. His travels through Germany during World War II only increased his desire to replicate Germany’s autobahn system. Eisenhower’s 1954 State of the Union address made clear his intentions to follow through on his interest. He declared the need to “protect the vital interests of every citizen in a safe, adequate highway system.” It wasn’t until 1956 that Eisenhower saw his vision pass through Congress. The scale of the plan was breathtaking: At a time when the total federal budget approached $71 billion, Eisenhower’s plan called for $50 billion over 13 years for highways. To pay for the project a system of taxes, relying heavily on the taxation of gasoline, was implemented. Legislation has extended the Interstate Highway Revenue Act three times. Today consumers pay 18.3¢ per gallon on gasoline. Eisenhower thought of the Federal Interstate System as his greatest achievement. Today, revisionists question the solutions offered by our massive labyrinth of highways. Undoubtedly the interstate system changed America and made it what it is today, with suburbs and “edge cities” springing up across the country. 
Employment increased, as well as the U.S. gross national product. Still, both state and federal governments struggle to appropriate the funds to expand our national road network and meet the demand of the ever-growing population of car owners. Many economists subscribe to Helen Levitt’s theory that “congestion rises to meet road capacity,” and anti-road activists cite the loss of productive farmland, the demise of small business, the destruction of the environment, and the “urbanization” of American society. Truly, the grass is always greener on the other side of the highway.
1958 – The new Atlantic merchant vessel position reporting program, known by the acronym AMVER, was established. It was aimed at encouraging domestic and foreign merchant vessels to send voluntary position reports and navigational data to U.S. Coast Guard shore-based radio stations and ocean station vessels. Relayed to a ship plotting center in New York and processed by machine, these data provided updated position information for U.S. Coast Guard rescue coordination centers. The centers could then direct only those vessels which would be of effective aid to craft or persons in distress, making the diversion of all merchant ships in a large area unnecessary.
1960 – USSR shot down a US RB-47 reconnaissance plane.
1962 – Intelligence has been an essential element of Army operations during war as well as during periods of peace. In the past, requirements were met by personnel from the Army Intelligence and Army Security Reserve branches, two-year obligated tour officers, one-tour levies on the various branches, and Regular Army officers in the specialization programs. To meet the Army’s increased requirement for national and tactical intelligence, an Intelligence and Security Branch was established in the Army effective July 1, 1962, by General Orders No. 38, July 3, 1962. On July 1, 1967, the branch was redesignated as Military Intelligence.
1965 – Undersecretary of State George Ball submits a memo to President Lyndon B. Johnson titled “A Compromise Solution for South Vietnam.” It began bluntly: “The South Vietnamese are losing the war to the Viet Cong. No one can assure you that we can beat the Viet Cong, or even force them to the conference table on our terms, no matter how many hundred thousand white, foreign (U.S.) troops we deploy.” Ball advised that the United States not commit any more troops, restrict the combat role of those already in place, and seek to negotiate a way out of the war. As Ball was submitting his memo, the U.S. air base at Da Nang came under attack by the Viet Cong for the first time. An enemy demolition team infiltrated the airfield and destroyed three planes and damaged three others. One U.S. airman was killed and three U.S. Marines were wounded. The attack on Da Nang, the increased aggressiveness of the Viet Cong, and the weakness of the Saigon regime convinced Johnson that he had to do something to stop the communists or they would soon take over South Vietnam. While Ball recommended a negotiated settlement, Secretary of Defense Robert McNamara urged the president to “expand promptly and substantially” the U.S. military presence in South Vietnam. Johnson, not wanting to lose South Vietnam to the communists, ultimately accepted McNamara’s recommendation. On July 22, he authorized a total of 44 U.S. battalions for commitment in South Vietnam, a decision that led to a massive escalation of the war. There were fewer than ten U.S. Army and Marine battalions in South Vietnam at this time. Eventually there would be more than 540,000 U.S. troops in South Vietnam.
1966 – The U.S. Marines launched Operation Holt in an attempt to finish off a Vietcong battalion in Thua Thien Province in Vietnam.
1966 – U.S. Air Force and Navy jets carry out a series of raids on fuel installations in the Hanoi-Haiphong area. The Dong Nam fuel dump, 15 miles northeast of Hanoi, with 9 percent of North Vietnam’s storage capacity, was struck on this day. The Do Son petroleum installation, 12 miles southeast of Haiphong, would be attacked on July 3. The raids continued for two more days, as petroleum facilities near Haiphong, Thanh Hoa, and Vinh were bombed, and fuel tanks in the Hanoi area were hit. These raids were part of Operation Rolling Thunder, which had begun in March 1965. The attacks on the North Vietnamese fuel facilities represented a new level of bombing, since these sites had been previously off limits. However, the raids did not have a lasting impact because China and the Soviet Union replaced the destroyed petroleum assets fairly quickly. China reacted to these events by calling the bombings “barbarous and wanton acts that have further freed us from any bounds of restrictions in helping North Vietnam.” The World Council of Churches in Geneva sent a cable to President Lyndon B. Johnson saying that the latest bombing of North Vietnam was causing a “widespread reaction” of “resentment and alarm” among many Christians. Indian mobs protested the air raids on the Hanoi-Haiphong area with violent anti-American demonstrations in Delhi and several other cities.
1968 – The United States, Britain, the Soviet Union and 58 other nations signed the Nuclear Nonproliferation Treaty.
1972 – Date of rank of Rear Admiral Samuel Lee Gravely, Jr., the first U.S. Navy admiral of African-American descent.
1991 – A 14th Coast Guard District LEDET, all crewmen from the CGC Rush, deployed on board the U.S. Navy’s USS Ingersoll, made history when they seized the St. Vincent-registered M/V Lucky Star for carrying 70 tons of hashish, the largest hashish bust in Coast Guard history to date. The team, led by LTJG Mark Eyler, made the bust 600 miles west of Midway Island.
1991 – A high personnel retention level led the Commandant, ADM J. William Kime, to begin implementing a high-year tenure program, otherwise known as an “up or out” policy, to “improve personnel flow and opportunities for advancement.” Two significant points of the program were that it limited enlisted careers to 30 years of active service and established “professional growth points” for paygrades E-4 through E-9, which had to be attained in order to remain on active duty. Up until this time, enlisted members could remain on active duty until age 62 — the only U.S. military work force with that option.
1992 – UNSCOM begins the destruction of large quantities of Iraqi chemical weapons and production facilities.
1993 – The space shuttle Endeavour returned from a 10-day mission.
1995 – As a result of UNSCOM’s investigations, Iraq admits for the first time the existence of an offensive biological weapons program, but denies weaponisation.
1996 – The United States rejects an Iraqi plan for distributing food and medicine under United Nations Security Council Resolution 986. It would allow Saddam Hussein’s government to evade certain sanctions as well as to give it control over distribution of supplies to separatist Kurds in northern Iraq.
1996 – Twelve members of an Arizona anti-government group, the Viper Militia, were charged with plotting to blow up government buildings. The group was infiltrated by Drew Nolan, an agent for the Bureau of Alcohol, Tobacco, and Firearms (ATF).
2001 – In China, parts of the US spy plane were flown out from Hainan Island.
2002 – Jordan reported that 11 people, including a Palestinian-Jordanian who fled the American bombing on Osama bin Laden’s stronghold in Afghanistan, have been detained in connection with an alleged plot to attack American targets.
2003 – The US planned to suspend $48 million in aid to some 35 countries for failing to meet this day’s deadline for exempting Americans from prosecution before the new UN int’l. war crimes tribunal.
2004 – The US Coast Guard began boarding foreign vessels as int’l. security rules went into effect.
2004 – Historic Afghan elections scheduled for September were delayed because of wrangling among officials and political parties.
2004 – A defiant Saddam Hussein rejected charges of war crimes and genocide in a court appearance, telling a judge “this is all theater, the real criminal is Bush.”
2004 – In Iraq US jets pounded a suspected safehouse of terrorist Abu Musab al-Zarqawi in Fallujah.
2006 – A Web-posted message purportedly written by Osama bin Laden urged Somalis to build an Islamic state in the country and warned western states that his al-Qaeda network would fight against them if they intervened there.
HERODOTUS (/hᵻˈrɒdətəs/; Ancient Greek: Ἡρόδοτος) was a Greek historian who was born in Halicarnassus in the Persian Empire (modern-day Turkey) and lived in the fifth century BC (c. 484–c. 425 BC), a contemporary of Thucydides, Socrates, and Euripides. He is often referred to as "The Father of History", a title first conferred by Cicero; he was the first historian known to have broken from Homeric tradition to treat historical subjects as a method of investigation—specifically, by collecting his materials systematically and critically, and then arranging them into a historiographic narrative.

_The Histories_ is the only work which he is known to have produced, a record of his "inquiry" (ἱστορία _historía_) on the origins of the Greco-Persian Wars; it primarily deals with the lives of Croesus, Cyrus, Cambyses, Smerdis, Darius, and Xerxes and the battles of Marathon, Thermopylae, Artemisium, Salamis, Plataea, and Mycale; however, its many cultural, ethnographical, geographical, historiographical, and other digressions form a defining and essential part of the _Histories_ and contain a wealth of information. Some of his stories are fanciful and others inaccurate; yet he states that he is reporting only what he was told; a sizable portion of the information he provided was later confirmed by historians and archaeologists. Despite Herodotus's historical significance, little is known of his personal life.
* 1 Place in history
* 1.1 Predecessors
* 1.2 Writing style
* 1.3 Contemporary and modern critics
* 2 Life
* 2.1 Childhood
* 2.2 Early travels
* 2.3 Later life
* 2.4 Author and orator
* 3 Reliability
* 3.1 Egypt
* 3.2 Science
* 3.3 Accusations of bias
* 3.4 Herodotus's use of sources and sense of authority
* 4 Mode of explanation
* 4.1 Types of causality
* 5 Herodotus and myth
* 6 See also
* 7 Critical editions
* 8 Translations
* 9 Notes
* 10 References
* 10.1 Sources
* 11 Further reading
* 12 External links
PLACE IN HISTORY
Herodotus announced the size and scope of his work at the beginning of his _Researches_ or _Histories_:

Here are presented the results of the enquiry carried out by Herodotus of Halicarnassus. The purpose is to prevent the traces of human events from being erased by time, and to preserve the fame of the important and remarkable achievements produced by both Greeks and non-Greeks; among the matters covered is, in particular, the cause of the hostilities between Greeks and non-Greeks. — Herodotus, _The Histories_, Robin Waterfield translation (2008)
His record of the achievements of others was an achievement in itself, though the extent of it has been debated. Herodotus' place in history and his significance may be understood according to the traditions within which he worked. His work is the earliest Greek prose to have survived intact. However, Dionysius of Halicarnassus, a literary critic of Augustan Rome, listed seven predecessors of Herodotus, describing their works as simple, unadorned accounts of their own and other cities and people, Greek or foreign, including popular legends, sometimes melodramatic and naïve, often charming – all traits that can be found in the work of Herodotus himself.
Modern historians regard the chronology as uncertain, but according to the ancient account, these predecessors included Dionysius of Miletus, Charon of Lampsacus, Hellanicus of Lesbos, Xanthus of Lydia and, the best attested of them all, Hecataeus of Miletus. Of these, only fragments of Hecataeus's works survived, and the authenticity of these is debatable, but they provide a glimpse into the kind of tradition within which Herodotus wrote his own _Histories_.

Fragment from the _Histories_ VIII on Papyrus Oxyrhynchus 2099, early 2nd century AD

In the introduction to Hecataeus's work, _Genealogies_:

Hecataeus the Milesian speaks thus: I write these things as they seem true to me; for the stories told by the Greeks are various and in my opinion absurd.
This points forward to the "folksy" yet "international" outlook typical of Herodotus. However, one modern scholar has described the work of Hecataeus as "a curious false start to history", since despite his critical spirit, he failed to liberate history from myth. Herodotus mentions Hecataeus in his _Histories_, on one occasion mocking him for his naive genealogy and, on another occasion, quoting Athenian complaints against his handling of their national history. It is possible that Herodotus borrowed much material from Hecataeus, as stated by Porphyry in a quote recorded by Eusebius. In particular, it is possible that he copied descriptions of the crocodile, hippopotamus, and phoenix from Hecataeus's _Circumnavigation of the Known World_ (_Periegesis_ / _Periodos ges_), even misrepresenting the source as "Heliopolitans" (_Histories_ 2.73). But Hecataeus did not record events that had occurred in living memory, unlike Herodotus, nor did he include the oral traditions of Greek history within the larger framework of oriental history. There is no proof that Herodotus derived the ambitious scope of his own work, with its grand theme of civilizations in conflict, from any predecessor, despite much scholarly speculation about this in modern times.
Herodotus claims to be better informed than his predecessors by relying on empirical observation to correct their excessive schematism. For example, he argues for continental asymmetry as opposed to the older theory of a perfectly circular earth with Europe and Asia/Africa equal in size (_Histories_ 4.36 and 4.42). However, he retains idealizing tendencies, as in his symmetrical notions of the Danube and Nile.
His debt to previous authors of prose "histories" might be questionable, but there is no doubt that Herodotus owed much to the example and inspiration of poets and story-tellers. For example, Athenian tragic poets provided him with a world-view of a balance between conflicting forces, upset by the hubris of kings, and they provided his narrative with a model of episodic structure. His familiarity with Athenian tragedy is demonstrated in a number of passages echoing Aeschylus's _Persae_, including the epigrammatic observation that the defeat of the Persian navy at Salamis caused the defeat of the land army (_Histories_ 8.68 ~ _Persae_ 728). The debt may have been repaid by Sophocles, because there appear to be echoes of _The Histories_ in his plays, especially a passage in _Antigone_ that resembles Herodotus's account of the death of Intaphernes (_Histories_ 3.119 ~ _Antigone_ 904–920). However, this point is one of the most contentious issues in modern scholarship.
Homer was another inspirational source. Just as Homer drew extensively on a tradition of oral poetry, sung by wandering minstrels, so Herodotus appears to have drawn on an Ionian tradition of story-telling, collecting and interpreting the oral histories he chanced upon in his travels. These oral histories often contained folk-tale motifs and demonstrated a moral, yet they also contained substantial facts relating to geography, anthropology, and history, all compiled by Herodotus in an entertaining style and format.
CONTEMPORARY AND MODERN CRITICS
It is on account of the many strange stories and the folk-tales he reported that his critics in early modern times branded him "The Father of Lies". Even his own contemporaries found reason to scoff at his achievement. In fact, one modern scholar has wondered if Herodotus left his home in Greek Anatolia, migrating westwards to Athens and beyond, because his own countrymen had ridiculed his work, a circumstance possibly hinted at in an epitaph said to have been dedicated to Herodotus at one of his three supposed resting places, Thuria:

Herodotus the son of Sphynx
lies; in Ionic history without peer;
a Dorian born, who fled from slander's brand
and made in Thuria his new native land.
Yet it was in Athens where his most formidable contemporary critics could be found. In 425 BC, which is about the time that Herodotus is thought by many scholars to have died, the Athenian comic dramatist Aristophanes created _The Acharnians_, in which he blames the Peloponnesian War on the abduction of some prostitutes – a mocking reference to Herodotus, who reported the Persians' account of their wars with Greece, beginning with the rapes of the mythical heroines Io, Europa, Medea, and Helen.

Similarly, the Athenian historian Thucydides dismissed Herodotus as a "logos-writer" (story-teller). Thucydides, who had been trained in rhetoric, became the model for subsequent prose-writers as an author who seeks to appear firmly in control of his material, whereas with his frequent digressions Herodotus appeared to minimize (or possibly disguise) his authorial control. Moreover, Thucydides developed a historical topic more in keeping with the Greek world-view: focused on the context of the _polis_ or city-state. The interplay of civilizations was more relevant to Greeks living in Anatolia, such as Herodotus himself, for whom life within a foreign civilization was a recent memory.
Before the Persian crisis, history had been represented among the Greeks only by local or family traditions. The "Wars of Liberation" had given to Herodotus the first genuinely historical inspiration felt by a Greek. These wars showed him that there was a corporate life, higher than that of the city, of which the story might be told; and they offered to him as a subject the drama of the collision between East and West. With him, the spirit of history was born into Greece; and his work, called after the nine Muses, was indeed the first utterance of Clio. — Richard Claverhouse Jebb
LIFE
Modern scholars generally turn to Herodotus's own writing for reliable information about his life, supplemented with ancient yet much later sources, such as the Byzantine _Suda_, an 11th-century encyclopaedia which possibly took its information from traditional accounts.

The data are so few – they rest upon such late and slight authority; they are so improbable or so contradictory, that to compile them into a biography is like building a house of cards, which the first breath of criticism will blow to the ground. Still, certain points may be approximately fixed ... — George Rawlinson
Modern accounts of his life typically go something like this: Herodotus was born at Halicarnassus around 484 BC. There is no reason to disbelieve the _Suda_'s information about his family: that it was influential and that he was the son of Lyxes and Dryo, and the brother of Theodorus, and that he was also related to Panyassis – an epic poet of the time. The town was within the Persian Empire at that time, making Herodotus a Persian subject, and it may be that the young Herodotus heard local eye-witness accounts of events within the empire and of Persian preparations for the invasion of Greece, including the movements of the local fleet under the command of Artemisia.

Inscriptions recently discovered at Halicarnassus indicate that her grandson Lygdamis negotiated with a local assembly to settle disputes over seized property, which is consistent with a tyrant under pressure. His name is not mentioned later in the tribute list of the Delian League, indicating that there might well have been a successful uprising against him some time before 454 BC. The epic poet Panyassis – a relative of Herodotus – is reported to have taken part in a failed uprising. Herodotus expresses affection for the island of Samos (III, 39–60), and this is an indication that he might have lived there in his youth. So it is possible that his family was involved in an uprising against Lygdamis, leading to a period of exile on Samos, followed by some personal hand in the tyrant's eventual fall.

The statue of Herodotus in his hometown of Halicarnassus, modern Bodrum, Turkey.
Herodotus wrote his _Histories_ in the Ionian dialect, yet he was born in Halicarnassus, which was a Dorian settlement. According to the _Suda_, Herodotus learned the Ionian dialect as a boy living on the island of Samos, to which he had fled with his family from the oppressions of Lygdamis, tyrant of Halicarnassus and grandson of Artemisia I of Caria. The _Suda_ also informs us that Herodotus returned home to lead the revolt that eventually overthrew the tyrant. Due to recent discoveries of inscriptions at Halicarnassus dated to about Herodotus's time, we now know that the Ionic dialect was used in Halicarnassus in some official documents, so there is no need to assume (like the _Suda_) that he must have learned the dialect elsewhere. Further, the _Suda_ is the only source which we have for the role played by Herodotus as the heroic liberator of his birthplace. That itself is a good reason to doubt such a romantic account.

As Herodotus himself reveals, Halicarnassus, though a Dorian city, had ended its close relations with its Dorian neighbours after an unseemly quarrel (I, 144), and it had helped pioneer Greek trade with Egypt (II, 178). It was, therefore, an outward-looking, international-minded port within the Persian Empire, and the historian's family could well have had contacts in other countries under Persian rule, facilitating his travels and his researches.
Herodotus's eye-witness accounts indicate that he traveled in Egypt in association with Athenians, probably some time after 454 BC or possibly earlier, after an Athenian fleet had assisted the uprising against Persian rule in 460–454 BC. He probably traveled to Tyre next and then down the Euphrates to Babylon. For some reason, possibly associated with local politics, he subsequently found himself unpopular in Halicarnassus, and some time around 447 BC, migrated to Periclean Athens – a city whose people and democratic institutions he openly admires (V, 78). Athens was also the place where he came to know the local topography (VI, 137; VIII, 52–5), as well as leading citizens such as the Alcmaeonids, a clan whose history features frequently in his writing. Herodotus was granted a financial reward by the Athenian assembly in recognition of his work. It is possible that he unsuccessfully applied for Athenian citizenship, a rare honour after 451 BC, requiring two separate votes by a well-attended assembly.
In 443 BC or shortly afterwards, he migrated to Thurium as part of an Athenian-sponsored colony. Aristotle refers to a version of _The Histories_ written by "Herodotus of Thurium", and indeed some passages in the _Histories_ have been interpreted as proof that he wrote about southern Italy from personal experience there (IV, 15, 99; VI, 127). Intimate knowledge of some events in the first years of the Peloponnesian War (VI, 91; VII, 133, 233; IX, 73) indicates that he might have returned to Athens, in which case it is possible that he died there during an outbreak of the plague. Possibly he died in Macedonia instead, after obtaining the patronage of the court there; or else he died back in Thurium. There is nothing in the _Histories_ that can be dated to later than 430 BC with any certainty, and it is generally assumed that he died not long afterwards, possibly before his sixtieth year.
AUTHOR AND ORATOR
Herodotus would have made his researches known to the larger world
through oral recitations to a public crowd. John Marincola writes in
his introduction to the Penguin edition of _The Histories_ that there
are certain identifiable pieces in the early books of Herodotus’
work which could be labeled as “performance pieces”. These
portions of the research seem independent and “almost detachable”,
so that they might have been set aside by the author for the purposes
of an oral performance. The intellectual matrix of the 5th century,
Marincola suggests, comprised many oral performances in which
philosophers would dramatically recite such detachable pieces of their
work. The idea was to criticize previous arguments on a topic and
emphatically and enthusiastically insert their own in order to win
over the audience.
It was conventional in Herodotus’s day for authors to ‘publish’
their works by reciting them at popular festivals. According to Lucian,
Herodotus took his finished work straight from
Anatolia to the
Olympic Games and read the entire _Histories_ to the assembled
spectators in one sitting, receiving rapturous applause at the end of
it. According to a very different account by an ancient grammarian,
Herodotus refused to begin reading his work at the festival of Olympia
until some clouds offered him a bit of shade – by which time the
assembly had dispersed. (Hence the proverbial expression ‘_Herodotus
and his shade_’ to describe someone who misses an opportunity
through delay.) Herodotus’s recitation at Olympia was a favourite
theme among ancient writers, and there is another interesting
variation on the story to be found in the _Suda_: that of Photius and
Tzetzes, in which a young
Thucydides happened to be in the
assembly with his father, and burst into tears during the recital.
Herodotus observed prophetically to the boy’s father, “Your
son’s soul yearns for knowledge.”
Thucydides and Herodotus became close enough for both to
be interred in Thucydides’ tomb in Athens. Such at least was the
opinion of Marcellinus in his _Life of Thucydides_. According to the
_Suda_, he was buried in Macedonian
Pella and in the agora in Thurium.
_Dedication in the Histories_, translated into Latin by Lorenzo
Valla, Venice 1494
_The Histories_ were occasionally criticized in antiquity, but
modern historians and philosophers generally take a positive view.
Despite the controversy,
Herodotus still serves as the primary, and
often only, source for events in the Greek world, Persian Empire, and
the region generally in the two centuries leading up until his own
day. Herodotus, like many ancient historians, preferred an element
of show to purely analytic history, aiming to give pleasure with
"exciting events, great dramas, bizarre exotica." As such, certain
passages have been the subject of controversy and even some doubt,
both in antiquity and today.
The accuracy of the works of
Herodotus has been controversial since
his own era.
Josephus, Duris of Samos, and
Plutarch all commented on this controversy.
Generally, however, he was regarded as reliable in antiquity, and is
especially so today. Many scholars, ancient and modern, routinely cite
Herodotus (e.g., Aubin,
A. H. L. Heeren , Davidson,
Cheikh Anta Diop ,
Poe, Welsby, Celenko, Volney,
Pierre Montet , Bernal, Jackson, DuBois,
Strabo ). Many of these scholars (Welsby, Heeren, Aubin, Diop, etc.)
explicitly mention the reliability of Herodotus's work and
demonstrate corroboration of Herodotus's writings by modern scholars.
A. H. L. Heeren quoted
Herodotus throughout his work and provided
corroboration by scholars regarding several passages (source of the
Nile, location of Meroe, etc.). To further his work on the Egyptians
and Assyrians, Aubin uses Herodotus's accounts in various passages and
defends Herodotus's position. Aubin said that
Herodotus was "the
author of the first important narrative history of the world". Diop
provides several examples (the inundations of the Nile) which, he
argues, support his view that
Herodotus was "quite scrupulous,
objective, scientific for his time." Diop argues that Herodotus
"always distinguishes carefully between what he has seen and what he
has been told." Diop also notes that
Strabo corroborated Herodotus's
ideas about the Black Egyptians, Ethiopians, and Colchians.
Reconstruction of the
Oikoumene (inhabited world), ancient map based
on Herodotus, c. 450 BC
The reliability of Herodotus is sometimes questioned in connection
with his writing about Egypt. Alan B. Lloyd argues that, as a
historical document,
the writings of
Herodotus are seriously defective, and that he was
working from "inadequate sources". Nielsen writes: "Though we cannot
entirely rule out the possibility of
Herodotus having been in Egypt,
it must be said that his narrative bears little witness to it."
German historian Detlev Fehling questions whether
Herodotus ever traveled up the
Nile River, and considers doubtful almost everything
that he says about Egypt and Ethiopia. Fehling states that "there is
not the slightest bit of history behind the whole story" about the
claim of Herodotus that Pharaoh
Sesostris campaigned in Europe, and
that he left a colony in Colchis.
Gold dust and nuggets
Herodotus provides much information about the nature of the world and
the status of science during his lifetime, often engaging in private
speculation. For example, he reports that the annual flooding of the
Nile was said to be the result of melting snows far to the south, and
he comments that he cannot understand how there can be snow in Africa,
the hottest part of the known world, offering an elaborate explanation
based on the way that desert winds affect the passage of the Sun over
this part of the world (2:18ff). He also passes on reports from
Phoenician sailors that, while circumnavigating
Africa , they "saw the
sun on the right side while sailing westwards". Owing to this brief
mention, which is included almost as an afterthought, it has been
argued that Africa was indeed circumnavigated by ancient seafarers,
for this is precisely where the sun ought to have been. His accounts
of India are among the oldest records of Indian civilization by an
outside observer.
Discoveries made since the end of the 19th century have generally
added to Herodotus's credibility. He described
Gelonus , located in
Scythia , as a city thousands of times larger than
Troy ; this was
widely disbelieved until it was rediscovered in 1975. The
archaeological study of the now-submerged ancient Egyptian city of
Heracleion and the recovery of the so-called "Naucratis stela" give
credibility to Herodotus's previously unsupported claim that
Heracleion was founded during the Egyptian
New Kingdom.

_Croesus Receiving Tribute from a Lydian Peasant_
After journeys to India and Pakistan, French ethnologist Michel
Peissel claimed to have discovered an animal species that may
illuminate one of the most bizarre passages in Herodotus's Histories.
In Book 3, passages 102 to 105,
Herodotus reports that a species of
fox-sized, furry "ants" lives in one of the far eastern, Indian
provinces of the
Persian Empire . This region, he reports, is a sandy
desert, and the sand there contains a wealth of fine gold dust. These
giant ants, according to Herodotus, would often unearth the gold dust
when digging their mounds and tunnels, and the people living in this
province would then collect the precious dust. Peissel reports that,
in an isolated region of northern Pakistan on the Deosai Plateau in
Gilgit–Baltistan province, there is a species of marmot – the
Himalayan marmot , a type of burrowing squirrel – that may have been
what Herodotus called giant ants. The ground of the Deosai Plateau is
rich in gold dust, much like the province that Herodotus described.
According to Peissel, he interviewed the Minaro tribal people who live
in the Deosai Plateau, and they have confirmed that they have, for
generations, been collecting the gold dust that the marmots bring to
the surface when they are digging their underground burrows. Later
authors such as
Pliny the Elder mentioned this story in the gold
mining section of his _Naturalis Historia_.

The Himalayan marmot
Peissel offers the theory that
Herodotus may have confused the old
Persian word for "marmot" with the word for "mountain ant". Research
suggests that Herodotus probably did not know any Persian (or any
other language except his native Greek) and was forced to rely on many
local translators when travelling in the vast multilingual Persian
Empire. Herodotus did not claim to have personally seen the creatures
which he described.
Herodotus did, though, follow up in passage 105
of Book 3 with the claim that the "ants" are said to chase and devour
full-grown camels.
ACCUSATIONS OF BIAS
Some "calumnious fictions" were written about
Herodotus in a work, _On the Malice of Herodotus_, by
Plutarch, a Chaeronean by
birth (or it might have been a Pseudo-Plutarch, in this case "a
great collector of slanders"), including the allegation that the
historian was prejudiced against Thebes because the authorities there
had denied him permission to set up a school. Similarly,
Dio Chrysostom (or yet another pseudonymous
author) accused the historian of prejudice against Corinth, sourcing
it in personal bitterness over financial disappointments – an
account also given by Marcellinus in his _Life of Thucydides_.
Herodotus was in the habit of seeking out information from
empowered sources within communities, such as aristocrats and priests,
and this also occurred at an international level, with Periclean
Athens becoming his principal source of information about events in
Greece. As a result, his reports about Greek events are often coloured
by Athenian bias against rival states – Thebes and Corinth in
particular.
_The Histories_ were sometimes criticized in antiquity, but modern
historians and philosophers take a more positive view of Herodotus's
methodology, especially those searching for a paradigm of objective
historical writing. A few modern scholars have argued that Herodotus
exaggerated the extent of his travels and invented his sources, yet
his reputation continues largely intact.
Herodotus is variously
considered "father of comparative anthropology", "the father of
ethnography", and "more modern than any other ancient historian in
his approach to the ideal of total history".
HERODOTUS'S USE OF SOURCES AND SENSE OF AUTHORITY
It is clear from the beginning of Book 1 of the _Histories_ that
Herodotus utilizes (or at least claims to utilize) various sources in
his narrative. K.H. Waters relates that "Herodotos did not work from a
purely Hellenic standpoint; indeed, he was accused by the patriotic
but somewhat imperceptive
Plutarch of being _philobarbaros_, a
pro-barbarian or pro-foreigner."
Herodotus will at times relate various accounts of the same story.
For example, in Book 1 he mentions both the Phoenician and the Persian
accounts of Io. However,
Herodotus will at times arbitrate between
varying accounts: "I am not going to say that these events happened
one way or the other. Rather, I will point out the man _who I know for
a fact_ began the wrong-doing against the Greeks." Again, later,
Herodotus claims himself as an authority: "I know this is how it
happened because I heard it from the Delphians myself."
Throughout his work,
Herodotus attempts to explain the actions of
people. Speaking about Solon the Athenian, Herodotus says that he
"sailed away on the pretext of seeing the world, _but it was really
so that he could not be compelled to repeal any of the laws he had
laid down_." Again, in the story about
Croesus and his son's death,
when speaking of Adrastus (the man who accidentally killed Croesus'
son), Herodotus states: "Adrastus ... _believing himself to be the
most ill-fated man he had ever known_, cut his own throat over the
grave." Although Herodotus had not met these people whom he is discussing, he
claims to understand their thoughts and intentions.
MODE OF EXPLANATION
Herodotus writes with the purpose of _explaining_; that is, he
discusses the reason for, or cause of, an event. He lays this out in
the proem: "This is the publication of the research of Herodotus of
Halicarnassus, so that the actions of people shall not fade with time,
so that the great and admirable achievements of both Greeks and
barbarians shall not go unrenowned, and, among other things, _to set
forth the reasons why they waged war on each other_."
This mode of explanation traces itself all the way back to Homer,
who opened the _Iliad_ by asking:

_Which of the immortals set these two at each other's throats?
Apollo, Zeus' son and Leto's, offended by the warlord. Agamemnon had
dishonored Chryses, Apollo's priest, so the god struck the Greek camp
with plague, and the soldiers were dying of it._
Both Homer and Herodotus begin with a question of causality. In
Homer's case, "_who set these two at each other's throats?_" In
Herodotus' case, "_Why did the Greeks and barbarians go to war with
each other?_"

Herodotus' means of explanation does not necessarily posit a simple
cause; rather, his explanations cover a host of potential causes and
emotions. It is notable, however, that "the obligations of gratitude
and revenge are the fundamental human motives for Herodotus, just as
... they are the primary stimulus to the generation of narrative."
Some readers of
Herodotus believe that his habit of tying events back
to personal motives signifies an inability to see broader and more
abstract reasons for action. Gould argues to the contrary that this is
because Herodotus attempts to provide the rational reasons, as
understood by his contemporaries, rather than providing more abstract
reasons.
TYPES OF CAUSALITY
Herodotus attributes cause to both divine and human agents. These are
not perceived as mutually exclusive, but rather mutually
interconnected. This is true of Greek thinking in general, at least
from Homer onward. Gould notes that invoking the supernatural in
order to explain an event does not answer the question "why did this
happen?" but rather "why did this happen to me?" By way of example,
faulty craftsmanship is the human cause for a house collapsing.
However, divine will is the reason that the house collapses at the
particular moment when I am inside. It was the will of the gods that
the house collapsed while a particular individual was within it,
whereas it was the cause of man that the house had a weak structure
and was prone to falling.
Some authors, including Geoffrey de Ste Croix and Mabel Lang, have
argued that Fate, or the belief that "this is how it had to be," is
Herodotus' ultimate understanding of causality. Herodotus'
explanation that an event "was going to happen" maps well on to
Aristotelean and Homeric means of expression. The idea of "it was
going to happen" reveals a "tragic discovery" associated with
fifth-century drama. This tragic discovery can be seen in Homer's
_Iliad_ as well.
John Gould argues that
Herodotus should be understood as falling in a
long line of story-tellers, rather than thinking of his means of
explanation as a "philosophy of history" or "simple causality". Thus,
according to Gould, Herodotus' means of explanation is a mode of
story-telling and narration that has been passed down from generations
of story-tellers:
Herodotus' sense of what was 'going to happen' is not the language of
one who holds a theory of historical necessity, who sees the whole of
human experience as constrained by inevitability and without room for
human choice or human responsibility, diminished and belittled by
forces too large for comprehension or resistance; it is rather the
traditional language of a teller of tales whose tale is structured by
his awareness of the shape it must have and who presents human
experience on the model of the narrative patterns that are built into
his stories; the narrative impulse itself, the impulse towards
'closure' and the sense of an ending, is retrojected to become
HERODOTUS AND MYTH
Although Herodotus considered his "inquiries" a serious pursuit of
knowledge, he was not above relating entertaining tales derived from
the collective body of myth, but he did so judiciously with regard for
his historical method , by corroborating the stories through enquiry
and testing their probability. While the gods never make personal
appearances in his account of human events, Herodotus states
emphatically that "many things prove to me that the gods take part in
the affairs of man" (IX, 100).
In Book One, passages 23 and 24,
Herodotus relates the story of Arion
, the renowned harp player, "second to no man living at that time,"
who was saved by a dolphin.
Herodotus prefaces the story by noting
that "a very wonderful thing is said to have happened," and alleges
its veracity by adding that the "Corinthians and the Lesbians agree in
their account of the matter." Having become very rich while at the
court of Periander,
Arion conceived a desire to sail to Italy and
Sicily. He hired a vessel crewed by Corinthians, whom he felt he could
trust, but the sailors plotted to throw him overboard and seize his
wealth. Arion discovered the plot and begged for his life, but the
crew gave him two options: that either he kill himself on the spot or
jump ship and fend for himself in the sea.
Arion flung himself into
the water, and a dolphin carried him to shore.
Herodotus clearly writes as both historian and teller of tales.
Herodotus takes a fluid position between the artistic
story-telling of Homer and the rational data-accounting of later
historians. John Herrington has developed a helpful metaphor for
describing Herodotus' dynamic position in the history of Western art
and thought –
Herodotus as centaur:
The human forepart of the animal ... is the urbane and responsible
classical historian; the body indissolubly united to it is something
out of the faraway mountains, out of an older, freer and wilder realm
where our conventions have no force.
Herodotus is neither a mere gatherer of data nor a simple teller of
tales – he is both. While
Herodotus is certainly concerned with
giving accurate accounts of events, this does not preclude for him the
insertion of powerful mythological elements into his narrative,
elements which will aid him in expressing the truth of matters under
his study. Thus to understand what
Herodotus is doing in the
_Histories_, we must not impose strict demarcations between the man as
mythologist and the man as historian, or between the work as myth and
the work as history. As James Romm has written,
Herodotus worked under
a common ancient Greek cultural assumption that the way events are
remembered and retold (e.g. in myths or legends) produces a valid kind
of understanding, even when this retelling is not entirely factual.
For Herodotus, then, it takes both myth and history to produce
understanding.
SEE ALSO

* Historiography (the history of history and historians)
* Thucydides, ancient Greek historian who is also often said to be
"the father of history"
* Pliny the Elder, _Naturalis Historia_
* Battle of Thermopylae: Herodotus and other Sources
* The Herodotus Machine

EDITIONS
* C. Hude (ed.) _Herodoti Historiae. Tomvs prior: Libros I-IV
continens._ (Oxford 1908)
* C. Hude (ed.) _Herodoti Historiae. Tomvs alter: Libri V-IX
continens._ (Oxford 1908)
* H. B. Rosén (ed.) _Herodoti Historiae. Vol. I: Libros I-IV
continens._ (Leipzig 1987)
* H. B. Rosén (ed.) _Herodoti Historiae. Vol. II: Libros V-IX
continens indicibus criticis adiectis_ (Stuttgart 1997)
* N. G. Wilson (ed.) _Herodoti Historiae. Tomvs prior: Libros I-IV
continens._ (Oxford 2015)
* N. G. Wilson (ed.) _Herodoti Historiae. Tomvs alter: Libri V-IX
continens._ (Oxford 2015)
Several English translations of _The Histories of Herodotus_ are
readily available in multiple editions. The most readily available are
those translated by:
* George Rawlinson, translation 1858–1860. Public domain; many
editions available, although
Everyman Library and Wordsworth Classics
editions are the most common ones still in print.
* A. D. Godley, 1920; revised 1926. Reprinted 1931, 1946, 1960, 1966,
1975, 1981, 1990, 1996, 1999, 2004. Available in four volumes from
Loeb Classical Library ,
Harvard University Press . ISBN 0-674-99130-3
Printed with Greek on the left and English on the right:
A. D. Godley _
The Persian Wars : Volume I : Books
1–2_ (Cambridge, MA 1920)
A. D. Godley _
The Persian Wars : Volume II : Books
3–4_ (Cambridge, MA 1921)
A. D. Godley _
The Persian Wars : Volume III : Books
5–7_ (Cambridge, MA 1922)
A. D. Godley _
The Persian Wars : Volume IV : Books
8–9_ (Cambridge, MA 1925)
* Aubrey de Sélincourt, originally 1954; revised by John
Marincola in 1996. Several editions from
Penguin Books available.
* David Grene, Chicago: University of Chicago Press, 1985.
* Robin Waterfield, with an Introduction and Notes by Carolyn
Dewald, Oxford World Classics, 1997. ISBN 978-0-19-953566-8
* Strassler, Robert B., (ed.), and Purvis, Andrea L. (trans.), _The
Landmark Herodotus,_ Pantheon, 2007. ISBN 978-0-375-42109-9 with
adequate ancillary information.
* _The Histories of
Herodotus Interlinear English Translation_ by
Heinrich Stein (ed.) and George Macaulay (trans.), Lighthouse Digital
* Herodotus. _Herodotus: The Histories: The Complete Translation,
Backgrounds, Commentaries_. Translated by Walter Blanco. Edited by
Jennifer Tolbert Roberts. New York: W. W. Norton.
* ^ “In the scheme and plan of his work, in the arrangement and
order of its parts, in the tone and character of the thoughts, in ten
thousand little expressions and words, the Homeric student appears.”
* ^ See
Lucian of Samosata, who went as far as to deny him a place
among the famous on the Island of the Blessed in _Verae Historiae_.
* ^ Some regard his works as being at least partly unreliable.
Fehling writes of "a problem recognized by everybody", namely that
Herodotus frequently cannot be taken at face value.
* ^ Boedeker comments on Herodotus' use of literary devices.
* ^ For Detlev Fehling, the sources that Herodotus claims for many of
the stories he reports are simply not credible. Persian and
Egyptian informants tell stories to Herodotus that dovetail neatly
into Greek myths and literature, yet show no signs of knowing their
own traditions. For Fehling, the only credible explanation is that
Herodotus invented both these sources and the stories themselves.
* ^ Kenton L. Sparks writes, "In antiquity,
Herodotus had acquired
the reputation of being unreliable, biased, parsimonious in his praise
of heroes, and mendacious".
* ^ Cicero (_On the Laws_ I.5) said that the works of Herodotus
were full of legends or "fables".
* ^ Duris of Samos called Herodotus a myth-monger.
Harpocration wrote a book on "the lies of Herodotus".
* ^ Welsby said that "archaeology graphically confirms Herodotus' ...".
* ^ Herodotus claimed to have visited Babylon. The absence of any
mention of the Hanging Gardens of Babylon in his work has attracted
further attacks on his credibility. In response, Dalley has proposed
that the Hanging Gardens may have been in Nineveh rather than in
Babylon.
* ^ Fehling concludes that the works of
Herodotus are intended as
fiction. Boedeker concurs that much of the content of the works of
Herodotus are literary devices.
* ^ For example, they were criticized for inaccuracy by Lucian of
Samosata, who attacked Herodotus as a liar in _Verae Historiae_ and
went as far as to deny him a place among the famous on the Island of
the Blessed.
* ^ T. James Luce, _The Greek Historians_, 2002, p. 26.
* ^ _
New Oxford American Dictionary _, "Herodotos", Oxford
* ^ Burn (1972) , p. 23, citing Dionysius _On Thucydides_
* ^ Burn (1972) , p. 27
* ^ _A_ _B_ Murray (1986) , p. 188
* ^ Herodotus, _Histories _ 2.143, 6.137
* ^ _Preparation of the Gospel_, X, 3
* ^ Immerwahr (1985) , pp. 430, 440
* ^ Immerwahr (1985) , p. 431
* ^ Burn (1972) , pp. 22–23
* ^ Immerwahr (1985) , p. 430
* ^ Immerwahr (1985) , pp. 427, 432
* ^ Richard Jebb (ed), _Antigone_, Cambridge University Press,
1976, pp 181-182, n.904-920
* ^ Rawlinson (1859) , p. 6
* ^ Murray (1986) , pp. 190–191
* ^ _A_ _B_ _C_ Burn (1972) , p. 10
* ^ David Pipes. "Herodotus: Father of History, Father of Lies".
Retrieved 16 November 2009.
* ^ Rawlinson (1859)
* ^ Burn (1972) , p. 13
* ^ Lawrence A. Tritle. (2004). _The Peloponnesian War_. Greenwood
Publishing Group. pp 147-148
* ^ John Hart. (1982). _
Herodotus and Greek History_. Taylor and
Francis. p 174
* ^ _A_ _B_ Murray (1986) , p. 191
* ^ Waterfield, Robin (trans.) and Dewald, Carolyn (ed.). (1998).
_The Histories by Herodotus_. University of Oxford Press.
“Introduction”, p xviii
* ^ Richard C. Jebb, _The Genius of Sophocles_, section 7
* ^ Burn (1972) , p. 7
* ^ Rawlinson (1859) , p. 1
* ^ Rawlinson (1859) , Introduction
* ^ Burn (1972) , Introduction
* ^ Dandamaev, M. A. (1989). _A Political History of the Achaemenid
Empire_. BRILL. p. 153. ISBN 978-9004091726 . The ‘Father of
History’, Herodotus, was born at Halicarnassus, and before his
emigration to mainland Greece was a subject of the Persian empire.
* ^ Kia, Mehrdad (2016). _The Persian Empire: A Historical
Encyclopedia_. ABC-CLIO. p. 161. ISBN 978-1610693912 . At the time of
Herodotus’ birth southwestern Asia Minor, including Halicarnassus,
was under Persian Achaemenid rule.
* ^ Burn (1972) , p. 11
* ^ Rawlinson (1859) , p. 11
* ^ Eusebius _Chron. Can. Pars._ II p 339, 01.83.4, cited by
Rawlinson (1859), Introduction
* ^ Plutarch _De Malign. Herod._ II p 862 A, cited by Rawlinson
(1859), Introduction
* ^ _The Histories_. Introduction and Notes by John Marincola;
Trans. by Aubrey de Selincourt. Penguin Books. 2003. pp. xii.
* ^ Rawlinson (1859) , p. 14
* ^ Montfaucon’s _Bibliothec. Coisl. Cod._ clxxvii p 609, cited
by Rawlinson (1859) , p. 14
* ^ Photius _Bibliothec. Cod._ lx p 59, cited by Rawlinson (1859)
* ^ Tzetzes _Chil._ 1.19, cited by Rawlinson (1859), p. 15
* ^ Marcellinus, _in Vita. Thucyd._ p ix, cited by Rawlinson (1859)
, p. 25
* ^ Rawlinson (1859) , p. 25
* ^ _A_ _B_ Murray (1986) , p. 189
* ^ Mikalson (2003) , pp. 198–200
* ^ Fehling (1994) , p. 2
* ^ _A_ _B_ Jones (1996)
* ^ _A_ _B_ Boedeker (2000) , pp. 101–102
* ^ Saltzman (2010)
* ^ Archambault (2002) , p. 171
* ^ Farley (2010) , p. 21
* ^ _A_ _B_ Lloyd (1993) , p. 4
* ^ _A_ _B_ Nielsen (1997) , pp. 42–43
* ^ _A_ _B_ Baragwanath & de Bakker (2010) , p. 19
* ^ _A_ _B_ Fehling (1994) , p. 13
* ^ _A_ _B_ _C_ Marincola (2001) , p. 34
* ^ _A_ _B_ Dalley (2003)
* ^ _A_ _B_ Dalley (2013)
* ^ Fehling (1989) , pp. 4, 53–54
* ^ Roberts (2011) , p. 2
* ^ Marincola (2001) , p. 59
* ^ Cameron (2004) , p. 156
* ^ Sparks (1998) , p. 58
* ^ Asheri, Lloyd & Corcella (2007)
* ^ Welsby (1996) , p. 40
* ^ Heeren (1838) , pp. 13, 379, 422–424
* ^ Aubin (2002) , pp. 94–96, 100–102, 118–121, 141–144,
* ^ Diop (1981) , p. 1
* ^ Diop (1974) , p. 2
* ^ Fehling (1994) , pp. 4–6
* ^ "The Indian Empire", _The Imperial Gazetteer of India_, 1909, v.
2, p. 272.
* ^ "Was the Ramayana actually set in and around today's
* ^ _A_ _B_ Peissel (1984)
* ^ Marlise Simons (25 November 1996). "Himalayas offer clue to
legend of gold-digging 'ants'". _The New York Times_. Retrieved 23
* ^ Rawlinson (1859) , pp. 13–14
* ^ Dio Chrysostom, _Orat._ xxxvii, p. 11. Penelope.uchicago.edu.
Retrieved 13 September 2012.
* ^ Marcellinus, _Life of Thucydides_
* ^ Burn (1972) , pp. 8, 9, 32–34
* ^ Fehling (1989)
* ^ Waters (1985) , p. 3
* ^ Blanco (2013) , pp. 5–6, §1.1, 1.5
* ^ Blanco (2013) , p. 6, §1.5
* ^ Blanco (2013) , p. 9, §1.20
* ^ Blanco (2013) , p. 12, §1.29
* ^ Blanco (2013) , p. 17, §1.45, ¶2
* ^ Blanco (2013) , p. 5
* ^ Gould (1989) , p. 64
* ^ Homer, _Iliad_, trans. Stanley Lombardo (Indianapolis: Hackett
Publishing Company_,_ 1997): 1, Bk. 1, lines 9-16.
* ^ Gould (1989) , p. 65
* ^ Gould (1989) , p. 67
* ^ Gould (1989) , pp. 67–70
* ^ Gould (1989) , p. 71
* ^ Gould (1989) , pp. 72–73
* ^ Gould (1989) , pp. 75–76
* ^ Gould (1989) , pp. 76–78
* ^ Gould (1989) , pp. 77–78
* ^ Wardman (1960)
* ^ _Histories_ 1.23–24.
* ^ Romm (1998) , p. 8
* ^ Romm (1998) , p. 6
* Archambault, Paul (2002). "
Herodotus (c. 480–c. 420)". In Alba
della Fazia Amoia & Bettina Liebowitz Knapp. _Multicultural Writers
from Antiquity to 1945: a Bio-bibliographical Sourcebook_. Greenwood
Publishing Group. pp. 168–172. ISBN 9780313306877.
* Asheri, David; Lloyd, Alan; Corcella, Aldo (2007). _A Commentary
on Herodotus, Books 1-4_. Oxford University Press. ISBN
* Aubin, Henry (2002). _The Rescue of Jerusalem_. New York, NY: Soho
Press. ISBN 1-56947-275-0 .
* Baragwanath, Emily; de Bakker, Mathieu (2010). _Herodotus_. Oxford
Bibliographies Online Research Guide. Oxford University Press. ISBN
* Blanco, Walter (2013). _The Histories_. Herodotus. New York, NY:
W. W. Norton & Company. ISBN 978-0-393-93397-0 .
* Boedeker, Deborah (2000). "Herodotus' genre(s)". In Mary Depew &
Dirk Obbink. _Matrices of Genre: Authors, Canons, and Society_.
Harvard University Press. pp. 97–114. ISBN 9780674034204.
* Burn, A. R. (1972). _Herodotus: The Histories_.
Penguin Classics .
* Cameron, Alan (2004). _Greek Mythography in the Roman World_.
Oxford University Press. ISBN 978-0-19-803821-4 .
* Dalley, S. (2003). "Why did
Herodotus not mention the Hanging
Gardens of Babylon?". In P. Derow & R. Parker. _
Herodotus and his
World_. New York: Oxford University Press. pp. 171–189. ISBN
0-19-925374-9.
* Dalley, S. (2013). _The Mystery of the Hanging Garden of Babylon:
an Elusive World Wonder Traced_. Oxford University Press. ISBN
* Diop, Cheikh Anta (1974). _The African Origin of Civilization_.
Chicago, IL: Lawrence Hill Books. ISBN 1-55652-072-7 .
* Diop, Cheikh Anta (1981). _Civilization or Barbarism_. Chicago,
IL: Lawrence Hill Books. ISBN 1-55652-048-4 .
* Farley, David G. (2010). _Modernist Travel Writing: Intellectuals
Abroad_. Columbia, MO: University of Missouri Press. ISBN
* Fehling, Detlev (1989) . _Herodotos and His 'Sources': Citation,
Invention, and Narrative Art_. Arca Classical and Medieval Texts,
Papers and Monographs. 21. Translated from the German by J. G. Howie.
Leeds: Francis Cairns. ISBN 978-0-90520-570-0 .
* Fehling, Detlev (1994). "The art of
Herodotus and the margins of
the world". In Z. R. W. M. von Martels. _Travel Fact and Travel
Fiction: Studies on Fiction, Literary Tradition, Scholarly Discovery,
and Observation in Travel Writing_. Brill's studies in intellectual
history. 55. Leiden: Brill. pp. 1–15. ISBN 9789004101128 .
* Gould, John (1989). _Herodotus_. Historians on historians. London:
George Weidenfeld & Nicolson. ISBN 978-0-297-79339-7 .
* Heeren, A. H. L. (1838). _Historical Researches into the Politics,
Intercourse, and Trade of the Carthaginians, Ethiopians, and
Egyptians_. Oxford: D. A. Talboys. ASIN B003B3P1Y8 .
* Immerwahr, Henry R. (1985). "Herodotus". In P. E. Easterling & B.
M. W. Knox. _Greek Literature_. The Cambridge History of Classical
Greek Literature. 1. Cambridge University Press. ISBN 0-521-21042-9 .
* Jones, C. P. (1996). "ἔθνος and γένος in Herodotus".
_The Classical Quarterly_. New series. 46 (2): 315–320.
* Lloyd, Alan B. (1993). _Herodotus, Book II_. Études
préliminaires aux religions orientales dans l'Empire romain. 43.
Leiden: Brill. ISBN 978-90-04-07737-9 .
* Marincola, John (2001). _Greek Historians_. Oxford University
Press. ISBN 978-0-19-922501-9 .
* Mikalson, Jon D. (2003). _
Herodotus and Religion in the Persian
Wars_. Chapel Hill, NC: Univ of North Carolina Press. ISBN
* Murray, Oswyn (1986). "Greek historians". In John Boardman, Jasper
Griffin & Oswyn Murray. _The Oxford History of the Classical World_.
Oxford University Press. pp. 186–203. ISBN 978-0-19-872112-3.
* Nielsen, Flemming A. J. (1997). _The Tragedy in History: Herodotus
and the Deuteronomistic History_. A&C Black. ISBN 978-1-85075-688-0 .
* Peissel, Michel (1984). _The Ants' Gold: The Discovery of the
Greek El Dorado in the Himalayas_. Collins. ISBN 978-0-00-272514-9 .
* Rawlinson, George (1859). _The History of Herodotus_. 1. New York:
D. Appleton and Company.
* Roberts, Jennifer T. (2011). _Herodotus: a Very Short
Introduction_. Oxford University Press. ISBN 978-0-19-957599-2.
* Romm, James (1998). _Herodotus_. New Haven, CT: Yale University
Press. ISBN 0-300-07229-5 .
* Saltzman, Joe (2010). "Herodotus as an ancient journalist:
reimagining antiquity's historians as journalists". _The IJPC
Journal_. 2: 153–185.
* Sparks, Kenton L. (1998). _Ethnicity and Identity in Ancient
Israel: Prolegomena to the Study of Ethnic Sentiments and their
Expression in the Hebrew Bible_. Winona Lake, IN: Eisenbrauns. ISBN
* Wardman, A. E. (1960). "Myth in Greek historiography". _Historia:
Zeitschrift für Alte Geschichte _. 9 (4): 403–413.
JSTOR 4434671 .
* Waters, K. H. (1985). _Herodotos the Historian: His Problems,
Methods and Originality_. University of Oklahoma Press. ISBN
* Welsby, Derek (1996). _The Kingdom of Kush_. London: British
Museum Press. ISBN 0-7141-0986-X .
* Bakker, Egbert J. ; de Jong, Irene J.F.; van Wees, Hans, eds.
(2002). _Brill's companion to Herodotus_. Leiden: E.J. Brill. ISBN
* Baragwanath, Emily (2010). _Motivation and Narrative in
Herodotus_. Oxford Classical Monographs. Oxford University Press. ISBN
* De Selincourt, Aubrey (1962). _The World of Herodotus_. London:
Secker and Warburg.
* Dewald, Carolyn; Marincola, John, eds. (2006). _The Cambridge
companion to Herodotus_. Cambridge: Cambridge University Press. ISBN
* Evans, J.A.S. (2006). _The beginnings of history:
the Persian Wars_. Campbellville, Ont.: Edgar Kent. ISBN 0-88866-652-7
* Evans, J.A.S. (1982). _Herodotus_. Boston: Twayne. ISBN
* Evans, J.A.S. (1991). _Herodotus, explorer of the past: three
essays_. Princeton, NJ: Princeton University Press. ISBN 0-691-06871-2
* Flory, Stewart (1987). _The archaic smile of Herodotus_. Detroit:
Wayne State University Press. ISBN 0-8143-1827-4 .
* Fornara, Charles W. (1971). _Herodotus: An Interpretative Essay_.
Oxford: Clarendon Press.
* Giessen, Hans W. Giessen (2010). _Mythos Marathon. Von Herodot
über Bréal bis zur Gegenwart_. Landau: Verlag Empirische Pädagogik
(= Landauer Schriften zur Kommunikations- und Kulturwissenschaft. Band
17). ISBN 978-3-941320-46-8 .
* Gould, John (1989). _Herodotus_. New York: St. Martin's Press.
ISBN 0-312-02855-5 .
* Harrington, John W. (1973). _To see a world_. Saint Louis: G.V.
Mosby Co. ISBN 0-8016-2058-9 .
* Hartog, François (2000). "The Invention of History: The
Pre-History of a Concept from
Homer to Herodotus". _History and
Theory_. 39 (3): 384–395. doi :10.1111/0018-2656.00137 .
* Hartog, François (1988). _The mirror of Herodotus: the
representation of the other in the writing of history_. Janet Lloyd,
trans. Berkeley: University of California Press. ISBN 0-520-05487-3 .
* How, Walter W.; Wells, Joseph, eds. (1912). _A Commentary on
Herodotus_. Oxford: Clarendon Press.
* Hunter, Virginia (1982). _Past and process in
Thucydides_. Princeton, NJ: Princeton University Press. ISBN
* Immerwahr, H. (1966). _Form and Thought in Herodotus_. Cleveland:
Case Western Reserve University Press.
* Kapuściński, Ryszard (2007). _Travels with Herodotus_. Klara
Glowczewska, trans. New York: Knopf. ISBN 978-1-4000-4338-5 .
* Lateiner, Donald (1989). _The historical method of Herodotus_.
Toronto: Toronto University Press. ISBN 0-8020-5793-4 .
* Pitcher, Luke (2009). _Writing Ancient History: An Introduction to
Classical Historiography_. New York: I.B. Taurus & Co Ltd.
* Marozzi, Justin (2008). _The way of Herodotus: travels with the
man who invented history_. Cambridge, MA: Da Capo Press. ISBN
* Momigliano, Arnaldo (1990). _The classical foundations of modern
historiography_. Berkeley: Univ. of California Press. ISBN
* Myres, John L. (1971). _
Herodotus : father of history_. Chicago:
Henry Regnrey. ISBN 0-19-924021-3 .
* Pritchett, W. Kendrick (1993). _The liar school of Herodotus_.
Amsterdam: Gieben. ISBN 90-5063-088-X .
* Selden, Daniel (1999). "Cambyses' Madness, or the Reason of
History". _Materiali e discussioni per l'analisi dei testi classici_.
* Thomas, Rosalind (2000). _
Herodotus in context: ethnography,
science and the art of persuasion_. Cambridge: Cambridge University
Press. ISBN 0-521-66259-1 .
* Waters, K.H. (1985). _
Herodotus the Historian: His Problems,
Methods and Originality_. Beckenham: Croom Helm Ltd.
Wikimedia Commons has media related to HERODOTUS _.
Wikiquote has quotations related to: HERODOTUS _
Wikisource has original works written by or about:
Wikisource has original
|
<urn:uuid:3bb3d28c-4528-4a55-a4fd-f36234425d5b>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814787.54/warc/CC-MAIN-20180223134825-20180223154825-00015.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.863707959651947,
"score": 3.578125,
"token_count": 14479,
"url": "http://listmoto.com/php/SummaryGet.php?FindGo=Herodotus"
}
|
Early life and career of Abraham Lincoln
Abraham Lincoln was born on February 12, 1809, in a one-room log cabin at Sinking Spring farm, south of Hodgenville, in Hardin County, Kentucky. His siblings were Sarah Lincoln Grigsby and Thomas Lincoln, Jr. After a land title dispute forced the family to leave, they relocated to Knob Creek farm, eight miles to the north. By 1814 Thomas Lincoln, Abraham's father, had lost most of his land in Kentucky in legal disputes over land titles. In 1816 Thomas and Nancy Lincoln, their nine-year-old daughter, Sarah, and seven-year-old Abraham moved to Indiana, where they settled in Hurricane Township, Perry County, Indiana. (Their land became part of Spencer County, Indiana, when it was formed in 1818.)
Abraham spent his formative years, from the age of seven to twenty-one, on the family farm in Southern Indiana. As was common on the frontier, Lincoln received a meager formal education, the aggregate of which may have been less than twelve months. However, Lincoln continued to learn on his own from life experiences and through reading and reciting what he had read or heard from others. In 1818, two years after their arrival in Indiana, nine-year-old Lincoln lost his birth mother, Nancy, who died after a brief illness. Thomas returned to Kentucky the following year and married Sarah Bush Johnston. Abraham's new stepmother and her three children joined the Lincoln family in Indiana in 1819. A second tragedy befell the family in 1828, when Abraham's sister, Sarah, died in childbirth.
In 1830 twenty-one-year-old Abraham joined his extended family in a move to Illinois. After helping his father establish a farm in Macon County, Illinois, Lincoln set out on his own. He worked as a boatman, store clerk, surveyor, and militia soldier before becoming a lawyer in Illinois. He was elected to the Illinois Legislature in 1834 and was reelected in 1836, 1838, and 1840. In 1842, Lincoln married Mary Todd; the couple had four sons. In addition to his law career, Lincoln continued his involvement in politics and was elected to the United States House of Representatives from Illinois in 1846. He was elected president of the United States in 1860.
Lincoln's first known ancestor in America was Samuel Lincoln, who migrated from England to Hingham, Massachusetts, in 1638. Samuel's son, Mordecai, remained in Massachusetts, but Samuel's grandson, who was also named Mordecai, began the family's western migration. John Lincoln, Samuel's great-grandson, continued the westward journey. Born in New Jersey, John moved to Pennsylvania, then brought his family to Virginia. John's son, Captain Abraham Lincoln, who earned that rank for his service in the Virginia militia, was the future president's paternal grandfather and namesake. Born in Pennsylvania, he moved with his father and other family members to Virginia's Shenandoah Valley around 1766. The family settled near Linville Creek, in Augusta County, now Rockingham County, Virginia. Captain Lincoln bought the Virginia property from his father in 1773.
Thomas Lincoln, the future president's father, was Captain Lincoln's son. Thomas was born in Virginia and moved west to Jefferson County, Kentucky, with his father, mother, and siblings in the 1780s, when he was about five years old. In 1786, at the age of forty-two, Captain Abraham was killed in an Indian ambush while working his field in Kentucky. Eight-year-old Thomas witnessed his father's murder and might have become a victim himself had his brother, Mordecai, not shot the attacker. After Captain Lincoln's death, Thomas's mother moved to Washington County, Kentucky, while Thomas worked at odd jobs in several Kentucky locations. Thomas also spent a year working in Tennessee before settling with members of his family in Hardin County, Kentucky, in the early 1800s.
The identity of Lincoln's maternal grandfather is unclear. In a conversation with William Herndon, Lincoln's law partner and one of his biographers, the president implied that his grandfather was "a Virginia planter or large farmer", but did not identify him. Lincoln felt that it was from this aristocratic grandfather that he had inherited "his power of analysis, his logic, his mental activity, his ambition, and all the qualities that distinguished him from the other members and descendants of the Hanks family." Lincoln's maternal grandmother, Lucy Shipley Hanks, migrated to Kentucky with her daughter, Nancy. The debate continues over whether Lincoln's mother, Nancy, was born out of wedlock. Lucy and Nancy resided with Lucy's older sister, Rachael Shipley Berry, and her husband, Richard Berry Sr., in Washington County, Kentucky. Nancy is believed to have remained with the Berry family after her mother's marriage to Henry Sparrow, which took place several years after the women arrived in Kentucky. The Berry home was about a mile and a half from the home of Thomas Lincoln's mother; the families were neighbors for seventeen years. It was during this time that Thomas met Nancy. Thomas Lincoln and Nancy Hanks were married on June 12, 1806, at the Beech Fork settlement in Washington County, Kentucky. The Lincolns moved to Elizabethtown, Kentucky, following their marriage.
Biographers have rejected numerous rumors about Lincoln's paternity. According to historian William E. Barton, one rumor, circulating by 1861 "in various forms in several sections of the South", held that Lincoln's biological father was Abraham Enloe, a resident of Rutherford County, North Carolina, who died in that same year. However, Barton dismissed the rumors as "false from beginning to end." Enloe publicly denied his connection to Lincoln, but is reported to have privately confirmed it. The Bostic Lincoln Center in Bostic, North Carolina, also claims that Abraham Lincoln was born in Rutherford County, North Carolina, and argues that Nancy Hanks had an illegitimate child while she was working for the Enloe family.
Rumors of Lincoln's ethnic and racial heritage were also circulated, especially after he entered national politics. Citing Chauncey Burr's Catechism, which references a "pamphlet by a western author adducing evidence", David J. Jacobson has suggested Lincoln was "part Negro", but the claim is unproven. Lincoln also received mail that called him "a negro" and a "mulatto".
Lincoln was described as "ungainly" and "gawky" as a youth. Tall for his age, Lincoln was strong and athletic as a teenager. He was a good wrestler, participated in jumping, throwing, and local footraces, and "was almost always victorious." His stepmother remarked that he cared little for clothing. Lincoln dressed as an ordinary boy from a poor, backwoods family, with a gap between his shoes, socks, and pants that often exposed six or more inches of his shin. His lack of interest in his personal attire continued as an adult. When Lincoln lived in New Salem, Illinois, he frequently appeared with a single suspender, and no vest or coat.
In 1831, the year after he left Indiana, Lincoln was described as six feet three or four inches tall, weighing 210 pounds, with a ruddy complexion. Later descriptions included Lincoln's dark hair and dark complexion, which were also evident in photographs taken during his tenure as president of the United States. William H. Herndon described Lincoln as having "very dark skin"; his cheeks as "leathery and saffron-colored"; a "sallow" complexion; and "his hair was dark, almost black". Lincoln described himself around 1838–39 as "black" and his complexion in 1859 as "dark". Lincoln's detractors also remarked on his appearance. For example, during the American Civil War the Charleston, South Carolina Mercury described him as having "the dirtiest complexion" and asked "Faugh! After him what white man would be President?"
Early years (1809–1831)
During his later years, Lincoln was reluctant to discuss his origins. He viewed himself as a self-made man, and may have also found it difficult to confront the untimely deaths of his mother and his sister. However, around the time of his nomination as a candidate for president of the United States, Lincoln provided two brief biographical sketches in response to inquiries, offering a glimpse of his youth in Kentucky and Indiana. One request for a campaign biography came from his friend and fellow Illinois Republican, Jesse W. Fell, in 1859; the other came from John Locke Scripps, a journalist for the Chicago Press and Tribune. In his response to Scripps, Lincoln summed up his early life in a quote from Thomas Gray's Elegy Written in a Country Churchyard, as "the short and simple annals of the poor." Additional details of Lincoln's early life appeared after his death in 1865, when William Herndon began collecting letters and interviews from Lincoln's friends, family, and acquaintances. Herndon published his collected materials in Herndon's Lincoln: The True Story of a Great Life (1889). Although Herndon's materials are often challenged, historian David Herbert Donald argues that they "have largely shaped current beliefs" about Lincoln's early life in Kentucky, Indiana, and his early days in Illinois.
Early life in Kentucky (1809–1816)
Thomas and Nancy Hanks Lincoln became the parents of three children during their years in Kentucky. Sarah was born on February 10, 1807; Abraham on February 12, 1809; and another son, Thomas, died in infancy.
In 1808, Thomas, Nancy, and their daughter, Sarah, moved from Elizabethtown to the Sinking Spring farm, on Nolen Creek, near Hodgen's Mill, in Hardin County, Kentucky. (The farm is part of the Abraham Lincoln Birthplace National Historic Site in present-day LaRue County, Kentucky.) Abraham was born at the farm in 1809. Due to a land title dispute, the family lived at the farm only two more years before they were forced to move. Thomas continued legal action in court, but lost the case in 1815. Kentucky's survey methods, which used a system of metes and bounds to identify and describe land descriptions, proved to be unreliable when the natural features of the land changed. This issue, compounded by confusion over previous land grants and purchase agreements, caused continual legal disputes over land ownership in Kentucky. In 1811, the family relocated to Knob Creek farm, now a part of the Abraham Lincoln Birthplace National Historic Site, eight miles to the north. Situated in a valley of the Rolling Fork River, it had some of the best farmland in the area. Lincoln's earliest recollections of his boyhood are from this farm. In 1815 a claimant in another land dispute sought to eject the Lincoln family from the Knob Creek farm.
Years later, after Lincoln became a national political figure, reporters and storytellers often exaggerated his family's poverty and the obscurity of his birth. Lincoln's family circumstances were not unusual for pioneer families at that time. Thomas Lincoln was a farmer, carpenter, and landowner in the Kentucky backcountry. He had purchased the Sinking Spring Farm, which comprised 348.5 acres, in December 1808 for $200, but lost his cash investment and the improvements he had made on the farm in a legal dispute over the land title. Thomas acquired title to the Knob Creek farm on 230 acres of land, but the family was forced to leave it after others claimed a prior title to the land. Of the 816.5 acres that Thomas held in Kentucky, he lost all but 200 acres in land title disputes. Frustrated over the lack of security provided by Kentucky courts, Thomas sold the remaining land he held in Kentucky in 1814 and began planning a move to Indiana, where the land survey process was more reliable and the ability for an individual to retain land titles was more secure.
In 1860 Lincoln stated that the family's move to Indiana in 1816 was "partly on account of slavery; but chiefly on account of the difficulty in land titles in Kentucky." Historians support Lincoln's assertion that the two major reasons for the family's migration to Indiana were most likely the difficulty of securing land titles in Kentucky and the issue of slavery. In the Indiana Territory, once a part of the Old Northwest Territory, the federal government owned the territorial land, which had been surveyed into sections to make it easier to describe in land claims. As a result, the survey method used in Indiana caused fewer ownership problems and helped Indiana attract new settlers. In addition, when Indiana became a state in 1816, the state constitution prohibited slavery as well as involuntary servitude. Although slaves with earlier indentures still resided within the state, illegal slavery ended within the first decade of statehood.
Early religious beliefs
Lincoln never joined a religious congregation; however, his father, mother, sister, and stepmother were all Baptists. Abraham's parents, Thomas and Nancy Lincoln, belonged to Little Mount Baptist Church, a Baptist congregation in Kentucky that had split from a larger church in 1808 because its members refused to support slavery. Through their membership in this anti-slavery church, Thomas and Nancy exposed Abraham and Sarah to anti-slavery sentiment at a very young age. After settling in Indiana, Lincoln's parents continued their Baptist church membership, joining the Little Pigeon Baptist Church in 1823. When the Lincoln family left Indiana for Illinois in 1830, Thomas and his second wife, Sally, were members in good standing at the Little Pigeon Baptist Church.
Sally Lincoln recalled in 1865 that her stepson, Abraham, "had no particular religion" and did not talk about it much. She also remembered that he often read the Bible and occasionally attended church services. Matilda Johnston Hall Moore, Lincoln's stepsister, explained in an 1865 interview how Lincoln would read the Bible to his siblings and join them in singing hymns after his parents had gone to church. Other family members and friends who knew Lincoln during his youth in Indiana recalled that he would often get up on a stump, gather children, friends, and coworkers around him, and repeat a sermon he had heard the previous week to the amusement of the locals, especially the children.
Indiana years (1816–1830)
Lincoln spent fourteen of his formative years, or roughly one-quarter of his life, from the age of seven to twenty-one in Indiana. In 1816, Thomas and Nancy Lincoln, their nine-year-old daughter, Sarah, and seven-year-old Abraham moved to Indiana. They settled on land in an "unbroken forest" in Hurricane Township, Perry County, Indiana. The Lincoln property lay on land ceded to the United States government as part of treaties with the Piankeshaw and Delaware people in 1804. In 1818 the Indiana General Assembly created Spencer County, Indiana, from portions of Warrick and Perry counties, which included the Lincoln farm.
The move to Indiana had been planned for at least several months. Thomas visited Indiana Territory in 1816 to select a site and mark his claim, then returned to Kentucky and brought his family to Indiana sometime between November 11 and December 20, 1816, about the same time that Indiana became a state. However, Thomas Lincoln did not begin the formal process to purchase 160 acres of land until October 15, 1817, when he filed a claim at the land office in Vincennes, Indiana, for property identified as "the southwest quarter of Section 32, Township 4 South, Range 5 West".
More recent scholarship on Thomas Lincoln has revised previous characterizations of him as a "shiftless drifter". Documentary evidence suggests he was a typical pioneer farmer of his time. The move to Indiana established his family in a state that prohibited slavery, and they lived in an area that yielded timber to construct a cabin, adequate soil to grow crops that fed the family, and water access to markets along the Ohio River. Thomas owned horses and livestock, paid taxes, acquired farmland, served the county when necessary, and maintained his standing in the local Baptist church. Despite some financial challenges, which involved relinquishing some acreage to pay for debts or to purchase other land, he obtained clear title to 80 acres of land in Spencer County on June 5, 1827. By 1830, before the family moved to Illinois, Thomas had acquired twenty acres of land adjacent to his property.
Abraham, who became skilled with an axe, helped his father clear their Indiana land. Recalling his boyhood in Indiana, Lincoln remarked that from the time of his arrival in 1816, he "was almost constantly handling that most useful instrument." Once the land had been cleared, the family raised hogs and corn on their farm, which was typical for Indiana settlers at that time. Thomas Lincoln also continued to work as a cabinetmaker and carpenter. Within a year of the family's arrival in Indiana, Thomas had claimed title to 160 acres of Indiana land and paid $80, a quarter of its total purchase price of $320. The Lincolns and others, many of whom came from Kentucky, settled in what became known as the Little Pigeon Creek Community, about one hundred miles from the Lincoln farm at Knob Creek in Kentucky. By the time Abraham had reached age thirteen, nine families with forty-nine children under the age of seventeen were living within a mile of the Lincoln homestead.
Tragedy struck the family on October 5, 1818, when Nancy Lincoln died of milk sickness, an illness caused by drinking contaminated milk from cows that fed on Ageratina altissima (white snakeroot). Abraham was nine years old; his sister, Sarah, was eleven. After Nancy's death the household consisted of Thomas, aged forty; Sarah; Abraham; and Dennis Friend Hanks, an orphaned nineteen-year-old cousin of Nancy Lincoln. In 1819 Thomas left Sarah, Abraham, and Dennis Hanks at the farm in Indiana and returned to Kentucky. On December 2, 1819, Lincoln's father married Sarah "Sally" Bush Johnston, a widow with three children from Elizabethtown, Kentucky. Ten-year-old Abe quickly bonded with his new stepmother, who raised her two young stepchildren as her own. Describing her in 1860, Lincoln remarked that she was "a good and kind mother" to him.
Sally encouraged Lincoln's eagerness to learn and desire to read, and shared her own collection of books with him. Years later she compared Lincoln to her own son, John D. Johnston: "Both were good boys, but I must say — both now being dead that Abe was the best boy I ever saw or ever expect to see." In an interview with William Herndon following Lincoln's death in 1865, Sally Lincoln described her stepson as dutiful and kind, especially to animals and children, and cooperative and uncomplaining. She also remembered him as a "moderate" eater, who was not picky about what he ate, and enjoyed good health. In pioneer-era Indiana, where hunting and fishing were typical pursuits, neither Thomas nor Abraham appears to have enjoyed them. Abraham later admitted that he had shot and killed only a single wild turkey. Apparently, he opposed killing animals, even for food, but occasionally participated in bear hunts when the bears threatened settlers' farms and communities.
In 1828 another tragedy struck the Lincoln family. Lincoln's older sister, Sarah, who had married Aaron Grigsby on August 2, 1826, died in childbirth on January 20, 1828, when she was twenty-one years old. Little is known about Nancy Hanks Lincoln or Abraham's sister. Neighbors who were interviewed by William Herndon agreed that they were intelligent, but gave contradictory descriptions of their physical appearances. Lincoln spoke very little about either woman. Herndon had to rely on testimony from a cousin, Dennis Hanks, to get an adequate description of Sarah. Those who knew Lincoln as a teenager later recalled him being deeply distraught by his sister's death, and an active participant in a feud with the Grigsby family that erupted afterwards.
First trip to New Orleans (1828)
Possibly looking for a diversion from the sorrow of his sister's death, nineteen-year-old Abraham made a flatboat trip to New Orleans in the spring of 1828. Lincoln and Allen Gentry, the son of James Gentry, owner of a local store near the Lincoln family's homestead, began their trip along the Ohio River at Gentry's Landing, near Rockport, Indiana. En route to Louisiana, Lincoln and Gentry were attacked by several African American men who attempted to take their cargo, but the two successfully defended their boat and repelled their attackers. Upon their arrival in New Orleans, they sold their cargo, which was owned by Gentry's father, then explored the city. With its considerable slave presence and active slave market, it is probable that Lincoln witnessed a slave auction, and it may have left an indelible impression on him. (Congress outlawed the importation of slaves in 1808, but the slave trade continued to flourish within the United States.) How much of New Orleans Lincoln saw or experienced is open to speculation. Whether he actually witnessed a slave auction at that time, or on a later trip to New Orleans, his first visit to the Deep South exposed him to new experiences, including the cultural diversity of New Orleans and a return trip to Indiana aboard a steamboat.
Education

In 1858, when responding to a questionnaire sent to former members of Congress, Lincoln described his education as "defective". In 1860, shortly after his nomination for U.S. president, Lincoln again expressed regret over his limited formal education. Lincoln was self-educated. His formal schooling was intermittent, the aggregate of which may have amounted to less than twelve months. He never attended college, but Lincoln retained a lifelong interest in learning. In a September 1865 interview with William Herndon, Lincoln's stepmother described Abraham as a studious boy who read constantly, listened intently to others, and had a deep interest in learning. Lincoln continued reading as a means of self-improvement as an adult, studying English grammar in his early twenties and mastering Euclid after he became a member of Congress.
Dennis Hanks, a cousin of Lincoln's mother, Nancy, claimed he gave Lincoln "his first lesson in spelling—reading and writing" and boasted, "I taught Abe to write with a buzzards quill which I killed with a rifle and having made a pen—put Abes hand in mind [sic] and moving his fingers by my hand to give him the idea of how to write." Hanks, who was ten years older than Lincoln and "only marginally literate", may have helped Lincoln with his studies when he was very young, but Lincoln soon advanced beyond Hanks's abilities as a teacher.
Abraham, aged six, and his sister Sarah began their education in Kentucky, where they attended a subscription school about two miles north of their home on Knob Creek. Classes were held only a few months during the year. In 1816, when they arrived in Indiana, there were no schools in the area, so Abraham and his sister continued their studies at home until the first school at Little Pigeon Creek was established around 1819, "about a mile and a quarter south of the Lincoln farm." In the 1820s, educational opportunities for pioneer children, including Lincoln, were meager. The parents of school-aged children paid for the community's schools and their instructors. During Indiana's pioneer era, Lincoln's limited formal schooling was not unusual. Lincoln was taught by itinerant teachers at blab schools, schools for younger students whose instructors were paid by the students' parents. Because school resources were scarce, much of a child's education was informal and took place outside the confines of a classroom.
Family, neighbors, and schoolmates of Lincoln's youth recalled that he was an avid reader. Lincoln read Aesop's Fables, the Bible, The Pilgrim's Progress, Robinson Crusoe, and Parson Weems's The Life of Washington, as well as newspapers, hymnals, songbooks, and math and spelling books, among others. Later studies included Shakespeare's works, poetry, and British and American history. Although Lincoln was unusually tall (6 feet 3.75 inches, or 1.92 m) and strong, he spent so much time reading that some neighbors thought he was lazy for all his "reading, scribbling, writing, ciphering, writing Poetry, etc." and must have done it to avoid strenuous manual labor. His stepmother also acknowledged he did not enjoy "physical labor", but loved to read. "He (Lincoln) read so much—was so studious—too[k] so little physical exercise—was so laborious in his studies," that years later, when Lincoln lived in Illinois, Henry McHenry remembered, "that he became emaciated and his best friends were afraid that he would craze himself."
In addition to reading, Lincoln cultivated other skills and interests during his youth in Kentucky and Indiana. He developed a plain, backwoods style of speaking, which he practiced during his youth by telling stories and sermons to his family, schoolmates, and members of the local community. By the time he was twenty-one, Lincoln had become "an able and eloquent orator"; however, some historians have argued his speaking style, figures of speech, and vocabulary remained unrefined, even as he entered national politics.
Move to Illinois (1830)
In 1830, when Lincoln was twenty-one years of age, thirteen members of the extended Lincoln family moved to Illinois. Thomas, Sally, Abraham, and Sally's son, John D. Johnston, went as one family. Dennis Hanks and his wife Elizabeth, who was also Abraham's stepsister, and their four children joined the party. Hanks's half-brother, Squire Hall, along with his wife, Matilda Johnston, another of Lincoln's stepsisters, and their son formed the third family group. Historians disagree on who initiated the move, but it may have been Dennis Hanks rather than Thomas Lincoln. Thomas had no obvious reason to leave Indiana. He owned land and was a respected member of his community, but Hanks had not fared as well. In addition, John Hanks, one of Dennis's cousins, lived in Macon County, Illinois. Dennis later remarked that Sally refused to part with her daughter, Elizabeth, so Sally may have persuaded Thomas to move to Illinois.
The Lincoln-Hanks-Hall families departed Indiana in early March 1830. It is generally agreed they crossed the Wabash River at Vincennes, Indiana, into Illinois, and the family settled on a site in Macon County, Illinois, 10 miles (16 km) west of Decatur. Lincoln, who was twenty-one years old at the time, helped his father build a log cabin and fences, clear 10 acres (40,000 m²) of land, and put in a crop of corn. That autumn the entire family fell ill with a fever, but all survived. The early winter of 1831 was especially brutal, with many locals calling it the worst they had ever experienced. (In Illinois it was known as the "Winter of Deep Snow".) In the spring, as the Lincoln family prepared to move to a homestead in Coles County, Illinois, Abraham was ready to strike out on his own. Thomas and Sally moved to Coles County, and remained in Illinois for the rest of their lives.
Although Sally Lincoln and Dennis Hanks maintained that Thomas loved and supported his son, the father-son relationship became strained after the family moved to Illinois. Perhaps Thomas did not fully appreciate his son's ambition, while Abraham never knew of Thomas's early struggles. In 1851, Abraham refused to visit his dying father, and he never took his own sons to visit their grandparents. Historian Rodney O. Davis has argued that the strain in their relationship stemmed from Lincoln's success as a lawyer and his marriage to Mary Todd, who came from a wealthy, aristocratic family; the two men no longer related to each other's circumstances in life.
Another trip to New Orleans (1831)
Lincoln, along with John D. Johnston and John Hanks, accepted an offer from Denton Offutt to meet in Springfield, Illinois, and take a load of cargo to New Orleans in 1831. Departing from Springfield in late April or early May along the Sangamon River, their boat had difficulty getting past a mill dam 20 miles (32 km) northwest of Springfield, near the village of New Salem. Offutt, who was impressed by New Salem's location and believed that steamboats could navigate the river to the village, made arrangements to rent the mill and open a general store. Offutt hired Lincoln as his clerk, and the two men returned to New Salem after they discharged their cargo in New Orleans.
New Salem (1831–1837)
Lincoln settles in
When Lincoln returned to New Salem in late July 1831, he found a promising community, though it probably never had a population that exceeded a hundred residents. New Salem was a small commercial settlement that served several local communities. The village had a sawmill, grist mill, blacksmith shop, cooper's shop, wool carding shop, a hat maker, general store, and a tavern spread out over more than a dozen buildings. Offutt did not open his store until September, so Lincoln found temporary work in the interim and was quickly accepted by the townspeople as a hardworking and cooperative young man. Once Lincoln began working in the store, he met a rougher crowd of settlers and workers from the surrounding communities, who came into New Salem to purchase supplies or have their corn ground. Lincoln's humor, storytelling abilities, and physical strength fit the young, raucous element that included the so-called Clary's Grove boys, and his place among them was cemented after a wrestling match with a local champion, Jack Armstrong. Although Lincoln lost the fight with Armstrong, he earned the respect of the locals.
During his first winter in New Salem, Lincoln attended a meeting of the New Salem debating club. His performance in the club, along with his efficiency in managing the store, sawmill, and gristmill, and his other efforts at self-improvement, soon gained the attention of the town's leaders, such as Dr. John Allen, Mentor Graham, and James Rutledge. The men encouraged Lincoln to enter politics, feeling that he was capable of supporting the interests of their community. In March 1832 Lincoln announced his candidacy for the state legislature in a written statement that appeared in the Sangamo Journal, which was published in Springfield. While Lincoln admired Henry Clay and his American System, the national political climate was undergoing a change, and local Illinois issues were the primary political concerns of the election. Lincoln opposed the development of a local railroad project, but supported improvements in the Sangamon River that would increase its navigability. Although the two-party political system that pitted Democrats against Whigs had not yet formed, Lincoln would become one of the leading Whigs in the state legislature within the next few years.
By the spring of 1832, Offutt's business had failed and Lincoln was out of work. Around this time, the Black Hawk War erupted and Lincoln joined a group of volunteers from New Salem to repel Black Hawk, who was leading a group of 450 warriors along with 1,500 women and children to reclaim traditional tribal lands in Illinois. Lincoln was elected as captain of his unit, but he and his men never saw combat. Lincoln later commented in the late 1850s that the selection by his peers was "a success which gave me more pleasure than any I have had since." Lincoln returned to central Illinois after a few months of militia service to campaign in Sangamon County before the August 6 legislative election. When the votes were tallied, Lincoln finished eighth out of thirteen candidates. Only the top four candidates were elected, but Lincoln managed to secure 277 out of the 300 votes cast in the New Salem precinct.
Without a job, Lincoln and William F. Berry, a member of Lincoln's militia company during the Black Hawk War, purchased one of the three general stores in New Salem. The two men signed personal notes to purchase the business and a later acquisition of another store's inventory, but their enterprise failed. By 1833 New Salem was no longer a growing community; the Sangamon River proved to be inadequate for commercial transportation and no roads or railroads allowed easy access to other markets. In January, Berry applied for a liquor license, but the added revenue was not enough to save the business. With the closure of the Lincoln-Berry store, Lincoln was again unemployed and would soon have to leave New Salem. However, in May 1833, with the assistance of friends interested in keeping him in New Salem, Lincoln secured an appointment from President Andrew Jackson as the postmaster of New Salem, a position he kept for three years. During this time, Lincoln earned between $150 and $175 as postmaster, hardly enough to be considered a full-time source of income. Another friend helped Lincoln obtain an appointment as an assistant to county surveyor John Calhoun, a Democratic political appointee. Lincoln had no experience at surveying, but he relied on borrowed copies of two works and was able to teach himself the practical application of surveying techniques as well as the trigonometric basis of the process. His income proved sufficient to meet his day-to-day expenses, but the notes from his partnership with Berry were coming due.
Politics and the law
In 1834 Lincoln's decision to run for the state legislature for a second time was strongly influenced by his need to satisfy his debts, what he jokingly referred to as his "national debt", and the additional income that would come from a legislative salary. By this time Lincoln was a member of the Whig party. His campaign strategy excluded a discussion of the national issues and concentrated on traveling throughout the district and greeting voters. The district's leading Whig candidate was Springfield attorney John Todd Stuart, whom Lincoln knew from his militia service during the Black Hawk War. Local Democrats, who feared Stuart more than Lincoln, offered to withdraw two of their candidates from the field of thirteen, where only the top four vote-getters would be elected, to support Lincoln. Stuart, who was confident of his own victory, told Lincoln to go ahead and accept the Democrats' endorsement. On August 4 Lincoln polled 1,376 votes, the second highest number of votes in the race, and won one of the four seats in the election, as did Stuart. Lincoln was reelected to the state legislature in 1836, 1838, and 1840.
Stuart, a cousin of Lincoln's future wife, Mary Todd, was impressed with Lincoln and encouraged him to study law. Lincoln was probably familiar with courtrooms from an early age. While the family was still in Kentucky, his father was frequently involved with filing land deeds, serving on juries, and attending sheriff's sales, and later, Lincoln may have been aware of his father's legal issues. When the family moved to Indiana, Lincoln lived within 15 miles (24 km) of three county courthouses. Attracted by the opportunity of hearing a good oral presentation, Lincoln, as did many others on the frontier, attended court sessions as a spectator. The practice continued when he moved to New Salem. Noticing how often lawyers referred to them, Lincoln made a point of reading and studying the Revised Statutes of Indiana, the Declaration of Independence, and the United States Constitution.
Using books borrowed from the law firm of Stuart and Judge Thomas Drummond, Lincoln began to study law in earnest during the first half of 1835. Lincoln did not attend law school, and stated: "I studied with nobody." As part of his training, he read copies of Blackstone's Commentaries, Chitty's Pleadings, Greenleaf's Evidence, and Joseph Story's Equity Jurisprudence. In February 1836 Lincoln stopped working as a surveyor, and in March 1836 he took the first step to becoming a practicing attorney when he applied to the clerk of the Sangamon County Court to be registered as a man of good moral character. After passing an oral examination by a panel of practicing attorneys, Lincoln received his law license on September 9, 1836. In April 1837 he was enrolled to practice before the Supreme Court of Illinois, and moved to Springfield, where he went into partnership with Stuart.
Illinois Legislature (1834–1842)
Lincoln's first session in the Illinois legislature ran from December 1, 1834, to February 13, 1835. In preparation for the session Lincoln borrowed $200 from Coleman Smoot, one of the richest men in Sangamon County, and spent $60 of it on his first suit of clothes. As the second youngest legislator in this term, and one of thirty-six first-time attendees, Lincoln was primarily an observer, but his colleagues soon recognized his mastery of "the technical language of the law" and asked him to draft bills for them.
When Lincoln announced his bid for reelection in June 1836, he addressed the controversial issue of expanded suffrage. Democrats advocated universal suffrage for white males residing in the state for at least six months. They hoped to bring Irish immigrants, who were attracted to the state because of its canal projects, onto the voting rolls as Democrats. Lincoln supported the traditional Whig position that voting should be limited to property owners.
Lincoln was reelected on August 1, 1836, as the top vote getter in the Sangamon delegation. This delegation of two senators and seven representatives was nicknamed the "Long Nine" because all of them were above average height. Despite being the second youngest of the group, Lincoln was viewed as the group's leader and the floor leader of the Whig minority. The Long Nine's primary agenda was the relocation of the state capital from Vandalia to Springfield and a vigorous program of internal improvements for the state. Lincoln's influence within the legislature and within his party continued to grow with his reelection for two subsequent terms in 1838 and 1840. By the 1838–1839 legislative session, Lincoln served on at least fourteen committees and worked behind the scenes to manage the program of the Whig minority.
While serving as a state legislator, Lincoln was challenged to a duel by Illinois Auditor James Shields. Lincoln had published an inflammatory letter in the Sangamo Journal, a Springfield newspaper, that poked fun at Shields. Lincoln's future wife, Mary Todd, and a close friend of hers continued writing letters about Shields without Lincoln's knowledge. Shields took offense at the articles and demanded "satisfaction". Lincoln took responsibility for the articles and accepted the challenge, and the two parties met on Missouri's Sunflower Island, near Alton, Illinois, to fight the duel, which was illegal in Illinois. Lincoln chose cavalry broadswords as the duel's weapons because Shields was known as an excellent marksman. Just prior to engaging in combat, Lincoln demonstrated his physical advantage, his long reach, by easily cutting a branch above Shields's head. Their seconds intervened and convinced the men to cease hostilities on the grounds that Lincoln had not written the letters.
The Illinois governor called for a special legislative session during the winter of 1835–1836 in order to finance what became known as the Illinois and Michigan Canal, which connected the Illinois and Chicago rivers and linked Lake Michigan to the Mississippi River. The proposal would allow the state government to finance the construction with a $500,000 loan. Lincoln voted in favor of the commitment, which passed 28–27.
Lincoln had always supported Henry Clay's vision of the American System, which saw a prosperous America supported by a well-developed network of roads, canals, and, later, railroads. Lincoln favored raising the funds for these projects through the federal government's sale of public lands to eliminate interest expenses; otherwise, private capital should bear the cost alone. Fearing that Illinois would fall behind other states in economic development, Lincoln shifted his position to allow the state to provide the necessary support for private developers.
In the next session a newly elected legislator, Stephen A. Douglas, went even further and proposed a comprehensive $10 million state loan program, which Lincoln supported. However, the Panic of 1837 effectively destroyed the possibility of more internal improvements in Illinois. The state became "littered with unfinished roads and partially dug canals"; the value of state bonds fell; and interest on the state's debts was eight times its total revenue. The state government took forty years to pay off this debt.
Lincoln had a couple of ideas to salvage the internal improvements program. First, he proposed that the state buy public lands at a discount from the federal government and then sell them to new settlers at a profit, but the federal government rejected the idea. Next, he proposed a graduated land tax that would have passed more of the tax burden to the owners of the most valuable land, but the majority of the legislators were unwilling to commit any further state funds to internal improvement projects. The state's financial depression continued through 1839.
Selection of Springfield as the state capital
In the 1830s Illinois welcomed more immigrants, many from New York and New England, who tended to move into the northern and central parts of the state. Vandalia, which was located in the more stagnant southern section, seemed increasingly unsuitable as the state's seat of government. On the other hand, Springfield, in Sangamon County, was "strategically located in central Illinois" and was already growing "in population and refinement".
Those who opposed the relocation of the state government to Springfield first attempted to weaken the Sangamon County delegation's influence by dividing the county into two new counties, but Lincoln was instrumental in first amending and then killing this proposal in his own committee. Throughout the lengthy debate "Lincoln's political skills were repeatedly tested". He finally succeeded when the legislature accepted his proposal that the chosen city would be required to contribute $50,000 and 2 acres (8,100 m2) of land for construction of a new state capitol building—only Springfield could comfortably meet this financial demand. The final action was tabled twice, but Lincoln resurrected it by finding acceptable amendments to draw additional support, including one that would have allowed reconsideration in the next session. As other locations were voted down, Springfield was selected by a 46 to 37 vote margin on February 28, 1837. Under Lincoln's leadership reconsideration efforts were defeated in the 1838–1839 sessions. Orville Browning, who would later become a close Lincoln friend and confidant, guided the legislation through the Illinois Senate, and the move became effective in 1839.
Illinois State Bank
Lincoln, like Henry Clay, favored federal control over the nation's banking system, but President Jackson had effectively killed the Bank of the United States by 1835. That same year Lincoln crossed party lines to vote with pro-bank Democrats in chartering the Illinois State Bank. As he did in the internal improvements debates, Lincoln searched for the best available alternative. According to historian and Lincoln biographer Richard Carwardine, Lincoln felt:
A well-regulated bank would provide a sound, elastic currency, protecting the public against the extreme prescriptions of the hard-money men on one side and the paper inflationists on the other; it would be a safe depository for public funds and provide the credit mechanisms needed to sustain state improvements; it would bring an end to extortionate money-lending.
Opponents of the state bank initiated an investigation designed to close the bank in the 1836–1837 legislative session. On January 11, 1837, Lincoln made his first major legislative speech supporting the bank and attacking its opponents. He condemned "that lawless and mobocratic spirit ... which is already abroad in the land, and is spreading with rapid and fearful impetuosity, to the ultimate overthrow of every institution, or even moral principle, in which persons and property have hitherto found security." Blaming the opposition entirely on the political class, Lincoln called politicians "at least one long step removed from honest men," and commented:
I make the assertion boldly, and without fear of contradiction, that no man, who does not hold an office, or does not aspire to one, has ever found any fault of the Bank. It has doubled the prices of the products of their farms, and filled their pockets with a sound circulating medium, and they are all well pleased with its operations.
Westerners in the Jacksonian Era were generally skeptical of all banks, and this skepticism was aggravated after the Panic of 1837, when the Illinois Bank suspended specie payments. Lincoln still defended the bank, but it was too strongly linked to a failing credit system that led to devalued currency and loan foreclosures to generate much political support.
In 1839 Democrats led another investigation of the state bank, with Lincoln as a Whig representative on the investigating committee. Lincoln was instrumental in the committee's conclusion that the suspension of specie payment was related to uncontrollable economic conditions rather than "any organic defects of the institutions themselves." However, the legislation allowing the suspension of specie payments was set to expire at the end of December 1840, and Democrats wanted to adjourn without further extensions. In an attempt to deny a quorum on adjournment, Lincoln and several others jumped out of a first-story window, but the Speaker counted them as present and "the bank was killed." By 1841 Lincoln was less supportive of the state bank, although he would continue to make speeches around the state supporting it. He concluded, "If there was to be this continual warfare against the Institutions of the State ... the sooner it was brought to an end the better."
In the 1830s the slaveholding states began to take notice of the growth of antislavery rhetoric in the North. Their anger focused on abolitionists, whom they accused of fomenting slave revolts by distributing "incendiary pamphlets" among the slaves. When southern legislatures passed resolutions calling for suppression of abolitionist societies, they often received a favorable response from their northern counterparts. In January 1837 the Illinois legislature passed a resolution declaring that they "highly disapprove of the formation of abolition societies", that "the right of property in slaves is sacred to the slave-holding States by the Federal Government, and that they cannot be deprived of that right without their consent", and that "the General Government cannot abolish slavery in the District of Columbia, against the will of the citizens of said District." The vote in the Illinois Senate was 18 to 0, and 77 to 6 in the House, with Lincoln and Dan Stone, who was also from Sangamon County, voting in opposition. Because relocation of the state capital was still the number one issue on the two men's agendas, they made no comment on their votes until the relocation was approved.
On March 3, with his other legislative priorities behind him, Lincoln filed a formal written protest with the legislature that stated "the institution of slavery is founded on both injustice and bad policy." Lincoln criticized abolitionists on practical grounds, arguing that "the promulgation of abolition doctrines tends rather to increase than to abate its [slavery's] evils." He also addressed the issue of slavery in the nation's capital in a different manner from the resolutions, writing that "the Congress of the United States has the power, under the constitution, to abolish slavery in the District of Columbia; but that power ought not to be exercised unless at the request of the people of said District." Lincoln biographer Benjamin P. Thomas commented on the significance of Lincoln's action:
Thus, at the age of twenty-eight, Lincoln made public avowal of his dislike of slavery, basing his position on moral grounds when he characterized the institution as an injustice with evils, while conceding the sanctity of Southern rights. In 1860, in his autobiography, he stated that the protest "briefly defined his position on the slavery question; and so far as it goes, it was then the same that it is now."
Lincoln's Lyceum Address
Lincoln's address to the Young Men's Lyceum of Springfield, Illinois on January 27, 1838, was titled "The Perpetuation of Our Political Institutions". In this speech Lincoln described the dangers of slavery in the United States, an institution he believed would corrupt the federal government.
Partnerships with Stuart and Logan
In 1837, from the start of the law partnership with Stuart, Lincoln handled most of the firm's clients, while Stuart was primarily concerned with politics and election to the United States House of Representatives. The law practice had as many clients as it could handle. Most fees were five dollars, with the common fee ranging between two and a half dollars and ten dollars. Lincoln quickly realized that he was equal in ability and effectiveness to most other attorneys, whether they were self-taught like himself or had studied with a more experienced lawyer. Following Stuart's election to Congress in November 1839, Lincoln ran the practice on his own. Lincoln, like Stuart, considered his legal career simply a catalyst for his political ambitions.
By 1840 Lincoln was drawing $1,000 annually from the law practice, along with his salary as a legislator. However, when Stuart was reelected to Congress, Lincoln was no longer content to carry the entire load. In April 1841 he entered into a new partnership with Stephen T. Logan. Logan was nine years older than Lincoln, the leading attorney in Sangamon County, and a former attorney in Kentucky before he moved to Illinois. Logan saw Lincoln as a complement to his practice, recognizing that Lincoln's effectiveness with juries was superior to his own in that area. Once again, clients were plentiful for the firm, although Lincoln received one-third of the firm's proceeds rather than the even split he had enjoyed with Stuart.
Lincoln's association with Logan was a learning experience. He absorbed from Logan some of the finer points of law and the importance of proper and detailed case research and preparation. Logan's written pleadings were precise and on point, and Lincoln used them as his model. However, much of Lincoln's development was still self-taught. Historian David Herbert Donald wrote that Logan taught him that "there was more to law than common sense and simple equity" and Lincoln's study began to focus on "procedures and precedents." During this time Lincoln did not study law books, but he did spend "night after night in the Supreme Court Library, searching out precedents that applied to the cases he was working on." Lincoln stated, "I love to dig up the question by the roots and hold it up and dry it before the fires of the mind." His written briefs, especially important in Illinois Supreme Court cases, were prepared in great detail with precedents noted that often went back to the origins of English common law. Lincoln's growing skills became evident as his appearances before the Supreme Court increased and would serve him well in his political career. By the time he went to Washington in 1861 Lincoln had appeared over three hundred times before this court. Lincoln biographer Stephen B. Oates wrote, "It was here that he earned his reputation as a lawyer's lawyer, adept at meticulous preparation and cogent argument."
Lincoln and Herndon
Lincoln's partnership with Logan was dissolved in the fall of 1844, when Logan entered into a partnership with his son. Lincoln, who probably could have had his choice of more established attorneys, was tired of being the junior partner and entered into partnership with William Herndon, who had been reading law in the offices of Logan and Lincoln. Herndon, like Lincoln, was an active Whig, but the party in Illinois at that time was split into two factions. Lincoln was connected to the older, "silk stocking" element of the party through his marriage to Mary Todd; Herndon was one of the leaders of the younger, more populist portion of the party. The Lincoln-Herndon partnership continued through Lincoln's presidential election, and Lincoln remained a partner of record until his death.
Prior to his partnership with Herndon, Lincoln had not regularly attended court in neighboring communities. This changed as Lincoln became one of the most active regulars on the circuit through 1854, interrupted only by his two-year stint in Congress. The Eighth Circuit covered 11,000 square miles (28,000 km2). Each spring and fall Lincoln traveled the district for nine to ten weeks at a time, netting around $150 for each ten-week circuit. On the road, lawyers and judges lived in cheap hotels, with two lawyers to a bed and six or eight men to a room.
Lincoln's reputation for integrity and fairness on the circuit led to him being in high demand both from clients and local attorneys who needed assistance. It was during his time riding the circuit that he picked up one of his lasting nicknames, "Honest Abe". The clients he represented, the men he rode the circuit with, and the lawyers he met along the way became some of Lincoln's most loyal political supporters. One of these was David Davis, a fellow Whig who, like Lincoln, promoted nationalist economic programs and opposed slavery without actually becoming an abolitionist. Davis joined the circuit in 1848 as a judge and would occasionally appoint Lincoln to fill in for him. They traveled the circuit for eleven years, and Lincoln would eventually appoint him to the United States Supreme Court. Another close associate was Ward Hill Lamon, an attorney in Danville, Illinois. Lamon, the only local attorney with whom Lincoln had a formal working agreement, accompanied Lincoln to Washington in 1861.
Case load and income
Unlike other attorneys on the circuit, Lincoln did not supplement his income by engaging in real estate speculation or operating a business or a farm. His income was generally what he earned practicing law. In the 1840s this amounted to $1,500 to $2,500 a year, increasing to $3,000 in the early 1850s, and was $5,000 by the mid-1850s.
Criminal law made up the smallest portion of Lincoln and Herndon's case work. In 1850 the firm was involved in eighteen percent of the cases on the Sangamon County Circuit; by 1853 it had grown to thirty-three percent. On his return from his single term in the U.S. House of Representatives, Lincoln turned down an offer of a partnership in a Chicago law firm. Based strictly on the volume of cases, Lincoln was "undoubtedly one of the outstanding lawyers of central Illinois." Lincoln was also in demand in the federal courts. He received important retainers from cases in the United States Northern District Court in Chicago.
Lincoln was involved in at least two cases involving slavery. In an 1841 Illinois Supreme Court case, Bailey v. Cromwell, Lincoln successfully prevented the sale of a woman who was alleged to be a slave, making the argument that in Illinois "the presumption of law was ... that every person was free, without regard to color." In 1847 Abraham Lincoln defended Robert Matson, a slave owner who was trying to retrieve his runaway slaves. Matson brought slaves from his Kentucky plantation to work on land he owned in Illinois. The slaves were represented by Orlando Ficklin, Usher Linder, and Charles H. Constable. The slaves ran away because they believed that once they were in Illinois they were free, since the Northwest Ordinance forbade slavery in the territory that included Illinois. In this case, Lincoln invoked the right of transit, which allowed slaveholders to take their slaves temporarily into free territory. Lincoln also stressed that Matson did not intend to have the slaves remain permanently in Illinois. Even with these arguments, judges in Coles County ruled against Lincoln and the slaves were set free. Donald notes, "Neither the Matson case nor the Cromwell case should be taken as an indication of Lincoln's views on slavery; his business was law, not morality." The right of transit was a legal theory recognized by some of the free states that a slaveowner could take slaves into a free state and retain ownership as long as the intent was not to permanently settle in the free state.
Railroads became an important economic force in Illinois in the 1850s. As they expanded they created myriad legal issues regarding "charters and franchises; problems relating to right-of-way; problems concerning evaluation and taxation; problems relating to the duties of common carriers and the rights of passengers; problems concerning merger, consolidation, and receivership." Lincoln and other attorneys would soon find that railroad litigation was a major source of income. Like the slave cases, sometimes Lincoln would represent the railroads and sometimes he would represent their adversaries. He had no legal or political agenda that was reflected in his choice of clients. Herndon referred to Lincoln as "purely and entirely a case lawyer."
In one notable 1851 case, Lincoln represented the Alton and Sangamon Railroad in a dispute with James A. Barret, a shareholder. Barret refused to pay the balance on his pledge to the railroad on the grounds that it had changed its originally planned route. Lincoln argued that as a matter of law a corporation is not bound by its original charter when that charter can be amended in the public interest. Lincoln also argued that the newer route proposed by Alton and Sangamon was superior and less expensive, and accordingly, the corporation had a right to sue Barret for his delinquent payment. Lincoln won this case and the Illinois Supreme Court decision was eventually cited by other U.S. courts.
The most important civil case for Lincoln was the landmark Hurd v. Rock Island Bridge Company, also known as the Effie Afton case. America's expansion west, which Lincoln strongly supported, was seen as an economic threat to the river trade, which ran north-to-south, primarily along the Mississippi River. In 1856 a steamboat collided with a bridge built by the Rock Island Railroad between Rock Island, Illinois, and Davenport, Iowa. It was the first railroad bridge to span the Mississippi River. The steamboat owner sued for damages, claiming the bridge was a hazard to navigation, but Lincoln argued in court for the railroad and won, removing a costly impediment to western expansion by establishing the right of land routes to bridge waterways.
Possibly the most notable criminal trial of Lincoln's career as a lawyer came in 1858, when he defended the son of Lincoln's friend, Jack Armstrong. William "Duff" Armstrong had been charged with murder. The case became famous for Lincoln's use of judicial notice—a rare tactic at that time—to show that an eyewitness had lied on the stand. After the witness testified to having seen the crime by moonlight, Lincoln produced a Farmers' Almanac to show that the moon on that date was at such a low angle it could not have provided enough illumination to see anything clearly. Based almost entirely on this evidence, Armstrong was acquitted.
Lincoln was involved in more than 5,100 cases in Illinois alone during his 23-year legal career. Though many of these cases involved little more than filing a writ, others were more substantial and quite involved. Lincoln and his partners appeared before the Illinois State Supreme Court more than 400 times.
Lincoln the inventor
Abraham Lincoln is the only U.S. president to have been awarded a patent for an invention. As a young man, Lincoln took a boatload of merchandise down the Mississippi River from New Salem to New Orleans. At one point the boat slid onto a dam and was set free only after heroic efforts. In later years, while traveling on the Great Lakes, Lincoln's ship ran afoul of a sandbar. The resulting invention consists of a set of bellows attached to the hull of a ship just below the water line. On reaching a shallow place, the bellows are filled with air and the vessel, thus buoyed, is expected to float clear. The invention was never marketed, probably because the extra weight would have made the vessel more likely to run onto sandbars in the first place. Lincoln whittled the model for his patent application with his own hands. It is on display at the Smithsonian Institution National Museum of American History. Patent #6469 for "A Device for Buoying Vessels Over Shoals" was issued May 22, 1849.
In 1858 Lincoln called the introduction of patent laws one of the three most important developments "in the world's history." His words, "The patent system added the fuel of interest to the fire of genius," are inscribed over the US Commerce Department's north entrance.
Courtships, marriage, and family
Soon after he moved to New Salem, Lincoln met Ann Rutledge. Historians do not agree on the significance or nature of their relationship, but according to many she was his first and perhaps most passionate love. At first they were probably just close friends, but soon they had reached an understanding that they would be married as soon as Ann had completed her studies at the Female Academy in Jacksonville. Their plans were cut short in the summer of 1835, when what was probably typhoid fever hit New Salem. Ann died on August 25, 1835, and Lincoln went through a period of extreme melancholy that lasted for months.
In either 1833 or 1834, Lincoln had met Mary Owens, the sister of his friend Elizabeth Abell, when she was visiting from her home in Kentucky. In 1836, in a conversation with Elizabeth, Lincoln agreed to court Mary if she ever returned to New Salem. Mary returned in November 1836, and Lincoln courted her for a time, but they had second thoughts about their relationship. On August 16, 1837, Lincoln wrote Mary a letter from Springfield suggesting an end to the relationship. She never replied and the courtship was over.
In 1839 Mary Todd moved from her family's home in Lexington, Kentucky, to Springfield and the home of her eldest sister, Elizabeth Porter (née Todd) Edwards, and Elizabeth's husband, Ninian W. Edwards, son of Ninian Edwards. Mary was popular in the Springfield social scene, but soon was attracted to Lincoln. Sometime in 1840 the two became engaged. They initially set a January 1, 1841, wedding date, but mutually called it off. During the break in their courtship, Lincoln briefly courted Sarah Rickard, whom he had known since 1837. Lincoln proposed marriage to Sarah in 1841 but was rejected. Sarah later said that "his peculiar manner and his General deportment would not be likely to fascinate a young girl just entering the society world".
Lincoln still had conflicted feelings concerning Mary Todd. In August 1841 he visited Joshua Speed, his close friend and former roommate, who had moved to Louisville, Kentucky. Lincoln met Speed's fiancée while there, and after his return to Springfield, Speed and Lincoln corresponded over Speed's own doubts about marriage. Lincoln advised Speed and helped convince him to proceed with the marriage. In turn, Speed urged Lincoln to do the same. Lincoln resumed his courtship of Mary, and on November 4, 1842, they were married at the Edwards's home. In a letter written a few days after the wedding, Lincoln wrote, "Nothing new here except my marrying, which to me, is matter of profound wonder."
The couple had four sons. Robert Todd Lincoln was born in Springfield, Illinois, on August 1, 1843. He was their only child to survive into adulthood. Young Robert attended Phillips Exeter Academy and Harvard College. Robert died on July 26, 1926, in Manchester, Vermont. The other Lincoln sons were born in Springfield, Illinois, and died either during childhood or their teen years. Edward Baker Lincoln was born on March 10, 1846, and died on February 1, 1850, in Springfield. William Wallace Lincoln was born on December 21, 1850, and died on February 20, 1862, in Washington, D.C., during President Lincoln's first term. Thomas "Tad" Lincoln was born on April 4, 1853, and died on July 16, 1871, in Chicago, Illinois.
During the American Civil War, four of Mary Todd Lincoln's brothers fought for the Confederacy, with one wounded and another killed in action. Lieutenant David H. Todd, Mary's half-brother, served as commandant of the Libby Prison camp during the war.
State and national politics
Campaigning for Congress (1843)
In the winter of 1842–1843, with the strong encouragement of his wife, Lincoln decided to pursue election to the United States House of Representatives from the newly created Seventh Congressional District. His main rivals were his friends, Edward D. Baker and John J. Hardin. On February 14 Lincoln told a local Whig political leader, "if you should hear any one say that Lincoln don't want to go to Congress, I wish you as a personal friend of mine, would tell him you have reason to believe he is mistaken. The truth is, I would like to go very much."
At the end of February the Whigs met in Springfield, where Lincoln wrote the party platform "opposing direct federal taxes and endorsing a protective tariff, a national bank, distribution to the states of proceeds from federal land sales, and the convention system of choosing candidates." Baker and Lincoln campaigned vigorously throughout March, but Lincoln, believing that Baker had an insurmountable lead, withdrew when the Sangamon County convention was held on March 20. Lincoln was selected as a delegate to the district convention which met on May 1 in Pekin. Although Lincoln worked hard for Baker, Hardin was selected as the Whig candidate, winning by a single vote. Lincoln then initiated a resolution that endorsed Baker for the nomination in two years. The resolution passed, which seemed to set a precedent for a single term with rotation among the party's leaders, and suggested that Lincoln would be next in line after Baker.
Campaigning for Henry Clay (1844)
In 1844 Lincoln campaigned enthusiastically for Henry Clay, the Whig nominee for president and a personal hero of Lincoln. On the campaign trail Lincoln and the other Illinois Whigs emphasized tariff issues, while touting the economic success of the Tariff of 1842 that had been passed in Congress under Whig leadership. Part of the campaign pitted Lincoln in a series of debates against Democrat John Calhoun, a candidate for Congress. Campaigning in Illinois for most of 1844, Lincoln spoke out against the annexation of Texas (a potential slave territory), promoted national and state banks, and opposed a wave of nativism that would become a major political issue a decade later. On the last issue Lincoln declared that "the guarantee of the rights of conscience, as found in our Constitution, is most sacred and inviolable, and one that belongs no less to the Catholic, than to the Protestant; and that all attempts to abridge or interfere with these rights, either of Catholic or Protestant, directly or indirectly, have our decided disapprobation, and shall ever have our most effective opposition."
Clay's opponent, James K. Polk, carried Illinois and also won the presidency. In Illinois and elsewhere Polk's support for the acquisition of Texas and Oregon seemed to carry the day. Lincoln and many other Whigs blamed the free soil Liberty Party for dividing the vote in New York, which allowed Polk to carry that state and achieve the majority in the electoral college. In responding to an antislavery Whig, who equated voting for Clay, a slaveholder, as "do[ing] evil", Lincoln asked, "If the fruit of electing Mr. Clay would have been to prevent the extension of slavery, could the act of electing him have been evil?"
Campaigning for Congress (1846)
Hardin did not run for reelection in 1844; the Whig nomination, as previously agreed, went to Baker, who won election to the seat. Baker agreed not to run for reelection in 1846, but Hardin considered a run for his old seat. Much of the Seventh District was included within the judicial circuit that Lincoln rode, so beginning in September 1845, he began soliciting the support of Whig leaders and editors as he moved through the circuit. Lincoln emphasized that Hardin should be bound by the understanding reached at Pekin in 1843. The debate over what had actually been agreed on in 1843 became public and bitter. In the end Hardin withdrew and Lincoln secured the Whig nomination. The Democrats nominated Peter Cartwright, a circuit-riding Methodist preacher.
Lincoln campaigned throughout the district, where he was already well known. Although he was presented with $200 in campaign funds, Lincoln returned most of the money after the election. Speaking of his actual campaign expenses, Lincoln noted, "I made the canvass on my own horse; my entertainment, being at the houses of friends, cost me nothing; and my only outlay was seventy-five cents for a barrel of cider which some farm-hands insisted I should treat them to." There were few newspaper accounts of the election, but the major political issues were the annexation of Texas, which Lincoln opposed as an expansion of slavery; the Mexican War, on which Lincoln was noncommittal; and the Oregon border dispute with Great Britain, which Lincoln avoided.
Cartwright avoided joint appearances with Lincoln and initiated a "whispering campaign" that accused Lincoln of being an infidel and a religious skeptic. Lincoln responded by pointing out that the Illinois constitution had no religious qualifications for office. On July 31 he published a handbill that admitted he was not a member of a specific Christian church, but denied that he was an "open scoffer at Christianity" or had ever "denied the truth of the Scriptures." Cartwright's campaign was effective only in counties where Lincoln was not personally known. Lincoln won the election with 56 percent of the vote, topping the numbers of Hardin (53 percent) and Baker (52 percent) in their elections. Due to the timing of the elections, the Thirtieth Congress did not convene until December 1847.
House of Representatives (1847–1849)
A Whig and an admirer of party leader Henry Clay, Lincoln was elected to a term in the U.S. House of Representatives in 1846. As a freshman House member, he was not a particularly powerful or influential figure. He spoke out against the Mexican–American War, which he attributed to President Polk's desire for "military glory—that attractive rainbow, that rises in showers of blood." He also challenged the President's claims regarding the Texas boundary and offered Spot Resolutions demanding to know the "spot" on U.S. soil where blood was first spilt. In January 1848 Lincoln was among the eighty-two Whigs who defeated eighty-one Democrats in a procedural vote on an amendment to send a routine resolution back to committee with instructions to add the words "a war unnecessarily and unconstitutionally begun by the President of the United States." The amendment passed, but the bill never reemerged from committee and was never finally voted upon.
Lincoln later damaged his political reputation with a speech in which he declared, "God of Heaven has forgotten to defend the weak and innocent, and permitted the strong band of murderers and demons from hell to kill men, women, and children, and lay waste and pillage the land of the just." Two weeks later, President Polk sent a peace treaty to Congress. While no one in Washington paid attention to Lincoln, the Democrats orchestrated angry outbursts from across his district, where the war was popular and many had volunteered. In Morgan County, Illinois, resolutions were adopted in fervent support of the war and in wrathful denunciation of the "treasonable assaults of guerrillas at home; party demagogues; slanderers of the President; defenders of the butchery at the Alamo; traducers of the heroism at San Jacinto". Warned by his law partner, William Herndon, that the damage was mounting and irreparable, Lincoln decided not to run for reelection.
Campaigning for Zachary Taylor (1848)
In the 1848 presidential election, Lincoln supported war hero Zachary Taylor for the Whig nomination and for president in the general election. In abandoning Clay, Lincoln argued that Taylor was the only Whig who was electable. Lincoln attended the Whig National Convention in Philadelphia as a Taylor delegate. Following Taylor's successful nomination, Lincoln urged Taylor to run a campaign emphasizing his personal traits, while leaving the controversial issues to be resolved by Congress. While Congress was in session Lincoln spoke in favor of Taylor on the House floor, and when it adjourned in August, he remained in Washington to assist the Whig Executive Committee of Congress in the campaign. In September Lincoln made campaign speeches in Boston and other New England locations. Remembering the election of 1844, Lincoln addressed potential Free Soil voters by saying that the Whigs were equally opposed to slavery and the only issue was how they could most effectively vote against the expansion of slavery. Lincoln argued that a vote for the Free Soil candidate, former President Martin Van Buren, would divide the antislavery vote and give the election to the Democratic candidate, Lewis Cass.
With Taylor's victory, the incoming administration, perhaps remembering Lincoln's criticism of Taylor during the Mexican–American War, offered Lincoln only the governorship of remote Oregon Territory. Acceptance would end his career in the fast-growing state of Illinois, so he declined and returned to Springfield, Illinois, where he turned most of his energies to his law practice.
- Anastaplo, George (1999). Abraham Lincoln: A Constitutional Biography. Lanham: Rowman and Littlefield Publishers. ISBN 0-8476-9431-3.
- Bartelt, William E. (2008). There I Grew Up: Remembering Abraham Lincoln's Indiana Youth. Indianapolis: Indiana Historical Society Press. ISBN 978-0-87195-263-9.
- Burlingame, Michael (2008). Abraham Lincoln: A Life. I. Baltimore, MD: Johns Hopkins University Press. ISBN 978-0-8018-8993-6.
- Carwardine, Richard (2003). Lincoln: A Life of Purpose and Power. ISBN 1-4000-4456-1.
- Dirck, Brian (2007). Lincoln the Lawyer. Urbana: University of Illinois Press. ISBN 978-0-252-03181-6.
- Donald, David Herbert (1948). Lincoln's Herndon. New York: Alfred A. Knopf.
- Donald, David Herbert (1995). Lincoln. New York: Simon & Schuster. ISBN 0-684-80846-3.
- Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. New York: W. W. Norton & Company. ISBN 978-0-39306-618-0.
- Harris, William C. (2007). Lincoln's Rise to the Presidency. Lawrence, KS: University Press of Kansas. ISBN 978-0-7006-1520-9.
- Herndon, William Henry (1983). Herndon's Life of Lincoln: The History and Personal Recollections of Abraham Lincoln. New York: Da Capo Press. ISBN 0-306-80195-7.
- Madison, James H. (2014). Hoosiers: A New History of Indiana. Bloomington and Indianapolis: Indiana University Press and Indiana Historical Society Press. ISBN 978-0-253-01308-8.
- McKirdy, Charles Robert (2011). Lincoln Apostate: The Matson Slave Case. Univ. Press of Mississippi. ISBN 978-1-60-473987-9.
- Miller, William Lee (2002). Lincoln's Virtues: An Ethical Biography (Vintage Books ed.). New York: Random House/Vintage Books. ISBN 0-375-40158-X.
- Oates, Stephen B. (1994). With Malice Toward None: The Life of Abraham Lincoln. New York: HarperPerennial. ISBN 978-0-06092-471-3.
- Prokopowicz, Gerald J. (2008). Did Lincoln Own Slaves?. New York: Vintage Books. ISBN 978-0-307-27929-3.
- Thomas, Benjamin P. (1952). Abraham Lincoln: A Biography.
- Warren, Louis A. (1991). Lincoln's Youth: Indiana Years, Seven to Twenty-One, 1816–1830. Indianapolis: Indiana Historical Society. ISBN 0-87195-063-4.
Lincoln and the law
- Spiegel, Allen D. (2002). A. Lincoln Esquire, a Shrewd, Sophisticated Lawyer in his Time. Mercer University Press.
- Steiner, Mark E. (2006). An Honest Calling: The Law Practice of Abraham Lincoln. Northern Illinois University.
- Stowell, Daniel W., et al., eds. (2008). The Papers of Abraham Lincoln: Legal Documents and Cases, 4 vols. Charlottesville: University of Virginia Press.
- Stowell, Daniel W., ed. (2002). In Tender Consideration: Women, Families, and Law in Abraham Lincoln's America. University of Illinois Press.
Lincoln in Congress
- Riddle, Donald W. (1957). Congressman Abraham Lincoln. University of Illinois.
Wikimedia Commons has media related to Abraham Lincoln.
- The Law Practice of Abraham Lincoln, from the Papers of Abraham Lincoln, a freely accessible database that offers over 97,000 documents related to Lincoln's legal career
- The Law Practice of Abraham Lincoln: A Statistical Portrait
- Lincoln the Inventor
Presentation on theme: "Chapter 2 Ethics, 3rd Ed., 2003 Lawrence M. Hinman"— Presentation transcript:
1 Understanding the Diversity of Moral Beliefs: Relativism, Absolutism and Pluralism. Chapter 2, Ethics, 3rd Ed., 2003, Lawrence M. Hinman.
2 What is culture clash? What happens when two cultures clash? Examples of culture clash.
3 Three approaches
Ethical absolutists hold that there is a single standard in terms of which assessments can be made, and that standard is usually their own.
Ethical relativists see each culture as an island unto itself, right in its own world, and deny that there is any overarching standard in terms of which conflicting cultures can be judged.
Ethical pluralists acknowledge that cultures can legitimately pass judgment on one another, and encourage us to listen to what other cultures say about us as well as what we say about them.
4 Purpose of the chapter
To look at three possible responses to moral conflicts (relativism, absolutism, and pluralism), assess their respective merits, and see how they are applied in actual cases.
5 Two levels of moral conflict
Concrete conflicts:
- Clitoridectomies (female circumcision; female genital mutilation)
- Forced marriage of underage girls
What happens when something that is legally and morally permissible in one culture is illegal and immoral in another?
6 Ethical relativists—each culture is right unto itself, so such practices would be morally permissible in some countries and morally wrong in the US.
Ethical absolutists—there is a single moral truth in terms of which all cultures and individuals are to be judged.
Pluralists—try to find some middle ground (in some situations the practice may make sense); less judgmental.
7 These three ethical positions provide a rich context for understanding the variety of ethical theories that Hinman discusses.
Divine command—we ought to do whatever God wills. The issue is: whose God? Does God speak differently to each of us, or do we interpret the messages differently?
Egoism—each person should act selfishly to maximize self-interest.
8 Utilitarianism—we should act in such a way as to produce the greatest overall amount of pleasure or happiness.
Kantian ethics—act in ways that respect the autonomy and dignity of ourselves and other persons.
Rights theorists—contend that there is a certain universal moral minimum with which all people must comply.
9 Fundamental Intuitions of Ethical Pluralism
- Principle of understanding
- Principle of tolerance
- Principle of standing up against evil
- Principle of fallibility
10 How different cultures have different moral codes
Different cultures have different moral codes. What is right within one group may be abhorrent to another:
- Treatment of the dead
- Polygamy
- Sharing of wives among Eskimos
- Infanticide
11 Exercise: Provide 5 examples of differing moral codes, customs, or behaviors. Tell us what makes them unique, odd, or different.
12 What is our reaction to such "strange or different" customs?
- Label them as backward, uneducated, or natives
- Primitive
- Heathens
- Non-Christians
- Poke fun of them, discriminate, or harass
- Convert them to "our" custom or thought
13 Cultural Relativism
What are some examples of cultural relativism?
- Hasidic Jews in Postville
- Amish lifestyles
- Religious traditions—baptisms, communion services
- Celebrations—Santa Claus, Easter Bunny, Halloween
- Parties: when, where, and how
- Issues of privacy
- Housing style
- Personal hygiene
- Role of children
14 William Graham Sumner: "The right way is the way which the ancestors used and which has been handed down. The tradition is its own warrant. It is not held subject to verification by experience. The notion of right is in the folkways. …"
This line of thought has probably persuaded more people to be skeptical about ethics than any other single statement.
15 "If we assume that our ethical ideas will be shared by all people at all times, we are merely naïve."
Provide examples where ethical ideas in our society may have changed over the years.
16 Examples Where Ethical/Moral Ideas Have Changed Over the Years
Divorce, living together, mixed-race marriages, allowing same-sex marriages
Gambling, casinos, internet poker
Internet dating
Women in the workforce, women operating farm equipment, firewomen, truck drivers
Spanking/punishment of children
Acceptance of cremation for the dead
Competition vs. cooperation in farming: relying on a neighbor's help vs. outbidding the neighbor
Animal welfare, recognizing that animals have certain rights
Natural resource protection
Taming “mother nature” vs. “living with nature”
17 Cultural Relativism
“Different cultures have different moral codes” often is used as a key to understanding morality.
Proponents argue that there is no universal truth in ethics; there are only the various cultural codes and nothing more. The customs of different societies are all that exist.
Proponents would argue that customs cannot be judged as correct or incorrect.
Our own code of ethics has no special status; it is merely one among many.
18 Cultural Relativism
Challenges our ordinary belief in the objectivity and universality of moral truths.
It says, in effect, that there is no such thing as a universal truth in ethics; there are only cultural codes and nothing more.
Your own code of ethics offers nothing special; it is merely one among many.
19 Claims of Cultural Relativists
Different societies have different moral codes
The moral code of a society determines what is right within that society
There is no objective standard that can be used to judge one society's code as better than another's
The moral code of our society offers nothing special
There is no universal truth in ethics
It is arrogant to judge the conduct of other societies; we should adopt an attitude of tolerance
20 The Cultural Differences Argument
Cultural relativism is a theory about the nature of morality. At its heart is the form of its argument: relativists argue from facts about differences between cultural outlooks to conclusions about the status of morality.
21 For example:
Premise: Different cultures have different moral codes.
Conclusion: Therefore, there is no objective truth in morality. Right and wrong are only matters of opinion, and opinions vary from culture to culture.
This is the cultural differences argument.
22 The Unsoundness of the Cultural Differences Argument
The trouble is that the conclusion does not follow from the premise — that is, even if the premise is true, the conclusion might be false.
WHY? The premise concerns what people believe; some believe one way and others believe another. But the conclusion concerns what really is the case.
23 An example:
The Greeks believed it was wrong to eat the dead. The Callatians believed it was right. Does it follow, from the mere fact that they disagreed, that there is no objective truth in the matter?
No, it does not follow — there could still be an objective truth, and one or the other society was simply mistaken.
24 Another example:
Society “A” believes the world is flat and Society “B” believes the world to be round.
Just because these two societies disagree does not mean there is no objective truth about the shape of the earth.
We can verify that members of the flat earth society are simply mistaken, uneducated, or failed geography in middle school.
25 The Fatal Flaw of the Cultural Differences Argument
It attempts to derive a substantive conclusion about a subject from the mere fact that people disagree about it.
Caution: This is a simple point of logic. We are not necessarily stating that the conclusion is false; the point is that the conclusion does not follow from the premise.
26 The Consequences of Accepting Cultural Relativism
1. We could no longer say that the customs of other societies are morally inferior to our own. (This is one of the main points of Cultural Relativism.)
We would have to stop condemning other societies merely because they are different.
Tolerance toward slavery, anti-Semitism, hatred of ethnic groups or minorities, child pornography, the sex slave trade — if we took cultural relativism seriously, we would have to regard these behaviors as immune from criticism.
27 If We Accept Cultural Relativism
2. We could decide whether actions are right or wrong just by consulting the standards of our society.
In Colonial America slavery was acceptable, women were not allowed to vote or own property, primogeniture was practiced, etc., and therefore these things were right.
This position requires us to accept existing moral codes as proper and incapable of improvement.
28 If We Accept Cultural Relativism
3. The idea of moral progress is called into doubt.
Progress implies doing things better, but cultural relativism rejects making judgments about past eras.
Reform movements, such as extending rights to women and minorities, imply that modern society is better, which is a judgment cultural relativism makes impossible.
29 As a result, most thinkers reject the cultural relativism argument.
It makes sense to condemn some practices wherever they occur.
It makes sense to acknowledge that our society, while imperfect, has made moral progress.
Because Cultural Relativism implies these judgments make no sense, the argument goes, it cannot be right.
30 There Is Less Disagreement Than It Seems
There are differences across societies, but the differences are often overstated.
We need to explore not particular practices or values but the belief systems that lie behind the practices.
The differences often lie in the belief systems.
31 Sources of Customs
Beliefs — religious beliefs
Physical circumstances of the society
Just because customs differ, there may be less disagreement on basic values.
Example: Eskimo infanticide — “drastic measures are sometimes needed to ensure the family's survival.” The Eskimos' values are not all that different from our own; it is only that life forces upon them choices that we do not have to make.
32 What are some other customs that differ from our own?
Marriage vows
Church attendance
Neighboring
Role of women
Burial and funerals
Eating habits
Rites of passage of children becoming adults
33 Universal Values in Societies
Value of protecting the young
Truth telling
Prohibition of murder
“There are some moral rules that all societies must have in common, because those rules are necessary for society to exist.”
34 What other universal values or moral rules can you think of?
Prohibition against incest
Personal responsibility
The proper role of government is to take care of its citizens
Everyone should serve their country
Everyone should obey the law
35 Judging a Cultural Practice to Be Undesirable
Example: In 1996 a 17-year-old girl from Togo, a West African country, arrived in the US and asked for asylum to avoid “excision,” a practice referred to as “female circumcision” or “female genital mutilation.” According to the WHO, the practice is widespread in 26 African countries, and 2 million girls are excised each year.
Reaction in the New York Times encouraged the idea that excision was a barbaric practice and should be condemned.
36 Young girls often look forward to this because it marks acceptance into adulthood. It is an accepted practice in many villages.
Consequences of excision: painful; permanent loss of sexual pleasure; hemorrhage, tetanus, septicemia, death; chronic infections; hindered walking; chronic pain.
There are apparently no social benefits, and it is not a matter of religious belief.
37 Rationale for the Practice
Excised women are thought to be incapable of sexual pleasure and less likely to be promiscuous
Fewer unwanted pregnancies in unmarried women
Women will be more faithful to their husbands
Un-excised women are viewed as unclean and immature
The arguments for this practice claim that it benefits men, women, families, and children
38 Is Excision Harmful or Helpful?
A cultural relativist would conclude that excision has been practiced for centuries and that we should not intervene and change ancient ways.
39 We may ask whether a practice promotes or hinders the welfare of the people whose lives are affected by it. And as a corollary: is there an alternative set of social arrangements that would do a better job of promoting their welfare? If so, we may conclude that the existing practice is deficient.
40 Reluctance to Criticize
Many thoughtful people have been reluctant to criticize what many view as a barbaric practice because:
1. They are wary of interfering with the social customs of other peoples. (Europeans and Americans have been criticized for destroying other cultures, such as those of Native Americans.)
41 2. Acceptance of strange practices (tolerance) toward others.
3. Reluctance to criticize other societies; we do not want to express contempt.
42 Lessons From Cultural Relativism
Cultural relativism rests on an invalid argument, although it enjoys much appeal. It offers two important lessons.
1. It warns us about the dangers of assuming that our preferences are based upon some absolute rational standard. They are not. Many of our practices are merely particular to our society, and it is easy to forget this.
43 There are many matters that we tend to think of in terms of objective right or wrong that are really nothing more than social conventions.
44 Other examples of social conventions that we think of as “right” or “wrong”:
Women covering their breasts
Separate restrooms for men and women
Men opening the door for women
No shoes, no shirt, no service
Dear _________
Fathers giving their daughters away in wedding ceremonies
Wearing a wedding band on the fourth finger of the left hand
Swearing, drinking, gambling, etc.
45 2. Keep an open mind. Maybe our feelings about practices, values, and beliefs are merely social conventions — for example, attitudes toward homosexuality. Maybe our feelings are not necessarily perceptions of the truth; they may be nothing more than cultural conditioning.
46 There is a certain appeal to cultural relativism, but there are some major shortcomings to the theory. Many of the practices and attitudes that we think of as natural law are really only cultural products. We need to keep this in mind if we are to avoid being arrogant and are to keep open minds.
Attacks and Threats on the U.S. in WW2
Germany - Atomic Bomb
December 18, 1938: Otto Hahn splits the uranium atom, releasing energy. Although top officials were invited to an atomic weapons session, the agenda described the presentation as of a technical nature, and lower-level individuals were assigned to attend. Little interest developed. Heavy water was recognized as a requirement. The activities to destroy the only heavy-water facilities in Europe at that time, in Norway, are well documented: the commando raid of February 28, 1943, the bombing raid of November 16, 1943, and the sabotage sinking of the ferry in January 1944. However, Germany had pretty well given up on the bomb by mid-1943, although work continued at Haigerloch until the end.
German-planned Invasion of the United States
Before the winter of 1941, Germany appeared to be moving toward a swift victory over the Soviet Union. Alfred Rosenberg, Reich Kommissar for Eastern Affairs, was ordered to print the motto "Deutschland Welt Reich" (German World Empire), and Hitler made known his intention of further conquest following victory over Russia. These plans appeared to include an invasion of the United States.
In the autumn of 1940, the attack on the US was fixed for the long-term future. This appears in Luftwaffe documents, one of which, dated October 29, 1940, mentions the "...extraordinary interest of Mein Führer in the occupation of the Atlantic Islands. In line with this interest...with the cooperation of Spain is the seizure of Gibraltar and Spanish and Portuguese islands, along with other operations in the North Atlantic".
In July 1941, the Führer ordered that planning an attack against the United States be continued. Five months later, on December 11, 1941 Germany declared war on the United States.
Japan - January 1943 -- Atomic Bomb
Dr. Hideki Yukawa was awarded the Nobel Prize in physics in 1949 for his extensive work with the atom, begun in 1941.
An atomic bomb project was launched by Prime Minister Hideki Tojo in January, 1943. Former colonel Toranosuke Kawashina was in charge. Design considerations were promising. All chance of success was destroyed when a German submarine carrying two tons of uranium was surrendered as it approached Japan.
Although the Allied atomic bomb was developed in response to a threat from Germany, it was not completed until after VE day. It was used to avoid the expected 500,000 to one million US casualties from the invasion of the Japanese main islands against an army of almost three million men. Kamikaze boats and planes were being stockpiled. In addition, the public was being issued weapons. Two to five million Japanese casualties were anticipated. It can be argued the atomic bomb saved Japanese civilian and military lives, as well as US lives. The sudden end certainly saved the lives of thousands of POWs and slave labourers scheduled for execution upon invasion.
Atomic Bomb -- Allies
Many nations were engaged in atomic research. Radioactivity had been investigated by the Curies in France. Military uses were researched until the fall of France, when their laboratories, directed by the son-in-law of Curie, transferred 410 pounds of Norwegian heavy water to the British team on June 16, 1940.
British calculations showed, in 1941, that a very small amount of the fissionable isotope, uranium 235, could produce an explosion equivalent to that of several thousand tons of TNT.
The key US conference was held January 26, 1939, with increased research approved by FDR after consulting with others, including Einstein. The first research contracts were let in November 1940, with 15 more started within a year for work led by the Universities of Columbia, Chicago, and California. A feasible design was determined in June 1942, and a decision was made to transfer control to the Army. Col. James Marshall of the Corps of Engineers established the Manhattan Engineer District. Brigadier General Leslie Groves was assigned September 17, 1942 to start production of a bomb, and all research had been transferred by May 1943.
The effort was aided by delivery of 1,140 tons of Belgian Congo uranium ore, which had been shipped to Staten Island for safekeeping in October 1940. On December 2, 1942, a US team under Enrico Fermi activated the first atomic pile in a Chicago stadium. FDR and Churchill agreed to a joint US-UK atomic accord, established at the Quebec Conference, August 9, 1943, with the British effort led by the Englishman Chadwick.
Bomb production was centered at Los Alamos, NM (Oppenheimer), Oak Ridge, TN, and Hanford, WA. The first atomic bomb was successfully tested July 16, 1945. An ultimatum was given to Japan, timed to the first availability of a bomb, on July 31. Japan did not respond to the ultimatum, and the strike was delayed by weather until August 6, when the bomb was delivered on Hiroshima. Although the atomic bomb was a powerful and efficient weapon, the 66,000 people killed was in the progression of the numbers killed by conventional weapons: London, Pearl Harbor (2,403, December 7, 1941, 384 planes), Cologne, Hamburg (40,000, July 1943, 1,500 planes), and Dresden (135,000, February 13-14, 1945, 1,200 Allied planes) in Germany, and the raids that increasingly descended on Tokyo, starting with 97,000 killed on March 9, 1945 from 334 B-29s, and rained on other industrial cities. [About 55,000,000 people died in WW2. The overlapping Sino-Japanese War may have taken 50,000,000 lives.]
A third bomb was being shipped from New Mexico, target Tokyo, when the war ended. Production was geared to seven per month with an expectation that 50 bombs would be required to assure that an invasion would not be required. Release of radiation from the untested Hiroshima bomb, designed as the original gun-type and made of uranium, was a surprise. The radiation range was expected to be within the blast radius, that is, a lethal dose of radiation would only kill those already dead from concussion. The Alamogordo bomb test and later production were of the more complicated plutonium, implosion device.
Japan did not surrender to the escalation in bombings alone. The condition of Japan in mid-1945 was hopeless. The war had ended in Europe. Allied divisions, air forces, and fleets were being transferred to the Pacific. US industry and shipping of materiel were now devoted to the war with Japan. The islands were strangled; the fleet destroyed; the air force, the army, and industry had been mauled. Yet Japan gave no indication it was about to give up. It had an army of over 2.5 million men, many recalled from China, and ten thousand suicide planes and boats held in reserve. The civilian population was being armed, and a newly created armed militia numbering 25 million was taught how to use hand grenades. Teenaged girls were trained with sharpened poles to use as bayonets. The US troops' hopeful slogan was: "The Golden Gate in '48."
Negotiations had taken place with Russia, with whom Japan had a treaty throughout the war until this time. By agreement with the Allies in Europe, the USSR declared war on the Empire August 8, and Emperor Hirohito finally accepted "to bear the unbearable" on August 10. Japan capitulated on August 15. Dissident attacks continued in the following days against the US fleet off the Japanese coast and the AAF (August 18, B-32 photo reconnaissance flight of "Hobo Queen II", 1 killed; August 22, Japanese antiaircraft batteries near Hong Kong fired upon Navy patrol planes over the China coast) until the formal surrender September 2, 1945. Conflict continued for months in the case of some guerrillas isolated in the island campaigns.
Pearl Harbor, Dec 7, 1941
That the Japanese would attack was well known to the US government by November 1941. Japan had a tradition of surprise attack. The US had correctly identified the Japanese targets of British Singapore and the Dutch East Indies. It was assumed there would also be a sneak attack on the Philippines in support of the Japanese occupation of these and other areas of the western Pacific; this too proved true. An air attack on Pearl Harbor was regularly considered in war games, but the audacity of striking the USN fleet headquarters across 2,000 miles of the North Pacific was not seriously considered. At Pearl Harbor, attention was focused on getting aid to our outlying islands and to the Philippines. After Pearl Harbor, the Imperial Japanese Navy had ten battleships and ten carriers. The US had in the Pacific: 3 damaged battleships, 3 sunken, and 2 unsalvageable (Arizona and Oklahoma), and three carriers, Lexington and Enterprise at Pearl Harbor and Saratoga on the west coast.
The Pearl Harbor attack force returned to Hiroshima to rearm, December 23, 1941. The Japanese fleet was free to rampage, taking Pacific islands, occupying the East Indies (Jan-March 1942), and raiding Ceylon and India (April 5-9, 1942) and Darwin, Australia (April 20, 1942). It quickly annihilated the combined Dutch, British, Australian, and US surface ships in the western Pacific, starting with the sinking of the British presence, the battleship Prince of Wales and battle cruiser Repulse, December 10, 1941 off Malaya, and following with successes in the battles off Java, February 1942.
The Japanese goals of conquering the resources of Indochina, the East Indies, and the Pacific islands for defence had been achieved within the first six months.
Success was so easy that the protective ring was expanded until blocked at the Battle of the Coral Sea, May 7, 1942, with an exchange of aircraft carriers, and the Battle of Midway, June 5, 1942, where the Japanese fleet was seriously damaged by the loss of four fleet carriers. The ultimate Japanese war goal was to complete the conquest of China by capturing the resource-rich East Indies islands, Malaya, Java, et al. The attacks on the US, India, and Australia were to weaken reprisals and establish an aura of invincibility. After attaining her goal of suzerainty over the western Pacific, Japan planned to negotiate a peace from a position of strength over the intimidated Allies, already under pressure from the European conflict, while retaining her newly expanded Pacific empire and returning the Pacific invasion troops to continue her war to control China.
French Frigate Shoals -- Hawaii
December 1941 and February 1942: Pearl Harbor was observed from submarine-launched seaplanes on at least three occasions.
Three Kawanishi H8K2 "Emily" long-range flying boats attempted to bomb Pearl Harbor on March 5, 1942. The weather was bad and they dumped their bombs west of Honolulu, Oahu. The flying boats flew from Wotje in the Marshalls and refueled from submarines at French Frigate Shoals at the northwestern end of the Hawaiian chain. The seaplane tender Ballard (AVD-10), a converted destroyer, was sent to patrol the area until it was adequately mined.
After the Japanese attack on Pearl Harbor, 16,849 Americans of Japanese ancestry were relocated in ten specially built War Relocation Authority Camps in the USA. Most of these camps were located in California. Opened in March, 1942, all were closed by 1946 most internees being released well before the end of the war. In Latin America, around 2,000 Japanese were rounded up so the US would have prisoners to exchange with Japan. During their internment, 5,918 babies were born. A total of 2,355 internees joined the US armed forces and around 150 were killed in combat.
The 100th/442nd Regimental Combat Team was formed after its members petitioned Congress for the privilege of serving in the war. It became the most decorated unit in US military history, earning 21 Medals of Honor as well as 9,486 Purple Hearts. After the war, 4,724 US citizens of Japanese ancestry, angered by this terrible injustice, renounced their American citizenship and returned to Japan.
It is strange that in Hawaii, the ethnic Japanese, over 30% of the Hawaiian population, were not interned after Pearl Harbor.
Midway was shelled by two Japanese destroyers simultaneously with the attack on Pearl Harbor, Dec 7, 1941. Bad weather saved Midway from being pounded by planes of the retiring Japanese strike fleet.
Midway is the westernmost of the chain of volcanic islands that form the Hawaiian chain. The largest Japanese fleet ever assembled, 11 battleships, 8 carriers, and 100 ships in all, set out to attack the island in May 1942. The intent was to draw the American fleet into combat where it would be mauled. From intercepted messages, the US fleet knew to wait in ambush and destroyed four Japanese aircraft carriers, at the loss of the Yorktown. This battle changed the balance of sea power in the Pacific.
Guam, an American outpost in the Mariana Islands, was air-raided on Dec 7 by bombers from Saipan. Guam's defensive force of 365 Marines was captured on Dec 10, 1941 by a force of 5,400 Japanese from neighbouring Saipan.
Guam was recaptured in the battle for the Marianas (Saipan, Tinian) from July 21 - Aug 8, 1944.
Wake Island is about halfway between Hawaii and the Philippines. Bombing was simultaneous with Pearl Harbor. A Pan Am Philippine Clipper landing in Hawaii during the air strike was rerouted to an alternate site. It immediately returned to Wake to take off the Pan Am personnel. A construction crew of 1,200, mostly youths from Idaho, could not be evacuated.
The initial invasion of Wake Island on Dec 11 was fought off by 447 US Marines. One Japanese destroyer was sunk with artillery fire and another sunk by a Marine Wildcat, along with damage to a cruiser, a transport, and two more destroyers. Two Japanese aircraft carriers and heavy cruisers were dispatched from the departing Pearl Harbor task force and the island was taken by 2,000 Imperial marines on Dec 23, 1941.
The construction crew was shipped to Japan. Five men were beheaded to assure good behaviour on the trip.
Wake Island was bypassed by later events and was not restored to US control until the end of the war.
The Japanese struck Dutch Harbor at the base of the Aleutian Islands on June 3, 1942, with planes from two carriers, in support of an invasion and occupation of Attu (June 13), at the tip of the Aleutian chain, and Kiska (June 21), with 1,800 troops. The operation was partially a diversion to cover the attack on Midway, partly geo-political, and only partly military. The capture of Alaskan islands forced the US to establish a northern defence.
Having broken the Japanese military codes, however, the U.S. knew it was a diversion and did not expend large amounts of effort defending the islands. Although most of the civilian population had been moved to camps on the Alaska Panhandle, some Americans were captured and taken to Japan as prisoners of war.
US troops retook Attu in furious fighting, May 11-30, 1943.
Thirty-four thousand US and Canadian troops landed to retake Kiska on Aug 15, but found the island had been evacuated.
Both sides had discovered that bad weather prevented further major attacks on the other's mainland from a northern route.
In response to the United States' success at the Battle of Midway, the invasion alert for San Francisco was canceled on June 8.
Japanese Balloon Burn Bombs -- forest fires throughout the western United States
Taking advantage of the jet stream that circles the globe and crosses over both northern Japan and the northern United States, 9,000 balloons, each equipped with four incendiary bombs and one anti-personnel bomb, were released to start forest fires and create terror in the western United States, reaching as far east as Michigan. Six people were killed in Oregon. The project was called Fugo (windship) and headed by Major General Sueki Kusaba. Considering the massive damage from natural fires in the year 2000, this was a serious threat.
German Long Range Bomber -- New York City
The Ju 390 was a prototype high-altitude heavy bomber, reportedly flown in 1944 from Bordeaux, in occupied France, to New York City and back. It was developed from the Ju 90 four-engine bomber and the Ju 290. Larger than a B-29, the Ju 390 had six 1,700 hp engines and a 181.6 ft wingspan. Germany had other priorities than building a long-range strategic air force. However, a shock raid, such as Doolittle performed on Tokyo, could have happened to NYC.
Heinkel, He-177 “Griffin” with He-219A escort
One He177 was secretly being readied in Czechoslovakia to carry
Did the Germans possess any strategic bombers or aircraft capable of reaching the North American continent with a significant payload, and returning to Europe?
Rockets were tested until 1945 and fired at Britain from launch pads on the French coast. Researchers have found evidence that tests were carried out to fire rockets from submarines, while a chilling speech by Walter Dornberger, head of the rocket program, shows where the rockets were headed next. "The crowning of our work will be the American machine, a two-stage rocket which will cover the distance between Germany and the United States in around 30 minutes," Dornberger wrote in a speech for a visit by SS chief Heinrich Himmler. Allied intelligence knew that the Germans were working on a "New York Rocket." At least twenty of these large rockets were built at the SS underground base at Nordhausen. What happened to them is one of the enduring mysteries of World War II.
Japanese land based long-range bombers
The Japanese Navy ordered the construction of the Nakajima G10N1 "Fugaku" (Mount Fuji), an ultra-long-range heavy bomber, for bombing the United States mainland. The bomber's bomb-load capability was 20,000 kg for short-range sorties and 5,000 kg for sorties against targets in the U.S. Another project with a similar purpose was the four-engined bomber Nakajima G8N "Renzan" (Allied code name Rita).
The Japanese Army ordered the design of the Tachikawa Ki-74 "Patsy", an ultra-long-range reconnaissance bomber originally designed to be used against the Soviets in Siberia. Later, it was ordered developed for bombing missions against the United States. The bomb load was 500-1,000 kg. This bomber was also known as the "Japanese Siberian Bomber".
Kinoaki Matsuo, a high-ranking officer of the Black Dragon Society, wrote the book The Three Power Alliance and the United States-Japanese War, which is purported to detail the Japanese war plans for simultaneous invasions of the Panama Canal Zone, Alaska, California, and Washington.
Fascist Italy planned to damage dock facilities and sink ships moored in New York Harbor using Maiale midget submarines. In 1943, preparations were well underway to deploy these weapons against the United States.
The Regia Aeronautica (Italian Air Force), working in conjunction with the Regia Marina (Italian Navy), prepared two long-range Cantieri Zappata CANT Z.511 flying boats for the operation. The CANT Z.511 was powered by four 1,500 hp Piaggio P.XII RC 35 radial engines giving it a maximum range of 2,796 miles. This seaplane also had extremely good stability in waters with up to 7-foot waves. It could carry two or four Maiales.
The operation was to commence as follows: CANTs flying the Atlantic would fly low under enemy radar to a point from which the midget submarines could be launched. The crews of the submarines were special volunteers, who after completing their mission, were authorized to surrender. No plans were made for returning them to the seaplanes.
By May 1943, cooperation with supply U-boats had been obtained. The CANTs had been successfully tested with Maiale man-guided torpedoes, and special volunteers for one-way missions had been found. The raid was scheduled to take place under ideal weather conditions in mid-June of the same year. However, only three weeks before, both the seaplanes and their specially fitted launch racks were partially damaged by British fighters when the CANTs' base on Lake Trasimeno was strafed. The following July Mussolini fell, Marshal Pietro Badoglio's government moved toward an armistice, and the project was abandoned. The planned attack against New York might have scored a success paralleling the Italian attack at Alexandria, Egypt during the Axis Powers' North African campaign.
Japanese heavy seaplane bombing raids
Vice Admiral Kazume Kinsei, a former UCLA student and the brother of a famous Japanese aero engine designer, ordered the construction of the Kawanishi H8K "Emily" flying boat. These seaplanes had an operational range of 4,443 miles, were equipped with four 1,850 hp 14-cylinder engines, had a top speed of 289 mph, and could climb to 27,740 feet. Using the 92-foot-long seaplanes with their 124-foot wingspan, Kinsei drew up plans for a concentrated air attack on the American mainland, to be launched from Wotje Atoll (Marshall Islands, South Pacific Mandate), about 2,300 miles west of Pearl Harbor. When asked why he was interested in the seaplanes, Kinsei responded, "To bomb America!"
He wanted six of the flying boats, equipped with 26,445 pounds of high explosives, to rendezvous with three submarine tankers 50 miles off the southern coast of California. Once refueled, they would take off at dawn to fly to downtown Los Angeles and drop their bombs. Then the seaplanes would fly 4,000 miles west to a second refueling from I-Boats near Japanese-controlled waters.
The plan was evaluated by Admiral Chuichi Nagumo. A trial operation against the Hawaiian Islands using a trio of H8Ks caused no significant damage and their bombs only fell in uninhabited areas.
Kinsei persisted in his idea. He envisioned a rendezvous of the H8Ks with I-Boats off the Baja California peninsula, south of California, from where they could take off, bomb Texas oilfields, and then fly on to the Gulf of Mexico. They were to operate in conjunction with German U-boat tankers. This Axis cooperation was planned for air raids up and down the North American eastern seaboard, with special "Propaganda Raids" on Boston, New York, and Washington, D.C. The plan was approved by the Japanese naval high command and German U-boat chief Admiral Karl Dönitz, who authorized the use of the first pair of "Milch Kuh" (Milk Cow) German U-boat tankers for the operation. Vice Admiral Kinsei ordered the manufacture of 30 H8Ks from the Kawanishi Company for completion in September 1942.
However, by the autumn of 1942 Japan's defensive posture compelled their navy's high command to confine all long-range aircraft to more conventional missions nearby in the South Pacific.
Dec. 7, 1941. On its way to the US west coast, I-26 tracks a US freighter. Precisely at 8:00 a.m., Dec 7, Pearl Harbor time, she surfaces and sinks Cynthia Olson with gunfire.
Dec. 15, 1941. Japanese submarine shelled Kahului, Maui, Hawaii.
Dec 20. Unarmed US tanker sunk by Japanese submarine I-17 off Cape Mendocino, California. 31 survivors rescued by Coast Guard from Blunt's Reef Lightship.
Dec 20. Unarmed US tanker shelled by Japanese submarine I-23 off the coast of California.
Dec 22. Unarmed U.S. tanker sunk by Japanese submarine I-21 about four miles south of Piedras Blancas light, California. I-21 machine-guns the lifeboats, but inflicts no casualties. I-21 later shells unarmed U.S. tanker Idaho near the same location.
Dec 23. Japanese submarine I-17 shells unarmed tanker southwest of Cape Mendocino, California.
Dec 27. Unarmed US tanker shelled by Japanese submarine I-23 10 miles from mouth of Columbia River.
Dec 30. Submarine I-1 shells Hilo, Hawaii.
Dec 31. Submarines shell Kauai, Maui, and Hawaii.
23 Feb 1942. The first Japanese attack on the U.S. mainland occurs when the submarine I-17 fires 13 shells at the Ellwood oil production facilities at Goleta, near Santa Barbara, California. Although only a catwalk and pumphouse were damaged, I-17 captain Nishino Kozo radioed Tokyo that he had left Santa Barbara in flames. No casualties were reported, and the total cost of the damage was estimated at approximately $500.
It was not clear why this target was chosen until much later, when it was found that the commander of this particular submarine had visited the site in the 1930s and stumbled into a field of prickly pear cactus. Captain Nishino never forgave the ridicule he received from his American hosts that day.
June 20. The radio station at Estevan Point, Vancouver Island, was fired on by the Japanese submarine I-26.
June 21. I-25 shells Fort Stevens, Oregon.
Sept 9. Phosphorus bombs were dropped on Mt. Emily, ten miles northeast of Brookings, Oregon, to start forest fires. They were dropped by a Yokosuka E14Y1 "Glen" reconnaissance seaplane piloted by Lt. Nobuo Fujita, who had been catapulted from submarine I-25.
Sep 29. Phosphorus bombings were repeated on the southern coast of Oregon.
Japanese submarines were generally assigned as screening forces ahead of fleet movements. The US, by contrast, assigned more submarines to individual action, where they methodically destroyed 1,314 ships of the Japanese merchant marine fleet, isolating that island nation. The giant I-400 class of submarine seaplane carrier was capable of attacking San Francisco or New York, but was targeting the Panama Canal before being diverted as the war ended.
German Submarines -- US Coastal waters
Jan 13, 1942. U-boats commenced Operation Paukenschlag (roll of the kettledrums) on the east coast of America, sinking 87 ships of 150,000 tons between Jan and July 1942. U-boats would cruise offshore of coastal tourist towns that did not turn off their lights and target ships that became silhouetted against the lit coastline.
Feb 28. Destroyer Jacob Jones (DD-130) struck by torpedo off NJ by U-578. There were eleven survivors.
April 5. U-552, commanded by Kapitänleutnant Erich Topp, sealed the fate of the British tanker MV British Splendour east of Cape Hatteras. The U-boat was part of the fourth wave of Operation Paukenschlag; she returned to Saint-Nazaire on April 27, 1942, having sunk seven ships during the patrol.
Apr 26. Destroyer Sturtevant (DD-240) is sunk by mine off Marquesas Key, Florida.
May 14. Submarine U-213 mines the waters off St. John's, Newfoundland.
June 11. U-87 mines the waters off Boston.
June 11. U-373 mines the waters off Delaware Bay.
June 12. U-701 mines the waters off Cape Henry, VA.
July 27. U-166 completes mining the waters off the Mississippi River Passes.
July 30. U-166 sinks Robert E. Lee and is in turn sunk by escorting PC-566 scoring the first Coast Guard kill of an enemy submarine. Until June 2001 U-166 was thought to have been sunk two days later by a Coast Guard J4F Widgeon.
July 31. U-751 lays mines off Charleston, S.C.
Aug 8. U-98 lays mines off Jacksonville, Fla.
Aug 9. U-98 lays mines off the mouth of St. Johns River, east of Jacksonville.
Sep 10. U-69 lays mines at mouth of Chesapeake Bay.
Sep 18. U-455 lays mines off Charleston, S.C.
Nov 10. U-608 lays mines off New York City, east of Ambrose Light.
July 23, 1943. U-613, en route to mine the waters off Jacksonville, Florida, sunk by George E. Badger (DD-196) south of Azores.
July 30. U-230 lays mines off entrance to Chesapeake Bay.
Sep 11. U-107 lays mines off Charleston, South Carolina.
German Submarines -- Caribbean
Feb 16, 1942. Operation Neuland begins. U-156 shelled oil installations on Aruba and sank three tankers.
Dozens more followed.
Apr 19. U-130 shells oil installations at Curacao, N.W.I.
Sept 9. U-214 lays mines off Colon, Canal Zone, the Atlantic entrance to the Panama Canal.
U-133's mission to destroy the Hoover Dam
This is only a story; U-133 could never have made it that far (see map showing its approximate path from St. Nazaire, a suitable base, to the target), as its fuel supply would never have allowed it (not even close; the Type VIIC could reach the US east coast by filling part of its water tanks with fuel, but even then it was stretching it). There was also no U-boat commander named Pfau.
Had such an unusual and daring raid been attempted during the war, people would talk and we would know about it by now.
The US broke the Japanese diplomatic code in 1932 and could read many, but not all, secret embassy and consulate messages. Through 1940, only Japanese military attachés were charged with gathering military intelligence, mostly accumulating publicly available information. With a directive on 20 January 1941, Tokyo charged the cultural attachés with shifting from "enlightenment" (propaganda) to using their contacts for civilian spying and establishing intelligence-gathering networks that would survive even a break in diplomatic relations. This decrypted report is indicative.
9 May 1941 Nakauchi (Los Angeles) to Gaimudaijin (Tokyo) Message #067
We have already established contact with absolutely reliable Japanese in the San Pedro and San Diego area, who keep a close watch on all shipments of airplanes and other war materials, and report the amounts and destinations of such shipments. The same steps have been taken with regard to traffic across the U.S.-Mexico border.
We shall maintain connection with our second generations who are at present in the (U.S.) Army, to keep us informed of various developments in the Army. We also have connections with our second generations working in airplane plants for intelligence purposes.
A budget of $500,000 was established for 1941 -- $10,000,000 in today's money.
Hawaii. The US did not close the Japanese consulates as was done with the German and Italians. Spies and agent handlers were free to continue under diplomatic immunity to photograph and report naval and air force placement and both military and cargo movements. Military intelligence officers were sent in civilian attire on passenger liners to assure the needed information was gathered correctly. A Japanese pilot whose Zero fighter was shot down at Pearl Harbor was aided and armed by an enemy alien; both were killed while taking hostages.
California. We were losing the war, which led to great fear of anti-US activity by enemy aliens. Atrocities against the English in Hong Kong and Singapore were well known. The sneak attack on Pearl Harbor and news reports of mass murders of white people in the western Pacific seemed to confirm the correctness of that opinion. There were the usual scares: a falling star reported as a signal flare; a strange pattern found in a field reported as a possible targeting signal; a surfaced submarine that was later reported to have flown away.
Decoded "diplomatic information" about the spy network was available at the highest levels of Washington and, no doubt, contributed to the decision to relocation enemy aliens away from the west coast war zone.
June 28, 1941. Merchant ship, N.J. harbor.
The Normandie, renamed Lafayette (AP-53), burns at her New York pier and capsizes at her berth.
On June 12, 1942, the U-584 Innsbruck offloaded four men at Amagansett, Long Island, New York, each equipped with a chest of detonators and explosives suitable for a year of operations. A Coast Guardsman spotted them and told his superiors. They planned to blow up hydroelectric dams, canal locks, and a railway station, among other targets. The operation was foiled when a saboteur named George Dasch confessed it to the FBI for reasons unknown.
Four other operatives were dropped off at Ponte Vedra Beach, south of Jacksonville, Florida, from U-202 on June 17, 1942. The Florida group made their way to Cincinnati and split up, with two going to Chicago and the others to New York. However, the Dasch confession led to the arrest of all four.
Six of the eight men were executed later; the others served prison time and were repatriated after the war.
Following the failure of this mission, no more raids on America were ordered by the Nazi leadership.
When war was declared after the attack on Pearl Harbor, no battle fleet existed, the USAAF had few fighter aircraft assigned to the whole west coast and even fewer anti-aircraft batteries, and the area was in a panic. The Japanese intent was to cause a diversion of defensive activity to the US coast, drawing away from military efforts in the Pacific. It worked better than expected. When combined with reports of murdered civilians in the western Pacific, the stage was set for a massive relocation of the enemy aliens (issei) and their children (nisei) from a war zone within the United States. Note: Children (nisei) obviously relocated with their parents, who were enemy aliens in a war zone -- it is disingenuous to imply that Americans of Japanese ancestry were targeted for relocation.
With the Pacific coast considered a battle zone, the voluntary relocation of Japanese from coastal areas was sought on 27 February 1942. Eight thousand had relocated by 27 March, when all remaining Japanese citizens in the coast defence zone were given 48 hours to report for relocation to the interior. 120,000 people were sent to former CCC camps run by the War Relocation Authority. Camps established under emergency conditions sometimes had limited facilities until the permanent camps could be completed. Camp members were paid token wages of $12/month as labourers and $19/month as professionals. Resettlement to communities that would accept Japanese was started when the fear of invasion had eased in 1943; 55,000 had been resettled by war's end. Iowans of German ancestry were interrogated monthly.
About 4,000 enemy citizens were interned as security risks by the Department of Justice: 50% Japanese, 40% German, 10% Italian. This was distinct from relocation.
By the time the US entered WW2, the war had been going on for over two years in Europe, four years in Africa, and ten years in China.
The history of early Tunisia and its indigenous inhabitants, the Berbers, is obscure prior to the founding of Carthage by seafaring Phoenicians from Tyre (in present-day Lebanon) in the 9th century BC . A great mercantile state developed at Carthage (near modern-day Tunis), which proceeded to dominate the western Mediterranean world. The great Carthaginian general Hannibal engineered the monumental trans-Alpine assault on Rome in 218 BC and inflicted costly losses on the Romans until choosing suicide rather than capture in 183 BC . Carthage was eventually burned to the ground by the Romans at the culmination of the Punic Wars in 146 BC . The Romans subsequently rebuilt the city, making it one of the great cities of the ancient world. With the decline of the Roman Empire, Tunisia fell successively to Vandal invaders during the 5th century AD , to the Byzantines in the 6th century, and finally to the Arabs in the 7th century. Thenceforth, Tunisia remained an integral part of the Muslim world.
In the 9th century, the governor of Tunisia, Ibrahim ibn Aghlab, founded a local dynasty nominally under the sovereignty of the 'Abbasid caliphs of Baghdad. The Aghlabids conquered Sicily and made Tunisia prosperous. In 909, the Fatimids ended Aghlabid rule, using Tunisia as a base for their subsequent conquest of Egypt. They left Tunisia in control of the subordinate Zirid dynasty until the 11th century, when the Zirids rebelled against Fatimid control. The Fatimids unleashed nomadic Arab tribes, the Banu Hilal and Banu Sulaym, to punish the Zirids, a move resulting in the destruction of the Zirid state and the general economic decline of Tunisia. In the 13th century, the Hafsids, a group subordinate to the Almohad dynasty based in Morocco, restored order to Tunisia. They founded a Tunisian dynasty that, from the 13th century to the 16th, made Tunisia one of the flourishing regions of North Africa. In the beginning of the 16th century, however, Spain's occupation of important coastal locations precipitated the demise of Hafsid rule.
In 1574, the Ottoman Turks occupied Tunisia, ruling it with a dey appointed by the Ottoman ruler. The dey's lieutenants, the beys, gradually became the effective rulers, in fact if not in name. Ultimately, in 1705, the bey Husayn ibn 'Ali established a dynasty. Successive Husaynids ruled Tunisia as vassals of the Ottomans until 1881 and under the French until 1956, the year of Tunisia's independence (the dynasty was abolished in 1957). During the 19th century, the Tunisian dynasts acted virtually as independent rulers, making vigorous efforts to utilize Western knowledge and technology to modernize the state. But these efforts led to fiscal bankruptcy and thus to the establishment of an international commission made up of British, French, and Italian representatives to supervise Tunisian finances. Continued rivalry between French and Italian interests culminated in a French invasion of Tunisia in May 1881. A protectorate was created in that year by the Treaty of Bardo; the Convention of La Marsa (1883) allowed the Tunisian dynasty to continue, although effective direction of affairs passed to the French. French interests invested heavily in Tunisia, and a process of modernization was vigorously pursued; at the same time, direct administration in the name of the dynasty was gradually expanded. The Tunisians, in turn, supported France in World War I.
The beginnings of modern nationalism in Tunisia emerged before the outbreak of the war, with hopes of greater Tunisian participation in government encouraged during the war by pronouncements such as the Fourteen Points (1918) of Woodrow Wilson. When these hopes were not realized, Tunisians formed a moderate nationalist grouping, the Destour ("Constitutional") Party. Dissatisfaction over the group's poor organization led, in 1934, to a split: the more active members, led by Habib Bourguiba, founded the Neo-Destour Party. France responded to demands for internal autonomy with repression, including the deposition and exile of the sovereign Munsif Bey. On 23 August 1945, the two Destour parties proclaimed that the will of the Tunisian people was independence. But the French still held firm. In December 1951, they again rejected a request by the Tunisian government for internal autonomy. The situation worsened when extremists among the French colonists launched a wave of terrorism. Finally, on 31 July 1954, French Premier Pierre Mendès-France promised the bey internal autonomy. After long negotiations accompanied by considerable local disorder, a French-Tunisian convention was signed on 3 June 1955 in Paris. On 20 March 1956, France recognized Tunisian independence.
In April 1956, Habib Bourguiba formed the first government of independent Tunisia, and on 25 July 1957, the Constituent Assembly, having established a republic and transformed itself into a legislative assembly, elected Bourguiba chief of state and deposed the bey. A new constitution came into effect on 1 June 1959. Bourguiba won the first presidential election in 1959 and was reelected in 1964, 1969, and 1974, when the National Assembly amended the constitution to make him president for life.
Economic malaise and political repression during the late 1970s led to student and labor unrest. A general strike called by the General Union of Tunisian Workers (UGTT) on 26 January 1978, in order to protest assaults on union offices and the harassment of labor leaders, brought confrontations with government troops in which at least 50 demonstrators and looters were killed and 200 trade union officials, including UGTT Secretary-General Habib Achour, were arrested. Prime Minister Hedi Nouira was succeeded by Mohamed Mzali in April 1980, marking the advent of a political liberalization. Trade union leaders were released from jails, and Achour ultimately received a full presidential pardon. In July 1981, the formation of opposition political parties was permitted. In elections that November, candidates of Bourguiba's ruling Destourian Socialist Party, aligned in a National Front with the UGTT, garnered all 136 National Assembly seats and 94.6% of the popular vote. An economic slump in 1982–83 brought a renewal of tensions; in January 1984, after five days of rioting in Tunis, the government was forced to rescind the doubling of bread prices that had been ordered as an austerity measure.
After independence, Tunisia pursued a nonaligned course in foreign affairs while maintaining close economic ties with the West. Tunisia's relations with Algeria, strained during the 1970s, improved markedly during the early 1980s, and on 19 March 1983 the two nations signed a 20-year treaty of peace and friendship. Relations with Libya have been stormy since the stillborn Treaty of Jerba (1974), a hastily drafted document that had been intended to merge the two countries into the Islamic Arab Republic; within weeks after signing the accord, Bourguiba, under pressure from Algeria and from members of his own government, retreated to a more gradualist approach toward Arab unity. A further irritant was the territorial dispute between Libya and Tunisia over partition of the oil-rich Gulf of Gabes, resolved by the international Court of Justice in Libya's favor in 1982. Tunisian-Libyan relations reached a low point in January 1980, when some 30 commandos (entering from Algeria but apparently aided by Libya) briefly seized an army barracks and other buildings at Gafsa in an abortive attempt to inspire a popular uprising against Bourguiba. In 1981, Libya vetoed Tunisia's bid to join OAPEC and expelled several thousand Tunisian workers; more Tunisian workers were expelled in 1985.
Following the evacuation of the Palestine Liberation Organization (PLO) from Lebanon in August 1982, Tunisia admitted PLO Chairman Yasir Arafat and nearly 1,000 Palestinian fighters. An October 1985 Israeli bombing raid on the PLO headquarters near Tunis killed about 70 persons. By 1987, the PLO presence was down to about 200, all civilians.
In 1986 and 1987, Bourguiba dealt with labor agitation for wage increases by again jailing UGTT leader Achour and disbanding the confederation. He turned on many of his former political associates, including his wife and son, while blocking two legal opposition parties from taking part in elections. Reasserting his control of Tunisian politics, Bourguiba dismissed Prime Minister Mzali, who fled to Algeria and denounced the regime. A massive roundup of Islamic fundamentalists in 1987 was the president's answer to what he termed a terrorist conspiracy sponsored by Iran, and diplomatic relations with Tehran were broken. On 27 September 1987, a state security court found 76 defendants guilty of plotting against the government and planting bombs; seven (five in absentia) were sentenced to death.
The trusted minister of interior, who had conducted the crackdown, General Zine el-Abidine Ben Ali, was named prime minister in September 1987. Six weeks later, Ben Ali seized power, ousting Bourguiba, whom he said was too ill and senile to govern any longer. He assumed the presidency himself, promising political liberalization. Almost 2,500 political prisoners were released and the special state security courts were abolished. The following year, Tunisia's constitution was revised, ending the presidency for life and permitting the chief executive three, five-year terms. Elections were advanced from 1991 to 1989 and Ben Ali ran unopposed. Candidates of the renamed Destour Party, the Constitutional Democratic Rally (RCD), won all of the 141 seats in the Chamber of Deputies, although the Islamist Party, an-Nahda, won an average of 18% of the vote where its members contested as independents.
The Constitution does not permit political parties based on religion, race, regional or linguistic affiliation, and thus Islamist parties in Tunisia face an uphill battle in gaining official recognition. After an attack on RCD headquarters in 1990, the government moved decisively against its Islamist opposition. Thousands were arrested and in 1992 military trials, 265 were convicted.
In the March 1994 presidential election, two men with no Islamist affiliation were arrested after announcing their candidacies for the presidency; Ben Ali was again unopposed and was reelected with 99.9% of the vote. In the new electoral system established for the 1994 Chamber of Deputies elections, the number of seats had been increased from 144 to 163. Under the new proportional system, 144 of the seats were to be contested and go to the majority party, with the remaining 19 distributed among the other contesting parties according to their share of the vote at the national level. In the parliamentary elections the president's RCD took all 144 seats, with the remaining six parties dividing up the 19 set-aside seats. In the 1995 municipal elections, out of 4,090 seats contested in 257 constituencies, independent candidates and members of the five recognized political parties won only six seats.
In July 1998 Ben Ali announced his plans to contest the presidential elections scheduled for October 1999. Two other candidates, Mohamed Belhaj Amor of the PUP and Abderrahmane Tlili of the UDU, also announced their candidacies. The parliament had again been enlarged, to 182 members, with 34 seats guaranteed to the opposition. In the 1999 elections Ben Ali received 99.4% of the votes, with Amor receiving 0.3% and Tlili 0.2%. The RCD was awarded 148 seats, with the five other official parties splitting the remaining 34.
In the 1990s Tunisia continued to follow a moderate, nonaligned course in foreign relations, complicated by sporadic difficulties with its immediate neighbors. Relations with Libya remained tense after ties were resumed in 1987. However, Ben Ali pursued normalized relations, which improved dramatically over the next few years. Thousands of Tunisians found work in Libya as the border was reopened. In 1992 the UN Security Council imposed sanctions against Libya over its refusal to hand over for trial the suspects in the Pan Am bombing affair. Tunisia did not wholeheartedly support all of the UN Security Council sanctions because of the real economic ties between the two countries. These ties meant that Libya's difficulties hampered the ability of Tunisia and the UAM (see below) to establish closer relations with the European Union. From 1995 forward, Tunisia lobbied at the international level for the cessation of the sanctions, citing the suffering they caused the Libyan people as well as the regional tensions they were creating. By 1997 Tunisia had quietly resumed joint economic projects and bilateral visits with Libya. Following Libya's 1998/99 decision to hand over for trial in the Netherlands the suspects in the 1988 Pan Am explosion over Lockerbie, Scotland, Tunisia moved to normalize relations with Libya, including resumption of TunisAir flights to Tripoli in June 2000.
Ben Ali also appeared committed to the promotion of the Union of the Arab Maghreb, an organization that became formalized in 1989 with Mauritania, Morocco, Algeria, Tunisia, and Libya. Ben Ali became president of the organization for 1993, though at this point the active work toward unification of the five countries was put on hold due in particular to the internal difficulties that Algeria faced as well as the problems of Libya in the international community caused by Libya's refusal to turn over the Lockerbie suspects. In 1999 the leaders of Morocco and Tunisia again called for a resuscitation of the organization and pledged to work toward that end in the following year.
Tunisia's relations with Algeria in the 1990s have been dominated by the Islamist issue. The leadership of Tunisia's not officially recognized an-Nahda party continues to be closely watched by both countries. With the decision of the Algerian military to annul their January 1992 elections in order to prevent the Islamists from gaining control of the government, relations improved between the two countries. Algeria signed a border agreement with Tunisia in 1993, ratified during a state visit of the Algerian leader. Reciprocal visits between the leadership of the two countries reinforced their commitment to controlling their joint border and fighting "extremism."
In 1988 'Abu Jihad, the military commander of the PLO, was assassinated near Tunis by Israeli commandos, provoking a Tunisian protest to the United Nations Security Council and a subsequent Council resolution condemning the Israeli action. However, relations with Israel then improved, and in 1993 Tunisia welcomed an official Israeli delegation as part of the peace process. Joint naval exercises between the two countries took place in March 1994. The PLO offices in Tunis were closed in 1994 as the new Palestinian Authority (PA) took up residence in Gaza. In 1996, following PA elections, Tunisia moved to establish low-level diplomatic relations with Israel as it also announced its decision to recognize PA passports. However, with the slowing of the peace process and the election of the Netanyahu government in Israel, the warming of relations between Israel and Tunisia cooled and remained on hold.
Ben Ali also moved to normalize relations with Egypt and visited Cairo in 1990 to that end, the first such trip by a Tunisian President since 1965. In 1997 several agreements regarding economic and cultural cooperation were signed between the two countries.
Although the United States has provided economic and military aid, Tunisia opposed American support for Kuwait following Iraq's invasion in 1990. Tunisia's support of Iraq in this crisis caused a rift in relations with Kuwait that was finally healed, through Ben Ali's efforts, with the visit of Kuwait's Crown Prince to Tunis in 1996 and the granting of a loan to Tunisia from the Kuwait-based Arab Fund for Economic and Social Development. At the same time, Tunisia continued good relations with Iraq and continued to call for a cessation of UN sanctions against Baghdad.
The consistent stance of Ben Ali's government toward Islamist parties has brought him friends in the west, though his own poor human rights record has provoked consternation from western governments and vocal criticism from western media and human rights organizations. Complaints against his regime have included torture under interrogation, deaths in custody, secret or unfair trials and long prison sentences for opposition leaders, inhumane prison conditions and restrictions on free speech and the press, including even controls on the use of satellite dishes. Ironically, the UN Committee against Torture (along with numerous other human rights groups and including the Arab Commission of Human Rights) denounced the police and security forces in Tunisia, while Tunisia was unanimously elected to the UN Human Rights Commission in 1997.
In July 1995, Tunisia signed an association agreement with the European Union that in 2007 would make the country part of a free-trade area around the Mediterranean known as the European Economic Area, the first southern Mediterranean country to be brought into the planned association. The United States has continued to offer praise to Tunisia and encouragement of US investment, but has held off on requested military aid. Relations with Italy, Tunisia's second-largest trading partner after France, have been complicated by the issues of illegal immigration from Tunisia and of fishing rights.
On 6 April 2000, Bourguiba died at age 96. A 7-day period of mourning was declared, and thousands of mourners lined his funeral procession route.
Following the 11 September 2001 terrorist attacks on the United States, the United States called upon all states to implement counterterrorism measures. On 11 April 2002, a truck exploded at a synagogue on the Tunisian resort island of Djerba, killing 21 people, including 14 German tourists. German intelligence officials reported the bombing was a terrorist attack, and cited links to the al-Qaeda organization. In November, Ben Ali called for an international conference on terrorism to establish an international code of ethics to which all parties would be committed. In December, the United States praised Tunisia for its efforts in combating terrorism, and for its "record of moderation and of tolerance in the region."
In a referendum held on 26 May 2002, voters overwhelmingly approved a series of constitutional amendments that would make a marked change in the country's political structure. They included: additional guarantees regarding the pre-trial and preventive custody of defendants; the creation of a second legislative body; the elimination of presidential term limits, along with the setting of a maximum age ceiling of 75 years for a presidential candidate; and the consecration of the importance of human rights, solidarity, mutual help, and tolerance as values enshrined in the constitution.
In November 2002, Ben Ali announced a series of electoral reform measures, which, in addition to the "Chamber of Councilors" approved by the May referendum, included provisions to further guarantee the fairness of voter registration and election processes, and provisions to reduce the minimum requirement for campaign financing and reimbursement by the state. He also called on radio and television operators to provide wider coverage of opposition parties and nongovernmental organizations, and introduced a bill that would guarantee citizens' privacy and protection of personal data. The next presidential and legislative elections are scheduled for 2004.
In a speech presented at a summit of the Non-Aligned movement in Kuala Lumpur, Malaysia, in February 2003, Ben Ali reiterated his call for an international conference on terrorism, and called for a peaceful solution to the crisis in Iraq. By March 2003, the UN Security Council was considering whether or not it would sanction the use of force in providing for Iraq's disarmament of weapons of mass destruction called for in its Resolution 1441 passed 8 November 2002, and the United States and UK had stationed nearly 300,000 military personnel in the Persian Gulf region.
Gospel of Thomas
The Gospel According to Thomas is an early Christian non-canonical sayings-gospel that many scholars believe provides insight into the oral gospel traditions. It was discovered near Nag Hammadi, Egypt, in December 1945 among a group of books known as the Nag Hammadi library. Scholars speculate that the works were buried in response to a letter from Bishop Athanasius declaring a strict canon of Christian scripture.
The Coptic-language text, the second of seven contained in what modern-day scholars have designated as Codex II, is composed of 114 sayings attributed to Jesus. Almost half of these sayings resemble those found in the Canonical Gospels, while it is speculated that the other sayings were added from Gnostic tradition. Its place of origin may have been Syria, where Thomasine traditions were strong.
The introduction states: "These are the hidden words that the living Jesus spoke and Didymos Judas Thomas wrote them down." Didymus (Greek) and Thomas (Aramaic) both mean "twin". Some critical scholars suspect that this reference to the Apostle Thomas is false, and that therefore the true author is unknown.
It is possible that the document originated within a school of early Christians, possibly proto-Gnostics. Some critics further state that even the description of Thomas as a "gnostic" gospel is based upon little other than the fact that it was found along with gnostic texts at Nag Hammadi. The name of Thomas was also attached to the Book of Thomas the Contender, which was also in Nag Hammadi Codex II, and the Acts of Thomas. While the Gospel of Thomas does not directly point to Jesus' divinity, it also does not directly contradict it, and therefore neither supports nor contradicts gnostic beliefs. When asked his identity in the Gospel of Thomas, Jesus usually deflects, ambiguously asking the disciples why they do not see what is right in front of them, similar to some passages in the canonical gospels like John 12:16 and Luke 18:34.
The Gospel of Thomas is very different in tone and structure from other New Testament apocrypha and the four Canonical Gospels. Unlike the canonical Gospels, it is not a narrative account of the life of Jesus; instead, it consists of logia (sayings) attributed to Jesus, sometimes stand-alone, sometimes embedded in short dialogues or parables. The text contains a possible allusion to the death of Jesus in logion 65 (Parable of the Wicked Tenants, paralleled in the Synoptic Gospels), but doesn't mention his crucifixion, his resurrection, or the final judgment; nor does it mention a messianic understanding of Jesus. Since its discovery, many scholars have seen it as evidence in support of the existence of the so-called Q source, which might have been very similar in its form as a collection of sayings of Jesus without any accounts of his deeds or his life and death, a so-called "sayings gospel".
Bishop Eusebius (AD 260/265 – 339/340) included it among a group of books that he believed to be not only spurious, but "the fictions of heretics". However, it is not clear whether he was referring to this Gospel of Thomas or one of the other texts attributed to Thomas.
Finds and publication
The manuscript of the Coptic text (CG II), found in 1945 at Nag Hammadi, Egypt, is dated at around 340 AD. It was first published in a photographic edition in 1956. This was followed three years later (1959) by the first English-language translation, with Coptic transcription. In 1977, James M. Robinson edited the first complete collection of English translations of the Nag Hammadi texts. The Gospel of Thomas has been translated and annotated worldwide in many languages.
The original Coptic manuscript is now the property of the Coptic Museum in Cairo, Egypt, Department of Manuscripts.
Oxyrhynchus papyrus fragments
After the Coptic version of the complete text was discovered in 1945 at Nag Hammadi, scholars soon realized that three different Greek text fragments previously found at Oxyrhynchus (the Oxyrhynchus Papyri), also in Egypt, were part of the Gospel of Thomas. These three papyrus fragments of Thomas date to between 130 and 250 AD. Prior to the Nag Hammadi library discovery, the sayings of Jesus found in Oxyrhynchus were known simply as Logia Iesu. The corresponding Uncial script Greek fragments of the Gospel of Thomas, found in Oxyrhynchus are:
- P. Oxy. 1: fragments of logia 26 through 33, with the last two sentences of logion 77 of the Coptic version included at the end of logion 30.
- P. Oxy. 654: fragments of the beginning through logion 7, plus logion 24 and logion 36, on the flip side of a papyrus containing surveying data.
- P. Oxy. 655: fragments of logia 36 through 39; eight fragments designated a through h, of which f and h have since been lost.
The wording of the Coptic sometimes differs markedly from the earlier Greek Oxyrhynchus texts, the extreme case being that the last portion of logion 30 in the Greek is found at the end of logion 77 in the Coptic. This fact, along with the quite different wording Hippolytus uses when apparently quoting it (see below), suggests that the Gospel of Thomas "may have circulated in more than one form and passed through several stages of redaction."
The earliest surviving written references to the Gospel of Thomas are found in the writings of Hippolytus of Rome (c. 222–235) and Origen of Alexandria (c. 233). Hippolytus wrote in his Refutation of All Heresies 5.7.20:
[The Naassenes] speak...of a nature which is both hidden and revealed at the same time and which they call the thought-for kingdom of heaven which is in a human being. They transmit a tradition concerning this in the Gospel entitled "According to Thomas," which states expressly, "The one who seeks me will find me in children of seven years and older, for there, hidden in the fourteenth aeon, I am revealed."
This appears to be a reference to saying 4 of Thomas, although the wording differs significantly.
In the 4th and 5th centuries, various Church Fathers wrote that the Gospel of Thomas was highly valued by Mani. In the 4th century, Cyril of Jerusalem mentioned a "Gospel of Thomas" twice in his Catechesis: "The Manichæans also wrote a Gospel according to Thomas, which being tinctured with the fragrance of the evangelic title corrupts the souls of the simple sort." and "Let none read the Gospel according to Thomas: for it is the work not of one of the twelve Apostles, but of one of the three wicked disciples of Manes." The 5th century Decretum Gelasianum includes "A Gospel attributed to Thomas which the Manichaean use" in its list of heretical books.
Date of composition
Richard Valantasis writes:
Assigning a date to the Gospel of Thomas is very complex because it is difficult to know precisely to what a date is being assigned. Scholars have proposed a date as early as 40 AD or as late as 140 AD, depending upon whether the Gospel of Thomas is identified with the original core of sayings, or with the author's published text, or with the Greek or Coptic texts, or with parallels in other literature.
Valantasis and other scholars argue that it is difficult to date Thomas because, as a collection of logia without a narrative framework, individual sayings could have been added to it gradually over time. Valantasis dates Thomas to 100 – 110 AD, with some of the material certainly coming from the first stratum which is dated to 30 – 60 AD. J. R. Porter dates the Gospel of Thomas much later, to 250 AD.
Robert E. Van Voorst states:
Most interpreters place its writing in the second century, understanding that many of its oral traditions are much older.
Scholars generally fall into one of two main camps: an "early camp" favoring a date for the "core" of between the years 50 and 100, before or approximately contemporary with the composition of the canonical gospels, and a "late camp" favoring a date in the 2nd century, after the composition of the canonical gospels.
Form of the gospel
Theissen and Merz argue the genre of a collection of sayings was one of the earliest forms in which material about Jesus was handed down. They assert that other collections of sayings, such as the Q document and the collection underlying Mark 4, were absorbed into larger narratives and no longer survive as independent documents, and that no later collections in this form survive. Marvin Meyer also asserted that the genre of a "sayings collection" is indicative of the 1st century, and that in particular the "use of parables without allegorical amplification" seems to antedate the canonical gospels. Maurice Casey has strongly questioned the argument from genre: the "logic of the argument requires that Q and the Gospel of Thomas be also dated at the same time as both the book of Proverbs and the Sayings of Amen-em-Opet."
Independence from Synoptic Gospels
Stevan L. Davies argues that the apparent independence of the ordering of sayings in Thomas from that of their parallels in the synoptics shows that Thomas was not evidently reliant upon the canonical gospels and probably predated them. Several authors argue that when the logia in Thomas do have parallels in the synoptics, the version in Thomas often seems closer to the source. Theissen and Merz give sayings 31 and 65 as examples of this. Koester agrees, citing especially the parables contained in sayings 8, 9, 57, 63, 64 and 65. In the few instances where the version in Thomas seems to be dependent on the Synoptics, Koester suggests, this may be due to the influence of the person who translated the text from Greek into Coptic.
Koester also argues that the absence of narrative materials (such as those found in the canonical gospels) in Thomas makes it unlikely that the gospel is "an eclectic excerpt from the gospels of the New Testament". He also cites the absence of the eschatological sayings considered characteristic of Q to show the independence of Thomas from that source.
Intertextuality with John's gospel
Another argument for an early date is what some scholars have suggested is an interplay between the Gospel of John and the logia of Thomas. Parallels between the two have been taken to suggest that Thomas' logia preceded John's work, and that the latter was making a point-by-point riposte to Thomas, either in real or mock conflict. This seeming dialectic has been pointed out by several New Testament scholars, notably Gregory J. Riley, April DeConick, and Elaine Pagels. Though differing in approach, they argue that several verses in the Gospel of John are best understood as responses to a Thomasine community and its beliefs. Pagels, for example, says that John's gospel makes two references to the inability of the world to recognize the divine light. In contrast, several of Thomas' sayings refer to the light born 'within'.
John's gospel is the only canonical one that gives Thomas the Apostle a dramatic role and spoken part, and Thomas is the only character therein described as having apistos (unbelief), despite the failings of virtually all the Johannine characters to live up to the author's standards of belief. With respect to the famous story of "Doubting Thomas", it is suggested that John may have been denigrating or ridiculing a rival school of thought. In another apparent contrast, John's text matter-of-factly presents a bodily resurrection as if this is a sine qua non of the faith; in contrast, Thomas' insights about the spirit-and-body are more nuanced. For Thomas, resurrection seems more a cognitive event of spiritual attainment, one even involving a certain discipline or asceticism. Again, an apparently denigrating portrayal in the "Doubting Thomas" story may either be taken literally, or as a kind of mock "comeback" to Thomas' logia: not as an outright censuring of Thomas, but an improving gloss. After all, Thomas' thoughts about the spirit and body are really not so different from those which John has presented elsewhere. John portrays Thomas as physically touching the risen Jesus, inserting fingers and hands into his body, and ending with a shout. Pagels interprets this as signifying one-upmanship by John, who is forcing Thomas to acknowledge Jesus' bodily nature. She writes that "...he shows Thomas giving up his search for experiential truth – his 'unbelief' – to confess what John sees as the truth...". The point of these examples, as used by Riley and Pagels, is to support the argument that the text of Thomas must have existed and have gained a following at the time of the writing of John's Gospel, and that the importance of the Thomasine logia was great enough that John felt the necessity of weaving them into his own narrative.
As the scholarly debate continues on the issue of possible John–Thomas interplay, Christopher Skinner more recently responded in part to Riley, DeConick, and Pagels with John and Thomas – Gospels in Conflict? (Wipf and Stock, Princeton Theological Monograph Series 115, 2009).
Role of James
Albert Hogeterp argues that the Gospel's saying 12, which attributes leadership of the community to James the Just rather than to Peter, agrees with the description of the early Jerusalem church by Paul in Galatians 2:1–14 and may reflect a tradition predating AD 70. Meyer also lists "uncertainty about James the righteous, the brother of Jesus" as characteristic of a 1st-century origin.
In later traditions (most notably in the Acts of Thomas, the Book of Thomas the Contender, etc.), Thomas is regarded as the twin brother of Jesus. Nonetheless, this gospel contains some sayings (log. 55, 99 and 101) that set Jesus in opposition to his own family, which creates difficulties for identifying its Thomas with James, the brother of Jesus cited by Josephus in Antiquities of the Jews. Moreover, some sayings (principally log. 6, 14, 104, and Oxyrhynchus papyrus 654, log. 6) show the gospel taking a stance opposed to Jewish mores, especially with respect to circumcision and dietary practices (log. 55), key issues in the early Jewish-Christian community led by James (Acts 15:1–35; Gal. 2:1–10).
The sense of the saying on the Sabbath, "if you do not keep the Sabbath as a Sabbath, you will not see the Father," is much disputed. This saying, too, appears contrary to strict observance of the Jewish law. Aelred Baker quotes Macarius of Syria: "For the soul that is considered worthy from the shameful and foul reflections keeps the sabbath a true sabbath and rests a true rest. . . . To all the souls that obey and come he gives rest from these . . . impure reflections . . ., (the souls) keeping the sabbath a true sabbath". Meyer also highlights that the Coptic language employs two different spellings for the word translated 'sabbath' in saying 27 (sambaton and sabbaton), so it is conceivable, though improbable, that the text could be translated 'observe the (whole) week as the sabbath'.
Depiction of Peter and Matthew
In saying 13, Peter and Matthew are depicted as unable to understand the true significance or identity of Jesus. Patterson argues that this can be interpreted as a criticism against the school of Christianity associated with the Gospel of Matthew, and that "[t]his sort of rivalry seems more at home in the first century than later", when all the apostles had become revered figures.
Parallel with Paul
According to Meyer, Thomas's saying 17: "I shall give you what no eye has seen, what no ear has heard and no hand has touched, and what has not come into the human heart", is strikingly similar to what Paul wrote in 1 Corinthians 2:9 (which was itself an allusion to Isaiah 64:4).
The late camp dates Thomas some time after 100 AD, generally in the mid-2nd century. They generally believe that although the text was composed around the mid-2nd century, it contains earlier sayings such as those originally found in the New Testament gospels of which Thomas was in some sense dependent in addition to inauthentic and possibly authentic independent sayings not found in any other extant text. J. R. Porter dates Thomas much later, to the mid-third century.
Dependence on the New Testament
Several scholars have argued that the sayings in Thomas reflect conflations and harmonisations dependent on the canonical gospels. For example, sayings 10 and 16 appear to contain a redacted harmonisation of Luke 12:49, 12:51–52 and Matthew 10:34–35. In this case it has been suggested that the dependence is best explained by the author of Thomas making use of an earlier harmonised oral tradition based on Matthew and Luke. Biblical scholar Craig A. Evans also subscribes to this view and notes that "Over half of the New Testament writings are quoted, paralleled, or alluded to in Thomas... I'm not aware of a Christian writing prior to AD 150 that references this much of the New Testament."
Another argument for a late dating of Thomas rests on the fact that Saying 5 in the original Greek (Papyrus Oxyrhynchus 654) follows the vocabulary used in the gospel according to Luke (Luke 8:17), not the vocabulary used in the gospel according to Mark (Mark 4:22). This argument presupposes the Two-Source Hypothesis (widely held among current New Testament scholars), on which the author of Luke used the pre-existing gospel according to Mark plus a lost Q document to compose his gospel. If the author of Thomas did, as Saying 5 suggests, draw on a pre-existing gospel according to Luke rather than on Mark's vocabulary, then the Gospel of Thomas must have been composed after both Mark and Luke (the latter of which is dated to between 60 AD and 90 AD).
Another saying that employs vocabulary similar to Luke's rather than Mark's is Saying 31 in the original Greek (Papyrus Oxyrhynchus 1), where Luke 4:24's term dektos (acceptable) is employed rather than Mark 6:4's atimos (without honor). The word dektos (in all its cases and genders) is clearly typical of Luke, since in the canonical gospels and Acts it is employed only by him (Luke 4:19; 4:24; Acts 10:35). Thus, the argument runs, the Greek Thomas has clearly been at least influenced by Luke's characteristic vocabulary.
J. R. Porter states that, because around half of the sayings in Thomas have parallels in the synoptic gospels, it is "possible that the sayings in the Gospel of Thomas were selected directly from the canonical gospels and were either reproduced more or less exactly or amended to fit the author's distinctive theological outlook." According to John P. Meier, scholars predominantly conclude that Thomas depends on or harmonizes the Synoptics.
Several scholars argue that Thomas is dependent on Syriac writings, including unique versions of the canonical gospels. They contend that many sayings of the Gospel of Thomas are more similar to Syriac translations of the canonical gospels than their record in the original Greek. Craig A. Evans states that saying 54 in Thomas, which speaks of the poor and the kingdom of heaven, is more similar to the Syriac version of Matthew 5:3 than the Greek version of that passage or the parallel in Luke 6:20.
Klyne Snodgrass notes that saying 65–66 of Thomas containing the Parable of the Wicked Tenants appears to be dependent on the early harmonisation of Mark and Luke found in the old Syriac gospels. He concludes that, "Thomas, rather than representing the earliest form, has been shaped by this harmonizing tendency in Syria. If the Gospel of Thomas were the earliest, we would have to imagine that each of the evangelists or the traditions behind them expanded the parable in different directions and then that in the process of transmission the text was trimmed back to the form it has in the Syriac Gospels. It is much more likely that Thomas, which has a Syrian provenance, is dependent on the tradition of the canonical Gospels that has been abbreviated and harmonized by oral transmission."
Nicholas Perrin argues that Thomas is dependent on the Diatessaron, which was composed shortly after 172 by Tatian in Syria. Perrin explains the order of the sayings by attempting to demonstrate that almost all adjacent sayings are connected by Syriac catchwords, whereas in Coptic or Greek, catchwords have been found for only less than half of the pairs of adjacent sayings. Peter J. Williams analyzed Perrin's alleged Syriac catchwords and found them implausible. Robert Shedinger wrote that since Perrin attempts to reconstruct an Old Syriac version of Thomas without first establishing Thomas' reliance on the Diatessaron, Perrin's logic seems circular.
Lack of apocalyptic themes
Bart Ehrman argues that the historical Jesus was an apocalyptic preacher, and that his apocalyptic beliefs are recorded in the earliest Christian documents: Mark and the authentic Pauline epistles. The earliest Christians believed Jesus would soon return, and their beliefs are echoed in the earliest Christian writings. The Gospel of Thomas proclaims that the Kingdom of God is already present for those who understand the secret message of Jesus (Saying 113), and lacks apocalyptic themes. Because of this, Ehrman argues, the Gospel of Thomas was probably composed by a Gnostic some time in the early 2nd century.
N.T. Wright, the former Anglican bishop and professor of NT history at Cambridge and Oxford, now Research Professor of New Testament and Early Christianity at St Mary's College in the University of St Andrews in Scotland, also sees the dating of Thomas in the 2nd or 3rd century A.D. Wright's reasoning for this dating is that the "narrative framework" of 1st century Judaism and the New Testament is radically different from the worldview expressed in the sayings collected in the Gospel of Thomas. Thomas makes an anachronistic mistake by turning Jesus the Jewish prophet into a Hellenistic/Cynic philosopher. Wright concludes his section on the Gospel of Thomas in his book "The New Testament in the People of God" in this way: "[Thomas'] implicit story has to do with a figure who imparts a secret, hidden wisdom to those close to him, so that they can perceive a new truth and be saved by it. 'The Thomas Christians are told the truth about their divine origins, and given the secret passwords that will prove effective in the return journey to their heavenly home.' This is, obviously, the non-historical story of Gnosticism... It is simply the case that, on good historical grounds, it is far more likely that the book represents a radical translation, and indeed subversion, of first-century Christianity into a quite different sort of religion, than that it represents the original of which the longer gospels are distortions... Thomas reflects a symbolic universe, and a worldview, which are radically different from those of the early Judaism and Christianity."
Relation to the New Testament Canon
The harsh and widespread reaction to Marcion's canon, the first New Testament canon known to have been created, may demonstrate that, by 140 AD, it had become widely accepted that other texts formed parts of the records of the life and ministry of Jesus. Although arguments about some potential New Testament books, such as the Shepherd of Hermas and Book of Revelation, continued well into the 4th century, four canonical gospels, attributed to Matthew, Mark, Luke, and John, were accepted among proto-orthodox Christians at least as early as the mid-2nd century. Tatian's widely used Diatessaron, compiled between 160 and 175 AD, utilized the four gospels without any consideration of others. Irenaeus of Lyons wrote in the late 2nd century that since there are four quarters of the earth ... it is fitting that the church should have four pillars ... the four Gospels (Against Heresies, 3.11.8), and then shortly thereafter made the first known quotation from a fourth gospel—the canonical version of the Gospel of John. The late 2nd-century Muratorian fragment also recognizes only the three synoptic gospels and John. Bible scholar Bruce Metzger wrote regarding the formation of the New Testament canon, "Although the fringes of the emerging canon remained unsettled for generations, a high degree of unanimity concerning the greater part of the New Testament was attained among the very diverse and scattered congregations of believers not only throughout the Mediterranean world, but also over an area extending from Britain to Mesopotamia."
Relation to the Thomasine Milieu
The question also arises as to various sects' usage of other works attributed to Thomas and their relation to this work. The Book of Thomas the Contender, also from Nag Hammadi, is foremost among these, but the extensive Acts of Thomas provides the mythological connections. The short and comparatively straightforward Apocalypse of Thomas has no immediate connection with the synoptic gospels, while the canonical Jude – if the name can be taken to refer to Judas Thomas Didymus – certainly attests to early intra-Christian conflict. The Infancy Gospel of Thomas, shorn of its mythological connections, is difficult to connect specifically to our gospel, but the Acts of Thomas contains the Hymn of the Pearl whose content is reflected in the Psalms of Thomas found in Manichaean literature. These psalms, which otherwise reveal Mandaean connections, also contain material overlapping the Gospel of Thomas.
Importance and author
As one of the earliest accounts of the teachings of Jesus, the Gospel of Thomas is regarded by some scholars as one of the most important texts in understanding early Christianity outside the New Testament. In terms of faith, however, no major Christian group accepts this gospel as canonical or authoritative. It is an important work for scholars working on the Q document, which itself is thought to be a collection of sayings or teachings upon which the gospels of Matthew and Luke are partly based. Although no copy of Q has ever been discovered, the fact that Thomas is similarly a 'sayings' Gospel is viewed by some scholars as an indication that the early Christians did write collections of the sayings of Jesus, bolstering the Q hypothesis.
Most scholars do not consider Apostle Thomas the author of this document and the author remains unknown. J. Menard produced a summary of the academic consensus in the mid-1970s which stated that the gospel was probably a very late text written by a Gnostic author, thus having very little relevance to the study of the early development of Christianity. Scholarly views of Gnosticism and the Gospel of Thomas have since become more nuanced and diverse. Paterson Brown, for example, has argued forcefully that the three Coptic Gospels of Thomas, Philip and Truth are demonstrably not Gnostic writings, since all three explicitly affirm the basic reality and sanctity of incarnate life, which Gnosticism by definition considers illusory and evil.
Cyril of Jerusalem, for example, attributed the gospel to a disciple of Mani: "Mani had three disciples: Thomas, Baddas and Hermas. Let no one read the Gospel according to Thomas. For he is not one of the twelve apostles but one of the three wicked disciples of Mani."
Many scholars consider the Gospel of Thomas to be a gnostic text, since it was found in a library among others, it contains Gnostic themes, and perhaps presupposes a Gnostic worldview. Others reject this interpretation, because Thomas lacks the full-blown mythology of Gnosticism as described by Irenaeus of Lyons (ca. 185), and because Gnostics frequently appropriated and used a large "range of scripture from Genesis to the Psalms to Homer, from the Synoptics to John to the letters of Paul."
The historical Jesus
Some modern scholars believe that the Gospel of Thomas was written independently of the canonical gospels, and therefore is a useful guide to historical Jesus research. Scholars may utilize one of several critical tools in biblical scholarship, the criterion of multiple attestation, to help build cases for historical reliability of the sayings of Jesus. By finding those sayings in the Gospel of Thomas that overlap with the Gospel of the Hebrews, Q, Mark, Matthew, Luke, John, and Paul, scholars feel such sayings represent "multiple attestations" and therefore are more likely to come from a historical Jesus than sayings that are only singly attested.
Comparison of the major gospels
The material in the comparison chart is from Gospel Parallels by B. H. Throckmorton, The Five Gospels by R. W. Funk, The Gospel According to the Hebrews by E. B. Nicholson and The Hebrew Gospel and the Development of the Synoptic Tradition by J. R. Edwards.
| Item | Matthew, Mark, Luke | John | Thomas | Nicholson/Edwards Hebrew Gospels |
|---|---|---|---|---|
| New Covenant | The central theme of the Gospels – Love God with all your heart and your neighbor as yourself | The central theme – Love is the New Commandment given by Jesus | Secret knowledge, love your friends | The central theme – Love one another |
| Forgiveness | Very important – particularly in Matthew and Luke | Assumed | Not mentioned | Very important – Forgiveness is a central theme and this gospel goes into the greatest detail |
| The Lord's Prayer | In Matthew & Luke but not Mark | Not mentioned | Not mentioned | Important – "mahar" or "tomorrow" |
| Love & the poor | Very important – The rich young man | Assumed | Important | Very important – The rich young man |
| Jesus starts his ministry | Jesus meets John the Baptist and is baptized in the 15th year of Tiberius Caesar | Jesus meets John the Baptist, 46 years after Herod's Temple is built (John 2:20) | Only speaks of John the Baptist | Jesus meets John the Baptist and is baptized. This gospel goes into the greatest detail |
| Disciples – inner circle | Peter, Andrew, James & John | Peter, Andrew, the Beloved Disciple, Philip, Nathanael, Thomas, Judas not Iscariot & Judas Iscariot | Thomas, James the Just | Peter, Andrew, James, & John |
| Possible authors | Unknown; Mark the Evangelist & Luke the Evangelist | The Beloved Disciple | Unknown | Matthew the Evangelist (or Unknown) |
| Virgin birth account | Described in Matthew & Luke; Mark only makes reference to a "Mother" | Not mentioned, although the "Word becomes flesh" in John 1:14 | N/A, as this is a gospel of Jesus' sayings | Not mentioned |
| Jesus' baptism | Described | Seen in flash-back (John 1:32–34) | N/A | Described in great detail |
| Preaching style | Brief one-liners; parables | Essay format, Midrash | Sayings, parables | Brief one-liners; parables |
| Storytelling | Parables | Figurative language & metaphor | Proto-Gnostic, hidden, parables | Parables |
| Jesus' theology | 1st-century liberal Judaism | Critical of Jewish authorities | Proto-Gnostic | 1st-century Judaism |
| Miracles | Many miracles | Seven Signs | N/A | Fewer miracles |
| Duration of ministry | Not mentioned; possibly 3 years according to the Parable of the barren fig tree (Luke 13) | 3 years (Four Passovers) | N/A | 1 year |
| Location of ministry | Mainly Galilee | Mainly Judea, near Jerusalem | N/A | Mainly Galilee |
| Passover meal | Body & Blood = Bread and wine | Interrupts meal for foot washing | N/A | Hebrew Passover is celebrated, but details are N/A (Epiphanius) |
| Burial shroud | A single piece of cloth | Multiple pieces of cloth | N/A | Given to the High Priest |
| Resurrection | Mary and the women are the first to learn Jesus has arisen | John adds a detailed account of Mary's experience of the Resurrection | N/A | The Gospel of the Hebrews has the unique account of Jesus appearing to his brother, James the Just |
- The books, technically called codices had been bound by a method now called Coptic binding and placed in an earthenware jar. They were damaged by their discoverers, a group of peasants who broke the jar open and manhandled its contents.
- Modern-day scholars have numbered the sayings and even parts of the sayings, but the text contains no numbering.
- Lost Scriptures: Books that did not make it into the New Testament by Bart Ehrman, pp. 19-20
- Eerdmans Commentary on the Bible by James D. G. Dunn, John William Rogerson, 2003, ISBN 0-8028-3711-5 page 1574
- The Fifth Gospel, Patterson, Robinson, Bethge, 1998
- April D. DeConick 2006 The Original Gospel of Thomas in Translation ISBN 0-567-04382-7 page 2
- Layton, Bentley, The Gnostic Scriptures, 1987, p.361.
- Davies, Stevan L., The Gospel of Thomas and Christian Wisdom, 1983, pp. 23–24.
- DeConick, April D., The Original Gospel of Thomas in Translation, 2006, p.214
- Alister E. McGrath, 2006 Christian Theology ISBN 1-4051-5360-1 page 12
- James Dunn, John Rogerson 2003 Eerdmans Commentary on the Bible ISBN 0-8028-3711-5 page 1573
- Udo Schnelle, 2007 Einleitung in das Neue Testament ISBN 978-3-8252-1830-0 page 230
- "CHURCH FATHERS: Church History, Book III (Eusebius)".
- For photocopies of the manuscript see: http://www.gospels.net/thomas/
- A. Guillaumont, Henri-Charles Puech, Gilles Quispel, Walter Till and Yassah `Abd Al Masih, The Gospel According to Thomas (E. J. Brill and Harper & Brothers, 1959).
- Robinson, James M., General Editor, The Nag Hammadi Library in English, Revised Edition 1988, E.J. Brill, Leiden, and Harper and Row, San Francisco, ISBN 90-04-08856-3.
- Coptic Gnostic Papyri in the Coptic Museum at Old Cairo, vol. I (Cairo, 1956) plates 80, line 10 – 99, line 28.
- Bernard P. Grenfell and Arthur S. Hunt, Sayings of Our Lord from an early Greek Papyrus (Egypt Exploration Fund; 1897)
- Robert M. Grant and David Noel Freedman, The Secret Sayings of Jesus according to the Gospel of Thomas (Fontana Books, 1960).
- "P.Oxy.IV 0654".
- "P.Oxy.IV 0655".
- John P. Meier, A Marginal Jew (New York, 1990) p. 125.
- Koester 1990, pp.77ff
- Cyril Catechesis 4.36
- Cyril Catechesis 6.31
- Koester 1990 p. 78
- Valantasis, p. 12
- Patterson, Robinson, and Bethge (1998), p. 40
- Valantasis, p. 20
- Porter, J. R. (2010). The Lost Bible. New York: Metro Books. p. 9. ISBN 978-1-4351-4169-8.
- Van Voorst, Robert (2000). Jesus Outside the New Testament: an introduction to the ancient evidence. Grand Rapids: Eerdmans. p. 189.
- Theissen, Gerd; Merz, Annette (1998). The Historical Jesus: A Comprehensive Guide. Minneapolis: Fortress Press. pp. 38–39. ISBN 0-8006-3122-6.
- Meyer, Marvin (2001). "Albert Schweitzer and the Image of Jesus in the Gospel of Thomas". In Meyer, Marvin; Hughes, Charles. Jesus Then & Now: Images of Jesus in History and Christology. Harrisburg, PA: Trinity Press International. p. 73. ISBN 1-56338-344-6.
- Casey, Maurice (2002). An Aramaic Approach to Q: Sources for the Gospels of Matthew and Luke. Society for New Testament Studies Monograph Series. 122. Cambridge University Press. p. 33. ISBN 978-0521817233.
- "Misericordia University".
- Koester, Helmut; Lambdin (translator), Thomas O. (1996). "The Gospel of Thomas". In Robinson, James MacConkey. The Nag Hammadi Library in English (Revised ed.). Leiden, New York, Cologne: E. J. Brill. p. 125. ISBN 90-04-08856-3.
- Resurrection Reconsidered: Thomas and John in Conflict (Augsburg Fortress, 1995)
- Voices of the Mystics: Early Christian Discourse in the Gospel of John and Thomas and Other Ancient Christian Literature (T&T Clark, 2001)
- Beyond Belief: The Secret Gospel of Thomas. (New York: Vintage, 2004)
- Jn 1:5, 1:10
- logia 24, 50, 61, 83
- (Jn. 20:26–29)
- (logia 29, 80, 87)
- e.g. Jn. 3:6, 6:52–6 – but pointedly contrasting these with 6:63
- Pagels, Elaine. Beyond Belief: The Secret Gospel of Thomas. New York: Vintage, 2004. pp. 66–73
- Hogeterp, Albert L A (2006). Paul and God's Temple. Leuven, Netherlands; Dudley, MA: Peeters. p. 137. ISBN 90-429-1722-9.
- Turner, John D. (NHC II,7, 138,4). Retrieved 2016-01-08.
- Dom Aelred Baker, Vigiliae Christianae, Vol. 18, No. 4 (Dec., 1964), p. 220.
- Meyer, Marvin (1992). Gospel of Thomas, The hidden sayings of Jesus Harper Collins, San Francisco, ISBN 006065581X, pp. 81-82
- Patterson et al. (1998), p. 42
- "1 Corinthians 2:9 (footnote a.)". New International Version. Biblica, Inc. 2011. Retrieved 29 January 2011.
- Darrell L. Bock, "Response to John Dominic Crossan" in The Historical Jesus ed. James K. Beilby and Paul Rhodes Eddy. 148–149. "...for most scholars the Gospel of Thomas is seen as an early-second century text." (148–149).
- Darrell L. Bock, The Missing Gospels (Nashville: Thomas Nelson, 2006).61; 63. "Most date the gospel to the second century and place its origin in Syria...Most scholars regard the book as an early second-century work."(61); "However, for most scholars, the bulk of it is later reflecting a second-century work."(63)
- Klyne R. Snodgrass, "The Gospel of Thomas: A Secondary Gospel" in The Historical Jesus:Critical Concepts in Religious Studies. Volume 4: Lives of Jesus and Jesus outside the Bible. Ed. Craig A. Evans. 299
- Robert M. Grant and David Noel Freedman, The Secret Sayings of Jesus (Garden City, N.Y.: Doubleday & Company, 1960) 136–137.
- Strobel, Lee (2007). The Case for the Real Jesus. United States: Zondervan. p. 36.
- For general discussion, see John P. Meier, A Marginal Jew, (New York, 1991) pp. 137; pp. 163–64 n. 133. See also Christopher Tuckett, "Thomas and the Synoptics," Novum Testamentum 30 (1988) 132–57, esp. p. 146.
- Porter, J. R. (2010). The Lost Bible. New York: Metro Books. p. 166. ISBN 978-1-4351-4169-8.
- See summary in John P. Meier, A Marginal Jew (New York, 1991) pp. 135–138, especially the footnotes.
- Evans, Craig A. Fabricating Jesus: How Modern Scholars Distort the Gospels. Downers Grove, IL: IVP Books, 2008.
- Klyne R. Snodgrass, "The Gospel of Thomas: A Secondary Gospel" in The Historical Jesus:Critical Concepts in Religious Studies. Volume 4: Lives of Jesus and Jesus outside the Bible. Ed. Craig A. Evans. 298
- Nicholas Perrin, "Thomas: The Fifth Gospel?," Journal of The Evangelical Theological Society 49 (March 2006): 66–80
- Perrin, Nicholas (2003). Thomas and Tatian: The Relationship Between the Gospel of Thomas and the Diatessaron. Academia Biblica. 5. Koninklijke Brill NV, Leiden, The Netherlands: Brill Academic Publishers.
- Williams, P.J., "Alleged Syriac Catchwords in the Gospel of Thomas" Vigiliae Christianae, Volume 63, Number 1, 2009, pp. 71–82(12) BRILL
- Robert F. Shedinger, "Thomas and Tatian: The Relationship between the Gospel of Thomas and the Diatessaron by Nicholas Perrin" Journal of Biblical Literature, Vol. 122, No. 2 (Summer, 2003), pp. 388
- Ehrman, Bart D. (1999). Jesus, apocalyptic prophet of the new millennium (revised ed.). Oxford; New York: Oxford University Press. pp. 75–78. ISBN 0-19-512473-1.
- Wright, N.T. (1992). The New Testament and the People of God. Fortress Press. p. 443.
- Bruce M. Metzger, The Canon of the New Testament:its origin, development and significance p. 75
- Masing, Uku & Kaide Rätsep, Barlaam and Joasaphat: some problems connected with the story of "Barlaam & Joasaphat", the Acts of Thomas, the Psalms of Thomas and the Gospel of Thomas, Communio Viatorum 4:1 (1961) 29–36.
- Funk 1993 p. 15
- B. Ehrman (2003) pp. 57–58
- April D. De Conick (2006) The original Gospel of Thomas in translation ISBN 0-567-04382-7 pages 2–3
- Wilhelm Schneemelcher 2006 New Testament Apocrypha ISBN 0-664-22721-X page 111
- Bentley Layton 1989 Nag Hammadi codex II, 2–7: Gospel according to Thomas ISBN 90-04-08131-3 page 106
- Ehrman 2003 pp.59ff
- Davies, Stevan. "Thomas: The Fourth Synoptic Gospel", The Biblical Archaeologist 1983 The American Schools of Oriental Research. pp. 6–8
- Koester 1990 p. 84–6
- Funk 1993 p. 16ff
- Throckmorton, B. H. Gospel Parallels.
- Funk, R. W. The Five Gospels.
- Nicholson, E. B. The Gospel According to the Hebrews.
- Edwards, J. R. The Hebrew Gospel and the Development of the Synoptic Tradition.
- In the Synoptic Gospels this is the "Greatest Commandment" that sums up all of the "Law and the Prophets"
- Jn 13:34
- Logion 25
- The Lord says to his disciples: "And never be you joyful, except when you behold one another with love." Jerome, Commentary on Ephesians
- Matt 18:21, Lk 17:4
- Jn 20:23
- In the Gospel of the Hebrews, written in the Chaldee and Syriac language but in Hebrew script, and used by the Nazarenes to this day (I mean the Gospel of the Apostles, or, as it is generally maintained, the Gospel of Matthew, a copy of which is in the library at Caesarea), we find, "Behold the mother of the Lord and his brothers said to him, ‘John the Baptist baptizes for the forgiveness of sins. Let us go and be baptized by him.’ But Jesus said to them, ‘in what way have I sinned that I should go and be baptized by him? Unless perhaps, what I have just said is a sin of ignorance.’" And in the same volume, "‘If your brother sins against you in word, and makes amends, forgive him seven times a day.’ Simon, His disciple, said to Him, ‘Seven times in a day!’ The Lord answered and said to him, ‘I say to you, Seventy times seven.’ " Jerome, Against Pelagius 3.2
- In the so-called Gospel of the Hebrews, for "bread essential to existence," I found "mahar", which means "of tomorrow"; so the sense is: our bread for tomorrow, that is, of the future, give us this day. Jerome, Commentary on Matthew 1
- In Matthew's Hebrew Gospel it states, ‘Give us this day our bread for tomorrow.’ Jerome, On Psalm 135
- Matt 19:16, Mk 10:17 & Lk 18:18
- Jn 12:8
- Jesus said "Blessed are the poor, for to you belongs the Kingdom of Heaven" Logion 54
- The second rich youth said to him, "Rabbi, what good thing can I do and live?" Jesus replied, "Fulfill the law and the prophets." "I have," was the response. Jesus said, "Go, sell all that you have and distribute to the poor; and come, follow me." The youth became uncomfortable, for it did not please him. And the Lord said, "How can you say, I have fulfilled the Law and the Prophets, when it is written in the Law: You shall love your neighbor as yourself and many of your brothers, sons of Abraham, are covered with filth, dying of hunger, and your house is full of many good things, none of which goes out to them?" And he turned and said to Simon, his disciple, who was sitting by Him, "Simon, son of Jonah, it is easier for a camel to go through the eye of a needle than for the rich to enter the Kingdom of Heaven." Origen, Commentary on Matthew 15:14
- Matt 3:1, Mk 1:9, 3:21, Luke 3:1
- Jn 1:29
- Gospel of Thomas, Logion 46: Jesus said, "From Adam to John the Baptist, among those born to women, no one is greater than John the Baptist that his eyes should not be averted. But I have said that whoever among you becomes a child will recognize the (Father's) kingdom and will become greater than John."
- Epiphanius, Panarion 30:13
- Matt 10:1, Mk 6:8, Lk 9:3
- Jn 13:23, 19:26, 20:2, 21:7, 21:20
- Logion 13
- "There was a certain man named Jesus, about thirty years old, who chose us. Coming to Capernaum, He entered the house of Simon, who is called Peter, and said, ‘As I passed by the Sea of Galilee, I chose John and James, sons of Zebedee, and Simon, and Andrew, Thaddaeus, Simon the Zealot, Judas Iscariot; and you Matthew, sitting at the tax office, I called and you followed me. You therefore, I want to be the Twelve, to symbolize Israel.’" Epiphanius, Panarion 30:13
- Logion 12
- Logion 114
- Logion 21
- Epiphanius, Panarion 30:13, Jerome, On Illustrious Men, 2
- Although several Church Fathers say Matthew wrote the Gospel of the Hebrews, they are silent about the Greek Matthew found in the Bible. Modern scholars agree that Matthew did not write Greek Matthew, which is 300 lines longer than the Hebrew Gospel (see James Edwards, The Hebrew Gospel).
- Suggested by Irenaeus first
- They too accept Matthew's gospel, and like the followers of Cerinthus and Merinthus, they use it alone. They call it the Gospel of the Hebrews, for in truth Matthew alone in the New Testament expounded and declared the Gospel in Hebrew using Hebrew script. Epiphanius, Panarion 30:3
- Matthew 1:16, 18-25, 2:11, 13:53-55, Mark 6:2-3, Luke 1:30-35, 2:4-21, 34
- "After the people were baptized, Jesus also came and was baptized by John. As Jesus came up from the water, Heaven was opened, and He saw the Holy Spirit descend in the form of a dove and enter into him. And a voice from Heaven said, ‘You are my beloved Son; with You I am well pleased.’ And again, ‘Today I have begotten you.’ "Immediately a great light shone around the place; and John, seeing it, said to him, ‘Who are you, Lord?' And again a voice from Heaven said, ‘This is my beloved Son, with whom I am well pleased.’ Then John, falling down before Him, said, ‘I beseech You, Lord, baptize me!’ But Jesus forbade him saying, ‘Let it be so as it is fitting that all things be fulfilled.’" Epiphanius, Panarion 30:13
- Jesus said, "The (Father's) kingdom is like a shepherd who had a hundred sheep. One of them, the largest, went astray. He left the ninety-nine and looked for the one until he found it. After he had toiled, he said to the sheep, 'I love you more than the ninety-nine.'" Logion 107
- Mercer Dictionary of the Bible.
- Family of the King.
- Logion 109
- Hear Then the Parable.
- Similar to beliefs taught by Hillel the Elder (e.g. the "golden rule").
- Jn 7:45 & Jn 3:1
- Jerome, Commentary on Matthew 2
- John 2:13, 4:35, 5:1, 6:4, 19:14
- Events leading up to Passover
- Epiphanius, Panarion 30:22
- As was the Jewish practice at the time. (John 20:5–7)
- Jerome, On Illustrious Men, 2
- Matt 28:1, Mk 16:1, Lk 24:1
- Jn 20:11
- Jerome, On Illustrious Men, 2
- Clontz, T.E. and J., "The Comprehensive New Testament", Cornerstone Publications (2008), ISBN 978-0-9778737-1-5
- Davies, Stevan (1983). The Gospel of Thomas and Christian Wisdom. Seabury Press. ISBN 0-8164-2456-X
- DeConick, April. Recovering the Original Gospel of Thomas: A History of the Gospel and Its Growth (T&T Clark, 2005)
- Ehrman, Bart (2003). Lost Scriptures: Books that Did Not Make it into the New Testament. Oxford University Press, USA. ISBN 0-19-514182-2.
- Funk, Robert Walter and Roy W. Hoover, The Five Gospels: What Did Jesus Really Say? the Search for the Authentic Words of Jesus, Polebridge Press, 1993
- Guillaumont, Antoine Jean Baptiste, Henri-Charles Puech, G. Quispel, Walter Curt Till, and Yassah ʿAbd al-Masīḥ, eds. 1959. Evangelium nach Thomas. Leiden: E. J. Brill. Standard edition of the Coptic text.
- Koester, Helmut (1990). Ancient Christian Gospels. Harrisburg, PA: Trinity Press International. ISBN 0-334-02450-1.
- Layton, Bentley (1987). The Gnostic Scriptures: A New Translation with Annotations. Doubleday. ISBN 0-385-47843-7.
- Layton, Bentley (1989). Nag Hammadi Codex II, 2 vols, E.J.Brill. The critical edition of the seven texts of Codex II, including the Gospel of Thomas. ISBN 90-04-08131-3
- Meyer, Marvin (2004). The Gospel of Thomas: The Hidden Sayings of Jesus. HarperCollins. ISBN 978-0-06-065581-5.
- Pagels, Elaine (2003). Beyond Belief : The Secret Gospel of Thomas (New York: Random House)
- Patterson, Stephen J.; Robinson, James M.; Bethge, Hans-Gebhard (1998). The Fifth Gospel: The Gospel of Thomas Comes of Age. Harrisburg, PA: Trinity Press International. ISBN 1-56338-249-0.
- Perrin, Nicholas. Thomas and Tatian: The Relationship between the Gospel of Thomas and the Diatessaron (Academia Biblica 5; Atlanta : Society of Biblical Literature; Leiden : Brill, 2002).
- Perrin, Nicholas. Thomas: The Other Gospel (London, SPCK; Louisville, KY: Westminster John Knox: 2007).
- Robinson, James M. et al., The Nag Hammadi Library in English (4th rev. ed.; Leiden; New York: E.J. Brill, 1996)
- Plisch, Uwe-Karsten (2007). Das Thomasevangelium. Originaltext mit Kommentar. Stuttgart: Deutsche Bibelgesellschaft. ISBN 3-438-05128-1.
- Snodgrass, Klyne R. "The Gospel of Thomas: A secondary Gospel," Second Century 7, 1989. pp. 19–30.
- Tuckett, Christopher M. "Thomas and the Synoptics," Novum Testamentum 30 (1988) 132–57, esp. p. 146.
- Valantasis, Richard (1997). The Gospel of Thomas. London; New York: Routledge. ISBN 0-415-11621-X.
- The Facsimile Edition of the Nag Hammadi Codices: Codex II. E.J. Brill (1974)
- Tr. Thomas O. Lambdin. The Gospel of Thomas. sacred-texts.com.
- The Gospel of Thomas. With hyperlinear translation linked to Crum's Coptic Dictionary and Plumley's Coptic Grammar. Ecumenical Coptic Project online edition, 1998 ff.
- Ecumenical Coptic Project at Internet Archive.
- Gospel of Thomas Collection at The Gnosis Archive
- Gospel of Thomas at Early Christian Writings
- Gospel of Thomas Collection Commentary and Essays by Hugh McGregor Ross
- Michael Grondin's Coptic–English Interlinear Translation of the Gospel of Thomas
- Why is the Gospel of Thomas not in the canon? Online essay by Simonas Kiela
- The Gospel of Thomas and Christian Origins by André Gagné (The Montréal Review, December 2011)
- The Gospel of Thomas by Wim van den Dungen
- Gospel of Thomas, bibliography
National Cemetery Administration
History of the 32nd Indiana Infantry Monument
Prepared by Alec Bennett, NCA Historian
The Civil War was a seminal event in American History; its importance can hardly be overstated. Over four years, approximately 620,000 soldiers were killed in the conflict - 360,000 from the Union and 260,000 from the Confederacy - or 1.8 percent of the total U.S. population. Fourteen percent of all military personnel were killed. One hundred fifty years later, the Civil War remains the bloodiest conflict in American History.
The sheer number of lives lost naturally resulted in an outpouring of commemorative efforts, to provide the country an opportunity for reflection, and to honor the fallen soldiers. Americans commemorated the Civil War soldier dead through Memorial Day ceremonies, and by erecting monuments in stone and metal on battlefields, in cemeteries, around state and local government buildings, and in public parks.
The 32nd Indiana Infantry Monument was carved in January 1862 after the Battle of Rowlett's Station in Munfordville, Kentucky. It is believed to be the oldest extant Civil War monument. Private August Bloedner carved the monument to mark the interments of fellow soldiers of the 32nd Indiana Infantry, a regiment composed entirely of German-Americans, who fell in the battle. It was originally installed on the battlefield. In 1867, the 32nd Indiana Infantry Monument was moved to Cave Hill National Cemetery in Louisville, Kentucky, along with the remains of 11 of the 13 soldiers whose names are inscribed on the monument. After being reinterred in the cemetery, the 11 soldiers received individual markers, causing the carving to shift in meaning from a headstone marking the remains of multiple soldiers to a monument commemorating their sacrifice.
August Bloedner carved the monument from St. Genevieve limestone, a type of Indiana limestone that is soft, porous, and no longer used for sculpture or building purposes. By the 1950s, the monument was beginning to spall. By the early 2000s, approximately 50 percent of the inscription was lost. To preserve what was left of the monument, on December 17, 2008 (the 147th anniversary of the Battle of Rowlett's Station), the National Cemetery Administration, U.S. Department of Veterans Affairs, removed it to an indoor storage facility at the University of Louisville, where it received professional conservation.
On August 18, 2010, NCA moved the 32nd Indiana Infantry Monument to its new home at the Frazier History Museum, in downtown Louisville. The monument is on display in the museum foyer, free of charge to the public.
To continue to honor the soldiers from the 32nd Indiana Infantry buried in Cave Hill National Cemetery, NCA commissioned a new, successor monument to be installed in the location of the original. This was an exacting task. NCA wanted to clearly represent the successor as a modern piece, while simultaneously establishing a strong visual connection with the original.
After reviewing three candidates, NCA commissioned stone carver Nicholas Benson of The John Stevens Shop, Newport, Rhode Island, to hand-carve a successor monument. Benson is a third-generation master of hand letter carving, and completed the inscriptions on the Martin Luther King Jr. and World War II memorials.
The John Stevens Shop produced the replacement monument based upon a 1955 photograph and period German-language newspaper articles; it was installed in September 2011. Though it was created from a more durable Indiana limestone, the replacement monument is the same size and form and features the same German inscription as the original. The back of the replacement monument is inscribed with an English translation.
NCA dedicated the successor monument on December 16, 2011, almost 150 years to the day after the Battle of Rowlett's Station. Descendants of the 32nd Indiana Infantry laid a wreath on the monument to honor their ancestors.
The states along the border between the Union and Confederacy, including Maryland, Missouri, and especially Kentucky, were vitally important to either side. At the outbreak of war, President Abraham Lincoln is reported to have said, "I hope to have God on my side, but I must have Kentucky."
The state of Kentucky had long served as a mediator between the North and South. Henry Clay, the legendary senator from the state, helped broker the Missouri Compromise of 1820 and the Compromise of 1850, both of which attempted to defuse the contentious political issue of the expansion of slavery into new states and territories. In the years leading up to the Civil War, Kentucky was bordered by six states, three slave and three free. It was a slave state that nonetheless opposed secession.
However, after the outbreak of war, Kentucky did not initially ally with the Union. In May 1861, a month after the firing on Fort Sumter, the legislature passed, and the Governor signed, a resolution affirming the state's neutrality in the war. This ineffectual measure did little to stop the state from being drawn into the conflict. Both sides recruited Kentucky citizens from across state borders. By September 1861, both the Union and the Confederacy effectively circumvented the state's announced neutrality by establishing fortifications in Kentucky. That same month the Kentucky legislature, overriding a veto by the Governor, passed a resolution demanding an immediate withdrawal of all Confederate troops. With this motion, the state officially aligned itself with the Union.
Detail from the map, "Campaigns of the Civil War in the West" by A.B. Adlerman, ca. 1910, showing the location of the Battle of Rowlett's Station. Library of Congress.
Reflective of the state's divided loyalties, soldiers from Kentucky enlisted about evenly in the Union and Confederate armed forces. As is frequently noted by historians, both President Abraham Lincoln and President Jefferson Davis of the Confederacy were born in Kentucky, about 100 miles away from each other. Moreover, four of the brothers of the Kentucky-born First Lady Mary Todd Lincoln fought for the Confederacy.
During the war, Kentucky did not experience the same level of carnage as in neighboring Tennessee and Virginia. After a series of minor clashes in late 1861, including the Battle of Rowlett's Station, the Union victory at Mill Springs on January 19, 1862, halted an early Confederate offensive into the state. The following month, Union forces invaded Tennessee and successfully captured the Confederate forts Henry and Donelson. The fall of these forts forced many of the remaining rebel troops in Kentucky to abandon their positions and head south, to help bolster the Confederate military presence in Tennessee. By June, all major Confederate forces had withdrawn from Kentucky.
In August the Confederate Army once again advanced into Kentucky in an attempt to draw the state into the Confederacy. The Kentucky Campaign culminated in the Battle of Perryville on October 8, 1862. The Union victory there again forced the Confederates to retreat back into Tennessee, and the Confederate Army did not mount another large-scale invasion of Kentucky for the rest of the war. Confederates continued to engage in raids and small guerrilla actions in the state, but there were no further substantial engagements.
Despite the relative lack of bloodshed in Kentucky, the state was strategically vital to the Union Army as its northern border, the Ohio River, served as a conduit for soldiers and supplies. Moreover, two of the river's tributaries, the Cumberland and the Tennessee rivers, stretched across Kentucky into Tennessee. During the war, Union forces used both of these waterways to advance into the heart of the Confederacy.
There are six Civil War-era national cemeteries in Kentucky, more than any other state except Virginia. The large number of small cemeteries here was an early outcome of two factors, as reported by Captain Edmund Whitman, who oversaw the Department of the Tennessee. "Collecting the dead into larger groups…cannot be so fully realized throughout Kentucky, as in the other states," he wrote to his commander in 1866, "not only from the very seated condition of the graves, but from the nature of the country itself, a large portion of it without railroads and with its surface broken by mountain ridges." These cemeteries were associated with local military camps and hospitals, such as Camp Nelson or Lexington national cemeteries, or with a local battle, as is the case with Mill Springs National Cemetery.
In the two decades before the Civil War, there was a surge of immigration into the United States from Ireland, Germany, and Great Britain. In particular, from 1845-54 nearly 3 million new arrivals came to America in what was the largest wave of immigration in American History up to that time. On the whole, these new immigrants were much poorer and a higher percentage was Roman Catholic, compared to previous waves of immigrants. They came during a period of sustained economic growth in the United States, while many European nations were enduring political and economic turmoil.
Approximately 1.5 million Germans immigrated to the United States during this surge. The vast majority came in search of economic opportunities, although a small minority came for political reasons. In 1848, Germany was a loose confederation of states dominated by the noble classes, without a democratically elected, representative government. Rebelling against this status quo, reformers sparked a series of revolutions in the different states, advocating for a unified Germany and increased democratic reforms. When the forces of the noble classes defeated the revolutions, many of the reformers fled to the United States. Often, these "Forty-Eighters" became political and economic leaders of the German communities in America.
German immigrants who came to the United States in the mid-19th century tended to settle in the interior of the country, in the Midwest and the Ohio River valley. Following this trend, both Louisville and Cincinnati, Ohio, were popular destinations. By the mid-1850s both cities had multiple German-language newspapers. In 1855, immigrants comprised a quarter of Louisville's population of 43,000. In Cincinnati, many Germans settled in the Over-the-Rhine neighborhood, whose name still reflects their influence. Indianapolis' German population was not as large, and immigrants were more dispersed across the state.
Nationwide, the waves of immigration during the mid-19th century led to a political backlash, culminating in the formation and swift rise of the Know-Nothing Party, which won a series of state elections in 1854 on an anti-immigration, anti-Catholic platform. The party won the governorship and the state legislature in Massachusetts, and won control of the Maryland and Tennessee state legislatures. The next year the party won governorships in New Hampshire, Connecticut and Rhode Island.
Anti-immigrant sentiment erupted in violence during a primary election in Louisville on August 6, 1855, in which a mob threatened and intimidated the local immigrant population away from the polls. Rioting ensued, and at least 22 were killed. While nothing on the same scale occurred in Cincinnati, there was tension between the non-immigrant population and the new immigrants. The rise in the Irish and German population of Cincinnati coincided with an increase in crime, which local newspapers attributed to the recent immigrants.
After victories at the polls in 1854 and 1855, the Know-Nothing Party renamed itself the American Party, and nominated former President Millard Fillmore as its candidate in the 1856 presidential election. Fillmore won one state - Maryland - but soon after the party was effectively finished on the national stage. The Dred Scott decision of 1857 and the clashes of Bleeding Kansas divided the American Party over the issue of slavery.
At the advent of the Civil War, the immigrant population of the Northern states was much greater than the South, as cities in the North were more industrialized and offered more job opportunities to the new arrivals. While there was a small German community in Texas and other Southern states, in general there were few jobs available for immigrant labor in the South. According to the 1860 census, approximately 90 percent of the nation's immigrant population lived in the North. On the whole, the German community largely opposed slavery and strongly supported the Republican Party, and overwhelmingly enlisted in Union companies. Over the course of the war, an estimated 200,000 native Germans served, composing nearly 10 percent of the Union Army and Navy. Thus, while European immigrants contributed to the war efforts of both the blue and the gray, they enlisted in the Union ranks in much greater numbers.
On the whole, most native Germans joined military units with American-born volunteers. However, some states with large German populations formed companies consisting entirely of German-American soldiers. This includes the 9th Ohio Infantry, 74th Pennsylvania Infantry, 9th Wisconsin Infantry, and 32nd Indiana Infantry.
The 32nd Indiana Infantry consisted of German immigrants from throughout the state, and from just across the state border in Cincinnati. Organized in Indianapolis during summer 1861, the 32nd Indiana Infantry marched to Kentucky that fall, joining the Army of the Ohio. It was also known colloquially as the "First German" Indiana regiment because it was entirely made up of German-Americans, many of whom were not fluent in English. At the start of the war, the 32nd Indiana consisted of 937 soldiers, a typical number for Civil War regiments.
The commanding officer of the regiment, Colonel August Willich, was personally selected by the Governor of Indiana, Oliver Morton. Col. Willich served as an officer during the German Revolutions of 1848, fighting on the side of the reformers. He immigrated to the United States in the early 1850s, and by 1858 was the editor of the Cincinnati Republikaner, a German-language newspaper.
The Battle of Rowlett's Station was fought on December 17, 1861, between the 32nd Indiana Infantry and Confederate forces consisting of Terry's Texas Rangers, the 7th Texas Cavalry and the 1st Arkansas Battalion. Approximately 70 miles south of Louisville, Union forces were charged with protecting a pontoon bridge over the Green River to ensure the passage of soldiers and supplies along the Louisville & Nashville Railroad. When the Union forces encountered Confederate troops in the woods just south of Woodsonville, the two sides exchanged fire before withdrawing from the field.
The battle was small in scope, with 40 Union and 91 Confederate casualties, and the results were indecisive. Union forces did secure the Louisville & Nashville Railroad, allowing troops and supplies to continue to move through the area. In Kentucky, the Battle of Rowlett's Station was soon overshadowed in importance by the Union victory at the nearby Battle of Mill Springs on January 19, 1862, which halted the Confederate advance into the state, and led to the Union campaign into Tennessee in February.
The Battle of Rowlett's Station was the first action for the 32nd Indiana Infantry. After bivouacking near Munfordville on February 10, the regiment moved toward Nashville and went on to fight in the Battle of Shiloh in April 1862. Over the course of the war, the regiment fought in other large battles, including Chickamauga, Resaca, and the Siege of Corinth. The 32nd Indiana Infantry was mustered out in San Antonio, Texas in December 1865.
August Bloedner was born in 1827 at Altenburg in the Duchy of Saxe-Altenburg (currently the state of Thuringia), Germany. From 1841-45, he attended the local Arts and Crafts School. In 1845, he left Altenburg for Dresden, entering the highly respected Royal Academy of Fine Arts, but withdrew soon after without completing his studies. Bloedner came to the United States in 1849, and may have lived in New York City for a time, before settling in Cincinnati. He married Henrietta Behnke in 1856. After the outbreak of the Civil War, Bloedner enlisted in the Union Army on August 24, 1861, in Indianapolis for three years as a private, and was assigned to the 32nd Indiana Infantry Regiment.
After the Battle of Rowlett's Station, Bloedner and the 32nd Indiana Infantry bivouacked near Munfordville for approximately two months. During this time, Bloedner carved the 32nd Indiana Monument, and it was installed on the battlefield in late January 1862, marking the remains of those who fell. A fence was installed around the burial site and the monument.
The 32nd Indiana Infantry Monument is carved from St. Genevieve limestone, probably a local outcrop. It is approximately 60" wide by 49" high by 16" deep. In relief, there is a carved image of an eagle with its wings outstretched, clutching in its talons two cannons, which are resting on cannonballs. The eagle is flanked by American flags, along with an olive sprig and an oak branch. Below the relief panel, the monument is inscribed in a fraktur-like script in German with a brief description of the battle, and the names, birth dates, and birthplaces of those who fell. The translated inscription reads:
Here lie men of the 32nd First German Indiana Regiment sacrificed for the free Institutions of the Republic of the United States of North America.
They fell on 17 Dec. 1861, in an Encounter at Rowlett Station, in which 1 Regiment of Texas Rangers, 2 Regiments of Infantry, and 6 Rebel Cannons, in all over 3000 Men, were defeated by 500 German Soldiers.
Lieut. Max Sachs, born 6 Oct. 1826 in Fraustadt, Prussia
Rich Wehe, born 28 March 1832 in Leipzig
Fried. Schumacher, born 14 Jan. 1834 in Harvenfeld, Hannover
Henry Lohse, born 28 March 1835 in (unreadable)
Charles Knab, born 6 Feb. 1843 in Münchberg, Bavaria
John Fellermann, born 12 Jan. 1842 in Menzen, Hannover
Wm. Staabs, born 16 May 1820 in Coblenz, Prussia
Gari Kieffer, born 18 Feb. 1817 in Henriville, France
Christoph Reuter, born 1 Jan. 1818 in Markstedt, Bavaria
Ernst Schiemann, born 26 Feb. 1826 in Steindorfel, Saxony
Theodore Schmidt, born 8 Feb. 1839 in Hemkirchen, Hessen-Kassel
Daniel Schmidt, born 12 March 1834, in Grabowa, Prussia
George Burkhardt, born 14 Jan. 1844 in Keiselbach, Saxony
Of the 13 names inscribed on the monument, the remains of 11 were buried locally and marked by the 32nd Indiana Infantry Monument. The remains of the other two soldiers were sent to private cemeteries in Cincinnati for burial. Lt. Max Sachs was buried in Adath Israel Cemetery (now part of Price Hill Cemetery) and Pvt. Theodore Schmidt was buried in Spring Grove Cemetery.
The 32nd Indiana Infantry Monument likely was originally intended to serve as a grave marker. Little is known about the soldiers who fell in the battle. Eleven were privates, two were officers. According to the inscription on the monument, all were born in Germany, save for Carl Kieffer, who was born in the Lorraine region of France on the German border. Their ages ranged from 18 to 43, which was typical for the Army; it is estimated that most Union soldiers were between the ages of 18 and 39.
After the Civil War, the Office of the Quartermaster General (OQMG) of the U.S. Army was responsible for administering a nationwide reburial program to identify the burial locations of fallen Union soldiers and, if necessary, to reinter their remains in the new national cemeteries. In 1867, the OQMG moved the remains of the 11 soldiers buried on the Rowlett's Station battlefield, along with the 32nd Indiana Infantry Monument, to Cave Hill National Cemetery in Louisville. Cave Hill National Cemetery is embedded in a corner of the private Cave Hill Cemetery, a premier Rural-style burial ground established in 1848. The first interments in Cave Hill National Cemetery occurred in November 1861.
When the monument was moved to Cave Hill National Cemetery, it was installed on a base of Bedford limestone, with an English inscription that reads: "In memory of the first victims of the 32nd Reg. Indiana Vol. who fell at the Battle of Rowlett's Station, December 17, 1861." At this time, a German inscription was carved above the frieze, reading: "Brought here from Fort Willich, Munfordville, KY and reinterred on 6 June 1867."
Bloedner stayed with the 32nd Indiana Infantry throughout 1862. He was promoted to sergeant in January 1863, and was wounded on September 20 of that year in the Battle of Chickamauga. In October 1863 he was promoted to first sergeant; he mustered out on September 7, 1864, after completing a three-year enlistment. He returned to Cincinnati and worked as a stone cutter, before he died of heart disease on November 17, 1872, at age 46.
Carved in late January 1862, the 32nd Indiana Infantry Monument is one of a small number of military monuments or memorials erected as the war was still raging. This is unsurprising, as the immediate energies of both sides were directed toward winning the conflict. Civil War monuments were erected more frequently after the war, as the attention of the country turned toward honoring the fallen soldiers, and as the population of Civil War veterans aged and reflected on their military service.
During the war, the Union was faced with the fundamental responsibility of burying thousands of fallen soldiers - a huge logistical challenge. In response to this need, Congress passed legislation in July 1862 enabling the president to establish national cemeteries for burial of the soldier dead. The Army covered the expenses for officers' remains to be sent back to their families. Over the course of the war, the federal government established national cemeteries near large concentrations of dead, including major battlefields, hospitals, and military installations. In many areas the dead were buried in private cemeteries. In the field, commanding officers were responsible for the burial of the men under their command. The exigencies of war meant that thousands of soldiers who fell in rural, remote areas were buried in isolated graves outside of cemeteries. Often, the identities of these soldiers were lost, their remains eventually marked with unknown headstones.
When August Bloedner carved the 32nd Indiana Infantry Monument it originally served as a headstone to mark the remains of the soldiers who fell. It would be another seven months before Congress passed legislation authorizing the acquisition of land for burial purposes, and the design of permanent government-issued headstones to mark the remains of soldiers was not established until 1873.
The Confederate States of America faced the same fundamental need to bury their dead. Similar to their northern foes, Confederate Army officers were responsible for burying the troops under their command. Burial grounds were created near hospitals; those who fell in battle, or in rural areas, were often buried outside of established cemeteries. The major difference between the burial policies of the two sides was that the Confederate government did not establish a system of cemeteries equivalent to national cemeteries in the North.
One of the first Confederate monuments - and likely the first erected to either side - was the original Francis Bartow Monument on the battlefield of First Manassas in Virginia. The marble column, approximately 6 feet tall based upon a period drawing, was erected on September 4, 1861, just six weeks after the battle. After the Confederate Army moved from the area in 1862, the monument disappeared, and today its fate is unknown. A small stone on the battlefield is believed to be the original base for the monument.
From December 31, 1862, through January 2, 1863, Union and Confederate forces clashed outside of Murfreesboro, Tennessee, in the Battle of Stones River, which resulted in a Union victory. Union General William Hazen's brigade, consisting of the 9th Indiana, 41st Ohio, 6th Kentucky, and 110th Illinois infantry regiments, was posted at the edge of the Round Forest astride the tracks of the Nashville & Chattanooga Railroad. On January 2, the brigade helped repel four Confederate attacks in fighting so vicious that afterward the soldiers nicknamed the location "Hell's Half-Acre." Later that year, members of the brigade erected a monument on the site to honor the fallen. Fabricated from limestone masonry, the Hazen's Brigade Monument is a 10-foot-square block with a curved cornice, an unusual design for a Civil War monument. Forty-five soldiers were buried nearby. In 1864, stone cutters carved an inscription on the monument, including the names of 17 officers. Today, the monument is located in Stones River National Cemetery. For many years, the Hazen's Brigade Monument was considered the oldest extant Civil War monument. It is still the oldest Civil War monument in its original location.
After Lee's surrender at Appomattox, the federal government was faced with the fundamental task of locating and recovering the burials of Union soldiers scattered throughout the South. In the spring and summer of 1865, Army officers worked to locate the remains of Union soldiers, but their efforts were piecemeal in the absence of firm policy. In response, Quartermaster General Montgomery Meigs issued General Orders No. 65 in October 1865, which called upon officers to conduct a survey across the American landscape to identify the location of the remains of soldier dead, and provide recommendations for the disposition of remains. These orders became the core of the Union's burial policy, to inter the remains of every Union soldier within a national cemetery if possible. This was the most immediate and basic act of commemorating the Civil War dead.
In the mid-to-late 1860s, the federal government established national cemeteries, largely in the South. During this process, the War Department actively reinterred the recovered Union remains in the cemeteries and constructed other modest aspects of the sites. Permanent features of national cemeteries, including superintendent's lodges, enclosing walls and fences, and government-issued headstones were not fully realized until the next decade. National cemeteries were a natural setting to focus the nation's commemorative efforts, and especially to erect monuments to honor the fallen soldiers buried there. While a handful were dedicated during the late 1860s, it was not until the cemeteries were fully constructed and landscaped that monuments to the Union dead began to be donated and erected on a regular basis.
The former Confederate states were economically and socially devastated at the end of the war. During Reconstruction, the federal government removed the civilian governments in the southern states and put the U.S. Army in control. Lincoln's assassination, along with the postwar revelations of the poor treatment of Union prisoners, undermined any sympathetic Northern feelings toward the Confederacy. In this environment, efforts to memorialize the Confederate dead were met with skepticism and hostility on the part of the North. Immediately after the war, the federal government did not attempt to organize an effort to identify and mark the remains of the Confederate dead. Instead, local memorial groups and associations took on the responsibility of reinterring Confederates in private cemeteries. Of these groups, the highest profile may have been the Hollywood Memorial Association of the Ladies of Richmond, Virginia, which identified and located the remains of thousands of Confederate soldiers in and around the city and reinterred them at Hollywood Cemetery.
Civil War battlefields were also intuitive locations for groups to erect monuments and memorials. In the decades after the war, sections of the battlefields were preserved at Shiloh and Stones River, Tennessee; Antietam, Maryland; and the Wilderness, Virginia, among others. However, Gettysburg, Pennsylvania - the site of the bloodiest battle of the war - was the natural location for the first battlefield preservation efforts to crystallize. Nine months after the battle, while the Civil War was still raging, the Pennsylvania Legislature authorized the Gettysburg Battlefield Memorial Association (GBMA), a local organization, to:
…hold and preserve, the battle-grounds of Gettysburg…and by such perpetuation, and such memorial structures as a generous and patriotic people may aid to erect, to commemorate the heroic deeds, the struggles, and the triumphs of their brave defenders.
The GBMA began purchasing portions of the battlefield soon after.
Outside of national cemeteries and battlefields, monuments were erected in private cemeteries, and on public grounds such as statehouses, local courthouses, and parks. In private cemeteries, monuments were often dedicated to the soldiers buried therein. The monuments on state and local lands were often dedicated to the soldiers from that state or locality.
The advent of Decoration Day (later renamed Memorial Day) symbolized the national mood to honor the fallen Civil War dead; naturally, national cemeteries and the grave sites of fallen soldiers became the scenes of commemorative ceremonies. The exact origins of this national day of remembrance remain unclear, and the first observance is a matter of dispute. However, in early May 1868, John Logan, the Commander in Chief of the Grand Army of the Republic (GAR), a fraternal organization of Union veterans, issued General Orders (G.O.) No. 11. While not a military order, G.O. No. 11 designated May 30 as a day for commemoration of the Union dead within the GAR. Early on, these commemorative efforts included processions to local cemeteries, prayers and hymns.
In the South, local efforts to commemorate the dead led to ceremonies on Confederate Memorial Day. The opaque origins of Memorial Day have led some to argue that the Northern holiday was adopted from the Confederate version. In the absence of a large fraternal organization such as the GAR to establish a specific date, many Southern states observed Confederate Memorial Day on different dates. This also reflects the local, grassroots nature of such commemorative efforts.
In some cases, these commemorations culminated in the erection of memorials. For example, the monumental pyramid erected in 1869 at Hollywood Cemetery was dedicated to the Confederate enlisted men buried in the cemetery.
The federal government constructed a small number of monuments at prisoner-of-war sites in the former Confederacy, such as the granite obelisk dedicated in 1876 to the unknown dead in Salisbury National Cemetery, North Carolina. But the vast majority of monuments in national cemeteries were patriotic gestures donated by state governments, regimental groups, or other military associations such as the GAR.
The mid-to-late 1870s were characterized by an increasing number of monuments erected to Union soldiers. On the Gettysburg battlefield, the rise in memorialization was symbolic of the increased attention of the country itself to reflect upon the war's meaning. The GAR of Pennsylvania erected the first monument outside of the cemetery, a marble tablet, in 1878. The following summer, the 2nd Massachusetts Infantry placed on the battlefield the first monument dedicated to a regiment.
By the late 1870s and early 1880s, national membership in the GAR was soaring. What started as a small organization in 1866 grew to a membership of 45,000 in 1879, and to 233,000 in 1884. By the 1880s, the GAR had grown into a powerful national organization, lobbying the federal government on issues important to Union veterans, including pension benefits for veterans and their dependents. After an intense GAR lobbying effort, in 1888 the federal government adopted Decoration Day as a national holiday for federal employees. The GAR also engaged in many fundraising efforts for the carving and installation of monuments dedicated to Union veterans. Among properties now administered by NCA, the GAR donated monuments at the Albany Rural Cemetery Soldiers' Lot, New York, and at the San Francisco, California, and Loudon Park (Baltimore), Maryland, national cemeteries.
Many GBMA members were also in the GAR. In the mid-1880s, the GBMA began making preparations for the 25th anniversary commemoration of the Gettysburg battle. In support of this effort, it was resolved that regimental monuments should be "placed at the location held by regiments in the line of battle." The ceremony took place on July 3-4, 1888, and included the dedication of 133 regimental monuments. Today, the regimental monuments at Gettysburg National Military Park contribute to its status as one of the most historically significant landscapes in the United States.
The 1880s were the advent of the great age of Civil War monuments. While no national inventory of Civil War monuments exists, a significant number were erected in properties currently administered by NCA, and these can serve as a representative sample to illustrate the nationwide pattern of commemorative efforts. The number of monuments dedicated to the Union dead in cemeteries now overseen by NCA increased significantly in the 1880s, peaked in the 1890s, and then declined steadily from 1900 to 1930. Nationwide, it is likely that Civil War monument installation followed the same pattern.
Within this overall trend, from the 1890s until approximately 1920, many state legislatures in the North funded the installation of monuments in national cemeteries where large numbers of their dead were interred. For example, the state of Minnesota erected monuments in five national cemeteries: Jefferson Barracks (St. Louis), Missouri; Little Rock, Arkansas; Memphis and Nashville, Tennessee; and Andersonville, Georgia.
Southern commemorative efforts increased in the late 1880s and throughout the 1890s, in part as a result of the establishment of groups dedicated to honoring the Confederacy and the Confederate dead. The United Confederate Veterans (1889), United Daughters of the Confederacy (UDC - 1894) and the Sons of Confederate Veterans (1896) were all founded during this period. Local chapters often raised money to sponsor monuments to be placed in public spaces. Confederate monuments were more frequently erected from the 1890s onward.
By the end of the 19th century, the national mood had begun to soften, moving away from the lingering bitterness between the Union and the Confederacy. In part, the brief Spanish-American War of 1898 fostered nationalist feelings across the country. In 1906, Congress established the Commission to Mark the Remains of the Confederate Dead. Using this as a vehicle, the federal government erected a series of monuments dedicated to Confederate soldiers who died in former prisoner-of-war camps in Northern states. Monuments were erected in Point Lookout Confederate Cemetery (Ridge), Maryland; Finn's Point National Cemetery (Salem), New Jersey; Union Cemetery (Kansas City), Missouri; and Woodlawn Cemetery (Terre Haute), Indiana.
By 1920-25, Civil War monuments dedicated to both sides of the conflict were erected less frequently as veterans aged and died, and as the nation's attention turned to commemorating 20th century conflicts. Still, the Civil War continues to resonate deeply with the American public, and periodically new monuments are erected. For example, in 1999, the African-American Civil War Memorial was dedicated in the U Street neighborhood of Washington, DC. A new monument to the U.S. Colored Troops was erected in Nashville National Cemetery in 2005, and throughout the 2000s the UDC installed monuments dedicated to the Confederate dead at Mound City and Camp Butler (Springfield) national cemeteries and at Rock Island Confederate Cemetery, all in Illinois, and at Confederate Stockade Cemetery (Sandusky), Ohio.
At the time August Bloedner carved the 32nd Indiana Infantry Monument, the Civil War had just started. Most of the major battles were still in the future, including Antietam, Gettysburg and Shiloh. The federal government had not established any national cemeteries, or even begun a comprehensive, organized effort to mark the remains of its fallen troops. In this context, the monument simply marked the remains of soldiers from the 32nd Indiana Infantry who fell during the Battle of Rowlett's Station.
There is a contrast between the patriotic American imagery of the carved frieze and the "foreign" German inscription, a contrast emphasized by the fraktur-like script. In this sense, the monument symbolizes the situation of German immigrants in the United States in the 1860s, who lived in two worlds. The 13 soldiers whose names are inscribed on the monument enlisted, fought and died for their adopted country - even though none were native born, and even though they still spoke German as their primary language.
The 32nd Indiana Infantry Monument is one of a handful of monuments erected during the Civil War. After the war, the country was torn apart both politically and economically. During Reconstruction, the nation tried to repair itself while wrestling with the seemingly intractable problem of welcoming the 11 seceded states back into the Union. By the late 1870s, the last federal troops withdrew from the South, marking the end of Reconstruction. In this national climate, efforts to commemorate the Civil War dead through the erection of monuments and memorials began in earnest, a reflection of the increasing public consciousness to memorialize military valor. Of the hundreds of monuments in metal and stone standing in cemeteries, battlefields, and parks dedicated to the soldiers of the bloodiest war in American history, the 32nd Indiana Infantry Monument is the first.
Aggression is overt, often harmful, social interaction with the intention of inflicting damage or other unpleasantness upon another individual. It may occur either in retaliation or without provocation. In humans, frustration due to blocked goals can cause aggression. Submissiveness may be viewed as the opposite of aggressiveness.
In definitions commonly used in the social sciences and behavioral sciences, aggression is a response by an individual that delivers something unpleasant to another person. Some definitions include that the individual must intend to harm another person. Predatory or defensive behavior between members of different species may not be considered aggression in the same sense.
Aggression can take a variety of forms, which may be expressed physically, or communicated verbally or non-verbally: including anti-predator aggression, defensive aggression (fear-induced), predatory aggression, dominance aggression, inter-male aggression, resident-intruder aggression, maternal aggression, species-specific aggression, sex-related aggression, territorial aggression, isolation-induced aggression, irritable aggression, and brain-stimulation-induced aggression (hypothalamus). There are two subtypes of human aggression: (1) controlled-instrumental subtype (purposeful or goal-oriented); and (2) reactive-impulsive subtype (often elicits uncontrollable actions that are inappropriate or undesirable). Aggression differs from what is commonly called assertiveness, although the terms are often used interchangeably among laypeople (as in phrases such as "an aggressive salesperson").
Two broad categories of aggression are commonly distinguished. One includes affective (emotional) and hostile, reactive, or retaliatory aggression that is a response to provocation; the other includes instrumental, goal-oriented or predatory aggression, in which aggression is used as a means to achieve a goal. An example of hostile aggression would be a person who punches someone who insulted him or her. An instrumental form of aggression would be armed robbery. Research on violence from a range of disciplines lends some support to a distinction between affective and predatory aggression. However, some researchers question the usefulness of a hostile vs instrumental distinction in humans, despite its ubiquity in research, because most real-life cases involve mixed motives and interacting causes.
A number of classifications and dimensions of aggression have been suggested. These depend on such things as whether the aggression is verbal or physical; whether or not it involves relational aggression such as covert bullying and social manipulation; whether harm to others is intended or not; whether it is carried out actively or expressed passively; and whether the aggression is aimed directly or indirectly. Classification may also encompass aggression-related emotions (e.g. anger) and mental states (e.g. impulsivity, hostility). Aggression may occur in response to non-social as well as social factors, and can have a close relationship with stress coping style. Aggression may be displayed in order to intimidate.
The operative definition of aggression may be affected by moral or political views. Examples are the axiomatic moral view called the non-aggression principle and the political rules governing the behavior of one country toward another. Likewise in competitive sports, or in the workplace, some forms of aggression may be sanctioned and others not (see Workplace aggression).
The term aggression comes from the Latin word aggressio, meaning attack. The Latin was itself a joining of ad- and gradi-, which meant "step at." The first known use dates back to 1611, in the sense of an unprovoked attack. A psychological sense of "hostile or destructive behavior" dates back to 1912, in an English translation of the writing of Sigmund Freud. Alfred Adler had theorized about an "aggressive drive" in 1908. Child raising experts began to refer to aggression, rather than anger, from the 1930s.
Ethologists study aggression as it relates to the interaction and evolution of animals in natural settings. In such settings aggression can involve bodily contact such as biting, hitting or pushing, but most conflicts are settled by threat displays and intimidating thrusts that cause no physical harm. This form of aggression may include the display of body size, antlers, claws or teeth; stereotyped signals including facial expressions; vocalizations such as bird song; the release of chemicals; and changes in coloration. The term agonistic behaviour is sometimes used to refer to these forms of behavior.
Most ethologists believe that aggression confers biological advantages. Aggression may help an animal secure territory, including resources such as food and water. Aggression between males often occurs to secure mating opportunities, and results in selection of the healthier/more vigorous animal. Aggression may also occur for self-protection or to protect offspring. Aggression between groups of animals may also confer advantage; for example, hostile behavior may force a population of animals into a new territory, where the need to adapt to a new environment may lead to an increase in genetic flexibility.
Between species and groups
The most apparent type of interspecific aggression is that observed in the interaction between a predator and its prey. However, according to many researchers, predation is not aggression. A cat does not hiss or arch its back when pursuing a rat, and the active areas in its hypothalamus resemble those that reflect hunger rather than those that reflect aggression. However, others refer to this behavior as predatory aggression, and point out cases that resemble hostile behavior, such as mouse-killing by rats. In aggressive mimicry a predator has the appearance of a harmless organism or object attractive to the prey; when the prey approaches, the predator attacks.
An animal defending against a predator may engage in either "fight or flight" in response to predator attack or threat of attack, depending on its estimate of the predator's strength relative to its own. Alternative defenses include a range of antipredator adaptations, including alarm signals. An example of an alarm signal is nerol, a chemical which is found in the mandibular glands of Trigona fulviventris individuals. Release of nerol by T. fulviventris individuals in the nest has been shown to decrease the number of individuals leaving the nest by fifty percent, as well as increasing aggressive behaviors like biting. Alarm signals like nerol can also act as attraction signals; in T. fulviventris, individuals that have been captured by a predator may release nerol to attract nestmates, who will proceed to attack or bite the predator.
Aggression between groups is determined partly by willingness to fight, which depends on a number of factors including numerical advantage, distance from home territories, how often the groups encounter each other, competitive abilities, differences in body size, and whose territory is being invaded. Also, an individual is more likely to become aggressive if other aggressive group members are nearby. One particular phenomenon – the formation of coordinated coalitions that raid neighbouring territories to kill conspecifics – has only been documented in two species in the animal kingdom: 'common' chimpanzees and humans.
Within a group
Aggression between conspecifics in a group typically involves access to resources and breeding opportunities. One of its most common functions is to establish a dominance hierarchy. This occurs in many species by aggressive encounters between contending males when they are first together in a common environment. Usually the more aggressive animals become the more dominant. In test situations, most of the conspecific aggression ceases about 24 hours after the group of animals is brought together. Aggression has been defined from this viewpoint as "behavior which is intended to increase the social dominance of the organism relative to the dominance position of other organisms". Losing confrontations may be called social defeat, and winning or losing is associated with a range of practical and psychological consequences.
Conflicts between animals occur in many contexts, such as between potential mating partners, between parents and offspring, between siblings and between competitors for resources. Group-living animals may dispute over the direction of travel or the allocation of time to joint activities. Various factors limit the escalation of aggression, including communicative displays, conventions and routines. In addition, following aggressive incidents, various forms of conflict resolution have been observed in mammalian species, particularly in gregarious primates. These can mitigate or repair possible adverse consequences, especially for the recipient of aggression, who may become vulnerable to attacks by other members of a group. Conciliatory acts vary by species and may involve specific gestures or simply more proximity and interaction between the individuals involved. However, conflicts over food are rarely followed by post-conflict reunions, even though they are the most frequent type of conflict in foraging primates.
Other questions considered in the study of primate aggression, including in humans, are how aggression affects the organization of a group, what costs are incurred by aggression, and why some primates avoid aggressive behavior. For example, bonobo chimpanzee groups are known for low levels of aggression within a partially matriarchal society. Captive animals, including primates, may show abnormal levels of social aggression and self-harm related to aspects of the physical or social environment; this depends on the species and on individual factors such as gender, age and background (e.g. raised wild or captive).
Like many behaviors, aggression can be examined in terms of its ability to help an animal survive and reproduce, or alternatively to risk survival and reproduction. This cost-benefit analysis can be looked at in terms of evolution. There are, however, profound differences in the extent of acceptance of a biological or evolutionary basis for human aggression. According to the Male Warrior hypothesis, intergroup aggression represents an opportunity for men to gain access to mates, territory, resources and increased status. As such, conflicts may have created evolutionary selection pressures for psychological mechanisms in men to initiate intergroup aggression.
Violence and conflict
Aggression can involve violence that may be adaptive under certain circumstances in terms of natural selection. This is most obviously the case in terms of attacking prey to obtain food, or in anti-predatory defense. It may also be the case in competition between members of the same species or subgroup, if the average reward (e.g. status, access to resources, protection of self or kin) outweighs the average costs (e.g. injury, exclusion from the group, death). There are some hypotheses of specific adaptations for violence in humans under certain circumstances, including for homicide, but it is often unclear what behaviors may have been selected for and what may have been a byproduct, as in the case of collective violence.
Although aggressive encounters are ubiquitous in the animal kingdom, with often high stakes, most are resolved through posturing, displays and trials of strength. Game theory is used to understand how such behaviors might spread by natural selection within a population, and potentially become evolutionarily stable strategies. An initial model of the resolution of conflicts is the hawk-dove game; others include the sequential assessment model and the energetic war of attrition. These try to understand not just one-off encounters but protracted stand-offs, and mainly differ in the criteria by which an individual decides to give up rather than risk loss and harm in physical conflict (such as through estimates of resource holding potential).
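The hawk-dove logic can be sketched numerically. In the standard textbook formulation (the symbols V for resource value and C for injury cost are the conventional ones, not notation from this article), when the cost of injury exceeds the value of the resource, a mixed population with a Hawk fraction of V/C is evolutionarily stable: at that frequency Hawks and Doves earn identical expected payoffs, so neither pure strategy can invade.

```python
def payoffs(V, C):
    """2x2 payoff matrix for the row player in the hawk-dove game."""
    return {
        ("hawk", "hawk"): (V - C) / 2,  # escalated fight: win half the time, risk injury
        ("hawk", "dove"): V,            # dove retreats, hawk takes the whole resource
        ("dove", "hawk"): 0,            # retreat: no gain, but no injury
        ("dove", "dove"): V / 2,        # settle by display: win half the time
    }

def expected_payoff(strategy, p_hawk, M):
    """Expected payoff of `strategy` against a population playing Hawk with prob p_hawk."""
    return p_hawk * M[(strategy, "hawk")] + (1 - p_hawk) * M[(strategy, "dove")]

def ess_hawk_fraction(V, C):
    """Mixed ESS: if injury cost exceeds resource value, Hawks stabilize at V/C."""
    return 1.0 if C <= V else V / C

V, C = 4.0, 10.0
M = payoffs(V, C)
p = ess_hawk_fraction(V, C)
# At the ESS, Hawk and Dove earn the same expected payoff, so neither can invade.
assert abs(expected_payoff("hawk", p, M) - expected_payoff("dove", p, M)) < 1e-12
print(p, expected_payoff("hawk", p, M))
```

With V = 4 and C = 10, the stable Hawk fraction is 0.4; raising the cost of injury relative to the prize lowers the equilibrium frequency of escalated fighting, which matches the observation above that most encounters are settled by display rather than combat.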
There are multiple theories that seek to explain findings that males and females of the same species can have differing aggressive behaviors. However, the conditions under which women and men differ in aggressiveness are not well understood. In general, sexual dimorphism can be attributed to greater intraspecific competition in one sex, either between rivals for access to mates and/or to be chosen by mates. This may stem from the other sex being constrained by providing greater parental investment, in terms of factors such as gamete production, gestation, lactation, or upbringing of young. Although there is much variation among species, generally the more physically aggressive sex is the male, particularly in mammals. In species where parental care by both sexes is required, there tends to be less of a difference. When the female can leave the male to care for the offspring, then females may be the larger and more physically aggressive sex. Competitiveness despite parental investment has also been observed in some species. A related factor is the rate at which males and females are able to mate again after producing offspring, and the basic principles of sexual selection are also influenced by ecological factors affecting the ways or extent to which one sex can compete for the other. The role of such factors in human evolution is controversial. The pattern of male and female aggression is argued to be consistent with evolved sexually-selected behavioral differences, while alternative or complementary views emphasize conventional social roles stemming from physical evolved differences. However, there are critiques of using animal behavior to explain human behavior, especially in the application of evolutionary explanations to contemporary human behavior, including differences between the genders. Aggression in women may have evolved to be, on average, less physically dangerous and more covert or indirect.
According to the 2015 International Encyclopedia of the Social and Behavioral Sciences, sex differences in aggression are one of the most robust and oldest findings in psychology. Past meta-analyses in the encyclopedia found that males, regardless of age, engaged in more physical and verbal aggression, while a small effect was found for females engaging in more indirect aggression such as rumor spreading or gossiping. It also found that males tend to engage in more unprovoked aggression, at higher frequency, than females. This analysis accords with the Oxford Handbook of Evolutionary Psychology, which reviewed past analyses finding that men use more verbal and physical aggression, with the difference being greater in the physical type. There are more recent findings showing that differences in male and female aggression appear at about two years of age, though the differences are more consistent in middle childhood and adolescence. Tremblay, Japel and Pérusse (1999) asserted that physically aggressive behaviors such as kicking, biting and hitting are age-typical expressions of innate and spontaneous reactions to biological drives such as anger, hunger, and affiliation. Girls' relational aggression, meaning non-physical or indirect aggression, tends to increase after age two, while physical aggression decreases. There was no significant difference in aggression between males and females before two years of age. A possible explanation is that girls develop language skills more quickly than boys and therefore have better ways of verbalizing their wants and needs. They are more likely to use communication when trying to retrieve a toy, with words such as "Ask nicely" or "Say please."
According to an analysis across nine countries published in the journal Aggressive Behavior, boys reported more use of physical aggression, while no consistent sex differences emerged in relational aggression. It has been found that girls are more likely than boys to use reactive aggression and then retract, whereas boys are more likely to escalate rather than retract their aggression after their first reaction. Studies show girls' aggressive tactics include gossip, ostracism, breaking confidences, and criticism of a victim's clothing, appearance, or personality, whereas boys engage in aggression that involves a direct physical and/or verbal assault. This could be because girls' frontal lobes develop earlier than boys', allowing them greater self-restraint.
One context in which differences between male and female aggression appear insignificant is sports. In sports, the rate of aggression in both contact and non-contact sports is relatively equal. Since the establishment of Title IX, female sports have increased in competitiveness and importance, which could contribute to the evening out of aggression and the "need to win" attitude between the sexes. Among sex differences found in adult sports were that females scored higher on indirect hostility while men scored higher on assault. Another difference is that men have up to 20 times higher levels of testosterone than women.
Some studies suggest that romantic involvement in adolescence decreases aggression in males and females, but it decreases at a higher rate in females. Females will seem more desirable to their mates if they fit in with society; aggressive females do not usually fit in well and can often be viewed as antisocial. Female aggression is not considered the norm in society, and going against the norm can sometimes prevent one from getting a mate. However, studies have shown that an increasing number of women are being arrested on domestic violence charges. In many states, women now account for a quarter to a third of all domestic violence arrests, up from less than 10 percent a decade ago. The new statistics reflect a reality documented in research: women are perpetrators as well as victims of family violence. However, another equally possible explanation is improved diagnostics: it has become more acceptable for men to report female domestic violence to the authorities, while actual female domestic violence has not increased at all. If men have become less ashamed of reporting female violence against them, an increasing number of women will be arrested even though the actual number of violent women remains the same.
Also, males in competitive sports are often advised by their coaches not to be in intimate relationships, based on the premise that they become more docile and less aggressive during an athletic event. The circumstances in which males and females experience aggression also differ. A study showed that social anxiety and stress were positively correlated with aggression in males, meaning that as stress and social anxiety increase, so does aggression. Furthermore, a male with higher social skills has a lower rate of aggressive behavior than a male with lower social skills. In females, higher rates of aggression were correlated only with higher rates of stress. Besides the biological factors that contribute to aggression, there are physical factors as well.
Regarding sexual dimorphism, humans fall into an intermediate group with moderate sex differences in body size but relatively large testes. This is a typical pattern of primates in which several males and females live together in a group, and the male faces an intermediate amount of challenge from other males compared with exclusive polygyny and monogamy, but frequent sperm competition.
Evolutionary psychology and sociobiology have also discussed and produced theories for some specific forms of male aggression such as sociobiological theories of rape and theories regarding the Cinderella effect. Another evolutionary theory explaining gender differences in aggression is the Male Warrior hypothesis, which explains that males have psychologically evolved for intergroup aggression in order to gain access to mates, resources, territory and status.
Many researchers focus on the brain to explain aggression. Numerous circuits within both neocortical and subcortical structures play a central role in controlling aggressive behavior, depending on the species, and the exact role of pathways may vary depending on the type of trigger or intention.
In mammals, the hypothalamus and periaqueductal gray of the midbrain are critical areas, as shown in studies on cats, rats, and monkeys. These brain areas control the expression of both behavioral and autonomic components of aggression in these species, including vocalization. Electrical stimulation of the hypothalamus causes aggressive behavior and the hypothalamus has receptors that help determine aggression levels based on their interactions with serotonin and vasopressin. These midbrain areas have direct connections with both the brainstem nuclei controlling these functions, and with structures such as the amygdala and prefrontal cortex.
Stimulation of the amygdala results in augmented aggressive behavior in hamsters, while lesions of an evolutionarily homologous area in the lizard greatly reduce competitive drive and aggression (Bauman et al. 2006). In rhesus monkeys, neonatal lesions in the amygdala or hippocampus result in reduced expression of social dominance, related to the regulation of aggression and fear. Several experiments in attack-primed Syrian golden hamsters, for example, support the claim that circuitry within the amygdala is involved in the control of aggression. The role of the amygdala is less clear in primates and appears to depend more on situational context, with lesions leading to increases in either social affiliatory or aggressive responses.
The broad area of the cortex known as the prefrontal cortex (PFC) has been implicated in aggression, along with many other functions, including inhibition of emotions. Reduced activity of the prefrontal cortex, in particular its medial and orbitofrontal portions, has been associated with violent/antisocial aggression.
The role of chemicals in the brain, particularly neurotransmitters, in aggression has also been examined. This varies depending on the pathway, the context and other factors such as gender. A deficit in serotonin has been theorized to have a primary role in causing impulsivity and aggression; low levels of serotonin transmission may explain a vulnerability to impulsiveness and potential aggression, and may have an effect through interactions with other neurochemical systems. These include dopamine systems, which are generally associated with attention and motivation toward rewards and operate at various levels. Norepinephrine, also known as noradrenaline, may influence aggression responses both directly and indirectly through the hormonal system, the sympathetic nervous system or the central nervous system (including the brain). It appears to have different effects depending on the type of triggering stimulus, for example social isolation/rank versus shock/chemical agitation, and does not appear to have a linear relationship with aggression. Similarly, GABA, although associated with inhibitory functions at many CNS synapses, sometimes shows a positive correlation with aggression, including when potentiated by alcohol.
The hormonal neuropeptides vasopressin and oxytocin play a key role in complex social behaviours in many mammals such as regulating attachment, social recognition, and aggression. Vasopressin has been implicated in male-typical social behaviors which includes aggression. Oxytocin may have a particular role in regulating female bonds with offspring and mates, including the use of protective aggression. Initial studies in humans suggest some similar effects.
Hormones are chemicals that circulate in the body to affect cells and the nervous system, including the brain. Testosterone is a steroid hormone of the androgen group, most linked to the prenatal and postnatal development of the male gender and physique, which in turn has been linked on average to more physical aggression in many species. Early androgenization has an organizational effect on the developing brains of both males and females, making the neural circuits that control sexual behavior, as well as intermale and interfemale aggression, more sensitive to testosterone. Thus, aggressive behavior tends to increase with testosterone, and there are noticeable sex differences in aggression. Testosterone is present to a lesser extent in females, who may be more sensitive to its effects. Animal studies have also indicated a link between incidents of aggression and the individual level of circulating testosterone. However, results in relation to primates, particularly humans, are less clear cut and are at best only suggestive of a positive association in some contexts. In humans, there is a seasonal variation in aggression associated with changes in testosterone. For example, in some primate species, such as rhesus monkeys and baboons, females are more likely to engage in fights around the time of ovulation as well as right before menstruation. If the results are the same in humans as in rhesus monkeys and baboons, the increase in aggressive behaviors around ovulation is explained by the decline in estrogen levels, which makes normal testosterone levels more effective. Castrated mice and rats exhibit lower levels of aggression, and males castrated as neonates exhibit low levels of aggression even when given testosterone throughout their development.
The challenge hypothesis outlines the dynamic relationship between plasma testosterone levels and aggression in mating contexts in many species. It proposes that testosterone is linked to aggression when it is beneficial for reproduction, such as in mate guarding and preventing the encroachment of intrasexual rivals. The challenge hypothesis predicts that seasonal patterns in testosterone levels in a species are a function of mating system (monogamy versus polygyny), paternal care, and male-male aggression in seasonal breeders. This pattern between testosterone and aggression was first observed in seasonally breeding birds, such as the song sparrow, where testosterone levels rise modestly with the onset of the breeding season to support basic reproductive functions. The hypothesis has been subsequently expanded and modified to predict relationships between testosterone and aggression in other species. For example, chimpanzees, which are continuous breeders, show significantly raised testosterone levels and aggressive male-male interactions when receptive and fertile females are present. Currently, no research has specified a relationship between the modified challenge hypothesis and human behavior, in part because of the human trait of concealed ovulation, although some suggest it may apply.
Effects on the nervous system
Another line of research has focused on the proximate effects of circulating testosterone on the nervous system, as mediated by local metabolism within the brain. Testosterone can be metabolized to 17β-estradiol by the enzyme aromatase, or to 5α-dihydrotestosterone (DHT) by 5α-reductase.
Aromatase is highly expressed in regions involved in the regulation of aggressive behavior, such as the amygdala and hypothalamus. In studies using genetic knock-out techniques in inbred mice, male mice that lacked a functional aromatase enzyme displayed a marked reduction in aggression. Long-term treatment with estradiol partially restored aggressive behavior, suggesting that the neural conversion of circulating testosterone to estradiol and its effect on estrogen receptors influences inter-male aggression. In addition, two different estrogen receptors, ERα and ERβ, have been identified as having the ability to exert different effects on aggression in mice. However, the effect of estradiol appears to vary depending on the strain of mouse: in some strains it reduces aggression during long days (16 h of light), while during short days (8 h of light) it rapidly increases aggression.
Another hypothesis is that testosterone influences brain areas that control behavioral reactions. Studies in animal models indicate that aggression is affected by several interconnected cortical and subcortical structures within the so-called social behavior network. A study involving lesions and electrical-chemical stimulation in rodents and cats revealed that such a neural network consists of the medial amygdala, medial hypothalamus and periaqueductal grey (PAG), and it positively modulates reactive aggression. Moreover, a study done in human subjects showed that prefrontal-amygdala connectivity is modulated by endogenous testosterone during social emotional behavior.
In human studies, testosterone-aggression research has also focused on the role of the orbitofrontal cortex (OFC). This brain area is strongly associated with impulse control and self-regulation systems that integrate emotion, motivation, and cognition to guide context-appropriate behavior. Patients with localized lesions to the OFC engage in heightened reactive aggression. Aggressive behavior may be regulated by testosterone via reduced medial OFC engagement following social provocation. When measuring participants' salivary testosterone, higher levels can predict subsequent aggressive behavioral reactions to unfairness faced during a task. Moreover, brain scanning with fMRI shows reduced activity in the medial OFC during such reactions. Such findings may suggest that a specific brain region, the OFC, is a key factor in understanding reactive aggression.
General associations with behavior
Scientists have long been interested in the relationship between testosterone and aggressive behavior. In most species, males are more aggressive than females, and castration of males usually has a pacifying effect on their aggressive behavior. In humans, males engage in crime, especially violent crime, more than females. Involvement in crime usually rises from the early to mid teens, coinciding with the rise in testosterone levels. Research on the relationship between testosterone and aggression is difficult since the only reliable measurement of brain testosterone is by lumbar puncture, which is not done for research purposes. Studies have therefore often used less reliable measurements from blood or saliva.
The Handbook of Crime Correlates, a review of crime studies, states that most studies support a link between adult criminality and testosterone, although the relationship is modest if examined separately for each sex. However, nearly all studies of juvenile delinquency and testosterone have found no significant relationship. Most studies have also found testosterone to be associated with behaviors or personality traits linked with criminality, such as antisocial behavior and alcoholism. Many studies have also been done on the relationship between more general aggressive behavior/feelings and testosterone; about half have found a relationship and about half no relationship.
Studies of testosterone levels of male athletes before and after a competition revealed that testosterone levels rise shortly before their matches, as if in anticipation of the competition, and are dependent on the outcome of the event: testosterone levels of winners are high relative to those of losers. No specific response of testosterone levels to competition was observed in female athletes, although a mood difference was noted. In addition, some experiments have failed to find a relationship between testosterone levels and aggression in humans.
The possible correlation between testosterone and aggression could explain the "roid rage" that can result from anabolic steroid use, although an effect of abnormally high levels of steroids does not prove an effect at physiological levels.
Dehydroepiandrosterone (DHEA) is the most abundant circulating androgen hormone and can be rapidly metabolized within target tissues into potent androgens and estrogens. Gonadal steroids generally regulate aggression during the breeding season, but non-gonadal steroids may regulate aggression during the non-breeding season. Castration of various species in the non-breeding season has no effect on territorial aggression. In several avian studies, circulating DHEA has been found to be elevated in birds during the non-breeding season. These data support the idea that non-breeding birds combine adrenal and/or gonadal DHEA synthesis with neural DHEA metabolism to maintain territorial behavior when gonadal testosterone secretion is low. Similar results have been found in studies involving different strains of rats, mice, and hamsters. DHEA levels also have been studied in humans and may play a role in human aggression. Circulating DHEAS (its sulfated ester) levels rise during adrenarche (~7 years of age) while plasma testosterone levels are relatively low. This implies that aggression in pre-pubertal children with aggressive conduct disorder might be correlated with plasma DHEAS rather than plasma testosterone, suggesting an important link between DHEAS and human aggressive behavior.
Glucocorticoid hormones have an important role in regulating aggressive behavior. In adult rats, acute injections of corticosterone promote aggressive behavior and acute reduction of corticosterone decreases aggression; however, a chronic reduction of corticosterone levels can produce abnormally aggressive behavior. In addition, glucocorticoids affect development of aggression and establishment of social hierarchies. Adult mice with low baseline levels of corticosterone are more likely to become dominant than are mice with high baseline corticosterone levels.
Glucocorticoids are released by the hypothalamic pituitary adrenal (HPA) axis in response to stress, of which cortisol is the most prominent in humans. Results in adults suggest that reduced levels of cortisol, linked to lower fear or a reduced stress response, can be associated with more aggression. However, it may be that proactive aggression is associated with low cortisol levels while reactive aggression may be accompanied by elevated levels. Differences in assessments of cortisol may also explain a diversity of results, particularly in children.
In many animals, aggression can be linked to pheromones released between conspecifics. In mice, major urinary proteins (Mups) have been demonstrated to promote innate aggressive behavior in males, and this can be mediated by neuromodulatory systems. Mups activate olfactory sensory neurons in the vomeronasal organ (VNO) of mice and rats, a subsystem of the nose known to detect pheromones via specific sensory receptors. Pheromones have also been identified in fruit flies, detected by neurons in the antenna, that send a message to the brain eliciting aggression; it has been noted that aggression pheromones have not been identified in humans.
In general, differences in a continuous phenotype such as aggression are likely to result from the action of a large number of genes each of small effect, which interact with each other and the environment through development and life.
In a non-mammalian example of genes related to aggression, the fruitless gene in fruit flies is a critical determinant of certain sexually dimorphic behaviors, and its artificial alteration can result in a reversal of stereotypically male and female patterns of aggression in fighting. However, in what was thought to be a relatively clear case, inherent complexities have been reported in deciphering the connections between interacting genes in an environmental context and a social phenotype involving multiple behavioral and sensory interactions with another organism.
In mice, candidate genes for differentiating aggression between the sexes are the Sry (sex determining region Y) gene, located on the Y chromosome and the Sts (steroid sulfatase) gene. The Sts gene encodes the steroid sulfatase enzyme, which is pivotal in the regulation of neurosteroid biosynthesis. It is expressed in both sexes, is correlated with levels of aggression among male mice, and increases dramatically in females after parturition and during lactation, corresponding to the onset of maternal aggression.
In humans, there is good evidence that the basic human neural architecture underpinning the potential for flexible aggressive responses is influenced by genes as well as environment. In terms of variation between individual people, more than 100 twin and adoption studies have been conducted in recent decades examining the genetic basis of aggressive behavior and related constructs such as conduct disorders. According to a meta-analysis published in 2002, approximately 40% of variation between individuals is explained by differences in genes, and 60% by differences in environment (mainly non-shared environmental influences rather than those that would be shared by being raised together). However, such studies have depended on self-report or observation by others, including parents, which complicates interpretation of the results. The few laboratory-based analyses have not found significant amounts of individual variation in aggression explicable by genetic variation in the human population. Furthermore, linkage and association studies that seek to identify specific genes, for example those that influence neurotransmitter or hormone levels, have generally resulted in contradictory findings characterized by failed attempts at replication. One possible factor is an allele (variant) of the MAO-A gene which, in interaction with certain life events such as childhood maltreatment (which may show a main effect on its own), can influence development of brain regions such as the amygdala, making some types of behavioral response more likely. The generally unclear picture has been compared to equally difficult findings obtained in regard to other complex behavioral phenotypes.
Society and culture
Humans share aspects of aggression with non-human animals, but aggression in humans also has specific aspects and complexity related to factors such as genetics, early development, social learning and flexibility, culture and morals. Konrad Lorenz stated in his 1963 classic, On Aggression, that human behavior is shaped by four main, survival-seeking animal drives. Taken together, these drives—hunger, fear, reproduction, and aggression—achieve natural selection. E. O. Wilson elaborated in On Human Nature that aggression is, typically, a means of gaining control over resources, and is thus aggravated during times when high population densities generate resource shortages. According to Richard Leakey and his colleagues, aggression in humans has also increased as people became more interested in ownership and in defending their property. However, UNESCO adopted the Seville Statement on Violence in 1989, which refuted claims by evolutionary scientists that genetics by itself is the sole cause of aggression.
Many scholars assert that culture is one factor that plays a role in aggression. Tribal or band societies existing before or outside of modern states have sometimes been depicted as peaceful 'noble savages'. The ǃKung people were described as 'The Harmless People' in a popular work by Elizabeth Marshall Thomas in 1958, while Lawrence Keeley's 1996 War Before Civilization suggested that regular warfare without modern technology was conducted by most groups throughout human history, including most Native American tribes. Studies of hunter-gatherers show a range of different societies. In general, aggression, conflict and violence sometimes occur, but direct confrontation is generally avoided and conflict is socially managed by a variety of verbal and non-verbal methods. Different rates of aggression or violence, currently or in the past, within or between groups, have been linked to the structuring of societies and environmental conditions influencing factors such as resource or property acquisition, land and subsistence techniques, and population change.
The American psychologist Peter Gray hypothesizes that band hunter-gatherer societies are able to maintain relatively peaceful, egalitarian relations between members by actively resisting the tendency of any one person to dominate others, which he calls "reverse dominance", encouraging a spirit of playfulness in daily activities from childhood through adulthood, and a system of what he describes as "trustful" child-rearing, which avoids violent methods such as physical punishment. According to Gray, "Social play—that is, play involving more than one player—is necessarily egalitarian. It always requires a suspension of aggression and dominance along with a heightened sensitivity to the needs and desires of the other players".
Joan Durrant at the University of Manitoba writes that a number of studies have found physical punishment to be associated with "higher levels of aggression against parents, siblings, peers and spouses", even when controlling for other factors. According to Elizabeth Gershoff at the University of Michigan, the more that children are physically punished, the more likely they are as adults to act violently towards family members, including intimate partners. In countries where physical punishment of children is perceived as being more culturally accepted, it is less strongly associated with increased aggression; however, physical punishment has been found to predict some increase in child aggression regardless of culture. While these associations do not prove causality, a number of longitudinal studies suggest that the experience of physical punishment has a direct causal effect on later aggressive behaviors. In examining several longitudinal studies that investigated the path from disciplinary spanking to aggression in children from preschool age through adolescence, Gershoff concluded: "Spanking consistently predicted increases in children's aggression over time, regardless of how aggressive children were when the spanking occurred". Similar results were found by Catherine Taylor at Tulane University in 2010. Family violence researcher Murray A. Straus argues, "There are many reasons this evidence has been ignored. One of the most important is the belief that spanking is more effective than nonviolent discipline and is, therefore, sometimes necessary, despite the risk of harmful side effects".
Analyzing aggression culturally or politically is complicated by the fact that the label 'aggressive' can itself be used as a way of asserting a judgement from a particular point of view. Whether a coercive or violent method of social control is perceived as aggression – or as legitimate versus illegitimate aggression – depends on the position of the relevant parties in relation to the social order of their culture. This in turn can relate to factors such as: norms for coordinating actions and dividing resources; what is considered self-defense or provocation; attitudes towards 'outsiders', attitudes towards specific groups such as women, the disabled or the lower status; the availability of alternative conflict resolution strategies; trade interdependence and collective security pacts; fears and impulses; and ultimate goals regarding material and social outcomes.
Cross-cultural research has found differences in attitudes towards aggression in different cultures. In one questionnaire study of university students, in addition to men overall justifying some types of aggression more than women, United States respondents justified defensive physical aggression more readily than Japanese or Spanish respondents, whereas Japanese students preferred direct verbal aggression (but not indirect) more than their American and Spanish counterparts. Within American culture, southern men were shown in a study on university students to be more affected and to respond more aggressively than northerners when randomly insulted after being bumped into, which was theoretically related to a traditional culture of honor in the Southern United States. A similar sociological concept that may be applied in different cultures is 'face'. Other cultural themes sometimes applied to the study of aggression include individualistic versus collectivist styles, which may relate, for example, to whether disputes are responded to with open competition or by accommodating and avoiding conflicts. In a study including 62 countries, school principals reported aggressive student behavior more often the more individualist, and hence less collectivist, their country's culture was. Other comparisons made in relation to aggression or war include democratic versus authoritarian political systems and egalitarian versus stratified societies. The economic system known as capitalism has been viewed by some as reliant on the leveraging of human competitiveness and aggression in pursuit of resources and trade, which has been considered in both positive and negative terms. Attitudes about the social acceptability of particular acts or targets of aggression are also important factors. This can be highly controversial, as in disputes between religions or nation states, for example in regard to the Arab–Israeli conflict.
Some scholars believe that behaviors like aggression may be partially learned by watching and imitating the behavior of others. Some scholars have concluded that media may have some small effects on aggression. There is also research questioning this view. For instance, a recent long-term outcome study of youth found no long-term relationship between playing violent video games and youth violence or bullying. One study suggested there is a smaller effect of violent video games on aggression than has been found with television violence on aggression. This effect is positively associated with type of game violence and negatively associated to time spent playing the games. The author concluded that insufficient evidence exists to link video game violence with aggression. However, another study suggested links to aggressive behavior. One study suggested that adults (i.e. parents) suffering from dissociative symptoms related to post-traumatic stress disorder may be more likely to expose their children to violent programs and video games; links between these issues were also related to poverty.
Fear(survival)-induced pre-emptive aggression
According to philosopher and neuroscientist Nayef Al-Rodhan, "fear(survival)-induced pre-emptive aggression" is a human reaction to injustices that are perceived to threaten survival. It is often the root of the unthinkable brutality and injustice perpetrated by human beings. It may occur at any time, even in situations that appear to be calm and under control. Where there is injustice that is perceived as posing a threat to survival, "fear(survival)-induced pre-emptive aggression" will result in individuals taking whatever action necessary to be free from that threat.
Nayef Al-Rodhan argues that humans' strong tendency towards "fear(survival)-induced pre-emptive aggression" means that situations of anarchy or near anarchy should be prevented at all costs. This is because anarchy provokes fear, which in turn results in aggression, brutality, and injustice. Even in non-anarchic situations, survival instincts and fear can be very powerful forces, and they may be incited instantaneously. "Fear(survival)-induced pre-emptive aggression" is one of the key factors that may push naturally amoral humans to behave in immoral ways. Knowing this, Al-Rodhan maintains that we must prepare for the circumstances that may arise from humans' aggressive behavior. According to Al-Rodhan, the risk of this aggression and its ensuing brutality should be minimized through confidence-building measures and policies that promote inclusiveness and prevent anarchy.
The frequency of physical aggression in humans peaks at around 2–3 years of age. It then declines gradually on average. These observations suggest that physical aggression is not only a learned behavior but that development provides opportunities for the learning and biological development of self-regulation. However, a small subset of children fail to acquire all the necessary self-regulatory abilities and tend to show atypical levels of physical aggression across development. These children may be at risk for later violent behavior or, conversely, for lacking the aggression that may be considered necessary within society. Some findings suggest, however, that early aggression does not necessarily lead to aggression later on, although the course through early childhood is an important predictor of outcomes in middle childhood. In addition, physical aggression that continues is likely occurring in the context of family adversity, including socioeconomic factors. Moreover, 'opposition' and 'status violations' in childhood appear to be more strongly linked to social problems in adulthood than simply aggressive antisocial behavior. Social learning through interactions in early childhood has been seen as a building block for levels of aggression which play a crucial role in the development of peer relationships in middle childhood. Overall, an interplay of biological, social and environmental factors can be considered.
- Typical expectations
- Young children preparing to enter kindergarten need to develop the socially important skill of being assertive. Examples of assertiveness include asking others for information, initiating conversation, or being able to respond to peer pressure.
- In contrast, some young children use aggressive behavior, such as hitting or biting, as a form of communication.
- Aggressive behavior can impede learning as a skill deficit, while assertive behavior can facilitate learning. However, with young children, aggressive behavior is developmentally appropriate and can lead to opportunities of building conflict resolution and communication skills.
- By school age, children should learn more socially appropriate forms of communicating such as expressing themselves through verbal or written language; if they have not, this behavior may signify a disability or developmental delay.
- Aggression triggers
- Physical fear of others
- Family difficulties
- Learning, neurological, or conduct/behavior disorders
- Psychological trauma
The Bobo doll experiment was conducted by Albert Bandura in 1961. In this work, Bandura found that children exposed to an aggressive adult model acted more aggressively than those who were exposed to a nonaggressive adult model. This experiment suggests that anyone who comes in contact with and interacts with children can have an impact on the way they react and handle situations.
- Summary points from recommendations by national associations
- American Academy of Pediatrics (2011): "The best way to prevent aggressive behavior is to give your child a stable, secure home life with firm, loving discipline and full-time supervision during the toddler and preschool years. Everyone who cares for your child should be a good role model and agree on the rules he's expected to observe as well as the response to use if he disobeys."
- National Association of School Psychologists (2008): "Proactive aggression is typically reasoned, unemotional, and focused on acquiring some goal. For example, a bully wants peer approval and victim submission, and gang members want status and control. In contrast, reactive aggression is frequently highly emotional and is often the result of biased or deficient cognitive processing on the part of the student."
Gender is a factor that plays a role in both human and animal aggression. Males are historically believed to be generally more physically aggressive than females from an early age, and men commit the vast majority of murders (Buss 2005). This is one of the most robust and reliable behavioral sex differences, and it has been found across many different age groups and cultures. However, some empirical studies have found the discrepancy in male and female aggression to be more pronounced in childhood and the gender difference in adults to be modest when studied in an experimental context. Still, there is evidence that males are quicker to aggression (Frey et al. 2003) and more likely than females to express their aggression physically. When considering indirect forms of non-violent aggression, such as relational aggression and social rejection, some scientists argue that females can be quite aggressive, although female aggression is rarely expressed physically. An exception is intimate partner violence that occurs among couples who are engaged, married, or in some other form of intimate relationship. In such cases, some research suggests that women are more physically aggressive than men, although differences are small and men are less likely to be injured than women are.
Although females are less likely to initiate physical violence, they can express aggression by using a variety of non-physical means. Exactly which method women use to express aggression is something that varies from culture to culture. On Bellona Island, a culture based on male dominance and physical violence, women tend to get into conflicts with other women more frequently than with men. When in conflict with males, instead of using physical means, they make up songs mocking the man, which spread across the island and humiliate him. If a woman wanted to kill a man, she would either convince her male relatives to kill him or hire an assassin. Although these two methods involve physical violence, both are forms of indirect aggression, since the aggressor herself avoids getting directly involved or putting herself in immediate physical danger.
Links have been found between a propensity for violence and alcohol use: people prone to violence who drink alcohol are more likely to carry out violent acts. Alcohol impairs judgment, making people much less cautious than they usually are (MacDonald et al. 1996). It also disrupts the way information is processed (Bushman 1993, 1997; Bushman & Cooper 1990).
Pain and discomfort also increase aggression. Even the simple act of placing one's hands in hot water can cause an aggressive response. Hot temperatures have been implicated as a factor in a number of studies. One study completed in the midst of the civil rights movement found that riots were more likely on hotter days than cooler ones (Carlsmith & Anderson 1979). Students were found to be more aggressive and irritable after taking a test in a hot classroom (Anderson et al. 1996, Rule, et al. 1987). Drivers in cars without air conditioning were also found to be more likely to honk their horns (Kenrick & MacFarlane 1986), which is used as a measure of aggression and has shown links to other factors such as generic symbols of aggression or the visibility of other drivers.
Frustration is another major cause of aggression. The frustration–aggression theory states that aggression increases if a person feels that he or she is being blocked from achieving a goal (Aronson et al. 2005). One study found that closeness to the goal makes a difference: examining people waiting in line, it concluded that the 2nd person in line was more aggressive than the 12th when someone cut in (Harris 1974). Unexpected frustration may be another factor. In a separate study demonstrating how unexpected frustration increases aggression, Kulik & Brown (1979) selected a group of students as volunteers to make calls for charity donations. One group was told that the people they would call would be generous and the collection would be very successful; the other group was given no expectations. The group that expected success was more upset when no one pledged than the group that did not expect success (in fact, both groups had very little success). This research suggests that when an expectation – here, a successful collection – fails to materialize, the resulting unexpected frustration increases aggression.
There is some evidence to suggest that the presence of violent objects such as a gun can trigger aggression. In a study done by Leonard Berkowitz and Anthony Le Page (1967), college students were made angry and then left in the presence of a gun or badminton racket. They were then led to believe they were delivering electric shocks to another student, as in the Milgram experiment. Those who had been in the presence of the gun administered more shocks. It is possible that a violence-related stimulus increases the likelihood of aggressive cognitions by activating the semantic network.
A new proposal links military experience to anger and aggression, developing aggressive reactions and investigating these effects on those possessing the traits of a serial killer. Castle and Hensley state, "The military provides the social context where servicemen learn aggression, violence, and murder." Post-traumatic stress disorder (PTSD) is also a serious issue in the military and is believed sometimes to lead to aggression in soldiers who are suffering from what they witnessed in battle. They come back to the civilian world and may still be haunted by flashbacks and nightmares, causing severe stress. In addition, it has been claimed that in the rare minority inclined toward serial killing, violent impulses may be reinforced and refined in war, possibly creating more effective murderers.
As a positive adaptation theory
Some recent scholarship has questioned traditional psychological conceptualizations of aggression as universally negative. Most traditional psychological definitions of aggression focus on the harm to the recipient of the aggression, implying this is the intent of the aggressor; however, this may not always be the case. From this alternate view, although the recipient may or may not be harmed, the perceived intent is to increase the status of the aggressor, not necessarily to harm the recipient. Such scholars contend that traditional definitions of aggression have no validity.
From this view, rather than concepts such as assertiveness, aggression, violence and criminal violence existing as distinct constructs, they exist instead along a continuum with moderate levels of aggression being most adaptive. Such scholars do not consider this a trivial difference, noting that many traditional researchers' aggression measurements may measure outcomes lower down in the continuum, at levels which are adaptive, yet they generalize their findings to non-adaptive levels of aggression, thus losing precision.
- Aggressive narcissism
- Conflict (disambiguation)
- Frustration-Aggression Hypothesis
- Genetics of aggression
- Non-aggression pact
- Parental abuse by children
- Passive aggressive behavior
- Relational aggressive behavior
- Resource holding potential
- School bullying
- School violence
- Social defeat
- Buss, A. H. (1961). The psychology of aggression. Hoboken, NJ: John Wiley.
- Anderson, C. A.; Bushman, B. J. (2002). "Human aggression". Annual Review of Psychology. 53 (1): 27–51. PMID 11752478. doi:10.1146/annurev.psych.53.100901.135231.
- Akert, R.M., Aronson, E., & Wilson, T.D. (2010). Social Psychology (7th ed.). Upper Saddle River, NJ: Prentice Hall.
- Berkowitz, L. (1993). Aggression: Its causes, consequences, and control. New York, NY: McGraw-Hill.
- McElliskem, Joseph E. (2004). "Affective and Predatory Violence: a Bimodal Classification System of Human Aggression and Violence" (PDF). Aggression & Violent Behavior. 10 (1): 1–30. doi:10.1016/j.avb.2003.06.002.
- Bushman, B.J.; Anderson, C.A. (2001). "Is it time to pull the plug on the hostile versus instrumental aggression dichotomy?" (PDF). Psychological Review. 108 (1): 273–279. PMID 11212630. doi:10.1037/0033-295X.108.1.273.
- Ellie L. Young, David A. Nelson, America B. Hottle, Brittney Warburton, and Bryan K. Young (2010) Relational Aggression Among Students Principal Leadership, October, copyright the National Association of Secondary School Principals
- Ramírez, JM; Andreu, JM (2006). "Aggression, and some related psychological constructs (anger, hostility, and impulsivity); some comments from a research project" (PDF). Neuroscience and biobehavioral reviews. 30 (3): 276–91. PMID 16081158. doi:10.1016/j.neubiorev.2005.04.015.
- Veenema, AH; Neumann, ID (2007). "Neurobiological mechanisms of aggression and stress coping: a comparative study in mouse and rat selection lines". Brain, behavior and evolution. 70 (4): 274–85. PMID 17914259. doi:10.1159/000105491.
- Simons, Marlise (May 2010). "International Court May Define Aggression as Crime". The New York Times.
- Nathaniel Snow, Violence and Aggression in Sports: An In-Depth Look (Parts 1–3), Bleacher Report, March 23, 2010
- Merriam-Webster: Aggression Retrieved 10 January 2012
- Online Etymology Dictionary: Aggression Retrieved 10 January 2012
- Stearns, D. C. (2003). Anger and aggression. Encyclopedia of Children and Childhood: In History and Society. Paula S. Fass (Ed.). Macmillan Reference Books
- Van Staaden, M.J, Searcy, W.A. & Hanlon, R.T. 'Signaling Aggression' in Aggression Academic Press, Stephen F. Goodwin, 2011
- Maestripieri, D. (1992). "Functional Aspects of Maternal Aggression in Mammals". Canadian Journal of Zoology. 70 (6): 1069–1077. doi:10.1139/z92-150.
- Psychology- The Science Of Behaviour, pg 420, Neil R Clarkson (4th Edition)
- Gleitman, Henry, Alan J. Fridlund, and Daniel Reisberg. Psychology. 6th ed. New York: W W Norton and Company, 2004. 431–432.
- Gendreau, PL & Archer, J. 'Subtypes of Aggression in Humans and Animals', in Developmental Origins of Aggression, 2005, The Guilford Press.
- Johnson, L. K.; Wiemer, D. F. (1982-09-01). "Nerol: An alarm substance of the stingless bee,Trigona fulviventris (Hymenoptera: Apidae)". Journal of Chemical Ecology. 8 (9): 1167–1181. ISSN 0098-0331. doi:10.1007/BF00990750.
- Tanner, CJ (2006). "Numerical assessment affects aggression and competitive ability: a team-fighting strategy for the ant Formica xerophila". Proceedings of the Royal Society B: Biological Sciences. 273 (1602): 2737–42. PMID 17015327. doi:10.1098/rspb.2006.3626.
- Mitani, John C.; Watts, David P.; Amsler, Sylvia J. (June 2010). "Lethal intergroup aggression leads to territorial expansion in wild chimpanzees". Current Biology. 20 (12): R507–R508. PMID 20620900. doi:10.1016/j.cub.2010.04.021.
- Adamson, D.J.; Edwards, D.H.; Issa, F.A. (1999). "Dominance Hierarchy Formation in Juvenile Crayfish Procambarus Clarkii". Journal of Experimental Biology. 202 (24): 3497–3506. PMID 10574728.
- Heitor, F.; Do Mar, Oom; Vincente, L. (2006). "Social Relationships in a Herd of Sorraia Horses Part I. Correlates of Social Dominance and Contexts of Aggression". Behavioural Processes. 73 (2): 170–177. PMID 16815645. doi:10.1016/j.beproc.2006.05.004.
- Cant, MA; Llop, J; Field, J (2006). "Individual variation in social aggression and the probability of inheritance: theory and a field test". American Naturalist. 167 (6): 837–852. doi:10.1086/503445.
- Bragin, A.V.; Osadchuk, A.V.; Osadchuk, L.V. (2006). "The Experimental Model of Establishment and Maintenance of Social Hierarchy in Laboratory Mice". Zhurnal Vysshei Nervnoi Delatelnosti Imeni I P Pavlova. 56 (3): 412–419. PMID 16869278.
- Ferguson, C.J.; Beaver, K.M. (2009). "Natural Born Killers: The Genetic Origins of Extreme Violence" (PDF). Aggression and Violent Behavior. 14 (5): 286–294. doi:10.1016/j.avb.2009.03.005.
- Hsu, Y; Earley, R.L.; Wolf, L.L. (February 2006). "Modulation of aggressive behaviour by fighting experience: mechanisms and contest outcomes". Biological reviews of the Cambridge Philosophical Society. 81 (1): 33–74. PMID 16460581. doi:10.1017/S146479310500686X.
- Aureli F., Cords M, Van Schaik CP. (2002). "Conflict resolution following aggression in gregarious animals: a predictive framework" (PDF). Animal Behaviour. 64 (3): 325–343. doi:10.1006/anbe.2002.3071.
- Silverberg, James; J. Patrick Gray (1992) Aggression and Peacefulness in Humans and Other Primates ISBN 0-19-507119-0
- Honess, PE; Marin, CM (2006). "Enrichment and aggression in primates". Neuroscience and biobehavioral reviews. 30 (3): 413–36. PMID 16055188. doi:10.1016/j.neubiorev.2005.05.002.
- Somit, A (1990). "Humans, chimps, and bonobos: The biological bases of aggression, war, and peacemaking". Journal of Conflict Resolution. 34 (3): 553–582. JSTOR 174228. doi:10.1177/0022002790034003008.
- McDonald, Melissa M.; Navarrete, Carlos David; Vugt, Mark Van (2012-03-05). "Evolution and the psychology of intergroup conflict: the male warrior hypothesis". Philosophical Transactions of the Royal Society of London B: Biological Sciences. 367 (1589): 670–679. ISSN 0962-8436. PMID 22271783. doi:10.1098/rstb.2011.0301.
- Vugt, Mark van (2006). "Gender Differences in Cooperation and Competition: The Male-Warrior Hypothesis" (PDF). Psychological Science.
- Buss, D.M. (2005). The murderer next door: Why the mind Is designed to kill. New York: Penguin Press.
- McCall, Grant S.; Shields, Nancy. "Examining the evidence from small-scale societies and early prehistory and implications for modern theories of aggression and violence". Aggression and Violent Behavior. 13 (1): 1–9. doi:10.1016/j.avb.2007.04.001.
- Buss, D. M., & Duntley, J. D. The evolution of aggression. (2006). In M. Schaller, J. A. Simpson, & D. T. Kenrick (Eds.), Evolution and Social Psychology (pp. 263–286). New York: Psychology Press.
- Durrant, Russil. "Collective violence: An evolutionary perspective". Aggression and Violent Behavior. 16 (5): 428–436. doi:10.1016/j.avb.2011.04.014.
- Briffa, M. (2010) Territoriality and Aggression. Nature Education Knowledge 1(8):19
- Eagly, Alice; Valerie Steffen (1986). "Gender and Aggressive Behavior: A Meta-Analytic Review of the Social Psychological Literature" (PDF). Psychological Bulletin. 100 (3): 309–30. PMID 3797558. doi:10.1037/0033-2909.100.3.309. Retrieved 6 December 2012.
- Clutton-Brock, T. H.; Hodge, S. J.; Spong, G.; Russell, A. F.; Jordan, N. R.; Bennett, N. C.; Sharpe, L. L.; Manser, M. B. (21 December 2006). "Intrasexual competition and sexual selection in cooperative mammals". Nature. 444 (7122): 1065–1068. PMID 17183322. doi:10.1038/nature05386.
- Archer, John (August 2009). "Does sexual selection explain human sex differences in aggression? Plus Open Peer Commentary" (PDF). Behavioral and Brain Sciences. 32 (3–4): 249–66; discussion 266–311. PMID 19691899. doi:10.1017/S0140525X09990951.
- Campbell, Anne (1999). "Staying Alive: Evolution, culture, and women's intrasexual aggression". Behavioral and Brain Sciences. 22 (2): 203–252. PMID 11301523. doi:10.1017/s0140525x99001818.
- The Handbook of Evolutionary Psychology, edited by David M. Buss, John Wiley & Sons, Inc., 2005. Chapter 21 by Anne Campbell.
- Zuk, M. "Sexual Selections: What We Can and Can't Learn about Sex from Animals." University of California Press, 2002
- "Gender Differences in Personality and Social Behavior". ResearchGate. doi:10.1016/B978-0-08-097086-8.25100-3. Retrieved 2015-12-05.
- "Sex differences in aggression - Oxford Handbooks". doi:10.1093/oxfordhb/9780198568308.001.0001/oxfordhb-9780198568308-e-025.
- Lussier, Patrick; Raymond Corrado (20 September 2012). "Gender Differences in Physical Aggression and Associated Developmental Correlates in a Sample of Canadian Preschoolers †". Behavioral Sciences and the Law. 30 (5): 643–671. doi:10.1002/bsl.2035. Retrieved 6 December 2012.
- Landsford, J.E (2012). "Boys' and girls' relational and physical aggression in nine countries". Aggressive Behavior. 38 (4): 298–308. doi:10.1002/ab.21433.
- Hay, D.F (2011). "The emergence of gender differences in physical aggression in the context of conflict between younger peers". British Journal of developmental psychology. 29 (2): 158–75. doi:10.1111/j.2044-835x.2011.02028.x.
- Hess, Nicole; Edward Hagen (12 November 2005). "Sex differences in indirect aggression Psychological evidence from young adults" (PDF). Evolution and Human Behavior. 27 (3): 231–245. doi:10.1016/j.evolhumbehav.2005.11.001. Retrieved 6 December 2012.
- Keeler, L.A (2007). "The differences in sport aggression, life aggression, and life assertion among adult male and female collision, contact, and non-contact sport athletes". Journal of Sport Behavior. 30 (1): 57–76.
- Xie, H (2011). "Developmental trajectories of aggression from late childhood through adolescence: similarities and differences across gender". Aggressive Behavior. 37 (5): 387–404. doi:10.1002/ab.20404.
- Young, Cathy (26 November 1999). "Feminists Play the Victim Game". New York Times. Retrieved 6 December 2012.
- Al-Ali, M.M (2011). "Social anxiety in relation to social skills, aggression, and stress among male and female commercial institute students". Education. 132 (2): 351–61.
- The Oxford Handbook of Evolutionary Psychology, Edited by Robin Dunbar and Louise Barret, Oxford University Press, 2007, Chapter 30 Ecological and socio-cultural impacts on mating and marriage systems by Bobbi S. Low
- Hermans, J.; Kruk, M.R.; Lohman, A.H.; Meelis, W.; Mos, J.; Mostert, P.G.; Van Der, Poel (1983). "Discriminant Analysis of the Localization of Aggression-Inducing Electrode Placements in the Hypothalamus of Male Rats". Brain Research. 260 (1): 61–79. PMID 6681724. doi:10.1016/0006-8993(83)90764-3.
- Delville, Yvon; Ferris, Craig F.; Fuller, Ray W.; Koppel, Gary; Melloni, R.H. Jr.; Perry, Kenneth W. (1997). "Vasopressin/Serotonin Interactions in the Anterior Hypothalamus Control Aggressive Behavior in Golden Hamsters". The Journal of Neuroscience. 17 (11): 4331–4340. PMID 9151749.
- Decoster, M.; Herbert, M.; Meyerhoff, J.L.; Potegal, M. (1996). "Brief, High-Frequency Stimulation of the Corticomedial Amygdala Induces a Delayed and Prolonged Increase of Aggressiveness in Male Syrian Golden Hamsters". Behavioral Neuroscience. 110 (2): 401–412. PMID 8731066. doi:10.1037/0735-7044.110.2.401.
- Ferris, C.F.; Herbert, M.; Meyerhoff, J.; Potegal, M.; Skaredoff, L. (1996). "Attack Priming in Female Syrian Golden Hamsters is Associated with a C-Fos-Coupled Process Within the Corticomedial Amygdala". Neuroscience. 75 (3): 869–880. PMID 8951880. doi:10.1016/0306-4522(96)00236-9.
- Crews, D; Greenberg, N; Scott, M (1984). "Role of the Amygdala in the Reproductive and Aggressive Behavior of the Lizard, Anolis Carolinensis". Physiology & Behavior. 32 (1): 147–151. PMID 6538977. doi:10.1016/0031-9384(84)90088-X.
- Amaral, D.G.; Bauman, M.D.; Lavenex, P.; Mason, W.A.; Toscano, J.E. (2006). "The Expression of Social Dominance Following Neonatal Lesions of the Amygdala or Hippocampus in Rhesus Monkeys (Macaca Mulatta)". Behavioral Neuroscience. 120 (4): 749–760. PMID 16893283. doi:10.1037/0735-7044.120.4.749.
- Paus, T. 'Mapping Brain Development' in Developmental Origins of Aggression, 2005, The Guilford Press.
- Caramaschi, D; De Boer, SF; De Vries, H; Koolhaas, JM (2008). "Development of violence in mice through repeated victory along with changes in prefrontal cortex neurochemistry". Behavioural Brain Research. 189 (2): 263–72. PMID 18281105. doi:10.1016/j.bbr.2008.01.003.
- Pihl, RO & Benkelfat, C. 'Neuromodulators in the Development and Expression of Inhibition and Aggression' in Developmental Origins of Aggression, 2005, The Guilford Press.
- Heinrichs, M; Domes, G (2008). "Neuropeptides and social behaviour: effects of oxytocin and vasopressin in humans". Progress in brain research. Progress in Brain Research. 170: 337–50. ISBN 978-0-444-53201-5. PMID 18655894. doi:10.1016/S0079-6123(08)00428-7.
- Campbell, A (January 2008). "Attachment, aggression and affiliation: the role of oxytocin in female social behavior". Biological Psychology. 77 (1): 1–10. PMID 17931766. doi:10.1016/j.biopsycho.2007.09.001.
- Carlson, N. 'Hormonal Control of Aggressive Behavior' Chapter 11 in [Physiology of Behavior],2013, Pearson Education Inc.
- Van Goozen, S. 'Hormones and the Developmental Origins of Aggression' Chapter 14 in Developmental Origins of Aggression, 2005, The Guilford Press.
- "Three Important Physical Ovulation Symptoms" from BabyMed.com,http://www.babymed.com/ovulation/3-important-physical-ovulation-symptoms,2001-2015
- Wingfield, John C., Ball, Gregory F., Dufty Jr, Alfred M., Hegner, Robert E., Ramenofsky, Marilyn (1987). "Testosterone and Aggression in Birds". American Scientist. 5 (6): 602–608.
- Muller, Martin N; Wrangham, Richard W. "Dominance, aggression and testosterone in wild chimpanzees: a test of the 'challenge hypothesis'". Animal Behaviour. 67 (1): 113–123. doi:10.1016/j.anbehav.2003.03.013.
- Archer, J. (2006). "Testosterone and human aggression: An evaluation of the challenge hypothesis". Neuroscience & Biobehavioral Reviews. 30 (3): 319–201. doi:10.1016/j.neubiorev.2004.12.007.
- Soma, KK; Scotti, MA; Newman, AE; Charlier, TD; Demas, GE (2008). "Novel mechanisms for neuroendocrine regulation of aggression". Frontiers in neuroendocrinology. 29 (4): 476–89. PMID 18280561. doi:10.1016/j.yfrne.2007.12.003.
- Siegel, A.; Bhatt, S.; Bhatt, R.; Zalcman, S. S. (2007). "The Neurobiological Bases for Development of Pharmacological Treatments of Aggressive Disorders". Current Neuropharmacology. 5 (2): 135–147. PMID 18615178. doi:10.2174/157015907780866929.
- Volman, I.; Toni, I.; Verhagen, L.; Roelofs, K. (2011). "Endogenous testosterone modulates prefrontal-amygdala connectivity during social emotional behavior" (PDF). Cerebral Cortex Advance Access. 10: 1–9.
- Mehta, P. H., Beer, J. (2009). "Neural mechanisms of the testosterone-aggression relation: the role of orbitofrontal cortex. Journal of Cognitive Neuroscience". J Cogn Neurosci. 22 (10): 2357–2368. PMID 19925198. doi:10.1162/jocn.2009.21389.
- Siever L. J., LJ (2008). "Neurobiology of aggression and violence". Am J Psychiatry. 165 (4): 429–442. PMID 18346997. doi:10.1176/appi.ajp.2008.07111774.
- Handbook of Crime Correlates; Lee Ellis, Kevin M. Beaver, John Wright; 2009; Academic Press
- Mazur, A; Booth, A (1998). "Testosterone and dominance in men". The Behavioral and brain sciences. 21 (3): 353–63; discussion 363–97. PMID 10097017. doi:10.1017/s0140525x98001228.
- Albert, D.J.; Walsh, M.L.; Jonik, R.H. (1993). "Aggression in Humans: What is Its Biological Foundation?". Neuroscience and Biobehavioral Reviews. 17 (4): 405–425. PMID 8309650. doi:10.1016/S0149-7634(05)80117-4.
- Coccaro, EF; Beresford, B; Minar, P; Kaskow, J; Geracioti, T (2007). "CSF testosterone: relationship to aggression, impulsivity, and venturesomeness in adult males with personality disorder". Journal of Psychiatric Research. 41 (6): 488–92. PMID 16765987. doi:10.1016/j.jpsychires.2006.04.009.
- Chandler, D.W.; Constantino, J.N.; Earls, F.J.; Grosz, D.; Nandi, R.; Saenger, P. (1993). "Testosterone and Aggression in Children". Journal of the American Academy of Child and Adolescent Psychology. 32 (6): 1217–1222. PMID 8282667. doi:10.1097/00004583-199311000-00015.
- Pibiri, F; Nelson, M; Carboni, G; Pinna, G (2006). "Neurosteroids regulate mouse aggression induced by anabolic androgenic steroids". NeuroReport. 17 (14): 1537–41. PMID 16957604. doi:10.1097/01.wnr.0000234752.03808.b2.
- Choi, P.Y.L.; Cowan, D.; Parrott, A.C. (2004). "High-Dose Anabolic Steroids in Strength Athletes: Effects Upon Hostility and Aggression". Human Psychopharmacology: Clinical and Experimental. 5 (4): 3497–356. doi:10.1002/hup.470050407.
- "Aggression protein found in mice". BBC News. 5 December 2007. Retrieved 26 September 2009.
- Chamero P; Marton TF; Logan DW; et al. (December 2007). "Identification of protein pheromones that promote aggressive behaviour". Nature. 450 (7171): 899–902. PMID 18064011. doi:10.1038/nature05997.
- Smith, RS; Hu, R; DeSouza, A; Eberly, CL; Krahe, K; Chan, W; Araneda, RC (29 July 2015). "Differential Muscarinic Modulation in the Olfactory Bulb.". The Journal of neuroscience : the official journal of the Society for Neuroscience. 35 (30): 10773–85. PMID 26224860. doi:10.1523/JNEUROSCI.0099-15.2015.
- Krieger J; Schmitt A; Löbel D; et al. (February 1999). "Selective activation of G protein subtypes in the vomeronasal organ upon stimulation with urine-derived compounds". J. Biol. Chem. 274 (8): 4655–62. PMID 9988702. doi:10.1074/jbc.274.8.4655.
- Caltech Scientists Discover Aggression-Promoting Pheromone in Flies Caltech press release, 2009
- Siwicki, Kathleen K; Kravitz, Edward A. "fruitless, doublesex and the genetics of social behavior in Drosophila melanogaster". Current Opinion in Neurobiology. 19 (2): 200–206. PMC . PMID 19541474. doi:10.1016/j.conb.2009.04.001.
- Perusse, D. & Gendreau, P. 'Genetics and the Development of Aggression' in Developmental Origins of Aggression, 2005, The Guilford Press.
- Derringer, Jaime; Krueger, Robert F.; Irons, Daniel E.; Iacono, William G. "Harsh Discipline, Childhood Sexual Assault, and MAOA Genotype: An Investigation of Main and Interactive Effects on Diverse Clinical Externalizing Outcomes". Behavior Genetics. 40 (5): 639–648. PMC . PMID 20364435. doi:10.1007/s10519-010-9358-9.
- Konrad Lorenz, On Aggression (1963).
- E.O. Wilson, On Human Nature (Harvard, 1978) pp.101–107.
- Leakey,R.,& Lewin,R. (1978). People of the lake. New York: Anchor Press/Doubleday.
- UNESCO, (1989). The Seville Statement, Retrieved: http://www.unesco.org/cpp/uk/declarations/seville.pdf
- UNESCO Prize for Peace Education, (1989), Retrieved:http://www.demilitarisation.org/IMG/article_PDF/Seville-Statement-UNESCO-1989_a143.pdf
- Thomas, E.M. (1958). The harmless people. New York: Vintage Books.
- Keeley, L.H. (1996). War Before Civilization: The myth of the peaceful savage. New York: Oxford University Press.
- Lomas, W. (2009) Conflict, Violence, and Conflict Resolution in Hunting and Gathering Societies Totem: The University of Western Ontario Journal of Anthropology, Volume 17, Issue 1, Article 13
- Gray, Peter (16 May 2011). "How Hunter-Gatherers Maintained Their Egalitarian Ways". Psychology Today.
- Durrant, Joan; Ensom, Ron (4 September 2012). "Physical punishment of children: lessons from 20 years of research". Canadian Medical Association Journal. 184 (12): 1373–1377. PMC . PMID 22311946. doi:10.1503/cmaj.101314.
- Gershoff, E.T. (2008). Report on Physical Punishment in the United States: What Research Tells Us About Its Effects on Children (PDF). Columbus, OH: Center for Effective Discipline. p. 16.
- "Corporal Punishment" (2008). International Encyclopedia of the Social Sciences.
- Gershoff, Elizabeth T. (September 2013). "Spanking and Child Development: We Know Enough Now to Stop Hitting Our Children". Child Development Perspectives. The Society for Research in Child Development. 7 (3): 133–137. PMC . PMID 24039629. doi:10.1111/cdep.12038. Check date values in:
- Taylor CA, Manganello JA, Lee SJ, Rice JC (May 2010). "Mothers' spanking of 3-year-old children and subsequent risk of children's aggressive behavior". Pediatrics. 125 (5): e1057–65. PMID 20385647. doi:10.1542/peds.2009-2678.
- Straus, Murray A.; Douglas, Emily M.; Madeiros, Rose Ann (2013). The Primordial Violence: Spanking Children, Psychological Development, Violence, and Crime. New York: Routledge. p. 81. ISBN 1-84872-953-7.
- Bond, MH. (2004) 'Aggression and culture', in Encyclopedia of applied psychology, Volume 1.
- Andreu, Takehiro; Manuel, J.; Fujihara, Takehiro; Kohyama, Takaya; Ramirez, J. Martin (1998). "Justification of Interpersonal Aggression in Japanese, American, and Spanish Students". Aggressive Behavior. 25 (3): 185–195. doi:10.1002/(SICI)1098-2337(1999)25:3<185::AID-AB3>3.0.CO;2-K.
- Bowdle, Brian F.; Cohen, Dov; Nisbett, Richerd E.; Schwarz, Norbert (1996). "Insult, Aggression, and the Southern Culture of Honor: an "Experimental" (PDF). Journal of Personality and Social Psychology. 70 (5): 945–960. PMID 8656339. doi:10.1037/0022-3522.214.171.1245.
- Bergmüller, Silvia (2013). "The relationship between cultural individualism–collectivism and student aggression across 62 countries". Aggressive Behavior. 39: 182–200. doi:10.1002/ab.21472. line feed character in
|title=at position 91 (help)
- Nolan, P. (2007) Capitalism and freedom: the contradictory character of globalisation From page 2. Anthem Studies in Development and Globalization, Anthem Press
- SHERER, M (1 March 2004). "Aggression and violence among Jewish and Arab Youth in Israel". International Journal of Intercultural Relations. 28 (2): 93–109. doi:10.1016/j.ijintrel.2004.03.004.
- Amjad, N.; Wood, A.M. (2009). "Identifying and changing the normative beliefs about aggression which lead young Muslim adults to join extremist anti-Semitic groups in Pakistan" (PDF). Aggressive Behavior. 35 (6): 514–519. PMID 19790255. doi:10.1002/ab.20325.
- Akert, M. Robin, Aronson, E., and Wilson, D.T. "Social Psychology", 5th Edition. Pearson Education, Inc. 2005
- Freedman, J. (2002). Media violence and its effect on aggression: Assessing the scientific evidence. Toronto: University of Toronto Press.
- Christopher J. Ferguson, (2010) "Video Games and Youth Violence: A Prospective Analysis in Adolescents", Journal of Youth and Adolescence
- Sherry, J. (2001). "The effects of violent video games on aggression" (PDF). Human Communication Research. 27 (3): 409–431. doi:10.1093/hcr/27.3.409.
- Anderson, C.A.; Dill, K.E. (2000). "Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and in Life" (PDF). Journal of Personality and Social Psychology. 78 (4): 772–790. PMID 10794380. doi:10.1037/0022-35126.96.36.1992.
- Schechter DS, Gross A, Willheim E, McCaw J, Turner JB, Myers MM, Zeanah CH, Gleason MM (2009). Is maternal PTSD associated with greater exposure of very young children to violent media? Journal of Traumatic Stress. 22(6), 658–662.
- Al-Rodhan, Nayef R.F., "emotional amoral egoism:" A Neurophilosophical Theory of Human Nature and its Universal Security Implications, LIT 2008.
- Al-Rodhan, Nayef R.F., Sustainable History and the Dignity of Man: A Philosophy of History and Civilisational Triumph, Berlin, LIT, 2009.
- Tremblay, R.E. (2000). "The development of aggressive behaviour during childhood: What have we learned in the past century". International Journal of Behavioral Development. 24 (2): 129–141. doi:10.1080/016502500383232.
- Bongers, I.L.; Koot, H.M.; der Ende, J.; Verhulst, F.C. (2004). "Developmental trajectories of externalizing behaviors in childhood and adolescence". Child Development. 75 (5): 1523–1537. PMID 15369529. doi:10.1111/j.1467-8624.2004.00755.x.
- NICHD Early Child Care Research Network (2004). "Trajectories of physical aggression from toddlerhood to middle childhood: predictors, correlates, and outcomes". Monographs of the Society for Research in Child Development. 69 (4): vii, 1–129. PMID 15667346. doi:10.1111/j.0037-976X.2004.00312.x.
- Bongers, I. L.; Koot, H. M., van der Ende, J., Verhulst, F. C. (30 November 2007). "Predicting young adult social functioning from developmental trajectories of externalizing behaviour" (PDF). Psychological Medicine. 38 (7). doi:10.1017/S0033291707002309.
- Schellenberg, R. (2000). "Aggressive personality: When does it develop and why?". Virginia Counselors Journal. 26: 67–76.
- Tremblay, Richard E., Hartup, Willard W. and Archer, John (eds.) (2005). Developmental Origins of Aggression. New York: The Guilford Press. ISBN 1-59385-110-3.
- Bandura, A.; Ross, D.; Ross, S.A. (1961). "Transmission of aggression through imitation of aggressive models". The Journal of Abnormal and Social Psychology. 63 (3): 575–582. PMID 13864605. doi:10.1037/h0045925.
- American Academy of Pediatrics (2011) Ages & Stages: Aggressive Behavior HealthChildren.org, retrieved January 2012
- National Association of School Psychologists (2008) Angry and Aggressive Students
- Coie, J.D. & Dodge, K.A. (1997). Aggression and antisocial behavior. In W. Damon & N. Eisenberg (Eds). Handbook of Child Psychology, Vol. 3: Social, emotional and personality development
- Maccoby. E.E. & Jacklin. C.N. (1974). The psychology of sex differences, Stanford: Stanford University Press.
- Eagly & Steffen (1986) Psychological Bulletin. "Gender and Aggressive Behavior: A Meta-analytic Review of the Social Psychological Literature" Volume 100, No 3. pp 323–325
- Bjorkqvist, Kaj; Lagerspetz, Kirsti M.; Osterman, Karin (1994). "Sex Differences in Covert Aggression" (PDF). Aggressive Behavior. 202: 27–33.
- Archer, J. (2004). "Sex differences in aggression in real-world settings: A meta-analytic review". Review of General Psychology. 8 (4): 291–322. doi:10.1037/1089-26188.8.131.521.
- Card, N.A.; Stucky, B.D.; Sawalani, G.M.; Little, T.D. (2008). "Direct and indirect aggression during childhood and adolescence: A meta-analytic review of gender differences, intercorrelations, and relations to maladjustment". Child Development. 79 (5): 1185–1229. PMID 18826521. doi:10.1111/j.1467-8624.2008.01184.x.
- Hines, Denise A.; Saudino, Kimberly J. (2003). "Gender Differences in Psychological, Physical, and Sexual Aggression Among College Students Using the Revised Conflict Tactics Scales". Violence and Victims. 18 (2): 197–217. PMID 12816404. doi:10.1891/vivi.2003.18.2.197.
- Archer, J (2000). "Sex differences in aggression between heterosexual partners: A meta-analytic review". Psychological Bulletin. 126 (5): 651–680. PMID 10989615. doi:10.1037/0033-2909.126.5.651.
- Björkqvist, Kaj (1994). "Sex differences in physical, verbal, and indirect aggression: A review of recent research". Sex Roles. 30 (3–4): 177–188. doi:10.1007/BF01420988.
- Navis, C; Brown, SL; Heim, D (2008). "Predictors of injurious assault committed during or after drinking alcohol: a case-control study of young offenders". Aggressive behavior. 34 (2): 167–74. PMID 17922526. doi:10.1002/ab.20231.
- Turner, C.W.; Layton, J.J.; Simons, L.S. (1975). "Naturalistic studies of aggressive behavior: aggressive stimuli, victim visibility and horn honking". Journal of Personality and Social Psychology. 31 (6): 1098–1107. PMID 1142063. doi:10.1037/h0076960.
- Castle, T.; Hensley, C. (2002). "Serial Killers with Military Experience: Applying Learning Theory to Serial Murder". International Journal of Offender Therapy and Comparative Criminology. 46 (4): 453–65. PMID 12150084. doi:10.1177/0306624X02464007.
- Smith, P. (2007). "Why has aggression been thought of as maladaptive?". Aggression and Adaptation: the Bright Side to Bad Behavior: 65–83.
- Hawley, P.; Vaughn, B. (2003). "Aggression and adaptive function: The bright side to bad behavior" (PDF). Merrill-Palmer Quarterly. 49 (3): 239–242. doi:10.1353/mpq.2003.0012.
- Ferguson, C.J. (2010). "Blazing Angels or Resident Evil? Can violent video games be a force for good?" (PDF). Review of General Psychology. 14 (2): 68–81. doi:10.1037/a0018941.
|Look up aggression or aggressive in Wiktionary, the free dictionary.|
|Wikiquote has quotations related to: Aggression|
|Wikimedia Commons has media related to Aggression.|
- When Family Life Hurts: Family experience of aggression in children – Parentline plus, 31 October 2010
- Aggression and Violent Behavior, a Review Journal
- International Society for Research on Aggression (ISRA)
- Problems in the Concepts and Definitions of Aggression, Violence and some Related Terms by Johan van der Dennen, originally published in 1980
Money laundering is the concealment of the origins of profits from illegal activities and corruption by transforming them into ostensibly "legitimate" assets. The dilemma facing those engaged in illicit activities is accounting for the origin of their proceeds without raising the suspicion of law enforcement agencies. Accordingly, considerable time and effort is put into devising strategies that enable the safe use of those proceeds without attracting unwanted attention; implementing such strategies is generally called money laundering. After money has been suitably laundered or "cleaned", it can be used in the mainstream economy to accumulate wealth, for example through the acquisition of property, or otherwise spent. Law enforcement agencies in many jurisdictions have set up sophisticated systems to detect suspicious transactions or activities, and many have established international cooperative arrangements to assist each other in these endeavours.
In a number of legal and regulatory systems, the term money laundering has become conflated with other forms of financial and business crime, and is sometimes used more generally to include misuse of the financial system (involving things such as securities, digital currencies, credit cards, and traditional currency), including terrorism financing and evasion of international sanctions. Most anti-money laundering laws openly conflate money laundering (which is concerned with source of funds) with terrorism financing (which is concerned with destination of funds) when regulating the financial system.
Some countries treat obfuscation of sources of money as also constituting money laundering, whether it is intentional or by merely using financial systems or services that do not identify or track sources or destinations. Other countries define money laundering in such a way as to include money from activity that would have been a crime in that country, even if the activity was legal where the actual conduct occurred.
The concept of money laundering goes back to ancient times and is intertwined with the development of money and banking. It is first seen in individuals hiding wealth from the state to avoid taxation, confiscation, or both.
In China, merchants around 2000 BCE would hide their wealth from rulers who would simply take it from them and banish them. In addition to hiding it, they would move it and invest it in businesses in remote provinces or even outside China.
Over the millennia many rulers and states imposed rules that would take wealth from their citizens and this led to the development of offshore banking and tax evasion. One of the enduring methods has been the use of parallel banking or Informal value transfer systems such as hawala that allowed people to move money out of the country avoiding state scrutiny.
In the 20th century, the seizure of wealth again became popular as an additional crime-prevention tool, first during Prohibition in the United States in the 1920s and early 1930s. This period saw a new emphasis by the state and law enforcement agencies on tracking and confiscating money, as organized crime had received a major boost from Prohibition and the large source of new funds obtained from the illegal sale of alcohol.
In the 1980s, the war on drugs led governments again to turn to money-laundering rules in an attempt to seize the proceeds of drug crimes and catch the organizers and individuals running drug empires. From a law-enforcement point of view, these rules also had the benefit of turning the rules of evidence upside down: law enforcers normally have to prove an individual is guilty to secure a conviction, but under money laundering laws money can be confiscated, and it is up to the individual to prove that the source of the funds is legitimate in order to get them back. This makes matters much easier for law enforcement agencies and imposes a much lower burden of proof.
The September 11 attacks in 2001, which led to the Patriot Act in the US and similar legislation worldwide, brought a new emphasis on money laundering laws to combat terrorism financing. The Group of Seven (G7) nations used the Financial Action Task Force on Money Laundering to put pressure on governments around the world to increase surveillance and monitoring of financial transactions and to share this information between countries. Starting in 2002, governments around the world upgraded money laundering laws and systems for the surveillance and monitoring of financial transactions. Anti-money-laundering regulations have become a much larger burden for financial institutions, and enforcement has stepped up significantly. During 2011–2015 a number of major banks faced ever-increasing fines for breaches of money laundering regulations, including HSBC, which was fined $1.9 billion in December 2012, and BNP Paribas, which was fined $8.9 billion in July 2014 by the US government. Many countries introduced or strengthened border controls on the amount of cash that can be carried and introduced central transaction reporting systems where all financial institutions have to report all financial transactions electronically. For example, in 2006, Australia set up the AUSTRAC system and required the reporting of all financial transactions.
Money obtained from certain crimes, such as extortion, insider trading, drug trafficking, and illegal gambling is "dirty" and needs to be "cleaned" to appear to have been derived from legal activities, so that banks and other financial institutions will deal with it without suspicion. Money can be laundered by many methods which vary in complexity and sophistication.
Money laundering involves three steps: The first involves introducing cash into the financial system by some means ("placement"); the second involves carrying out complex financial transactions to camouflage the illegal source of the cash ("layering"); and finally, acquiring wealth generated from the transactions of the illicit funds ("integration"). Some of these steps may be omitted, depending upon the circumstances. For example, non-cash proceeds that are already in the financial system would not need to be placed.
According to the United States Treasury Department:
Money laundering is the process of making illegally-gained proceeds (i.e., "dirty money") appear legal (i.e., "clean"). Typically, it involves three steps: placement, layering, and integration. First, the illegitimate funds are furtively introduced into the legitimate financial system. Then, the money is moved around to create confusion, sometimes by wiring or transferring through numerous accounts. Finally, it is integrated into the financial system through additional transactions until the "dirty money" appears "clean."
Money laundering can take several forms, although most methods can be categorized into one of a few types. These include "bank methods, smurfing [also known as structuring], currency exchanges, and double-invoicing".
- Structuring: Often known as smurfing, this is a method of placement whereby cash is broken into smaller deposits of money, used to defeat suspicion of money laundering and to avoid anti-money laundering reporting requirements. A sub-component of this is to use smaller amounts of cash to purchase bearer instruments, such as money orders, and then ultimately deposit those, again in small amounts.
- Bulk cash smuggling: This involves physically smuggling cash to another jurisdiction and depositing it in a financial institution, such as an offshore bank, with greater bank secrecy or less rigorous money laundering enforcement.
- Cash-intensive businesses: In this method, a business typically expected to receive a large proportion of its revenue as cash uses its accounts to deposit criminally derived cash. Such enterprises often operate openly and in doing so generate cash revenue from incidental legitimate business in addition to the illicit cash – in such cases the business will usually claim all cash received as legitimate earnings. Service businesses are best suited to this method, as such enterprises have little or no variable costs and/or a large ratio between revenue and variable costs, which makes it difficult to detect discrepancies between revenues and costs. Examples are parking structures, strip clubs, tanning salons, car washes, arcades, bars, restaurants, and casinos.
- Trade-based laundering: This involves under- or over-valuing invoices to disguise the movement of money.
- Shell companies and trusts: Trusts and shell companies disguise the true owners of money. Trusts and corporate vehicles, depending on the jurisdiction, need not disclose their true owner. Sometimes referred to by the slang term rathole, though that term usually refers to a person acting as the fictitious owner rather than the business entity.
- Round-tripping: Here, money is deposited in a controlled foreign corporation offshore, preferably in a tax haven where minimal records are kept, and then shipped back as a foreign direct investment, exempt from taxation. A variant on this is to transfer money to a law firm or similar organization as funds on account of fees, then to cancel the retainer and, when the money is remitted, represent the sums received from the lawyers as a legacy under a will or proceeds of litigation.
- Bank capture: In this case, money launderers or criminals buy a controlling interest in a bank, preferably in a jurisdiction with weak money laundering controls, and then move money through the bank without scrutiny.
- Casinos: In this method, an individual walks into a casino and buys chips with illicit cash. The individual will then play for a relatively short time. When the person cashes in the chips, they will expect to take payment in a check, or at least get a receipt so they can claim the proceeds as gambling winnings.
- Other gambling: Money is spent on gambling, preferably on high odds games. One way to minimize risk with this method is to bet on every possible outcome of some event that has many possible outcomes, so no outcome(s) have short odds, and the bettor will lose only the vigorish and will have one or more winning bets that can be shown as the source of money. The losing bets will remain hidden.
- Real estate: Someone purchases real estate with illegal proceeds and then sells the property. To outsiders, the proceeds from the sale look like legitimate income. Alternatively, the price of the property is manipulated: the seller agrees to a contract that underrepresents the value of the property, and receives criminal proceeds to make up the difference.
- Black salaries: A company may have unregistered employees without written contracts and pay them cash salaries. Dirty money might be used to pay them.
- Tax amnesties: For example, those that legalize unreported assets and cash in tax havens.
- Life insurance business: Assignment of policies to unidentified third parties and for which no plausible reasons can be ascertained.
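The arithmetic behind the "other gambling" method above can be made concrete: staking on every outcome of an event in proportion to the inverse of its decimal odds guarantees the same payout whichever outcome wins, so the bettor's only loss is the bookmaker's vigorish. A minimal sketch, using hypothetical odds:

```python
def dutch_stakes(odds, bankroll):
    """Split `bankroll` across every outcome of an event (decimal odds)
    so that each outcome returns the same payout."""
    inverse = [1.0 / o for o in odds]
    overround = sum(inverse)              # exceeds 1.0 at any real bookmaker
    stakes = [bankroll * i / overround for i in inverse]
    payout = bankroll / overround         # collected whichever outcome wins
    return stakes, payout

# Hypothetical three-outcome event quoted at decimal odds 1.9 / 2.9 / 5.5.
stakes, payout = dutch_stakes([1.9, 2.9, 5.5], 10_000)
vigorish_lost = 10_000 - payout           # the only money actually lost (~5%)
```

Whichever bet wins can then be presented as the apparent source of the payout, while the losing bets stay hidden; the cost of the wash is exactly the vigorish.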
Many regulatory and governmental authorities issue estimates each year for the amount of money laundered, either worldwide or within their national economy. In 1996, the International Monetary Fund estimated that 2–5% of the global economy involved laundered money. The Financial Action Task Force on Money Laundering (FATF), an intergovernmental body set up to combat money laundering, stated, "Overall, it is absolutely impossible to produce a reliable estimate of the amount of money laundered and therefore the FATF does not publish any figures in this regard." Academic commentators have likewise been unable to estimate the volume of money with any degree of assurance. Various estimates of the scale of global money laundering are repeated often enough that some people regard them as factual, but no researcher has overcome the inherent difficulty of measuring an actively concealed practice.
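To put the IMF's 1996 percentage range into rough dollar terms: taking world GDP at the time as approximately US$30 trillion (an assumed round figure, used here only for illustration), 2–5% works out to the order of hundreds of billions to over a trillion dollars per year:

```python
# Assumed round figure for mid-1990s world GDP; the IMF range is 2-5%.
world_gdp = 30e12
low, high = 0.02 * world_gdp, 0.05 * world_gdp
# low  is about $600 billion per year
# high is about $1.5 trillion per year
```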
Regardless of the difficulty in measurement, the amount of money laundered each year is in the billions of US dollars and poses a significant policy concern for governments. As a result, governments and international bodies have undertaken efforts to deter, prevent, and apprehend money launderers. Financial institutions have likewise undertaken efforts to prevent and detect transactions involving dirty money, both as a result of government requirements and to avoid the reputational risk involved. Issues relating to money laundering have existed as long as there have been large-scale criminal enterprises. Modern anti-money laundering laws have developed along with the modern War on Drugs. In more recent times, anti-money laundering legislation is seen as an adjunct to the financial crime of terrorist financing, in that both crimes usually involve the transmission of funds through the financial system (although money laundering concerns where the money has come from, and terrorist financing where it is going).
In theory, electronic money should provide as easy a method of transferring value without revealing identity as untracked banknotes, especially wire transfers involving anonymity-protecting numbered bank accounts. In practice, however, the record-keeping capabilities of Internet service providers and other network resource maintainers tend to frustrate that intention. While some cryptocurrencies under recent development have aimed to provide more transaction anonymity for various reasons, the degree to which they succeed, and consequently the degree to which they offer benefits for money laundering efforts, is controversial. Zcash and Monero are examples of cryptocurrencies that aim to provide unlinkable anonymity, via zero-knowledge proofs and ring signatures respectively. Such currencies could find use in online illicit services. Zcash's zk-SNARK construction in particular required a trusted setup to generate the parameters for its genesis block; an attacker who retained the "toxic waste" from that setup could generate unlimited tokens undetected.
In 2013, Jean-Loup Richet, a research fellow at ESSEC ISIS, surveyed new techniques that cybercriminals were using in a report written for the United Nations Office on Drugs and Crime. A common approach was to use a digital currency exchanger service which converted dollars into a digital currency called Liberty Reserve, and could be sent and received anonymously. The receiver could convert the Liberty Reserve currency back into cash for a small fee. In May 2013, the US authorities shut down Liberty Reserve charging its founder and various others with money laundering.
Another increasingly common way of laundering money is to use online gaming. In a growing number of online games, such as Second Life and World of Warcraft, it is possible to convert money into virtual goods, services, or virtual cash that can later be converted back into money.
Reverse money laundering
Reverse money laundering is a process that disguises a legitimate source of funds that are to be used for illegal purposes. It is usually perpetrated for the purpose of financing terrorism, but can also be used by criminal organizations that have invested in legal businesses and would like to withdraw legitimate funds from official circulation. In this process the money starts out legitimate and becomes "dirty" only in its ultimate purpose. Unaccounted cash received by disguising financial transactions is not included in official financial reporting and can be used to evade taxes, pay bribes, and pay "under-the-table" salaries. For example, in an affidavit filed 24 March 2014 in United States District Court, Northern California, San Francisco Division, FBI special agent Emmanuel V. Pascau alleged that several people associated with the Chee Kung Tong organization, and California State Senator Leland Yee, engaged in reverse money laundering activities.
The problem of such fraudulent encashment practices (obnalichka in Russian) has become acute in Russia and other countries of the former Soviet Union. The Eurasian Group on Combating Money Laundering and Financing of Terrorism (EAG) reported that the Russian Federation, Ukraine, Turkey, Serbia, Kyrgyzstan, Uzbekistan, Armenia and Kazakhstan have encountered a substantial shrinkage of the tax base and a shift of the money-supply balance in favor of cash. These processes have complicated the planning and management of the economy and contributed to the growth of the shadow economy.
Anti-money laundering (AML) is a term mainly used in the financial and legal industries to describe the legal controls that require financial institutions and other regulated entities to prevent, detect, and report money laundering activities. Anti-money laundering guidelines came into prominence globally as a result of the formation of the Financial Action Task Force (FATF) and the promulgation of an international framework of anti-money laundering standards. These standards began to have more relevance in 2000 and 2001, after FATF began a process to publicly identify countries that were deficient in their anti-money laundering laws and international cooperation, a process colloquially known as "name and shame".
An effective AML program requires a jurisdiction to criminalise money laundering, giving the relevant regulators and police the powers and tools to investigate; be able to share information with other countries as appropriate; and require financial institutions to identify their customers, establish risk-based controls, keep records, and report suspicious activities.
The elements of the crime of money laundering are set forth in the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances and Convention against Transnational Organized Crime. It is defined as knowingly engaging in a financial transaction with the proceeds of a crime for the purpose of concealing or disguising the illicit origin of the property from governments.
The role of financial institutions
While banks operating in the same country generally have to follow the same anti-money laundering laws and regulations, financial institutions all structure their anti-money laundering efforts slightly differently. Today, most financial institutions globally, and many non-financial institutions, are required to identify and report transactions of a suspicious nature to the financial intelligence unit in the respective country. For example, a bank must verify a customer's identity and, if necessary, monitor transactions for suspicious activity. This is often termed "know your customer": knowing the identity of the customer and understanding the kinds of transactions in which the customer is likely to engage. By knowing its customers, a financial institution can often identify unusual or suspicious behaviour, termed anomalies, which may be an indication of money laundering.
Bank employees, such as tellers and customer account representatives, are trained in anti-money laundering and are instructed to report activities that they deem suspicious. Additionally, anti-money laundering software filters customer data, classifies it according to level of suspicion, and inspects it for anomalies. Such anomalies include any sudden and substantial increase in funds, a large withdrawal, or moving money to a bank secrecy jurisdiction. Smaller transactions that meet certain criteria may also be flagged as suspicious. For example, structuring can lead to flagged transactions. The software also flags names on government "blacklists" and transactions that involve countries hostile to the host nation. Once the software has mined data and flagged suspect transactions, it alerts bank management, who must then determine whether to file a report with the government.
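The rule-based screening described above can be sketched in a few lines of code. This is only an illustration of the idea, not any vendor's actual software: the thresholds, field names, blacklist entries, and country codes are all hypothetical.

```python
# Illustrative sketch of rule-based AML transaction screening.
# All thresholds, field names, and list entries are hypothetical.

BLACKLIST = {"ACME SHELL LTD", "J. DOE"}   # stand-in for a government "blacklist"
HOSTILE_COUNTRIES = {"XX", "YY"}           # hypothetical country codes
LARGE_WITHDRAWAL = 50_000                  # hypothetical flagging threshold

def screen_transaction(txn: dict, recent_avg_balance: float) -> list[str]:
    """Return the list of rules a single transaction trips (empty if none)."""
    flags = []
    if txn["counterparty"].upper() in BLACKLIST:
        flags.append("blacklisted-name")
    if txn["country"] in HOSTILE_COUNTRIES:
        flags.append("hostile-jurisdiction")
    if txn["type"] == "withdrawal" and txn["amount"] >= LARGE_WITHDRAWAL:
        flags.append("large-withdrawal")
    # Sudden, substantial increase in funds relative to the account's history
    if (txn["type"] == "deposit" and recent_avg_balance > 0
            and txn["amount"] > 10 * recent_avg_balance):
        flags.append("sudden-increase")
    return flags

txn = {"type": "withdrawal", "amount": 75_000,
       "counterparty": "ACME SHELL LTD", "country": "GB"}
print(screen_transaction(txn, recent_avg_balance=4_000))
# -> ['blacklisted-name', 'large-withdrawal']
```

In practice such rules only produce alerts; as the text notes, it is bank management that decides whether a flagged transaction becomes a report to the government.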
Value of enforcement costs and associated privacy concerns
The financial services industry has become more vocal about the rising costs of anti-money laundering regulation and the limited benefits that they claim it brings. One commentator wrote that "[w]ithout facts, [anti-money laundering] legislation has been driven on rhetoric, driving by ill-guided activism responding to the need to be "seen to be doing something" rather than by an objective understanding of its effects on predicate crime. The social panic approach is justified by the language used—we talk of the battle against terrorism or the war on drugs". The Economist magazine has become increasingly vocal in its criticism of such regulation, particularly with reference to countering terrorist financing, referring to it as a "costly failure", although it concedes that other efforts (like reducing identity and credit card fraud) may still be effective at combating money laundering.
There is no precise measurement of the costs of regulation balanced against the harms associated with money laundering, and given the evaluation problems involved in assessing such an issue, it is unlikely that the effectiveness of terror finance and money laundering laws could be determined with any degree of accuracy. The Economist estimated the annual costs of anti-money laundering efforts in Europe and North America at US$5 billion in 2003, an increase from US$700 million in 2000. Government-linked economists have noted the significant negative effects of money laundering on economic development, including undermining domestic capital formation, depressing growth, and diverting capital away from development. Because of the intrinsic uncertainties of the amount of money laundered, changes in the amount of money laundered, and the cost of anti-money laundering systems, it is almost impossible to tell which anti-money laundering systems work and which are more or less cost effective.
Besides economic costs to implement anti-money-laundering laws, improper attention to data protection practices may entail disproportionate costs to individual privacy rights. In June 2011, the data-protection advisory committee to the European Union issued a report on data protection issues related to the prevention of money laundering and terrorist financing, which identified numerous transgressions against the established legal framework on privacy and data protection. The report made recommendations on how to address money laundering and terrorist financing in ways that safeguard personal privacy rights and data protection laws. In the United States, groups such as the American Civil Liberties Union have expressed concern that money laundering rules require banks to report on their own customers, essentially conscripting private businesses "into agents of the surveillance state".
Many countries are obligated by various international instruments and standards, such as the 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, the 2000 Convention against Transnational Organized Crime, the 2003 United Nations Convention against Corruption, and the recommendations of the 1989 Financial Action Task Force on Money Laundering (FATF) to enact and enforce money laundering laws in an effort to stop narcotics trafficking, international organised crime, and corruption. Mexico, which has faced a significant increase in violent crime, established anti-money laundering controls in 2013 to curb the underlying crime issue.
Formed in 1989 by the G7 countries, the Financial Action Task Force on Money Laundering (FATF) is an intergovernmental body whose purpose is to develop and promote an international response to combat money laundering. The FATF Secretariat is housed at the headquarters of the OECD in Paris. In October 2001, FATF expanded its mission to include combating the financing of terrorism. FATF is a policy-making body that brings together legal, financial, and law enforcement experts to achieve national legislation and regulatory AML and CFT reforms. As of 2014, its membership consists of 36 countries and territories and two regional organizations. FATF works in collaboration with a number of international bodies and organizations. These entities have observer status with FATF, which does not entitle them to vote, but permits them full participation in plenary sessions and working groups.
FATF has developed 40 recommendations on money laundering and 9 special recommendations regarding terrorist financing. FATF assesses each member country against these recommendations in published reports. Countries seen as not being sufficiently compliant with such recommendations are subjected to financial sanctions.
FATF's three primary functions with regard to money laundering are:
- Monitoring members’ progress in implementing anti-money laundering measures,
- Reviewing and reporting on laundering trends, techniques, and countermeasures, and
- Promoting the adoption and implementation of FATF anti-money laundering standards globally.
The FATF currently comprises 34 member jurisdictions and 2 regional organisations, representing most major financial centres in all parts of the globe.
The United Nations Office on Drugs and Crime maintains the International Money Laundering Information Network, a website that provides information and software for anti-money laundering data collection and analysis. The World Bank has a website that provides policy advice and best practices to governments and the private sector on anti-money laundering issues.
Anti-money-laundering measures by region
Many jurisdictions adopt a list of specific predicate crimes for money laundering prosecutions, while others criminalize the proceeds of any serious crimes.
The Financial Transactions and Reports Analysis Center of Afghanistan (FinTRACA) was established as a Financial Intelligence Unit (FIU) under the Anti Money Laundering and Proceeds of Crime Law passed by decree late in 2004. The main purpose of this law is to protect the integrity of the Afghan financial system and to gain compliance with international treaties and conventions. The Financial Intelligence Unit is a semi-independent body that is administratively housed within the Central Bank of Afghanistan (Da Afghanistan Bank). The main objective of FinTRACA is to deny the use of the Afghan financial system to those who obtained funds as the result of illegal activity, and to those who would use it to support terrorist activities.
To meet its objectives, the FinTRACA collects and analyzes information from a variety of sources. These sources include entities with legal obligations to submit reports to the FinTRACA when a suspicious activity is detected, as well as reports of cash transactions above a threshold amount specified by regulation. Also, FinTRACA has access to all related Afghan government information and databases. When the analysis of this information supports the supposition of illegal use of the financial system, the FinTRACA works closely with law enforcement to investigate and prosecute the illegal activity. FinTRACA also cooperates internationally in support of its own analyses and investigations and to support the analyses and investigations of foreign counterparts, to the extent allowed by law. Other functions include training of those entities with legal obligations to report information, development of laws and regulations to support national-level AML objectives, and international and regional cooperation in the development of AML typologies and countermeasures.
Australia has adopted a number of strategies to combat money laundering, which mirror those of a majority of western countries. The Australian Transaction Reports and Analysis Centre (AUSTRAC) is Australia's financial intelligence unit to combat money laundering and terrorism financing, which requires financial institutions and other 'cash dealers' in Australia to report to it suspicious cash or other transactions and other specific information. The Attorney-General's Department maintains a list of outlawed terror organisations. It is an offence to materially support or be supported by such organisations. It is an offence to open a bank account in Australia in a false name, and rigorous procedures must be followed when new bank accounts are opened.
The Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (Cth) (AML/CTF Act) is the principal legislative instrument, although there are also offence provisions contained in Division 400 of the Criminal Code Act 1995 (Cth). Upon its introduction, it was intended that the AML/CTF Act would be further amended by a second tranche of reforms extending to designated non-financial businesses and professions (DNFBPs) including, inter alia, lawyers, accountants, jewellers and real estate agents; however, those further reforms have yet to be progressed.
The Proceeds of Crime Act 1987 (Cth) imposes criminal penalties on a person who engages in money laundering, and allows for confiscation of property. The principal objects of the Act are set out in s.3(1):
- to deprive persons of the proceeds of, and benefits derived from the commission of offences,
- to provide for the forfeiture of property used in or in connection with the commission of such offences, and
- to enable law enforcement authorities to effectively trace such proceeds, benefits and property.
The first anti-money laundering legislation in Bangladesh was the Money Laundering Prevention Act, 2002. It was replaced by the Money Laundering Prevention Ordinance 2008. Subsequently, the ordinance was repealed by the Money Laundering Prevention Act, 2009. In 2012, the government again replaced it with the Money Laundering Prevention Act, 2012.
In terms of section 2, "Money Laundering means – (i) knowingly moving, converting, or transferring proceeds of crime or property involved in an offence for the following purposes:- (1) concealing or disguising the illicit nature, source, location, ownership or control of the proceeds of crime; or (2) assisting any person involved in the commission of the predicate offence to evade the legal consequences of such offence; (ii) smuggling money or property earned through legal or illegal means to a foreign country; (iii) knowingly transferring or remitting the proceeds of crime to a foreign country or remitting or bringing them into Bangladesh from a foreign country with the intention of hiding or disguising its illegal source; or (iv) concluding or attempting to conclude financial transactions in such a manner so as to reporting requirement under this Act may be avoided;(v) converting or moving or transferring property with the intention to instigate or assist for committing a predicate offence; (vi) acquiring, possessing or using any property, knowing that such property is the proceeds of a predicate offence; (vii) performing such activities so as to the illegal source of the proceeds of crime may be concealed or disguised; (viii) participating in, associating with, conspiring, attempting, abetting, instigate or counsel to commit any offences mentioned above.
To prevent these illegal uses of money, the Bangladesh government has introduced the Money Laundering Prevention Act. The Act was last amended in the year 2009 and all financial institutions follow this act. To date, Bangladesh Bank has issued 26 circulars under this act. To prevent money laundering, a banker must do the following:
- While opening a new account, the account opening form should be duly completed with all of the customer's information.
- The KYC (Know Your Customer) form must be properly completed.
- The Transaction Profile (TP) is mandatory for understanding a client's transactions. If needed, the TP must be updated with the client's consent.
- All other necessary papers should be properly collected along with the National ID card.
- If any suspicious transaction is noticed, the Branch Anti Money Laundering Compliance Officer (BAMLCO) must be notified and accordingly the Suspicious Transaction Report (STR) must be filled out.
- The cash department should monitor transactions and take note of any sudden deposit of a large amount into an account. Proper documents are required if any client makes this type of transaction.
- Structuring and over/under-invoicing are other ways to launder money. The foreign exchange department should look into these matters cautiously.
- If any account has a transaction over 1 million taka in a single day, it must be reported in a cash transaction report (CTR).
- All bank officials must be familiar with all 26 circulars and apply them.
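The CTR rule in the checklist above, flagging any account whose cash transactions exceed 1 million taka in a single day, can be sketched as a simple daily aggregation. Field names and the sample data are illustrative only.

```python
# Sketch of the cash transaction report (CTR) rule: flag any account whose
# cash transactions total more than 1 million taka in a single day.
from collections import defaultdict

CTR_THRESHOLD = 1_000_000  # taka, per account per day

def accounts_needing_ctr(transactions):
    """transactions: iterable of (account_id, date, amount) cash entries."""
    daily = defaultdict(int)
    for account, date, amount in transactions:
        daily[(account, date)] += amount  # aggregate cash per account per day
    return sorted({acct for (acct, _), total in daily.items()
                   if total > CTR_THRESHOLD})

txns = [
    ("A-1", "2024-01-05", 600_000),
    ("A-1", "2024-01-05", 500_000),   # same-day total 1.1m -> CTR required
    ("B-2", "2024-01-05", 900_000),   # below threshold
]
print(accounts_needing_ctr(txns))     # -> ['A-1']
```

Note that the aggregation is what catches the first case: neither individual deposit crosses the threshold, but their same-day total does.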
In 1991, the Proceeds of Crime (Money Laundering) Act was brought into force in Canada to give legal effect to the former FATF Forty Recommendations by establishing record keeping and client identification requirements in the financial sector to facilitate the investigation and prosecution of money laundering offences under the Criminal Code and the Controlled Drugs and Substances Act.
In 2000, the Proceeds of Crime (Money Laundering) Act was amended to expand the scope of its application and to establish a financial intelligence unit with national control over money laundering, namely FINTRAC.
In December 2001, the scope of the Proceeds of Crime (Money Laundering) Act was again expanded by amendments enacted under the Anti-Terrorism Act with the objective of deterring terrorist activity by cutting off sources and channels of funding used by terrorists in response to 9/11. The Proceeds of Crime (Money Laundering) Act was renamed the Proceeds of Crime (Money Laundering) and Terrorist Financing Act.
In December 2006, the Proceeds of Crime (Money Laundering) and Terrorist Financing Act was further amended, in part, in response to pressure from the FATF for Canada to tighten its money laundering and financing of terrorism legislation. The amendments expanded the client identification, record-keeping and reporting requirements for certain organizations and included new obligations to report attempted suspicious transactions and outgoing and incoming international electronic fund transfers, undertake risk assessments and implement written compliance procedures in respect of those risks.
The amendments also enabled greater money laundering and terrorist financing intelligence-sharing among enforcement agencies.
In Canada, casinos, money service businesses, notaries, accountants, banks, securities brokers, life insurance agencies, real estate salespeople and dealers in precious metals and stones are subject to the reporting and record keeping obligations under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act.
The fourth and latest iteration of the EU’s anti-money laundering directive (AMLD IV) was published on 5 June 2015, after clearing its last legislative stop at the European Parliament. The new directive brings the EU’s anti-money laundering laws more in line with the US’s, which is welcome news for financial institutions that are operating in both jurisdictions.
Lack of harmonization in AML requirements between the US and EU has complicated the compliance efforts of global institutions that are looking to standardize the Know Your Customer (KYC) component of their AML programs across key jurisdictions. AMLD IV promises to better align the AML regimes by adopting a more risk-based approach compared to its predecessor, AMLD III.
Certain components of the directive, however, go beyond current requirements in both the EU and US, imposing new implementation challenges on banks. For instance, more public officials are brought within the scope of the directive, and EU member states are required to establish new registries of “beneficial owners” (i.e., those who ultimately own or control each company) which will impact banks. AMLD IV became effective 25 June 2015.
In 2002, the Parliament of India passed an act called the Prevention of Money Laundering Act, 2002. The main objectives of this act are to prevent money-laundering and to provide for confiscation of property either derived from, or involved in, money-laundering.
Section 12 (1) describes the obligations that banks, other financial institutions, and intermediaries have to
- (a) Maintain records that detail the nature and value of transactions, whether such transactions comprise a single transaction or a series of connected transactions, and where these transactions take place within a month.
- (b) Furnish information on transactions referred to in clause (a) to the Director within the time prescribed, including records of the identity of all its clients.
Section 12 (2) prescribes that the records referred to in sub-section (1) as mentioned above must be maintained for ten years after the transactions are concluded. This is handled by the Indian Income Tax Department.
Most money laundering activities in India are through political parties, corporate companies and the shares market. These are investigated by the Enforcement Directorate and Indian Income Tax Department. According to Government of India, out of the total tax arrears of ₹2,480 billion (US$39 billion) about ₹1,300 billion (US$20 billion) pertain to money laundering and securities scam cases.
Bank accountants must record all transactions over Rs. 1 million and maintain such records for 10 years. Banks must also file cash transaction reports (CTRs) and suspicious transaction reports for transactions over Rs. 1 million within 7 days of initial suspicion. They must submit their reports to the Enforcement Directorate and Income Tax Department.
Singapore's legal framework for combating money laundering is contained principally in the following instruments:
- The Corruption, Drug Trafficking and Other Serious Crimes (Confiscation of Benefits) Act (CDSA). This statute criminalises money laundering and imposes the requirement for persons to file suspicious transaction reports (STRs) and make a disclosure whenever physical currency or goods exceeding S$20,000 are carried into or out of Singapore.
- The Mutual Assistance in Criminal Matters Act (MACMA). This statute sets out the framework for mutual legal assistance in criminal matters.
- Legal instruments issued by regulatory agencies (such as the Monetary Authority of Singapore (MAS), in relation to financial institutions (FIs)) imposing requirements to conduct customer due diligence (CDD).
The term ‘money laundering’ is not used as such within the CDSA. Part VI of the CDSA criminalises the laundering of proceeds generated by criminal conduct and drug trafficking via the following offences:
- The assistance of another person in retaining, controlling or using the benefits of drug dealing or criminal conduct under an arrangement (whether by concealment, removal from jurisdiction, transfer to nominees or otherwise) [section 43(1)/44(1)].
- The concealment, conversion, transfer or removal from the jurisdiction, or the acquisition, possession or use of benefits of drug dealing or criminal conduct [section 46(1)/47(1)].
- The concealment, conversion, transfer or removal from the jurisdiction of another person’s benefits of drug dealing or criminal conduct [section 46(2)/47(2)].
- The acquirement, possession or use of another person’s benefits of drug dealing or criminal conduct [section 46(3)/47(3)].
Money laundering and terrorist financing legislation in the UK is governed by four Acts of primary legislation, supplemented by secondary regulations:
- Terrorism Act 2000
- Anti-terrorism, Crime and Security Act 2001
- Proceeds of Crime Act 2002
- Serious Organised Crime and Police Act 2005
- Money Laundering Regulations 2007
- Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017
Money Laundering Regulations are designed to protect the UK financial system, as well as to prevent and detect crime. If a business is covered by these regulations, controls must be put in place to prevent it being used for money laundering.
The Proceeds of Crime Act 2002 contains the primary UK anti-money laundering legislation, including provisions requiring businesses within the "regulated sector" (banking, investment, money transmission, certain professions, etc.) to report to the authorities suspicions of money laundering by customers or others.
Money laundering is broadly defined in the UK. In effect any handling or involvement with any proceeds of any crime (or monies or assets representing the proceeds of crime) can be a money laundering offence. An offender's possession of the proceeds of his own crime falls within the UK definition of money laundering. The definition also covers activities within the traditional definition of money laundering, as a process that conceals or disguises the proceeds of crime to make them appear legitimate.
Unlike certain other jurisdictions (notably the US and much of Europe), UK money laundering offences are not limited to the proceeds of serious crimes, nor are there any monetary limits. Financial transactions need no money laundering design or purpose for UK laws to consider them a money laundering offence. A money laundering offence under UK legislation need not even involve money, since the money laundering legislation covers assets of any description. In consequence, any person who commits an acquisitive crime (i.e., one that produces some benefit in the form of money or an asset of any description) in the UK inevitably also commits a money laundering offence under UK legislation.
This applies also to a person who, by criminal conduct, evades a liability (such as a taxation liability)—which lawyers call "obtaining a pecuniary advantage"—as he is deemed thereby to obtain a sum of money equal in value to the liability evaded.
The principal money laundering offences carry a maximum penalty of 14 years' imprisonment.
Secondary regulation is provided by the Money Laundering Regulations 2003, which were replaced by the Money Laundering Regulations 2007. They are directly based on the EU directives 91/308/EEC, 2001/97/EC and 2005/60/EC.
One consequence of the Act is that solicitors, accountants, tax advisers, and insolvency practitioners who suspect (as a consequence of information received in the course of their work) that their clients (or others) have engaged in tax evasion or other criminal conduct that produced a benefit, now must report their suspicions to the authorities (since these entail suspicions of money laundering). In most circumstances it would be an offence, "tipping-off", for the reporter to inform the subject of his report that a report has been made. These provisions do not however require disclosure to the authorities of information received by certain professionals in privileged circumstances or where the information is subject to legal professional privilege. Others that are subject to these regulations include financial institutions, credit institutions, estate agents (which includes chartered surveyors), trust and company service providers, high value dealers (who accept cash equivalent to €15,000 or more for goods sold), and casinos.
Professional guidance (which is submitted to and approved by the UK Treasury) is provided by industry groups including the Joint Money Laundering Steering Group, the Law Society, and the Consultative Committee of Accountancy Bodies (CCAB). However, there is no obligation on banking institutions to routinely report monetary deposits or transfers above a specified value. Instead, reports must be made of all suspicious deposits or transfers, irrespective of their value.
The reporting obligations include reporting suspicious gains from conduct in other countries that would be criminal if it took place in the UK. Exceptions were later added for certain activities legal where they took place, such as bullfighting in Spain.
More than 200,000 reports of suspected money laundering are submitted annually to authorities in the UK (there were 240,582 reports in the year ended 30 September 2010. This was an increase from the 228,834 reports submitted in the previous year). Most of these reports are submitted by banks and similar financial institutions (there were 186,897 reports from the banking sector in the year ended 30 September 2010).
Although 5,108 different organisations submitted suspicious activity reports to the authorities in the year ended 30 September 2010, just four organisations submitted approximately half of all reports, and the top 20 reporting organisations accounted for three-quarters of all reports.
The offence of failing to report a suspicion of money laundering by another person carries a maximum penalty of 5 years' imprisonment.
Bureaux de change
All UK Bureaux de change are registered with Her Majesty's Revenue and Customs, which issues a trading licence for each location. Bureaux de change and money transmitters, such as Western Union outlets, in the UK fall within the "regulated sector" and are required to comply with the Money Laundering Regulations 2007. Checks can be carried out by HMRC on all Money Service Businesses.
In South Africa, the Financial Intelligence Centre Act (2001) and subsequent amendments have added responsibilities to the FSB to combat money laundering.
The approach in the United States to stopping money laundering is usually broken into two areas: preventive (regulatory) measures and criminal measures.
In an attempt to prevent dirty money from entering the U.S. financial system in the first place, the United States Congress passed a series of laws, starting in 1970, collectively known as the Bank Secrecy Act (BSA). These laws, contained in sections 5311 through 5332 of Title 31 of the United States Code, require financial institutions, which under the current definition include a broad array of entities, including banks, credit card companies, life insurers, money service businesses and broker-dealers in securities, to report certain transactions to the United States Department of the Treasury. Cash transactions in excess of a certain amount must be reported on a currency transaction report (CTR), identifying the individual making the transaction as well as the source of the cash. The law originally required all transactions of US$5,000 or more to be reported, but due to excessively high levels of reporting the threshold was raised to US$10,000. The U.S. is one of the few countries in the world to require reporting of all cash transactions over a certain limit, although certain businesses can be exempt from the requirement. Additionally, financial institutions must report on a suspicious activity report (SAR) any transaction that they deem "suspicious", defined as knowing or suspecting that the funds come from illegal activity or disguise funds from illegal activity, that the transaction is structured to evade BSA requirements or appears to serve no known business or apparent lawful purpose, or that the institution is being used to facilitate criminal activity. Attempts by customers to circumvent the BSA, generally by structuring cash deposits into amounts lower than US$10,000 by breaking them up and depositing them on different days or at different locations, also violate the law.
The financial database created by these reports is administered by the U.S.'s Financial Intelligence Unit (FIU), called the Financial Crimes Enforcement Network (FinCEN), located in Vienna, Virginia. The reports are made available to U.S. criminal investigators, as well as other FIU's around the globe, and FinCEN conducts computer assisted analyses of these reports to determine trends and refer investigations.
The BSA requires financial institutions to engage in customer due diligence, sometimes known in the parlance as "know your customer" (KYC). This includes obtaining satisfactory identification to give assurance that the account is in the customer's true name, and having an understanding of the expected nature and source of the money that flows through the customer's accounts. Other classes of customers, such as those with private banking accounts and those of foreign government officials, are subjected to enhanced due diligence because the law deems that those types of accounts are a higher risk for money laundering. All accounts are subject to ongoing monitoring, in which internal bank software scrutinizes transactions and flags for manual inspection those that fall outside certain parameters. If a manual inspection reveals that the transaction is suspicious, the institution should file a Suspicious Activity Report.
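The ongoing monitoring described above, comparing each transaction against parameters derived from the customer's expected activity, can be sketched as follows. The profile fields, the 1.5x multiplier, and the example data are hypothetical; they stand in for whatever parameters a bank derives from its KYC information.

```python
# Sketch of parameter-based ongoing monitoring: transactions falling outside
# the customer's expected activity profile are flagged for manual review.
# Profile fields and the multiplier are hypothetical illustrations.

def flag_for_review(profile: dict, txn: dict) -> bool:
    """profile: expected activity from KYC; txn: a single transaction."""
    if txn["amount"] > profile["expected_max_amount"] * 1.5:
        return True                               # well above the declared range
    if txn["country"] not in profile["expected_countries"]:
        return True                               # unexpected jurisdiction
    return False

profile = {"expected_max_amount": 5_000, "expected_countries": {"US", "CA"}}
print(flag_for_review(profile, {"amount": 12_000, "country": "US"}))   # -> True
print(flag_for_review(profile, {"amount": 2_000, "country": "US"}))    # -> False
```

A flag here only triggers manual inspection; as the text says, it is the human review that decides whether a Suspicious Activity Report is filed.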
The regulators of the industries involved are responsible to ensure that the financial institutions comply with the BSA. For example, the Federal Reserve and the Office of the Comptroller of the Currency regularly inspect banks, and may impose civil fines or refer matters for criminal prosecution for non-compliance. A number of banks have been fined and prosecuted for failure to comply with the BSA. Most famously, Riggs Bank, in Washington D.C., was prosecuted and functionally driven out of business as a result of its failure to apply proper money laundering controls, particularly as it related to foreign political figures.
In addition to the BSA, the U.S. imposes controls on the movement of currency across its borders, requiring individuals to report the transportation of cash in excess of US$10,000 on a form called Report of International Transportation of Currency or Monetary Instruments (known as a CMIR). Likewise, businesses, such as automobile dealerships, that receive cash in excess of US$10,000 must file a Form 8300 with the Internal Revenue Service, identifying the source of the cash.
In the United States, there are perceived consequences of anti-money laundering (AML) regulations. These unintended consequences include FinCEN's publishing of a list of "risky businesses," which many believe unfairly targeted money service businesses (MSBs). The publication of this list, and the subsequent fallout in which banks indiscriminately de-risked MSBs, is referred to as Operation Choke Point.
Money laundering has been criminalized in the United States since the Money Laundering Control Act of 1986. The law, contained at section 1956 of Title 18 of the United States Code, prohibits individuals from engaging in a financial transaction with proceeds that were generated from certain specific crimes, known as "specified unlawful activities" (SUAs). The law requires that an individual specifically intend in making the transaction to conceal the source, ownership or control of the funds. There is no minimum threshold of money, and no requirement that the transaction succeeded in actually disguising the money. A "financial transaction" has been broadly defined, and need not involve a financial institution, or even a business. Merely passing money from one person to another, with the intent to disguise the source, ownership, location or control of the money, has been deemed a financial transaction under the law. The possession of money without either a financial transaction or an intent to conceal is not a crime in the United States. Besides money laundering, the law contained in section 1957 of Title 18 of the United States Code, prohibits spending more than US$10,000 derived from an SUA, regardless of whether the individual wishes to disguise it. It carries a lesser penalty than money laundering, and unlike the money laundering statute, requires that the money pass through a financial institution.
According to records compiled by the United States Sentencing Commission, in a typical year such as 2009 the United States Department of Justice convicted a little over 81,000 people; of these, approximately 800 were convicted of money laundering as the primary or most serious charge. The Anti-Drug Abuse Act of 1988 expanded the definition of financial institution to include businesses such as car dealers and real estate closing personnel and required them to file reports on large currency transactions. It also required verification of the identity of those who purchase monetary instruments over $3,000. The Annunzio-Wylie Anti-Money Laundering Act of 1992 strengthened sanctions for BSA violations, required so-called "Suspicious Activity Reports" and eliminated the previously used "Criminal Referral Forms", required verification and recordkeeping for wire transfers, and established the Bank Secrecy Act Advisory Group (BSAAG). The Money Laundering Suppression Act of 1994 required banking agencies to review and enhance training, develop anti-money laundering examination procedures, review and enhance procedures for referring cases to law enforcement agencies, and streamline the currency transaction report exemption process. It also required each money services business (MSB) to be registered by an owner or controlling person, required every MSB to maintain a list of businesses authorized to act as agents in connection with the financial services offered by the MSB, made operating an unregistered MSB a federal crime, and recommended that states adopt uniform laws applicable to MSBs.
The Money Laundering and Financial Crimes Strategy Act of 1998 required banking agencies to develop anti-money laundering training for examiners, required the Department of the Treasury and other agencies to develop a "National Money Laundering Strategy", and created the "High Intensity Money Laundering and Related Financial Crime Area" (HIFCA) Task Forces to concentrate law enforcement efforts at the federal, state and local levels in zones where money laundering is prevalent. HIFCA zones may be defined geographically or can be created to address money laundering in an industry sector, a financial institution, or a group of financial institutions.
The Intelligence Reform & Terrorism Prevention Act of 2004 amended the Bank Secrecy Act to require the Secretary of the Treasury to prescribe regulations requiring certain financial institutions to report cross-border electronic transmittals of funds, if the Secretary determines that reporting is "reasonably necessary" for anti-money laundering and combating the financing of terrorism (AML/CFT) purposes.
- Charter House Bank: Charter House Bank in Kenya was placed under statutory management in 2006 by the Central Bank of Kenya after it was discovered that the bank was being used for money laundering through multiple accounts with missing customer information. More than $1.5 billion had been laundered before the scam was uncovered.
- Bank of Credit and Commerce International: Unknown amount, estimated in billions, of criminal proceeds, including drug trafficking money, laundered during the mid-1980s.
- Bank of New York: US$7 billion of Russian capital flight laundered through accounts controlled by bank executives, late 1990s.
- Ferdinand Marcos: Unknown amount, estimated at US$10 billion of government assets laundered through banks and financial institutions in the United States, Liechtenstein, Austria, Panama, Netherlands Antilles, Cayman Islands, Vanuatu, Hong Kong, Singapore, Monaco, the Bahamas, the Vatican and Switzerland.
- HSBC, in December 2012, paid a record $1.9 billion in fines for laundering hundreds of millions of dollars for drug traffickers, terrorists and sanctioned governments such as Iran. The laundering occurred throughout the 2000s.
- Liberty Reserve, in May 2013, was seized by United States federal authorities for laundering $6 billion.
- Institute for the Works of Religion: Italian authorities investigated suspected money laundering transactions amounting to US$218 million made by the IOR to several Italian banks.
- Nauru: US$70 billion of Russian capital flight laundered through unregulated Nauru offshore shell banks, late 1990s.
- Sani Abacha: US$2–5 billion of government assets laundered through banks in the UK, Luxembourg, Jersey (Channel Islands), and Switzerland, by the president of Nigeria.
- Standard Chartered: paid $330 million in fines for laundering hundreds of billions of dollars for Iran. The scheme ran in the 2000s, lasting "nearly a decade to hide 60,000 transactions worth $250 billion".
- Standard Bank: Standard Bank South Africa London Branch – The Financial Conduct Authority (FCA) has fined Standard Bank PLC (Standard Bank) £7,640,400 for failings relating to its anti-money laundering (AML) policies and procedures over corporate and private bank customers connected to politically exposed persons (PEPs).
- BNP Paribas, in June 2014, pleaded guilty to falsifying business records and conspiracy, having violated U.S. sanctions against Cuba, Iran, and Sudan. It agreed to pay an $8.9 billion fine, the largest ever for violating U.S. sanctions.
- BSI Bank, in May 2017, was shut down by the Monetary Authority of Singapore for serious breaches of anti-money laundering requirements, poor management oversight of the bank's operations, and gross misconduct of some of the bank's staff.
- Jose Franklin Jurado-Rodriguez, a Harvard College and Columbia University Graduate School of Arts and Sciences Economics Department alumnus, was convicted in Luxembourg in "June 1990 in what was one of the largest drug money laundering cases ever brought in Europe" and in the US in 1996 of money laundering for the Cali Cartel kingpin Jose Santacruz Londono. Jurado-Rodriguez specialized in "smurfing".
Digital money and money laundering
To prevent the use of decentralized digital money such as Bitcoin for the profit of crime and corruption, Australia is planning to strengthen its anti-money laundering laws. Bitcoin's characteristics make this difficult: it is deterministic and protocol-based, cannot be censored, can circumvent national laws when services like Tor are used to obfuscate transaction origins, and relies entirely on cryptography rather than a central entity operating under a know-your-customer (KYC) framework. There are several cases in which criminals have cashed out significant amounts of Bitcoin after ransomware attacks, drug dealing, cyber fraud and gunrunning. Other damages, such as The DAO being drained of Ether, cannot be classified as money laundering under any legal definition, because decentralized virtual environments are legally stateless and cannot be intervened in by a governing body. That incident prompted debate over the definition of money laundering in a stateless environment and led to the formation of Ethereum Classic.
- Bank Secrecy Act
- Currency transaction report
- Customer Identification Program
- Financial Action Task Force on Money Laundering
- Financial Crimes Enforcement Network
- Global RADAR
- Money trail
- Michael H. O'Keefe
- Office of Foreign Assets Control
- Offshore banking
- Organized crime
- Penny stock scam
- Politically exposed person
- Round-tripping (finance)
- Shell (corporation)
- Terrorist financing
- USA PATRIOT Act
- White-collar crime
- World Bank residual model
Land transformation by humans: A review
In recent decades, changes that human activities have wrought in Earth’s life support system have worried many people. The human population has doubled in the past 40 years and is projected to increase by the same amount again in the next 40. The expansion of infrastructure and agriculture necessitated by this population growth has quickened the pace of land transformation and degradation. We estimate that humans have modified >50% of Earth’s land surface. The current rate of land transformation, particularly of agricultural land, is unsustainable. We need a lively public discussion of the problems resulting from population pressures and the resulting land degradation.
Manuscript received 14 Feb. 2012; accepted 16 Aug. 2012
“Global Change” refers to changes that alter the atmosphere and oceans, and hence are experienced globally. It also refers to local changes that are so common as to be, collectively, of global importance; these include changes in climate, in composition of air and water, in biodiversity, and in land use (Vitousek, 1992; Rockström et al., 2009). Herein, we focus on land use (Fig. 1). Vitousek (1992, p. 7) remarks that this may be the “most significant component of global change” for decades to come.
Land transformation: Before New York (top: reproduced from National Geographic Magazine, September 2009, with permission of the National Geographic Society; bottom: New York, USA, © Robert Clark/INSTITUTE for Artist Management).
Many changes in land use are a consequence of the increase in human population and the resulting demand for more resources—among them, minerals, soil, and water. This demand now exceeds that which Earth can provide sustainably. The long-term sustainability issue is more serious than, but exacerbated by, climate change.
By the middle of the nineteenth century, the extent to which humans had already modified the landscape was recognized by George Perkins Marsh (1864). Marsh understood that Earth’s ability to provide the many ecosystem services upon which we depend was exhaustible.
Over the last half century, numerous impacts of changes in land use have been identified (Lambin and Geist, 2006, p. 1). In the 1970s, it was recognized that changes in albedo and evapotranspiration due to clearing and overgrazing had led to local decreases in rainfall. In the 1980s, the role of land-use changes in the carbon cycle was highlighted. Many papers since the late 1990s have drawn attention to the effects of land use on biodiversity, ecosystem services, and soil degradation.
Humans are likely the premier geomorphic agent currently sculpting Earth’s surface (Hooke, 1994). Earth is moved and the landscape modified, commonly degraded, by many of our activities. Mining, infrastructure expansion, and urban development are obvious ones. Plowing moves huge amounts of earth and leads to accelerated erosion. Grazing and logging also increase erosion. Much of the eroded sediment ends up as colluvium on hillslopes and as alluvium in floodplains (Trimble, 1999; Wilkinson and McElroy, 2007), thus subtly altering the shape of the land. The rest is carried away by streams and rivers.
We are land animals. The resources upon which we depend come largely from the land. The land and the other inhabitants it supports, its biodiversity, provide us with food, fiber, mineral resources, medicines, industrial products, and innumerable ecosystem services like cleansing our waste water, dampening flood peaks, breaking down rocks into productive soil, maintaining the supply of oxygen in the atmosphere, and supporting pollinators for many crops and predators that control many agricultural pests (MEA, 2003 [see esp. chapter 2, p. 49–70]; TEEB, 2010). The diversity of species contributes to the stability or resilience of this life support system, facilitating continuation of services despite disturbances (Rockström et al., 2009). Degrading the land degrades our life support system. The land is an essential resource for future generations.
Land Area Modified by Human Action
Assessments of the percentage of ice-free land affected by human action vary from 20% to 100%. Humans appropriate 20% to 40% of Earth’s potential net primary biological production (Haberl et al., 2007; Imhoff et al., 2004; Vitousek et al., 1986). Nearly 24% of Earth’s surface area likely experienced decline in ecosystem function and productivity between 1981 and 2003 (Bai et al., 2008). As of 1995, ~43% of Earth’s surface area had experienced human-induced degradation (Daily, 1995). Ellis and Ramankutty (2008) concluded that more than 75% of Earth’s ice-free land area could no longer be considered wild. Of Earth’s ice-free land area, 83% is likely directly influenced by human beings (Sanderson et al., 2002). Our pollutants affect plant and animal physiology worldwide (McKibben, 1989, e.g., p. 38, 58).
The amount of earth moved by humans and the history of human earth moving have been discussed previously (Hooke, 1994, 2000). Herein, we consider the area of the landscape we humans have reconfigured.
Changes through Time in Cropland, Pasture, Forest, and Urban Land
In pioneering studies, Ramankutty and Foley (1999) and Klein Goldewijk (2001, and pers. comm., March 2010) assessed the land area used as cropland or pasture (Klein Goldewijk, 2001, only) and that covered by forest (supplemental data [footnote 1]) during the past 300 years. Recently, they have updated some of their estimates (Ramankutty et al., 2008; Klein Goldewijk et al., 2011), and Pongratz et al. (2008, and pers. comm., Jan. 2012) have presented new ones. All of these studies are based on data collected by the Food and Agriculture Organization of the United Nations since 1961 (http://faostat.fao.org). The authors then hindcast and sometimes forecast using satellite, ground-truth, and historical data (Fig. 2). Ramankutty et al. (2008) give values only for 2000. Thus, we adjusted the Ramankutty and Foley (1999) values for cropland in earlier years downward by the percent difference between the Ramankutty et al. (2008) and projected Ramankutty and Foley (1999) values for 2000 (see the supplemental data, Sec. C, for additional details [footnote 1]).
Changes in land use through time with extrapolations to 2050 AD. Population data and projections are from UNPD (1999). See text and supplemental data (Sec. C; see text footnote 1) for other sources and estimates. KG’01—Klein Goldewijk (2001); KG’11—Klein Goldewijk et al. (2011); P+—Pongratz et al. (2008); R+— Ramankutty et al. (2008); RF—Ramankutty and Foley (1999).
1 GSA supplemental data item 2012340, supplemental information, definitions, figures, tables, and references, is online at www.geosociety.org/pubs/ft2012.htm. You can also request a copy from GSA Today, P.O. Box 9140, Boulder, CO 80301-9140, USA; .
Noteworthy in Figure 2 are the increases in cropland and pasture over the past 300 years, the corresponding decrease in forest, and the recent decreases in the rate of change of all three.
Recent estimates of the global urban area range from 0.3 to 3.5 Mkm2 (Potere and Schneider, 2007). The wide range is due to differences in the definition of “urban” and in the methodology for identifying areas that are urban. An urban area is one in which the population density exceeds a minimum value. Different countries, however, use different minima, ranging from <200 to 4000 people/km2. Methodologically, the problem is the lack of a standard remote sensing technique for identifying urban areas. Common approaches use either the intensity of night lights or the extent of impervious ground. The former varies spatially, because more affluent countries use more power. The latter overlooks open space around houses that, nonetheless, has been modified by human action.
We think of “urban areas” as expanses of contiguous land, divided into parcels (≤~1 ha) with different owners, and modified for residential or commercial purposes. This includes land covered by structures or pavement as well as intervening land modified to form gardens or parks. The estimate of 3.5 Mkm2 (CIESIN, 2010) best reflects this description. It is based on night lights, censuses, and a variety of supplementary data, and is as of 2005. We projected backward and forward using CIESIN’s population density (796 people/km2) and estimates of urban population from UNPD (2004, 2007a, 2007b) and Kelley and Williamson (1984).
Land Modified by Human Action as of 2007
In Table 1, we present a more comprehensive estimate, as of 2007, of the land area modified either directly by human earth moving or indirectly by actions causing changes in sediment fluxes.
Land area modified by human action (as of ca. 2007)
To obtain the areas of cropland and pasture in Table 1, 16.7 ± 2.4 Mkm2 and 33.5 ± 5.7 Mkm2, respectively, we first adjusted the Ramankutty et al. (2008) value for pasture in 2000 downward by the mean decrease between 2000 and 2007 in the FAO (2009) and Pongratz et al. (2008) estimates. We then fit a 4th order polynomial through the Klein Goldewijk et al. (2011) and Ramankutty et al. (2008) cropland data and extrapolated them to 2007. Finally, we averaged these values with those of Pongratz et al. (2008).
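The fitting-and-extrapolation step can be sketched as follows. The data points here are invented placeholders, not the actual Klein Goldewijk / Ramankutty series; centering and scaling the time axis keeps the 4th-order fit well conditioned.

```python
import numpy as np

# Hypothetical cropland extents (Mkm2) at 50-year intervals.
years = np.array([1700, 1750, 1800, 1850, 1900, 1950, 2000])
cropland_mkm2 = np.array([2.7, 3.7, 5.4, 8.0, 12.0, 14.5, 15.0])

# Center and scale the time axis so the polynomial fit is well conditioned.
t = (years - 1850) / 50.0
coeffs = np.polyfit(t, cropland_mkm2, deg=4)

# Extrapolate the fitted polynomial to 2007.
estimate_2007 = np.polyval(coeffs, (2007 - 1850) / 50.0)
```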
Erosion rates are higher on agricultural land; typical estimates are 15 t ha−1y−1 for cropland and 5 t ha−1y−1 for pasture (e.g., USDA, 1989; Pimentel et al., 1995; Montgomery, 2007). Of this, ~70% is likely redeposited nearby on slopes and floodplains (Wilkinson and McElroy, 2007). Using population estimates, the per capita need for agricultural land, and a mean deposition of 1 ± 0.5 m, we estimate that the area thus reshaped in the past five millennia is ~5.3 ± 2.0 Mkm2 (see supplemental data, Sec. D [footnote 1]).
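For scale, the conversion from erosion rate to resurfaced area can be sketched as below. The erosion rates, the 70% redeposition fraction, and the 1 m deposit depth are from the text; the soil bulk density is our assumption. Applying today's agricultural extents gives only the present-day rate of resurfacing; the ~5.3 Mkm2 figure comes from integrating a calculation of this kind over five millennia of much smaller agricultural areas.

```python
CROPLAND_MKM2, PASTURE_MKM2 = 16.7, 33.5        # today's extents (from the text)
EROSION_T_PER_HA_YR = {"cropland": 15.0, "pasture": 5.0}
REDEPOSITED_FRACTION = 0.70                      # redeposited nearby on slopes/floodplains
BULK_DENSITY_T_PER_M3 = 1.5                      # assumed soil bulk density
DEPOSIT_DEPTH_M = 1.0                            # text: 1 +/- 0.5 m

HA_PER_MKM2 = 1e8                                # 1 Mkm2 = 10^8 ha

def annual_redeposited_volume_m3(cropland_mkm2: float, pasture_mkm2: float) -> float:
    """Volume of eroded soil redeposited per year at the given agricultural extents."""
    mass_t = (cropland_mkm2 * HA_PER_MKM2 * EROSION_T_PER_HA_YR["cropland"]
              + pasture_mkm2 * HA_PER_MKM2 * EROSION_T_PER_HA_YR["pasture"])
    return mass_t * REDEPOSITED_FRACTION / BULK_DENSITY_T_PER_M3

# Area blanketed to a 1 m depth by one year of redeposition at today's extents
# (1 Mkm2 = 10^12 m^2); roughly 0.02 Mkm2 per year.
area_mkm2_per_yr = (annual_redeposited_volume_m3(CROPLAND_MKM2, PASTURE_MKM2)
                    / DEPOSIT_DEPTH_M / 1e12)
```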
Logging operations disturb forest soils and thus also increase erosion (Elliot et al., 1998). Unlike agricultural land that is reused annually, however, logged areas recover as regrowth occurs. Furthermore, part of the logged land may not be degraded. We estimate the global area logged annually by dividing the production (3.5 billion m3 in 2007) by an estimate of the yield per hectare (15 ± 5 m3 ha−1). We assumed that 50% of the area would have been disturbed during the year in which it was cut and that, due to regrowth, half of the area remaining disturbed in any given year would have recovered by the next. This calculation yielded a disturbed area of 2.4 ± 1.2 Mkm2 in 2007. The uncertainty is based on uncertainties of 50% in the regeneration rate, 25% in the area initially disturbed, and 33% in the yield per hectare (see supplemental data, Sec. E [footnote 1]).
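Read as a geometric series, the regrowth assumption implies a steady-state disturbed area of about one year's cut. The sketch below takes the global roundwood harvest as roughly 3.5 billion m3 (the FAO-scale figure the arithmetic implies) and treats "half of the remaining disturbed area recovers each year" as our reading of the text.

```python
PRODUCTION_M3 = 3.5e9         # global roundwood harvest, 2007 (approximate)
YIELD_M3_PER_HA = 15.0        # yield per hectare logged
INITIALLY_DISTURBED = 0.5     # fraction of cut area disturbed in the cut year
ANNUAL_RECOVERY = 0.5         # fraction of still-disturbed area recovering per year

annual_cut_ha = PRODUCTION_M3 / YIELD_M3_PER_HA

# Steady state: 0.5*A * (1 + 0.5 + 0.25 + ...) = 0.5*A / 0.5 = A,
# i.e. the disturbed area converges to about one year's cut.
steady_state_ha = INITIALLY_DISTURBED * annual_cut_ha / ANNUAL_RECOVERY

steady_state_mkm2 = steady_state_ha / 1e8   # 1 Mkm2 = 10^8 ha
# About 2.3 Mkm2, consistent with the 2.4 +/- 1.2 Mkm2 quoted in the text.
```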
For forested area, we extrapolated the Ramankutty and Foley (1999) time series using a 4th order polynomial and averaged it with the FAO (2009) estimates, yielding 41.3 ± 2.6 Mkm2. This is identical to the Pongratz et al. (2008) estimate for forest plus shrubland. As this includes both natural and planted forests, we subtracted the latter, 2.7 Mkm2 (FAO, 2009). We also subtracted the area disturbed by logging, yielding 36.2 ± 2.9 Mkm2.
To estimate the area of urban development in 2007, 3.7 ± 1.0 Mkm2, we extrapolated the CIESIN (2010) estimate of the area in 2005, using an annual growth rate of 2.1% (UNPD, 2007a).
The area occupied by rural housing and businesses, 4.2 ± 1.4 Mkm2, is assessed from the rural population in 2007 (UNPD, 2007b), assuming that, on average, every eight people would disturb a hectare of rural land.
To calculate the land area affected by roads in rural areas, we used data for 2002–2007 on the total lengths of roads of various classes in 188 countries (IRF-WRS, 2009). Other data suggest that 70% of these roads are rural. We assigned widths to these various road classes, based on standards in the United States (supplemental data, Sec. F [footnote 1]). Assuming an uncertainty of ±15% in road widths and in the percentage of rural roads, we obtained 0.5 ± 0.1 Mkm2.
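The road calculation is length times width per class, summed and scaled by the rural fraction. The class lengths and widths below are invented placeholders, not the IRF-WRS (2009) data:

```python
# Hypothetical road classes: (global length in km, assumed width in km).
road_classes = {
    "motorway":  (1.0e6, 0.030),
    "primary":   (8.0e6, 0.020),
    "secondary": (25.0e6, 0.015),
}
rural_fraction = 0.70   # share of road length assumed to be rural

rural_area_km2 = rural_fraction * sum(
    length_km * width_km for length_km, width_km in road_classes.values()
)
rural_area_Mkm2 = rural_area_km2 / 1e6
```

With these placeholder figures the result falls in the same range as the text's 0.5 ± 0.1 Mkm2.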
A comprehensive list of reservoir volumes has been compiled by the International Committee on Large Dams and updated by Chao et al. (2008). They sum to ~10,800 km3. B.F. Chao (pers. comm., 2011) thinks the mean reservoir depth is ~50–100 m. Noting the large number of small reservoirs, we chose the lower number, yielding a total surface area of 0.2 ± 0.1 Mkm2.
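The surface-area estimate is simply total volume divided by mean depth; a one-line check using the lower end of the quoted depth range:

```python
total_volume_km3 = 10_800      # summed reservoir volumes (Chao et al., 2008)
mean_depth_km = 0.050          # 50 m, the lower end of the 50-100 m range

surface_area_Mkm2 = total_volume_km3 / mean_depth_km / 1e6
```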
Data on the global length of railways are from IUR (2008). Widths are from ADIF (2005). The product is 0.03 Mkm2. We have no basis for estimating an uncertainty.
We found summary data on the area disturbed by mining for 14 regions or countries representing 22% of Earth’s ice-free land area, all continents except Africa, and the two principal economic powerhouses of today’s economy, China and the United States (supplemental data, Sec. G [footnote 1]). The weighted mean is 0.3%. Assuming that this percentage applies globally, we obtain ~0.4 Mkm2. For comparison, Norse et al. (1992) suggest that the area is between 0.5 and 1.0 Mkm2, but the basis for this estimate is unclear.
Our subtotal for land disturbed by human infrastructure (Table 1) is ~9.0 ± 1.7 Mkm2. We believe this is a conservative estimate because we have not evaluated the land area modified by coastal or river engineering projects; by construction of infrastructure like levees, electric power grids or wind farms; or by infrastructure from the distant past (e.g., prehistoric archaeological sites).
The data in Table 1 suggest that ~70 Mkm2, or >50% of Earth’s ice-free land area, has been directly modified by human action involving moving earth or changing sediment fluxes. Many of these activities have indirect consequences well beyond the area directly affected. Converting land to agriculture leads to local extinctions of biota in adjacent areas; the insecticides and herbicides used diffuse into the surroundings, killing non-target species (Ehrlich and Ehrlich, 1981); and fertilizers foul our streams and rivers, leading to dead zones in the ocean (Halpern et al., 2008). Invasive species commonly find footholds on surfaces disturbed by agricultural activities, and can severely reduce the usefulness of large areas (e.g., Tobler, 2007). Toxic chemicals spewed into the air from urban centers rain out over vast areas downwind. Others, like CO2, diffuse over the entire globe. Roads and railways fragment ecosystems, a key element of habitat destruction and a principal cause of loss of biodiversity (Vitousek et al., 1997; Sala et al., 2000), and runoff from them carries pollutants. The land area ecologically impacted by roads may be tens to hundreds of meters wider than the area physically disturbed (Forman, 2000). Runoff from mining areas is commonly contaminated and has a high sediment load, affecting hundreds of kilometers of riparian ecosystems. Dust raised by plowing and other human activities is deposited over distant surfaces. Dust commonly contains pathogens (Prospero et al., 2005) or heavy metals (Herut et al., 2001; Reynolds et al., 2010) that can have adverse effects on people and other organisms. Dust also accelerates melting of snow and ice on mountains, affecting water supplies downstream (Painter et al., 2010). Levees on rivers prevent natural water storage during floods, thus increasing damage downstream (e.g., Pinter et al., 2008).
Deforestation and construction projects involving earth moving on steep slopes too commonly result in catastrophic failures and in human deaths (Kellerer-Pirklbauer, 2002). Thus, the impact of land transformation is much larger than suggested by the numbers in Table 1.
These impacts reduce the ecosystem services we receive, seemingly for free, from the plants, animals, insects, and microbes with whom we share the planet (MEA, 2005; TEEB, 2010). The global annual value of these services is roughly twice the global GNP (Costanza et al., 1997; Daily, 1997). They are essential for human survival. Some are likely irreplaceable.
The data in Figure 2 suggest that the rate of change in area of cropland and pasture has decreased in the last few decades. Projected into the future, these trends suggest a peak and then a decline in the areas of both. Let’s focus on cropland, because that is the land use for which data are most robust and the one of most concern, given our swelling population (Fig. 2).
At least three trends are contributing to the decline in the rate of increase in cropland:
- Urban area is increasing, commonly at the expense of agricultural land. Between 2000 and 2030, worldwide, the loss of agricultural land to urbanization may be as much as ~15,000 km2 annually (Döös, 2002).
- There is a dearth of additional land suitable for agriculture. Of Earth’s land area, 70% to 80% is unsuitable for agriculture owing to poor soils, steep topography, or adverse climate (Fischer et al., 2000, p. 49; Ramankutty et al., 2002). About half of the rest is already in crops (Table 1), and a large fraction of the other half is presently under tropical forests that beneficially take up CO2. Tropical-forest soil loses fertility rapidly, once cleared.
- Some existing agricultural land has deteriorated so much that it is no longer worth cultivating. As of ca. 1990, soils on nearly 20 Mkm2 of land, or ~40% of the global agricultural land area, had been degraded (Oldeman et al., 1991, p. 28). Of this, over half was so degraded that local farmers lacked the means to restore it.
Partially offsetting these trends may be increases in efficiency of farming and food distribution. Rudel et al. (2009), however, could not find correlations that supported this hypothesis.
Prognosis for the Future
Looking ahead a few decades, land suitable for agriculture will likely continue to diminish as urban areas expand, soil is degraded, fertile soil is washed down rivers and blown away ten times faster than it is replaced (Montgomery, 2007), and water tables decline in areas dependent on groundwater for irrigation (Gleick, 1993). Foreseeing a shortage of arable land, global investors are, in fact, buying huge tracts in Africa and South America (De Castro, 2011). In addition, despite foreseeable future technological developments, agricultural productivity is likely to decrease as (i) the supply of phosphate for fertilizer decreases (Rosmarin, 2004); (ii) petroleum (used to run farm machinery and as feedstock for fertilizer) becomes more expensive and less available; (iii) pollution adversely affects pollinators, plant growth, and predators that control agricultural pests (Peng et al., 2004; supplemental data, Sec. H [footnote 1]); and (iv) climate changes.
Will Earth be able to support the projected 2050 population of 8.9 billion? Fischer et al. (2000, p. 88) believe that it can. Döös and Shaw (1999), considering climate change, water availability, irrigation, salinization, pests, farm management, and access to fertilizers, think it likely that the demand for cereals could be met in the more developed countries, and highly unlikely that it would be met in less developed ones. Seto et al. (2010, p. 95) conclude that it is unlikely that Earth’s land resources can support current and future populations sustainably without a “breathtaking” change in our way of life. Wackernagel et al. (2002) estimate that, as of ca. 1978, the land area needed to grow crops, graze animals, provide timber, accommodate infrastructure, and absorb waste, all sustainably, already exceeded Earth’s available area, and that as of 2002, we needed 20% more land than is available. If this is the case, we are in a period of overshoot.
Overshoot occurs when populations exceed the local carrying capacity. An environment’s carrying capacity for a given species is the number of individuals “living in a given manner, which the environment can support indefinitely” (Catton, 1980, p. 4). Only a population less than or equal to the carrying capacity is sustainable.
A sustainable population is one that (i) consumes renewable resources at a rate less than the rate at which they are renewed; (ii) consumes non-renewable resources at a rate less than the rate at which substitutes can be found; and (iii) emits pollution at a rate less than the capacity of the environment to absorb the pollutants (Daly, 1991, p. 256).
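Daly's three conditions can be written as a simple predicate; the function name and the example rates below are ours, chosen purely for illustration:

```python
def is_sustainable(renewable_use, renewal_rate,
                   nonrenewable_use, substitution_rate,
                   emission_rate, absorption_capacity):
    """True only when all three of Daly's (1991) conditions hold."""
    return (renewable_use < renewal_rate and
            nonrenewable_use < substitution_rate and
            emission_rate < absorption_capacity)

# Failing any single condition is enough to put a population in overshoot.
print(is_sustainable(0.8, 1.0, 1.5, 1.0, 0.5, 1.0))  # prints False
```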
It is axiomatic that, on a finite planet, there is a limit to growth. The question is, “Are we now bumping up against that limit?”
Several observations suggest that, with our present lifestyles, we are, indeed, now living in a state of overshoot. We struggle to supply the food needed by the present population. Groundwater tables are declining. Our way of life is based on non-renewables like fossil fuels, phosphates, and ores, accumulated over millions of years, with no clear plan for adequate substitutes once natural sources are exhausted. We discard many chemicals (e.g., CO2, N, plastics) faster than they can be absorbed by the environment.
When the number of individuals exceeds the carrying capacity, overuse of the environment sets up forces that, after a delay, first reduce the standard of living and then eventually the population (Catton, 1980, p. 4–5). Initiation of the correction may be manifested by stagnant or negative economic growth rates, by famine and/or water shortages, by increases in disease resulting from undernourishment (Pimentel et al., 2007), and by increases in conflict. Sound familiar? Fifty-four nations with 12% of the world’s population experienced economic declines in per capita GDP from 1990 to 2001 (Meadows et al., 2004, p. xiv; World Bank, 2003, p. 64–65). Famine, disease, and conflict are frequently in the news.
If we are in a state of overshoot, here are three ways to bring the human impact on Earth back to sustainability:
1. Reduce demand. Demand can be reduced by improving building insulation or mandating energy-efficient vehicles and appliances. Recycling reduces demand for primary materials. Tempering our impulse to buy things that we don’t really need or of which we will soon tire also reduces demand.
2. Develop technological solutions. Existing technology can mitigate our impact. Adoption of efficient building and farming practices limits degradation, and ecological restoration can partially reverse it (Rey Benayas et al., 2009). Technological breakthroughs are also possible. Simon (1996) argued that a larger population increases the likelihood of spawning the brain power needed to achieve such breakthroughs. But without well-fed bodies, brains don’t function well.
Our technological skills have enabled us to support an ever-increasing population. They have also exacerbated some problems. Use of oil as an energy source in agriculture has increased efficiency, but at the expense of leaving us presently dependent on a non-renewable resource. Mechanical well drilling and pumping facilitate irrigation, but now groundwater tables are dropping unsustainably (Gleick, 1993). Given present usage, more than half of the U.S. High Plains aquifer will likely last for 50 to 200 years, but significant parts will be exhausted in <~25 years while others are already effectively spent (Buchanan et al., 2009). Use of bioengineered wheat in Punjab, India, and rice in Bali, Indonesia, increased crop yields, but also led to a variety of economic, pest, and health problems (Tiwana et al., 2007, p. xxii–xxiii; Lansing, 1991, p. 110–117).
3. Reduce the population. Increasing the availability of health care, education, and microfinancing, particularly for women in developing countries, reduces fertility. Reduced fertility reduces poverty, because available resources are distributed among fewer people. Couples worldwide can be urged to have only two children and to delay having them so there will be fewer people on Earth at any one time. These steps would first slow population growth and then lead to a long-term decline.
Reducing demand is a critical component of the solution, but in itself is not sufficient, given the magnitude of the problem. Technological progress, particularly in the energy field, is essential, but we also think it unwise to bet too heavily on unspecified future breakthroughs. Reducing and eventually reversing population growth needs to be a large part of the solution. Eventually, difficult decisions will have to be made about the size of an optimum population and how to achieve it.
We would like to leave the reader pondering three questions:
- Are natural resources (such as land, soil, water, ecosystem services, ores) the fundamental basis for a comfortable life?
- Above a certain threshold regional population, is comfort inversely proportional to population?
- How much of the unrest in the world is a consequence of insufficient natural resources to support local populations at a tolerable level? Periods of inadequate food production during the past millennium have led to unrest, war, and migration (Zhang et al., 2007). The Arab Spring is, at least in part, a consequence of high food prices and unemployment (Roubini, 2011).
We have shown, herein, that many of the problems now facing humanity will be gravely exacerbated if the population continues to increase and the land continues to degrade; many would be vastly easier to solve with a reduced population. The transition to a truly sustainable society (sensu Daly, 1991) requires more than a population policy though. Unqualified growth can no longer be our mantra. Thus, drastic changes in our economic philosophy and, hence, in the controlling legal structure are required. The needed changes are, indeed, breathtaking.
This study was supported by the University of Maine, School of Earth and Climate Sciences, and by the Spanish Research Project CGL2010-21754-CO2-01. We thank K. Klein Goldewijk for supplying unpublished data on the land area covered by forests in the past; J. Pongratz for data files on changes in landcover; B. Chao for reservoir data; D. Fastovsky, B. Housen, and especially R. Reynolds for critical comments leading to significant improvements; and J. Moore, L. Balaguer, F. Valladares, J. Oyarzun, L. Santos, J.L. Lalana, M. Tejedor, E. Serra, and L.F. Prado for help in tracking down references.
- ADIF, 2005, Environmental Report 2005: Madrid, Administrador de Infraestructuras Ferroviarias (ADIF), 139 p.; see p. 65 at http://www.adif.es/es_ES/conoceradif/doc/environmental2005.pdf (last accessed 1 Oct. 2012).
- Bai, Z.G., Dent, D.L., Olsson, L., and Schaepman, M.E., 2008, Proxy global assessment of land degradation: Soil Use and Management, v. 24, p. 223–234.
- Buchanan, R.C., Buddemeier, R.R., and Wilson, B.B., 2009, The high plains aquifer: Kansas Geological Survey Public Information Circular 18, 6 p.
- Catton, W.R., 1980, Overshoot: The Ecological Basis of Revolutionary Change: Urbana, University of Illinois Press, 298 p.
- Chao, B.F., Wu, Y.H., and Li, Y.S., 2008, Impact of artificial reservoir water impoundment on global sea level: Science, v. 320, p. 212–214.
- CIESIN, 2010, Center for International Earth Science Information Network (CIESIN), Columbia University, Gridded Population of the World (GPW), version 3 and Global Rural-Urban Mapping Project (GRUMP) Alpha Version: http://www.sedac.ciesin.columbia.edu/gpw (last accessed 1 Oct. 2012).
- Costanza, R., d’Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O’Neill, R.V., Paruelo, J., Raskin, R.G., Sutton, P., and van den Belt, M., 1997, The value of the world’s ecosystem services and natural capital: Nature, v. 387, 6630, p. 253–260.
- Daily, G.C., 1995, Restoring value to the world’s degraded lands: Science, v. 269, p. 350–354.
- Daily, G.C., editor, 1997, Nature’s Services: Washington, D.C., Island Press, 392 p.
- Daly, H.E., 1991, Steady-State Economics: Washington, D.C., Island Press, 302 p.
- De Castro, P., 2011, Corsa alla terra: Cibo e agricoltura nell’era della nuova scarsità (Rush for land: food and agriculture in a new era of shortages): Roma, Donzelli, 160 p.
- Döös, B.R., 2002, Population growth and loss of arable land: Global Environmental Change, v. 12, no. 4, p. 303–311.
- Döös, B.R., and Shaw, R., 1999, Can we predict the future food production? A sensitivity analysis: Global Environmental Change, v. 9, p. 261–283.
- Ehrlich, P.R., and Ehrlich, A.H., 1981, Extinction: New York, Random House, 305 p.
- Elliot, W.J., Page-Dumroese, D., and Robichaud, P.R., 1998, The effects of forest management on erosion and soil productivity, in Lal, R., ed., Soil Quality and Erosion: Boca Raton, Florida, St. Lucie Press, p. 195–209.
- Ellis, E.C., and Ramankutty, N., 2008, Putting people in the map: Anthropogenic biomes of the world: Frontiers in Ecology and the Environment, v. 6, no. 8, p. 439–447.
- FAO, 2009, Rethinking the value of planted forests: Food and Agriculture Organization of the United Nations, http://www.fao.org/news/story/en/item/10324/icode/ (last accessed 1 Oct. 2012).
- Fischer, G., van Velthuizen, H., and Nachtergaele, F.O., 2000, Global agro-ecological zones assessment: Methodology and results: International Institute for Applied Systems Analysis, Interim Report no. IR-00-064, 338 p.
- Forman, R.T.T., 2000, Estimate of the area affected ecologically by the road system in the United States: Conservation Biology, v. 14, no. 1, p. 31–35.
- Gleick, P.H., 1993, Water and conflict: Fresh water resources and international security: International Security, v. 18, p. 79–112.
- Haberl, H., Erb, K.H., Krausmann, F., Gaube, V., Bondeau, A., Plutzar, C., Gingrich, S., Lucht, W., and Fischer-Kowalski, M., 2007, Quantifying and mapping the human appropriation of net primary production in Earth’s terrestrial ecosystems: Proceedings of the National Academy of Sciences, v. 104, no. 31, p. 12,942–12,947.
- Halpern, B.S., Walbridge, S., Selkoel, K.A., Kappel, C.V., Micheli, F., D’Agrosa, C., Bruno, J.F., Casey, K.S., Ebert, C., Fox, H.E., Fujita, R., Heinemann, D., Lenihan, H.S., Madin, E.M.P., Perry, M.T., Selig, E.R., Spalding, M., Steneck, R., and Watson, R., 2008, A global map of human impact on marine ecosystems: Science, v. 319, p. 948–952.
- Herut, B., Nimmo, M., Medway, A., Chester, R., and Krom, M.D., 2001, Dry atmospheric inputs of trace metals at the Mediterranean coast of Israel (SE Mediterranean): Sources and fluxes: Atmospheric Environment, v. 35, p. 803–813.
- Hooke, R.LeB., 1994, On the efficacy of humans as geomorphic agents: GSA Today, v. 4, p. 217, 224–225.
- Hooke, R.LeB., 2000, On the history of humans as geomorphic agents: Geology, v. 28, no. 9, p. 843–846.
- Imhoff, M.L., Bounoua, L., Ricketts, T., Loucks, C., Harriss, R., and Lawrence, W.T., 2004, Global patterns in human consumption of net primary production: Nature, v. 429, p. 870–873.
- IRF-WRS, 2009, IRF World Road Statistics: International Road Federation: Geneva, IRF, 270 p.
- IUR, 2008, Synopsis 2008: Statistics of the International Union of Railways: http://www.uic.org/spip.php?article1347 (last accessed 1 Oct. 2012).
- Kellerer-Pirklbauer, V.O., 2002, The influence of land use on the stability of slopes with examples from the European Alps: Mitteilungen des Naturwissenschaftlichen Vereines für Steiermark, v. 132, p. 43–62.
- Kelley, A., and Williamson, J., 1984, Population growth, industrial revolution, and the urban transition: Population Development Review, v. 10, p. 419–441.
- Klein Goldewijk, K., 2001, Estimating global land use change over the past 300 years: The HYDE Database: Global Biogeochemical Cycles, v. 15, p. 417–433.
- Klein Goldewijk, K., Beusen, A., and van Drecht, G., 2011, The HYDE 3.1 spatially explicit database of human-induced global land-use change over the past 12,000 years: Global Ecology and Biogeography, v. 20, p. 73–86.
- Lambin, E.F., and Geist, H.J., editors, 2006, Land-Use and Land-Cover Change: Local Processes and Global Impacts: Berlin, Springer-Verlag, 222 p.
- Lansing, S.J., 1991, Priests and Programmers: Technologies of Power in the Engineered Landscape of Bali: Princeton, New Jersey, Princeton University Press, 183 p.
- Marsh, G.P., 1864, Man and Nature (1965 ed.): Cambridge, Massachusetts, Harvard University, Belknap Press, 472 p.
- McKibben, B., 1989, The End of Nature: New York, Random House, 226 p.
- MEA, 2003, Ecosystems and human well-being: A framework for assessment: Washington, D.C., Island Press, 245 p.
- MEA, 2005, Current State and Trends. Millennium Ecosystem Assessment: Washington, D.C., Island Press, 839 p.
- Meadows, D., Randers, J., and Meadows, D., 2004, Limits to Growth: The 30-year Update: White River Junction, Vermont, Chelsea Green Publishing Co., 338 p.
- Montgomery, D.R., 2007, Soil erosion and agricultural sustainability: Proceedings of the National Academy of Sciences, v. 104, p. 13,268–13,272.
- Norse, D., James, C., Skinner, B.J., and Zhao, Q., 1992, Agriculture, land use and degradation, in Dooge, J.C.I., Goodman, G.T., Rivière, J.W.M., Marton-Lefèvre, J., and O’Riordan, T., eds., An Agenda of Science for Environment and Development into the 21st Century: Cambridge, UK, Cambridge University Press, p. 79–89.
- Oldeman, L.R., Hakkeling, R.T.A., and Sombroek, W.G., 1991, World Map of the Status of Human-Induced Soil Degradation: An Explanatory Note (2nd edition): Wageningen and Nairobi: International Soil Reference and Information Centre (ISRIC), 35 p.
- Painter, T.H., Deems, J.S., Belnape, J., Hamlet, A.F., Landry, C.C., and Udall, B., 2010, Response of Colorado River runoff to dust radiative forcing in snow: Proceedings of the National Academy of Sciences, v. 107, no. 40, p. 17,125–17,130.
- Peng, S., Huang, J., Sheehy, J.E., Laza, R.C., Visperas, R.M., Zhong, X., Centeno, G.S., Khush, G.S., and Cassman, K.G., 2004, Rice yields decline with higher night temperature from global warming: Proceedings of the National Academy of Sciences, v. 101, no. 27, p. 9971–9975.
- Pimentel, D., Harvey, C., Resosudarmo, P., Sinclair, K., Kurz, D., McNair, M., Crist, S., Shpritz, L., Fitton, L., Saffouri R., and Blair, R., 1995, Environmental and economic costs of soil erosion and conservation benefits: Science, v. 267, p. 1117–1123.
- Pimentel, D., Cooperstein, S., Randell, H., Filiberto, D., Sorrentino, S., Kaye, B., Nicklin, C., Yagi, J., Brian, J., O’Hern, J., Habas, A., and Weinstein, C., 2007, Ecology of increasing diseases: Population growth and environmental degradation: Human Ecology, v. 35, no. 6, p. 653–668.
- Pinter, N., Jemberie, A.A., Remo, J.W.F., Heine, R.A., and Ickes, B.S., 2008, Flood trends and river engineering on the Mississippi River system: Geophysical Research Letters, v. 35, L23404, doi: 10.1029/2008GL035987.
- Pongratz, J., Reick, C., Raddatz, T., and Claussen, M., 2008, A reconstruction of global agricultural areas and land cover for the last millennium: Global Biogeochemical Cycles, v. 22, p. 1–16.
- Potere, D., and Schneider, A., 2007, A critical look at representations of urban areas in global maps: GeoJournal, v. 69, p. 55–80.
- Prospero, J.M., Blades, E., Mathison, G., and Naidu, R., 2005, Interhemispheric transport of viable fungi and bacteria from Africa to the Caribbean with soil dust: Aerobiologia, v. 21, p. 1–19.
- Ramankutty, N., and Foley, J.A., 1999, Estimating historical changes in global land cover: Croplands from 1700 to 1992: Global Biogeochemical Cycles, v. 13, p. 997–1027.
- Ramankutty, N., Foley, J.A., Norman, J., and McSweeney, K., 2002, The global distribution of cultivable lands: Current patterns and sensitivity to possible climate change: Global Ecology & Biogeography, v. 11, p. 377–392.
- Ramankutty, N., Evan, A.T., Monfreda, C., and Foley, J.A., 2008, Farming the planet: 1. Geographic distribution of global agricultural lands in the year 2000: Global Biogeochemical Cycles, v. 22, p. 1–19.
- Rey Benayas, J.M., Newton, A.C., Diaz, A., and Bullock, J.M., 2009, Enhancement of biodiversity and ecosystem services by ecological restoration: A meta-analysis: Science, v. 325, p. 1121–1124.
- Reynolds, R.L., Mordecai, J.S., Rosenbaum, J.G., Ketterer, M.E., Walsh, M.K., and Moser, K.A., 2010, Compositional changes in sediments of subalpine lakes, Uinta Mountains (Utah): Evidence for the effects of human activity on atmospheric dust inputs: Journal of Paleolimnology, v. 44, no. 1, p. 161–175.
- Rockström, J., and 28 others, 2009, Planetary boundaries: Exploring the safe operating space for humanity: Ecology and Society, v. 14, no. 2, art. 32: http://www.ecologyandsociety.org/vol14/iss2/art32/ (last accessed 1 Oct. 2012).
- Rosmarin, A., 2004, The precarious geopolitics of phosphorus: Down to Earth, v. 13, p. 27–34.
- Roubini, N., 2011, World Economic Forum: Davos, Switzerland, 26 Jan. 2011 (presentation).
- Rudel, T.K., Schneider, L., Uriarte, M., Turner II, B.L., DeFries, R., Lawrence, D., Geoghegan, J., Hecht, S., Ickowitz, A., Lambin, E.F., Birkenholtz, T., Baptista, S., and Grau, R., 2009, Agricultural intensification and changes in cultivated areas, 1970–2005: Proceedings of the National Academy of Sciences, v. 106, no. 49, p. 20,675–20,680.
- Sala, O.E., Chapin, F.S. III, Armesto, J.J., Berlow, E., Bloomfield, J., Dirzo, R., Huber-Sanwald, E., Huenneke, L.F., Jackson, R.B., Kinzig, A., Leemans, R., Lodge, D.M., Mooney, H.A., Oesterheld, M., Poff, N.L., Sykes, M.T., Walker, B.H., Walker, M., and Wall, D.H., 2000, Global biodiversity scenarios for the year 2100: Science, v. 287, p. 1770–1774.
- Sanderson, E.W., Jaiteh, M., Levy, M.A., Redford, K.H., Wannebo, A.V., and Woolmer, G., 2002, The human footprint and the last of the wild: BioScience, v. 52, no. 10, p. 891–904.
- Seto, K.C., de Groot, R., Bringezu, S., Erb, K., Graedel, T.E., Ramankutty, N., Reenberg, A., Schmitz, O.J., and Skole, D.L., 2010, Stocks, flows, and prospects of land, in Graedel, T.E., and van der Voet, E., eds., Linkages of sustainability: Cambridge, Massachusetts, MIT Press, p. 71–98.
- Simon, J.L., 1996, The Ultimate Resource 2: Princeton, New Jersey, Princeton University Press, 734 p.
- TEEB, 2010, The Economics of Ecosystems and Biodiversity (TEEB): Ecological and Economic Foundations: London, Earthscan, 400 p.
- Tiwana, N.S., Jerath, N., Ladhar, S.S., Singh, G., Paul, R., Dua, D.K., and Parwana, H.K., 2007, State of Environment Punjab – 2007: Chandigarh, Punjab State Council for Science and Technology, 243 p.
- Tobler, S., 2007, Cheatgrass: A Weedy Annual on Great Basin Rangelands: Appendix C in Southwest Utah Regional Fire Protection Plan, St. George, Utah, Five County Association of Governments, 196 p.
- Trimble, S.W., 1999, Decreased rates of alluvial sediment storage in the Coon Creek Basin, Wisconsin: Science, v. 285, p. 1244–1246.
- UNPD, 1999, The World at Six Billion: New York, Department of Economic and Social Affairs, Population Division, United Nations: See Table 4, p. 11, http://www.un.org/esa/population/publications/sixbillion/sixbillion.htm (last accessed 1 Oct. 2012).
- UNPD, 2004, World Urbanization Prospects: The 2003 Revision: New York, United Nations, Department of Economic and Social Affairs, Population Division: http://www.un.org/esa/population/publications/wup2003/2003WUPHighlights.pdf (last accessed 1 Oct. 2012).
- UNPD, 2007a, Urban Population, Development and the Environment 2007: New York, United Nations, Department of Economic and Social Affairs, Population Division: http://www.un.org/esa/population/publications/2007_PopDevt/Urban_2007.pdf (last accessed 1 Oct. 2012).
- UNPD, 2007b, Urban and rural areas, 2007: United Nations, Department of Economic and Social Affairs, Population Division, http://www.un.org/esa/population/publications/wup2007/2007_urban_rural_chart.pdf (last accessed 1 Oct. 2012).
- USDA, 1989, The second RCA appraisal: Soil, water, and related resources on nonfederal land in the United States: Analysis of conditions and trends: Washington, D.C., U.S. Department of Agriculture, 280 p.
- Vitousek, P.M., 1992, Global environmental change: An introduction: Annual Reviews of Ecological Systems, v. 23, p. 1–14.
- Vitousek, P.M., Ehrlich, P.R., Ehrlich, A.H., and Matson, P.A., 1986, Human appropriation of the products of photosynthesis: BioScience, v. 36, p. 368–373.
- Vitousek, P.M., Mooney, H.A., Lubchenco, J., and Melillo, J.M., 1997, Human domination of Earth’s ecosystems: Science, v. 277, p. 494–499.
- Wackernagel, M., Schulz, N.B., Deumling, D., Callejas Linares, A., Jenkins, M., Kapos, V., Monfreda, C., Loh, J., Myers, N., Norgaard, R., and Randers, J., 2002, Tracking the ecological overshoot of the human economy: Proceedings of the National Academy of Sciences, v. 99, no. 14, p. 9266–9271.
- Wilkinson, B.H., and McElroy, B.J., 2007, The impact of humans on continental erosion and sedimentation: GSA Bulletin, v. 119, p. 140–156, doi: 10.1130/B25899.1.
- World Bank, 2003, World Bank Atlas – 2003: Washington, D.C., World Bank, 82 p.
- Zhang, D.D., Brecke, P., Lee, H.F., He, Q-Y., and Zhang, J., 2007, Global climate change, war, and population decline in recent human history: Proceedings of the National Academy of Sciences, v. 104, no. 49, p. 19,214–19,219.
Modern periodic table with names
What is the Periodic Table?
Covalent atomic radius: across a period, from left to right, the covalent radius of the atom decreases; down a group, from top to bottom, it increases.
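These trends can be checked against approximate single-bond covalent radii; the values below are rough figures after Cordero et al. (2008) and should be treated as illustrative:

```python
# Approximate covalent radii in picometres for period 2, Li through F.
period2_pm = {"Li": 128, "Be": 96, "B": 84, "C": 76, "N": 71, "O": 66, "F": 57}
radii = list(period2_pm.values())
decreases_across_period = all(a > b for a, b in zip(radii, radii[1:]))

# Down group 17 (the halogens) the radius grows instead.
group17_pm = {"F": 57, "Cl": 102, "Br": 120, "I": 139}
halogen_radii = list(group17_pm.values())
increases_down_group = all(a < b for a, b in zip(halogen_radii, halogen_radii[1:]))
```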
The fourth period runs from potassium (K, 19) to krypton (Kr, 36) and is the first long period; the fifth period, the second long period, contains the 4d transition series from yttrium (Y) to cadmium (Cd). The seventh row of the periodic table was completed by the addition of four new elements. With the discovery of elements and the study of their properties, a need arose to organize the findings; the earliest grouping simply divided the elements into metals and non-metals, and the periodic table was developed to address the shortcomings of such classifications. In this periodic table, the symbols of the elements are given with their atomic masses. Group 13 elements are electropositive and trivalent. In the first period, the 1s orbital is being filled; the period that follows is called the second short period. The modern periodic table can accommodate 118 elements, grouped into alkali metals, alkaline earth metals, transition metals, non-metals, metalloids, halogens, noble gases, lanthanides, and actinides. The first periodic table was designed by Dmitri Mendeleev, using atomic weights as the organizing criterion.
If an element accepts one or more electrons to complete its outer shell and becomes an ion, it is called an electronegative element.
All the other elements are compared with these values based on their reactivities.
If an element loses one or more electrons to form an ion, it is called an electropositive element, and its ions carry a positive charge.
It is called the first short period.
The elements present in a group show similar physical and chemical properties, since they have similar outer electronic configurations.
In this group, the 2s 2p orbitals are being filled.
The sixth period contains: 72 Hf 178.49, 73 Ta 180.95, 74 W 183.85, 75 Re 186.21, 76 Os 190.2, 77 Ir 192.22, 78 Pt 195.09, 79 Au 196.97, 80 Hg 200.59, 81 Tl 204.37, 82 Pb 207.2, 83 Bi 208.96, 84 Po (209), 85 At (210), 86 Rn (222). Depending on the ease with which electrons are donated or accepted, and on the stability of the resulting ion when forming a compound, even elements of the same group can carry more than one type of charge. The law of triads and the law of octaves were early attempts to classify and group the elements, but even these could not satisfactorily account for the periodicity of their properties. The sixth period not only includes the 10 elements of the 5d series, i.e. Lanthanum (La) and Hafnium (Hf) to Mercury (Hg), but also the 14 elements of the 4f series, called the lanthanides, from Cerium (Ce) to Lutetium (Lu). Periods: each period starts with an alkali metal and ends with an inert gas element. With the discovery of new elements and the study of their properties, a need arose to organize the findings; the development of the periodic table addressed this to an extent. Initially the elements were simply divided into metals and non-metals. This arrangement is the long form of the modern periodic table.
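Not part of the original article, but the period structure described above (two short periods of 8, long periods of 18 and 32, 118 elements in total) follows from the order in which electron subshells fill. A minimal Python sketch using the Madelung (n + l) rule:

```python
# Sketch only: derive the length of each period of the table from the
# order in which electron subshells fill (the Madelung, or n + l, rule).
# A subshell with quantum numbers (n, l) holds 2*(2l + 1) electrons,
# and a new period begins whenever an ns subshell (l == 0) starts filling.

def madelung_order(max_n=9):
    """Subshells (n, l) sorted by n + l, ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, max_n) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def period_lengths(num_periods=7):
    """Number of elements in each of the first `num_periods` periods."""
    lengths, current = [], 0
    for n, l in madelung_order():
        if l == 0 and current:          # next ns subshell -> new period
            lengths.append(current)
            if len(lengths) == num_periods:
                return lengths
            current = 0
        current += 2 * (2 * l + 1)      # subshell capacity
    lengths.append(current)
    return lengths[:num_periods]

print(period_lengths())       # [2, 8, 8, 18, 18, 32, 32]
print(sum(period_lengths()))  # 118 elements in the full seven-period table
```

The short periods of 8 and the long periods of 18 and 32 fall out directly, and the total matches the 118 elements the modern table accommodates.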
Siege of Constantinople (674–678)
The First Arab Siege of Constantinople in 674–678 was a major conflict of the Arab–Byzantine wars, and the first culmination of the Umayyad Caliphate's expansionist strategy towards the Byzantine Empire, led by Caliph Mu'awiya I. Mu'awiya, who had emerged in 661 as the ruler of the Muslim Arab empire following a civil war, renewed aggressive warfare against Byzantium after a lapse of some years and hoped to deliver a lethal blow by capturing the Byzantine capital, Constantinople.
As reported by the Byzantine chronicler Theophanes the Confessor, the Arab attack was methodical: in 672–673 Arab fleets secured bases along the coasts of Asia Minor, and then proceeded to install a loose blockade around Constantinople. They used the peninsula of Cyzicus near the city as a base to spend the winter, and returned every spring to launch attacks against the city's fortifications. Finally, the Byzantines, under Emperor Constantine IV, managed to destroy the Arab navy using a new invention, the liquid incendiary substance known as Greek fire. The Byzantines also defeated the Arab land army in Asia Minor, forcing them to lift the siege. The Byzantine victory was of major importance for the survival of the Byzantine state, as the Arab threat receded for a time. A peace treaty was signed soon after, and following the outbreak of another Muslim civil war, the Byzantines even experienced a period of ascendancy over the Caliphate.
The siege left several traces in the legends of the nascent Muslim world, although it is conflated with accounts of another expedition against the city a few years previously, led by the future Caliph Yazid I. As a result, the veracity of Theophanes's account was questioned in 2010 by Oxford scholar James Howard-Johnston, who placed more emphasis on the Arabic and Syriac sources. On the other hand, echoes of a large siege of Constantinople and a subsequent peace treaty reached China, where they were recorded in later histories of the Tang dynasty.
Following the disastrous Battle of Yarmouk in 636, the Byzantine Empire withdrew the bulk of its remaining forces from the Levant into Asia Minor, which was shielded from the Muslim expansion by the Taurus Mountains. This left the field open for the warriors of the nascent Rashidun Caliphate to complete their conquest of Syria, with Egypt too falling shortly after. Muslim raids against the Cilician frontier zone and deep into Asia Minor began as early as 640, and continued under Mu'awiya, then governor of the Levant. Mu'awiya also spearheaded the development of a Muslim navy, which within a few years grew sufficiently strong to occupy Cyprus and raid as far as Kos, Rhodes and Crete in the Aegean Sea. Finally, the young Muslim navy scored a crushing victory over its Byzantine counterpart in the Battle of Phoenix in 655. Following the murder of Caliph Uthman and the outbreak of the First Muslim Civil War, Arab attacks against Byzantium stopped. In 659, Mu'awiya even concluded a truce with Byzantium, including payment of tribute to the Empire.
The peace lasted until the end of the Muslim civil war in 661, from which Mu'awiya and his clan emerged victorious, establishing the Umayyad Caliphate. From the next year, Muslim attacks recommenced, with pressure mounting as Muslim armies began wintering on Byzantine soil west of the Taurus range, maximizing the disruption caused to the Byzantine economy. These land expeditions were sometimes coupled with naval raids against the coasts of southern Asia Minor. In 668, the Arabs sent aid to Saborios, strategos of the Armeniac Theme, who had rebelled and proclaimed himself emperor. The Arab troops under Fadhala ibn 'Ubayd arrived too late to assist Saborios, who had died after falling from his horse, and they spent the winter in the Hexapolis around Melitene awaiting reinforcements.
In spring 669, after receiving additional troops, Fadhala entered Asia Minor and advanced as far as Chalcedon, on the Asian shore of the Bosporus across from the Byzantine capital, Constantinople. The Arab attacks on Chalcedon were repelled, and the Arab army was decimated by famine and disease. Mu'awiya dispatched another army, led by his son (and future Caliph) Yazid, to Fadhala's aid. Accounts of what followed differ. The Byzantine chronicler Theophanes the Confessor reports that the Arabs remained before Chalcedon for a while before returning to Syria, and that on their way they captured and garrisoned Amorium. This was the first time the Arabs tried to hold a captured fortress in the interior of Asia Minor beyond the campaigning season, and probably meant that the Arabs intended to return next year and use the town as their base, but Amorium was retaken by the Byzantines during the subsequent winter. Arab sources on the other hand report that the Muslims crossed over into Europe and launched an unsuccessful attack on Constantinople itself, before returning to Syria. Given the lack of any mention of such an assault in Byzantine sources, it is most probable that the Arab chroniclers—taking account of Yazid's presence and the fact that Chalcedon is a suburb of Constantinople—"upgraded" the attack on Chalcedon to an attack on the Byzantine capital itself.
Opening moves: the campaigns of 672 and 673
The campaign of 669 clearly demonstrated to the Arabs the possibility of a direct strike at Constantinople, as well as the necessity of having a supply base in the region. This was found in the peninsula of Cyzicus on the southern shore of the Sea of Marmara, where a raiding fleet under Fadhala ibn 'Ubayd wintered in 670 or 671. Mu'awiya now began preparing his final assault on the Byzantine capital. In contrast to Yazid's expedition, Mu'awiya intended to take a coastal route to Constantinople. The undertaking followed a careful, phased approach: first the Muslims had to secure strongpoints and bases along the coast, and then, with Cyzicus as a base, Constantinople would be blockaded by land and sea and cut off from the agrarian hinterland that supplied its food.
Accordingly, in 672 three great Muslim fleets were dispatched to secure the sea lanes and establish bases between Syria and the Aegean. Muhammad ibn Abdallah's fleet wintered at Smyrna, a fleet under a certain Qays (perhaps Abdallah ibn Qays) wintered in Lycia and Cilicia, and a third fleet, under Khalid, joined them later. According to the report of Theophanes, the Emperor Constantine IV (r. 668–685), upon learning of the Arab fleets' approach, began equipping his own fleet for war. Constantine's armament included siphon-bearing ships intended for the deployment of a newly developed incendiary substance, Greek fire. In 673, another Arab fleet, under Gunada ibn Abu Umayya, captured Tarsus in Cilicia, as well as Rhodes. The latter, midway between Syria and Constantinople, was converted into a forward supply base and centre for Muslim naval raids. Its garrison of 12,000 men was regularly rotated back to Syria, a small fleet was attached to it for defence and raiding, and the Arabs even sowed wheat and brought along animals to graze on the island. The Byzantines attempted to obstruct the Arab plans with a naval attack on Egypt, but it was unsuccessful. Throughout this period, overland raids into Asia Minor continued, and the Arab troops wintered on Byzantine soil.
In 674, the Arab fleet sailed from its bases in the eastern Aegean and entered the Sea of Marmara. According to the account of Theophanes, they landed on the Thracian shore near Hebdomon in April, and until September were engaged in constant clashes with the Byzantine troops. As the Byzantine chronicler reports, "Every day there was a military engagement from morning until evening, between the outworks of the Golden Gate and the Kyklobion, with thrust and counter-thrust". Then the Arabs departed and made for Cyzicus, which they captured and converted into a fortified camp to spend the winter. This set the pattern that continued throughout the siege: each spring, the Arabs crossed the Marmara and assaulted Constantinople, withdrawing to Cyzicus for the winter. In fact, the "siege" of Constantinople was a series of engagements around the city, which may even be stretched to include Yazid's 669 attack. Both Byzantine and Arab chroniclers record the siege as lasting for seven years instead of five. This can be reconciled either by including the opening campaigns of 672–673, or by counting the years until the final withdrawal of the Arab troops from their forward bases, in 680.
The details of the clashes around Constantinople are unclear, as Theophanes condenses the siege in his account of the first year, and the Arab chroniclers do not mention the siege at all but merely provide the names of leaders of unspecified expeditions into Byzantine territory. Thus from the Arab sources it is only known that Abdallah ibn Qays and Fadhala ibn 'Ubayd raided Crete and wintered there in 675, while in the same year Malik ibn Abdallah led a raid into Asia Minor. The Arab historians Ibn Wadih and al-Tabari report that Yazid was dispatched by Mu'awiya with reinforcements to Constantinople in 676, and record that Abdallah ibn Qays led a campaign in 677, the target of which is unknown. At the same time, the Byzantines had to face a Slavic attack on Thessalonica and Lombard attacks in Italy. Finally, in autumn 677 or early 678 Constantine IV resolved to confront the Arab besiegers in a head-on engagement. His fleet, equipped with Greek fire, routed the Arab fleet. It is probable that the death of admiral Yazid ibn Shagara, reported by Arab chroniclers for 677/678, is related to this defeat. At about the same time, the Muslim army in Asia Minor, under the command of Sufyan ibn 'Awf, was defeated by the Byzantine army under the generals Phloros, Petron and Cyprian, losing 30,000 men according to Theophanes. These defeats forced the Arabs to abandon the siege in 678. On its way back to Syria, the Arab fleet was almost annihilated in a storm off Syllaion.
The essential outline of Theophanes' account may be corroborated by the only near-contemporary Byzantine reference to the siege, a celebratory poem by the otherwise unknown Theodosius Grammaticus, which was earlier believed to refer to the second Arab siege of 717–718. Theodosius' poem commemorates a decisive naval victory before the walls of the city—with the interesting detail that the Arab fleet too possessed fire-throwing ships—and makes a reference to "the fear of their returning shadows", which may be interpreted as confirming the recurring Arab attacks each spring from their base in Cyzicus.
Importance and aftermath
Constantinople was the nerve centre of the Byzantine state. Had it fallen, the Empire's remaining provinces would have been unlikely to hold together, and would have become easy prey for the Arabs. At the same time, the failure of the Arab attack on Constantinople was a momentous event in itself. It marked the culmination of Mu'awiya's campaign of attrition, pursued steadily since 661. Immense resources were poured into the undertaking, including the creation of a huge fleet. Its failure had similarly important repercussions, and represented a major blow to the Caliph's prestige. Conversely, Byzantine prestige reached new heights, especially in the West: Constantine IV received envoys from the Avars and the Balkan Slavs, bearing gifts and congratulations and acknowledging Byzantine supremacy. The subsequent peace also gave a much-needed respite from constant raiding to Asia Minor, and allowed the Byzantine state to recover its balance and consolidate itself following the cataclysmic changes of the previous decades.
The failure of the Arabs before Constantinople coincided with the increased activity of the Mardaites, a Christian group living in the mountains of Syria that resisted Muslim control and raided the lowlands. Faced with this new threat, and after the immense losses suffered against the Byzantines, Mu'awiya began negotiations for a truce, with embassies exchanged between the two courts. These were drawn out until 679, giving the Arabs time for a last raid into Asia Minor under 'Amr ibn Murra, perhaps intended to put pressure on the Byzantines. The peace treaty, of a nominal 30-year duration, provided that the Caliph would pay an annual tribute of 3,000 nomismata, 50 horses and 50 slaves. The Arab garrisons were withdrawn from their bases on the Byzantine coastlands, including Rhodes, in 679–680.
Constantine IV used the peace to proceed against the mounting Bulgar menace in the Balkans, but his huge army, comprising all the available forces of the Empire, was decisively beaten, opening the way for the establishment of a Bulgar state in the northeastern Balkans. In the Muslim world, after the death of Mu'awiya in 680, the various forces of opposition within the Caliphate manifested themselves. The Caliphate's division during this Second Muslim Civil War allowed Byzantium to achieve not only peace, but also a position of predominance on its eastern frontier. Armenia and Iberia reverted for a time to Byzantine control, and Cyprus became a condominium between Byzantium and the Caliphate. The peace lasted until Constantine IV's son and successor, Justinian II (r. 685–695, 705–711), broke it in 693, with devastating consequences: the Byzantines were defeated, Justinian was deposed and a twenty-year period of anarchy followed. Muslim incursions intensified, leading to a second Arab attempt at conquering Constantinople in 717–718, which also proved unsuccessful.
Later Arab sources dwell extensively on the events of Yazid's 669 expedition and supposed attack on Constantinople, including various mythical anecdotes, which are taken by modern scholarship to refer to the events of the 674–678 siege. Several important personalities of early Islam are mentioned as taking part, such as Ibn Abbas, Ibn Umar and Ibn al-Zubayr. The most prominent among them in later tradition is Abu Ayyub al-Ansari, one of the early companions (Ansari) and standard-bearer of Muhammad, who died of illness before the city walls during the siege and was buried there. According to Muslim tradition, Constantine IV threatened to destroy his tomb, but the Caliph warned that if he did so, the Christians under his rule would suffer. Thus the tomb was left in peace, and even became a site of veneration by the Byzantines, who prayed there in times of drought. The tomb was "rediscovered" after the Fall of Constantinople to the Ottoman Turks in 1453 by the dervish Sheikh Ak Shams al-Din, and Sultan Mehmed II (r. 1444–1446, 1451–1481) ordered the construction of a marble tomb and a mosque adjacent to it. It became a tradition that Ottoman sultans were girt with the Sword of Osman at the Eyüp mosque upon their accession. Today it remains one of the holiest Muslim shrines in Istanbul.
This siege is even mentioned in the Chinese dynastic histories of the Old Book of Tang and New Book of Tang. They record that the large, well-fortified capital city of Fu lin (拂菻; i.e. Byzantium) was besieged by the Da shi (大食, i.e. the Umayyad Arabs) and their commander "Mo-yi" (Chinese: 摩拽伐之, Pinyin: Mó zhuāi fá zhī), who Friedrich Hirth has identified as Mu'awiya. The Chinese histories then explain that the Arabs forced the Byzantines to pay tribute afterwards as part of a peace settlement. In these Chinese sources, Fu lin was directly related to the earlier Daqin, which is now considered by modern sinologists as the Roman Empire. Henry Yule remarked with some surprise the accuracy of the account in Chinese sources, which even named the negotiator of the peace settlement as "Yenyo", or Ioannes Pitzigaudes, the unnamed envoy sent to Damascus in Edward Gibbon's account in which he mentions an augmentation of tributary payments a few years later due to the Umayyads facing some financial troubles.
Modern reassessment of the events
The narrative of the siege accepted by modern historians relies largely on Theophanes' account, while the Arab and Syriac sources do not mention any siege, but rather individual campaigns, only a few of which reached as far as Constantinople. Thus the capture of an island named Arwad "in the sea of Kustantiniyya" is recorded for 673/674, although it is unclear if this refers to the Sea of Marmara or the Aegean, and Yazid's 676 expedition is also said to have reached Constantinople. The Syriac chroniclers also disagree with Theophanes in placing the decisive battle and destruction of the Arab fleet by Greek fire in 674 during an Arab expedition against the coasts of Lycia and Cilicia, rather than Constantinople. This was followed by the landing of Byzantine forces in Syria in 677/678, which began the Mardaite uprising that threatened the Caliphate's grip on Syria enough to result in the peace agreement of 678/679.
Based on a re-evaluation of the original sources used by the medieval historians, the Oxford scholar James Howard-Johnston, in his acclaimed 2010 book Witnesses to a World Crisis: Historians and Histories of the Middle East in the Seventh Century, rejects the traditional interpretation of events, based on Theophanes, in favour of the Syriac chroniclers' version. Howard-Johnston asserts that no siege actually took place, based not only on its absence in the eastern sources, but also on the logistical impossibility of such an undertaking for the duration reported. Instead, he believes that the reference to a siege was a later interpolation, influenced by the events of the second Arab siege of 717–718, by an anonymous source that was then used by Theophanes. According to Howard-Johnston, "The blockade of Constantinople in the 670s is a myth which has been allowed to mask the very real success achieved by the Byzantines in the last decade of Mu'awiya’s caliphate, first by sea off Lycia and then on land, through an insurgency which, before long, aroused deep anxiety among the Arabs, conscious as they were that they had merely coated the Middle East with their power".
- Kaegi (2008), pp. 369ff.; Lilie (1976), pp. 60–68; Treadgold (1997), pp. 303–307, 310, 312–313
- Kaegi (2008), p. 372; Lilie (1976), pp. 64–68; Treadgold (1997), pp. 312–313
- Lilie (1976), p. 68
- Lilie (1976), p. 69; Treadgold (1997), p. 318
- Kaegi (2008), pp. 373, 375; Lilie (1976), pp. 69–71; Treadgold (1997), p. 320
- Lilie (1976), pp. 71–72; Treadgold (1997), p. 320
- Lilie (1976), pp. 72–74, 90; Treadgold (1997), p. 325
- Lilie (1976), pp. 73–74
- Lilie (1976), p. 75; Treadgold (1997), p. 325; Mango & Scott (1997), p. 492
- Lilie (1976), p. 76 (Note #61)
- Haldon (1990), p. 63; Lilie (1976), pp. 90–91
- Lilie (1976), pp. 75, 90–91; Treadgold (1997), p. 325; Mango & Scott (1997), p. 493
- Lilie (1976), pp. 76–77; Treadgold (1997), p. 325
- Lilie (1976), pp. 74–76
- Haldon (1990), p. 64; Lilie (1976), pp. 77–78; Treadgold (1997), p. 325; Mango & Scott (1997), pp. 493–494
- Mango & Scott (1997), p. 494 (Note #3)
- Lilie (1976), p. 80 (Note #73); Mango & Scott (1997), p. 494 (Note #3)
- Haldon (1990), p. 64
- Brooks (1898), pp. 187–188; Lilie (1976), pp. 78–79; Mango & Scott (1997), p. 494
- Lilie (1976), pp. 79–80; Treadgold (1997), p. 325; Mango & Scott (1997), p. 495
- Treadgold (1997), p. 326
- Haldon (1990), p. 64; Lilie (1976), pp. 78–79; Treadgold (1997), pp. 326–327; Mango & Scott (1997), p. 494
- Olster (1995), pp. 23–28
- Lilie (1976), p. 91
- Lilie (1976), pp. 80–81, 89–91
- Haldon (1990), p. 66
- Haldon (1990), p. 64; Kaegi (2008), pp. 381–382; Lilie (1976), pp. 81–82; Treadgold (1997), p. 327
- Lilie (1976), p. 83; Treadgold (1997), pp. 328–329
- Lilie (1976), pp. 99–107; Treadgold (1997), pp. 330–332
- Kaegi (2008), pp. 382–385; Lilie (1976), pp. 107–132; Treadgold (1997), pp. 334–349
- Canard (1926), pp. 70–71; El-Cheikh (2004), p. 62
- Canard (1926), pp. 71–77; El-Cheikh (2004), pp. 62–63; Turnbull (2004), p. 48
- Paul Halsall (2000). Jerome S. Arkenberg, ed. "East Asian History Sourcebook: Chinese Accounts of Rome, Byzantium and the Middle East, c. 91 B.C.E. – 1643 C.E." Fordham.edu. Fordham University. Retrieved 2016-09-10.
- Jenkins, Philip (2008). The Lost History of Christianity: the Thousand-Year Golden Age of the Church in the Middle East, Africa, and Asia – and How It Died. New York: Harper Collins. pp. 64–68. ISBN 978-0-06-147280-0.
- Foster, John (1939). The Church in T'ang Dynasty. Great Britain: Society for Promoting Christian Knowledge. p. 3.
- Yule, Henry (1915). Henri Cordier (ed.), Cathay and the Way Thither: Being a Collection of Medieval Notices of China, Vol I: Preliminary Essay on the Intercourse Between China and the Western Nations Previous to the Discovery of the Cape Route. London: Hakluyt Society, pp. 48–49, footnote #1 on p. 49.
- Brooks (1898), pp. 186–188; Howard-Johnston (2010), pp. 302–303, 492–495; Stratos (1983), pp. 90–95
- Kaldellis, Anthony (2010). "Bryn Mawr Classical Review 2010.12.24". Bryn Mawr Classical Review. Retrieved 14 July 2012.
- Howard-Johnston (2010), pp. 303–304
- Brooks, E.W. (1898). "The Arabs in Asia Minor (641–750), from Arabic Sources". The Journal of Hellenic Studies. The Society for the Promotion of Hellenic Studies. XVIII: 182–208.
- Canard, Marius (1926). "Les expéditions des Arabes contre Constantinople dans l'histoire et dans la légende" [The Expeditions of the Arabs Against Constantinople in History and Legend]. Journal Asiatique (in French) (208): 61–121. ISSN 0021-762X.
- El-Cheikh, Nadia Maria (2004). Byzantium Viewed by the Arabs. Cambridge, Massachusetts: Harvard Center for Middle Eastern Studies. ISBN 978-0-932885-30-2.
- Haldon, John F. (1990). Byzantium in the Seventh Century: The Transformation of a Culture. Revised Edition. Cambridge, United Kingdom: Cambridge University Press. ISBN 978-0521319171.
- Howard-Johnston, James (2010). Witnesses to a World Crisis: Historians and Histories of the Middle East in the Seventh Century. Oxford: Oxford University Press. ISBN 978-0-19-920859-3.
- Kaegi, Walter E. (2008). "Confronting Islam: Emperors versus Caliphs (641–c. 850)". In Shepard, Jonathan. The Cambridge History of the Byzantine Empire c. 500–1492. Cambridge: Cambridge University Press. pp. 365–394. ISBN 978-0-52-183231-1.
- Lilie, Ralph-Johannes (1976). Die byzantinische Reaktion auf die Ausbreitung der Araber. Studien zur Strukturwandlung des byzantinischen Staates im 7. und 8. Jhd [Byzantine Reaction to the Expansion of the Arabs. Studies on the Structural Change of the Byzantine State in the 7th and 8th Cent.] (in German). Munich: Institut für Byzantinistik und Neugriechische Philologie der Universität München.
- Mango, Cyril; Scott, Roger (1997). The Chronicle of Theophanes Confessor. Byzantine and Near Eastern History, AD 284–813. Oxford: Oxford University Press. ISBN 978-0-19-822568-3.
- Olster, David (1995). "Theodosius Grammaticus and the Arab Siege of 674-78". Byzantinoslavica. 56 (1): 23–28. ISSN 0007-7712.
- Stratos, Andreas N. (1983). "Siège ou blocus de Constantinople sous Constantin IV" [Siege or Blockade of Constantinople under Constantine IV]. Jahrbuch der österreichischen Byzantinistik (in French). Vienna: Verlag der Österreichischen Akademie der Wissenschaften. 33: 89–107. ISSN 0378-8660.
- Treadgold, Warren (1997). A History of the Byzantine State and Society. Stanford, California: Stanford University Press. ISBN 978-0-804-72630-6.
- Turnbull, Stephen (2004). The Walls of Constantinople, AD 324–1453. Oxford: Osprey Publishing. ISBN 978-1-84176-759-8.
- Radic, Radivoj (2008). "Two Arabian sieges of Constantinople (674–678; 717/718)". Encyclopedia of the Hellenic World, Constantinople. Foundation of the Hellenic World. Retrieved 9 July 2012.
Fantanita Reserve covers a scarp crossed by narrow, shallow valleys and hosts over 500 plant species characteristic of the southern Dobrogea area; Pontic elements predominate, followed by Balkan, Continental, Mediterranean and Eurasian ones. The fauna includes many species characteristic of the Dobrudja steppe.
The remarkable value of the site lies in the presence of bird species with populations protected worldwide, of Mediterranean, Balkan and Black Sea species of mammals and reptiles, and of invertebrate species, above all the Lepidoptera, which are of exceptional value here.
Erithacus rubecula (European robin; known locally as the "red goiter"). In both the male and the female the plumage is greyish, with a rusty patch on the chest. It nests in hollows or fallen tree trunks; the nest holds 5-6 blue eggs with rusty speckles. The female incubates them alone for 13-14 days. In winter it often sings in the places where it has settled.
Sturnus vulgaris (Starling). The body is black, with intense green and purple reflections on the head and chest. It nests in hollows and in holes in walls, reed roofs and concrete pillars. The eggs, 5-7 in number, are light blue.
Incubation, lasting 13-14 days, is shared by both partners. It can raise 2 broods per season. In winter the northern populations retreat to the south-west, and large flocks form even in winter, especially in the south-east, as birds arrive from the north.
Starlings imitate the songs of other birds.
Cuculus canorus (Cuckoo). Cuckoos belong to the family Cuculidae and are slender birds of medium size, sparrow-like in appearance. They feed on insects, insect larvae and fruits.
Most lay their eggs in the nests of other birds, although some cuckoos raise their young themselves.
Accipiter gentilis (Goshawk). With a length of 50-60 cm, it is a medium-sized predator. It has brownish-grey plumage on the back; the chest is a lighter brown with black bars. It attacks small birds and mammals. The nest usually holds 3-5 eggs. Incubation, lasting 35-38 days, is done mostly by the female, while the male supplies her with food. Its attacks are premeditated, using very varied tactics depending on the prey and the location.
Accipiter nisus (Sparrowhawk). It is a very rare breeding bird here. The male has dark grey plumage on the back, the female brown; on the chest the male has red bars and the female brown. The nest holds 3-5 white eggs speckled with brown. Incubation, lasting 32-35 days, is done mostly by the female.
It resembles the short-legged hawk, but the main difference is the iris, which is yellow rather than orange.
Pica pica (Magpie). The magpie has black plumage with white shoulders and chest; the tail is black with green reflections. Strictly sedentary, it builds a nest of thorny twigs with a side entrance. The eggs, 5-7 in number, are laid in April; green with brown speckles, they are incubated by the female for 17-18 days.
Turdus merula (Merle, blackbird) has a length of 27 cm. The male is black, the female dark brown. The nest is built at low height above the ground, in bushes and shrubbery. The full clutch is laid from April onwards, especially in populations of town parks. The 4-5 blue-green eggs, sprinkled with brown, are incubated by the female for 14-15 days. Two or three broods are raised over the summer.
Upupa epops (Hoopoe). Its popular name is the Armenian cuckoo. Its length is 28 cm. As a distinctive mark, the hoopoe has a crest of orange and black feathers on top of its head. The wings are white with black stripes and the rest of the body is yellow-orange. The hoopoe makes its nest in hollows, which it does not line. The female lays about 8 eggs, and the incubation period is 16 days. The chicks hatch in turn and are fed by both parents.
Perdix perdix (Partridge). The plumage is bright brown, with the neck and crop grey. The male has a more pronounced brown patch on the neck. It nests on the ground, in the grass. The eggs, 10-20 in number and greenish-yellow, are laid in May and incubated only by the female. Although sedentary, the partridge wanders from place to place when food runs short.
Ablepharus kitaibelii (Small lizard, the snake-eyed skink). The total length is 8-12 cm, of which the tail accounts for 5-7 cm. The legs are short and thin, and the ear opening is very obvious. The scales are smooth and wide. This lizard is very agile, hard to spot and to catch. It walks and runs with lateral movements of the trunk and tail. It is mostly active in the first hours of the morning and before sunset. The female lays up to 15 eggs. In captivity it tames easily and feeds on earthworms, flies and spiders.
Apodemus sylvaticus (Wood mouse). Rear paw 20-25 mm, tail with 120-170 scaly rings, weight 18-25 g. The belly and legs are white, the back is red, and it has a yellow patch on the chest. It is very good at climbing trees and feeds mainly on seeds and wild fruits, more rarely on grain. It digs deep galleries, 30-50 (up to 70) cm, especially under the roots of trees and bushes. The female has 4-6 young, 3-4 times a year.
Lepus europaeus (Hare). Head and body 600-700 mm, tail 75-100 mm, rear paw 135-150 mm, ear 120-140 mm, skull 85-95 mm, weight 3-5 kg.
The back is yellowish-brown with black spots. The abdomen is white and the nape is yellowish-brown. The tail is dark brown on top and white underneath.
Erinaceus europaeus (Hedgehog). Body length 200-300 mm, tail 20-45 mm, rear paw 40-45 mm, skull 55-65 mm. The body is stocky and short, with wide ears and small black eyes. The face is yellowish-white or yellowish-red, the whiskers black. The spines are dark brown at the middle and tip, the rest yellowish. The female is bigger than the male, with sharper whiskers, a stronger body and a brighter colour. The young are born with white spines. It hibernates from autumn to March. The adult female has 3-8 young. It renders real services to agriculture by destroying large quantities of insects, worms and mole crickets.
Spermophilus citellus (Gopher, the European ground squirrel). Head and body 180-230 mm, tail 50-70 mm, ear 10 mm, rear paw 35-40 mm, weight 240-340 g. The head is round, with small ears. The lips, chin and neck are white, the forehead a mixture of yellow and reddish-brown. The whiskers and claws are black. It digs long galleries, 30-40 m (sometimes 150 m), where it gathers supplies for the winter. In summer it feeds on seeds, roots and grain, rarely consuming animal food. The mating period is in spring (March-April). The female gives birth once or twice a year to 3-8 young.
Vormela peregusna (marbled polecat). Head and trunk 32-38 cm, tail (without the terminal brush) 15-20 cm, rear foot (without claws) 4-5 cm. The tail is bushy; the ears are rather large, rounded at the top, with white edges. The head is dark brown, with a transverse white band above the eyes and white sides to the muzzle. The abdomen is blackish, the tail gray. A typical steppe animal, avoiding wooded places; it is rarely met in river valleys and near human settlements, in gardens, barns, and straw stacks. It digs a burrow and often uses gopher galleries. It feeds on rodents, birds, and lizards, and in summer mostly on gophers. Mating takes place in March. The female gives birth to 4-8 young after 8 weeks.
Mustela eversmannii (steppe polecat). Head and trunk 35-38 cm, tail 12-16 cm, weight 0.7-1.3 kg. Similar to Mustela putorius, but with lighter fur. The back is brownish-yellow, the abdomen slightly darker, and the patch around the eyes darker. It is spread across eastern Europe and western Asia and prefers the steppe. Locally it is known only from Dobrudja.
Mesocricetus newtoni (Romanian hamster). It is found only in Dobrudja.
Talpa europaea (mole). Length of body 125-150 mm, tail 25-28 mm, rear paw 15-19 mm, skull 30-38 mm. The body is short, fat, and cylindrical. The eyes are ebony black, blending into the color of the fur, and as small as poppy seeds. The tail is short. The fur is generally dark brown with bluish or white reflections; some individuals have black fur with white spots, and very rarely some are completely white. It lives only underground, in complex galleries recognizable by the mounds of earth on top.
Spalax leucodon (lesser blind mole-rat). The body is 18-27 cm long, weight 140-220 g. It is covered in thick, silky fur of a gray-reddish color.
The legs, very small compared to the body, have strong nails used for digging. It is perfectly adapted to life underground, living almost exclusively in its galleries. Each animal digs its own network of galleries. The nest for the cubs has a 30 cm diameter and is lined with herbs. To dig the galleries it uses its claws and teeth. It feeds exclusively on grass and roots.
The mating period occurs in spring, when the animals, especially the males, come up to the surface. There is one litter of 2-4 cubs a year. The rest of the time it lives alone. It is preyed upon by foxes, polecats, and nocturnal predators.
Coluber jugularis (large whip snake, locally called the "evil serpent"). Harmless; when threatened it takes cobra-like postures. It is useful in the biological control of field rodents.
Testudo graeca (Dobrudja turtle, or spur-thighed tortoise). Length 15-27 cm. Males are distinguished from females by their long, strong tails, by the size of the posterior notch, and by the very bulging supracaudal scales. The shell is uniform yellow-dark brown or olive, each plate bordered by irregular black spots. They prefer dry land with tall bushes and forest steppe, and love warmth. It feeds on plant roots and earthworms. Females lay 4-12 spherical white eggs in May-June, which hatch in 70-80 days. It does well in captivity and can live 90-125 years. It is protected by law and declared a natural monument.
Zebrina detrita is a species of air-breathing land snail, a terrestrial pulmonate gastropod mollusk in the family Enidae.
Carpocoris mediterraneus is a species of shield bug in the family Pentatomidae. It is widespread throughout the Mediterranean region and is a polyphagous plant-feeder.
Ephesia fulminea (a moth of the family Noctuidae). This family is the largest in the Lepidoptera, with approximately 20,000 species. Most noctuid moths are gray to brown in color and have lines or spots on their wings; some species are brightly colored. They are small to large in size, most being medium-sized with wingspans of 2-4.5 cm. When at rest, adults of most species hold their wings over their bodies like a roof. Noctuids are typically nocturnal, though some species are diurnal. The larvae feed mostly on plant foliage, dead leaves, lichens, and fungi; many are serious forest pests.
Acrotylus insubricus (a grasshopper). It is one of the greediest insects, devouring daily an amount of food equal to its own weight. Adults weigh 2-3 grams and do not exceed 50 mm in length.
2 Characteristics of Stars
Groups of stars that form patterns in the sky are called constellations. Examples: Ursa Major (Big Bear), Ursa Minor (Little Bear), and Orion. The last two stars in Ursa Major's "dipper" are called the "Pointer Stars" and can be used to find Polaris (the North Star). Polaris is located directly above the North Pole (90º N) and is only visible in the Northern Hemisphere (above the Equator).
4 Circumpolar Constellations
Because of the Earth's rotation, the constellations appear to move. If constellations 1) appear to move around Polaris, 2) can be seen at all times of year, and 3) can be seen at all times of night, they are called circumpolar constellations. Ursa Major and Ursa Minor are both circumpolar constellations. Using time-exposure photography, the apparent motion of the stars around Polaris can be recorded as circular trails.
5 The stars don’t move – WE DO!!!! (VIF)
The apparent motion of stars is due to the Earth’s daily rotation on its axis. The stars don’t move – WE DO!!!!
6 Here is a time-lapse photo of circumpolar star movement…
8 The positions of the constellations as viewed from Earth change from season to season. This is caused by the revolution of the Earth and the change in Earth’s position in its orbit around the Sun. Example: Orion the Hunter is a winter constellation.
9 Example – when the Earth is in this position (Nov 21), the bright Sun during the day blocks our view of all of the constellations toward the lower right side of the diagram.
11 Physical Properties of Stars
Stars differ in size, density, mass, composition, and color. The color of a star is determined by its surface temperature (ESRT’s p. 15, top). The hotter the star, the bluer the color; the cooler the star, the redder the color. (Yeah, yeah, I know, it’s backwards….) The Sun is an AVERAGE-SIZE, medium, yellow star.
15 Physical Properties of Stars
Most stars are made up mostly of hydrogen and helium (approx. 98%); the remaining 2% may be other elements. A spectral analysis (remember Ch. 20) of a star can tell us what elements it is made of, since the radiated spectrum depends on the star’s composition and temperature.
16 Some stars may appear to be brighter than others. A star’s brightness may be described in three ways: 1. APPARENT MAGNITUDE, 2. LUMINOSITY, 3. ABSOLUTE MAGNITUDE. (See the H-R Diagram in the ESRT’s p. 15.)
17 Apparent Magnitude
How bright a star appears (hence “apparent”) to us on Earth. The farther a star is from Earth (increasing distance), the dimmer it will look, even though it may actually be a very bright star. Because of this, apparent magnitude does not tell the true brightness of a star.
18 Luminosity
The actual (true) brightness of the star. It depends on the size and temperature of the star: hotter stars are more luminous (brighter) than cooler stars, and if the temperatures are the same, a larger star will be more luminous.
19 Absolute Magnitude
The luminosity of the stars as if they were all brought to the same distance from Earth; in other words, picture all the stars lined up at the same distance from Earth, then compare their brightness. This is the most useful measure when comparing the brightness of stars.
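The relationship the slides describe, that a star looks dimmer the farther away it is, is the inverse-square law. Here is a minimal sketch; the law itself is standard physics, but the function name and the sample numbers are illustrative and not from the slides:

```python
# Received brightness falls off with the square of distance.
def apparent_brightness(luminosity, distance):
    """Relative brightness seen at a given distance (arbitrary units)."""
    return luminosity / distance ** 2

# A star 100x more luminous than a nearby star, but 20x farther away,
# still *appears* fainter, which is why apparent magnitude alone
# cannot reveal a star's true brightness.
nearby_dim = apparent_brightness(luminosity=1, distance=1)         # 1.0
distant_bright = apparent_brightness(luminosity=100, distance=20)  # 0.25
print(nearby_dim > distant_bright)
```

This is exactly why absolute magnitude imagines every star at the same distance: with distance held fixed, the comparison reduces to luminosity alone.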
23 The Sun is the closest star to Earth. It is approx. 150,000,000 km (93,000,000 miles) from the Earth; this distance is called an astronomical unit (AU). The next closest star to Earth, after the Sun, is Proxima Centauri, which is about 300,000 times farther away from Earth than the Sun. Because of the great distances in space, larger units of measure must be used. The light-year is the distance that light travels in one year. Since light travels 300,000 km/sec (186,000 miles/sec), it covers about 9.5 trillion km in a year!!! Proxima Centauri is 4.3 light-years from Earth!
24 So… one astronomical unit (AU) = 150,000,000 km, and one light-year (LY) = 9.5 trillion km (9,500,000,000,000 km).
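The light-year figure can be checked by straightforward multiplication. A quick sketch, using the rounded speed of light from the slides:

```python
# One light-year = speed of light x seconds in a year.
SPEED_OF_LIGHT_KM_S = 300_000            # rounded value used in the slides
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

light_year_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR
print(f"1 light-year ≈ {light_year_km:.2e} km")   # roughly 9.5 trillion km

# Proxima Centauri, at 4.3 light-years:
print(f"Proxima Centauri ≈ {4.3 * light_year_km:.2e} km away")
```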
25 Okay… let’s calculate the distance from the Sun to each planet in astronomical units (AU).
26 Remember – 1 AU = 150,000,000 km. Just divide the distance from the Sun in km by 150,000,000 km. Example: Jupiter = 778,300,000 km ÷ 150,000,000 km/AU ≈ 5.19, so Jupiter is 5.19 AU from the Sun.
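The same division works for every planet. A sketch of the conversion; Jupiter's distance is the slide's example, while the other distances are standard rounded values added here for illustration:

```python
# Convert a distance from the Sun in km to astronomical units.
KM_PER_AU = 150_000_000

distances_km = {
    "Mercury": 57_900_000,
    "Venus": 108_200_000,
    "Earth": 149_600_000,
    "Mars": 227_900_000,
    "Jupiter": 778_300_000,   # the slide's example
}

for planet, km in distances_km.items():
    print(f"{planet}: {km / KM_PER_AU:.2f} AU")
# Jupiter comes out to 5.19 AU, matching the slide.
```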
28 Large clouds of dust and gas in space are the basic materials needed for star formation; the majority of this gas is hydrogen. Some outside force causes the cloud of gas and dust to be pushed together. As the gas and dust get closer, friction between the particles causes the temperature to increase, and the attraction of gravity between the particles causes them to continue to move together, so density also increases.
29 Friction increases and temperature increases until the center becomes so hot that nuclear fusion takes place: hydrogen atoms are forced together to form helium atoms, and a tremendous amount of energy is released. (In a nuclear reactor like Indian Point, by contrast, nuclear fission takes place; this is when radioactive atoms are split apart to release energy.)
31 OK, so stars form from hydrogen gas and dust, but where does that gas & dust come from????
32 SUPERNOVAS
One of the most energetic explosive events occurs at the end of a star's lifetime, when its nuclear fuel is exhausted and it is no longer supported by the release of nuclear energy. If the star is particularly massive, its core will collapse, and in so doing will release a huge amount of energy. This causes a blast wave that ejects the star's gas envelope into interstellar space.
35 SUPERNOVA 1987 – right image is the star that became the left image after going supernova – shone brighter than most galaxies for a few months!
36 Here are some images of nebulae, courtesy of our friend Hubble… Nebulae are clouds of dust & gas (supernova remnants?). There are 2 main types: diffuse nebulae, where a nearby star illuminates the gas/dust cloud, and dark nebulae, dark patches against more-distant stars (the dust/gas blocks the light from the stars behind it).
48 LIFE CYCLE OF STARS
VIF!!!! – A star’s life cycle is determined by its MASS. The larger the star, the faster it burns out! A star’s MASS is determined by the MATTER available in the nebula of formation.
49 LIFE CYCLE OF STARS
STARS FORM IN A NEBULA OF GAS & DUST (the stellar nursery).
SUN-LIKE STARS (up to 1.5 × the mass of our Sun): RED GIANT → PLANETARY NEBULA (NOVA) → WHITE DWARF → BLACK DWARF.
MASSIVE STARS (1.5 – 3 × our Sun): RED SUPERGIANT → SUPERNOVA → NEUTRON STAR.
SUPERMASSIVE STARS (> 3 × our Sun): RED SUPERGIANT → SUPERNOVA → BLACK HOLE.
50 DEATH OF A SUN-LIKE STAR: NEBULA → RED GIANT → PLANETARY NEBULA → WHITE DWARF → BLACK DWARF.
Main sequence: the longest, most stable period of a star’s life; the star converts hydrogen to helium, radiating heat & light.
Red giant: the nuclear fuel depletes, the core contracts, and the shell expands.
Planetary nebula: the outer layers drift off into space in a sphere-like pattern.
White dwarf: the star cools and shrinks, becoming only a few thousand miles across, with no nuclear reaction.
Black dwarf: the star loses all heat to space and becomes a cold, dark carbon ball.
51 GIANTS/SUPERGIANTS
The brightest & largest kind of star: luminosities of 10,000 to 100,000 times the Sun’s, and radii of 20 to several hundred solar radii (they can be about the size of Jupiter's orbit!!!!). Two types are red supergiants (Betelgeuse and Antares) and blue supergiants (Rigel).
52 Betelgeuse, a red supergiant with about 20 times the mass and 800 times the radius of the Sun, is so huge that it could easily contain the orbits of Mercury, Venus, Earth, Mars & Jupiter. It will probably explode as a supernova at some point within the next 100,000 years. Even at its relatively remote distance, it normally ranks as the tenth brightest star in the sky. Rigel, a blue supergiant, has a diameter of about 100 million kilometers, some seventy times that of the Sun. Within a few million years, it will probably evolve to become a red supergiant like its neighbor in Orion (though not in physical space), Betelgeuse.
53 Dwarf Stars
A term used, oddly enough, to describe any star that is of normal size for its mass. The Sun, for example, is classified as a yellow dwarf. In general, dwarf stars lie on the main sequence and are in the process of converting hydrogen to helium by nuclear fusion in their cores.
55 White Dwarfs
A medium-sized star that has exhausted most or all of its nuclear fuel and has collapsed to a very small size. Typically found at the center of a planetary nebula. It eventually cools into a BLACK dwarf (a lump of carbon); this takes BILLIONS of years! This is the fate of OUR SUN!
56 Neutron Star
The imploded core of a massive star, produced by a supernova explosion. The most dense known objects in the universe! A sugar-cube-sized piece of neutron star material weighs 100 million tons!!!!!!!
60 A 3,700-LY-wide dust disk encircling a 300-million-solar-mass black hole in the center of an elliptical galaxy. The disk is a remnant of an ancient galaxy collision and could be “swallowed up” by the black hole in a few billion years.
62 Big Bang Theory
The Big Bang Theory is the dominant scientific theory about the origin of the universe. According to the big bang, the universe was created sometime between 10 billion and 20 billion years ago in a cosmic explosion that hurled matter and energy in all directions.
63 Galaxy Formation
The formation of all the galaxies is explained by the Big Bang Theory. Simply put, it states that the universe began as a big ball of hydrogen gas that exploded outward. The expanding cloud had areas that condensed into galaxies, which are still expanding out from the center (the universe is getting larger). We can see this via RED SHIFT!
66 Galaxies
A galaxy is a system containing millions to billions of stars. Ex.: the Milky Way galaxy contains over 500,000 million stars. The Milky Way is a spiral-shaped galaxy with a large central cluster of stars and thinner “arms” radiating out from the center. The solar system is located on one of the arms of the Milky Way, about 2/3 of the way out from the center.
67 Origin of the Milky Way
Formed 10-12 billion years ago; possibly collided with smaller galaxies. Globular star clusters formed. Stars and solar systems formed roughly 5 billion years ago.
Presentation on theme: "Settling the Americas How did early people adapt to life in North America? Page 20."— Presentation transcript:
1 Settling the AmericasHow did early people adapt to life in North America?Page 20
2 Settling the Americas – Lesson 1
How did the first Native Americans arrive in North America? By water routes and land routes. Why did hunter-gatherers settle in the Americas? They were following game that supplied their food and clothing. Glaciers trapped water, exposing the floor of the Bering Sea between Siberia and Alaska; this formed a land bridge called Beringia. Hunter-gatherers crossed the land bridge following animals, and picked berries, grasses, and mushrooms.
4 Settling the Americas
Surpluses in food allowed people time to specialize in trade, building, and pottery. What are the three main reasons civilizations develop? Farming, surplus, and specialization.
5 What are two of the earliest civilizations in Mexico? The Olmec and the Maya. What led to the decline of the Maya civilization? The people could not produce enough food for everyone. The Olmec were the first to use chocolate, develop a calendar, and understand the idea of zero. Teotihuacán was the first major city in the Americas; its temples and streets were laid out according to the position of the sun. The Maya had a calendar, developed a mathematics system, built pyramids, created a system of writing, and studied the stars. Movie 45:52 – 53:06. Page 23
6 What are three early North American civilizations and where did they settle? The Hohokam (present-day Arizona), the Ancestral Pueblo (the Southwest), and the Mound Builders (the Midwest). Why did some early people build mounds? The Hopewell used mounds for burials and religious ceremonies; the Mississippians used mounds for burial and to watch the sun and stars. The Hohokam farmed using irrigation and built homes from adobe. Irrigation supplies land with water through a series of pipes and ditches. The Ancestral Pueblo built homes into the sides of cliffs and used dry farming; dry farming uses collected rainwater and melted snow. Their homes had special rooms, called kivas, for meetings and religious purposes. Cahokia was the greatest Mississippian city; in 1100 A.D. it was one of the largest cities in the world. Movie 37:38 – 45:45. Page 25
7 Settling the Americas
What are two factors that affect the way cultures developed? Climate and natural resources. What three crops were important to the Hohokam and the Ancestral Pueblo? Maize, beans, and squash. How did the availability of natural resources affect people’s decisions to settle?
8 Native Americans of the West – Lesson 2 How did environments of the West affect the lives of Native Americans?Page 28
9 Native Americans of the West
The Inuit were hunters who used different parts of animals for food, clothing, tools, and weapons. The Tlingit and other Pacific Northwestern groups used waterways to hunt and trade. Pacific Northwest groups made totem poles to tell stories about important family members and to celebrate special events. Potlatches are feasts at which guests receive gifts from the host.
10 Native Americans of the West
Alike: conserved natural resources; got most food from the sea; made tools and shelter from natural resources.
Inuit: hunters; lived in the Arctic; built pit houses, igloos, and tents.
Tlingit: wealthy traders; known for crafts; built plank houses.
Page 31
11 People of the Southwest – Lesson 3 How did the Pueblo and Navajo adapt to a desert environment?Movie 30:10-37:38Page 32
12 Native People of the Southwest
Pueblo: The Pueblo used dry farming and built homes from adobe. Homes were secured by raising ladders so intruders could not enter. They also made jewelry.
Navajo: The Navajo were hunter-gatherers who migrated to the Southwest. They borrowed ideas from the Pueblo to adapt to the desert environment: they used dry farming, wove cotton to make cloth, and made jewelry from silver and turquoise. They lived in hogans, which are dome-shaped homes made from log or stick frames covered with mud or sod. The Navajo captured sheep and became shepherds; they used the meat for food and the wool to make clothes and blankets.
13 Native People of the Southwest
Pueblo: built adobe apartments.
Alike: grew maize; used dry farming; wove cotton cloth; made silver and turquoise jewelry.
Navajo: built single-family hogans; raised sheep; “walked in beauty”.
Page 35
14 Native Americans of the Plains – Lesson 4 How did Native Americans of the Plains use natural resources to survive?Page 36
15 Native Americans of the Plains
Native Americans of the Plains hunted bison for food, for clothing, and to make teepees. Teepees are cone-shaped homes made with long poles and covered with animal hides. The Lakota kept records of the important events of each year; these records are called winter counts. Boys and girls were taught different skills to prepare them for adulthood. List two ways life changed for Native Americans on the Plains after the arrival of horses: they hunted on horseback and traded with faraway groups. Page 37
16 People of the Eastern Woodlands – Lesson 5 How did groups of the Eastern Woodlands live?Page 40
17 People of the Eastern Woodlands
Eastern Woodlands groups used materials from the forest for food and clothing; for example, they ate muskrat and deer meat. Slash-and-burn farming is when people cut down, or slash, trees to allow rays of sunlight to reach a plot of land, then burn the undergrowth to clear room for crops. After the harvest, they leave the plot of land empty for several years; this prevents the soil from wearing out. Identify two major Native American groups that lived in the Eastern Woodlands: the Creek and the Iroquois. What kind of farming did they use and why? They used slash-and-burn farming because the forests were so thick. Page 41
18 People of the Eastern Woodlands
Creek: The Creek built wattle-and-daub huts for individual families; the huts were made from poles and covered with grass, mud, or thatch. They arranged the town around a council house, or chokofa. They also decorated pots with stamps.
Iroquois: The Iroquois built wooden homes, called longhouses, on top of steep-sided hills. They used high log fences to protect their villages.
Page 43
19 People of the Eastern Woodlands
Creek: had huts for individual families; used wattle-and-daub; arranged the town around a council hut; stamped designs on pottery.
Iroquois: had longhouses for several families; built of wood; protected the village with a fence; made wampum.
Alike: grew corn; celebrated the Green Corn Festival; played lacrosse.
20 People of the Eastern Woodlands: Government in the Woodlands
Creek: Formed a confederacy and divided towns into two types. War towns (red) declared war, planned battles, and held meetings with enemy groups. Peace towns (white) passed laws and held prisoners. Page 44
21 People of the Eastern Woodlands: Government in the Woodlands
Iroquois: Women led the clans and appointed male leaders. The Iroquois formed the Iroquois Confederacy, which became known as the League of Six Nations after the six Iroquois groups that formed it. The purpose of the confederacy was to promote peace among Iroquois groups. The League of Six Nations is an example of an early democracy; Benjamin Franklin borrowed some of its ideas to include in the U.S. Constitution. Page 45
22 Review
In which areas of North America did native people settle and develop their cultures? The West, the Southwest, the Plains, and the Eastern Woodlands. What are three farming techniques that native people used? Irrigation (West, in the California desert), dry farming (Southwest), and slash-and-burn (Eastern Woodlands). How did people in the Pacific Northwest use the sea? They used the sea to hunt and trade.
23 Describe some of the homes of native people and who built them.
Adobe – bricks made from mud and straw; protects from extreme heat and cold (Hohokam and Pueblo).
Cliff dwellings – built into the sides of cliffs (Ancestral Pueblo).
Hogans – dome-shaped homes made from log or stick frames and covered with mud or sod (Navajo).
Teepees – cone-shaped homes made with long poles and covered with animal hides (Plains).
Wattle-and-daub huts – made from poles and covered with grass, mud, or thatch (Creek).
Longhouses – built with wood on top of steep-sided hills (Iroquois).
24 Which Native American group formed the League of Six Nations? The Iroquois. How did Native Americans on the Great Plains adapt to the environment? They hunted bison and built lodges from grass, sticks, and soil.
|
<urn:uuid:52cdfe44-59fa-4fe4-9cc7-f0763da2a646>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812259.18/warc/CC-MAIN-20180218192636-20180218212636-00418.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9455022811889648,
"score": 3.96875,
"token_count": 1938,
"url": "http://slideplayer.com/slide/4150619/"
}
|
Tapestry, woven decorative fabric, the design of which is built up in the course of weaving. Broadly, the name has been used for almost any heavy material, handwoven, machine woven, or even embroidered, used to cover furniture, walls, or floors or for the decoration of clothing. Since the 18th and 19th centuries, however, the technical definition of tapestry has been narrowed to include only heavy, reversible, patterned or figured handwoven textiles, usually in the form of hangings or upholstery fabric. Tapestry traditionally has been a luxury art afforded only by the wealthy, and even in the 21st century large-scale handwoven tapestries are too expensive for those with moderate incomes.
Tapestries are usually designed as single panels or sets. A tapestry set is a group of individual panels related by subject, style, and workmanship and intended to be hung together. The number of pieces in a set varies according to the dimensions of the walls to be covered. The designing of sets was especially common in Europe from the Middle Ages to the 19th century. A 17th-century set, the Life of Louis XIV, designed by the king’s painter Charles Le Brun, included 14 tapestries and two supplementary panels. The number of pieces in 20th-century sets is considerably smaller. Polynesia, designed by the modern French painter Henri Matisse, for example, has only two pieces, and Mont-Saint-Michel, woven from a cartoon by the contemporary engraver and sculptor Henri-Georges Adam, is a triptych (three panels). Until the 19th century, tapestries were often ordered in Europe by the “room” rather than by the single panel. A “room” order included not only wall hangings but also tapestry weavings to upholster furniture, cover cushions, and make bed canopies and other items. Most Western tapestry, however, has been used as a type of movable monumental decoration for large architectural surfaces, though in the 18th century, tapestries were frequently encased in the woodwork.
In the West, tapestry traditionally has been a collective art combining the talents of the painter, or designer, with those of the weaver. The earliest European tapestries, those woven in the Middle Ages, were made by weavers who exercised much of their own ingenuity in following the cartoon, or artist’s sketch for the design.
Though he followed the painter’s directions and pattern fairly closely, the weaver did not hesitate to make departures from them and assert his own skills and artistic personality. In the Renaissance, tapestries increasingly became woven reproductions of paintings, and the weaver was no longer regarded as the painter’s collaborator but became his imitator. In medieval France and Belgium, as well as now, a painter’s work was always executed in tapestry through the intermediary of the weaver. Tapestry woven directly by the painter who created it remains an exception, almost exclusive to ladies’ handiwork.
Wool has been the material most widely used for making the warp, or the parallel series of threads that run lengthwise in the fabric of the tapestry. The width-running, weft, or filling threads, which are passed at right angles above and below the warp threads, thereby completely covering them, are also most commonly of wool. The advantages of wool in the weaving of tapestries have been its availability, workability, durability, and the fact that it can be easily dyed to obtain a wide range of colours. Wool has often been used in combination with linen, silk, or cotton threads for the weft. These materials make possible greater variety and contrast of colour and texture and are better suited than wool to detail weaving or to creating delicate effects. In European tapestry, light-coloured silks were used to create pictorial effects of tonal gradation and spatial recession. The sheen of silk thread was often used for highlights or to give a luminous effect when contrasted to the dull and darkly coloured heavier woolen threads. In 18th-century European tapestries, silk was increasingly used, especially at the Beauvais factory in France, to achieve subtle tonal effects. Most of the Chinese and Japanese tapestries have both warp and weft threads of silk. Pure silk tapestries were also made in the Middle Ages by the Byzantines and in parts of the Middle East. Wholly linen tapestries were made in ancient Egypt, while Copts, or Egyptian Christians, and medieval Europeans sometimes used linen for the warp. Cotton and wool were employed for pre-Columbian Peruvian tapestries as well as for some of the tapestries made in the Islamic world during the Middle Ages. Since the 14th century, European weavers have used gold and silver weft threads along with wool and silk to obtain a sumptuous effect. These threads were made of plain or gilded silver threads wound in a spiral on a silk thread.
Tapestry is first of all a technique. It differs from other forms of patterned weaving in that no weft threads are carried the full width of the fabric web, except by an occasional accident of design. Each unit of the pattern or the background is woven with a weft, or thread of the required colour, that is inserted back and forth only over the section where that colour appears in the design or cartoon. As in the weaving of plain cloth, the weft threads pass over and under the warp threads alternately and on the return go under where before it was over and vice versa. Each passage is called a pick, and when completed the wefts are pushed tightly together by various devices (awl, reed, batten, comb, or serrated fingernails in Japan). The weft threads so outnumber the warps that they conceal them completely. The warps in a finished tapestry appear only as more or less marked parallel ridges in the texture, or grain of the fabric, according to their coarseness or fineness.
The thickness of the warp influences the thickness of the tapestry fabric. In Europe during the Middle Ages, the thickness of the wool tapestry fabric in such works as the 14th-century Angers Apocalypse tapestry was about 10 to 12 threads to the inch (5 to the centimetre). By the 16th century the tapestry grain had gradually become finer as tapestry more closely imitated painting. Known for the regularity and distinctness of its tapestries, the royal French tapestry factory in Paris known as the Gobelins used 15 to 18 threads per inch (6 to 7 per centimetre) in the 17th century and 18 to 20 (7 to 8) in the 18th century. Another royal factory of the French monarchy at Beauvais had as many as 25 or even 40 threads per inch (10 to 16 per centimetre) in the 19th century. These excessively fine grains make the fabric very flat and regular, tending to imitate the canvas of a painting. The grain of 20th-century tapestry approximated that used in 14th- and 15th-century tapestry. The Gobelins factory, for instance, used 12 or 15 threads per inch (5 or 6 per centimetre).
In many 20th-century tapestries a finer grain was contrasted with the effects of a heavier weave. The grain of silk tapestries, of course, is much finer than those made of wool. It is not uncommon for the silk tapestries of China to have as many as 60 warp threads per inch (about 24 per centimetre).
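The per-inch and per-centimetre warp counts quoted throughout this passage can be cross-checked with a simple unit conversion. A quick sketch using the figures from the text (1 inch = 2.54 cm):

```python
# Convert warp counts from threads per inch to threads per centimetre.
CM_PER_INCH = 2.54

def threads_per_cm(threads_per_inch):
    return threads_per_inch / CM_PER_INCH

for tpi in (12, 15, 18, 20, 25, 40, 60):
    print(f"{tpi} threads/inch ≈ {threads_per_cm(tpi):.1f} threads/cm")
# 15-18 per inch works out to roughly 6-7 per cm, and 60 per inch to
# about 24 per cm, matching the Gobelins and Chinese-silk figures above.
```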
Where the weft margin of a colour area is straight and parallel to the warps, it forms a kind of slit, or relais, which may be treated in any of five different ways. First, it may simply be left open, as in Chinese silk tapestries, which are called kesi (cut silk) for that reason. Second, it may be left open on the loom but sewed up afterward, as in European tapestries from the 14th to the 17th centuries and also in some later types. Third, the weaver may dovetail his wefts, passing from one side and from the other in turn over a common warp. This may be either “comb” dovetailing—single wefts alternating—or “sawtooth” dovetailing—clusters first from one side, next from the other. Dovetailing has the double disadvantage of making the fabric heavier at that point and of blurring the outline. Persian weavers of the 16th century developed a successful variant in silk tapestry rugs whereby a black outline weft was dovetailed over two warps—one of each of the adjacent colour areas—effectively hiding the coloured wefts in the compacting of the weave and providing a strong clear image. The same device is found in pre-Columbian Peru.
The fourth treatment—interlocking—was introduced in the Gobelins factory in the 18th century. Here wefts of juxtaposed colour segments are looped through each other between the two warps that mark, respectively, the margin of each colour. This technique produces a continuous surface of even weight that was prized by the French weavers because the resultant effect more closely approximated that of painting.
A curious variant of these weaving techniques is achieved when between every two rows of wefts there is a weft that runs the full width of the tapestry, thereby making the fabric solid. This technique, if strictly classified, would be called brocade weaving, but the principle is that of tapestry, with the cloth insert subordinate. Rarely used, the technique was employed in Japan in the 7th and 8th centuries, in eastern Persia in the 10th century, and in pre-Columbian Peru.
Instead of the plain-cloth method of weaving usually used in making tapestries, a twill technique can be used. In this type of weave the weft is floated over two or more warps, then under one or more warps, with this underpassage shifting always one to the right or left, thereby making a diagonal ribbing. As far as can be determined, this technique first appeared in medieval Persia and from the 17th century on was especially used in the Iranian provinces of Khorāsān and Kermān to make shawls of goat’s hair or wool. It is also used to make the famed Kashmir shawls and, along with many other crafts, was probably introduced into Kashmir from Persia in the 16th century. In contemporary European tapestries this technique, usually called eccentric weaving, has occasionally been used in making some of the experimental abstract hangings of the later 20th century.
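The twill interlacement described above (weft floating over two warps, under one, with the underpassage shifting one warp each row) can be visualized with a short sketch; the grid width and the symbols used are illustrative only:

```python
# Sketch of a 2/1 twill interlacement: each weft floats over two warps,
# passes under one, and the underpassage shifts by one warp per row,
# producing the diagonal ribbing described in the text.

def twill_row(width, row, over=2, under=1):
    """Return one weft row: '#' where the weft floats over a warp,
    '.' where it passes under."""
    period = over + under
    return "".join(
        "#" if (col - row) % period < over else "."
        for col in range(width)
    )

for r in range(4):
    print(twill_row(12, r))
```

Printing successive rows shows the underpassage (the dots) stepping one warp sideways each row, which is exactly what creates the diagonal rib.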
European tapestry may be woven on either a vertical loom (high-warp, or haute-lisse in French) or a horizontal loom (low-warp, or basse-lisse). In early high-warp looms the warps were attached to a beam at the top, and groups of warp threads were weighted at the bottom. The weft was beaten up (i.e., pushed) toward the top as the weaving progressed. High-warp looms of this type are pictured on ancient Greek vases. In later high-warp looms the vertical frame has heavy uprights holding a horizontal roller at top and bottom, on which the warps are stretched. Each warp passes through a loop of cord (the lisses), and the loops encircling the warps that correspond to uneven numbers are fastened to one slender cylinder; those to the even-numbered warps are fastened to another cylinder. Both cylinders are above the weaver but within reach so that he can pull forward first with one, then with the other set of warps (i.e., form the shed) in order to pass his bobbin behind them. The bobbin (broche) is a short, pointed, slim cylinder of polished wood on which the weft yarn is wound.
The low-warp loom, on the other hand, has the rollers on the same level at table height so that the warps stretched between them are horizontal. To leave the weaver’s hands free, the warps are attached to two slats, or poles, each of which is connected with a treadle so that the weaver’s foot depresses the odd-numbered or even-numbered series of warps to form a passageway for the bobbin, called a shuttle on the low-warp loom. The cylinders in both instances serve to roll up the finished portion and unroll a further length of unwoven warps so that the section in process is always taut and in a convenient relation to the weaver. At both types of loom the weaver works from the back side, that is, he weaves the tapestry on the wrong side. He has, however, a hand mirror, which he puts through the unwoven warps holding it to reflect the right side of the portion in process. While the high-warp weaver can examine his finished work directly by walking around to the other side of his loom, the low-warp worker has to tilt up his frame.
Of the two techniques, low-warp is more commonly used. Of the great European tapestry works only one, Gobelins, has traditionally used high-warp looms. Several weavers can work simultaneously on either kind of loom. Depending on the complexity of the design and the grain or thickness of the tapestry texture, a 20th-century weaver at the Gobelins could produce 32 to 75 square feet (3 to 7 square metres) a year.
In Western tapestry the medieval cartoon, or preparatory drawing, was usually traced and coloured by a painter on a canvas the size of the tapestry to be woven. At the end of the 15th century the weaver probably wove directly from a model, such as a painting, and consequently copied not a diagrammatic pattern but the original finished work of the painter. At the beginning of the 17th century there arose a clear distinction between the model and the cartoon. The model was the original reference on which the cartoon was based. Cartoons were rapidly and freely used and were often copied.
More than one tapestry can be woven from a cartoon. At the Gobelins factory, for instance, the 17th-century “Indies” tapestry set was woven eight times, remade, and slightly altered by the late Baroque painter François Desportes (1661–1743); these cartoons were woven several more times during the 18th century.
The border of a cartoon tended to be redesigned every time it was commissioned, since each patron would have a different heraldic device or personal preference for ornamental motifs. Borders were frequently designed by an artist different from the one who conceived the cartoon for the central narrative or principal image. As an element of tapestry design, however, borders or frames were important in European tapestry only from the 16th to the 19th century. This device was seldom used before the 16th century or after the 19th, largely because the notion of tapestry as a reproduction of or substitute for a painting was most popular in those four centuries.
A fully painted cartoon requires much of the painter’s time and is tedious to make. Beginning in the 20th century, other solutions were adopted. The cartoon may be a photographic enlargement of a fully painted model or, more simply, a numbered diagrammatic drawing. The latter type of cartoon was worked out by the famous French tapestry designer Jean Lurçat during World War II. In this method each number corresponds to a precise colour and each cartoonist has his own range of colours. The colours are not indicated in a photographic enlargement, but the weaver refers to a small colour model provided by the painter and from it makes a selection of wool samples.
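Lurçat's numbered-cartoon method amounts to a lookup from region numbers to precise colours. A minimal sketch of the idea (the palette names and the tiny cartoon grid are invented for illustration):

```python
# Sketch of a numbered diagrammatic cartoon in the manner of Jean Lurçat:
# each cell of the cartoon holds a number, and each number maps to one
# precise colour in the cartoonist's own range. Palette names are invented.

palette = {1: "madder red", 2: "indigo blue", 3: "undyed white"}

cartoon = [
    [1, 1, 2],
    [1, 3, 2],
    [3, 3, 2],
]

# Executing the cartoon means resolving every number through the palette,
# as the weaver does when matching numbers to wool samples.
woven = [[palette[n] for n in row] for row in cartoon]
for row in woven:
    print(row)
```

The point of the method is that the diagram carries no colour itself; the number-to-colour mapping, fixed per cartoonist, is what the weaver consults when selecting wool samples.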
The high-warp weaver has the full-size cartoon, which he follows as it hangs beside or behind him. The low-warp worker has the cartoon laid under the warps, so he follows it from immediately above. In both cases the main outlines are drawn with ink on the warps after they have been mounted, or attached to the loom. The design is executed, in all European work since the Middle Ages, at right angles to the loom, so that in the finished hanging the warps usually run horizontally rather than vertically as they ran on the loom. Though in certain pieces the warps run vertically, it is aesthetically advantageous for the tapestries to be executed horizontally, since the warp ribbing tends to create a texture more or less reinforced by linear shadows, which, if vertical, sever the design but if horizontal bind it into continuity. Practically, however, horizontal warps are disadvantageous, since the horizontal slits made in weaving will pull apart more rapidly than vertical slits because of the weight of the hanging.
Periods and centres of activity
Ancient Western world
Examples of tapestry weaving from the ancient world are so isolated and fragmentary as to make it uncertain either when or where the art originated. The earliest known tapestry weaving was done in linen by the ancient Egyptians between 1483 and 1411 bce. Preserved by the dry desert climate of Egypt, three tapestry fragments were found in the tomb of Thutmose IV. Two of the fragments have cartouches of Egyptian pharaohs, and the third is a series of hieroglyphs. In the tomb of Tutankhamen (c. 1323 bce), a robe and glove woven by the tapestry technique have also been found.
Although no examples remain, writers of antiquity are unanimous in proclaiming the magnificence of Babylonian and Assyrian tapestries. Some scholars have speculated that the ancient Egyptians learned the art of tapestry from the ancient peoples of Mesopotamia. During that period when the few preserved Egyptian tapestry fragments were made, Mesopotamian ideas, techniques, and, perhaps, craftsmen were entering Egypt. These scholars conjecture that, since tapestry weaving did not occur in quantity again in Egypt until the 4th century ce, it is likely that the craft was not indigenous.
Tapestry weaving continued to flourish in western Asia in the 1st millennium bce. Fragments of wool tapestries dating from the 4th or 3rd century bce have been found in graves in Ukraine near Kerch in the Crimean peninsula. The ornamental motifs of these fragments are of a widely diffused Hellenistic style that was especially prevalent in Syrian art at the time. Another fragment showing close Syrian connections is a piece of silk tapestry dating about 200 to 500 years later and found in China at Loulan in the Uygur Autonomous Region of Xinjiang. Other fragments have been found in Syria at the archaeological sites of Palmyra and Doura-Europus. If climatic conditions for textile preservation in the Middle East had been more favourable, it might be possible to theorize that Syria was a great centre of tapestry weaving, especially at the start of the Christian Era.
There are literary descriptions of the making of tapestry in ancient Greece and Rome. In the Odyssey, Homer (8th century bce?) describes Penelope working on a tapestry that was unraveled each night as she waited for Odysseus. The Roman poet Ovid (43 bce–17 ce) in the Metamorphoses describes the tapestry looms used by Minerva and Arachne in their mythological weaving contest. During the period of the empire the Romans apparently imported a considerable number of the tapestries used in their public buildings as well as in the homes of the wealthy. Since the Latin terms referring to tapestry and weaving are Greek in origin, it is generally supposed that the art of tapestry making was taught to the Romans by the Greeks.
Called kesi (cut silk), tapestry has long been produced in China, traditionally being made entirely of silk; Chinese tapestries are extremely fine in texture and light in weight. The weave is finished perfectly on both sides so that the tapestries are reversible. The warps are vertical in relation to the pattern, rather than horizontal as in European weaving. Sometimes the weaver uses metal threads to make his hangings more sumptuous or highlights the design by painting, although this is not considered a commendable expedient.
Many kesi, such as Dongfang Shuo Stealing the Peaches of Longevity, imitated paintings and were mounted on scrolls or album leaves in the same manner as the pictures they copied. Tapestries to cover large wall surfaces, such as the kesi (7 feet 3 inches by 5 feet 9 inches; 2.2 by 1.75 metres) of Fenghuang in a Rock Garden (late Ming period), were usually brighter in colour, heavier in texture, and frequently woven with metal threads. Tapestry was also used to decorate furniture and clothing.
The earliest surviving examples of kesi date from the Tang dynasty (618–907 ce). Eighth-century remains have been found in desert oases around Turfan in the Uygur Autonomous Region of Xinjiang, China, and late Tang fragments have been found in the Mogao Caves near the town of Dunhuang in Gansu province. It is thought that these weavings are probably not representative of the more fully developed kesi of the Tang period because they show only simple repeating patterns of flowers, vines, ducks, lions, etc., and were found in relatively remote areas of Central Asia along the silk-trade route. More sophisticated by comparison is the 8th-century kesi that hangs in the Taima-dera, a temple near Nara, Japan. Based on the story of the Tang dynasty priest Shandao, this 43-square-foot (4-square-metre) weaving is the oldest known complete Chinese wall tapestry.
During the Song dynasty (960–1279) the imperial family encouraged painting and patronized the art of tapestry. An important weaving centre was at Dingzhou in Hebei province. Under the Yuan dynasty (1206–1368) a government factory for weaving kesi was established at Hangzhou in Zhejiang province. Characterized by their rich ornamental designs, the Hangzhou kesi were frequently woven with gold and silver thread. Examples of tapestry from the Ming period (1368–1644) are rare and exquisite. The kesi executed during the rule of the great Manchu emperor Kangxi (also called Xuanye; 1661–1722) are the finest tapestries produced during the Qing dynasty (1644–1911/12). They are distinguished for their delicate colouring and the use of philosophical and religious themes. Later Qing kesi have survived in great abundance and show a decided artistic and technical decline. This is especially evident in the frequent use of painting to perfect design details in 19th-century kesi.
The tapestry technique traveled from China to Japan in the late 15th or early 16th century during the Muromachi (Ashikaga) period (1338–1573). Japanese tapestry called tsuzure-nishiki (polychrome tapestry) differs from the Chinese kesi in its more pronounced surface relief. This is achieved through the use of thick cotton weft threads covered with silk, gold, or silver thread.
Paralleling the great period of sumptuous brocade manufacturing, the production of tsuzure flourished during the Tokugawa (Edo) period (1603–1867), especially in the early 17th century and throughout the entire 18th century. These polychrome tapestries were primarily used to decorate garments and for wrapping gifts; on rare occasions they were also used as wall hangings. Although the tapestry industry declined in quality in the 19th century, it was revitalized in the 20th century. Monumental wall hangings and theatre curtains are woven in the textile factories of Ōsaka and Kyōto by both traditional Japanese and European tapestry techniques.
The history of the art in Korea remains obscure. Rather coarse wool tapestry-woven rugs with stylized motifs, however, are still produced there.
The most skilled weaving in pre-Columbian America was achieved by the Andean Indian cultures of ancient Peru. The origins of tapestry weaving among these peoples are believed to date as early as the beginnings of the Christian Era. By the 6th and 7th centuries the technique of tapestry weaving was established, and a large number of pieces in this medium have survived, particularly from the 8th to the 12th centuries. Most of these tapestry weavings have been found in Peruvian coastal burial sites, where the dry desert climate prevented their deterioration. The dead were buried in clothes that display some of the most varied and skilled techniques of weaving and needlework ever current in any culture. Tapestry weaving was used principally to make garment decorations that were usually integral to the garment fabric. Narrow strips to ornament the edges of clothing were common, as were panels covering the entire surface of the cuzma, a poncho-like Indian shirt. Fragments of tapestry wall hangings have also survived.
According to chronicles written by Spanish colonizers and scenes painted on ancient Peruvian pottery, weaving was generally done by women whose great manual skill made up for the simplicity of the looms, which are still used by Indian craftsmen. The workmanship was extremely fine. Certain tapestry fragments have been found with 150 to 250 weft threads per inch (60 to 100 per centimetre). The warps of the tapestries are of undyed cotton, being, therefore, either white or brown. The wefts are of wool from the llama, guanaco, alpaca, or vicuña, with cotton sometimes used to obtain bright white. The tapestries are usually polychrome, for the range of available colours made with natural dyes was large. Strong colour contrasts were preferred to the use of subtly graded tones of colours, especially in the Inca period (c. 13th to 16th century). Compositions tended toward bold conventionalized designs, often of human or animal figures and elaborate geometric patterns. Plant motifs are comparatively rare.
After the Spanish conquest, looms from Spain were imported by the viceroyalty of Peru, and the weaving of tapestry was continued during the colonial period. The skilled Inca and later mestizo weavers evolved a curious blending of European influences and Indian traditions.
Tapestry may also have been current in other developed pre-Columbian cultures of Central America and Mexico. Climatic conditions, however, have been destructive to textiles.
Middle Ages in Egypt and the Near East
Tapestry weaving was done by the Copts, or Egyptian Christians, from the 3rd to about the 12th century ce. Their tapestries are of great interest not only because of their artistic quality and technical skill but also because they are a bridge between the art of the ancient world and the art of the Middle Ages in western Europe. Fragments from the 5th to the 7th century are particularly numerous, and the largest number of examples have survived in the Egyptian cemetery sites of Akhmīm, Antinoë, and Ṣaqqārah. As a result of a change in burial customs, perhaps attributable to Romanization and the widespread adoption of Christianity in Egypt, the ancient practice of mummification and its attendant ritual fell into disuse after the 4th century ce. The dead were subsequently buried in daily clothes or were wrapped in discarded wall hangings and tapestries. The clothing was ornamented with tapestry trimming, which was either woven into the fabric or attached to tunics and cloaks. Other burial furnishings included pillows and coverings. Tapestries were also used for the decoration of Christian churches, but few of these wall hangings have survived.
Coptic tapestries were woven with woolen wefts on linen warps, though a few with silk wefts have been preserved. Cotton wefts were occasionally used to obtain a brighter white. Primarily in the 7th century and perhaps also the 8th century, tapestry ornamentation was often supplemented by embroidery, as in border margins. In a special variant, which is not true tapestry, characteristic ornamental motifs such as meanders or other geometric repeats are executed with a free bobbin that follows the design without regard to consistency of weft direction.
Many of the early Coptic tapestries were done in a silhouette technique in which the motif or design was in a single dark colour, usually a tone of purple achieved by dyeing with madder and indigo, against a lighter background colour. After the 5th century, polychrome tapestries became increasingly common.
Many Coptic tapestry trimmings were woven with indigenous designs. Recurring motifs were related to the ancient Egyptian funerary cult of Osiris and included the grape vine or ivy and the wine amphora. These motifs were considered appropriate to burial robes because of their relevance to revival in a life after death. Other favourite subjects were the hunter on horseback, boy-warriors, desert animals (especially the lion and the hare), creatures of mythology, dancing figures, and baskets of fruits and flowers. Christian subjects are as a rule late in date and are mostly figures of saints, standing or on horseback, against a red background. Depictions of biblical stories are rare. Some of the Coptic designs were copied, in a more or less distorted manner, from those woven into silk textiles imported from Syria.
After the invasion of Egypt by the Muslims in 640, the quality of Coptic tapestry began to deteriorate, although the industry continued to flourish by adapting itself to the tastes of the conquerors. During the Tūlūnid period (868–905) bands of tapestry trimming in wool or often in silk, occasionally with metal-thread enrichments, were woven into white or dark green linen garments. In the Fātimid period (909–1171) silk tapestry weaving in golden yellow and scarlet became common. The motifs of the Islamic period of Egyptian weaving were often interlacing geometric patterns frequently enclosing inscriptions or highly stylized small birds, animals, and flowers. Many of these inscriptions merely simulate writing, but many are legible. Giving religious phrases or the names and titles of rulers, they are in handsome angular Kufic scripts on earlier pieces and in cursive scripts later.
From the 6th to the 8th century ce, and doubtless from then on, striking wool tapestries were being made in Syria corresponding in style to the contemporary silk textiles with animals or birds in energetic heraldic stylization, framed in roundels, and almost always on a red ground. Later, from the 11th to the 13th century, highly distinctive silk- and gold-thread tapestries were produced in Syria incorporating pagan motifs from classical antiquity.
Few specimens of Persian tapestries have survived, but one notable fragment, now in the Moore Collection at Yale University, bears an ibex in the style of the Sāsānian period. A single piece from the Seljuq period (11th century) attests to the continued use of the tapestry technique, which reappears in the 16th century (intermediate examples apparently having all been destroyed) as the medium for rich silk- and metal-thread rugs, of which only three are known still to exist (also in the Moore Collection, New Haven, Connecticut), though others are illustrated in Persian miniatures. The modern descendants of these are kilims, or pileless carpets woven by the tapestry technique. Common to the entire Near East, these rugs are especially produced in the Caucasus and Asia Minor, as well as in parts of eastern Europe. Occasionally silk, they are more often wool with simple geometric patterns in bold colours.
Early Middle Ages in western Europe
Numerous documents dating from as early as the end of the 8th century describe tapestries with figurative ornamentation decorating churches and monasteries in western Europe, but no examples remain, and the ambiguity of the terms used to refer to these hangings makes it impossible to be certain of the technique employed. The 11th-century so-called Bayeux Tapestry depicting the Norman Conquest of England, for example, is not a woven tapestry at all but is a crewel-embroidered hanging.
Like the art of stained glass, western European tapestry flourished largely from the beginnings of the Gothic period in the 13th century to the 20th century. Few pre-Gothic tapestries have survived. Perhaps the oldest preserved wall tapestry woven in medieval Europe is the hanging for the choir of the church of St. Gereon at Cologne in Germany. This seven-colour wool tapestry is generally thought to have been made in Cologne in the early 11th century. The medallions with bulls and griffons locked in combat were probably adapted from Byzantine or Syrian silk textiles. The Cloth of Saint Gereon is thematically ornamental, but an early series of three tapestries woven in the Rhineland for the Halberstadt Cathedral was narrative. Dating from the late 12th and early 13th centuries, these wool and linen hangings are highly stylized and schematic in their representations of figures and space, with all forms being outlined. The Tapestry of the Angels, showing scenes from the life of Abraham and St. Michael the Archangel, and the Tapestry of the Apostles, showing Christ surrounded by his 12 disciples, were both intended to be hung over the cathedral’s choir stalls and therefore are long and narrow. The third hanging, called the Tapestry of Charlemagne Among the Four Philosophers of Antiquity, is a vertical wall hanging related to works produced by the convent at Quedlinburg during the Romanesque period of the 12th and early 13th centuries.
Fragments of a tapestry with traces of human figures and trees reminiscent of hangings described in the Norse sagas were found in an early 9th-century burial ship excavated at Oseberg in Norway. One of the major works of Romanesque weaving is a more complete tapestry dating from around the end of the 12th or early 13th century that was made for the Norwegian church of Baldishol in the district of Hedmark. Originally a set of wool hangings on the 12 months of the year, only the panels of April and May have survived. The pronounced stylization of the images relates these tapestries to those executed for Halberstadt Cathedral.
In the 14th century the western European tradition of tapestry became firmly established. At that time the most sophisticated centres of production were in Paris and Flanders. Large numbers of tapestries are recorded in inventories. The more luxurious standards of living being adopted by the wealthy of the Gothic period extended the use of tapestries beyond the customary wall hangings to covers for furniture. Survivals of 14th-century workmanship, however, are rare, and the most important of these were produced by Parisian weavers. The outstanding example of their art is the famous Angers Apocalypse, which was begun in 1377 for the duke of Anjou by Nicolas Bataille (flourished c. 1363–1400). This monumental set originally included seven tapestries, each measuring approximately 16.5 feet in height by 80 feet in length (5.03 by 24.38 metres). The set was based on cartoons drawn by Jean de Bandol of Bruges (flourished 1368–81), the official painter to Charles V, king of France; only 67 of the original 105 scenes have survived. A slightly later series (c. 1385) possibly woven in the same Parisian workshop is the Nine Heroes. This set is not a religious narrative but illustrates the chivalric text Histoire des neuf preux (“Story of the Nine Heroes”) by the early 14th-century wandering minstrel, or jongleur, Jacques de Longuyon.
Flanders, particularly the city of Arras, was the other great centre of the tapestry industry in 14th-century Europe. The tapestry produced there had such an international reputation that the terms for tapestry in Italian (arazzo), Spanish (drap de raz), and English (arras) were derived from the name of this Flemish city. Long a medieval centre of textile weaving, Arras became an important tapestry centre when the leading citizens decided to create a luxury industry to alleviate the economic crisis caused by a decline in the sale of Arras textiles due to the popularity of cloth from the Flemish region of Brabant.
The greatest tapestries of the 15th century were produced in the Flemish cities of Arras, Tournai, and Brussels. In the first half of the century it was Arras that particularly prospered under the patronage of the dukes of Burgundy. Duke Philip the Good (1396–1467) had a specially designed building erected in the city to allow for better conservation of his tapestry collection. Between 1423 and 1467 no fewer than 59 master tapestry weavers were working in Arras, but following the French siege of the city in 1477 under King Louis XI the industry declined. After approximately 1530 it was no longer active. While the importance of Arras waned, that of Tournai and eventually Brussels waxed—their tapestries becoming the most sought after in the late 15th century. Local identification marks did not become general until the 16th century, and continual intercourse between the various medieval centres of tapestry making, particularly Arras and Tournai, adds to the difficulty of determining where individual tapestries were made. Despite the prestige of Arras workmanship, it is ironic that only one set of tapestries dating from 1402 is inscribed with the actual name. Large fragments showing scenes from the lives of St. Piat and St. Eleutherius survive in the cathedral of Tournai, for which they were commissioned. The imagery of these tapestries, like that of most Gothic hangings, was closely related to the styles of painting current at the time. Other important examples of supposed Arras tapestries inspired by Franco-Flemish book miniatures or paintings on wood panels include the early 15th-century tapestry of The Annunciation, which was probably woven after a cartoon by Melchior Broederlam (active 1381–c. 1409), and the Court Scenes, related to the Très Riches Heures du duc de Berry illuminated by the Limbourg brothers (active early 15th century).
Whether a tapestry is an Arras or not is usually determined by comparison with the History of St. Piat and St. Eleuthère. One of the finest works so attributed is the late 14th-century fragment from the set in the Museo Civico at Padua, Italy, illustrating the Geste of Jourdain de Blaye, a medieval chivalric story adapted from the ancient Greco-Roman romance Apollonius of Tyre.
The craft, practiced since the end of the 13th century at Tournai, proved so prosperous that in 1398 a regulation concerning production was published. It is the oldest known ordinance regulating the craft of tapestry weaving. Among partially surviving tapestries ordered in the late 15th century by the court of Burgundy were two sets produced by the weaver and tapestry merchant Pasquier Grenier (died 1493) for Philip the Good. One set, The Story of Alexander, was purchased in 1459, and the other, The Knight of the Swan, was bought in 1462.
Cited by many scholars as an example of mid-15th-century Tournai weaving under the influence of Arras are the four renowned tapestries of The Hunts of the Dukes of Devonshire. Typical of the developed late Gothic Tournai style are the compacted vertical compositions of The Story of Strong King Clovis (mid-15th century) and The Story of Caesar (c. 1465–70). Many of the attributed Tournai weavings are heavily outlined and have a solemnity that contrasts to the more fanciful nature of Arras weavings. A sense of monumentality is created by the immense size of many of these supposed Tournai weavings and by the way the vast surfaces are densely filled with superimposed imagery.
A producer of tapestry since the 14th century, Brussels vied with Arras and Tournai in the 15th century. By mid-century, Brussels was noted for its highly skilled reproductions of religious paintings by Flemish masters of late Gothic realism, such as in the tapestry of The Adoration of the Magi. These panels were called “altarpiece tapestries” because they were usually intended for churches or private chapels, where they either were used as an altar cloth or antependium or were hung behind the altar as an altarpiece or fabric retable. In scale, altarpiece tapestries approximated the dimensions of the painting they copied and were, therefore, much smaller in size than the muralesque wall hangings of Arras and Tournai. Silk was commonly used to obtain the greater degree of naturalistic detail essential in reproducing a painting.
In the late 15th and early 16th centuries, Brussels also became famous for its production of tapis d’or, or “golden carpets,” so called because of the profuse use of gold threads. Examples such as The Triumph of Christ, popularly known as the Mazarin Tapestry (c. 1500), are characterized by their richness of effect.
Perhaps the best-known late Gothic hangings were the fanciful tapestries usually referred to as millefleurs (“thousand flowers”). A red or dark-blue ground strewn with flora and fauna sometimes serves as a setting for heraldic devices such as in the late 15th-century tapestry with the coat of arms of Philip the Good or acts as a background for scenes of the chivalric aristocratic life during the late Middle Ages, such as in The Hunt of the Unicorn or The Lady and the Unicorn. The origin of millefleurs tapestries is disputed, but it is thought that they were woven in the Flemish workshops of Brussels and Bruges and by itinerant weavers in the Loire Valley of France.
Itinerant Flemish and French weavers, setting up their looms in cities where there was temporary employment, carried tapestry weaving to Italy as early as the 15th century. Before the 16th century, however, most tapestries were bought in France and Flanders. Small workshops attached to the courts of various Italian nobles sporadically appeared for brief periods in Siena, Brescia, Todi, Perugia, Urbino, Mantua, Modena, Genoa, and Ferrara. The only one of importance was the Flemish-directed workshop of Ferrara, established around 1445 by the duke Lionello d’Este, who commissioned the famous Ferrarese early Renaissance painter Cosmè Tura (c. 1430–95) to make cartoons for his weavers.
Two new trends became apparent in the 16th century. The first, brought about by war and persecution in Flanders, resulted in the widespread diffusion of the Flemish art of tapestry weaving. Many Flemish artisans in the 16th century were forced to become refugees. Some grouped together to live the life of traveling craftsmen, while others attempted to reestablish their trade abroad. Flemish weavers were welcomed everywhere as carriers of a great tradition. Such itinerant masters established shops from England to Italy. The second important new trend emanated from Italy and reflected the superiority attached by the Italian Renaissance to the art of painting. The decisive step, which was to bring about the subordination of weaving to painting for more than 400 years in the art of tapestry, was taken when Pope Leo X commissioned the famed weaver Pieter van Aelst (flourished late 15th–early 16th century) of Brussels to make a series of tapestries illustrating the Acts of the Apostles from cartoons produced between 1514 and 1516 by Raphael (1483–1520). Little or no concession had been made to the tapestry medium for which the cartoons were intended, but the tapestries were a great success, and numerous copies of them were subsequently made.
The occupation of Arras by the French in the late 15th century and successive sieges of Tournai in the early 16th century contributed to the rise of Brussels as the leading tapestry centre of Flanders—a position it maintained until the 17th century. The patronage of the papacy and the imperial houses of Spain and Austria, along with other European royalty and the skill of its weavers, who were among the finest in Europe, combined to establish the international reputation of Brussels tapestry. The industry was controlled by a monopoly of rich merchants. Tapestry making proved so prosperous in the period between 1510 and the outbreak of the Peasants’ War in 1568 that the industry had to be protected by regulations against frauds and forgeries. A number of communal ordinances followed one another in rapid succession, the most important being that of 1528, requiring each tapestry woven in Brussels to bear the mark of the city—a flat red shield flanked by two B’s standing for Brussels and the province of Brabant. The same imperial edict issued by Emperor Charles V also required manufacturers and merchants to use the signature or monogram of the master weaver or workshop.
It is the designs of the Flemish painter Bernard van Orley (1492?–1541) that are most characteristic of the Renaissance style of Brussels tapestry. Van Orley attempted to reconcile the traditions of late Gothic northern realism and the monumentality and idealism of Italian Renaissance art with the artistic potential of the tapestry medium. His earlier works, such as The Legend of Our Lady of Le Sablon and The Revelation of St. John (1520–30), still show compositional elements that link them to medieval Flemish art. Later, his work was influenced by the cartoons of Italian artists that were woven in Brussels workshops, such as Raphael’s Act of the Apostles and the designs for The Story of Scipio and Fructus Belli, executed by Raphael’s disciple, the Mannerist painter and architect Giulio Romano (1499–1546). Van Orley adapted the Italians’ preference for monumentality and their feeling for depth and sculptural modeling to Flemish tastes and traditions for genre and naturalistic detail in sets such as The Battle of Pavia, The Story of Abraham, The Story of Tobias, and The Hunts of the Emperor Maximilian I (before 1528). Among his followers in the first half of the 16th century were the Flemish painters Pieter Coecke van Aelst (1502–50), Jan Vermeyen (c. 1500–59), and Michel Coxcie (1499–1592). It was not only the cartoonists of Brussels who achieved international reputations but also the weavers of the 16th century. Among the best known are Pieter van Aelst, Pieter and Willem Pannemaker, and Frans (active c. 1540–90) and Jacob Geubels (active c. 1580–1605).
Other limited centres of tapestry making in 16th-century Flanders were Antwerp, Bruges, Enghien, Oudenaarde, Grammont, Alost, Lille, and Tournai. Perhaps the most distinctive type of tapestry produced in these cities was the verdures of Enghien and Oudenaarde. French tapestry weaving, after its eclipse in the 15th century when nomadic weavers seem to have been more active than established shops, owes much of its eventual prestige to an unusual degree of royal patronage. This resulted in the 17th century in the foundation of the Gobelins and Beauvais state factories, the names of which have now become household words. A prelude to this development was the factory established by Francis I in 1538 near Paris at the château of Fontainebleau to make tapestries for his royal residences. Staffed by Flemish weavers, the cartoons were largely furnished by two Italian Mannerist artists, Francesco Primaticcio (1504–70) and Rosso Fiorentino (1494–1540), who were court painters to the king. The six tapestries, based on their murals for the Galerie des Réformes in the château, are the first tapestries in which sculpture as well as painting is imitated in the highly illusionistic manner of a trompe-l’oeil (“fool-the-eye”) effect.
The Fontainebleau workshop, which was active for only 12 years, provided the springboard for subsequent developments in Paris, where in 1551 Henry II established and endowed with special privileges the Hôpital de la Trinité factory.
In the first third of the 16th century, Franco-Flemish weavers and small court workshops continued to supply the only indigenous Italian tapestry. Weaving was done in Genoa, Verona, Venice, Milan, and Mantua. The first internationally important Italian tapestry factory was established in 1536 in Ferrara by Duke Ercole II of the house of Este. The Arrazeria Medicea founded in 1546 in Florence by the Medici grand duke Cosimo I (1519–74) was the most important tapestry factory instituted in Italy during the 16th century and survived into the early 18th century. It was headed initially by the famous mid-16th-century Flemish weavers Nicolas Karcher and Jan van der Roost, both of whom had worked in the Ferrara workshop of Duke Ercole II.
Cartoons were designed by such leading Mannerist artists of Florence as Jacopo Pontormo (1494–1556/57), Francesco Salviati (1510–63), Il Bronzino (1503–72), and Bachiacca (1494–1557), who designed the Grotesques (c. 1550), one of the most famous and influential tapestry sets produced by the Arrazeria Medicea.
The major textile art in medieval England was embroidery. When woven tapestries were needed, they were imported from Flanders. Although occasional references to Arras weavers in England date from the 13th century and a few indigenous armorial tapestries have survived from the 15th century, it was only after the middle of the 16th century that the English organized tapestry works. The first important workshops were set up in Barcheston (Warwickshire) by a wealthy squire, William Sheldon (died 1570). They initially produced cushion covers and small hangings of heraldic and ornamental subjects. The shops later created a set of topographic tapestries. Woven in 1588 from contemporary maps of the Midland counties, these tapestries featured bird’s-eye views of hills, trees, and towns, surrounded, according to the custom of the period, by Flemish-styled borders of architectural and figural ornament. Many of the men who worked in these shops were Flemings who had fled the mid-16th-century religious persecutions in the Lowlands.
Germany was one of the first regions to receive Flemish weavers fleeing religious persecution in the Lowlands. Their small workshops prospered in such cities as Cologne, Hamburg, Kassel, Leipzig, Torgau, Lüneburg, Frankenthal, and Stuttgart. Most of the works produced were in the Flemish style. In Switzerland, on the other hand, where tapestry making had flourished in the 14th and 15th centuries, the industry almost ceased to exist except around Basel and Lucerne.
17th and 18th centuries
It was due to the initiative of Henry IV, whose planning of his nation’s economy emphasized the luxury production that has since been commercially important in France, that decisive steps were taken in establishing a French tapestry industry. In 1608 Henry gave official recognition to the French workshop (using the high-warp method) of Girard Laurent and Dubout by establishing them in the Louvre, and at the same time he encouraged the immigration of Flemish weavers practicing the low-warp method who would help Paris to compete with the flourishing industries of Brussels and Antwerp.
At the turn of the 16th–17th centuries, two Flemish weavers had been taken to France by government arrangement to establish low-warp looms in Paris: François de La Planche (or Franz van den Planken; 1573–1627) and Marc de Comans (1563–before 1650). Satisfactory working conditions were found for them in the old Gobelins family dyeworks on the outskirts of the city, and so began the establishment commonly known by that name that has lasted ever since. One of its first ambitious productions was an allegorical invention lauding Catherine de Médicis under the guise of Artemisia. The cartoons for this set were chiefly by the French Mannerist painter Antoine Caron (c. 1515–93). The Baroque verve and vitality of the Flemish painter Peter Paul Rubens (1577–1640) and Simon Vouet (1590–1649) brought new life to French designs in the early 17th century.
De La Planche died in 1627 and was succeeded by his son, who broke with the Comans family and moved to the Faubourg Saint-Germain-des-Prés, leaving the Comans at the Gobelins. Competition became bitter, but both continued to produce a considerable quantity, as well as good quality, until they were superseded in 1662 by the royal factory, which purchased the Gobelins works at its location.
The Gobelins was officially established in 1667, receiving the title Manufacture Royale des Meubles de la Couronne (“Royal Factory of Furnishings to the Crown”). Initially it included all the king’s artisan corps (tapestry weavers, cabinetmakers, goldsmiths and silversmiths, etc.) that produced furnishings for the royal residences, especially the château of Versailles. Louis XIV’s finance minister, Jean-Baptiste Colbert (1619–83), always alert to profitable opportunities, recruited skilled personnel not only from the de La Planche and Comans shops but also from the old Louvre enterprise and thus established a new tapestry works with both high- and low-warp looms. The Gobelins’ first director was the painter Charles Le Brun (1619–90), who had managed the short-lived royal tapestry works established in 1658 by Colbert’s predecessor, Nicolas Fouquet (1615–80), at his château of Vaux-le-Vicomte near Paris. Le Brun applied himself with prodigious energy to his new position and proved to have a special talent for the task of celebrating the glory of Louis XIV. Among the most important sets he designed were The Elements, The Seasons, The Child Gardeners, The Story of Alexander, and, above all, the Life of Louis XIV and the Royal Residences (most of these sets are in the possession of Mobilier National in Paris).
When Le Brun died, the painter Pierre Mignard (1612–95) became director. The draining of the royal treasury closed the Gobelins in 1694. The factory opened again in 1699, when a lighter spirit was introduced into tapestry design by the decorative inventions, especially grotesques, of Claude Audran III (1658–1734), who designed such sets as The Grotesque Months and The Portières of the Gods. Louis XV (1710–74), in his turn, was celebrated in a set of Hunts by the Rococo painter Jean-Baptiste Oudry (1686–1755). Oudry was director of the Gobelins from 1733 until his death in 1755, when he was succeeded by François Boucher (1703–70), the outstanding artist-director of the 18th century. Boucher and Charles-Antoine Coypel (1694–1752), a Rococo painter, designed many of the popular alentours tapestries, in which the central subject, presented as a painting bordered by a frame simulating gilded wood, is eclipsed by the rich use of ornamental devices surrounding it. Boucher’s Loves of the Gods were also alentours and enjoyed a great success and popularity, especially among the English nobility. The Story of Don Quixote was designed by Coypel and woven nine times between 1714 and 1794.
Oudry’s sophistication and polished elegance posed new problems for the weavers. Now indeed it was necessary for them to learn to paint with a bobbin, and to this end hundreds of new dyes were perfected for both wool and silk, until about 10,000 hues were available, to effect almost imperceptible tonal modulations; and interlocking of the wefts was introduced to render the transitions practically invisible, while the finest textures practical were used.
The Gobelins succeeded in surviving the French Revolution. Napoleon as emperor, like Louis XIV, desired an art of apotheosis and ordered a set of tapestries (1809–15) that were devoted to his reign. Paintings by such French Neoclassical painters as Jacques-Louis David (1748–1825), Carle Vernet (1758–1836), and Anne-Louis Girodet-Trioson (1767–1824) were woven into tapestries in the late 18th and early 19th centuries.
Another major state-subsidized factory established in 1664 at Beauvais had been carried on by two Flemings, Louis Hinart for 20 years and Philippe Behagle for 27 more. It was administered in much the same way as the Gobelins. Beauvais, however, was a private enterprise with royal patronage intended to produce tapestries for the nobility and the rich bourgeoisie, while Gobelins’ work was only for the king.
Two types of decorative panels were particularly developed at Beauvais in the late 17th century, the architectural composition and the grotesque. The former, such as in the set of Marine Triumphs (1690), usually shows a complex fantasy architecture reminiscent of Baroque stage sets. In the latter, architectural tracery defines a complex of panels, framing a medley of festoons, scarves, vases, musical instruments, putti, masks, and comedy actors, such as in The Rope Dancer and the Dromedary (c. 1689).
Both Oudry and Boucher designed for the Beauvais factory. The Fables of La Fontaine, by Oudry, were among the most popular tapestries of the 18th century. In 1736 Boucher designed Italian genre scenes for the set Village Festivities and later in the Second Chinese Set did Chinese fantasies. He also designed various pastoral scenes with titillating overtones. The Beauvais factory became noted for tapestry to upholster furniture with and panels for screens. These were usually floral designs and in the 19th century were especially fashionable in finely woven silk. By the end of the century, though technical standards were maintained, artistic deterioration set in.
Factories at the neighbouring old tapestry-making communities of Aubusson and Felletin, which had operated for a century and a half as modest private undertakings, were allowed to use the royal Aubusson mark as of 1665. From a small house industry, in which weavers independently produced inexpensive tapestries on their own low-warp looms for a bourgeois clientele, the tapestry makers soon produced hangings, upholstery fabrics, and carpets in Aubusson. The most effective tapestries are the chinoiseries, or genre fantasies set in China, a theme popular in Rococo art. Those designed by Jean Pillement (1728–1808) are especially famous. Coarse and rather dull, the verdures, or “garden tapestries,” which were the first Aubusson tapestries, were made in quantities. Aubusson architectural panels either imitate those of the Gobelins and Beauvais factories, often with more complex elements and the addition of animals, or depict a damasked wall hung with a painting or cluster of decorative objects and garlands. The factory was especially successful in its production of carpets with conventional geometric ornamental motifs or floral designs.
The dominant influence on the Brussels industry of the 17th century was the Antwerp painter Peter Paul Rubens, whose most famous set was the Triumph of the Eucharist (1627–28). Imitations and adaptations of his style were legion. Heavy and elaborate columns were often substituted for side borders. On a more modest scale are the tapestry versions of genre paintings by David Teniers the Younger (1610–90), in which the border frequently simulated the actual picture frame.
The first major tapestry factory to be established in Germany was founded in 1604 in Munich by Duke Maximilian of Bavaria. The designers and weavers were all Flemish. Although the factory closed after only 11 years of operation, the quality of its workmanship was outstanding. Following the loss of religious freedom in France when the Edict of Nantes was revoked in 1685, many French weavers, especially from the Aubusson factory, sought refuge from persecution in Germany as had the persecuted Flemish weavers of the 16th century. The workshop established in 1686 in Berlin by the great elector Frederick William of Brandenburg (1620–88) employed many of these displaced Aubusson weavers. It produced tapestries mainly for the palaces built by the great elector’s son, King Frederick I of Prussia (1657–1713), after whose death the factory closed.
French designers and weavers continued to produce a large number of tapestries in the 18th century. Tapestry production was centred principally in Munich, Berlin, Würzburg, Dresden, Schwabach, and Erlangen.
In Scandinavia tapestries for the Danish and Swedish royalty were woven in Copenhagen and Stockholm. The weavers and designers were usually French and Flemish. Norway and Sweden continued to produce folk tapestries. Of the nearly 1,300 registered Norwegian tapestries, approximately 1,250 originated in small rural communities. These tapestries were usually coarse in texture, stylized and schematic in design, and boldly coloured.
James I established in 1619 by royal charter a factory of tapestry weaving at Mortlake near London. It was staffed by 50 Flemings. Philip de Maecht, a member of the famous late 16th- and 17th-century family of Dutch tapestry weavers, was brought from the de La Planche-Comans factory in Paris, where he had been the master weaver, to hold the same position at Mortlake. The royal factory flourished under the patronage of the Stuart monarchs James I and Charles I. Many of the early tapestries produced at Mortlake were modeled after hangings woven in Brussels. Rubens supplied cartoons and in 1623 suggested to Charles I the purchase of seven of the Raphael cartoons for the Acts of the Apostles. A new set was woven from these cartoons at Mortlake and is preserved at the Mobilier National in Paris. The redesigned borders have been attributed to the renowned Flemish painter to the English court, Sir Anthony Van Dyck (1599–1641). Although the factory weathered the Puritan austerity of the Commonwealth period, it deteriorated under Charles II and closed in 1703.
From the late 17th century Francis Poyntz (died 1685) and his brothers had a studio in Soho, where a number of weavers originally employed in the royal factory produced a distinct style of tapestry based on Chinese and Indian lacquerwork.
Cardinal Francesco Barberini, the nephew of Pope Urban VIII, in 1633 established a tapestry factory in Rome. Even though it enjoyed papal patronage, it lasted only until 1679. Clement XI tried to establish another Roman tapestry works in 1710, which also failed. During the 18th century other small factories briefly existed in Turin and Naples. They were staffed mainly with weavers left unemployed by the closing of the Medici factory (Arrazeria Medicea) in Florence.
During the 15th and 16th centuries Franco-Flemish tapestries were imported in great quantities, and Flemish weavers were invited to Spain in order to repair and care for them. For a short time in the 17th century a factory, established by Philip IV (1605–65), operated at Pastrana near Madrid. It was not until Philip V (1683–1746) established the Real Fábrica de Tapices y Alfombras de Santa Barbara (Royal Factory of Tapestries and Rugs of St. Barbara) in 1720 at Madrid, however, that important tapestry was produced in Spain. Initially, the weavers and director were Flemings. The first tapestries made at Santa Barbara were woven from the cartoons of such Flemish Baroque painters as David Teniers the Younger (1610–90) and Philips Wouwerman (1619–68) or based on famous paintings by such Italian artists as Raphael and Guido Reni (1575–1642). When the early Neoclassical painter Anton Raphael Mengs (1728–79) became director, the factory entered its most brilliant period of production. The Spanish painter Francisco Bayeu (1734–95) and his painter son-in-law Francisco de Goya (1746–1828) were commissioned to make cartoons. From 1777 to 1790 Goya made 43 cartoons for the Los tapices (“The Tapestries”) series depicting Spanish daily life. The painted models for this are among the finest works of Goya’s Rococo style.
The French destroyed the factory in 1808, but after the Napoleonic occupation, production was resumed until 1835. The tapestries produced during this period were largely copies of works woven in the 18th century.
A tapestry factory staffed by weavers from the Gobelins was established at St. Petersburg in 1716 by Tsar Peter the Great (1672–1725). Although tapestries were produced until 1859, production was often plagued with difficulties. The most striking designs were a set of grotesques (1733–38) and a series of portraits, of which those of Catherine the Great (1729–96) are the most noteworthy.
19th and 20th centuries
Most 19th-century tapestries reproduced paintings or previously woven designs. The influence of the Industrial Revolution was inescapable, of course, not only in tools, materials, and dyes but in the new middle-class market and its demands. Machine-made tapestry, although an achievement in mechanical weaving, became a threat to the survival of the original handicraft.
The necessity for the revitalization and purification of the tapestry art was first recognized by the artists associated with the Arts and Crafts Movement in late 19th-century England. Decrying the loss of individual creativity, they revived the ideals of medieval craftsmanship in an attempt to counter the effects of industrialization on the decorative or applied arts. The leader and most important figure of the movement was the artist William Morris (1834–96), who established a tapestry factory at Merton Abbey in Surrey near London. For about 15 years he and his associates had been designing not only for looms but also for pictorial wall decorations and stained-glass windows. They were well prepared professionally, therefore, to design tapestries. Morris and the painter-illustrator Walter Crane (1845–1915) contributed cartoon sketches, but most Merton tapestries were designed by the Pre-Raphaelite painter Sir Edward Burne-Jones (1833–98). More venturesome than any of the Merton Abbey products were the tapestry designs made in the 1880s by the artist and architect Arthur Heygate Mackmurdo (1851–1942), who in 1882 founded the Century Guild, the first of many groups of artists-craftsmen-designers to follow the teachings of William Morris. This tradition, influenced by the tapestry revival in mid-20th-century France, has continued in Scotland. The most ambitious 20th-century tapestry designed by a British artist, Graham Sutherland’s (1903–80) enormous Christ in Glory (1962) for Coventry Cathedral, was, however, woven on looms in Felletin, France. This is the largest tapestry ever to have been made there (78 feet 1 inch by 38 feet 1 inch; 23.8 by 11.6 metres).
In Europe during the late 19th century there was a resurgence of tapestry based on folk traditions. This trend was already apparent in Norway shortly after 1890, when special efforts were made to base a modern tapestry art on native medieval weavings. The leaders were Gerhard Munthe (1849–1929), a well-known painter, and Frida Hansen (1855–1931), a weaver who studied the peasant craftsmanship of Norway and evolved an individual, light, and open weave. Somewhat later developments in Scandinavia occurred in Sweden and Finland. Märta Måås-Fjetterström (1873–1941) became the best-known Swedish tapestry artist, and her atelier continued to produce excellent works. In Finland a freer, more colourful art, more delicately scaled, has been practiced by many; among the best known are Martta Taipale, Laila Karttunen, and, for damask tapestry, Dora Jung. In Norway and to a lesser degree in Denmark, similar work has been done. The church in the Scandinavian countries has been unusually receptive to this art. Traditional folk weaving was also behind the revival of tapestry making in several other countries after World War I, including Czechoslovakia and Hungary. Poland produced especially original designs executed in a remarkably free technique. Following the tradition of heavy-grained native weaving, mid-20th-century Polish designer-weavers such as Magdalena Abakanowicz and Wojciech Sadley used unconventional materials such as jute, sisal, horsehair, and raffia in abstract tapestries that emphasize the nature of the material, tactile stimulation, plasticity, or surface relief.
Germany, emulating Scandinavia, also began a revival of tapestry weaving around the turn of the 20th century. In the state of Schleswig-Holstein a small tapestry industry was set up from 1896 to 1903 at Scherrebek, followed by similar enterprises at nearby Kiel and Meldorf. The most significant development, however, occurred at the design school of the Bauhaus, where tapestry was created during the 1920s and early 1930s. Abstract in composition, the Bauhaus designs were deeply rooted in the theory that the technology of the craft should be revealed in the work and in expressing the nature of the materials used, especially by the exploitation of heavy fibres as strong textural elements. Anni Albers, wife of the painter and Bauhaus instructor Josef Albers, became the chief practitioner of this kind of tapestry. Like most modern tapestry weavers, she also designed for the textile industry. After World War II, tapestry works were established in Munich and Nürnberg, and individual weavers worked throughout Germany and in Vienna. Among the Germans, unlike the French, stained glass rather than tapestry generated greater enthusiasm as a revived craft in the post-World War II period. A few individual designers worked on their own looms in the United States and Canada, where most large-scale tapestries continued to be imported from Europe. The Latin American revival of indigenous folkcrafts aroused interest in tapestry making in Mexico and Panama. South American centres of tapestry art developed in Brazil, Chile, and Colombia.
Modern tapestry design was hindered during the greater part of the 19th century in France by the academic administration of the state factories, although progressive artists began to be affected by the English Arts and Crafts Movement in the late 1880s. The painters Paul Gauguin (1848–1903) and Émile Bernard (1868–1941) were among those who took an interest in tapestry weaving, though they did not actually do tapestry cartoons as did Aristide Maillol (1861–1944). It was not until after World War I that France initiated and led the 20th-century revitalization of tapestry as an art. Many of the great modern artists of the school of Paris—Pablo Picasso (1881–1973), Georges Braque (1882–1962), Henri Matisse (1869–1954), Fernand Léger (1881–1955), Georges Rouault (1871–1958), and Joan Miró (1893–1983), among others—permitted their works to be reproduced in 1932. These reproductions were done with extraordinary fidelity under the supervision of Marie Cuttoli, a Paris connoisseur and promoter of exceptional taste. The Aubusson factory, chosen for this important weaving, became once again a great centre for tapestry. The direct translation of painting into tapestry, however, left little scope for the weaver, and it is the trend begun simultaneously by Jean Lurçat (1892–1966) that may be said to have truly inaugurated the 20th-century tapestry renaissance. Although he began experimenting in 1916, Lurçat’s art did not become definitive until the 1930s, when under the influence of Gothic tapestry, particularly the 14th-century Angers Apocalypse, and in collaboration with François Tabard, master weaver at Aubusson, he formulated the principles that were to make tapestry once again a joint creation between artist and weaver—an art in its own right. No longer merely an imitation painting, tapestry once again exploited the coarser texture and the bolder but more limited range of colours that characterized medieval hangings.
In 1947 Lurçat founded the important Association des Peintres-Cartonniers de Tapisserie (Association of Cartoon Painters of Tapestry). Also active in this organization were the important French tapestry designers Marc Saint-Saëns and Jean Picart Le Doux, who were Lurçat’s foremost disciples. Lurçat was held in great esteem by Dom Robert, a Benedictine monk whose tapestries of poetic fantasy were largely inspired by Persian and medieval European manuscript illumination. Other major French designers of representational compositions were the artists Marcel Gromaire (1892–1971) and Henri Matisse and the architect Le Corbusier (1887–1965).
In the 1950s tapestry designs became increasingly abstract. Among the most notable pieces were those designed by the sculptor and printmaker Henri-Georges Adam (1904–67). Using only black and white, his tapestries are monumental tonal abstractions that reflect his work as an engraver. The sculptor Jean Arp (1887–1966) and the painter Victor Vasarely are other abstract designers of postwar tapestries.
After World War II the Belgians, influenced by the weaving activity in France during the 1930s, revived their tapestry industry. In 1945 the Forces Murales movement was organized in Tournai by cartoon painters including Louis Deltour, Edmond Dubrunfaut, and Roger Somville, who became the leading designers of Belgian tapestries. This was followed in 1947 by the organization in Tournai of a collective tapestry workshop, the Centre de Rénovation de la Tapisserie, active until 1951. Small workshops continued to flourish in Belgium, especially in the cities of Tournai, Brussels, and Malines.
The renewed international interest in tapestry is clearly related to the austerity of modern architecture. Suitable settings for large-scale wall hangings are provided by the often vast expanses of bare wall surface in contemporary buildings. Le Corbusier not only used tapestries to decorate his architectural interiors but designed them. He frequently referred to tapestries as nomadic murals, recognizing their importance as movable and interchangeable decoration.
In 1962 the first international exhibition of tapestry was held at Lausanne in Switzerland, which after 1965 became an important biennial event. This exhibition clearly demonstrated the tremendous worldwide interest in the medium generated in the middle 20th century as well as indicating the immense variety of tapestry designs, materials, and techniques.

Madeleine Jarry
Indigenous Australians are the Aboriginal and Torres Strait Islander people of Australia, descended from groups that existed in Australia and surrounding islands prior to European colonization. The earliest definite human remains found in Australia are those of Mungo Man, which have been dated at about 40,000 years old, although the time of arrival of the first Indigenous Australians is a matter of debate among researchers, with estimates including thermoluminescence dating to between 61,000 and 52,000 years ago, as well as a suggestion of up to 125,000 years ago.
There is great diversity among different Indigenous communities and societies in Australia, each with its own mixture of cultures, customs and languages. In present-day Australia these groups are further divided into local communities. At the time of initial European settlement, over 250 languages were spoken; it is currently estimated that 120 to 145 of these remain in use, but only 13 of these are not considered endangered.
Aboriginal people today mostly speak English, with Aboriginal phrases and words being added to create Australian Aboriginal English (which also has a tangible influence of Indigenous languages in the phonology and grammatical structure). The population of Indigenous Australians at the time of permanent European settlement has been estimated at between 318,000 and 1,000,000 with the distribution being similar to that of the current Australian population, with the majority living in the south-east, centered along the Murray River.
Since 1995, the Australian Aboriginal Flag and the Torres Strait Islander Flag have been among the official flags of Australia.
Within Aboriginal belief systems, everyday reality is understood as a kind of dream or illusion. A formative epoch known as the Dreamtime stretches back into the distant past, when the creator ancestors known as the First Peoples traveled across the land, creating and naming as they went. Indigenous Australia's oral tradition and religious values are based upon reverence for the land and a belief in this Dreamtime.
The Dreaming is at once both the ancient time of creation and the present-day reality of Dreaming. There were a great many different groups, each with its own individual culture, belief structure, and language. These cultures overlapped to a greater or lesser extent, and evolved over time. Major ancestral spirits include the Rainbow Serpent, Baiame, Dirawong and Bunjil.
Traditional healers (known as Ngangkari in the Western desert areas of Central Australia) were highly respected men and women who not only acted as healers or doctors, but were generally also custodians of important Dreamtime stories.
Indigenous Australians most ancient civilization on Earth, extensive DNA study confirms Telegraph - September 23, 2016
Scientists used the genetic traces of the mysterious early humans that are left in the DNA of modern populations in Papua New Guinea and Australia to reconstruct their journey from Africa around 72,000 years ago. Experts disagree on whether present-day non-African people are descended from explorers who left Africa in a single exodus or in a series of distinct migratory waves. The new study supports the single-migration hypothesis. It indicates that Australian Aboriginal and Papuan people both originated from the same out-of-Africa migration event some 72,000 years ago, along with the ancestors of all other non-African populations alive today. Tracing the Papuan and Australian groups' progress showed that around 50,000 years ago they reached Sahul - a prehistoric supercontinent that once united New Guinea, Australia and Tasmania before they were separated by rising sea levels.
Looking to the Stars of Australian Aboriginal Astronomy Ancient Origins - December 30, 2015
Astronomy played an important role in many ancient societies. Through this natural science, the ancients were able to make calendars, navigate during the night, and even explore the nature of the universe through mythology and philosophy. Some civilizations well-known for their astronomical developments include the Babylonians, the ancient Egyptians, and the ancient Greeks. The astronomy of many other cultures, however, has been side-lined, as a result of the prevailing Euro-centric view of astronomy, and civilization, in general. One of these is the astronomy of the Australian Aboriginal people, considered by some to be the oldest in the world.
Could this legend be about a UFO and coincide with Ancient Alien Theory?
Aboriginal legends reveal ancient secrets to science BBC - May 19, 2015
Scientists are beginning to tap into a wellspring of knowledge buried in the ancient stories of Australia's Aboriginal peoples. But the loss of indigenous languages could mean it is too late to learn from them. The Luritja people, native to the remote deserts of central Australia, once told stories about a fire devil coming down from the Sun, crashing into Earth and killing everything in the vicinity. The local people feared if they strayed too close to this land they might reignite some otherworldly creature. The legend describes the crash landing of a meteor in Australia's Central Desert about 4,700 years ago, says University of New South Wales (UNSW) astrophysicist Duane Hamacher. It would have been a dramatic and fiery event, with the meteor blazing across the sky. As it broke apart, large fragments of metal-rich rock would have crashed to Earth with explosive force, creating a dozen giant craters.
Current scientific discoveries seem to verify Aboriginal legends passed down for millennia. Ancient cave art also suggests that ancient Aboriginals understood much about the heavens and perhaps ancient alien visitors (see Wondjina Figures below)
Aboriginal legends an untapped record of natural history written in the stars PhysOrg - March 3, 2015
Aboriginal legends could offer a vast untapped record of natural history, including meteorite strikes, stretching back thousands of years, according to new UNSW research. Dr Duane Hamacher from the UNSW Indigenous Astronomy Group has uncovered evidence linking Aboriginal stories about meteor events with impact craters dating back some 4,700 years. Dr Hamacher, an astrophysicist studying Indigenous astronomy, examined meteorite accounts from Aboriginal communities across Australia to determine if they were linked to known meteoritic events. One of the meteorite strikes, at a place called Henbury in the Northern Territory, occurred around 4,700 years ago. The level of detail contained in the local oral traditions suggested the Henbury event had been witnessed and its legend passed down through generations over thousands of years - a remarkable record.
Ancient Sea Rise Tale Told Accurately for 10,000 Years Scientific American - January 26, 2015
Without using written languages, Australian tribes passed memories of life before, and during, post-glacial shoreline inundations through hundreds of generations as high-fidelity oral history. Some tribes can still point to islands that no longer exist - and provide their original names. That's the conclusion of linguists and a geographer, who have together identified 18 Aboriginal stories - many of which were transcribed by early settlers before the tribes that told them succumbed to murderous and disease-spreading immigrants from afar - that they say accurately described geographical features that predated the last post-ice age rising of the seas.
The history of Indigenous Australians is thought to have spanned 40,000 to 45,000 years before European settlement, although some estimates have put the figure as high as 80,000 years and others as low as 10,000 years. For most of this time, Indigenous Australians lived as nomadic hunter-gatherers, with a strong dependence on the land for survival.
The path of Australian Aboriginal history changed radically after the 18th- and 19th-century settlement of the British: Indigenous people were displaced from their ways of life, were forced to submit to European rule, and were later encouraged to assimilate into Western culture. Since the 1960s, reconciliation has been a central aim of relations between European Australians and Indigenous Australians.
There are several hundred Indigenous peoples of Australia, many of them groupings that existed before the British annexation of Australia in 1788. Before European contact, the number was over 400.
Indigenous people or groups will generally speak of their "people" and their "country". These countries are ethnographic areas, usually the size of an average European country, and there were around two hundred of them on the Australian continent at the time of European arrival.
Within each country, people lived in clan groups - extended families defined by the various forms of Australian Aboriginal kinship. Inter-clan contact was common, as was inter-country contact, but there were strict protocols around this contact.
The largest Aboriginal people today is the Pitjantjatjara, who live in the area around Uluru (Ayers Rock) and south into the Anangu Pitjantjatjara Yankunytjatjara lands in South Australia, while the second largest Aboriginal community is the Arrernte people, who live in and around Alice Springs. The third largest is the Luritja, who live in the lands between the two just mentioned. The Aboriginal languages with the largest numbers of speakers today are Pitjantjatjara, Warlpiri and Arrernte.
Indigenous Australians are the original inhabitants of the Australian continent and nearby islands, and these peoples' descendants. Indigenous Australians are distinguished as either Aboriginal people or Torres Strait Islanders, who currently together make up about 2.6% of Australia's population.
The Torres Strait Islanders are indigenous to the Torres Strait Islands which are at the northern-most tip of Queensland near Papua New Guinea. The term "Aboriginal" has traditionally been applied to indigenous inhabitants of mainland Australia, Tasmania, and some of the other adjacent islands. The use of the term is becoming less common, with names preferred by the various groups becoming more common.
There is great diversity between different Indigenous communities and societies in Australia, each with its own unique mixture of cultures, customs and languages. In present day Australia these groups are further divided into local communities.
The population of Indigenous Australians at the time of permanent European settlement has been estimated at between 318,000 and 750,000, with the distribution being similar to that of the current Australian population, with the majority living in the south-east, centered along the Murray River.
Though Indigenous Australians are seen as being broadly related, there are significant differences in social, cultural and linguistic customs between the various Aboriginal and Torres Strait Islander groups.
Mungo Man, whose remains were discovered in 1974 near Lake Mungo in New South Wales, is the oldest human yet found in Australia. Although the exact age of Mungo Man is in dispute, the best consensus is that he is at least 40,000 years old. Stone tools also found at Lake Mungo have been estimated, based on stratigraphic association, to be about 50,000 years old. Since Lake Mungo is in south-eastern Australia, many archaeologists have concluded that humans must have arrived in north-west Australia at least several thousand years earlier.
There is no clear or accepted origin of the indigenous people of Australia. Although they migrated to Australia through Southeast Asia they are not demonstrably related to any known Asian or Polynesian population. There is evidence of genetic and linguistic interchange between Australians in the far north and the Austronesian peoples of modern-day New Guinea and the islands, but this may be the result of recent trade and intermarriage.
It is believed that first human migration to Australia was achieved when this landmass formed part of the Sahul continent, connected to the island of New Guinea via a land bridge. It is also possible that people came by boat across the Timor Sea. The exact timing of the arrival of the ancestors of the Indigenous Australians has been a matter of dispute among archaeologists. The most generally accepted date for first arrival is between 40,000-80,000 years BP.
In 1971 finds of Aboriginal stone tools in a quarry at Penrith in New South Wales were dated to 47,000 years BP. A date of around 48,000 BCE is based on a few sites in northern Australia dated using thermoluminescence. A large number of sites have been radiocarbon dated to around 38,000 BCE, leading some researchers to doubt the accuracy of the thermoluminescence technique. Radiocarbon dating is limited to a maximum age of around 40,000 years. Some estimates have ranged as widely as from 30,000 to 68,000 BCE. Establishing earlier dates requires newer techniques such as optically stimulated luminescence (OSL) and accelerator mass spectrometry (AMS), and the evidence for an earlier arrival is growing. Charles Dortch has dated recent finds on Rottnest Island, Western Australia, at 70,000 years BP.
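The ~40,000-year ceiling on radiocarbon dating follows from simple exponential decay: with a half-life of about 5,730 years, less than 1% of the original carbon-14 survives after 40,000 years, which is hard to distinguish from background contamination. Neither the half-life figure nor the 1% threshold appears in the text above; this is just an illustrative sketch of the arithmetic.

```python
# Illustrative only: why radiocarbon dating tops out around 40,000 years.
HALF_LIFE_C14 = 5730.0  # years; the conventional carbon-14 half-life

def c14_fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 left after age_years of decay."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

# At ~40,000 years under 1% of the isotope remains, so older samples
# are better handled by OSL or AMS techniques mentioned in the text.
for age in (10_000, 40_000, 60_000):
    print(f"{age:>6} years: {c14_fraction_remaining(age):.4%} remaining")
```

Running the sketch shows the fraction dropping below 1% at 40,000 years and below 0.1% at 60,000 years, well past the practical detection limit of conventional radiocarbon methods.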
The rock shelters at Malakunanja II (a shallow rock shelter about 50 kilometres inland from the present coast) and Nauwalabila I (70 kilometres further south) contain used pieces of ochre, evidence of paint employed by artists some 60,000 years ago. Using OSL, Rhys Jones has obtained dates of 53,000-60,000 years ago for stone tools in these horizons.
Thermoluminescence dating of the Jinmium site in the Northern Territory suggested a date of 200,000 BCE. Although this result received wide press coverage, it is not accepted by most archaeologists. Only Africa has older physical evidence of habitation by modern humans. There is also evidence of a change in fire regimes in Australia, drawn from reef deposits in Queensland, between 70,000 and 100,000 years ago, and the integration of human genomic evidence from various parts of the world also supports a date of before 60,000 years for the arrival of Australian Aboriginal people on the continent.
Humans reached Tasmania approximately 40,000 years ago by migrating across a land bridge from the mainland that existed during the last ice age. After the seas rose about 12,000 years ago and covered the land bridge, the inhabitants there were isolated from the mainland until the arrival of European settlers.
Short statured aboriginal tribes inhabited the rainforests of North Queensland, of which the best known group is probably the Tjapukai of the Cairns area. These rainforest people, collectively referred to as Barrineans, were once considered to be a relict of an earlier wave of Negrito migration to the Australian continent, but this theory no longer finds much favor.
There has been a long history of contact between the Papuan peoples of the Western Province, Torres Strait Islanders and the Aboriginal people of Cape York. The introduction of the dingo, possibly as early as 3,500 BCE, showed that contact with South East Asian peoples continued, as the closest genetic relatives of the dingo appear to be the wild dogs of Thailand. This contact was not just one way, as the presence of kangaroo ticks on these dogs demonstrates. Dingoes originated and evolved in Asia. The earliest known dingo-like fossils are from Ban Chiang in north-east Thailand (dated at 5,500 years BP) and from north Vietnam (5,000 years BP). According to skull morphology, these fossils occupy a place between Asian wolves (prime candidates being the pale-footed (or Indian) wolf Canis lupus pallipes and the Arabian wolf Canis lupus arabs) and modern dingoes in Australia and Thailand.
Similarly Aboriginal people also seem to have lived a long time in the same environment as the now extinct Australian megafauna, stories of which are preserved in the oral culture of many Aboriginal groups. The recent European scientific belief that it was the arrival of the Australian Aboriginal people on the continent, and their introduction of fire-stick farming, that was responsible for these extinctions is contested by Aboriginal people themselves, and others who argue that mass extinctions of Australian megafauna occurred only 20,000 years ago, with the Ice Age Maxima, during which times much if not most of the continent was reduced to desert and sand-dune conditions.
There is evidence that there may have been a significant reduction in Australian Aboriginal populations during this time, and there would seem to have been specific "refugia", in which Aboriginal populations during this time were confined. Corridors between these areas seem to be routes by which people kept in contact, and they seem to have been the basis of what have been called "Songlines" to the present day.
A knowledgeable person is able to navigate across the land by repeating the words of the song, which describe the location of landmarks, waterholes, and other natural phenomena. In some cases, the paths of the creator-beings are said to be evident from their marks, or petrosomatoglyphs, on the land, such as large depressions in the land which are said to be their footprints.
By singing the songs in the appropriate sequence, Indigenous people could navigate vast distances, often traveling through the deserts of Australia's interior. The continent of Australia contains an extensive system of songlines, some of which are of a few kilometres, whilst others traverse hundreds of kilometres through lands of many different Indigenous peoples - peoples who may speak markedly different languages and have different cultural traditions.
Since a songline can span the lands of several different language groups, different parts of the song are said to be in those different languages. Languages are not a barrier because the melodic contour of the song describes the nature of the land over which the song passes. The rhythm is what is crucial to understanding the song. Listening to the song of the land is the same as walking on this songline and observing the land.
In some cases, a songline has a particular direction, and walking the wrong way along a songline may be a sacrilegious act (e.g. climbing up Uluru where the correct direction is down). Traditional Aboriginal people regard all land as sacred, and the songs must be continually sung to keep the land "alive".
Molyneaux & Vitebsky note that the Dreaming Spirits "also deposited the spirits of unborn children and determined the forms of human society," thereby establishing tribal law and totemic paradigms.
Following the Ice Age, Aboriginal people around the coast, from Arnhem Land, the Kimberley and the south west of Western Australia, all tell stories of former territories that were drowned beneath the sea with the rising coastlines after the Ice Age. It was this event that isolated the Tasmanian Aboriginal people on their island, and probably led to the extinction of Aboriginal cultures on the Bass Strait Islands and Kangaroo Island in South Australia. In the interior, the end of the Ice Age may have led to the recolonization of the desert and semi-desert areas by Aboriginal people of the Northern Territory. This may have been in part responsible for the spread of languages of the Pama–Nyungan language phylum, and secondarily responsible for the spread of male initiation rites involving circumcision.
Indigenous Australians were limited to the range of foods occurring naturally in their area, but they knew exactly when, where and how to find everything edible. Anthropologists and nutrition experts who studied the tribal diet in Arnhem Land found it to be well-balanced, with most of the nutrients modern dietitians recommend. But food was not obtained without effort. In some areas both men and women had to spend from half to two-thirds of each day hunting or foraging for food. Each day the women of the horde went into successive parts of one countryside, with wooden digging sticks and plaited dilly bags or wooden coolamons.
They dug yams and edible roots and collected fruits, berries, seeds, vegetables and insects. They killed lizards, bandicoots and other small creatures with digging sticks. The men went hunting. Small game such as birds, possums, lizards and snakes were often taken by hand. Larger animals and birds such as kangaroos and emus were speared or disabled with a thrown club, boomerang, or stone. Many indigenous devices were used to get within striking distance of prey. The men were excellent trackers and stalkers and approached their prey running where there was cover, or 'freezing' and crawling in the open. They were careful to stay downwind, and sometimes covered themselves with mud to disguise their smell.
Frequently disguises were used. Mud also served as camouflage, or the hunter held a bush in front of him while stalking in the open. He glided through water with a bunch of rushes or a lily-leaf over his head until he was close enough to pull down a water-bird. He prepared 'hides' and, with bait or bird calls, lured birds to within grabbing distance. He attracted emus, which are inquisitive birds, by imitating their movements with a stick and a bunch of feathers or some other simple device. Likewise, it was common to use the pelts of animals as a disguise, and imitate them in order to get within striking range of their herd.
Fish were sometimes taken by hand by stirring up the muddy bottom of a pool until they rose to the surface, or by placing the crushed leaves of poisonous plants in the water to stupefy them. Fish spears, nets, wicker or stone traps were also used in different areas. Lines with hooks made from bone, shell, wood or spines were used along the north and east coasts. Dugong, turtle and large fish were harpooned, the harpooner launching himself bodily from the canoe to give added weight to the thrust.
Hunting was frequently organized on co-operative lines. Groups of men combined to drive animals into a line of spearmen, a brush fence, or large nets. Sometimes a U-shaped area was fenced and the trapped animals killed. Animals were also trapped in snares, pits, and partly enclosed water-holes. There was a fairly clear division of labour between the sexes in food-collecting, but this was not rigidly maintained.
The main concern was to get food. Hunting was arduous, and the men often had to walk, run, or crawl long distances. In poor country the men often returned empty-handed, but the women invariably collected something - perhaps only a few roots and tiny lizards - but sufficient to tide the family over. Inland, the quest for water was a life and death matter. Indigenous Australians survived where others would perish. They knew all the water holes and soaks in their area. They drained dew, and obtained water from certain trees and roots. They even dug up and squeezed out frogs, which store water in their bodies.
At the time of first European contact, it is estimated that between 315,000 and 750,000 people lived in Australia, with upper estimates as high as 1.25 million. Population levels are likely to have been largely stable for many thousands of years, and it has been estimated that, cumulatively, between 1 and 5 billion people had lived in Australia before British colonization.
The regions of heaviest indigenous population were the same temperate coastal regions that are currently the most heavily populated. The greatest population density was to be found in the southern and eastern regions of the continent, the Murray River valley in particular. However, indigenous Australians maintained successful communities throughout Australia, from the cold and wet highlands of Tasmania to the more arid parts of the continental interior. In all instances, technologies, diets and hunting practices varied according to the local environment.
Post-colonisation, the coastal indigenous populations were soon absorbed, depleted or forced from their lands; the traditional aspects of Aboriginal life which remained persisted most strongly in areas such as the Great Sandy Desert where European settlement has been sparse.
The mode of life and material cultures varied greatly from region to region. While Torres Strait Island populations were agriculturalists who supplemented their diet through the acquisition of wild foods, most Indigenous Australians were hunter-gatherers. Indigenous Australians along the coast and rivers were also expert fishermen. Some Aboriginal and Torres Strait Islander people relied on the dingo as a companion animal, using it to assist with hunting and for warmth on cold nights.
Some writers have described some mainland indigenous food and landscape management practices as "incipient agriculture". In present-day Victoria, for example, there were two separate communities with an economy based on eel-farming in complex and extensive irrigated pond systems; one on the Murray River in the state's north, the other in the south-west near Hamilton in the territory of the Djab Wurrung, which traded with other groups from as far away as the Melbourne area.
On mainland Australia no animal other than the dingo was domesticated, however domestic pigs were utilized by Torres Strait Islanders. The typical indigenous diet included a wide variety of foods, such as pig, kangaroo, emu, wombats, goanna, snakes, birds, many insects such as honey ants, Bogong moths and witchetty grubs. Many varieties of plant foods such as taro, coconuts, nuts, fruits and berries were also eaten.
A primary hunting tool was the spear, launched by a woomera or spear-thrower in some locales. Boomerangs were also used by some mainland indigenous peoples. The non-returning boomerang (known more correctly as a throwing stick), more powerful than the returning kind, could be used to injure or even kill a kangaroo.
Permanent villages were the norm for most Torres Strait Island communities. In some areas mainland Indigenous Australians also lived in semi-permanent villages, most usually in less arid areas where fishing could provide for a more settled existence. Most indigenous communities were semi-nomadic, moving in a regular cycle over a defined territory, following seasonal food sources and returning to the same places at the same time each year. From the examination of middens, archaeologists have shown that some localities were visited annually by indigenous communities for thousands of years. In the more arid areas Indigenous Australians were nomadic, ranging over wide areas in search of scarce food resources.
The Indigenous Australians lived through great climatic changes and adapted successfully to their changing physical environment. There is much ongoing debate about the degree to which they modified the environment. One controversy revolves around the role of indigenous people in the extinction of the marsupial megafauna (also see Australian megafauna). Some argue that natural climate change killed the megafauna. Others claim that, because the megafauna were large and slow, they were easy prey for human hunters. A third possibility is that human modification of the environment, particularly through the use of fire, indirectly led to their extinction.
Indigenous Australians used fire for a variety of purposes: to encourage the growth of edible plants and fodder for prey; to reduce the risk of catastrophic bushfires; to make travel easier; to eliminate pests; for ceremonial purposes; for warfare and just to "clean up country." There is disagreement, however, about the extent to which this burning led to large-scale changes in vegetation patterns.
There is evidence of substantial change in indigenous culture over time. Rock painting at several locations in northern Australia has been shown to consist of a sequence of different styles linked to different historical periods.
Some have suggested, for instance, that the Last Glacial Maximum, of 20,000 years ago, associated with a period of continental wide aridity and the spread of sand-dunes, was also associated with a reduction in Aboriginal activity, and greater specialisation in the use of natural foodstuffs and products. The Flandrian transgression associated with sea-level rise, particularly in the north, with the loss of the Sahul Shelf, and with the flooding of Bass Strait and the subsequent isolation of Tasmania, may also have been periods of difficulty for affected groups.
Harry Lourandos has been the leading proponent of the theory that a period of hunter-gatherer intensification occurred between 3,000 and 1,000 BCE. Intensification involved an increase in human manipulation of the environment (for example, the construction of eel traps in Victoria), population growth, an increase in trade between groups, a more elaborate social structure, and other cultural changes. A shift in stone tool technology, involving the development of smaller and more intricate points and scrapers, occurred around this time. This was probably also associated with the introduction to the mainland of the Australian dingo.
Many indigenous communities also have a very complex kinship structure and in some places strict rules about marriage. In traditional societies, men are required to marry women of a specific moiety. The system is still alive in many Central Australian communities. To enable men and women to find suitable partners, many groups would come together for annual gatherings - commonly known as corroborees - (see below) at which goods were traded, news exchanged, and marriages arranged amid appropriate ceremonies. This practice both reinforced clan relationships and prevented inbreeding in a society based on small semi-nomadic groups.
The historical record tends to support evidence of cannibalism in some indigenous communities. That the practice was observed by anthropologists from the time of European settlement and well into the 20th century has been noted by a number of writers, including W.E. Roth in his monumental study "The Queensland Aborigines". In Arnhem Land in northern Australia, a study of warfare among the Indigenous Australian Murngin people in the late 19th century found that over a 20-year period no fewer than 200 out of 800 men, or 25% of all adult males, had been killed in intertribal warfare.
In 1770, Lieutenant James Cook claimed the east coast of Australia in the name of Great Britain and named it New South Wales. British colonization of Australia began in Sydney in 1788. The most immediate consequence of British settlement - within weeks of the first colonists' arrival - was a wave of European epidemic diseases such as chickenpox, smallpox, influenza and measles, which spread in advance of the frontier of settlement. The worst-hit communities were the ones with the greatest population densities, where disease could spread more readily. In the arid centre of the continent, where small communities were spread over a vast area, the population decline was less marked.
The second consequence of British settlement was appropriation of land and water resources. The settlers took the view that Indigenous Australians were nomads with no concept of land ownership, who could be driven off land wanted for farming or grazing and who would be just as happy somewhere else. In fact the loss of traditional lands, food sources and water resources was usually fatal, particularly to communities already weakened by disease.
Additionally, Indigenous Australian groups had a deep spiritual and cultural connection to the land, so that when forced to move away from traditional areas, the cultural and spiritual practices necessary to the cohesion and well-being of the group could not be maintained. Proximity to settlers also brought venereal disease, to which Indigenous Australians had no resistance and which greatly reduced Indigenous fertility and birthrates. Settlers also brought alcohol, opium and tobacco, and substance abuse has remained a chronic problem for Indigenous communities ever since.
The combination of disease, loss of land and direct violence reduced the Aboriginal population by an estimated 90% between 1788 and 1900. Entire communities in the moderately fertile southern part of the continent simply vanished without trace, often before European settlers arrived or recorded their existence.
The Palawah, or Indigenous people of Tasmania, were particularly hard-hit. Nearly all of them, apparently numbering somewhere between 2,000 and 15,000 when white settlement began, were dead by the 1870s. It is widely claimed that this was the result of a genocidal policy, in the form of the "Black War". However, such claims are disputed by historian Keith Windschuttle, who claims that only 118 Aboriginal Tasmanians were killed in 1803-47 and that many of these were killed in self-defense.
Another scholar, H. A. Willis, has subsequently disputed Windschuttle's figures and has documented 188 Palawah killed by settlers in 1803-34 alone, with possibly another 145 killed during the same period. Such counts do not consider undocumented violence and must be regarded as minimum estimates. It is also claimed, incorrectly, that the last Indigenous Tasmanian was Truganini, who died in 1876. This belief stems from a distinction between "full bloods" and "half castes" that is now generally regarded as racist. Palawah people survived, in missions set up on the islands of Bass Strait.
On the mainland, prolonged conflict followed the frontier of European settlement. In 1834, John Dunmore Lang wrote: "There is black blood at this moment on the hands of individuals of good repute in the colony of New South Wales of which all the waters of New Holland would be insufficient to wash out the indelible stains."
In 1838, twenty-eight Indigenous people were killed at the Myall Creek massacre; the hanging of the white convict settlers responsible was the first time whites had been executed for the murder of Indigenous people. Many Indigenous communities resisted the settlers, such as the Noongar of south-western Australia, led by Yagan, who was killed in 1833.
The Kalkadoon of Queensland also resisted the settlers, and there was a massacre of over 200 people on their land at Battle Mountain in 1884. There was a massacre at Coniston in the Northern Territory in 1928. Poisoning of food and water has been recorded on several different occasions. The number of violent deaths at the hands of white people is still the subject of debate, with a figure of around 10,000 - 20,000 deaths being advanced by historians such as Henry Reynolds.
Nevertheless, deadly infectious diseases like smallpox, influenza and tuberculosis were always major causes of Indigenous deaths. Smallpox alone killed more than 50% of the Aboriginal population. Reynolds, and other historians, estimate that up to 3,000 white people were killed by Indigenous Australians in the frontier violence.
By the 1870s all the fertile areas of Australia had been appropriated, and Indigenous communities reduced to impoverished remnants living either on the fringes of European communities or on lands considered unsuitable for settlement.
Some initial contact between Indigenous people and Europeans was peaceful, starting with the Guugu Yimithirr people who met James Cook near Cooktown in 1770. Bennelong served as interlocutor between the Eora people of Sydney and the British colony, and was the first Indigenous Australian to travel to England, staying there between 1792 and 1795.
Indigenous people were known to help European explorers, such as John King, who lived with a tribe for two and a half months after the ill-fated Burke and Wills expedition of 1861. Also living with Indigenous people was William Buckley, an escaped convict, who was with the Wautharong people near Melbourne for thirty-two years before being found in 1835. Many Indigenous people adapted to European culture, working as stock hands or laborers. The first Australian cricket team, which toured England in 1868, was principally made up of Indigenous players.
As the European pastoral industries developed, several economic changes came about. The appropriation of prime land and the spread of European livestock over vast areas made a traditional Indigenous lifestyle less viable, but also provided a ready alternative supply of fresh meat for those prepared to incur the settlers' anger by hunting livestock. The impact of disease and the settlers' industries had a profound impact on the Indigenous Australians' way of life.
With the exception of a few in the remote interior, all surviving Indigenous communities gradually became dependent on the settler population for their livelihood. In south-eastern Australia, during the 1850s, large numbers of white pastoral workers deserted employment on stations for the Australian gold-rushes.
Indigenous women, men and children became a significant source of labor. Most Indigenous labor was unpaid; instead, Indigenous workers received rations in the form of food, clothing and other basic necessities.
In the later 19th century, settlers made their way north and into the interior, appropriating small but vital parts of the land for their own exclusive use (waterholes and soaks in particular), and introducing sheep, rabbits and cattle, all three of which ate out previously fertile areas and degraded the ability of the land to carry the native animals that were vital to Indigenous economies.
Indigenous hunters would often spear sheep and cattle, incurring the wrath of graziers, after they replaced the native animals as a food source. As large sheep and cattle stations came to dominate northern Australia, Indigenous workers were quickly recruited. Several other outback industries, notably pearling, also employed Aboriginal workers.
In many areas Christian missions provided food and clothing for Indigenous communities and also opened schools and orphanages for Indigenous children. In some places colonial governments provided some resources.
In spite of the impact of disease, violence and the spread of foreign settlement and custom, some Indigenous communities in remote desert and tropical rainforest areas survived according to traditional means until well into the 20th century.
In 1914 around 1200 Aboriginal people answered the call to arms, despite restrictions on Indigenous Australians serving in the military. As the war continued, these restrictions were relaxed as more recruits were needed. Many enlisted by claiming they were Maori or Indian.
By the 1920s, the Indigenous population had declined to between 50,000 and 90,000, and the belief that the Indigenous Australians would soon die out was widely held, even among Australians sympathetic to their situation. But by about 1930, those Indigenous Australians who had survived had acquired better resistance to imported diseases, and birthrates began to rise again as communities were able to adapt to changed circumstances.
In the Northern Territory, significant frontier conflict continued. Both isolated Europeans and visiting Asian fishermen were killed by hunter gatherers until the start of World War II in 1939. It is known that some European settlers in the centre and north of the country shot Indigenous people during this period. One particular series of killings became known as the Caledon Bay crisis, and became a watershed in the relationship between Indigenous and non-Indigenous Australians.
Well into the 20th century, Indigenous Australians were - both in Australia itself and in many other countries - the subject of widespread crude racist stereotyping. For example, the American birth control campaigner Margaret Sanger could write casually: "The aboriginal Australian, the lowest known species of the human family, just a step higher than the chimpanzee in brain development, has so little sexual control that police authority alone prevents him from obtaining sexual satisfaction on the streets" (What Every Girl Should Know, 1920).
By the end of World War II, many Indigenous men had served in the military. They were among the few Indigenous Australians to have been granted citizenship; even those that had were obliged to carry papers, known in the vernacular as a "dog license", with them to prove it. However, Aboriginal pastoral workers in northern Australia remained unfree laborers, paid only small amounts of cash, in addition to rations, and severely restricted in their movements by regulations and/or police action.
On 1 May 1946, Aboriginal station workers in the Pilbara region of Western Australia initiated the 1946 Pilbara strike and never returned to work. Mass layoffs across northern Australia followed the Federal Pastoral Industry Award of 1968, which required the payment of a minimum wage to Aboriginal station workers; previously, payment had been at the pastoralists' discretion, and many workers were paid nothing at all, while those who were paid often had their money held by the government. Many of the workers and their families became refugees or fringe dwellers, living in camps on the outskirts of towns and cities.
In 1984, a group of Pintupi people who were living a traditional hunter-gatherer desert-dwelling life were tracked down in the Gibson Desert in Western Australia and brought in to a settlement. They are believed to be the last uncontacted tribe in Australia.
In 1949, the right to vote in federal elections was extended to Indigenous Australians who had served in the armed forces, or were enrolled to vote in state elections. At that time, those Indigenous Australians who lived in Queensland, Western Australia and the Northern Territory were still ineligible to vote in state elections, consequently they did not have the right to vote in federal elections.
All Indigenous Australians were given the right to vote in Commonwealth elections in Australia by the Menzies government in 1962. The first federal election in which all Aboriginal Australians could vote was held in November 1963. The right to vote in state elections was granted in Western Australia in 1962 and Queensland was the last state to do so in 1965.
The 1967 referendum, passed with a 90% majority, allowed the Commonwealth to make laws with respect to Aboriginal people, and for Aboriginal people to be included in counts to determine electoral representation. This has been the largest affirmative vote in the history of Australia's referendums.
In 1971, Yolngu people at Yirrkala sought an injunction against Nabalco to cease mining on their traditional land. In the resulting historic and controversial Gove land rights case, Justice Blackburn ruled that Australia had been terra nullius before European settlement, and that no concept of Native title existed in Australian law. Although the Yolngu people were defeated in this action, the effect was to highlight the absurdity of the law, which led first to the Woodward Commission, and then to the Aboriginal Land Rights Act.
In 1972, the Aboriginal Tent Embassy was established on the steps of Parliament House in Canberra, in response to the sentiment among Indigenous Australians that they were "strangers in their own country". A Tent Embassy still exists on the same site.
In 1975, the Whitlam government drafted the Aboriginal Land Rights Act, which aimed to restore traditional lands to indigenous people. After the dismissal of the Whitlam government by the Governor-General, a reduced-scope version of the Act (known as the Aboriginal Land Rights Act 1976) was introduced by the coalition government led by Malcolm Fraser. While its application was limited to the Northern Territory, it did grant "inalienable" freehold title to some traditional lands.
A 1987 federal government report described the history of the "Aboriginal Homelands Movement" or "Return to Country movement" as "a concerted attempt by Aboriginal people in the 'remote' areas of Australia to leave government settlements, reserves, missions and non-Aboriginal townships and to re-occupy their traditional country."
In 1992, the Australian High Court handed down its decision in the Mabo Case, declaring the previous legal concept of terra nullius to be invalid. This decision legally recognized certain land claims of Indigenous Australians that predated British settlement. Legislation was subsequently enacted and later amended to recognize Native Title claims over land in Australia.
In 1998, as the result of an inquiry into the forced removal of Indigenous children (see Stolen generation) from their families, a National Sorry Day was instituted, to acknowledge the wrong that had been done to Indigenous families. Many politicians, from both sides of the house, participated, with the notable exception of the Prime Minister, John Howard.
In 1999 a referendum was held to change the Australian Constitution to include a preamble that, amongst other topics, recognised the occupation of Australia by Indigenous Australians prior to British Settlement. This referendum was defeated, though the recognition of Indigenous Australians in the preamble was not a major issue in the referendum discussion, and the preamble question attracted minor attention compared to the question of becoming a republic.
In 2004, the Australian Government abolished the Aboriginal and Torres Strait Islander Commission (ATSIC), which had been Australia's top Indigenous organisation. The Commonwealth cited corruption and, in particular, made allegations concerning the misuse of public funds by ATSIC's chairman, Geoff Clark, as the principal reason. Indigenous-specific programs have been mainstreamed, that is, reintegrated and transferred to departments and agencies serving the general population. The Office of Indigenous Policy Coordination was established within the then Department of Immigration and Multicultural and Indigenous Affairs, and now sits within the Department of Families, Community Services and Indigenous Affairs, to coordinate a "whole of government" effort.
In June 2005, Richard Frankland, founder of the 'Your Voice' political party, in an open letter to Prime Minister John Howard, advocated that the eighteenth-century conflicts between Indigenous and colonial Australians "be recognised as wars and be given the same attention as the other wars receive within the Australian War Memorial". In its editorial on 20 June 2005, Melbourne newspaper, The Age, said that "Frankland has raised an important question," and asked whether moving "work commemorating Aborigines who lost their lives defending their land ... to the War Memorial [would] change the way we regard Aboriginal history."
In 2008, Prime Minister Kevin Rudd made a formal apology to the Aboriginal people.
Modern-day scientists and others often say that the Australian Aborigines arrived on the continent of Australia by crossing land bridges or by landing on the northern shores in canoes.
Lock of hair pins down early migration of Aborigines BBC - September 22, 2011
A lock of hair has helped scientists to piece together the genome of Australian Aborigines and rewrite the history of human dispersal around the world. DNA from the hair demonstrates that indigenous Aboriginal Australians were the first to separate from other modern humans, around 70,000 years ago. This challenges current theories of a single phase of dispersal from Africa.
Australia discovered by a Southern Route PhysOrg - July 22, 2009
Genetic research indicates that Australian Aborigines initially arrived via south Asia. Researchers found telltale mutations in modern-day Indian populations that are exclusively shared by Aborigines.
Dr Raghavendra Rao worked with a team of researchers from the Anthropological Survey of India to sequence 966 complete mitochondrial DNA genomes from Indian 'relic populations'. He said, "Mitochondrial DNA is inherited only from the mother and so allows us to accurately trace ancestry. We found certain mutations in the DNA sequences of the Indian tribes we sampled that are specific to Australian Aborigines. This shared ancestry suggests that the Aborigine population migrated to Australia via the so-called 'Southern Route'".
The 'Southern Route' dispersal of modern humans suggests movement of a group of hunter-gatherers from the Horn of Africa, across the mouth of the Red Sea into Arabia and southern Asia at least 50 thousand years ago. Subsequently, the modern human populations expanded rapidly along the coastlines of southern Asia, southeastern Asia and Indonesia to arrive in Australia at least 45 thousand years ago. The genetic evidence of this dispersal from the work of Rao and his colleagues is supported by archeological evidence of human occupation in the Lake Mungo area of Australia dated to approximately the same time period.
Discussing the implications of the research, Rao said, "Human evolution is usually understood in terms of millions of years. This direct DNA evidence indicates that the emergence of 'anatomically modern' humans in Africa and the spread of these humans to other parts of the world happened only fifty thousand or so years ago. In this respect, populations in the Indian subcontinent harbor DNA footprints of the earliest expansion out of Africa. Understanding human evolution helps us to understand the biological and cultural expressions of these people, with far reaching implications for human welfare."
To the early Europeans, the Aborigines of the Sydney district (and later those throughout the whole continent) were primitives, natives or Noble Savages. So, descriptions of them (either written or in sketches/paintings) were classificatory and comparative. There were a number of physical distinctions between different tribes. It was noted that the Gundungurra, who lived in the Blue Mountains west of Camden, were taller and stronger than the Eora / Dharawal who lived on the coast. Or so European observers said. Some tribespeople were said to be darker than others (dark brown or black) and were different in other ways, but anyone who indulges in descriptions should ask themselves why they are doing this. People are people, and differences of color and shape shouldn't matter. However, derogatory descriptions of Aborigines during the 19th century were often a justification for massacres and the poisoning of people.
Each tribe had their own particular style of spears. Basically, all spears were made from timber or from the stems of plants. They ranged in length from about 1.5 meters to 4 or 5 meters with various forms of points, tips or blades. Some spear tips were prongs which were used to catch fish; others were made from stone flakes while others were made from fish bones and shells. Spears were mainly used for hunting but they were also used in battles.
The Aboriginal people of the Sydney, Illawarra and Shoalhaven district (and most, if not all, parts of Australia) were often observed by early settlers to be naked. The men and women of some tribes are known to have worn a belt around their middle made of hair, animal fur, skin or fiber, which they used to carry tools and weapons.
These belts often had a flap at the front; however, this was a modification that was added during European colonization, when the British colonists and authorities were concerned about modesty and imposed their standards on the Aborigines - who were unashamed of their nakedness. However, Aboriginal people needed to be warm in winter months and did make cloaks from animal skins, e.g. possum skins. They wore them during the day and used them as blankets during the night. A number of skins were needed to make the garment, and they were cleaned, dried and sewn together.
During colonization individual settlers gave the Aborigines their old clothes (known as slops). So the people were often recorded as wearing a variety of clothes such as army or navy jackets, trousers, petticoats and blouses (etc).
From the 1830s a number of Governors issued English blankets to the Aborigines through Magistrates and well-respected settlers in various parts of the country. The blankets were not as warm as possum-skin cloaks, and many Aborigines caught influenza and bronchitis and died from these diseases.
Although there were over 250-300 spoken languages with 600 dialects at the start of European settlement, fewer than 200 of these remain in use - and all but 20 are considered to be endangered.
Before colonization there were between 200 and 250 Aboriginal languages spoken throughout the continent of Australia. In other words the Aborigines did not speak the same or 'one' language. It has also been estimated that there were as many as 600 languages spoken at the time of colonization. However, it has also been said, that there was one language and several dialects.
The 'one language' theory fits with the theory of the migratory origins of the people in the continent - in other words, that all Aborigines belong to the one race as descendants of people who came from Asia, Africa and other places across land bridges. Whether this happened or not is speculative. What is certain is that the Aborigines who belonged to a particular tribe spoke a language that was different to their neighbors'. This has led scientists to identify Language or Cultural groups, each comprised of a number of tribes who spoke the same language. It is also certain that some Aboriginal people spoke more than one language, and it is interesting to note that when the Europeans arrived in this country some Aborigines quickly learned to speak English, while the Europeans themselves often struggled to speak even a few Aboriginal words.
In 1888 it was said that the language of the Australian Aborigines "in fullness of tone, variety of sound, and easy flow, is not to be surpassed. In proof of this it is only necessary to refer to the Aboriginal names of the various locations throughout the colonies."
Some Aboriginal words are still used today. For example, the word Bundi is the basis for the name Bondi in Sydney's eastern suburbs, which has become one of the most famous beaches in the world. Bennelong Point (the site of the Sydney Opera House) is named after Bennelong, an Aborigine of the Manly area who was kidnapped by Governor Arthur Phillip; Botany Bay was known as Kamay to the Aborigines of the area; Cronulla is based on the word Kurranulla, meaning 'pink shell'; Dapto in the Illawarra district is a corruption of the word Dappeto; Dhurawal Bay on the George's River near Liverpool is named after the traditional tribe of the Sydney district, the Dharawal, also called the Eora.
Aboriginal language had ice age origins News in Science - December 13, 2006
Clendon says the continent, known as Sahul, was relatively densely populated on the land bridge connecting northern Australia to New Guinea, now separated by the Arafura Sea. The other populated area was along what is now Australia's eastern seaboard. The two population groups were separated by a vast, cold, windswept, arid stretch of land that covered most of the continent, says Clendon, who was with the Batchelor Institute of Indigenous Tertiary Education when he published the research. The eastern group spoke a tongue that became what is known today as Pama Nyungan and includes languages like Pitjantjatjara, Yolngu and Warlpiri. And the Arafurans spoke another family of languages used in northern Australia today. "What I'm suggesting is that Pama Nyungan and non-Pama Nyungan languages go back about 13,000 years to when there was a land bridge between New Guinea and Australia," he says.
Until now, the reason why these two Aboriginal language groups are so different, each with a distinct grammar and vocabulary, has been a mystery. Climate change - Around 11,000 years ago what was the Arafura plain was flooded by rising seas as the ice age ended. This caused the northern people to migrate into either New Guinea or to northern parts of Australia. Meanwhile, increased rainfall and warmer temperatures made inland parts of the continent more habitable and sparked a westward migration of eastern dwellers. This introduced their language group to more central areas of Australia. Both groups maintained their distinct languages, Clendon says. His hypothesis provides an alternative picture to the traditional view that 6000 years ago a single proto-language spread from the Gulf of Carpentaria around Australia, eventually giving rise to existing Aboriginal languages. "We know about changes in climate and sea levels at the end of the Pleistocene era. I'm suggesting the way languages are configured in Australia today are a result of those changes that happened at the end of the ice age."
Provocative but unconvincing - Writing in a reply to Clendon's article, Professor Nicholas Evans, an expert in Aboriginal languages from the University of Melbourne, describes Clendon's hypothesis as "fresh and provocative". However, he says there are flaws in the argument, including that there is only weak evidence of similarities between southern New Guinea and northern Aboriginal languages. Evans says he remains to be convinced about Clendon's proposal. "But it adds a welcome alternative to a field in which we are still a long way from having any clear picture of the unimaginably long human occupation of Sahul," he says.
Hunting is a word that is used to identify the practice of catching and killing game, either as a sport or as a source of food. Gathering is the collecting of food such as plants, berries, eggs or insects. Fishing is another method of obtaining food. The Aborigines who lived in areas that included waterways such as rivers, or were on the seacoast, made canoes from bark or tree trunks.
The Eora / Dharawal made canoes which carried up to three or four people for fishing. In other areas, the canoes were much larger and included dugouts and outrigger types. They were made from tree trunks (not just the bark). Aboriginal men and women who lived in coastal regions or in areas where there were rivers, caught and collected food by fishing. Males usually used spears, while females used hand lines with hooks made from shells and rocks as sinkers. Fish species were also caught by the use of fish traps. Some traps were made from rocks in the form of a pen. At high tide fish could swim in and out of them, but some were trapped within the rock walls at low tide. Traps were also constructed from sticks and tree branches across rivers to make a dam. When sufficient numbers were trapped the people would enter the water, scoop up the fish in their hands and throw them onto the river bank to be collected for cooking.
Males hunted animals such as kangaroos, wallabies, echidnas and possums, as well as reptiles (snakes and lizards) and birds such as ducks, swans and parrots. They used spears and boomerangs to hit, catch and kill, but also climbed trees to get their food. Sometimes they hunted in parties or groups, and each person shared the catch. On these occasions some of the men acted as 'beaters', driving animals towards another group of men who were armed and waiting to spear the animals driven towards them. Sometimes they used fire to drive the animals forward.
Aboriginal women, often carrying babies on their backs and assisted by young children, left the camp on a daily basis to search for and collect berries, yams and other sources of food.
Gathering provided the bulk or main source of food for the Australian Aborigines. It has also been said that some tribes people were mainly 'vegetarians' because 'meat' was not readily available in some areas. It is also a fact that some Aboriginal people ate more marine life (fish, oysters and mussels etc) because these food items were predominant in the area in which they lived.
Survival was highly dependent upon knowledge of the life-cycle of flora and fauna and it is certain that the Aborigines had excellent understanding as they learned to track, hunt and gather food from when they were young children.
In 1972 the Australian anthropologist Kenneth Maddock said: "Australia is the only continent to have been populated until modern times exclusively by hunters and gatherers..." (The Australian Aborigines. A Portrait of their Society). He also quoted statistics showing that in 10,000 BC all human beings (100%) were hunters and gatherers; by 1500 AD this had reduced to about 1%, because mankind had generally developed skills in the cultivation of crops and domestication of animals. By 1960 only 0.001% of the world's population were hunters and gatherers.
Because the Aborigines did not cultivate land to grow crops or domesticate animals, they have often been portrayed as a backward race. However, this can be disputed. After all, the Aborigines did harvest crops in the sense that they made a form of flour from various types of flora. Domestication of animals was not possible due to the kinds of animals that roamed the continent of Australia - for example, kangaroos, wombats, possums and snakes.
Sheep and cattle were introduced by Europeans. But there is evidence to suggest that the Aborigines of the Cowpastures district (Campbelltown area) herded and killed cattle that had escaped from the Port Jackson area circa 1788 and found their way to that area. These cattle had been transported from Africa, and before vandals destroyed it, there was a cave in the Campbelltown area called Bull Cave, because of the drawings of cattle on its walls.
Those Aborigines who lived in coastal regions or near waterways caught fish and eels in a number of ways. Males often used a spear but are known to have also built fish-traps by making rectangular areas with rocks, that stood above the water at low tide. This meant that fish could swim into the traps at high tide and were trapped as the tide receded.
In the Illawarra district the Aborigines were often observed barricading (blocking) rivers with tree branches and logs. As fish swam down the river towards the sea they were trapped behind the dam where they were scooped up and thrown onto the shore. The Aborigines also fished from rocks and beaches using hand lines made from plants and hooks made from shells. Stones were used as sinkers.
Aboriginal people had to catch and collect their food, each and every day of their life. They were successful at doing this because they had an intimate knowledge of food-chain cycles, the migration patterns of birds and of the habitat where they lived. No doubt there were times when there were food shortages. But the essential point is that the Aboriginal people had a complete understanding of the flora and fauna within their tribal territory. They also engaged in land management practices - mainly burning grass and weeds.
Their totemic practices protected species because a person could not eat his own totem and others needed permission to catch another person's totem on his land. For example, a man whose totem was a waterfowl would not eat that bird (otherwise it would be a form of cannibalism). Other members of the tribe could not hunt the bird in the territory that belonged to another man. This provided a safe environment for different species.
Aboriginal Australians were social beings who lived in a number of social groups sometimes called bands, clans, sub-tribes and tribes, but essentially in a family or kinship group who were 1) of the same blood-line and 2) were related to other people through totems.
The social groupings of ATSI people meant that their relationships were far more extensive than our own method of identifying people as mother, father, brother, sister, cousins and so on. Aboriginal relationships are difficult to understand, but the relationships of an Aboriginal male child are detailed in the following description (with Western equivalents shown in brackets) to give some idea of them: the family was usually comprised of a father's father (grandfather), and often his brother or brothers, who was/were also known as father's father (no Western equivalent); his wife or wives (grandmother); and a father (father) and perhaps his brothers (uncles), who were also considered to be the child's fathers.
Each family group had a headman or Elder who was the leader of the unit. He decided when to move camp and settled disputes.
Foods such as oysters, mussels and pippies were enjoyed. Sometimes they cooked them on the ashes of a fire, and the Sydney Aborigines are known to have taken a fire with them aboard their canoes when they went fishing. This meant they could cook and eat their catch as they continued catching fish. They also took some of their catch back to the camp to share with others, but eating food while catching it gave them the energy to collect sufficient quantity for others.
Animals, birds and reptiles were also caught and cooked on an open fire. However, they 'scorched' rather than cooked these foods. In other words, they did not roast the joint of a kangaroo as Europeans do today, for example by placing a leg of lamb in an oven for an hour or two. The Aborigines simply singed the food to remove feathers, scales and fur, and ate partly cooked meat.
Other sources of food included yams (sweet potatoes), berries and offal such as liver (yuck). But they generally hunted and collected the wide variety of food that was available in the places in which they lived.
One food that was cooked by the Aborigines was a type of bread which was also popular among early European settlers, who called it damper. It was made by grinding seeds into flour, mixing this with water into a doughy paste and cooking it in the ashes of a warm fire.
The Aborigines lived within a tribal territory where they obtained their daily food needs. Some tribes lived in desert country, while others lived in mountain, coastal or timbered areas. This meant that the members of different tribes ate different foods. It also meant that some of them were constantly on the move hunting and gathering. Others lived a semi-nomadic life in areas where there were ample food supplies.
The Eora / Dharawal people who lived on the coastal area between the Hawkesbury River and the Shoalhaven River were hunters and gatherers of fish, shellfish, plants and animals. They caught fish such as bream, groper, snapper and whiting; collected shellfish including oysters (rock and mud), cockles and conniwink.
Plant foods included: native cherries, the cabbage palm, water lilies, five-corners and pigface. Animals, birds and reptiles such as kangaroos, ducks and snakes were also hunted for consumption purposes.
Every tribe in Australia was divided into a number of small social groups, but for marriage purposes into two main groups sometimes called marriage moieties. People did not marry within their own moiety. Marriage arrangements were made when children were very young and sometimes before they were born.
Aboriginal people were social beings as they lived and gathered together in family groups. Their camps comprised a number of gunyas (bark huts), but the people also lived in caves or in the open air. Some camps comprised as few as 6 to 10 people, while others held up to 400. No doubt the availability of food was a factor in the size of a camp. Each day, various members of the group would leave the camp to hunt and gather food, returning to the camp to share the catch with others.
During the 1830s the surveyor William Govett visited a camp and recorded (in Sketches of New South Wales) that the people usually settled in their camp as night fell and were engaged in a number of activities of normal family life: sharing stories about the happenings of the day, repairing weapons, singing songs, playing games and so on. Govett described a young man in one gunya using double sets of strings to make diamonds, squares, circles and other shapes. He also told of an adult amusing a young child by placing a leaf on the back of his left hand and striking it with his finger, causing the leaf to ascend perpendicularly, to squeals of delight from the child.
Aboriginal people lived in family groups. The Elders' gunyahs (huts) were situated in the center of the camp and the others spanned out in circles around them. However, the people often slept in the open and in caves, so it is likely that the Elder decided where he wanted to sleep with his wife or wives and everyone else spread out from the spot he had chosen. No doubt some people were more important than others, and the most important ones camped near the Elders.
The Aborigines' attachment to a particular area of land was based on their Dreamtime beliefs: that the land had been created for them by ancestral heroes and heroines. Every rock, tree and waterhole; every animal, bird and insect; the sky above and all it contained were believed to have been created in the Dreamtime.
At some indefinite time the creators disappeared, however, many were believed to have remained in secret places in the land - in rivers, caves and other places. In other words, the Aborigines believed that their land had been created by spirits who continued to live in the land.
This was a superstitious belief, but it was very important to the Aborigines. For example, there were never any wars of conquest between Aboriginal tribes. They were too superstitious to do this and living in the land of another tribe would have involved them in living among strange and no doubt hostile spirits.
Land was spiritual, but also an economic resource as it provided the people with food, sources of wood, fiber and glue for making spears, utensils and other implements. However the people respected these aspects of their land and were environmentalists in the sense of 'taking care' of the land through their practices of performing increase ceremonies, singing 'Songlines' and relationships with flora and fauna through a system of totemic relationships.
Traditional Aboriginal people (before their society was changed with the arrival of the British into their lands), lived in relatively small groups which have been called clans, bands, family groups, sub-tribes and by a variety of other names.
The larger and better-known social unit, the tribe, was made up of a number of smaller social units (clans, bands etc). Maybe we can explain it this way: a clan was a family group made up of a grandfather and his wife or wives, his sons and their wife or wives, and their children. A number of these groups formed a tribe. The exact number of clans which comprised a tribe cannot be stated precisely, as this varied. However in the Sydney district it is known that in 1788 there were at least 30 clans of the Eora / Dharawal tribe. Each clan had a name for itself based on the name in their language for the area they lived in. For example, the men of Cadi were known as the Cadigal (Cadjigal); females added the suffix 'eean', so the women from Cadi were the Cadieean, and they lived around South Head, Elizabeth Bay and Rushcutters Bay to present day Circular Quay. The Gweagal / Gweaeean lived at Kurnell. The clans that formed a tribe were those who believed in the same Dreamtime creation stories, spoke the same language and celebrated the same customs such as initiation rites.
Culture is a celebration of beliefs and usually (if not always) includes rites of passage from one stage of life to another. Culture is stories and songs.
This was particularly because their stories and songs informed them about creation and the relationship between mankind and nature, and were the source of their tribal laws. The tradition of initiation was an expression of Aboriginal culture and was carried out for thousands of years in exactly the way that had been ordered by the ancestors in the Dreamtime. On another level the stories and songs were believed to be important for the preservation and conservation of their land and all it contained. This involved singing Songlines that had been sung by the ancestors and the concept of taking care.
Until 1788 the Aborigines of Australia lived and celebrated a culture that was basically unchanged for thousands of years. Each tribe had its own beliefs - its own songs and stories - but until colonization, they were the oldest surviving race in the entire world. They existed as a race of people well before the Egyptians built the pyramids, while the Greeks were constructing the Parthenon and while Britain was ruled by the Roman Empire. However the first Europeans to arrive in the continent considered the 'natives' to be primitives. This was largely due to a lack of understanding about the culture of the Aborigines.
A cultural group comprised two or more tribes that associated with each other for cultural purposes - for example to celebrate corroborees, barter or exchange goods, conduct initiation ceremonies or intermarry.
On the Far South Coast of New South Wales early records show that members of the Yuin tribe often associated with those from the Canberra area. These tribes did not associate with the Dharawal tribe of the Shoalhaven, Illawarra and Sydney districts, who gathered from time to time with the Gundungurra of the Goulburn and Camden area.
The Australian Aboriginal flag was originally designed as a protest flag for the land rights movement of Indigenous Australians but has since become a symbol of the Aboriginal people of Australia. The flag is a yellow disc on a horizontally divided field of black and red. It was designed in 1971 by Harold Thomas, an Aboriginal artist descended from the Luritja of Central Australia. On 14 July 1995, both the Aboriginal flag and the Torres Strait Islander Flag were officially proclaimed by the Australian government as "Flags of Australia" under Section 5 of the Flags Act 1953.
The flag was first flown on National Aborigines' Day in Victoria Square in Adelaide on 12 July 1971. It was also used in Canberra at the Aboriginal Tent Embassy from late 1972. In the early months of the embassy (which was established in February that year), other designs were used, including a black, green and red flag made by supporters of the South Sydney Rabbitohs rugby league club, and a flag with a red-black field containing a spear and four crescents in yellow.
Cathy Freeman caused controversy at the 1994 Commonwealth Games by waving both the Aboriginal flag and Australian national flag during her victory lap of the arena, after winning the 200 metres sprint; only the national flag is meant to be displayed. Despite strong criticism from both Games officials and the Australian team president Arthur Tunstall, Freeman flew both flags again after winning the 400 metres.
The decision (by Prime Minister Paul Keating) to make the Aboriginal flag a national flag was opposed by the Liberal Opposition at the time, with John Howard making a statement on 4 July 1995 that "any attempt to give the flags official status under the Flags Act would rightly be seen by many in the community not as an act of reconciliation but as a divisive gesture." However since Howard took office in 1996, the flag has remained a national flag. This decision was also criticised by Thomas himself, who said the flag "doesn't need any more recognition".
In 1997 the Federal Court of Australia declared that Harold Thomas was the owner of copyright in the design of the Australian Aboriginal flag, and thus the flag has protection under Australian copyright law. Thomas had sought legal recognition of his ownership and compensation following the Federal Government's 1995 proclamation of the design. His claim was contested by two others, Mr. Brown and Mr. Tennant. Since then, Thomas has awarded rights solely to Carroll and Richardson Flags for the manufacture and marketing of the flag.
The National Indigenous Advisory Committee campaigned for the Aboriginal flag to be flown at Stadium Australia during the 2000 Summer Olympics. SOCOG announced that the Aboriginal flag would be flown at Olympic venues. The flag was flown over the Sydney Harbour Bridge during the march for reconciliation of 2000, and many other events.
On the 30th anniversary of the flag in 2001, thousands of people were involved in a ceremony where the flag was carried from the Parliament of South Australia to Victoria Square. Since 8 July 2002, after recommendations of the Council's Reconciliation Committee, the Aboriginal Flag has been permanently flown in Victoria Square and the front of the Town Hall.
In Aboriginal society every person (particularly every initiated male) was considered to be equal. No one had authority over anyone else in the sense of ruling them, but this is not to say that there weren't leaders. There are always leaders in any society - people who have personal qualities that others admire. But there were no elected leaders in Aboriginal society. There were also people who performed particular roles. For example clever men, also known as Koradjis and as Doctors by Europeans, had or acquired special skills and were considered to be authorities on certain matters.
There were leaders known as Elders: people whom others listened to, asked for advice and generally obeyed when they issued orders. The Elders were considered to be wise in knowledge of the Dreamtime, the law and the lore of the tribe. An Elder was usually a male, but gray hair and old age were not the only criteria for being an Elder. In fact some elderly people were not considered to be Elders.
To understand the role of the Elders it is necessary to understand that the Aborigines lived in small family groups also known as clans, bands and sub-tribes. Within the immediate family groups, the eldest males and females were treated with respect and acknowledged as leaders in the sense that they made decisions about the family. For example they settled disputes and decided when the group would move camp to another area. When a number of blood-line families lived together it is likely that the Elder of the group was the person considered by the members to be the wisest of the older people.
In large groups, which may have comprised several hundred people, a number of Elders met to make decisions on behalf of the group. This has become known as an Elders' Council, but it wasn't a council in the sense of being a form of government. Instead such councils met for the purpose of conducting initiation, marriage and burial ceremonies.
In traditional Aboriginal society females were not considered to be Elders. However, older females often acted as midwives and as authorities on other matters relevant to their gender. The role of female Elders today, as spokespersons for groups, appears to be a phenomenon of the 20th century.
The Aborigines had a number of laws that governed their society. They ranged from family discipline (whereby children and others were expected to conform and behave to a code of conduct) to laws about trespassing, food taboos, marriage laws or regulations and breaches of acceptable behavior such as rape, murder and stealing.
The source of the laws was sometimes Dreamtime stories that told of the behavior of men, women and children (sometimes in allegorical forms of animals, birds, reptiles etc.) in which the perpetrators' actions were punished by beating, spearing or banishment.
Aboriginal boys and girls played a number of games such as running, wrestling, climbing, throwing and ball games. No doubt they were fun to play but they all had a serious purpose. They were not simply for amusement.
Kicking balls made from grass or fur bound with vines taught people agility, but it also had the effect of forming individuals into teams, which taught them cooperation and working with others.
Throwing sticks was a form of preparation for spear throwing. Drawing animal tracks in the earth trained children to observe their environment and provided them with the skills necessary to catch food.
Digging games trained people to collect food such as yams; climbing games enabled people to develop other survival skills - the main purpose behind all the games that Aboriginal children played.
A corroboree is a ceremonial meeting of Aboriginal Australians. The word was coined by the European settlers of Australia in imitation of the Aboriginal word caribberie. At a corroboree Aborigines interact with the Dreamtime through dance, music and costume. Many ceremonies act out events from the Dreamtime. Many of the ceremonies are sacred and people from outside a community are not permitted to participate or watch. As one early account records: "Their bodies painted in different ways, and they wore various adornments, which were not used every day."
In the northwest of Australia, corroboree is a generic word to define theatrical practices as different from ceremony. Whether it be public or private, ceremony is for invited guests. There are other generic words to describe traditional public performances: juju and kobbakobba for example. In the Pilbara, corroborees are yanda or jalarra. Across the Kimberley the word junba is often used to refer to a range of traditional performances and ceremonies.
Corroboree and ceremony are strongly connected but different. In the 1930s Adolphus Elkin wrote of a public pan-Aboriginal dancing "tradition of individual gifts, skill, and ownership" as distinct from the customary practices of appropriate elders guiding initiation and other ritual practices (Elkin 1938:299). Corroborees are open performances in which everyone may participate taking into consideration that the songs and dances are highly structured requiring a great deal of knowledge and skill to perform.
Corroboree is a generic word to explain different genres of performance which in the northwest of Australia include balga, wangga, lirrga, junba, ilma and many more. Throughout Australia the word corroboree embraces songs, dances, rallies and meetings of various kinds. In the past a corroboree has been inclusive of sporting events and other forms of skill display. It is an appropriated English word that has been reappropriated to explain a practice that is different to ceremony and more widely inclusive than theatre or opera.
Aborigines held a Corroboree in which there were elements of music, song and movement that imitated or replicated animal movements, hunting prowess, battles or ceremonies of initiation that had been conducted for thousands of years. Corroborees are part of Aboriginal culture. They were not simply dances, but were highly significant events and belong to the Australian Aborigines.
The Australian Aborigines used a limited variety of implements to make musical sounds. The didgeridoo (see separate listing) is probably the best known, but others included rattles, clapping sticks and two boomerangs clapped together. However they do not appear to have used drums. The exception may be the Torres Strait Islander people. Another instrument that wasn't used, was a flute or whistle.
The melodies, tunes, harmonies and rhythms of Aboriginal music included traditional ceremonial songs that were handed down from generation to generation. It was very important in Aboriginal thinking to replicate the songs that had been first played and sung by the ancestors in the Dreamtime. When the traditional music and songs were used, living men considered themselves to be in the Dreamtime, particularly during initiation ceremonies.
New songs were created from time to time. They told of important events in the history of the tribe. Events such as great battles or hunting expeditions. Other songs and music were for general amusement or entertainment and early European observations of the Aborigines included camp life where the people played games and sang songs around their camp fires.
Aboriginal Art and Design
Australian Petroglyphs and Rock Art
Aborigines decorated their bodies with tattoos that conveyed messages particularly at ceremonial times. The patterns represented the totems of individuals or denoted information about the tribe itself.
Death was always a time of sorrow and supernatural fear among traditional ATSI people. Wailing or crying was a common occurrence among the mourners who often painted their bodies with pipe clay, red ochre, or charcoal when a relative or friend died. In some districts people wore a head covering made of feathers. Others beat their bodies with sticks or clubs, or cut themselves with shells or stone knives to cause bleeding. In these instances the period of sorrow or mourning, was considered to be at an end when their wounds were healed.
Relatives and close friends often sat beside a grave of a deceased person, but this was related to their superstitious beliefs. Sitting beside a grave - sometimes shaded with a hut or covering to provide shelter for the mourner or mourners - involved ensuring that the deceased person's spirit had gone to the 'sky camp' or to its spirit-place. Obviously it is impossible to say 'how' they knew or considered when this happened. However after the mourning period was completed, a deceased person's name was never mentioned again. This often involved inventing new words for totems but was based on their superstitious beliefs in a personal spirit and ghosts.
The belief in a personal spirit was based on the Dreamtime stories that told the people that birth was the result of a spirit-child entering a woman's body, or, in some parts of the country, that birth had been an act of the creators. For example in Arnhem Land the Djanggau Sisters (who were considered to be daughters of the Sun and arrived in the area in a bark canoe with their brother Bralgu) created the land and gave birth to the first people to live there. In other words birth and death were great mysteries involving supernatural beings.
The people also believed that a person's spirit could visit living people to harm or warn them of danger. This usually resulted in an 'inquiry' about the death of a person who was considered to have died prematurely or in unusual circumstances. The inquiry - usually undertaken in consultation with an Elder or a Clever Man - looked for actions undertaken by some person that had caused the death of the individual. Any culprit was severely punished. The belief in a personal spirit also led the people to take great precautions in the burial or cremation of the deceased.
A number of different 'races' of people believe, or have believed, that when a person dies, their soul (or inner spirit) is born again in the form of an animal, bird, reptile, fish or another human being. The Eora / Dharawal Aborigines believed in transmigration, also known as transmutation or metempsychosis. For example during the 1830s the Quaker James Backhouse toured the Illawarra district and recorded that some Aboriginal men were mortified when some Europeans shot and killed some dolphins. The Aborigines of the area believed that after death their warriors became dolphins. This belief was bolstered by the habit of dolphins of herding fish and protecting people from shark attacks.
Another example of the belief in reincarnation was given by David Collins who noted that when a European was about to shoot a raven, an Aborigine stepped into the firing line to stop him from doing this because 'him brother'. In other words the bird was the man's totem and he was compelled to do everything possible to make sure that the raven wasn't killed.
Aboriginal people were spiritual even though they had no formal religion.
The word spirit has many different meanings. For example it can be used to refer to the immaterial part of a human being often called his or her soul or to the personality of people when they are said to have a courageous or cowardly spirit. Or to describe qualities of people or (other) animals when they are said to be high spirited. Spirit can also refer to supernatural beings such as a deity (god) or to evil manifestation such as ghosts.
Aboriginal and Torres Strait Islander Australians believed in a number of spirits: in particular ancestral spirits, a personal spirit, animal spirits, deceased spirits or ghosts, and evil spirits. Their beliefs were founded - like every other aspect of their life - on Dreamtime myths which informed them that their world had been created by, and was filled with, the supernatural. This was something to be taken notice of and was the basis of them being very superstitious people.
Animal Spirits: During the Dreamtime the creators made spirits of every living creature including that of every animal, bird, reptile, insect and form of marine life (etc). Wherever they rested the creators left the spirits of living creatures behind them. This was the origin of life. The Aborigines believed they were intrinsically linked to every other 'species' because of the actions of the creators. They also believed that it was their personal responsibility to ensure the continuation of 'animal' life through the concept of taking care. This involved the singing of songs and performing of ceremonies which were believed to ensure the continuation of the birth of each species.
During the Dreamtime the creators had metamorphosed into various forms of animals, birds and other species. Individuals were linked to the creators through totemic relationships and did not eat their personal totem. To do so would be a form of cannibalism. The practice had the effect of providing a safe sanctuary for different species.
ATSI people also believed that particular animal spirits could harm living people. For example they believed that killing a willy-wagtail would result in the spirit of this bird becoming angry and creating violent storms which could destroy others.
Evil Spirits: A number of Dreamtime stories related stories of evil spirits. One Queensland story recorded by A.W. Howitt told of a group who went to hunt and fish, leaving behind two boys in camp with instructions not to leave it: the boys played about for a time in the camp, and then getting tired of it, went down to the beach, where a Thugine came out of the sea and, being always on the watch for unprotected children, caught the two boys and turned them into rocks that now stand between Double Island Point and Inskip Point and have deep water close to them. 'Here you see', the old men used to say, 'the result of not paying attention to what you are told by your elders'.
The Thugine mentioned in this story is one of hundreds of evil spirits whose evil deeds were recorded in stories and songs. Along the south-east coast of New South Wales evil spirits were and are known as Goonges. Generally speaking contemporary Aboriginal people still believe in these spirits. For example if they go to a particular area they believe they must be invited to stay there; if they are not welcome they will feel this and to remain there under these circumstances will result in being punished. Punishment may mean death or injury and this may extend to other members of a family. Some areas are forbidden to women because the male spirits that are believed to live there will punish them if they disobey the trespassing laws.
Beliefs in spirits and ghosts among Aboriginal Australians were common to all tribes throughout the continent, although there were a number of variations in the actual names that were used to describe them. Contextually the beliefs were one aspect of Aboriginal culture and need to be understood from their perspective. Modern day Western understanding tends to 'see' body, mind and spirit as separate entities, which we somehow or other manage to unite into concepts of person or oneness. This understanding can lead to skepticism about spirit, as this has largely become associated with religious beliefs. Traditional Aborigines did not think this way. They certainly understood the separate concepts of body and spirit, but in such a way that these were seen as being united with other people and every other living creature in a unique oneness. This applied to the past, present and future in an ontology (philosophy) that humanism, rationalism and science cannot understand.
The Australian Aborigines believed that the land they lived in (and owned) along with all it contained (every rock, tree, waterhole and cave), was created for them during the Dreamtime.
In some areas of the continent the creators were all-powerful figures such as Biami. In other areas creation was the result of the actions of ancestral heroes and heroines. In Central Australia the Tnatantja Pole was responsible for forming mountain ranges and valleys.
Because Aboriginal society was very spiritual (in the sense that spirits were thought to have made the land and were responsible for birth and sometimes death), it is not surprising that Aboriginal people 'believed' in magic.
It was practiced in a number of ways - for example through the pointing of the bone (sometimes called singing someone), which was believed to cause death. People who had been 'pointed' often died, not as a result of the magic itself, but because of their belief that they would die, i.e. death through superstition or imagination. In the same way, people were 'cured' of sickness or illness through the use of magic stones and crystals.
Boys began a period of initiation from when they were 7 or 8 years of age. The first initiation ceremonies they attended were designed to make them independent of their mothers and other females. At other ceremonies and meetings with older males they were informed about the history and customs of the tribe and were taught how to survive and to rely on other males. Initiation continued over a number of years and boys gradually acquired knowledge through learning stories, attending ceremonies and through education by initiated males.
Pain endurance was an important part of the initiation of males and was considered to be manly. In the Eora / Dharawal tribe, teenage boys attended a tooth evulsion ceremony at which a front tooth was knocked out. In some tribes boys were circumcised at puberty as a pain endurance test.
Initiation was also a time of obedience as boys were expected to comply with food and other taboos during this time. For example Louisa Atkinson reported in her reminiscences of knowing the Aborigines of the south coast of New South Wales (published as A Voice in the Country: Sydney Mail 19th September 1863), that two boys of the Picton area disobeyed a food taboo and were punished by death.
'For some time the lads are not permitted to mingle with the tribe, or eat particular food. The tooth is knocked out by the point of a boomerang... should they disobey the regulations deadly consequences ensue.' The report goes on to relate that two initiates killed and ate a duck. Mullich (a Koradji or Clever Man of the area) discovered what they had done: 'in consequence the lads were surprised when asleep, stunned by a blow of a club, and an insidious poison, administered to them, under which they sank in about three months.'
Girls did not participate in initiation ceremonies. At puberty they were married and went to live with their husband. However, their mothers and other women prepared them in knowledge about their bodies and sexual intercourse. Ceremonies included ritual bathing, separation from the main tribal group for varying periods of time and food taboos.
Traditional Aboriginal people had great respect for older people such as Grandfathers and Grandmothers. However old age, seniority or maturity were not sufficient for a person to be considered an Elder.
Elders (who were usually males), were people who were considered to be wise in tribal knowledge and worldly matters. They were leaders of family or kinship groups who made decisions about moving camp, when boys would be initiated, when girls would be married and settled disputes among other members of the social unit.
Senior females were not considered to be Elders in traditional Aboriginal society. However they did play important roles in tribal matters. For example they decided when girls would undergo rituals in preparation for marriage, conducted or organized ceremonies including those that males and children participated in (but not initiation ceremonies). They also acted as midwives and story-tellers.
Today some Aboriginal people call themselves Elders but are not recognized by traditional people, sometimes because they are too young to be Elders or because they live in areas that are not their traditional land. There are also a number of female Elders in society today, but this seems to be an adaptation of the traditional leadership laws. However Aboriginal laws are not, and probably never have been, static, and there is a great need today for female Aborigines to be involved in achieving rights, recognition and reforms for all ATSI people.
One important aspect of traditional Aboriginal life was the custom of being led by Elders (see Elders). However, Governor Lachlan Macquarie set about changing Aboriginal society by awarding some Aboriginal people brass plates and calling them Kings. This was a breach of traditional tribal laws, but the people who accepted these titles were those 1) who were considered by the authorities to have shown an inclination to accept the new way of life under British law or 2) who had led exploration parties.
Britain was of course based on a monarchy and various Governors and settlers such as Alexander Berry in the Shoalhaven district also rewarded some Aborigines with the title of King. Females were not awarded brass plates as Queens. But the men who accepted the title of King were eager to have it known that their wives were Queens and their children Princes and Princesses. Circa 1810 to 1820 (the period when Governor Macquarie was in charge of the colony), there were many inter-tribal disputes over the awarding of brass plates. In other words the traditional people of various areas resented those Aborigines who did not belong to their tribe, or who had not become Elders, accepting European titles and being styled as Kings over their traditional lands.(also see Brass Plates on our Historical Pages which includes a photograph).
Aboriginal lore was an important and vital aspect of community life. Lore means 'the facts and stories about a particular subject or topic'. For example Aboriginal people learned their 'laws' from those Dreamtime stories that informed the listeners about acceptable and unacceptable behavior together with the punishment offenders received.
The lores/laws were taken seriously, as they were considered to have originated from the ancestors, who were therefore regarded as the law-givers or law-makers; law was an important aspect of Aboriginal life. On the other hand, there were those early colonists who believed that the Aborigines were a lawless race of people. They accused them (as some do today) of having a genetic 'fault' as natural thieves and murderers.
It is certainly true that the Aborigines of the Sydney district stole axes and other weapons from the colonists. But history records this as happening after their own weapons and tools were stolen by the convicts (who sold them to sailors, who took them back to England to sell). This is not a justification; it is a simple fact that the Aborigines considered it quid pro quo, i.e., fair to steal from those who stole from them.
They also stole corn, potatoes and other food from the early settlers. Perhaps they were starving. On the other hand, the early colonists were struggling to survive in the colony, and the Aborigines may have stolen their food as a strategy to drive them out of their land. Murder was also carried out by the Aborigines: they believed that anyone who shot one of them should be punished, and they exacted this punishment on the Europeans.
Aboriginal lore (in songs and stories about a particular topic) also taught and guided the people to survive. Some stories informed them about the life cycle of birds, animals and insects. Others (often called Songlines) were like oral road maps and identified tracks that the people followed when moving around their tribal territory or when visiting other tribes.
Aboriginal lore (law) required a person who did not 'belong' to a particular area, to be invited or granted permission, to enter into the territory of a tribe. In other words, he or she could not simply wander into the land of another tribe. To do so invited hostility that could result in the death of the individual for trespassing.
When someone wanted to visit another tribe, they carried a message stick - a piece of bark or timber that was decorated with symbols. These symbols have sometimes been said to have been a written form of language. This is not correct. But they were a form of passport that identified the intent or authority of the bearer and 'communication' took place verbally (or by sign language), between the 'stranger' and those whom s/he wanted to visit. "The passing of a boundary line by the blacks of another territory was considered as an act of hostility against the denizens of the invaded grounds, and wars were frequently the sequence of such transgressions." (The Aborigines of Australia, Roderick J Flanagan, 1888, pp 46)
Bora ceremony 1898
A Bora is the name given both to an initiation ceremony of Indigenous Australians, and to the site on which the initiation is performed. At such a site, boys achieve the status of men. The initiation ceremony differs from culture to culture, but often involves circumcision and scarification, and may also involve the removal of a tooth or part of a finger. The ceremony, and the process leading up to it, involves the learning of sacred songs, stories, dances, and traditional lore. Many different clans will assemble to participate in an initiation ceremony.
The word Bora was originally from South-East Australia, but is now often used throughout Australia to describe an initiation site or ceremony. It is called a Burbung in the language of the Darkinjung, to the North of Sydney. The name is said to come from that of the belt worn by initiated men. The appearance of the site varies from one culture to another, but it is often associated with stone arrangements, rock engravings, or other art works. Women are generally prohibited from entering a bora.
In South East Australia, the Bora is often associated with the creator-spirit Baiame. In the Sydney region, large Earth mounds were made, shaped as long bands or simple circles. Sometimes the boys would have to pass along a path marked on the ground representing the transition from childhood to manhood, and this path might be marked by a stone arrangement or by footsteps, or mundoes, cut into the rock. In other areas of South-East Australia, a Bora site might consist of two circles of stones, and the boys would start the ceremony in the larger, public, one, and end it in the other, smaller, one, to which only initiated men are admitted. Matthews (1897) gives an excellent eye-witness account of a Bora ceremony, and explains the use of the two circles.
Bora rings, found in South-East Australia, are circles of foot-hardened earth surrounded by raised embankments. They were generally constructed in pairs (although some sites have three), with a bigger circle about 22 metres in diameter and a smaller one of about 14 metres. The rings are joined by a sacred walkway. While most are confined to south east Queensland and eastern New South Wales, five earth rings have been recorded near the Victorian town of Sunbury, although Aboriginal use has not been documented.
Bora rings in the form of circles of individually placed stones are evident in Werrikimbe National Park in northern New South Wales.
In some parts of Australia the tribes called the places where initiation ceremonies were held bora grounds. They were called Buna grounds in other parts of the country, but the sites were not randomly chosen and were used for thousands of years by the tribe. The bora ground itself was identified by two circles that were drawn on the ground or were formed by rocks or pebbles. The circles were connected by a path, and other symbols were drawn into the earth or carved into trees near the grounds. These symbols were highly significant in ceremonies and also warned people (women, uninitiated youths and strangers) to stay away from the area.
Almost all of the Koori (preferred name of Australian Aborigines) shamans are initiated within one large group, called "The Dreamers". This is due to the fact that Australia has some of the strongest, and most chaotic, magic around. All of the shamans are needed to put a check on that chaos. A Koori shaman takes only a small penalty for some tasks when astrally perceiving. As a trade-off they are unable to mask. Any magician (full or adept) will notice this, whether or not he can assense. Even mundanes can tell when one of The Dreamers has entered the room. A Koori shaman will rarely travel outside of Australia; the need is too great in the outback for that. White Australian shamans cannot join the Dreamers, but some are associated with the Koori group.
The Australian aboriginal shamans - "clever men" or "men of high degree" - described "celestial ascents" to meet with the "sky gods" such as Baiame, Biral, Goin and Bundjil. Many of the accounts of ritualistic initiation bear striking parallels to modern day UFO contactee and abduction lore. The aboriginal shamanic "experience of death and rising again" in the initiation of tribal "men of high degree" finds some fascinating parallels with modern day UFO abduction lore involving the Gray Aliens. The "chosen one" (either voluntarily or spontaneously) is set upon by "spirits", ritualistically "killed", and then experiences a wondrous journey (generally an aerial ascent to a strange realm) to meet the "sky god." He is restored to life -- a new life as the tribal shaman.
Ritual death and resurrection, abduction by powerful beings, ritual removal or rearrangement of body parts, symbolic disembowelment, implanting of artifacts, aerial ascents and journeys into strange realms, alien tutelage and enlightenment, personal empowerment, and transformation - these and many other phenomena are recurring elements of the extraordinary shamanic tradition.
From Wikipedia, the free encyclopedia.
The Hebrew calendar is the lunisolar calendar used in Judaism. It is based upon both the lunar cycle (which defines months) and the solar cycle (which defines years). This is in contrast to the Gregorian calendar, which is based solely upon the solar cycle.
Jews have been using the lunar calendar since Biblical times, but usually referred to months by number rather than name. During the Babylonian exile, they adopted Babylonian names for months and possibly a regular pattern of intercalating the 13th month. Some sects, such as the Essenes, used a solar calendar.
The Hebrew year 1 started on Sunday, September 6, 3761 BC, the traditional Jewish date of Creation. This means that adding 3761 to a Gregorian year number will yield the Hebrew year number (within one year). This actually only works until the Gregorian year 22,203, but it's a fairly good rule of thumb.
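The rule of thumb above can be sketched as a one-line helper (a minimal illustration; the function name is ours, and the result is accurate only to within one year because the Hebrew year begins in autumn):

```python
def approx_hebrew_year(gregorian_year: int) -> int:
    """Approximate the Hebrew year for a Gregorian year, within one year."""
    return gregorian_year + 3761

# Most of Gregorian year 2000 falls in Hebrew year 5760 or 5761.
print(approx_hebrew_year(2000))  # -> 5761
```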
The Hebrew month is tied to the average time taken by the Moon to cycle from lunar conjunction to lunar conjunction. Twelve lunar months are approximately 354 days, while the solar year is approximately 365 days, so an extra lunar month must be added every two or three years.
The calendar is thus also tied to a 19-year cycle of 235 lunar months. The average Hebrew year length is 365.2468 days, in contrast to the average tropical solar year which is measured at roughly 365.2422 days. Approximately every 216 years, the Hebrew year is "slower" than the average solar year by a full day. Since the average Gregorian year is 365.2425 days and repeats every 400 years, the average Hebrew year is slower by a day every 231 Gregorian years.
There are exactly 14 different patterns that Hebrew calendar years may take. Each of these patterns is called a "keviyah" (Hebrew for "a setting"), and is distinguished by the day of the week for Rosh Hashanah of that particular year and by that particular year's length.
- A chaserah year (Hebrew for "deficient" or "incomplete") is 353 or 383 days long because a day is taken away from the month of Kislev. The Hebrew letter ח ("het") and the letter for the weekday denote this pattern.
- A kesidrah year ("regular" or "in-order") is 354 or 384 days long. The Hebrew letter ק ("qof") and the letter for the weekday denote this pattern.
- A shlemah year ("abundant" or "complete") is 355 or 385 days long because a day is added to the month of Heshvan. The Hebrew letter ש ("shin") and the letter for the weekday denote this pattern.
Hebrew time measurement is governed by rabbinic law, which divides the hour into 1080 parts (a part lasts 3 and 1/3 seconds, and each minute has 18 parts). This simplifies calculations, as only days, hours and parts are required. The weekdays start with Sunday (day 1) and proceed to Saturday (day 7). Since some calculations use division, a remainder of 0 signifies Saturday.
The calendar is based on virtual lunar conjunctions called "molads" spaced precisely 29 days, 12 hours, and 793 parts apart. Actual conjunctions vary from the molads by up to 13 hours in each direction due to the nonuniform velocity of the moon. This value for the interval between molads (the mean synodic month) was known to the Babylonians by about 250 BCE and later verified by the Greek astronomer Hipparchus. Its remarkable accuracy was achieved using records of eclipses over long periods. Measured using an absolute scale, such as an atomic clock, the mean synodic month is becoming gradually longer, but since the rotation of the earth is slowing even more, the mean synodic month is becoming gradually shorter in terms of the day-night cycle. The value 29-12-793 was almost exactly correct in 1 CE and is now about 0.6 s per month too great.
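The molad interval can be checked with the day-hour-part arithmetic just described; a quick sketch (variable names are ours):

```python
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR  # 25920 parts in a day

# 29 days, 12 hours, 793 parts between successive molads
molad_parts = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793

mean_synodic_month = molad_parts / PARTS_PER_DAY
print(round(mean_synodic_month, 6))  # -> 29.530594 days
```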
The 19 year cycle has 12 non-leap and 7 leap years. There are 235 lunar months in each cycle. This gives a total of 6939 days, 16 hours and 595 parts for each cycle. Due to the vagaries of the Hebrew calendar, 19 Hebrew years can be either 6939, 6940, 6941, or 6942 days each. To start on the same day of the week, the days in the cycle must be divisible by 7, but none of these values can be so divided. This keeps the Hebrew calendar from repeating itself too often. The calendar almost repeats every 247 years, except for an excess of 50 minutes (905 parts). So the calendar actually repeats every 36,288 cycles (every 689,472 Hebrew years).
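The cycle total of 6939 days, 16 hours and 595 parts quoted above follows directly from 235 molad intervals; a sketch of the check:

```python
PARTS_PER_HOUR = 1080
PARTS_PER_DAY = 24 * PARTS_PER_HOUR

month = 29 * PARTS_PER_DAY + 12 * PARTS_PER_HOUR + 793  # one molad interval, in parts
cycle = 235 * month                                     # 235 lunar months per 19-year cycle

days, rest = divmod(cycle, PARTS_PER_DAY)
hours, parts = divmod(rest, PARTS_PER_HOUR)
print(days, hours, parts)  # -> 6939 16 595
```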
The leap years of 13 months are the 3rd, 6th, 8th, 11th, 14th, 17th, and the 19th years. Dividing the Hebrew year number by 19 and looking at the remainder will tell you if the year is a leap year (for the 19th year, the remainder is zero). A Hebrew leap year is one that has 13 months in it; a non-leap year has 12 months. A mnemonic word in Hebrew is GUCHADZaT (the Hebrew letters gimel-vav-het aleph-dalet-zayin-tet, i.e. 3, 6, 8, 1, 4, 7, 9. See Hebrew numerals). Another mnemonic is that the intervals of the major scale follow the same pattern as do Hebrew leap years: a whole step in the scale corresponds to two non-leap years between consecutive leap years, and a half step to one non-leap year between two leap years.
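The remainder test described above is easy to state in code (a sketch; the closed-form variant in the comment is a common equivalent, not taken from this text):

```python
# Leap years of the 19-year cycle: remainders 3, 6, 8, 11, 14, 17 and 0
# (0 standing for the 19th year).
LEAP_REMAINDERS = {3, 6, 8, 11, 14, 17, 0}

def is_hebrew_leap_year(year: int) -> bool:
    return year % 19 in LEAP_REMAINDERS

# A common equivalent closed form: (7 * year + 1) % 19 < 7
print([y for y in range(1, 20) if is_hebrew_leap_year(y)])  # -> [3, 6, 8, 11, 14, 17, 19]
```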
A Hebrew non-leap year will only have 353, 354, or 355 days. A leap year will have 383, 384, or 385 days.
Although simple math would calculate 21 patterns for the calendar years, there are other limitations which mean that Rosh Hashanah may only occur on Mondays, Tuesdays, Thursdays, and Saturdays, according to the following table:
Day of Week | Number of Days
Basically, the Hebrew months alternate between a short month and a long month, for example: Tishrei (30 days), Cheshvan (also spelled Heshvan) (29 days), Kislev (30 days), Tevet (29 days), Shevat (30 days), Adar (29 days), Nisan (30 days), Iyar (29 days), Sivan (30 days), Tammuz (29 days), Av (30 days), Elul (29 days).
For leap years, a 30-day month of Adar 1 is added immediately after the month of Shevat, and the 29-day Adar is called Adar 2. This is because the difference of about 11.25 days between 12 lunar months and one solar year adds up to more than a month in just three years.
The 265 days from the first day of the 29 day month of Adar (the last one of the year) and ending with the 29th day of Heshvan forms a fixed length period that has all of the festivals specified in the Bible, such as Pesach (Nisan 15), Shavuot (Sivan 6), Rosh Hashannah (Tishrei 1), Yom Kippur (Tishrei 10), Sukkot (Tishrei 15), and Shemini Atzeret (Tishrei 22).
The festival period from Pesach up to and including Shemini Atzeret is exactly 185 days long. The time from the traditional day of the vernal equinox up to and including the traditional day of the autumnal equinox is also exactly 185 days long. This has caused some unfounded speculation that Pesach should be March 21st, and Shemini Atzeret should be September 21, which are the traditional days for the equinoxes. Just as the Hebrew day starts at sunset, the Hebrew year starts in the Autumn (Rosh Hashanah), although the mismatch of solar and lunar years will eventually move it to another season.
Karaites use the lunar month and the solar year, but determine when to add a leap month by observing barley, rather than a fixed calendar. This occasionally puts them a month out of sync with the rest of the Jews.
Glycemic load is a way of determining how much glucose enters the bloodstream when a particular food is eaten.
All carbohydrates are converted to glucose – yes rice, pasta, apples and onions. Once eaten they break down into glucose in the gut. The glycemic load is calculated by multiplying the glycemic index of a carbohydrate by the actual amount of glucose in the portion size eaten (carbohydrate density). The resulting answer shows you how much actual glucose enters the bloodstream after a particular meal.
All carbohydrates when digested are broken down into glucose. Single glucose molecules are able to get absorbed from the gut into the bloodstream. Carbohydrates break down into single glucose molecules at different speeds during digestion. If a carbohydrate breaks down very quickly, you get relatively more glucose crossing from the gut into the bloodstream in a short space of time, increasing blood sugar levels rapidly. Your body responds to rapidly rising blood sugar with a large release of insulin.
The glycemic index (GI) is a measurement of how quickly each carbohydrate reaches the bloodstream as pure glucose. The glycemic index is measured by consuming a food containing 50 g of available carbohydrate, and then measuring blood glucose every 15 – 30 minutes for 2 – 3 hours. The results are mapped onto a graph, and the curve is then compared to that of a reference food, usually glucose. The lower the GI of a food, the flatter the curve on the graph and the slower it is digested and converted to glucose. The glycemic index is referenced against glucose, which has the value of 100.
High glycemic index = greater than 70
Intermediate glycemic index = 55 – 70
Low glycemic index = less than 55
Many of the foods we eat in abundance today are high in Glycemic index – most breads, grains, crackers, muffins and cakes are high GI. So we are constantly eating foods that push our blood sugar levels up high.
In general, the more a food is processed, the faster it will break down, e.g. the fine ground flour in white bread, or instant potatoes. Food that is less processed, contains more fibre, and comes in larger pieces will break down more slowly, e.g. whole vegetables.
Here are some examples of the different densities of carbohydrates, that is the amount of glucose that the food is converted into.
1 cup of cooked rice, polenta or couscous = 10 teaspoons sugar (3 ½ tablespoons), i.e. 45 grams
1 cup cooked pasta = 8 teaspoons sugar, 40g
1 baked potato = 8 teaspoons sugar, 40g
1 cup broccoli = ½ teaspoons sugar, 3g
1 cup pineapple = 4 teaspoons sugar, 18g
1 cup strawberries = 2 teaspoons sugar, 9g
As you can see the paleo foods (fibrous fruit and vegetables) have a far lower density, so they do not turn into loads of glucose.
Many of today’s everyday foods are high in density and are also high in glycemic index, and when eaten in a typical portion, we get a large amount of sugar going into our bloodstream very quickly, in other words a large sugar load, called Glycemic load.
When we take some typical foods and work out glycemic load – you can see there is a big difference between grains and paleo foods. Paleo carbohydrates – vegetables and fruit are typically low in glycemic load.
Glycemic load = GI x carbohydrate grams per serving, then divide by 100
Here are some examples:
Pasta: GI 37 × 42 grams (1 cup) ÷ 100 ≈ 16
Bagel: GI 72 × 35 grams (1 whole) ÷ 100 ≈ 25
Apple: GI 36 × 18 grams (1 whole) ÷ 100 ≈ 6.5
Broccoli: GI 5 × 10 grams (4 cups cooked) ÷ 100 = 0.5
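The formula above is easy to wrap in a helper and check against the worked examples (a sketch; the GI values and serving sizes are the ones quoted in the text):

```python
def glycemic_load(gi: float, carb_grams: float) -> float:
    """Glycemic load = glycemic index x carbohydrate grams per serving / 100."""
    return gi * carb_grams / 100

print(round(glycemic_load(37, 42)))     # pasta, 1 cup   -> 16
print(round(glycemic_load(72, 35)))     # bagel, 1 whole -> 25
print(round(glycemic_load(36, 18), 1))  # apple, 1 whole -> 6.5
print(glycemic_load(5, 10))             # broccoli, 4 cups cooked -> 0.5
```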
More on the Glycemic Index can be found here, from the University of Sydney
Here is an overview of a number of foods and their glycemic index (all foods, not paleo only)
To work out the Glycemic load, multiply a typical serving size by the GI number as shown above, then divide by 100
Glycemic index of foods
Glucose is the reference food, GI glucose is 100
Low GI foods are those below 55, moderate GI foods are between 55 – 70 and high GI foods are more than 70.
Breads & Crackers
French Baguette 95
Rice cake 82
Rice cracker 82
Water cracker 78
Wholemeal bread 71
White bread 70
Pita bread white 57
Vita wheat 55
Sourdough wheat 54
Burgen Soy Lin 36
Rice bubbles 89
Coco pops 77
Just right 60
Muesli untoasted 56
Special K 54
Sultana Bran 52
Porridge (ave) 50
Muesli toasted 43
All bran 30
Grains / pasta
Calrose rice 83
Basmati rice 59
Brown rice 55
Long grain white rice 50
Pearled barley, boiled 25
Bulgur wheat 48
Noodles, instant 47
Egg fettuccine 32
Dates, dried 103
Rock melon 65
Potato, baked 85
French Fries 75
Potato, new 62
Kidney beans, canned 52
Baked beans 48
Chick peas 33
Black beans 30
Kidney beans 27
Soya beans 18
Dairy foods Note – dairy products trigger a much bigger insulin response than the glycemic index would indicate.
Milk, skim 32
Milk, whole 27
Yoghurt, flav, low fat 33
Ice-cream 36 – 80 (the more fat, the lower the GI)
Soft drinks 68
Orange juice 53
Apple juice 41
Tomato juice 38
Jelly beans 80
Life savers 70
Mars bar 68
Muesli bar 61
Fructose 20 (although low GI, fructose has other problems, as it needs to be processed by the liver)
1902 Encyclopedia > Spectroscopy
SPECTROSCOPY. The spectroscope is an instrument which separates luminous vibrations of different wave-lengths, as far as is necessary for the object in view. It consists of three parts: the collimator, the prism or grating, and the telescope. The collimator carries the slit through which the light is admitted and a lens which converts the diverging pencil of light into a parallel pencil. The pencils carrying light of different wave-lengths are turned through different angles by the prism or grating, which is therefore the essential portion of the spectroscope. The telescope serves only to give the necessary magnifying power, and is dispensed with in small direct vision spectroscopes. For a description of the different kinds of prism used, see OPTICS; and for an explanation of the action of the grating, see WAVE THEORY. The most important adjustment in the spectroscope is that of the collimator. Especially in instruments of large resolving power it is essential for good definition that the light should enter the prism or fall on the grating as a parallel pencil. For a method allowing an easy and accurate adjustment for each kind of ray, see an article in Phil. Mag., vol. vii. p. 95 (1879).
Prisms are nearly always used in the position of minimum deviation, but, if the collimator is properly adjusted, this is by no means a necessary condition for good definition. Prisms as generally cut, with an isosceles base, give the greatest resolving power in the position of minimum deviation, but the loss in resolving power is not great for a small displacement. The dispersion and magnifying power of a prism can be considerably altered by a change of its position, and a knowledge of this fact is of great value to an experienced observer. The use of a prism in a position different from that of minimum deviation is, however, a luxury which only those acquainted with the laws of optics can indulge in with safety.
Lord Rayleigh has given the theory of the spectroscope under OPTICS, and shown on what its resolving power depends. There is no connexion between resolving power and dispersion, any value of resolving power being consistent with any value of dispersion. To obtain large resolving power with small dispersion requires, however, the use of inconveniently large telescopes and prisms or gratings. It is easy, on the other hand, to obtain small resolving power together with large dispersion.
The following definitions would be found of general use if adopted. Resolving Power. The unit resolving power of a spectroscope in any part of the spectrum is that resolving power which allows the separation of two lines differing by the thousandth part of their own wave-length or wave-number, the wave-number being the number of waves in unit length. Purity. The unit purity of a spectrum is that purity which allows the separation of two lines differing by the thousandth part of their own wave-length or wave-number. We speak of the resolving power of a spectroscope and of the purity of a spectrum. The resolving power is a constant for each spectroscope, and independent of the width of the slit. The purity of a spectrum, on the other hand, depends on the width of the slit, unless that width is small compared to a certain quantity presently to be mentioned. The resolving power of a spectroscope is numerically equal to the greatest purity of spectrum obtainable by it.
Adopting these definitions, we get from Lord Rayleigh's equations for the resolving power R of a grating

1000R = mn,
where n is the total number of lines used on the grating and m the order of the spectrum. For a spectroscope with simple prisms we get
1000R = (t2 - t1) dμ/dλ,

where t2 and t1 are the greatest and smallest lengths of path in the dispersive medium. If we put for the refractive index of the medium μ = A + B/λ² we may write

1000R = 2B(t2 - t1)/λ³.
It will be seen that, while the resolving power of a spectroscope with grating depends only on the order of the spectrum and is independent of the wave-length for each order, the resolving power of a spectroscope with prism will vary inversely as the third power of the wave-length λ, so that the resolving power will be about eight times as great in the violet as in the red (see OPTICS). If compound prisms are used we must write
1000R = 2(B2 t2 - B1 t1)/λ³,

where t2 is the greatest effective length of path in one medium and t1 in the other medium, B2 and B1 being the dispersive constants for the two media.
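In the article's units (resolving power 1 separates lines differing by one thousandth of their wave-length), the grating and simple-prism relations can be sketched as follows; the numerical values at the end are illustrative assumptions, chosen only to show the roughly eightfold violet-to-red gain the text mentions:

```python
def grating_resolving_power(m: int, n: int) -> float:
    """1000R = mn: order of spectrum m, total number of ruled lines used n."""
    return m * n / 1000

def prism_resolving_power(B: float, t2: float, t1: float, wavelength: float) -> float:
    """1000R = 2B(t2 - t1)/lambda^3, for a refractive index mu = A + B/lambda^2."""
    return 2 * B * (t2 - t1) / wavelength**3 / 1000

print(grating_resolving_power(2, 10000))  # second order, 10000 lines -> 20.0

# R varies as 1/lambda^3, so halving the wavelength multiplies R by eight:
red, violet = 760e-9, 380e-9  # metres; illustrative values
ratio = prism_resolving_power(1.0, 0.1, 0.0, violet) / prism_resolving_power(1.0, 0.1, 0.0, red)
print(round(ratio, 6))  # -> 8.0
```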
The purity P of a spectrum is given by the equation

P = Rλ/(λ + dψ),
where d denotes the width of slit and ψ is the angle subtended by the collimator lens at the slit. If the slit is sufficiently narrowed, dψ may be made small compared to λ, and in that case the purity of the spectrum is independent of the width of slit and equal to the resolving power. If, on the other hand, a wide slit is used, so that dψ is large compared to λ, the purity becomes inversely proportional to the width of slit. In actual work the slit is generally of such width that neither term in the denominator of the expression for purity can be neglected.
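The limiting behaviour just described can be illustrated with a small sketch; we assume the purity relation takes the form P = Rλ/(λ + dψ), a reconstruction from the two limits stated in the text rather than a formula we can quote with certainty, and the numerical values are illustrative:

```python
def purity(R, wavelength, slit_width, psi):
    """Purity P = R * lambda / (lambda + d * psi), with slit width d and
    psi the angle subtended by the collimator lens at the slit."""
    return R * wavelength / (wavelength + slit_width * psi)

R, lam, psi = 100.0, 5e-7, 0.02  # illustrative values

narrow = purity(R, lam, 1e-7, psi)  # d*psi << lambda: P approaches R
wide = purity(R, lam, 1e-3, psi)    # d*psi >> lambda: P ~ R*lambda/(d*psi)
print(round(narrow, 2), round(wide, 2))  # -> 99.6 2.44
```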
There is a necessary limit to the resolving power of all optical instruments, depending on the fact that light consists of a series of groups of waves incapable of interfering with each other. If it is true, as is generally believed, but without sufficient reason, that a retardation of 50,000 wave-lengths is sufficient to destroy the capability of interference, that is to say, that the groups consist on the average of approximately 50,000 waves, then the maximum purity obtainable in any spectroscope is 50. The closest line resolved with a grating, as far as the present writer is aware, requires a resolving power of about 100. Professor Piazzi Smyth has with prisms realized a purity of 50. It would seem, therefore, that the theoretical limit of purity has very nearly been reached, for, though the estimate of 50,000 waves to the group is in all probability too small, there are other considerations which render it highly improbable that the total number of waves to the group should, for sunlight at any rate, be more than two or three times larger. The limit of possible purity will very likely depend on the temperature of the luminous body.
Almost the greatest practical difficulty which the spectroscopist has to contend with generally is the want of sufficient light. The following remarks apply to line spectra principally, but they hold also almost entirely for the spectra of fluted bands, which break up into lines under high resolving power. The maximum illumination for any line is obtained when the angular width of the slit is equal to the angle subtended by one wave-length at a distance equal to the collimator aperture. In that case dψ = λ and the purity is half the resolving power. Hence when light is a consideration we shall not, as a rule, realize more than half the resolving power of the spectroscope. If the visual impression depended only on the intensity of illumination, a further widening of the slit should not increase the visibility of a line. As a matter of fact spectroscopists generally work with slits wider than that which theoretically gives full illumination. The explanation of the fact is physiological, visibility depending on the apparent width of the object. If different spectroscopes have their slits of such width that the apparent width of a line as seen by the eye is the same, and if the magnifying power is such that the pupil is just filled with light, the purity of the spectrum is directly proportional to the resolving power. We come to the conclusion, therefore, that for both narrow and wide slits the efficiency of a spectroscope depends exclusively on its resolving power. It has been pointed out by Lord Rayleigh that, owing to the want of definition in the optical images on the retina when the full aperture of the pupil is used, the pencil must be contracted to a third or a quarter of its natural width, if full resolving power is to be obtained. This is accompanied with a serious loss of light, which can be partly obviated by contracting the horizontal aperture only (the refracting edge being supposed vertical). There are two ways of doing this.
One consists in the use of magnifying half prisms. But the loss of light by reflexion in simple half prisms more than counterbalances the advantage; compound half prisms like those used by Christie may, however, be employed. We may also use prisms of three or four times the height of the effective horizontal aperture, with correspondingly large telescopes, and then by the eye-piece contract the beam until its vertical section fills the pupil. The latter plan, though theoretically best, involves more expensive apparatus and prisms of very homogeneous material.
The question of illumination is important also when photography is used for spectroscopic analysis. For a given intensity of the source of light the intensity of the image on the sensitive film will be directly proportional to the solid angle of the cone of light forming the last image, and will be independent of the arrangement of intermediate lenses. Hence lenses with as short a focus compared to aperture as is consistent with good definition should be used in the camera.
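For small cones the solid angle scales as the square of aperture over focal length, which is the quantitative content of the advice above. The following sketch uses illustrative dimensions of my own choosing, not figures from the text.

```python
# Sketch: plate intensity is proportional to the solid angle of the
# final converging cone, roughly pi * (D / 2f)^2 in the small-angle
# approximation -- hence the preference for short-focus camera lenses.
import math

def relative_plate_intensity(aperture_m, focal_length_m):
    """Solid angle (steradians) of the cone converging on the plate."""
    half_angle = aperture_m / (2 * focal_length_m)
    return math.pi * half_angle ** 2

fast = relative_plate_intensity(0.05, 0.25)   # 5 cm aperture, f/5
slow = relative_plate_intensity(0.05, 1.00)   # same aperture, f/20
print(fast / slow)   # ~16: a four-fold shorter focus, sixteen-fold intensity
```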
The methods of recording and reducing spectroscopic observations are described in all books and treatises on the subject and may therefore be passed over here.
A lens is often used to concentrate the light of the source on the slit. There is some loss of light due to reflexion from the surface of the lens, but its position, aperture, and focal length do not affect the luminosity of the spectrum seen as long as the whole collimator is filled with light.
Bodies are rendered luminous for spectroscopic investigation either by being placed in the Bunsen flame or by the help of the electric current. A little difficulty may arise where the body is given in solution and does not show its characteristic lines in the flame. Lecoq de Boisbaudran takes the spark from the surface of the solution. The present writer has found the tube sketched in the figure on the next page a great improvement on those commonly used, if a sufficient quantity of the solution is at hand; otherwise the method is too wasteful. The current is brought into the solution by a platinum wire, sealed into a small glass tube; the platinum wire reaches about to the level of the open end of the tube. A capillary of thick-walled glass tubing is placed over the platinum wire; the liquid rises in the capillary and sparks can be taken as from a solid. The lines due to the glass are easily eliminated. If a small quantity of material only is available, the plan adopted by Bunsen and extensively used by Hartley seems the most successful. Pointed pieces of charcoal (Bunsen) or pieces of graphite pointed to a knife edge (Hartley) are impregnated with the liquid, and the spark is taken from them. Some substances, when introduced into a vacuum tube, especially near the negative pole, and under great exhaustion, show a characteristic phosphorescence. Becquerel was the first to examine the spectra shown under these circumstances, and Crookes has lately used the same method with great success.
A good deal of discussion has taken place on the spectra of the metalloids, owing to the fact that they seem to be able to give different spectra under different circumstances. Spectra have occasionally been assigned to the elements which on further investigation were found to belong to some compound present. According to the general opinion of spectroscopists at present, different spectra of the same elements are always due to different allotropic conditions. If a complex molecule breaks up into simpler molecules the breaking up is always accompanied by a change of spectrum.
Nitrogen. (a) The line spectrum appears whenever a strong spark (jar discharge) is taken in nitrogen gas. It is always present when metallic spectra are examined by the ordinary method of allowing the jar discharge to pass between metallic poles. Hartley (Phil. Trans., 1884, part i.) has measured the ultra-violet lines of the air spectrum, but has not separated the oxygen from the nitrogen lines. (b) The band spectrum of the positive discharge, which is generally called the band spectrum of nitrogen, always appears when the discharge is sufficiently reduced in intensity. The spectrum consists of two sets of bands of different appearance, one in the less refrangible part and one in the more refrangible part of the spectrum, the two sets of bands overlapping in the green. Hence some observers believe the spectrum to be made up of two distinct spectra. Plücker and Hittorf (Phil. Trans., 1865) give a coloured drawing of this spectrum, which is one of the most beautiful that can be observed. The most complete drawing of it is given by Piazzi Smyth (Trans. Roy. Soc. Edin., vol. xxxii. part iii.), and there is also a good drawing by Hasselberg (Mem. Acad. Imp. de St. Petersb., vol. xxxii.). (c) The glow which surrounds the negative electrode in an exhausted tube shows in many cases a spectrum which, as a rule, is not seen in any other part of the tube. The memoir of Hasselberg contains a drawing of it. The spectrum seen when a weak spark is taken in a current of ammonia is neither that of nitrogen nor that of hydrogen, but must be due to a compound of these gases. When the pressure of the gas is reduced, a single band is seen having a wave-length from 5686 to 5627 tenth-metres (Nature, vi. p. 359).
When a spark is taken from a liquid solution of ammonia a more complicated spectrum appears (Lecoq de Boisbaudran), and, if ammonia and hydrogen are burnt together either in air or oxygen, a complicated spectrum is obtained the chemical origin of which has not been satisfactorily explained. Drawings of it are given by Dibbits (Pogg. Ann., cxxii. p. 518) and by Hofmann (Pogg. Ann., cxlvii. p. 95). The absorption spectrum of the red fumes of nitrogen tetroxide has often been mapped; the most perfect drawing is given by Dr B. Hasselberg (Mem. Acad. Imp. de St. Pet., xxvi.). According to Moser (Pogg. Ann., clx. p. 177), three bands close to the solar line C disappear when the vapour is heated. Recently Deslandres has obtained in vacuum tubes some ultra-violet bands which seem to be due to a compound of nitrogen and oxygen (C.R., ci. p. 1256, 1885).
Oxygen. (a) The elementary line spectrum of oxygen is that which appears at the highest temperature to which we can subject oxygen, that is, whenever the jar and air break are introduced into the electric circuit. It consists of a great number of lines, especially in the more refrangible part of the spectrum. (b) The compound line spectrum of oxygen appears at lower temperatures than the first. It consists, according to Piazzi Smyth, of six triplets and a number of single lines. This spectrum corresponds to the band spectrum of nitrogen. (c) The continuous spectrum of oxygen appears at the lowest temperature at which oxygen is luminous. The wide part of a Plücker tube, for instance, filled with pure oxygen generally shines with a faint yellow light, which gives a continuous spectrum. Even at atmospheric pressure this spectrum can be obtained by putting the contact breaker of the induction coil out of adjustment, so that the spark is weakened. (d) The spectrum of the negative glow was first accurately described by Wüllner, and is always seen in the glow surrounding the negative electrode in oxygen. It consists of five bands, three in the red and two in the green. For further information respecting these spectra, see Schuster (Phil. Trans., clxx. p. 37, 1879) and Piazzi Smyth (Trans. Roy. Soc. Edin., vol. xxxii. part iii.). According to Egoroff, the A and B lines of the solar spectrum are due to absorption by oxygen in our atmosphere, and some recent observations of Janssen seem to support this view.
Carbon. (a) The line spectrum appears when a very strong spark is sent through carbonic oxide or carbonic acid. The ultra-violet lines observed by Hartley when sparks are taken from graphite electrodes also belong probably to this spectrum. (b) Considerable discussion has taken place as to the origin of the spectrum seen at the base of a candle or a gas flame. At first observations seemed to point to the fact that it was due to a hydrocarbon. It has been ascertained, however, that sparks taken in cyanogen gas, even when dried with all care, show the spectrum, and a flame of cyanogen and oxygen gives the same bands brilliantly. These facts have convinced the majority of observers that the spectrum is a true carbon spectrum. The best drawing is given by Piazzi Smyth, who ascribes the spectrum, however, to a hydrocarbon. The flame of cyanogen, which had already been examined by Faraday and Draper before the days of spectrum analysis, shows a series of bands in the red, reaching into the green. There is no doubt that they are due to a compound of carbon and nitrogen. Another series of bands in the blue, violet, and ultra-violet have been also proved by Liveing and Dewar to be due to a compound of nitrogen and carbon. If the discharge is passed at low pressure through carbonic acid or carbonic oxide a spectrum is seen which seems to belong to carbonic oxide. A very beautiful and remarkable drawing of this spectrum, especially of its most brilliant band, has been published by Piazzi Smyth.
Very little need be said of the remaining metalloids, as we do not possess a sufficiently careful examination of their spectra. Chlorine, bromine, and iodine show bands by absorption. If a spark is passed through the gases line spectra appear. Sulphur volatilized in a vacuum tube may show either a line or a band spectrum under the influence of the electric discharge. The absorption through the vapour of sulphur is continuous at first on volatilization, but as the vapour is heated to 1000° the continuous spectrum gives way to a band spectrum. A spark through the vapour of phosphorus gives a line spectrum. We may obtain the spectra of fluorine, silicon, and boron by comparing the spectra given by sparks taken in atmospheres of fluoride of boron and fluoride of silicon.
Spectra of Metals and their Compounds.
Hydrogen. If sparks are taken through hydrogen, four well-known lines appear in the visible region of the spectrum. The remarkable series of ultra-violet lines photographed by Dr Huggins in the spectra of some stars which in their visible part show hydrogen chiefly has suggested the question whether the whole series is not due to that gas. This has now been proved to be the case by Cornu, who has recently examined the hydrogen spectrum with great care. In vacuum tubes filled with hydrogen a complicated spectrum often appears which is so persistent that nearly all observers have ascribed it to hydrogen (though Salet had given reasons against that conclusion). According to Cornu, the purer the gas the feebler does this spectrum become, so that the above-mentioned line spectrum seems to be the only true hydrogen spectrum. A flame of hydrogen in air or oxygen shows a number of lines in the ultra-violet belonging apparently to an oxide of hydrogen (Liveing and Dewar, Huggins). Aqueous vapour gives an absorption spectrum principally in the yellow.
Alkali Metals. The metals of the alkali group are distinguished by the fact that their salts give the true metal spectra when rendered luminous in the Bunsen burner; that is to say, their salts are decomposed and the radiation of their metallic base is sufficiently powerful to be visible at the temperature of the flame. Their spectra are not so easily seen if sparks are taken from the liquid solution, but Lecoq de Boisbaudran has obtained fine spectra of sodium and potassium by taking the spark from a semi-fluid bead of the sulphates. The most complete description of the spectra of sodium and potassium seen when the metals are heated up in the voltaic arc is given by Liveing and Dewar (Proc. Roy. Soc., xxix. p. 378, 1879), who have also mapped their ultra-violet lines (Phil. Trans., 1883, pt. i.). Abney has found a pair of infra-red lines belonging to sodium, with wave-lengths 8187 and 8199 (Proc. Roy. Soc., xxxii. p. 443, 1881). Becquerel finds lines in the infra-red at 11,420. The vapour of sodium and potassium heated up in a tube is coloured and shows a spectrum of fluted bands; but in the case of sodium the yellow line is always present at the same time. It is probable that the band spectrum belongs to the vapour, containing two atoms in each molecule, and that at higher temperatures the molecules are split up, the single atoms showing the line spectra. Both potassium and sodium show an additional absorption line (5510 for Na and 5730 for K) at the temperature at which the fluted bands appear. According to a suggestion of Liveing and Dewar, these lines may depend on the presence of hydrogen, which it is very difficult to exclude. These experimenters have also described interesting but complicated absorption phenomena depending on the simultaneous presence of two or more metals. Thus sodium and magnesium show a band in the green (λ = 5300), which does not appear when sodium alone or magnesium alone is volatilized.
Potassium and magnesium show similarly two lines in the red (Proc. Roy. Soc., xxvii. p. 350, 1878). If a spark is taken from potassium in an atmosphere of carbonic oxide a band appears (5700) depending probably on a combination between the potassium and the carbonic oxide. Lockyer has observed certain curious phenomena (Proc. Roy. Soc., vol. xxii. p. 378) taking place at the temperature at which the band spectrum of sodium changes into the line spectrum; these phenomena deserve a fuller investigation. Lithium furnishes a good example of a change in the relative intensity of lines at different temperatures. At the temperature of the flame the red line is the most powerful, an orange line being also seen. When a spark is taken from a liquid solution the orange line is far the strongest, and a blue line is seen, which in its turn rapidly gains in intensity as the temperature is raised. When the spark is taken from solutions of different strengths the more concentrated solution shows a change in relative intensity of lines in the direction in which an increase of temperature would act. Combination of the metals with transparent acids does not when in solution show any appreciable absorption in the visible part of the spectrum; but Soret has mapped their ultra-violet absorption.
Metals of Alkaline Earths. Calcium, strontium, and barium are distinguished by the fact that their volatile compounds give fine spectra in the Bunsen flame. The more stable salts, as the phosphates and silicates, give the reaction only feebly or not at all. When a salt like the chloride of barium is introduced into the flame the spectrum is seen to change gradually; the spectrum seen at first is different according as the chloride, bromide, or iodide is used, while the spectrum which finally establishes itself is the same for the different salts of the same metal. Mitscherlich, who was the first to investigate carefully these phenomena (Pogg. Ann., cxxi. p. 459, 1864), ascribes the spectra seen at first to the compound placed in the flame, while gradually the oxide spectrum gets the upper hand. This explanation has always been accepted, and receives support from the fact that the bromide spectrum is strengthened by introducing bromine vapour into the flame, and the other compound spectra can be similarly strengthened by introducing suitable vapours. There is an observation, however, made by Professors Liveing and Dewar which in one case is not compatible with Mitscherlich's explanation. "A mixture of barium carbonate, aluminium filings, and lamp-black heated in a porcelain tube gave two absorption lines in the green, corresponding in position to bright lines seen when sparks are taken from a solution of barium chloride, at wave-lengths 5242 and 5136, marked α and β by Lecoq de Boisbaudran." These two lines, or rather bands, are the brightest in the spectrum commonly ascribed to barium chloride. In addition to the compound spectra the brightest of the metallic lines seen at a low temperature appear in the flame. The metallic line is in the violet with calcium, in the blue with strontium, and in the green with barium. Sparks taken from a solution of the metallic salts show the compound spectra well, and in addition more of the true metallic lines than the flame.
The best drawings of the compound spectra are those given in Lecoq de Boisbaudran's Atlas; but measurements with higher resolving powers are much wanted. When the salts are introduced into the voltaic arc numerous metallic lines appear which have been mapped by Thalen. Liveing and Dewar have investigated those lines which can be reversed and have also mapped the ultra-violet spectra. Captain Abney has mapped a pair of infra-red lines belonging to calcium between 8500 and 8600, and, according to Becquerel, with the help of a phosphorescent screen bands or lines appear of still lower refrangibility (8830 to 8880). Lockyer (Phil. Trans., clxiii. p. 253, 1873, and clxiv. p. 805, 1874) has measured and mapped as regards their length the lines of these as well as of many of the other metals.
Metals of Magnesium Group. Beryllium presents comparatively simple spectroscopic phenomena, as far as it has hitherto been investigated. Two green lines were mapped by Thalen and five in the ultra-violet by Hartley (Jour. Chem. Soc., June 1883). The spectrum of magnesium is well known from its green triplet; but the vibrations of the metal seem very sensitive to a change of conditions. Full details are given by Liveing and Dewar in Proc. Roy. Soc., xxxii. p. 189. These authors have found that some of the bands seen occasionally, when magnesium wire is burned in air, are due to a compound of magnesium and hydrogen. The spectrum appears when sparks are taken from magnesium poles in an atmosphere containing hydrogen. For a description of the peculiarities of the flame, arc, and spark spectrum, the reader is referred to the original paper. The ultra-violet spectrum, which contains several repetitions of the green triplet, has also been mapped and measured by Hartley and Adeney (Phil. Trans., clxxv., 1884, pt. i.). The spectra of zinc and cadmium are obtained either by sparks from liquid solution or by the spark, with Leyden jar, from the metal poles. The ultra-violet spectra show for both elements a remarkable series of triplets, the lines of the cadmium triplet being about three times as far apart as those of the zinc triplets. The least refrangible of the series is in the blue, with wave-lengths 5085.1, 4799.1, 4677.0 for cadmium, and 4809.7, 4721.4, 4679.5 for zinc.
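The remark that the cadmium triplet's lines lie "about three times as far apart" as the zinc triplet's can be checked directly against the wave-lengths quoted in the text (taken here in Ångströms); the little script below is only a verification of that arithmetic.

```python
# Check of the triplet spacings quoted above (wavelengths in Angstroms
# as listed in the text for the least refrangible triplet of each metal).
cadmium = [5085.1, 4799.1, 4677.0]
zinc    = [4809.7, 4721.4, 4679.5]

def spacings(triplet):
    """Successive line separations within a triplet."""
    return [round(a - b, 1) for a, b in zip(triplet, triplet[1:])]

cd, zn = spacings(cadmium), spacings(zinc)
print(cd)   # [286.0, 122.1]
print(zn)   # [88.3, 41.9]
print([round(c / z, 2) for c, z in zip(cd, zn)])   # about 3.2 and 2.9
```

The ratios come out near three, in agreement with the statement in the text.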
Lead Group.The spectrum of lead is best obtained by taking the spark from the metallic poles. Care must be taken, however, to renew the surface frequently, otherwise the oxide spectrum will gradually make its appearance. The oxide itself shows its spectrum, according to Lecoq de Boisbaudran, in the Bunsen burner. The salts of thallium show the principal metal line at the temperature of the flame. The spark spectrum is more complicated. The ultra-violet spectra of both lead and thallium have been mapped.
Copper Group. The spectra of the metals belonging to this group are easily obtained in the ordinary way. When copper chloride is introduced into the Bunsen flame a fine spectrum of bands is seen. It is the same spectrum which is found when common salt is thrown upon white hot coals. This reaction for copper chloride is very sensitive, but it has never been satisfactorily decided whether the presence of copper is really necessary for its production or whether the spectrum belongs to a peculiar condition of chlorine vapour. Silver when first volatilized gives a green vapour, which at a low temperature shows continuous absorption, but at a higher temperature a spectrum of fluted bands (Lockyer). Mercury shows its lines with great brilliancy if introduced and heated in a vacuum tube. Some of the lines widen easily, and at higher pressures a continuous spectrum completely covers the background. The copper salts in aqueous solution absorb principally the red end of the spectrum, the green salts also the violet end. The glass, coloured green with oxide of copper, transmits through sufficient thickness exclusively the yellow and green rays between D and E (H. W. Vogel).
Cerium Group. Yttrium gives a good spark spectrum from the solution of the chloride; the salts show no absorption bands. Crookes has found, however, that a certain substance yields brilliant phosphorescent bands under the influence of the negative pole in a vacuum tube. These bands he has, after a lengthy investigation, put down to yttrium compounds, and explained the changes they undergo in different compounds and the sensitiveness of the reaction. Lecoq de Boisbaudran, who obtains the same spectrum by taking a spark (without Leyden jar) from solutions, making the solution the positive pole, has expressed an opinion that the bands are not due to yttrium but to two substances provisionally called by him Zα and Zβ. He has also under certain conditions seen a higher temperature spectrum, which he ascribes to Zγ, leaving it undecided whether Zγ is a new substance or identical with Zα (Phil. Trans., 1883, p. 891, and C.R., ci. p. 552, cii. p. 153). Lanthanum is easily recognized by a strong spark spectrum. Cerium, like yttrium and lanthanum, has no peculiar absorption spectrum when in combination and solution; although the salts are strongly coloured yellow, its line spectrum has characteristic lines in the blue. Didymium is characterized spectroscopically by the fine absorption spectra of its salts. Different salts show slightly different spectra, but they can be recognized at first sight as didymium spectra. The crystals of didymium salts show remarkable differences in the absorption spectra according to the direction in which the ray traverses the crystal. Light reflected from the powdered salts shows the characteristic spectrum. According to Auer von Welsbach (Monatsschr. f. Chemie, vi. p. 477), didymium has lived up to its name δίδυμοι, "twins," for by fractional crystallization he has found it to be an intimate mixture of two substances, each of them giving half the absorption spectrum and half the emission spectrum of didymium.
Terbium has a characteristic line spectrum when the spark is taken from a solution of the salts. The salts of erbium give a characteristic absorption spectrum, but till recently the drawings of it contained also absorption bands due to thulium and holmium. The spectrum of erbium, as previously mapped by Thalen, belongs almost exclusively to ytterbium; but he has recently mapped the lines belonging to what is now known as erbium (C.R., xci. p. 326). Erbium salts heated in the Bunsen burner show a spectrum of bright bands without apparent volatilization. Ytterbium, discovered by Marignac (atomic weight 173, Nilson), gives an absorption band in the ultra-violet. Its luminous spectrum is rich in lines (Thalen, C.R., xci. p. 326). Samarium, also discovered by Marignac and called by him originally Yβ, gives absorption bands in the visible part and in the ultra-violet (Soret, C.R., xc. p. 212). It frequently occurs with didymium, and most of the maps of the didymium spectrum contain the samarium bands. When precipitated with another metal it shows a brilliant phosphorescent spectrum (Crookes), which, however, is slightly different according to the metal. The peculiar yttrium spectrum is very weak even when it is mixed in considerable quantities with samarium. But when the quantity of yttrium is increased to about 60 per cent. a very rapid change takes place, and afterwards it is the samarium spectrum which is very weak. A band in the orange peculiar to the mixture, weak in pure samarium and absent in yttrium, is strongest in a mixture containing about 80 per cent. of samarium and 20 per cent. of yttrium. Holmium, identified as a separate element by Soret (C.R., xci. p.
378), has absorption bands in the visible part of the spectrum (6405, 5363, 4855 on Lecoq's map of chloride of erbium), and also a strongly marked ultra-violet absorption spectrum. Thulium, likewise first recognized by Soret, is band 6840 on Lecoq's drawing of chloride of erbium, and also possesses a band at 4645. Thalen has measured the bright line spectrum (C.R., xci. p. 376, 1880). Scandium is characterized by a bright line spectrum (Thalen, C.R., xci. p. 48, 1880). Gadolinium (Marignac's Yα) has a weak absorption spectrum in the ultra-violet and a characteristic phosphorescent spectrum (Proc. Roy. Soc., February 1886); but the latest researches of Crookes have rendered it probable that it is a mixture of several new elements (Proc. Roy. Soc., 10th June 1886). The mosandrium of Lawrence Smith seems a mixture of gadolinium and terbium. The philippium of Delafontaine was a mixture of yttrium and terbium; and the latest decipium of the same chemist is probably holmium.
Aluminium Group. The spectra of the metals belonging to this group can be obtained in the ordinary way by means of the electric spark. The chloride of indium shows the two strongest metallic lines, one in the indigo and one in the violet, when introduced into the Bunsen flame. According to Claydon and Heycock, a number of other lines appear when the spark is taken from the metal electrodes. When a weak spark is taken from aluminium electrodes in air a band spectrum is often seen belonging apparently to the oxide, for it disappears when the spark is taken in hydrogen. Gallium, another metal belonging to this group, was first discovered by means of its spectroscopic reaction. The chloride shows two violet lines feebly in the Bunsen flame, but strongly if a spark is taken from the liquid solution. The ultra-violet lines of indium and of aluminium have been photographed by Hartley and Adeney, as well as by Liveing and Dewar. Some of the lines had been previously mapped by Cornu, whose researches extend furthest into the ultra-violet. According to Stokes, aluminium shows lines more refrangible than those of any other metal, and the wave-lengths of their lines as measured by Cornu are for one double line 1934, 1929, and for another 1860, 1852.
Metals of the Iron Group. The spectroscopic phenomena of this group are somewhat complicated. The line spectra can be obtained either by taking sparks from the metal or from the solution of a salt, and also by placing the metal in the voltaic arc. The lines are very numerous and very liable to alter in relative intensity under different circumstances. The great difference shown, for instance, between the arc and spark spectra of iron in the ultra-violet region is shown in the map by Liveing and Dewar in Phil. Trans., 1885, pt. i. The visible part has also been investigated by the same authors and by Lockyer, and much information has thus been added to the knowledge previously obtained by Kirchhoff, Ångström, and Thalen. That part of the iron spectrum lying between a wave-length of 4071 and 2947 has been mapped by Cornu; Liveing and Dewar's observations refer chiefly to the more refrangible region. Considering the very important part which the iron spectrum plays in solar observations, a full investigation of its changes by a variation of temperature would at the present time be of great value. If observations with the method adopted by Lecoq de Boisbaudran were repeated with higher resolving powers they would add much to our knowledge. Some of the manganese salts, such as the chloride or carbonate, seem to be the only salts belonging to this group which show a characteristic spectrum when heated in the Bunsen burner or the oxyhydrogen flame. The spectrum observed in these cases is, according to Watts, the characteristic spectrum of the Bessemer flame, which disappears at the right moment for stopping the blast; it is probably due to an oxide of manganese. When a spark spectrum is taken from a solution of the chloride the same spectrum is seen, but the relative intensity of the lines depends on the length and the strength of the spark.
The green-coloured manganates show a continuous absorption at the two ends of the spectrum, transmitting in concentrated solutions almost exclusively the green part of the spectrum. The absorption bands of permanganate of potassium are well known and seem to be due to the permanganic acid, as they appear also with other permanganates. The green salts of nickel show a continuous absorption at the two ends of the spectrum. The cobalt salts show well-defined absorption bands. Their careful investigation by Dr W. J. Russell deserves special notice (Proc. Roy. Soc., xxxii. p. 258, 1881).
Metals of Chromium Group. The metallic spectra of this group have been measured principally by Thalen in the usual way. Lockyer and Roberts have obtained a channelled spectrum of chromium by absorption. As regards the spectra of compounds of chromium, the absorption of the vapour of chloro-chromic anhydride has been measured by Emerson and Reynolds (Phil. Mag., xlii. p. 41, 1871), and consists of a series of regularly distributed bands. The chromium salts all possess a decided colour and show interesting absorption phenomena. The chromates absorb the violet and blue completely, also the extreme red, and transmit only the orange, yellow, and in dilute solutions part of the green. The most complete investigation of the salts in which chromium plays the part of a base is due to Erhard in a dissertation published at Freiburg. Potassium chrom-alum, ammonia chrom-alum, and sulphate of chromium, when in solution, give an identical absorption for the same amount of chromium. The extreme red is freely transmitted by the violet solution, but the absorption grows rapidly towards the yellow. An indistinct absorption band (λ = 6790 to λ = 6740) is seen when the layer is thick or the solution concentrated. The strongest absorption takes place for a wave-length of 5800. The green is transmitted again more freely, the minimum absorption taking place for a wave-length 4880; the absorption then grows rapidly towards the violet. When the solutions are heated the colour changes to green, the absorption is increased throughout the spectrum, except in the green, where it remains nearly unchanged, and the minimum of absorption shifts to a wave-length of 5090. The solution, which remains green on cooling, has, when compared with its original state, an increased absorption in the red and blue and a slightly diminished absorption in the green.
When light is sent through plates cut out of crystals of potassium chrom-alum or ammonia chrom-alum, three absorption bands (6860, 6700, 6620) are seen in the red. The green and blue show the same absorption as the solution. The chloride in solution gives the same absorption as the chrom-alums, transmitting, however, slightly more light for the same quantity of chromium. The hot solution also shows the same changes, but with this difference that colour and absorption phenomena are almost entirely recovered on cooling. The nitrate (solution of chromic hydroxide in nitric acid) agrees with chrom-alum, but transmits more light. Red crystals of potassic chromic oxalate only transmit the red with an absorption band slightly less refrangible than B (λ = 6867). The blue salt has the absorption band at a wave-length of 7040 and transmits part of the light in the green and blue. The solutions of the salts show the same absorption as the crystals, with the position of the absorption band apparently unchanged. The warm solutions absorb more than the cold ones. The oxalate of chromium gives an absorption band of 6910 to 6860 and transmits the green and blue more freely than the double salt. The tartrate only shows the absorption band in the red very weakly and absorbs more red than the previously mentioned solutions. The acetate transmits more yellow than the other salts and has some broad absorption bands near a wave-length of 7170. When the solution is heated it becomes green, absorbing the red more than when cold, but leaving the green and blue absorption unchanged. The absorption phenomena shown by uranium salts are more complicated than those of the chromium salts, but they are at the same time more characteristic, as the spectra are more definitely broken up into bands. According to Vogel, the uranic and uranous salts behave differently (Praktische Spectral-Analyse, p. 247), but a more careful investigation is desirable.
Sorby finds that a mixture of zirconium and uranium dissolved in a borax bead shows characteristic bands, which are visible neither with uranium nor with zirconium alone.
There is little to be said as regards the remaining groups of metals (tin, antimony, gold). Their spectra are best obtained by taking the spark from metallic electrodes or by volatilization in the voltaic arc.
Influence of Temperature and Pressure on Spectra of Gases.
If the spectrum of an element is examined under different conditions of temperature or pressure, it is often found to differ considerably. The change may be small, that is to say, the lines or bands may only show a different distribution of relative intensity, or it may be so large that no relationship at all can be discovered between the spectra. It has been pointed out by Kirchhoff that a change in the thickness of the luminous layer may produce a change in the appearance of the spectrum, and Zöllner and Wüllner have endeavoured to explain in this way a number of important variations of spectra. But their explanation does not stand the test of close examination. The thickness of layer cannot be neglected in the discussion of solar and stellar spectra, or in the comparison of absorption spectra of liquids; but none of the phenomena which we shall notice here are affected by it.
Widening of Lines. The lines of a spectrum are found to widen under certain conditions, and, although probably all spectra are subject to this change, some are much more affected by it than others. The lines of hydrogen and sodium, for instance, widen so easily that it is sometimes difficult to obtain them quite sharp. When a system of lines widens it is generally found that the most refrangible lines widen most easily. A line may expand equally towards both sides or chiefly towards one side; in the latter case the expansion towards the less refrangible side preponderates pretty nearly in every case. It is the almost unanimous opinion of spectroscopists that the widening is produced by an increase of pressure. If sparks are passed through gases, the lines are always broader at high than at low pressures, and the metallic lines are also broader when a spark is taken from them at higher pressures. Without altering the pressure, we may often produce a widening of lines by an increase in the intensity of the discharge, but here the pressure is indirectly increased by the rise of temperature. According to the molecular theory of gases, the following explanation might be given for the widening of lines. As long as a molecule vibrates by itself uninfluenced by any other molecule, its vibrations will take place in regular periods. The lines of its spectrum will consequently be sharp. But, if the molecule is placed in proximity with others, its vibrations will be disturbed by occasional encounters. During each encounter forces may be supposed to act between the molecules, and these forces will affect the regularity of the vibration. The question arises, whether for a given temperature and pressure a line may be of different width according as the molecule is placed in an atmosphere of similar or dissimilar molecules. Such a difference exists in all probability.
If gases are mixed in different proportions, the lines are sharper when an element is present in small quantities, although the total pressure may be the same. There is one cause which limits the sharpness of spectroscopic lines : the molecules of a gas have a translatory motion. Those molecules which are moving towards us will send us light which is slightly more refrangible than those which move away from us ; hence each line ought to appear as a band. In reality the width of lines generally is greater than that due to this cause.
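The translatory-motion limit just described can be put into figures with modern constants. A minimal sketch in Python (the r.m.s. ideal-gas speed and the numerical values are illustrative assumptions of mine, not part of the original discussion):

```python
import math

def doppler_width(wavelength_angstrom, molar_mass_kg, temperature_k):
    """Approximate spread 2 * wavelength * v / c of a line emitted by
    molecules moving towards and away from us with thermal speed v."""
    R = 8.314        # gas constant, J/(mol K)
    c = 2.998e8      # speed of light, m/s
    v = math.sqrt(3 * R * temperature_k / molar_mass_kg)  # r.m.s. speed
    return 2 * wavelength_angstrom * v / c

# The sodium D line (5890 Angstrom) at 1000 K is spread over only a few
# hundredths of an Angstrom by this cause.
print(round(doppler_width(5890.0, 0.023, 1000.0), 3))
```

The smallness of the result bears out the closing remark above: the observed width of lines is generally greater than that due to translatory motion.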
Spectra of Different Orders. Spectra may be classified according to their general appearance. The different classes have been called orders by Plücker and Hittorf. At the highest temperature we always obtain spectra of lines which need no further description. At a lower temperature we often get spectra of channelled spaces or fluted bands. When seen in spectroscopes of small resolving power these seem made of bands which have a sharp boundary on one side and gradually fade away on the other. With the help of more perfect instruments it is found that each band is made up of a number of lines which lie closer and closer together as the sharp edge is approached. Occasionally the bands do not present a sharp edge at all, but are made up of a number of lines of equal intensity at nearly equal distances from each other. Continuous spectra, which need not necessarily extend through the whole range of the spectrum, form a third order, and appear generally at a lower temperature than either band or line spectrum. One and the same element may at different temperatures possess spectra of different orders. A discussion has naturally arisen as to the cause of these remarkable changes of spectra, and it is generally believed that they are due to differences of molecular structure. Thus sulphur vapour when volatilized shows by absorption a continuous spectrum until its temperature is raised to 1000°, when the continuous spectrum gives way to a spectrum of bands. We know that the molecule of sulphur is decomposed as the temperature is raised, and we are thus justified in saying that the band spectrum belongs to the molecule containing two atoms, while the continuous spectrum belongs to the more complex molecule which first appears on volatilization. When a strong electric spark is passed through the vapour of sulphur a bright line spectrum is seen, and this is believed to be due to a further splitting up of the molecule into single atoms.
Long and Short Lines. If the spectrum of a metal is taken by passing the spark between two poles in air the pressure of which is made to vary, the relative intensity of some of the lines is often seen to change. Similar variations take place if the intensity of the discharge is altered, as, for instance, by interposing or taking out a Leyden jar. It is a matter of importance to be able to use a method which in the great majority of cases will give at once a sure indication how each line will behave under different circumstances. This method we now proceed to describe. It has often been remarked, even by the earliest observers, that the metallic lines when seen in a spectroscope do not always stretch across the field of view, but are sometimes confined to the neighbourhood of the metallic poles. Some observations which Lockyer made jointly with Professor Frankland led him to conclude that the distance which each metallic line stretched away from the pole could give some clue to the behaviour of that line in the sun. In 1872 Lockyer worked out his idea. An image of the spark was formed on the slit of the spectroscope, so that the spectrum of each section of the spark could be examined. Some of the metallic lines were then seen to be confined altogether to the neighbourhood of the poles, while others stretched nearly across the whole field. The relative length of all the lines was estimated. Tables and maps are added to the memoir. The longest lines (that is, those which stretch away farthest from the pole) are by no means always the strongest; and there are many instances where a faint line is seen to stretch nearly across the whole field of view, while a strong line may be confined to the neighbourhood of the pole, or is reduced sometimes to a brilliant point only. We give a few conspicuous examples of lines which are long and weak or short and strong. In lithium the blue line (4602.7) is brilliant but short.
In lead 4062.5, one of the longest lines, is faint and according to Lockyer difficult to observe. In tin 5630.0 is the longest line, but it is faint, while the stronger lines near it (5588.5 and 5562.5) are shorter. The zinc lines 4923.8, 4911.2, 4809.7, 4721.4, 4679.5 are given by Thalén as of equal intensity, but the three most refrangible ones are longer. On reduction of pressure Lockyer found that some of the shorter lines rapidly decreased in length, while the longer lines remained visible and were sometimes hardly affected. When the spark was taken from a metallic salt instead of from the metal the short lines could not be seen, but only the long lines remained. An alloy behaves in the same manner as a compound, and by gradually reducing one constituent of an alloy we may gradually reduce the number of lines, which disappear in the inverse order of their length. Subsequent work has shown that the longest lines are also generally those which are most persistent on reduction of temperature, so that in the voltaic arc the shortest lines seen in the spark are absent. In order to explain these facts it seems necessary in the first place to assume that the short lines are lines coming out at a high temperature only; but this explanation is not sufficient. Why should a mixture of different elements only show the longest lines of that constituent which is present in small quantities? In the case of chemical combinations we might assume that, the spark having to do the work of decomposition, the temperature of the metal is lowered, and that therefore the short lines are absent. But this cannot be if a chemical compound is replaced by a mechanical mixture. All these facts would be explained, however, if we assume that the spectrum of a molecule that is excited by molecules of another kind consists of those lines chiefly which a molecule of the same kind is already capable of bringing out at a lower temperature.
It would follow from this that the effects of dilution are the same as those of a reduction of temperature, which is the case.
Other Changes in Relative Intensity of Lines. Besides the changes we have noticed, there are others which have not been brought under any rule as yet. Lines appear sometimes at a low temperature which behave differently from the proper low-temperature lines. These require further investigation. They may, in some cases at least, be due to some compound of the metal with other elements present. We give some examples. If a spark is taken from lead without the condenser the line 5005 appears, and Huggins has found it to be sensibly coincident with the chief line of the nebulae. It is given as a strong line by Lecoq de Boisbaudran, who used feeble sparks, and in many cases it seems to behave as a low-temperature line; it ought to be a long line therefore, but it is in reality short. In tin, Salet noticed that when a hydrogen flame contains a compound of tin an orange line (near 6100) appears, which is apparently coincident with the orange line of lithium. This line does not figure on any of the maps of the tin spectrum. Lockyer found that zinc, volatilized in an iron tube, showed by absorption a green line. It is very likely the line 5184 seen by Lecoq de Boisbaudran in sparks taken from solution of zinc salts. In the absorption spectra of sodium and potassium lines appear in the green which were shown by Liveing and Dewar not to be coincident with any known line of these metals. It was suggested by them that they are due to hydrogen compounds. The wave-length of the sodium line is 5510 and that of the potassium line 5730. Lecoq de Boisbaudran mentions that an increase of temperature is often accompanied by a relatively greater increase in the brilliancy of the more refrangible rays. It is often said that such an increase is a direct consequence of the formula established by Kirchhoff.
If the absorbing power of a molecule remains the same while the temperature is increased, it follows that the blue rays gain more quickly in intensity than the red ones, but the less refrangible rays ought never to decrease in intensity, the quantity of luminous matter remaining the same. Now such a decrease is actually observed in many cases when there is no reason to suppose that the quantity of luminous matter has been reduced. We must conclude, therefore, that the observed differences in the spectra are not solely regulated by Kirchhoff's law; but it is a perfectly plausible hypothesis that a higher temperature is in general accompanied by a decrease in the absorbing power of the less refrangible rays. As a stronger impact often brings out higher tones, stronger molecular shocks may bring out waves of smaller length. There are several instances of a regular increase in the relative intensity of the blue rays which may be ascribed to this cause. The most remarkable instance is perhaps seen in the spectrum of phosphoretted hydrogen. If a little phosphorus is introduced into an apparatus generating hydrogen, the flame will show a series of bands chiefly in the green. The spectrum gets more brilliant if the flame is cooled. This can be done, according to Salet, by pressing the flame against a surface kept cool by means of a stream of water or by surrounding the tube, at the orifice of which the gas is lighted, by a wider tube through which cold air is blown. The process of cooling the flame, according to Lecoq, changes the relative intensity of the bands in a perfectly regular manner. The almost invisible least refrangible band becomes strong, and the second band, which was weaker than the fourth, now becomes stronger. Another example of a similar change is the spectrum shown by a Bunsen burner.
By charging the burner with an indifferent gas (N, HCl, CO2) the flame takes a greenish colour, and, though the spectrum is not altered, the least refrangible of the bands are increased in intensity. While in these instances the changes are perfectly regular, the more refrangible rays gaining in relative intensity as the temperature is increased, there are other cases, some of which have already been mentioned, in which the changes are very irregular; such are those which take place in the spectra of tin, lithium, and magnesium. In the case of zinc the less refrangible of the group of blue rays gains in relative intensity. We cannot, therefore, formulate any general law.
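The claim above, that the blue rays gain more quickly in intensity than the red ones as temperature rises while the red ought never to decrease, can be illustrated with the black-body formula later given by Planck (an anachronism here, offered only as a modern numerical check on the reasoning attributed to Kirchhoff's law):

```python
import math

def radiance(wavelength_m, temperature_k):
    """Planck spectral radiance of a black body (modern formula)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * k * temperature_k)
    return (2 * h * c ** 2 / wavelength_m ** 5) / math.expm1(x)

blue, red = 450e-9, 650e-9  # illustrative "blue" and "red" wavelengths

# The blue-to-red ratio grows with temperature ...
print(radiance(blue, 4000) / radiance(red, 4000) >
      radiance(blue, 2000) / radiance(red, 2000))
# ... yet the red rays themselves never decrease in intensity.
print(radiance(red, 4000) > radiance(red, 2000))
```

Both statements print True; an observed decrease of the less refrangible rays therefore requires, as argued above, a change of absorbing power, and is not a consequence of Kirchhoff's law alone.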
Numerical Relations between the Wave-lengths of Lines belonging to the Spectrum of a Body.
It seems a priori probable that there is a numerical relation between the different periods of the same vibrating system. In certain sounding systems, as an organ-pipe or a stretched string, the relation is a simple one, these periods being a submultiple of one which is called the fundamental period. The harmony of a compound sound depends on the fact that the different times of vibration are in the ratio of small integer numbers, and hence two vibrations are said to be in harmonic relation when their periods are in the ratio of integers. We may with advantage extend the expression "harmonic relation" to the case of light, although the so-called harmony of colours has nothing to do with such connexions. We shall therefore define an "harmonic relation" between different lines of a spectrum to be a relation such that the wave-lengths or wave-numbers are in the ratio of integers, the integers being sufficiently small to suggest a real connexion. Some writers use the word in a wider sense and call a group of lines harmonics when they show a certain regularity in their disposition, giving evidence of some law, that law not being in general the harmonic law. We shall here use the expression in its stricter sense only. We begin by discussing the question whether there are any well-ascertained cases of harmonic relationship between the different vibrations of the same molecule. The most important set of lines exhibiting such a relationship are three of the hydrogen lines which, when properly corrected for atmospheric refraction, are, as pointed out by Johnstone Stoney, very accurately in the ratio of 20 : 27 : 32 (Phil. Mag., xli. p. 291, 1871). Other elements also show such ratios; but when a spectrum has many lines pure accident will cause several to exhibit whatever numerical relations we may wish to impose on them.
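Stoney's ratio can be verified arithmetically: if the wave-lengths of three hydrogen lines stand as 1/20 : 1/27 : 1/32, then each wave-length multiplied by its integer should give the same product. A short check (the modern wave-lengths of Hα, Hβ, Hδ used here are my own figures for illustration, not Stoney's corrected values):

```python
# H-alpha, H-beta, H-delta in Angstrom, paired with Stoney's integers
lines = [(6562.8, 20), (4861.3, 27), (4101.7, 32)]

products = [wavelength * factor for wavelength, factor in lines]
# If the harmonic relation holds, the three products nearly coincide.
spread = (max(products) - min(products)) / min(products)
print(spread < 0.001)
```

The three products agree to roughly one part in a hundred thousand, which is the kind of accuracy that made this the most quoted instance of harmonic relationship.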
If we calculate the number of harmonic ratios which, with an assumed limit of accuracy, we may expect in a spectrum like that of iron, we find that there are in reality fewer than we should have if they were distributed quite at random (Proc. Roy. Soc., xxxi. p. 337, 1881). With fractions having a denominator smaller than seventy the excess of the calculated over the observed values is very marked, while there are rather more coincidences than we should expect on the theory of probability if we take fractions having a denominator between seventy and a hundred. The cause of this, probably, is to be sought in the fact that the lines of an element are liable to form groups and are not spread over the whole spectrum, as they would be if they were distributed at random. This increases the probability of coincidence with fractions between high numbers, and diminishes the probability of coincidence with fractions between lower numbers. There is one point which deserves renewed investigation. When the limits of agreement between which a coincidence is assumed to exist are taken narrower, there is an increased number of observed as compared with calculated coincidences in the iron spectrum; and this would seem to point to the existence of some true harmonic ratios. With the solar maps and gratings put at our disposal by Professor Rowland, we may hope to obtain more accurate measurements, and therefore more definite information. Even if the wave-lengths of two lines are found to be occasionally in the ratio of small integer numbers, it does not follow that the vibrations of molecules are regulated by the same laws as those of an organ-pipe or of a stretched string. J. J. Balmer has indeed lately suggested a law which differs in an important manner from the laws of vibration of the organ-pipe and which still leaves the ratios of the periods of vibration integer numbers. According to him, the hydrogen spectrum can be represented by the equation
λ = λ0 m²/(m² - 4),
where λ0 is some wave-length and m an integer number greater than 2. The following table (I.) shows the agreement between the calculated and observed hydrogen lines. And the agreement is a very remarkable one, for the whole of the hydrogen spectrum is represented by giving to m successive integer values up to sixteen.
== TABLE ==
The differences between the observed and the calculated numbers show a regular increase towards the ultra-violet. It might be thought that a better agreement could be obtained by taking a number slightly different from four in the denominator ; but this is not the case. On the contrary, the agreement in the visible part is at once destroyed if we make the ultra-violet lines fit better. The agreement is not improved but rendered slightly worse if we take account of atmospheric refraction.
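Balmer's formula is easily recomputed. Taking λ0 = 3645.6 (Balmer's constant in Ångström units, quoted here from the usual modern statement of his law rather than from the table above):

```python
def balmer(m, lam0=3645.6):
    """Balmer's expression: wavelength = lam0 * m^2 / (m^2 - 4)."""
    return lam0 * m * m / (m * m - 4)

# m = 3, 4, 5, 6 reproduce the four visible hydrogen lines
# (about 6562, 4861, 4340, and 4101), and larger m runs on
# into the ultra-violet.
for m in range(3, 17):
    print(m, round(balmer(m), 1))
```

Giving m the successive values up to sixteen, as in the text, covers the whole hydrogen spectrum then known.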
As a first approximation Balmer's expression gives a very good account of the hydrogen spectrum. If the law was general we should find that in the iron spectrum, for instance, which is the only spectrum carefully examined, those fractions would occur more frequently than others which can be put into the form m²/(m² - n²), that is to say, 9/5 and 4/3 for fractions made up of numbers smaller than 10. A reference to the table in Proc. Roy. Soc., vol. xxxi. p. 337, shows that those fractions do not occur more frequently than others. But, if we change the sign of n² in the denominator, we find 4/5 and 1/2 as the only fractions falling within the range of spectrum examined, and these two fractions are indeed those which occur more frequently than any others made up of numbers smaller than 10.
It might be worth trying to see whether the wave-lengths of lines making up a fluted band can be put into the form λ0 m²/(m² ± n²); according to the sign chosen in the denominator, the band would shade off towards the blue or red. The form of expression seems at first sight well adapted, for it shows how by giving m gradually increasing numbers the lines come closer and closer together towards what appears in the spectrum as the sharp edge of the band. If we take periods of vibration instead of wave-lengths Balmer's expression would reduce to
T = T0 m²/(m² - n²),
where T0 is a fixed period of vibration, n a constant integer, and m an integer to which successive values are given from n upwards.
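The suggestion about fluted bands can be tried numerically: writing the wave-lengths as λ0 m²/(m² - n²) and letting m increase, the computed lines crowd together and approach the fixed limit λ0, which would play the part of the band's sharp edge. A sketch (the values of λ0 and n are arbitrary and purely illustrative):

```python
def series_line(m, lam0=3645.6, n=2):
    """Lines of the form lam0 * m^2 / (m^2 - n^2); they accumulate
    toward the fixed edge lam0 as m grows."""
    return lam0 * m * m / (m * m - n * n)

gaps = [series_line(m) - series_line(m + 1) for m in range(3, 20)]
# Successive spacings shrink steadily toward zero ...
print(all(a > b > 0 for a, b in zip(gaps, gaps[1:])))
# ... while the lines themselves close in on the edge lam0.
print(abs(series_line(200) - 3645.6) < 1.0)
```

With the other sign in the denominator the lines lie below λ0 and approach it from the more refrangible side, so the band would shade off towards the blue instead of the red.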
It is often observed, and has already been mentioned, that the spectrum of some elements contains in close proximity two or three lines forming a characteristic group. Such doublets or triplets are often repeated, and if the harmonic law was a general one we should expect the wave-lengths of these groups to be ruled by it; but such is not the case. The sodium lines which lie in the visible part of the spectrum are all double, the components being the closer together the more refrangible the group. But neither are the lines themselves in any simple ratios of integers, nor do the distances between the lines show much regularity. The ultra-violet lines of sodium as photographed by Liveing and Dewar are single, with the exception of the least refrangible of them (3301). But this line is a very close double, and it may be that the others will ultimately be resolved. Some elements, such as magnesium, calcium, zinc, cadmium, show remarkable series of triplets; and the relative distances of the three lines seem well maintained in each of them. Even the distances when mapped on the wave-number scale are so nearly the same for each element that it would be a matter of great importance to settle definitively whether the slight variations which are found to exist are real or due to errors of measurement. In the following table (II.) we give the position of the least refrangible line of each triplet together with the distances between the first and second (column B) and between the second and third line of each triplet (column C). The figures in column A represent the number of waves in one millimetre. For the zinc and calcium triplets the measurements of Liveing and Dewar are given; the magnesium triplets are put down as measured by Cornu as well as by Hartley and Adeney. The differences in these measurements will give an idea of the degree of uncertainty. The triplets of cadmium are farther apart and are mixed up with a greater number of single lines.
== TABLE ==
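The near-constancy of these separations on the wave-number scale (column A counts waves per millimetre) explains why the components of such groups lie the closer together the more refrangible the group: a fixed wave-number split Δk answers to a wave-length split of about λ²Δk/10⁷ (λ in Ångström units), which shrinks rapidly towards the violet. A sketch with an assumed constant split (the numerical value is only roughly that of the sodium doublet, and is my illustration):

```python
def wavelength_split(lam_angstrom, dk_per_mm):
    """Wave-length separation answering to a fixed wave-number split.
    Wave number k = 1e7 / lambda (waves per mm, lambda in Angstrom),
    hence d(lambda) ~ lambda**2 * dk / 1e7."""
    return lam_angstrom ** 2 * dk_per_mm / 1e7

dk = 1.7  # roughly the sodium doublet separation, in waves per mm
for lam in (6000.0, 5000.0, 4000.0):
    print(lam, round(wavelength_split(lam, dk), 2))
```

The same fixed Δk that separates a doublet by about six Ångström units in the orange separates it by less than three in the violet.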
Relation between Spectrum of a Body and Spectra of its Compounds.
The spectrum of a body is due to periodic motion within the molecules. If we are justified in believing that the molecule of mercury vapour contains a single atom, it follows that atoms are capable of vibration under the action of internal forces, for mercury vapour has a definite spectrum. We may consider, then, the spectrum to be determined in the first place by forces within the atom, but to be affected by the forces which hold together the different atoms within the molecule. The closer the bond of union the greater the dependence of the vibrations on the forces acting between the different atoms. Experimental evidence seems to favour these views, for we observe that whenever elements are loosely bound together we can recognize the influence of each constituent, while in the compounds which are sufficiently stable to resist the temperature of incandescence the spectrum of the compound is perfectly distinct from the spectra of the elements. The oxides and haloid salts of the alkaline earths, for instance, have spectra in which we cannot trace the vibrations of the component atoms; but the spectra of the different salts of the same metal show a great resemblance, the bands being similar and similarly placed. The spectrum seems displaced towards the red as the atomic weight of the haloid increases. No satisfactory numerical relationship has, however, been traced between the bands. The number of compounds which will endure incandescence without decomposition is very small, and this renders an exhaustive investigation of the relationship between their spectra very difficult.
The compounds whose absorption spectra have been investigated have often been of a more unstable nature, and, moreover, dissociation seems going on in liquid solutions to a large extent; the influence of the component radicals in the molecule is more marked in consequence. Dr Gladstone, at an early period in the history of spectrum analysis, examined the absorption spectra of the solution of salts, each constituent of which was coloured. He concluded that generally, but not invariably, the following law held good: "When an acid and a base combine each of which has a different influence on the rays of light a solution of the resulting salt will transmit only those rays which are not absorbed by either, or, in other words, which are transmitted by both." He mentions as an important exception the case of ferric ferro-cyanide, which, when dissolved in oxalic acid, transmits blue rays in great abundance, though the same rays are absorbed both by ferro-cyanides and by ferric salts. Soret has confirmed, for the ultra-violet rays, Dr Gladstone's conclusions with regard to the identity of the absorption spectra of different chromates. The chromates of sodium, potassium, and ammonia, as well as the bichromates of potassium and ammonia, were found to give the same absorption spectrum. Nor is the effect of these chromates confined to the blocking out simply of one end of the spectrum, as in the visible part, but two distinct absorption bands are seen, which seem unchanged in position if one of the above-mentioned chromates is replaced by another. Chromic acid itself showed the bands, but less distinctly, and Soret does not consider the purity of the acid sufficiently proved to allow him to draw any certain conclusion from this observation. Erhard's work on the absorption spectra of the salts in which chromium plays the part of base has already been mentioned.
Nitric acid and the nitrates of transparent bases, such as potassium, sodium, and ammonia, show spectra, according to Soret, which are not only qualitatively but also quantitatively identical; that is to say, a given quantity of nitric acid in solution gives a characteristic absorption band of exactly the same width and darkness, whether by itself alone or combined with a transparent base. It also shows a continuous absorption at the most refrangible side, beginning with each of the salts mentioned at exactly the same point. The ethereal nitrates, however, give different results. In 1872 Hartley and Huntington examined by photographic methods the absorption spectra of a great number of organic compounds. The normal alcohols were found to be transparent to the ultra-violet rays, the normal fatty acids less so. In both cases an increased number of carbon atoms increases the absorption at the most refrangible end. The fact that benzene and its derivatives are remarkable for their powerful absorption of the most refrangible rays, and for some characteristic absorption bands appearing on dilution, led Hartley to a more extended examination of some of the more complicated organic substances. He determined that definite absorption bands are only produced by substances in which three pairs of carbon atoms are doubly linked together, as in the benzene ring. More recently he has subjected the ultra-violet absorption of the alkaloids to a careful investigation, and has arrived at the conclusion that the spectra are sufficiently characteristic to "offer a ready and valuable means of ascertaining the purity of the alkaloids and particularly of establishing their identity." "In comparing the spectra of substances of similar constitution it is observed that in such as are derived from bases by the substitution of an alkyl radical for hydrogen, or of an acid radical for hydroxyl, the curve is not altered in character, but may vary in length when equal weights are examined.
This is explained by the absorption bands being caused by the compactness of structure of the nucleus of the molecule, and that equal weights are not molecular weights, so that by substituting for the hydrogen of the nucleus radicals which exert no selective absorption the result is a reduction in the absorptive power of a given weight of the substance. ... Bases which contain oxydized radicals, as hydroxyl, methoxyl, and carboxyl, increase in absorptive power in proportion to the amount of oxygen they contain."
It would seem, however, by comparing the above results with those obtained by Captain Abney and Colonel Festing that the absorption of a great number of organic substances is more characteristic in the infra-red than in the ultra-violet. Some of the conclusions arrived at by these experimentalists are of great importance, as the following quotations will show: "Regarding the general absorption we have nothing very noteworthy to remark, beyond the fact that, as a rule, in the hydrocarbons of the same series those of heavier molecular constitution seem to have less than those of lighter." This effect agrees with the observations made by Hartley and Huntington in the ultra-violet, in so far as a general shifting of the absorption towards the red seems to take place as the number of carbon atoms is increased. Such a shifting would increase the general absorption in the ultra-violet as observed by Hartley and Huntington, and decrease it in the infra-red as observed by Abney and Festing. Turning their attention next to the sharply defined lines, the last named, by a series of systematic experiments, concluded that these must be due to the hydrogen atoms in the molecule. "A crucial test was to observe spectra containing hydrogen and chlorine, hydrogen and oxygen, and hydrogen and nitrogen. We therefore tried hydrochloric acid and obtained a spectrum containing some few lines. Water gave lines, together with bands, two lines being coincident with those in the spectrum of hydrochloric acid. In ammonia, nitric acid, and sulphuric acid we also obtained sharply marked lines, coincidences in the different spectra being observed, and nearly every line mapped found its analogue in the chloroform spectrum, and usually in that of ethyl iodide. Benzene, again, gave a spectrum consisting principally of lines, and these were coincident with some lines also to be found in chloroform.
It seems, then, that the hydrogen, which is common to all these different compounds, must be the cause of the linear spectrum. In what manner the hydrogen annihilates the waves of radiation at these particular points is a question which is, at present at all events, an open one, but, that the linear absorptions, common to the hydrocarbons and to those bodies in which hydrogen is in combination with other elements, such as oxygen and nitrogen, are due to hydrogen, there can be no manner of doubt. The next point that required solution was the effect of the presence of oxygen on the body under examination. ... It appears that in every case where oxygen is present, otherwise than as a part of the radical, it is attached to some hydrogen atom in such a way that it obliterates the radiation between two of the lines which are due to that hydrogen. ... If more than one hydroxyl group be present, we doubt if any direct effect is produced beyond that produced by one hydroxyl group, except a possible greater general absorption; a good example of this will be found in cinnamic alcohol and phenyl-propyl alcohol, which give the same spectra as far as the special absorptions are concerned. ... Hitherto we have only taken into account oxygen which is not contained in the radical; when it is so contained it appears to act differently, always supposing hydrogen to be present as well. We need only refer to the spectrum of aldehyde, which is inclined to be linear rather than banded, or rather the bands are bounded by absolute lines, and are more defined than when oxygen is more loosely bonded."
"An inspection of our maps will show that the radical of a body is represented by certain well-marked bands, some differing in position according as it is bonded with hydrogen, or a halogen, or with carbon, oxygen, or nitrogen. There seem to be characteristic bands, however, of any one series of radicals between 1000 and about 1100, which would indicate what may be called the central hydrocarbon group, to which other radicals may be bonded. The clue to the composition of a body, however, would seem to lie between λ 700 and λ 1000. Certain radicals have a distinctive absorption about λ 700 together with others about λ 900, and if the first be visible it almost follows that the distinctive mark of the radical with which it is connected will be found. Thus in the ethyl series we find an absorption at 740, and a characteristic band, one edge of which is at 892 and the other at 920. If we find a body containing the 740 absorption and a band with the most refrangible edge commencing at 892, or with the least refrangible edge terminating at 920, we may be pretty sure that we have an ethyl radical present. So with any of the aromatic group; the crucial line is at 867. If that line be connected with a band we may feel certain that some derivative of benzine is present. The benzyl group show this remarkably well, since we see that phenyl is present, as is also methyl. It will be advantageous if the spectra of ammonia, benzine, aniline, and dimethyl aniline be compared, when the remarkable coincidences will at once become apparent, as also the different weighting of the molecule. The spectrum of nitro-benzine is also worth comparing with benzine and nitric acid. ... In our own minds there lingers no doubt as to the easy detection of any radical which we have examined, ...
and it seems highly probable by this delicate mode of analysis that the hypothetical position of any hydrogen which is replaced may be identified, a point which is of prime importance in organic chemistry. The detection of the presence of chlorine or bromine or iodine in a compound is at present undecided, and it may well be that we may have to look for its effects in a different part of the spectrum. The only trace we can find at present is in ethyl bromide, in which the radical band about 900 is curtailed in one wing. The difference between amyl iodide and amyl bromide is not sufficiently marked to be of any value."
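The detection heuristic quoted above amounts to a membership test on measured absorption positions: an ethyl radical is suggested by an absorption near 740 together with a band edge at 892 or 920, and an aromatic derivative by the line at 867. The sketch below is purely illustrative; the function names and the matching tolerance are assumptions, not part of the source.

```python
# Illustrative sketch of the band-matching rule of thumb described in the
# quoted text. Positions are on the wavelength scale used there; the
# tolerance of 3 units is an assumption, not from the source.

def has_feature(spectrum, position, tol=3.0):
    """True if any measured absorption lies within tol of position."""
    return any(abs(x - position) <= tol for x in spectrum)

def suggests_ethyl(spectrum):
    """Ethyl: absorption near 740 plus a band edge at 892 or 920."""
    return has_feature(spectrum, 740) and (
        has_feature(spectrum, 892) or has_feature(spectrum, 920))

def suggests_aromatic(spectrum):
    """Aromatic group: the crucial line is at 867."""
    return has_feature(spectrum, 867)

measured = [740, 892, 1050]  # hypothetical measured absorption positions
print(suggests_ethyl(measured))     # True
print(suggests_aromatic(measured))  # False
```

This is only a restatement of the qualitative rule; in practice the text notes that band shapes and edges, not just positions, carried the diagnostic weight.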
The absorption spectra of the didymium and cobalt salts afford many striking examples of the complicated effects of solution and combination in the spectra. It is impossible to explain these without the help of illustrations, and we must refer the reader, therefore, to the original papers. Some very interesting changes have been noticed in the position of absorption bands when certain colouring matters are dissolved in different liquids. Characteristic absorption bands appear for each colouring matter in slightly different positions according to the solvent. Hagenbach, Kraus, Kundt, and Claes have studied the question. In a preliminary examination Professor Kundt had come to the conclusion that solvents displaced absorption bands towards the red in the order of their dispersive powers; but the examination of a greater number of cases has led him to recognize that no generally valid rule can be laid down. At the same time highly dispersive media, like bisulphide of carbon, always displace a band most towards the red end, while with liquids of small dispersion, like water, alcohol, and ether, the band always appears more refrangible than with other solvents; and as a general rule the order of displacement is approximately that of dispersive power.
Relations of the Spectra of Different Elements.
Various efforts have been made to connect together the spectra of different elements. In these attempts it is generally assumed that certain lines in one spectrum correspond to certain lines in another spectrum, and the question is raised whether the atom with the higher atomic weight has its corresponding lines more or less refrangible.
No definite judgment can as yet be given as to the success of these efforts. Lecoq de Boisbaudran has led the way in these speculations, and some of the similarities in different spectra pointed out by him are certainly of value. But whether his conclusion, that "the spectra of the alkalis and alkaline earths when classed according to their refrangibilities are placed as their chemical properties in the order of their atomic weight," will stand the test of further research remains to be seen. Ciamician has also published a number of suggestive speculations on the question, and Hartley has extended the comparison to the ultra-violet rays.
When metallic spectra are examined it is often found that some line appears to belong to more than one metal. This is often due to a common impurity of the metals; but such impurities do not account for all coincidences. The question has been raised whether these coincidences do not point to a common constituent in the different elements which show the same line. If this view is correct, we should have to assume that the electric spark decomposes the metals, and that the spectrum we observe is not the spectrum of the metal but that of its constituents. Further investigation has shown, however, that in nearly all cases the assumed coincidences were apparent only. With higher resolving powers it was found that the lines did not occupy exactly the same place. With the large numbers of lines shown by the spectra of most of the metals some very close coincidences must be expected by the doctrine of chances. The few coincidences which our most powerful spectroscopes have not been able to resolve are in all probability accidental only. (A. S*.)
The above article was written by: Arthur Schuster, Ph.D., F.R.S., Professor of Applied Mathematics, Owens College, Manchester.
French Partitive Articles
Vous is the formal "you" form. Using it shows respect and social distance. It should always be used when addressing strangers, except in certain informal settings.
Indefinite Articles: The indefinite article, un/une, is used exactly like the English indefinite article a/an. It is used when referring to a single, unspecified instance of a countable noun.
The French Partitive Articles - DU / DE LA / DE L' / DES (FrenchTastic1, uploaded Sep 28, 2010): learn the best ways to use the French partitive articles.
French Lesson - Learn the French definite, indefinite, and partitive articles, and when to use them. Learn how to say a, an, one, the, some, and any in French, along with the French grammar rules for articles.
In French, articles and determiners are required on almost every common noun, much more so than in English. They are inflected to agree in gender (masculine or feminine) and number (singular or plural).
A, AN or ONE, SOME, ANY -- To translate this notion, the French use a combination of two articles: the indefinite article (un, une, des; negative pas de) and the partitive article (du, de la, de l').
Meaning and usage of the French partitive article: The partitive article indicates an unknown quantity of something, usually food or drink. It is often omitted in English. Avez-vous bu du thé ? Did you drink any tea?
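The du / de la / de l' / des choice described above follows a mostly mechanical rule based on gender, number, and whether the noun begins with a vowel sound. A minimal sketch of that rule follows; the function name and arguments are illustrative assumptions, and real French has exceptions (h aspiré, negation reducing to de, expressions of quantity) that this ignores.

```python
# Sketch of the partitive-article selection rule: des for plural,
# de l' before a vowel sound, du for masculine, de la for feminine.

def partitive(noun, gender, plural=False, vowel_sound=False):
    """Return the noun with its partitive article (simplified rule)."""
    if plural:
        return f"des {noun}"
    if vowel_sound:
        return f"de l'{noun}"
    return f"du {noun}" if gender == "m" else f"de la {noun}"

print(partitive("thé", "m"))                    # du thé
print(partitive("eau", "f", vowel_sound=True))  # de l'eau
print(partitive("confiture", "f"))              # de la confiture
```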
Test yourself on French definite, indefinite, and partitive articles.
A weather ship, or Ocean Station Vessel, was a ship stationed in the ocean as a platform for surface and upper air meteorological observations for use in weather forecasting. They were primarily located in the north Atlantic and north Pacific oceans, reporting via radio. In addition to their weather reporting function, these vessels aided in search and rescue operations, supported transatlantic flights, acted as research platforms for oceanographers, monitored marine pollution, and aided weather forecasting both by weather forecasters and within computerized atmospheric models. Research vessels remain heavily used in oceanography, including physical oceanography and the integration of meteorological and climatological data in Earth system science.
The idea of a stationary weather ship was proposed as early as 1921 by Météo-France to help support shipping and the coming of transatlantic aviation. They were used during World War II but had no means of defense, which led to the loss of several ships and many lives. On the whole, the establishment of weather ships proved to be so useful during World War II for Europe and North America that the International Civil Aviation Organization (ICAO) established a global network of weather ships in 1948, with 13 to be supplied by Canada, the United States, and Europe. This number was eventually negotiated down to nine. The agreement of the use of weather ships by the international community ended in 1985.
Weather ship observations proved to be helpful in wind and wave studies, as commercial shipping tended to avoid weather systems for safety reasons, whereas the weather ships did not. They were also helpful in monitoring storms at sea, such as tropical cyclones. Beginning in the 1970s, their role was largely superseded by weather buoys because of the ships' significant cost. The removal of a weather ship became a negative factor in forecasts leading up to the Great Storm of 1987. The last weather ship was Polarfront, known as weather station M ("Mike"), which was removed from operation on January 1, 2010. Weather observations from ships continue from a fleet of voluntary merchant vessels in routine commercial operation.
The primary purpose of an ocean weather vessel was to take surface and upper air weather measurements, and report them via radio at the synoptic hours of 0000, 0600, 1200, and 1800 Coordinated Universal Time (UTC). Weather ships also relayed observations from merchant vessels, which were reported by radio back to their country of origin using a code based on the 16-kilometer (9.9 mi) square of ocean within which the ship was located. The vessels were involved in search and rescue operations involving aircraft and other ships. The vessels themselves had search radar and could activate a homing beacon to guide lost aircraft towards the ships' known locations. Each ship's homing beacon used a distinctly different frequency. In addition, the ships provided a platform where scientific and oceanographic research could be conducted. The role of aircraft support gradually changed after 1975, as jet aircraft began using polar routes. By 1982, the ocean weather vessel role had changed too, and the ships were used to support short range weather forecasting, in numerical weather prediction computer programs which forecast weather conditions several days ahead, for climatological studies, marine forecasting, and oceanography, as well as monitoring pollution out at sea. At the same time, the transmission of the weather data using Morse code was replaced by a system using telex-over-radio.
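The four fixed synoptic hours mentioned above make the reporting schedule straightforward to compute from any observation time. A minimal sketch, assuming only that reports fall on the 00/06/12/18 UTC grid (the function name is illustrative):

```python
# Sketch: find the next synoptic reporting hour (0000, 0600, 1200, or
# 1800 UTC) at or after a given timezone-aware observation time.

from datetime import datetime, timedelta, timezone

def next_synoptic(t):
    """Return the next 00/06/12/18 UTC synoptic hour at or after t."""
    t = t.astimezone(timezone.utc)
    base = t.replace(minute=0, second=0, microsecond=0)
    hours_ahead = (-t.hour) % 6
    # Exactly on a synoptic hour counts; otherwise roll to the next one.
    if hours_ahead == 0 and (t.minute or t.second or t.microsecond):
        hours_ahead = 6
    return base + timedelta(hours=hours_ahead)

obs = datetime(2010, 1, 1, 14, 30, tzinfo=timezone.utc)
print(next_synoptic(obs))  # 2010-01-01 18:00:00+00:00
```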
Ocean station C ("Charlie") was located near 52°45′N, 35°30′W.
The director of France's meteorological service, Météo-France, proposed the idea of a stationary weather ship in 1921 in order to aid shipping and the coming of transatlantic flights. Another early proposal for weather ships occurred in connection with aviation in August 1927, when the aircraft designer Grover Loening stated that "weather stations along the ocean coupled with the development of the seaplane to have an equally long range, would result in regular ocean flights within ten years." During 1936 and 1937, the British Meteorological Office (Met Office) installed a meteorologist aboard a North Atlantic cargo steamer to take special surface weather observations and release pilot balloons to measure the winds aloft at the synoptic hours of 0000, 0600, 1200, and 1800 UTC. In 1938 and 1939, France established a merchant ship as the first stationary weather ship, which took surface observations and launched radiosondes to measure weather conditions aloft.
Starting in 1939, United States Coast Guard vessels were being used as weather ships to protect transatlantic air commerce, as a response to the crash of Pan American World Airways Hawaii Clipper during a transpacific flight in 1938. The Atlantic Weather Observation Service was authorized by President Franklin Delano Roosevelt on January 25, 1940. The Germans began to use weather ships in the summer of 1940. However, three of their four ships had been sunk by November 23, which led to the use of fishing vessels for the German weather ship fleet. Their weather ships were out to sea for three to five weeks at a time and German weather observations were enciphered using Enigma machines. By February 1941, five 327-foot (100 m) United States Coast Guard cutters were used in weather patrol, usually deployed for three weeks at a time, then sent back to port for ten days. As World War II continued, the cutters were needed for the war effort and by August 1942, six cargo vessels had replaced them. The ships were defenseless, which led to the loss of the USCGC Muskeget (WAG-48) with 121 aboard on September 9, 1942. In 1943, the United States Weather Bureau recognized their observations as "indispensable" during the war effort.
The flying of fighter planes between North America, Greenland, and Iceland led to the deployment of two more weather ships in 1943 and 1944. Great Britain established one of their own 80 kilometres (50 mi) off their west coast. By May 1945, frigates were used across the Pacific for similar operations. Weather Bureau personnel stationed on weather ships were asked voluntarily to accept the assignment. In addition to surface weather observations, the weather ships would launch radiosondes and release pilot balloons, or PIBALs, to determine weather conditions aloft. However, after the war ended, the ships were withdrawn from service, which led to a loss of upper air weather observations over the oceans. Due to its value, operations resumed after World War II as a result of an international agreement made in September 1946, which stated that no fewer than 13 ocean weather stations would be maintained by the Coast Guard, with five others maintained by Great Britain and two by Brazil.
History of the fleet
The establishment of weather ships proved to be so useful during World War II that the International Civil Aviation Organization (ICAO) had established a global network of 13 weather ships by 1948, with seven operated by the United States, one operated jointly by the United States and Canada, two supplied by the United Kingdom, one maintained by France, one a joint venture by the Netherlands and Belgium, and one shared by the United Kingdom, Norway, and Sweden. The United Kingdom used Royal Navy corvettes to operate their two stations, and staffed crews of 53 Met Office personnel. The ships were out at sea for 27 days, and in port for 15 days. Their first ship was deployed on July 31, 1947.
During 1949, the Weather Bureau planned to increase the number of United States Coast Guard weather ships in the Atlantic from five at the beginning of the year to eight by its end. Weather Bureau employees aboard the vessels worked 40 to 63 hours per week. Weather ship G ("George") was dropped from the network on July 1, 1949, and Navy weather ship "Bird Dog" ceased operations on August 1, 1949. In the Atlantic, weather vessel F ("Fox") was discontinued on September 3, 1949, and there was a change in location for ships D ("Dog") and E ("Easy") at the same time. Navy weather ship J ("Jig") in the north-central Pacific Ocean was placed out of service on October 1, 1949. The original international agreement for a 13 ship minimum was later amended downward. In 1949, the minimum number of weather ships operated by the United States was decreased to ten, and in 1954 the figure was lowered again to nine, both changes being made for economic reasons. Weather vessel O ("Oboe") entered the Pacific portion of the network on December 19, 1949. Also in the Pacific, weather ship A ("Able") was renamed ship P ("Peter") and moved 200 miles (320 km) to the east-northeast in December 1949, while weather vessel F ("Fox") was renamed N ("Nan").
Weather ship B ("Baker"), which had been jointly operated by Canada and the United States, became solely a United States venture on July 1, 1950. The Netherlands and the United States began to jointly operate weather ship A ("Able") in the Atlantic on July 22, 1950. The Korean War led to the discontinuing of weather vessel O ("Oboe") on July 31, 1950 in the Pacific, and ship S ("Sugar") was established on September 10, 1950. Weather ship P's ("Peter") operations were taken over by Canada on December 1, 1950, which allowed the Coast Guard to begin operating station U ("Uncle") 2,000 kilometres (1,200 mi) west of northern Baja California on December 12, 1950. As a result of these changes, ship N ("Nan") was moved 400 kilometres (250 mi) to the southeast on December 10, 1950.
Responsibility for weather ship V ("Victor") transferred from the United States Navy to the United States Coast Guard and Weather Bureau on September 30, 1951. On March 20, 1952, Vessels N ("November") and U ("Uncle") were moved 32 to 48 kilometres (20 to 30 mi) to the south to lie under airplane paths between the western United States coast and Honolulu, Hawaii. Weather vessel Q ("Quebec") began operation in the north-central Pacific on April 6, 1952, while in the western Atlantic, the British corvettes used as weather ships were replaced by newer Castle-class frigates between 1958 and 1961.
In 1963, the entire fleet won the Flight Safety Foundation award for their distinguished service to aviation. In 1965, there were a total of 21 vessels in the weather ship network. Nine were from the United States, four from the United Kingdom, three from France, two from the Netherlands, two from Norway, and one from Canada. In addition to the routine hourly weather observations and upper air flights four times a day, two Russian ships in the northern and central Pacific Ocean sent meteorological rockets up to a height of 80 kilometres (50 mi). For a time, there was a Dutch weather ship stationed in the Indian Ocean. The network left the Southern Hemisphere mainly uncovered. South Africa maintained a weather ship near latitude 40° South, longitude 10° East between September 1969 and March 1974.
When compared to the cost of unmanned weather buoys, weather ships became expensive, and weather buoys began to replace United States weather ships in the 1970s. Across the northern Atlantic, the number of weather ships dwindled over the years. The original nine ships in the region had fallen to eight after ocean vessel C ("Charlie") was discontinued by the United States in December 1973. In 1974, the Coast Guard announced plans to terminate all United States stations, and the last United States weather ship was replaced by a newly developed weather buoy in 1977.
A new international agreement for ocean weather vessels was reached through the World Meteorological Organization in 1975, which eliminated Ships I (India) and J (Juliett), and left ships M ("Mike"), R ("Romeo"), C ("Charlie"), and L ("Lima") across the northern Atlantic, with the four remaining ships in operation through 1983. Two of the British frigates were refurbished, as there was no funding available for new weather ships. Their other two ships were retired, as one of the British run stations was eliminated in the international agreement. In July 1975, the Soviet Union began to maintain weather ship C ("Charlie"), which it would operate through the remainder of the 1970s and 1980s. The last two British frigates were retired from ocean weather service by January 11, 1982, but the international agreement for weather ships was continued through 1985.
Because of high operating costs and budget issues, weather ship R ("Romeo") was recalled from the Bay of Biscay before the deployment of a weather buoy for the region. This recall was blamed for the minimal warning given in advance of the Great Storm of 1987, when wind speeds of up to 149 km/h (93 mph) caused extensive damage to areas of southern England and northern France. The last weather ship was Polarfront, known as weather station M ("Mike") at 66°N, 02°E, run by the Norwegian Meteorological Institute. Polarfront was withdrawn from operation on January 1, 2010. Despite the loss of designated weather ships, weather observations from ships continue from a fleet of voluntary merchant vessels in routine commercial operation, whose number has decreased since 1985.
Use in research
Beginning in 1951, British ocean weather vessels began oceanographic research, such as monitoring plankton, casting of drift bottles, and sampling seawater. In July 1952, as part of a research project on birds by Cambridge University, twenty shearwaters were taken more than 161 kilometres (100 mi) offshore in British weather ships, before being released to see how quickly they would return to their nests, which were more than 720 kilometres (450 mi) away on Skokholm Island. 18 of the twenty returned, the first in just 36 hours. During 1954, British weather ocean vessels began to measure sea surface temperature gradients and monitored ocean waves. In 1960, weather ships proved to be helpful in ship design through a series of recordings made on paper tape which evaluated wave height, pitch, and roll. They were also useful in wind and wave studies, as they did not avoid weather systems like merchant ships tended to and were considered a valuable resource.
In 1962, British weather vessels measured sea temperature and salinity values from the surface down to 3,000 metres (9,800 ft) as part of their duties. Upper air soundings launched from weather ship E ("Echo") were of great utility in determining the cyclone phase of Hurricane Dorothy in 1966. During 1971, British weather ships sampled the upper 500 metres (1,600 ft) of the ocean to investigate plankton distribution by depth. In 1972, the Joint Air-Sea Interaction Experiment (JASIN) utilized special observations from weather ships for their research. More recently, in support of climate research, 20 years of data from the ocean vessel P ("Papa") was compared to nearby voluntary weather observations from mobile ships within the International Comprehensive Ocean-Atmosphere Data Set to check for biases in mobile ship observations over that time frame.
|From left, STS-114 astronauts Stephen Robinson, James Kelly, Andrew Thomas, Wendy Lawrence, Charles Camarda, Eileen Collins, and Soichi Noguchi.|
NASA’s Space Operations Mission Directorate provides many critical enabling capabilities that make possible much of NASA’s science, research, and exploration achievements. It does this through three themes: the Space Shuttle Program, the International Space Station (ISS), and Space and Flight Support.

The Space Shuttle Program builds on the Shuttle’s primacy as the world’s most versatile launch system. The Space Shuttle, first launched in 1981, returned to flight in 2005, with Discovery carrying the STS-114 crew to the ISS.
|On July 26, 2005, Space Shuttle Discovery launched into a clear blue sky on the historic Return to Flight mission, STS-114.|
The ISS establishes a permanent human presence in Earth orbit. It also provides a long-duration, habitable laboratory for science and research activities investigating the limits of human performance and expanding human experience in living and working in space.

Space and Flight Support consists of Launch Services, Space Communications, and Rocket Propulsion Testing. These “enabling” services are critical for conducting space exploration, as well as aeronautical, materials science, biological, and physical research.
Humans in space are the primary focus of this directorate. Space is still the new frontier, and astronauts are the pioneers of that frontier. The directorate explains, explores, and chronicles the space projects humans are involved in now and will be involved in the years to come.
Space Shuttle: Return to Flight
Space Shuttle Discovery launched from Kennedy Space Center on July 26, 2005, ending a 2.5-year wait for the historic Return to Flight mission. STS-114 included breathtaking in-orbit maneuvers, tests of new equipment and procedures, a first-of-its-kind spacewalking repair task, and telephone calls from two world leaders.
Discovery touched down on August 9 at Edwards Air Force
Base, California, following a successful reentry. The
orbiter returned to Kennedy on August 21, atop a modified
Boeing 747 called the Shuttle Carrier Aircraft. Discovery
then entered the Orbiter Processing Facility, where
it will be readied for mission STS-121.
|Space Shuttle Discovery is docked to the International Space Station’s Destiny laboratory, with the Earth’s horizon in the background.|
During STS-114, NASA accomplished a variety of goals
while also learning some important lessons. At liftoff,
a large piece of insulating foam broke off the External
Tank. Now, NASA engineers are working to determine
what caused this and how to prevent it from happening
in the future.
Using the new Orbiter Boom Sensor System, Discovery crewmembers took an unprecedented up-close look at the orbiter’s Thermal Protection System. This collection of new data was expanded on flight day 3, when Commander Eileen Collins guided Discovery through the first-ever “rendezvous pitch maneuver” as the orbiter approached the ISS for docking.
The slow-motion backflip allowed Space Station crewmembers John Phillips and Sergei Krikalev to snap high-resolution photographs for mission managers to use to ensure Discovery was in good shape to come home.
During the first of three spacewalks, Mission Specialists Stephen Robinson and Soichi Noguchi tested new repair techniques for the
|Stephen Robinson is attached to a foot restraint on the International Space Station’s Canadarm2. This robotic extension guided Robinson to the underside of Discovery, where he removed two pieces of ceramic fabric, known as “gap fillers,” that were protruding from heat-shielding tiles.|
outer skin of the Space Shuttle’s
heat shield and installed equipment outside the ISS.
They also repaired a control moment gyroscope. Two
days later, Robinson and Noguchi again ventured out
into the vacuum of space to replace a different, failed
control moment gyroscope, putting all four of the Station’s
gyroscopes back into service.
When two thermal protection tile gap-fillers were spotted jutting out of Discovery’s underside, astronauts and other experts on the ground devised a plan to ensure that the protrusions would not cause higher-than-normal temperatures on the Space Shuttle during atmospheric reentry.
Ground controllers sent up plans to the Shuttle-Station complex for Robinson to ride the Space Station’s robotic arm beneath the Shuttle and, with surgical precision, pluck out the gap-fillers.
|A close-up view of Discovery’s underside is featured in this image photographed by Robinson—whose shadow is visible on the thermal protection tiles—during the mission’s third session of extravehicular activities.|
Work on the Shuttle underbelly had never been tried
before, but with Mission Specialist Wendy Lawrence
and Pilot Jim Kelly operating the robotic arms, Mission
Specialist Andy Thomas coordinating, and fellow spacewalker
Noguchi keeping watch, Robinson delicately completed
the extraction during the third and final spacewalk.
“Okay, that came out very easily,” Robinson said, after carefully removing one of the fillers. “It looks like this big patient is cured.”
The crew received phone calls from U.S. President George W. Bush and Japanese Prime Minister Junichiro Koizumi, who offered congratulations and appreciation for the astronauts’ hard work.
Together, both the Discovery and ISS crews paid tribute
to the astronauts of Columbia, as well as others who
gave their lives for space exploration.
With the mission drawing to a close, the Multi-Purpose
Logistics Module, Raffaello, was removed from the ISS
and reinstalled in Discovery’s payload bay. Raffaello
arrived with more than 12,000 pounds of equipment and
supplies and carried about 7,000 pounds of Station
material on the trip back to Earth. After 9 days of
cooperative work, Discovery undocked from the ISS.
The STS-114 crew was given an extra day in orbit on
August 8, when the first attempt to land at Kennedy
was foiled by uncooperative weather. Even though cloudy
skies reappeared at the Shuttle’s home port the next
morning, NASA was ready with a backup plan: a landing
at Edwards Air Force Base in the high desert of California,
where the weather was perfect.
|The Sun rises on Discovery as it rests on the runway at Edwards Air Force Base, California, after a safe landing August 9, 2005, to complete the STS-114 mission.|
Capsule Communicator Ken Ham congratulated the returning crew on a spectacular test flight. “Stevie Ray, Soichi, Andy, Vegas, Charlie, Wendy, and Eileen—welcome home.”
Those words, Collins said, were great to hear. “We’re
happy to be back, and we congratulate the whole team
for a job well done.”
International Space Station: Sustaining a Human Presence in Space
While awaiting Discovery’s arrival, Expedition 11 NASA Science Officer John Phillips and Commander Sergei Krikalev conducted the first of their three renal stone experiment sessions aboard the ISS. The renal stone experiment investigates whether potassium citrate, a proven Earth-based therapy used to minimize renal (kidney) stone development, can be effective as a countermeasure to reduce the risk of kidney stone formation for crewmembers in space. Astronauts are at an increased risk of developing kidney stones, because urinary calcium levels are typically much higher in space.
The renal stone investigation was designed as a double-blind
study. The crewmembers do not know whether they are
taking the potassium citrate or a placebo. Further,
the principal investigator who interprets the data
does not know in advance which crewmembers have taken
the potassium citrate or which have taken the placebo.
The principal investigator is studying the urine chemistry
of the samples to determine each individual’s risk
of renal stone formation. If the investigator’s hypothesis
is correct, the crewmembers identified as having a
lower renal stone formation risk will be those who
had taken the potassium citrate pills in-flight as a countermeasure.
During their initial session, Phillips and Krikalev performed a urine collection over the course of 24 hours and logged everything they ate and drank for 48 hours.
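A double-blind assignment of this kind can be sketched in a few lines. The function, kit labels, and arm names below are hypothetical illustrations, not the actual flight protocol: the point is only that the coded key linking subjects to arms is held apart from what subjects and analysts see.

```python
import random

def assign_double_blind(subjects, seed=42):
    """Randomize subjects to potassium citrate or placebo.

    Only the coded key links a kit number to a study arm; crewmembers
    and the principal investigator see kit numbers only.
    """
    rng = random.Random(seed)
    arms = ["potassium_citrate", "placebo"]
    # Sealed assignment, held by a third party until unblinding.
    key = {s: rng.choice(arms) for s in subjects}
    # What subjects and analysts see: anonymous kit labels.
    kits = {s: f"kit-{i:03d}" for i, s in enumerate(subjects)}
    return kits, key

kits, key = assign_double_blind(["crew_a", "crew_b", "crew_c", "crew_d"])
print(kits)
```

Because neither the kit labels nor anything visible to the analyst encodes the arm, the investigator can score each subject's renal stone risk first and only afterward learn who received the active compound.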
|The International Space Station was photographed from Space Shuttle Discovery after the two spacecraft undocked.|
This experiment is crucial to long-duration missions,
since kidney stones can incapacitate a crewmember,
and, in the worst case, threaten life if there is no
way to get the astronaut back to Earth quickly.
In a previous ISS research effort, Expedition 10 Commander and NASA Science Officer Leroy Chiao and Flight Engineer Salizhan Sharipov conducted an experiment to shed more light on what is currently known about microgravity’s effects on human muscle and bone.
In carrying out the ADvanced Ultrasound in Microgravity (ADUM) experiment, Chiao and Sharipov performed ultrasound bone scans on each other by taking turns as operator and subject. The bone scans were taken of the shoulder, elbow, knee, and ankle, monitored remotely from the ground, and videotaped and photographed for downlink and analysis.
Since there is no room for a fully functioning staff of doctors aboard the ISS, nor is it feasible for a crewmember to return to Earth for a quick medical checkup, this experiment could lead to efficient diagnosing of medical problems with minimal use of onboard resources. Ability of crewmembers to use an ultrasound machine with remote instruction—sending information to the ground for analysis—can assist in timely treatment, as well as avert unnecessary evacuation. Crewmembers as far away as Mars could eventually be remotely examined by doctors on Earth using a modification of this technology. This type of capability is essential for long-term space exploration.
|Many of NASA’s most famous missions—from those observing Earth, such as EOS, Aura, and Landsat, to interplanetary and deep space missions like the Mars Exploration Rover and Deep Space 1—are launched on expendable launch vehicles.|
The Expedition 10 crewmembers also conducted a session
with the Miscible Fluids in Microgravity experiment.
Fluids do not behave the same on Earth as in the microgravity
environment inside the orbiting Space Station. This
experiment studies how miscible fluids, or those that
completely dissolve in one another, interact without the interference of gravity.
The test involved Chiao pulling tinted water from a syringe through a drinking straw and into another syringe containing a mixture of honey and water. The way the fluid interacted was videotaped and photographed for observation. This research could help scientists improve the way plastics and other polymers are produced on Earth and in space.
NASA’s Payload Operations team at Marshall Space Flight Center is coordinating the aforementioned ISS science activities.
Flight Support: Launch Services
Many of NASA’s most famous missions are launched on
expendable launch vehicles (ELVs). These missions are unpiloted and can accommodate all types of orbit inclinations and altitudes.
In 1997, Kennedy was assigned lead center program responsibility for NASA’s acquisition and management of ELV launch services. Its ELV Program Office provides a single focal point for these services, while affording NASA the benefits of consolidated and streamlined technical and administrative functions. The program, with its vision statement, “Global Leadership in Launch Service Excellence,” provides launch services for NASA, NASA-sponsored payloads, and other government payloads.
Primary launch sites are Cape Canaveral Air Force Station, Florida, and Vandenberg Air Force Base, California; other launch locations are NASA’s Wallops Island, Virginia; Kodiak Island, Alaska; and Kwajalein Atoll, in the Republic of the Marshall Islands, in the North Pacific.
Since 1990, NASA has been purchasing ELV launch services directly from commercial providers, whenever possible, for its scientific and applications missions that are not assigned to fly on the Space Shuttle. Because ELVs can accommodate all types of orbit inclinations and altitudes, they are ideal for launching Earth-orbit and interplanetary missions.
Kennedy is also responsible for NASA oversight of launch operations and countdown management. A motivated and skillful team is in place to meet the mission of the ELV program: “To provide launch service excellence, expertise, and leadership to ensure mission success for every customer.”
In late-May 2005, NASA successfully launched a new environmental satellite for the National Oceanic and Atmospheric Administration (NOAA), using a Boeing Delta II 7320-10 ELV. The satellite, NOAA-18, aims to improve weather forecasting and monitor environmental events around the world.
|NOAA-18 is the latest polar-orbiting satellite developed by NASA for the National Oceanic and Atmospheric Administration (NOAA). NOAA-18 will collect information about Earth’s atmosphere and environment to improve weather prediction and climate research across the globe.|
The NOAA-18 spacecraft lifted off from Vandenberg
Air Force Base, on the Delta II. Approximately 65
minutes later, the spacecraft separated from the
ELV second stage.
“The satellite is in orbit and all indications are that we have a healthy spacecraft,” said Karen Halterman, the NASA Polar-orbiting Operational Environmental Satellites (POES) project manager, based at Goddard Space Flight Center. “NASA is proud of our partnership with NOAA in continuing this vital environmental mission,” she added.
NOAA-18 will collect data about the Earth’s surface and atmosphere. The data are input to NOAA’s long-range climate and seasonal outlooks, including forecasts for El Niño and La Niña. NOAA-18 is the fourth in a series of five POES with instruments that provide improved imaging and sounding capabilities.
NOAA-18 has instruments used in the International Search and Rescue Satellite-Aided Tracking System, called COSPAS-SARSAT, which was established in 1982. NOAA POES detect emergency beacon distress signals and relay their location to ground stations, so rescue can be dispatched. SARSAT is credited with saving approximately 5,000 lives in the United States and more than 18,000 worldwide.
|The National Oceanic and Atmospheric Administration’s spacecraft NOAA-18 leaped away from the smoke and steam clouds as it lifted off from Vandenberg Air Force Base in California. It was launched by NASA on a Boeing Delta II 7320-10 expendable launch vehicle.|
NOAA manages the POES program and establishes
requirements, provides all funding, and distributes
environmental satellite data for the United
States. Goddard procures and manages the development and launch of the satellites for NOAA.
Flight Support: Space Communications
Sophisticated signal-processing techniques and simple proof-of-principle antenna arrays built from PVC pipe, aluminum foil, and copper wire could revolutionize the way NASA obtains data from its Earth-observing satellites.
If the adaptive array system being studied by NASA and Georgia Institute of Technology (Georgia Tech) researchers ultimately proves feasible, it could make information from the Space Agency’s Earth-observing satellites more widely and rapidly available. The “off-the-shelf” technology has already demonstrated that it can successfully receive one satellite telemetry frequency.
“The dream would be to make these NASA information services available to anybody sitting at a computer, almost like video-on-demand,” explained Mary Ann Ingram, a professor at Georgia Tech’s School of Electrical and Computer Engineering. “Timely information from Earth-observing satellites could be useful in many ways, such as directing operations to fight a forest fire, for instance.”
Information from satellites such as Earth Observing-1
(EO-1) is now downlinked to various 11-meter dishes,
primarily in the Arctic Circle, where subzero temperatures
create maintenance and reliability issues for their
complex aiming mechanisms. Typically, satellites
such as EO-1 are in contact with these antenna
systems 5 to 8 times a day, for 10 minutes at a
time. The present antenna systems require resident
crews to operate and maintain them.
|Operation and Deployment Experiments Simulator (ODES) engine testing at White Sands Test Facility in New Mexico.|
The NASA/Georgia Tech project envisions replacing
these antennas with a network of inexpensive antenna
arrays that would have no moving parts and use
sophisticated software—instead of careful aiming—to
gather data from the satellites. The network could
lower operational costs while improving access
to the information.
“When people use cell phones to make calls, there are no moving parts on the antennas,” noted Dan Mandl, mission director for NASA’s EO-1 program at Goddard. “What I would like to do is build a continuous cell-like network around the world that would provide almost unlimited opportunities to downlink data.”
Mandl compared NASA’s existing downlink system to old-fashioned pay phones located off expressway exits. “If you witness an accident, you can open your cell phone and call for assistance,” he said. “But if you don’t have a cell phone, you have to get off the highway at the next exit and hunt for a pay phone. What we would like to do is give these satellites the equivalent of cell phones to allow anytime, anywhere contact.”
The proof-of-principle adaptive arrays being tested by Ingram and her research team are built from inexpensive components, including common PVC piping and aluminum foil. Signals from the four antennas are analyzed using a processing technique that learns to improve its performance, by constructively combining scattered and reflected versions of the signal and by suppressing noise and interference. This eliminates the need for costly front-end hardware and precise aiming of the antenna arrays, and enables flexibility in the location of the ground station.
“Instead of one big aperture from an 11-meter dish, we’re going to use several smaller apertures and connect them with digital signal processing,” Ingram explained. “A smaller aperture has a wider beam, so the tracking requirement won’t be as great. They may pick up interference, especially in tracking a satellite at a low-elevation angle, but because we combine multiple apertures, we can null out the interference.”
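The coherent-combining idea can be illustrated with a small numerical sketch. Everything below (the four-element array, the noise level, and the pilot-based gain estimation) is an illustrative assumption, not the Georgia Tech implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_samp = 4, 2000
t = np.arange(n_samp)
s = np.exp(1j * 2 * np.pi * 0.05 * t)  # known narrowband telemetry tone

# Each small aperture sees the signal with an unknown phase plus noise.
gains = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ant))
noise = (rng.normal(size=(n_ant, n_samp)) +
         1j * rng.normal(size=(n_ant, n_samp))) / np.sqrt(2)
x = gains[:, None] * s[None, :] + 0.5 * noise

# Estimate each channel's complex gain by correlating against the pilot,
# then combine coherently (maximum-ratio combining).
w = (x @ s.conj()) / n_samp
y = (w.conj()[:, None] * x).sum(axis=0)

def snr(sig):
    """Signal-to-noise ratio relative to the known tone."""
    proj = (sig @ s.conj()) / n_samp
    resid = sig - proj * s
    return abs(proj) ** 2 / np.mean(abs(resid) ** 2)

print(snr(x[0]), snr(y))  # combined SNR grows roughly with the aperture count
```

Coherent combining multiplies the signal-to-noise ratio by roughly the number of apertures, which is why several small, unaimed dishes tied together in software can stand in for one large steered dish.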
The arrays individually will not provide the same data rate as NASA’s large structures, but having more of them spread out around the world will compensate for that. Network capacity studies show that two ground stations, each with seven 0.75-meter dishes or eight electronically steered antennas, could equal the data capacity of NASA’s existing 11-meter dish in Poker Flats, Alaska, at a significantly lower cost.
And because an array does not depend on precisely aiming a dish, each one could potentially communicate with more than one satellite at a time. “What we’d really like to have is a shared antenna resource, in which software is used to separate out the signals,” Mandl explained. “As we get more satellites up in space, this will become more important.”
In testing performed at Georgia Tech, researchers were able to downlink EO-1 information in the S-band, a frequency used for transmissions at low data rates. They had to develop a special filter to eliminate interference from terrestrial repeater stations of popular satellite radio services.
“We have demonstrated the lower rates in S-band, and, during the upcoming year, we will work on X-band for higher rates,” Mandl said. “Ultimately, we would like to demonstrate Ka band, which is in the 27-28 gigahertz range. You could potentially get anywhere from 300 megabits to a gigabit of data in that stream.”
To extend satellite reception time, researchers are also examining several technical issues, such as array-based synchronization and optimization of the tilt angles of the planar apertures of the electronically steered antennas. This optimization could quadruple the download capacity for a ground station with eight electronically steered antennas.
If successful, the adaptive array project would give NASA more flexibility in design of future high-data rate satellites that may generate terabits of data on each orbit of the Earth. Reliably downlinking that amount of information will require a new approach, Mandl noted.
“If you are in the Arctic and the motor moving your dish breaks down, it may take a few weeks to fix it,” Mandl said. “If this could be done with no moving parts, using techniques of digital signal processing and software radio, one of the most desirable features will be a high level of reliability. That’s important for space applications and locations where you can just put equipment out there and not require an operator or maintenance crew.”
Flight Support: Rocket Propulsion Testing
|NASA scientists began generating plasma energy in a 9-inch vacuum chamber in NASA’s Propulsion Research Laboratory at the Marshall Space Flight Center. In partnership with researchers at the University of Texas at Austin, Johnson Space Center, and the University of Alabama, Marshall scientists are developing innovative magnetic nozzles capable of properly channeling superheated plasma without nozzle deterioration, causing the plasma to accelerate to velocities far faster than those of conventional chemical propulsion systems. Such component technology could support development of next-generation, plasma-propelled spacecraft capable of safely and quickly carrying robotic or human exploration missions deep into the solar system.|
On July 6, 1962, NASA selected the White Sands
Test Facility (WSTF) as the site for Johnson Space
Center’s Propulsion Systems Development Facility.
This site was chosen for its isolated location
and topography, which minimized the inherent hazards
of aerospace propulsion testing to the general
population. WSTF began testing rocket engines in
1964. More than 310 engines have been tested, for
a total number of firings exceeding 2.1 million.
WSTF’s 300 and 400 Propulsion Test Areas were originally constructed to test the engines for the Apollo Command and Service Modules and the Lunar Module. In September 1964, the first firing test of the main rocket engine for the Apollo Command and Service Modules was conducted. The Lunar Module descent engine, which allowed the craft to land softly on the Moon, and the ascent engine, which was used to launch the craft from the lunar surface, were certified for flight after hundreds of firings in the 400 Area. The reaction control system, which consisted of the small thrusters that control the spacecraft attitude, was also certified for flight at WSTF.
Today, six test stands provide vacuum test capability, and three test stands provide ambient testing at 5,000 feet above sea level, for the Space Shuttle, the ISS, and other government agency tests.
Stennis Space Center is NASA’s primary center for testing and proving flight-worthy rocket propulsion systems for the Space Shuttle and future generations of space vehicles. Having conducted engine testing for 4 decades, Stennis is NASA’s program manager for rocket propulsion testing with total responsibility for conducting and managing all NASA propulsion test programs.
The Exploration Systems Mission Directorate is responsible for creating new capabilities and supporting technologies that enable sustained and affordable human and robotic exploration. This mission directorate is also responsible for effective utilization of ISS facilities and other platforms for research that support long-duration human exploration.
Plasma Energy Technology to Propel Deep-Space Missions
NASA scientists have begun generating plasma energy in a 9-inch vacuum chamber in NASA’s Propulsion Research Laboratory at Marshall. In partnership with researchers at the University of Texas at Austin, Johnson, and the University of Alabama, in Huntsville, Marshall scientists are developing innovative magnetic nozzles capable of properly channeling superheated plasma without nozzle deterioration, causing the plasma to reach very high velocities.
|Susan Young Lee, lead hardware engineer, and Eric Park, computer scientist, working on a K-10 Rover in one of Ames Research Center’s robotics laboratories.|
Such component technology could support development of next-generation,
plasma-propelled spacecraft capable of safely and quickly carrying robotic
or human exploration missions deep into the solar system. This could
dramatically reduce travel times to Earth’s neighboring planets and extend
the capabilities of future space exploration missions.
The new research project has two objectives: development of an innovative magnetic nozzle design capable of directing the flow of plasma, and determining how to efficiently eject the plasma from the nozzle to produce the greatest propulsive thrust.
Plasma is a highly conductive medium formed when a gas is heated and ionized—the process in which the gas’s neutral atoms shed electrons and acquire a positive charge. When properly channeled through a magnetic nozzle, plasma can be accelerated to velocities dramatically faster than those of conventional chemical propulsion systems.
Propellant in a plasma state can be accelerated with the use of electromagnetic energy sources to increase the propulsion system’s specific impulse—the equivalent of a car’s gas mileage. Such a nozzle, magnetically insulated against the superheated plasma flow, would enable plasma acceleration at temperatures far beyond those conventional materials can endure.
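The “gas mileage” analogy can be made concrete with the Tsiolkovsky rocket equation, in which achievable delta-v scales in direct proportion to specific impulse. The Isp figures and mass ratio below are illustrative assumptions, not values from the Marshall project:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, mass_ratio):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(mass_ratio)

# Illustrative comparison at the same propellant mass ratio of 4:
chem = delta_v(450, 4)     # high-performance chemical engine, Isp ~450 s
plasma = delta_v(5000, 4)  # hypothetical plasma thruster, Isp ~5000 s
print(chem, plasma)
```

At an identical mass ratio, the higher-Isp system delivers proportionally more delta-v, which is what shortens trip times or frees mass for payload on deep-space missions.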
The second challenge is rooted in the physics of magnetized plasma flow. A plasma propulsion system requires magnetic coils to generate and channel the plasma. These coils produce closed magnetic field lines—circular loops of magnetic energy that form around the power source—and prevent the plasma from detaching and leaving the spacecraft.
The research consortium seeks to test mechanisms that allow the plasma stream—already properly shaped by the magnetic nozzle—to break away from the spacecraft, generating maximum thrust by dispersing the plasma at exactly the right moment following expulsion from the rear of the spacecraft. Eventually, NASA hopes to adapt this research to develop a new class of rockets incorporating magnetic nozzles and plasma propulsion systems.
NASA Develops Robot With Human Traits
NASA researchers envision futuristic robots that “act” like people, enabling these mechanical helpers to work more efficiently with astronauts. Human-robot cooperation, in turn, will enable exploration of the Moon and Mars, and even large-scale construction in extraterrestrial places. Because human crews will be limited to small teams, astronauts will need robot helpers to do much of each team’s work.
Though remotely controlled machines and robots that work entirely on their own are valid goals, a research team at Ames Research Center plans to focus on robots that are partly controlled by people and operate independently the rest of the time.
|An eight-legged Scorpion robot prototype test under development at Ames Research Center is just one example of the innovative robotics work being done at that center.|
There are three main areas under development. One is called collaborative
control, during which the human being and the robot will speak to one
another and work as partners. The second area is building robots with
reasoning mechanisms that work similarly to human reasoning. Thirdly, the researchers will conduct field tests of people and robots working together.
Many experiments will occur in a special, indoor laboratory under construction at Ames, featuring a control room with a window looking out on robots working in a large area that will simulate the surface of a moon or planet. The control room will imitate a human habitat on the Moon or Mars.
The robots will help assemble buildings, test equipment, weld structures, and dig with small tools. Human-robot teams will use a checklist and a plan to guide their joint efforts. The robot development work will focus on specific tasks essential for basic exploration mission operations including: shelter and work hangar construction, piping assembly and inspection, pressure vessel construction, habitat inspection, resource collection, and transport.
Scientists say human-robot cooperation will result in a better outcome than human- or robot-only teams could accomplish. To make human-machine teaming a reality, a NASA multi-pronged effort is underway to develop robot intelligence. Similar to human thinking, it is designed to improve the mechanical workings of robots and to standardize human-robot communications.
Robots Will Search for Lunar Water Deposits
The Vision for Space Exploration spells out a long-term strategy of returning to the Moon as a step towards sending humans to Mars and beyond. The Moon, so nearby and accessible, is a great place to try out new technologies critical to living on alien worlds before venturing across the solar system.
Whether a Moon base will turn out to be feasible hinges largely on the question of water. Colonists need water to drink. They need water to grow plants. They can also break water apart to make air (oxygen) and rocket fuel (oxygen + hydrogen). Furthermore, water is surprisingly effective at blocking space radiation. Surrounding the base with a few feet of water would help protect explorers from solar flares and cosmic rays. The problem is that water is dense and heavy. Carrying large amounts of it from Earth to the Moon would be expensive. Settling the Moon would be so much easier if water were already there.
Astronomers believe that comets and asteroids hitting the Moon eons ago left water behind. (Scientists believe that Earth may have received its water in the same way.) Water on the Moon does not last long. It evaporates in sunlight and drifts off into space. Only in the shadows of deep, cold craters could an explorer expect to find any, frozen and hidden. Indeed, there may be deposits of ice in such places.
In the 1990s, two spacecraft, Lunar Prospector and Clementine, found tantalizing signs of ice in shadowed craters near the Moon’s poles—perhaps as much as a cubic kilometer. The data were not conclusive, though.
To find out if lunar ice is truly there, NASA plans to send a robotic scout. The Lunar Reconnaissance Orbiter, or “LRO” for short, is scheduled to launch in 2008 and to orbit the Moon for a year or more. Carrying six different scientific instruments, LRO will map the lunar environment in greater detail than ever before. LRO’s instruments will do many things: they will map and photograph the Moon in detail, sample its radiation environment, and hunt for water.
The spacecraft’s Lyman-Alpha Mapping Project (LAMP) will attempt to peer into the darkness of permanently shadowed craters at the Moon’s poles, looking for signs of ice hiding there. By looking for the dim glow of reflected starlight, LAMP senses a special range of ultraviolet light wavelengths. Not only is starlight relatively bright in this range, but also the hydrogen gas that permeates the universe radiates in this range as well. To LAMP’s sensor, space itself is literally aglow in all directions. This ambient lighting may be enough to see what lies in the inky blackness of these craters.
The spacecraft is also equipped with a laser that can shine pulses of light into dark craters. The main purpose of the instrument, called the Lunar Orbiter Laser Altimeter (LOLA), is to produce a highly accurate contour map of the entire Moon. As a bonus, it will also measure the brightness of each laser reflection. If the soil contains ice crystals, as little as 4 percent, the returning pulse would be noticeably brighter.
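The sensitivity of a reflected laser pulse to a few percent of ice can be sketched with a simple linear areal-mixing model. The albedo values here are illustrative assumptions, not LOLA calibration numbers:

```python
def return_brightness(ice_fraction, albedo_soil=0.1, albedo_ice=0.6):
    """Linear areal mixing of soil and ice reflectance.

    An illustrative model, not LOLA's actual retrieval: the returned
    brightness is a weighted average of the two surface albedos.
    """
    return (1 - ice_fraction) * albedo_soil + ice_fraction * albedo_ice

dry = return_brightness(0.0)
icy = return_brightness(0.04)  # soil containing 4 percent ice crystals
print((icy - dry) / dry)
```

Even with these rough numbers, a 4 percent ice fraction brightens the return by a measurable margin, which is why the altimeter's reflectance measurement doubles as an ice probe.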
One of LRO’s instruments, Diviner, will map the temperature of the Moon’s surface. Scientists can use these measurements to search for places where ice could exist. Even in the permanent shadows of polar craters, temperatures must be very low for ice to resist evaporation. Thus, Diviner will provide a “reality check” for LRO’s other ice-sensitive instruments, identifying areas where positive signs of ice would not make any sense, because the temperature is simply too high.
Not far from some permanently shadowed craters are mountainous regions in permanent sunlight, known romantically as “peaks of eternal sunshine.” Conceivably, a Moon base could be placed on one of those peaks, providing astronauts with constant solar power—not far from crater valleys below, rich in ice and ready to be mined.
NASA’s Desert ‘Rats’ Test New Gear
|The Remote Field Demonstration Test Site serves as sort of a dry run of a dry run. Researchers use the rugged terrain and varied climate to test prototype space suits and innovative equipment.|
Arizona’s high desert is not quite as tough on equipment as the Moon
or Mars, but few places on Earth can give prototype space suits, rovers,
and science gear a better workout.
A NASA-led team headed for sites near Flagstaff, Arizona, in September, to test innovative equipment. Engineers and scientists led the Desert Research and Technology Studies (RATS) team from Johnson and Glenn Research Center. The team included members from NASA centers, universities, and private industry. Their efforts may help America pursue the Vision for Space Exploration to return to the Moon and travel beyond.
The sand, grit, dust, rough terrain, and extreme temperature swings of the desert are attractive, simulating some of the conditions that may be encountered on the Moon or Mars. Crews wearing prototype-advanced space suits used and evaluated the new equipment for 2 weeks.
“For field testing, the desert may be the closest place on Earth to Mars, and it provides valuable hands-on experience,” said Joe Kosmo, Johnson’s senior project engineer for the experiments. “This work will focus on the human and robotic interaction we’ll need for future lunar and planetary exploration, and it will let us evaluate new developments in engineering, science, and operations,” he added.
Engineers in the Exploration Planning and Operations Center at Johnson provided mission control-type monitoring of the field tests.
The test equipment included:
- New space suit helmet-mounted speakers and microphones for communications.
- A "field assistant" electric tractor that follows test subjects in space suits, and is guided by space suit-mounted controls.
- A wireless network, for use on other planets, that can relay data and messages among spacewalkers, robots, and rovers as they explore.
- A two-wheeled chariot that is pulled by the electric tractor to carry astronauts, and an autonomous robotic support vehicle that can retrieve geologic samples.
- Analytical equipment mounted on two mobile geology laboratories.
NASA’s Science Mission Directorate carries out the scientific exploration of the Earth, Moon, Mars, and beyond; charts the best route of discovery; and reaps the benefits of Earth and space exploration for society. By combining Earth and space science, NASA is best able to establish an understanding of the Earth, other planets, and their evolution, bringing the lessons of our study of Earth to the exploration of the solar system and assuring the discoveries made here will enhance our work there.
Deep Impact Mission
|Artist Pat Rawlings illustrates the moment of impact and the forming of the crater during the Deep Impact Mission.|
Comets are time capsules that hold clues about the formation and
evolution of the solar system. They are composed of ice, gas, and
dust, primitive debris from the solar system’s distant and coldest
regions that formed 4.5 billion years ago. Deep Impact, a NASA Discovery
Program mission, is the first to probe beneath the surface of a comet
and reveal the secrets of its interior.
At the culmination of the 6-year mission, on July 3, 2005, a 370-kilogram impactor was released from the Deep Impact spacecraft. The spacecraft watched from a safe distance while the impactor collided with comet Tempel 1 at 6.3 miles per second (10 kilometers per second) or 23,000 miles per hour (37,000 kilometers per hour), on July 4. The impact created a magnificent flash of light as an immense cloud of fine powdery material was ejected and subsequently captured in 4,500 images from the spacecraft’s cameras.
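The quoted impact speeds can be checked against each other with a quick unit conversion (a hedged sketch; the speed figures come from the text above, and the mile-to-kilometer conversion factor is standard, not from the article):

```python
# Check that the quoted Deep Impact collision speeds are mutually consistent.
# 6.3 mi/s is the article's figure; MI_TO_KM is the standard conversion.
MI_TO_KM = 1.60934

v_mi_per_s = 6.3
v_mph = v_mi_per_s * 3600            # 22,680 mph, quoted as ~23,000 mph
v_km_per_s = v_mi_per_s * MI_TO_KM   # ~10.1 km/s, quoted as 10 km/s
v_kmh = v_km_per_s * 3600            # ~36,500 km/h, quoted as ~37,000 km/h

print(round(v_mph), round(v_km_per_s, 1), round(v_kmh))
```

The small discrepancies (23,000 versus 22,680 mph, for example) are consistent with the article rounding each figure independently.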
Scientists continue to analyze the gigabytes of data collected from the 4th of July fireworks in deep space. It is estimated that the crater formed from the impact is between 165 and 820 feet (50 and 250 meters) wide. Analyzed data will be combined with that of other NASA and international comet missions. Results from these missions will lead to a better understanding of both the solar system’s formation and implications of comets colliding with planetary surfaces.
Mars Exploration Rover Mission
We’re going to overtime—for the third time.
|Full frame experiment data record acquired on Sol 494 of Spirit’s mission to Gusev Crater, at approximately 16:37:46 Mars local solar time.|
In April 2005,
NASA approved up to 18 more months of operations
for Spirit and Opportunity, the twin Mars rovers that have
already surprised engineers and scientists by continuing
active exploration for more than 20 months—well past their
3-month primary mission.
The rovers have proven their value with major discoveries about ancient watery environments on Mars that might have harbored life. Shortly after landing in January 2004, Opportunity found geological evidence of a shallow ancient sea. More than a year later, Spirit found a new class of water-affected rock. The Science Mission Directorate leadership decided to extend the mission through September 2006 to take advantage of having such capable resources still healthy and in excellent position to continue the Mars adventures.
With the rovers already performing well beyond their original design lifetimes, there is a distinct possibility that, at any time, a part could wear out and therefore disable the robotic explorers. Both rovers, however, show no signs of letting up, despite traveling through dust devils and sand traps. Through August 2005, Spirit and Opportunity have explored over 6.5 miles (10.5 kilometers) of Martian terrain.
Cassini-Huygens Mission
The Cassini spacecraft is embarking on a new mission phase that will give it a ringside seat at Saturn—literally. After concentrating on flybys of Saturn’s moons since arriving last year, Cassini began a 5-month study of the stately planet’s magnificent rings in April with 12 instruments onboard. Knowing how the rings form and how long they have been there are central questions for the Cassini-Huygens mission.
|In this true color view, Mimas, one of the innermost moons of Saturn, drifts along in its orbit, against the azure backdrop of Saturn’s northern latitudes.|
In a spectacular kickoff to its first season of prime ring
viewing, Cassini has confirmed earlier suspicions of an unseen
moon hidden in a gap in Saturn’s outer “A” ring, known as the Keeler Gap.
The moon, provisionally called S/2005 S1, was first seen in a time-lapse sequence of images taken on May 1, 2005, as Cassini began its climb to higher inclinations in orbit around Saturn. A day later, an even closer view was obtained, which has allowed measurement of its size and brightness.
S/2005 S1 is the second-known moon to exist within Saturn’s rings. The other is Pan, which orbits in the Encke Gap of the “A” ring. Imaging scientists had predicted the new moon’s presence and its orbital distance from Saturn after a July 2004 sighting of a set of peculiar spiky and wispy features in the Keeler Gap’s outer edge. The similarities of the Keeler Gap features to those noted in Saturn’s “F” ring and the Encke Gap led imaging scientists to conclude that a small body, a few kilometers across, was lurking in the center of the Keeler Gap, awaiting discovery.
NASA scientists have also concluded that another Saturn moon, Phoebe, is an interloper to the Saturn system from the deep outer solar system.
When Cassini flew by Phoebe on its way to Saturn on June 11, 2004, little was known about the battered, crater-filled moon at that time. During the encounter, scientists got the first detailed look at Phoebe, which allowed them to determine its makeup and mass. As new information unfolded, scientists were able to determine that Phoebe has an outer solar system origin, akin to Pluto and other members of the Kuiper Belt.
|Specially calculated Cassini orbits place Earth and Cassini on opposite sides of Saturn’s rings, a geometry known as occultation. Cassini conducted the first radio occultation observation of Saturn’s rings on May 3, 2005.|
“Phoebe was left behind from the solar nebula, the cloud of
interstellar gas and dust from which the planets formed,” said
Dr. Torrence Johnson, a Cassini imaging team member at the
Jet Propulsion Laboratory (JPL). “It did not form at Saturn.
It was captured by Saturn’s gravitational field and has been
waiting eons for Cassini to come along.”
Phoebe has a density consistent with that of the only Kuiper Belt objects for which densities are known. Phoebe’s mass, combined with an accurate volume estimate from images, yields a density of about 100 pounds per cubic foot (1.6 grams per cubic centimeter), much lighter than most rocks, but heavier than pure ice, which is about 58 pounds per cubic foot (0.93 grams per cubic centimeter). This suggests a composition of ice and rock similar to that of Pluto, and Neptune’s moon, Triton. Whether the dark material on other moons of Saturn is the same primordial material as on Phoebe remains to be seen.
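The density comparison above can be reproduced with a one-line unit conversion (a sketch; the densities are the figures quoted in the text, and only the standard g/cm³-to-lb/ft³ conversion factor is assumed):

```python
# Convert the quoted Phoebe and water-ice densities to imperial units.
# 1 g/cm^3 = 62.428 lb/ft^3 (standard conversion); densities from the text.
G_CM3_TO_LB_FT3 = 62.428

phoebe_density = 1.6   # g/cm^3, from Cassini's mass and volume estimates
ice_density = 0.93     # g/cm^3, pure water ice

print(round(phoebe_density * G_CM3_TO_LB_FT3))  # 100 lb/ft^3, as quoted
print(round(ice_density * G_CM3_TO_LB_FT3))     # 58 lb/ft^3, as quoted
```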
Meanwhile, new observations have been made about Saturn’s largest moon, Titan. Huygens, a European Space Agency probe with six instruments onboard, landed safely on Titan on January 14, 2005, recording hundreds of megabytes of data during its descent through the atmosphere and while on the surface. Titan is the only known moon in our solar system that has a thick atmosphere. Huygens revealed that the thick atmosphere of this giant moon is rich in organic compounds, whose chemistry may be similar to that of primordial Earth several billion years ago.
“Titan is not just a dot in the sky; these new observations show that Titan is a rich, complex world, much like the Earth in some ways,” said Dr. Michael Flasar, the Composite Infrared Spectrometer (CIRS) instrument principal investigator at Goddard. In all, there will be 45 flybys of Titan during the Cassini-Huygens nominal mission, giving scientists more information to unravel the mysteries of its thick atmosphere and other Earth-like processes, such as tectonics, erosion, winds, and perhaps volcanism, which may have shaped Titan’s surface.
|The Boeing Delta II launch vehicle for NASA’s Swift spacecraft is silhouetted against a rosy sky at sunrise, waiting for liftoff.|
Swift Mission
Scientists using the Swift satellite—launched on November 20, 2004—and several ground-based telescopes have detected the most distant explosion yet, a gamma-ray burst from the edge of the visible universe.
This powerful burst was detected September 4, 2005. It marks the death of a massive star and the birth of a black hole. It comes from an era soon after stars and galaxies first formed, about 500 million to 1 billion years after the Big Bang. Gamma-ray bursts are the most powerful explosions the universe has seen since the Big Bang. They occur approximately once per day and are brief, but intense, flashes of gamma radiation.
“We designed Swift to look for faint bursts coming from the edge of the universe,” said Swift principal investigator, Dr. Neil Gehrels, of Goddard. “Now we’ve got one, and it’s fascinating. For the first time, we can learn about individual stars from near the beginning of time. There are surely many more out there,” he added.
The Swift satellite is designed specifically for gamma-ray burst science. Its three instruments work together to observe gamma-ray bursts and afterglows in the gamma-ray, X-ray, and optical wavebands. The Burst Alert Telescope (BAT) monitors the entire sky to catch a gamma-ray burst and calculate an initial position. Within seconds of detecting a burst, Swift will relay the burst’s location to ground stations, allowing both ground-based and space-based telescopes around the world the opportunity to observe the burst’s afterglow. Armed with the position, the Swift spacecraft autonomously points two other onboard telescopes within their field-of-view, within 90 seconds. All three telescopes watch the gamma-ray burst and afterglow unfold. During Swift’s 2-year nominal mission, scientists should have data for approximately 200 gamma-ray bursts to determine their origin and study activities of the early universe.
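The autonomous response sequence described above can be sketched as a simple timeline (illustrative only; the step descriptions and timings paraphrase the text, and this is in no way Swift's actual flight software):

```python
# Illustrative timeline of Swift's autonomous burst response, paraphrasing
# the sequence described in the text (not actual flight software).
burst_response = [
    (0,  "BAT detects the burst and calculates an initial sky position"),
    (10, "position relayed to ground stations ('within seconds')"),
    (90, "spacecraft autonomously slews its two other onboard telescopes"),
    (91, "all three telescopes watch the burst and afterglow unfold"),
]

for t_seconds, step in burst_response:
    print(f"t <= {t_seconds:3d} s: {step}")
```

The key design point is that the initial coarse position is broadcast immediately, so ground observatories can begin slewing in parallel with the spacecraft's own repointing.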
Detecting Coastal Pollution
Back on Earth, a NASA-funded study of marine pollution in southern California concluded that space-based synthetic aperture radar can be a vital observational tool for assessing and monitoring ocean hazards in urbanized coastal regions.
|An artist’s rendering of the Swift spacecraft with a gamma-ray burst in the background.|
“Clean beaches and coastal waters are integral to southern
California’s economy and lifestyle,” said
Dr. Paul DiGiacomo, a JPL oceanographer and lead author
of a study recently published in the Marine Pollution Bulletin.
“Using southern California as a model system, we’ve shown
existing high-resolution, space-based radar systems can
be used to effectively detect and assess marine pollution
hazards. This is an invaluable tool for water quality managers
to better protect public health and coastal resources.”
DiGiacomo and colleagues from JPL; the University of California, Santa Barbara; and the University of Southern California, Los Angeles, examined satellite radar imagery of the state’s southern coastal waters. The area is adjacent to 20 million people, nearly 25 percent of the U.S. coastal population.
“The key to evaluating and managing pollution hazards in urban coastal regions is accurate, timely data,” DiGiacomo said. “Since such hazards are usually localized, dynamic, and episodic, they’re hard to assess using oceanographic field sampling. Space-based imaging radar works day and night, regardless of clouds, detecting pollution deposits on the sea surface. Combined with field surveys and other observations, including shore-based radar data, it greatly improves our ability to detect and monitor such hazards.”
The study described three major pollutant sources for southern California: storm water runoff, wastewater discharge, and natural hydrocarbon seepage.
“During late fall to early spring, storms contribute more than 95 percent of the region’s annual runoff volume and pollutant load,” said JPL co-author Ben Holt. “Californians are accustomed to warnings to stay out of the ocean during and after storms. Even small storms can impact water quality. Radar data can be especially useful for monitoring this episodic seasonal runoff.”
DiGiacomo noted that a regional southern California marine water quality-monitoring survey is under way, involving JPL and more than 60 other organizations, including the Southern California Coastal Water Research Project. Its goal is to characterize the distribution and ecological effects of storm water runoff in the region. Space radar and other satellite sensor data are being combined, including NASA’s Moderate Resolution Imaging Spectroradiometers (MODIS). The sensors provide frequent observations, subject to clouds, of ocean color that can be used to detect regional storm water runoff and complement the finer resolution, but less frequent, radar imagery.
The second largest source of the area’s pollution is wastewater discharge. Publicly owned treatment works discharge daily more than 1 billion gallons of treated wastewater into southern California’s coastal waters. Even though it is discharged deep offshore, submerged plumes occasionally reach the surface and can contaminate local shorelines.
Natural hydrocarbon seeps are another local pollution hazard. Underwater seeps in the Santa Barbara Channel and Santa Monica Bay have deposited tar over area beaches. Space-imaging radar can track seepage on the ocean surface, as well as human-caused oil spills, which are often affected by ocean circulation patterns that make other tracking techniques difficult.
Further research is necessary to determine the composition of pollution hazards detected by radar. “From imaging radar, we know where the runoff is, but not necessarily which parts of it are harmful,” Holt said. “If connections can be established, imaging radar may be able to help predict the most harmful parts of the runoff.”
While the researchers said environmental conditions such as wind and waves can limit the ability of space radar to detect ocean pollution, they stressed the only major limitation of the technique is infrequent coverage. “Toward the goal of a comprehensive coastal ocean observing system, development of future radar missions with more frequent coverage is a high priority,” DiGiacomo noted.
Detecting Airborne Pollution
NASA scientists have discovered that pollution could catch an airborne “express train,” or wind current, from Asia all the way to the southern Atlantic Ocean.
|The red arrows on this globe trace the fast track of ozone pollution from Asia as it contributes to the highest ozone episodes found in the South Atlantic.|
Scientists believe that, during certain seasons, as much as
half of the ozone pollution above the Atlantic Ocean may be
speeding down a track of air from the Indian Ocean. As it rolls
along, it picks up more smog from air peppered by thunderstorms
that bring the pollution up from the Earth’s surface.
Bob Chatfield, a scientist at Ames, said, “Man-made pollution from Asia can flow southward, get caught up into clouds, and then move steadily and rapidly westward across Africa and the Atlantic, reaching as far as Brazil.”
Chatfield and Anne Thompson, a scientist at Goddard, used data from two satellites and a series of balloon-borne sensors to spot situations when near-surface smog could catch the wind current westward several times annually from January to April.
During those periods of exceptionally high ozone in the South Atlantic, especially during late winter, researchers noticed Indian Ocean pollution follows a similar westward route, wafted by winds in the upper air. They found the pollution eventually piles up in the South Atlantic. “We’ve always had some difficulty explaining all that ozone,” Thompson admitted.
“Seasonal episodes of unusually high ozone levels over the South Atlantic seem to begin with pollution sources thousands of miles away in southern Asia,” Chatfield said. Winds are known to transport ozone and pollutants thousands of miles away from their original sources.
Clearly defined, individual layers of ozone in the tropical South Atlantic were traced to lightning sources over nearby continents. In addition to ozone peaks associated with lightning, high levels of ozone pollution came from those spots in the Sahel area of North Africa where vegetation burned. However, even outside these areas, there was extra ozone pollution brought by the Asian “express train.”
The scientists pinpointed these areas using the joint NASA-Japan Tropical Rainfall Measuring Mission (TRMM) satellite to see fires and lightning strikes, both of which promote ozone in the lower atmosphere. Researchers also identified large areas of ozone smog moving high over Africa using the Total Ozone Mapping Spectrometer (TOMS) satellite instrument.
They further confirmed the movement of the smog by using sensors on balloons in the Southern Hemisphere Additional Ozonesondes (SHADOZ) network. A computer model helped track the ozone train seen along the way by the SHADOZ balloon and satellite sensors. The scientists recreated the movement of the ozone from the Indian Ocean region to the southern Atlantic Ocean.
Going to ‘Extremes’
Hundreds of feet under the Alaskan tundra, Marshall astrobiologist Dr. Richard Hoover ignored the eerie silence of the icy tunnel around him, and even the bones of woolly mammoths and steppe bison jutting from the jagged walls, frozen where they died tens of thousands of years ago.
Forget the fossils.
Hoover was instead poring over pale blue and white patches covering an ice wedge in the tunnel wall. It was a microbial community of bacteria and fungi, growing in total darkness, thriving at temperatures that have hovered below freezing for thousands of years.
For Hoover and his research colleagues, proof of life is the real find, especially in a subterranean tomb, sleeping under ice from the Pleistocene Age. In this unlikely place, they discovered a new life form, a never-before-seen bacterial species they have dubbed Carnobacterium pleistocenium. It is roughly 32,000 years old—and it is still alive.
|Dr. Elena Pikuta, a scientist at the University of Alabama in Huntsville, and Dr. Richard Hoover, a NASA astrobiologist, lead a team of researchers who recently discovered a new life form: an “extremophile” that lives and thrives in conditions inhospitable to most life on Earth.|
The bacterium—the first fully described, validated species
ever found alive in ancient ice—is one of NASA’s latest discoveries
of an “extremophile.” Extremophiles are hardy life forms that
exist and flourish in conditions hostile to most known organisms,
from the potentially toxic chemical levels of salt-choked lakes
and alkaline deserts to the extreme heat of deep-sea volcanoes
and hydrothermal vents. NASA and its partner organizations
study the potential for life in such extreme zones to help
understand the limitations of life on Earth and to prepare
robotic probes and, eventually, human explorers to search other
worlds for signs of life.
The search for extremophiles is a key element of the Vision for Space Exploration, which aims to reveal unimaginable life forms that could be thriving in conditions few Earth species could tolerate.
“The existence of microorganisms in these harsh environments suggests—but does not promise—that we might one day discover similar life forms in the glaciers or permafrost of Mars, or in the ice crust and oceans of Jupiter’s moon, Europa,” Hoover noted.
There are approximately 7,000 validly described species of bacteria, though far more are surmised to exist in nature. The vast majority of bacteria are harmless to humans. Only a very few—less than 1 percent of all known species—are dangerous, and many, Hoover noted, are valuable to human life, aiding us in numerous ways: producing valuable proteins and life-saving drugs; culturing wine, dairy products, and other foods; and assisting in the biological extraction of gold and other precious metals from ore wastes.
Carnobacterium pleistocenium could offer new breakthroughs in medicine, Hoover said. “The enzymes and proteins it possesses, which give it the ability to spring to life after such long periods of dormancy, might hold the key to long-term cryogenic, or very low-temperature, storage of living cells, tissues, and perhaps even complex life forms,” he said.
The Aeronautics Research Mission Directorate is committed to developing tools and technologies that can help to transform how air transportation systems operate, how new aircraft are designed and manufactured, and how our Nation’s air transportation system can reach unparalleled levels of safety and security. Such tools and technologies will drive the next wave of innovation, enabling missions to be performed in completely new ways and creating new missions that were never before possible.
|A collection of NASA’s research aircraft on the ramp at the Dryden Flight Research Center in July 1997: X-31, F-15, SR-71, F-106, F-16XL Ship #2, X-38, Radio Controlled Mothership, and X-36.|
NASA has been at the forefront of aeronautics research
for decades, and just recently celebrated the 90th anniversary
of its predecessor, the National Advisory Committee for
Aeronautics (NACA). From March 3, 1915, until its incorporation
into NASA on October 1, 1958, NACA provided technical advice
to the U.S. aviation industry and conducted cutting-edge
research in aeronautics. NACA was created by President
Woodrow Wilson, to “direct and conduct research and experimentation
in aeronautics, with a view to their practical solution.”
NASA has continued this tradition.
In the 1920s, NACA engineers developed a low-drag streamlined cowling for aircraft engines, which all aircraft manufacturers then adopted. This innovation resulted in significant operating cost savings. NACA engineers also demonstrated the advantages of mounting engines into the leading edges of multi-engine aircraft wings rather than suspending them, which also became an industry standard.
Through the 1930s, NACA engineers developed several families of airfoils. Many of these were successful as wing and tail sections, propellers, and helicopter rotors used in general aviation and in military aircraft.
During the 1940s, NACA researchers developed the laminar-flow airfoil, which solved the problem of turbulence at the wing trailing edge that limited aircraft performance. The research helped pioneer advances in transonic and supersonic flight. NACA also developed a supersonic wind tunnel, speeding the advent of operational supersonic aircraft and helping to determine the physical laws affecting supersonic flight. In 1945, Robert Jones, one of the premier aeronautical engineers of the 20th century, formulated the swept-back wing concept to reduce shockwave effects at critical supersonic speeds. Also in the mid-1940s, NACA engineers pioneered research in thermal ice prevention systems for aircraft.
In 1952, NACA’s engineers formed the blunt body concept, which suggested that a blunt shape would absorb only a very small fraction of the heat generated during reentry into Earth’s atmosphere. The principle was significant for missile nose cones; the Mercury, Gemini, Apollo, and Space Shuttle craft; and unmanned probes. That same year, NACA began studying problems likely to be encountered in space.
|The Lunar Landing Training Vehicle (LLTV) gave astronauts valuable training in the critical final phases of the descent onto the Moon.|
In 1954, NACA proposed development of a piloted research
vehicle to study the problems of flight in
the upper atmosphere and at hypersonic speeds. This
led to the development of the rocket-propelled X-15 research aircraft.
With NACA’s transformation into NASA in 1958, research for space travel became a high-profile endeavor. NASA and Bell Aerosystems Company developed a Lunar Landing Training Vehicle (LLTV) simulator for the Apollo Program. This allowed a pilot to make a vertical landing in a simulated Moon environment. Donald “Deke” Slayton, then NASA’s astronaut chief, said there was no other way to simulate a Moon landing except by flying the LLTV.
Four decades of supersonic-combustion ramjet (scramjet) propulsion research culminated in 2004, with two successful flights of the X-43A hypersonic technology demonstrator. The X-43A attained a maximum speed of Mach 9.6, flying freely under its own power. It set world airspeed records for an aircraft powered by an air-breathing engine. The flights proved that scramjet propulsion may be a viable technology for powering future space-access vehicles and hypersonic aircraft.
NASA will continue to develop and validate high-value technologies that enable exploration and discovery. The Agency continues its legacy work in aeronautics with breakthrough developments in quieter supersonic and subsonic flight, and autonomous, high-altitude, long-endurance robotic aircraft.
Currently, among its many aeronautics research endeavors, NASA is working toward zero-emission aircraft; smoother, safer airline flights; and elimination of low-visibility-induced accidents.
APEX: Measuring Emissions So That Future Aircraft Fly Cleaner
NASA has been studying various types of emissions from commercial aircraft to develop ways to reduce them and protect the environment. In recent years, fine-particle emissions from aircraft have been identified as possible contributors to global climate changes and to lowering local air quality. These emissions are produced when a hydrocarbon fuel (such as modern jet fuel, which is primarily kerosene) does not burn completely. Incomplete combustion often occurs at the lower power settings used for aircraft descent, idling, and taxiing. This produces fine carbon particles, or soot, as well as particles of nonvolatile organic compounds. In addition, engine erosion and small amounts of metal impurities in jet fuel can be emitted in engine exhaust.
|The DC-8 airborne laboratory flies three primary types of missions: sensor development, satellite sensor verification, and basic research studies of the Earth’s surface and atmosphere.|
Another type of particle emission is formed when exhaust
cools, converting volatile aerosols of sulfur compounds
and organic compounds to small, solid particles. These
types of emissions are not addressed by current international
regulations, which focus on visible smoke, but the international
community is concerned about the effects that these emissions
may have and is identifying possible regulations. In addition,
reducing all types of aircraft emissions is necessary for
the U.S. aircraft industry to remain competitive in the global marketplace.
|Data gathered by the DC-8 airborne laboratory at flight altitude and by remote sensing have been used for scientific studies in archaeology, ecology, geography, hydrology, meteorology, oceanography, volcanology, atmospheric chemistry, soil science, and biology.|
Recently, Glenn took part in the successful Aircraft Particle
Emissions Experiment (APEX). NASA’s
DC-8 airborne laboratory was used with CFM-56 engines to
improve understanding of particle emissions from commercial
aircraft engines. It produced the first and most extensive set
of data obtained about gaseous and particulate emissions
from an in-service commercial engine. Many different instruments
were used, and a tremendous amount of data was obtained.
NASA scientists ran tests to investigate the effects of thrust and fuel type. The team used different engine operating settings to vary thrust, and three different fuels were used: a typical jet fuel, a fuel with high sulfur content, and a fuel with high aromatic compound content. In addition, the Environmental Protection Agency ran tests to simulate landing-takeoff cycles to study the emissions that would be created at an airport. It was the first time that so many different groups had worked together to study so many different aspects of the emissions from commercial aircraft engines.
Smoothing Out the Skies
Passengers on a Delta Air Lines jet could have a smoother ride, thanks to NASA-developed technology. Delta is installing a special production-prototype radar, which can detect turbulence associated with thunderstorms, on one of its B737-800 aircraft. The radar, called the Turbulence Prediction and Warning System (TPAWS), was developed for NASA’s Aviation Safety and Security Program at Langley Research Center.
NASA teamed with Delta Air Lines, of Atlanta; AeroTech Research (USA), Inc., of Newport News, Virginia; and Rockwell Collins, of Cedar Rapids, Iowa, for the in-service evaluation of the radar unit, which also includes turbulence hazard prediction capabilities.
“The TPAWS technology is an enhanced turbulence detection radar system, which detects atmospheric turbulence by measuring the motions of the moisture in the air,” said Jim Watson, the TPAWS project manager. “It is a software signal processing upgrade to existing predictive Doppler wind shear systems, also developed by NASA, that are already on airplanes.”
|Dispatcher’s display of turbulence-encounter reports integrated with weather data. These reports are used to safely guide Delta Air Lines flights in real time.|
The idea behind the turbulence detection system is to give
flight crews advanced warning, so they can avoid turbulence
encounters or advise flight attendants and passengers to
sit down and buckle up to avoid injury. Turbulence encounters
are hazardous, and they cost the airlines money and time
in the form of re-routing flights, late arrivals, and additional
inspections and maintenance to aircraft. Atmospheric turbulence
encounters are the leading cause of injuries to passengers
and flight crews in non-fatal airline accidents. Federal
Aviation Administration statistics show an average of 58
airline passengers are hurt in U.S. turbulence incidents
each year. Ninety-eight percent of those injuries happen
because people do not have their seatbelts fastened.
NASA researchers say the TPAWS radar can detect about 80 percent of all atmospheric turbulence encounters. It can also detect thunderstorm-related turbulence at an average of 3 to 5 minutes ahead of the aircraft. According to studies done by Dryden Flight Research Center engineers, it takes a little more than a minute and a half to get 95 percent of passengers seated, carts stored, and flight attendants secured. Delta flight crews will use and evaluate the technology during regularly scheduled flights in the United States and South America. The prototype is expected to fly for 6 to 9 months.
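Putting the quoted numbers together gives a rough sense of the safety margin (a back-of-the-envelope sketch using only the figures from the text above):

```python
# Rough margin between TPAWS turbulence warning lead time and the time
# needed to secure the cabin, using the figures quoted in the text.
warning_lead_min = 3.0    # minutes, low end of the 3-5 minute detection range
secure_cabin_min = 1.6    # "a little more than a minute and a half"

margin_min = warning_lead_min - secure_cabin_min
print(f"worst-case margin: about {margin_min:.1f} minutes")
```

Even at the low end of the radar's detection range, the crew has well over a minute of slack after the cabin is secured.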
Researchers from NASA, the companies involved, and the Federal Aviation Administration, will evaluate interim and final results of the turbulence prediction radar system. If the evaluation is successful, the technology may be adopted for new and existing aircraft.
NASA has already tested TPAWS on a research aircraft based at Langley. The TPAWS-equipped plane searched for turbulence activity around thunderstorms for 8 weeks. The jet flew within a safe distance of storms, so researchers could experience the turbulence and compare the radar prediction to how the plane responded to the encounters. After one severe patch of turbulence, a NASA research pilot said his confidence in the enhanced radar had “gone up dramatically,” since the plane’s weather radar had shown nothing at the same time the TPAWS display had shown rough skies ahead.
4 Compounds, not elements, determine most of what we experience in this world. How elements bond together in compounds determines the properties of matter that we observe. Many of the elements that we observe in nature are poisonous in their natural form, but as compounds they sustain and enable life. Ionic compounds, unlike covalent compounds, dissociate in water to form positive and negative ions; covalent compounds remain as whole units when dissolved in water. Metals form special bonds that are neither ionic nor covalent, but somewhere in the middle. Metals conduct electricity because their valence electrons are not fixed and can move from atom to atom.
5 Sulfur can help life because it is an essential nutrient and is found in amino acids. Sulfur also causes the foul smell in garlic, rotten eggs and skunks. Iron is important to many body functions, including the transferring of oxygen from the lungs to the cells (hemoglobin). Iron is also magnetic in nature. When the two (iron and sulfur) are mixed, they each maintain their physical and chemical properties until both are heated close to the melting point of steel, where a chemical reaction occurs. The resulting compound is iron sulfide, which no longer has the properties of iron or sulfur, but new properties.
6 What is a chemical bond? Using a molecule-building kit, a chemical bond is a stick joining atoms together; the sticks between the atoms represent bonds between each atom. Inside the atom are positive and negative charges that attract and repel each other (protons repel other protons and electrons repel other electrons). There are also intermolecular forces: forces between one atom and another atom or atoms. Because the protons of one atom attract the electrons of another atom, there is an array of charges that each atom has to deal with, which can cause a shift in atomic charges. Polarization is an uneven distribution of positive or negative charge that occurs when anything (like other atoms) creates a charge outside of the atom (this is how bonds form).
7 The electron cloud responds to changes in the electromagnetic environment. That distortion is called polarization.
8 What happens when two hydrogen atoms approach each other? Each nucleus attracts the electron cloud of the other atom. Each nucleus repels the other nucleus. The electron cloud repels the other electron cloud. At a certain distance there is an equilibrium between attractive and repulsive forces.
9 In nonpolar covalent bonds, the difference between the electronegativities of the two atoms is very small (between 0.0 and 0.3). In polar covalent bonds, the difference is moderate (between 0.3 and 1.7). In ionic bonds, the difference is large (between 1.7 and 3.3).
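These electronegativity cutoffs can be captured in a few lines of code. A minimal sketch using the thresholds above (the helper name is illustrative; the Na and Cl electronegativity values are the standard Pauling values):

```python
# Classify a bond from the absolute electronegativity difference,
# using the cutoffs on this slide: <0.3 nonpolar, 0.3-1.7 polar, >1.7 ionic.
def bond_type(en_a: float, en_b: float) -> str:
    diff = abs(en_a - en_b)
    if diff < 0.3:
        return "nonpolar covalent"
    elif diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type(2.66, 2.66))  # I-I   -> nonpolar covalent
print(bond_type(3.44, 2.55))  # O-C   -> polar covalent
print(bond_type(3.16, 0.93))  # Cl-Na -> ionic
```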
10 A covalent bond is formed when atoms share electrons (sometimes the sharing is equal, sometimes it is not). Ionic bonds are formed when metals give one, two or three electrons to a nonmetal; each atom becomes charged (positive or negative). In a nonpolar covalent bond, the sharing of electrons is equal, so the electrons are evenly distributed and there is little charge separation on the surface of the molecule. In a polar covalent bond, the uneven sharing of electrons creates a region that is more positive and another region that is more negative. In metallic bonds, both atoms have a low electronegativity and low ionization energy, so they don't attract each other's electrons very well; as a result of metallic bonding, a group of atoms shares electrons.
11 Assignment: Take a new sheet of paper and fold it into three sections. Write your name, the title of the chapter and the number. On the first section of the sheet of paper, please write six things that you learned from your notes so far that could appear on your test.
12 Sometimes when two or more atoms combine, the result can be a polar or nonpolar molecule, even though the individual bonds are polar. The reason is that when a molecule is called polar, the overall molecule is polar and not just certain bonds (the surface of the molecule can be nonpolar even though it has polar bonds).
13 When a chemical bond is formed, some valence electrons are either shared or transferred between atoms. Only unpaired, unshared electrons can participate in chemical bonds. It is important to know that in the outer shell, the 5th, 6th and 7th valence electrons pair up and reduce the number of electrons available to take part in chemical bonding. The number of valence electrons affects bond number and ion charge. In a molecular compound, each unpaired, unshared valence electron can form one covalent chemical bond. Example: both nitrogen and phosphorus atoms have three unpaired valence electrons, so they can form three covalent bonds. All atoms react chemically to reach the octet configuration.
14 Atoms have a neutral charge, but unpaired, unshared electrons can cause atoms to become positive or negative ions by the gain or loss of electrons. When an atom (a metal) loses one or more electrons, it becomes a positive ion with a +1, +2 or +3 charge: Group 1 metals lose 1 electron to become a +1 ion, Group 2 metals lose 2 electrons to become a +2 ion, and Group 13 metals lose 3 electrons to become a +3 ion. When an atom (a nonmetal) gains one or more electrons, it becomes a negative ion with a -1, -2 or -3 charge: Group 17 nonmetals gain 1 electron to become a -1 ion, Group 16 nonmetals gain 2 electrons to become a -2 ion, and Group 15 nonmetals gain 3 electrons to become a -3 ion.
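The group-to-charge pattern above can be expressed as a simple lookup. A sketch covering only the main groups mentioned on this slide (the function name is illustrative):

```python
# Common ion charge by main-group number, as described above.
ION_CHARGE = {1: +1, 2: +2, 13: +3, 15: -3, 16: -2, 17: -1}

def ion_symbol(element: str, group: int) -> str:
    charge = ION_CHARGE[group]
    sign = "+" if charge > 0 else "-"
    magnitude = "" if abs(charge) == 1 else str(abs(charge))
    return f"{element}{magnitude}{sign}"

print(ion_symbol("Na", 1))   # Na+
print(ion_symbol("Mg", 2))   # Mg2+
print(ion_symbol("O", 16))   # O2-
print(ion_symbol("Cl", 17))  # Cl-
```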
15 In a covalent bond, electrons are shared between the two nuclei. In an ionic bond, one or more electrons are transferred to form ions, and the positive and negative ions attract each other.
16 Some atoms are more greedy for electrons than others! Electrons are unevenly shared between oxygen and hydrogen.
17 Electronegativity (electron sharing): oxygen is slightly more electronegative than hydrogen. This results in uneven sharing of electrons.
18 Types of bond (EN = electronegativity):
- Nonpolar covalent bond: both atoms have high EN; the difference in EN is very little; equal or nearly equal sharing of electrons.
- Polar covalent bond: one atom has high EN, the other medium EN; the difference is moderate; uneven sharing.
- Ionic bond: one atom has high EN, the other low EN; the difference is large; transfer of electrons.
20 Types of bond: an ionic crystal. Ionic bonds connect atoms to all their neighbors, not just a single neighbor as in a molecule.
21 Types of bond: a metallic bond is like a covalent bond in that electrons are shared, and like an ionic bond in that no two atoms are specifically bonded together. Metallic bond: an attraction between metal atoms that loosely involves many electrons.
22 Difference in EN and electron sharing (EN = electronegativity): a nonpolar covalent bond has very little difference and equal or nearly equal sharing; a polar covalent bond has a moderate difference and uneven sharing; an ionic bond has a large difference and transfer of electrons.
24 Electronegativity of atoms: I = 2.66. Difference in electronegativity: I – I = 2.66 – 2.66 = 0.
25 Electronegativity of atoms: I = 2.66. Difference in electronegativity: I – I = 2.66 – 2.66 = 0. The I–I bond is nonpolar covalent.
26 Electronegativity of atoms: C = 2.55, O = 3.44. Difference in electronegativity: O – C = 3.44 – 2.55 = 0.89. The C–O bond is polar covalent.
27 Nonpolar bonds in a molecule make the molecule nonpolar.
28 Polar bonds in a molecule make the molecule polar.
29 Assignment: Write a three dollar summary of what you learned (a paragraph with a topic sentence and three supporting sentences). Turn to page 224 and complete # 1 – 5, then turn them in. Honors chemistry homework: Page 224 #
34 Ionic bonds: Write the electron configuration for a magnesium ion (Mg2+). Asked: the electron configuration of Mg2+. Given: Mg, atomic number 12, charge of +2. Relationships: the electron configuration of magnesium is 1s²2s²2p⁶3s²; removing the two 3s electrons gives Mg2+ the configuration 1s²2s²2p⁶.
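The worked example above can be automated by filling subshells in aufbau order. A sketch that covers only the first few subshells and ignores the well-known exceptions (such as Cr and Cu):

```python
# Build an electron configuration by filling subshells in aufbau order.
AUFBAU = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
          ("4s", 2), ("3d", 10), ("4p", 6)]

def electron_config(n_electrons: int) -> str:
    parts = []
    for subshell, capacity in AUFBAU:
        if n_electrons <= 0:
            break
        filled = min(capacity, n_electrons)
        parts.append(f"{subshell}{filled}")
        n_electrons -= filled
    return " ".join(parts)

print(electron_config(12))      # Mg:   1s2 2s2 2p6 3s2
print(electron_config(12 - 2))  # Mg2+: 1s2 2s2 2p6 (the neon configuration)
```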
35 Ionic formulas: What is the correct formula for calcium oxide, a compound used in making paper and pottery, and in adjusting the pH of soils? Asked: the formula for the ionic compound calcium oxide. Given: calcium oxide is made from calcium and oxygen ions; calcium forms +2 ions and oxygen forms –2 ions. Relationships: Ca2+ and O2– must combine in a ratio that balances the positive and negative charges, so one +2 ion pairs with one –2 ion and the formula is CaO.
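Balancing the charges amounts to finding the smallest whole-number ratio of ions. A sketch (the helper name is illustrative) using the least common multiple of the two charge magnitudes:

```python
from math import gcd

# Smallest ratio of cations to anions that makes the total charge zero.
def ionic_formula(cation: str, cat_charge: int, anion: str, an_charge: int) -> str:
    lcm = cat_charge * abs(an_charge) // gcd(cat_charge, abs(an_charge))
    n_cat, n_an = lcm // cat_charge, lcm // abs(an_charge)
    part = lambda sym, n: sym if n == 1 else f"{sym}{n}"
    return part(cation, n_cat) + part(anion, n_an)

print(ionic_formula("Ca", 2, "O", 2))   # CaO
print(ionic_formula("Na", 1, "Cl", 1))  # NaCl
print(ionic_formula("Al", 3, "O", 2))   # Al2O3
```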
36 As stated earlier, elements share, lose or gain electrons to satisfy the octet rule and become more stable. The exceptions to the octet rule are the elements closest to helium, such as hydrogen, lithium, beryllium and boron (since helium is the closest completely stable noble gas). For those elements, the octet rule is more the duet rule, or the rule of "2": these elements bond chemically to have a configuration of two electrons.
37 In a covalent bond, each shared electron is seen as a valence electron by both elements. In H2, each atom shares an electron so that both have two electrons in their outer shell. In water, H2O, each hydrogen shares an electron with the oxygen, giving each hydrogen two valence electrons and the oxygen eight valence electrons. When ions are formed, they have the electron configuration of the closest noble gas: Na+ has the neon configuration, and O2- also has the neon configuration.
39 Covalent bondsBonds form in such a way that each atom in the compound achieves the same number of valence electrons as the closest noble gas atom.
40 Covalent bonds: electrons are shared so that each element has 8 valence electrons and the same configuration as the closest noble gas. The light elements H, Li, Be, and B prefer to have 2 valence electrons. Ion formation: atoms gain or lose one or more electrons to reach the same electron configuration as the closest noble gas, with 8 valence electrons. Octet rule: the rule that states that elements transfer or share electrons in chemical bonds to reach a stable configuration of eight valence electrons.
41 Assignment: On the second section of that sheet of paper, please write six things that you learned from your notes so far that could appear on your test.
42 Ionic compounds generally form crystals because of the interchanging of positive and negative charges. Ionic compounds are neutral even though they are made up of trillions of charged ions. The formula of an ionic compound can be determined as long as you balance the positive and negative charges.
43 As stated earlier, covalent bonds share electrons rather than transfer them. The number of covalent bonds is equal to the number of unpaired valence electrons. Only hydrogen and nonmetals are commonly found in covalent bonds: carbon-like elements form four covalent bonds, nitrogen-like elements form three, oxygen-like elements form two, and halogens form one.
44 Carbon has four valence electrons and they are all unpaired. Oxygen has six valence electrons, but only two are unpaired and able to form covalent bonds. If Lewis dot structures are drawn for elements, they can be used to tell the valence electrons and the unpaired electrons (which is the same as the number of possible covalent bonds). Atoms or molecules with unpaired electrons are highly reactive and are known as free radicals. Free radicals are responsible for aging and diseases such as cancer. Antioxidants are a good part of the diet because they prevent free radicals from reacting with and damaging DNA.
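The pairing rule from slide 13 (the 5th, 6th and 7th valence electrons pair up) gives a one-line formula for the number of bonds a main-group nonmetal typically forms. A sketch:

```python
# Unpaired valence electrons for a main-group atom: electrons beyond
# four pair up, so the count is v for v <= 4 and 8 - v otherwise.
def unpaired_electrons(valence: int) -> int:
    return valence if valence <= 4 else 8 - valence

# The number of covalent bonds typically equals the unpaired count:
print(unpaired_electrons(4))  # carbon:   4 bonds
print(unpaired_electrons(5))  # nitrogen: 3 bonds
print(unpaired_electrons(6))  # oxygen:   2 bonds
print(unpaired_electrons(7))  # halogens: 1 bond
```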
45 Assignment: Write a detailed three dollar summary of what you learned (a paragraph with a topic sentence and three supporting sentences). Turn to page 224 and complete # 6 – 7 and page 226 #, then turn them in. Honors chemistry homework: Page 225 #
46 Vocabulary, Section 3: isomer, free radical, antioxidant, VSEPR, region of electron density, trigonal planar, lone pairs, tetrahedral, trigonal pyramidal, bent.
47 Why can’t a water molecule be like this? Each water molecule contains one oxygen atom and two hydrogen atoms.One central oxygen atomWhy can’t a water molecule be like this?One hydrogen atom on either side
48 Why can’t a water molecule be like this? The oxygen forms one bondOne hydrogen forms two bondsOne hydrogen forms one bondThe Lewis structures indicate that it is not possibleWhy can’t a water molecule be like this?
49 Lewis structures for individual atoms are like puzzle pieces. Put them together to form molecules.
50 Use Lewis structures to predict: 1) the chemical formula, 2) the bonding pattern, and 3) the shape of the molecule (to be discussed later in this section). H2O is flat and bent.
51 Lewis dot structures: Lewis dot structures allow chemists to identify and predict how elements will join together to form molecules. If you have a formula, you can use Lewis structures to determine how the atoms will join. The goal of using Lewis structures is to end up with each atom having no unpaired electrons and each having eight valence electrons (unless it is hydrogen, helium, lithium, beryllium or boron).
52 Use Lewis structures to predict the chemical formula: the chemical formula for water is H2O (2 hydrogen atoms for every 1 oxygen atom).
53 Isomers: Sometimes there is more than one way to satisfy a molecular formula. Isomers exist when there is more than one way to represent a chemical formula. For example, C2H6O can form ethanol as well as dimethyl ether, and the two have different chemical and physical properties.
55 Consider the chemical formula C2H6O: one structure it can form is dimethyl ether.
56 Two isomers of C2H6O: ethanol and dimethyl ether. Isomer: a specific structure of a molecule; the term is only used when a chemical formula could represent more than one molecule.
57 Give three isomers for the formula C3H8O. Show the Lewis dot diagram and the structural formula for each molecule.
58 Give three isomers for the formula C3H8O; show the Lewis dot diagram and the structural formula for each molecule. Asked: the Lewis dot diagrams and structural formulas for the three molecules represented by the formula C3H8O. Given: carbon has four unpaired electrons, hydrogen has one, and oxygen has two; three carbons, eight hydrogens and one oxygen form each molecule. Relationships: the atoms will bond together such that all unpaired electrons are paired up with electrons from other atoms.
60 Double and triple bonds: There are many compounds with more than one bond between two atoms. Ethene and ethyne have double and triple bonds respectively, and oxygen also forms double bonds. Lewis dot structures show two-dimensional representations of chemical bonding, which is a limitation, since the 3D shape of a molecule determines its chemical properties. VSEPR stands for Valence Shell Electron Pair Repulsion: "valence shell electron" refers to the valence electrons and how they react, and "pair repulsion" refers to the paired electrons that are not shared. Paired electrons are not shared in a chemical bond, but they do affect the shape of the molecule; paired electrons repel each other as well as shared ones.
61 Multiple bonds: Sharing a pair of electrons is called a single bond. Carbon, nitrogen and oxygen commonly form double and triple bonds. A double bond shares 2 pairs of electrons (as in ethene); a triple bond shares 3 pairs of electrons (as in ethyne).
62 Assignment: On the third section of that sheet of paper, please write six things that you learned from your notes so far that could appear on your test.
63 Electron density: If you rubbed a balloon against your hair, it would pull electrons off your hair and become more negative. If you put two charged balloons together, they would repel each other (since like charges repel). The same thing happens when there are two regions of electron density around an atom: the electrons repel each other until they are the maximum distance apart. Two negatively charged regions move apart into a linear shape (180°). When there are three regions, they repel at 120° angles, called a trigonal planar shape. When there are four regions, they repel at 109.5° angles, called a tetrahedral shape.
64 VSEPR theory: Molecular polarity is an uneven distribution of charge between the atoms of a molecule. VSEPR stands for valence-shell electron-pair repulsion. VSEPR theory states that repulsion between the sets of valence electrons surrounding an atom causes these sets to be oriented as far apart as possible.
65 Two regions: Two areas of electron density repel to form linear shapes. The two 180° angles formed around each carbon make the entire molecule straight.
66 Three regions: Three areas of electron density repel to form trigonal planar shapes. These three regions of electron density repel, forming 120° angles between the three atoms bonded to each carbon atom.
67 Four regions: The four regions of electron density around the carbon repel, forming angles of 109.5°.
68 Four regions: Different geometries are formed by atoms with four regions of electron density: tetrahedral, trigonal pyramidal and bent.
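The geometries on these slides can be summarized as a lookup from (regions of electron density, lone pairs) to shape and bond angle. A sketch covering only the cases discussed; the 107° and 104.5° entries are the standard approximate angles for ammonia and water, not figures from the slides:

```python
# VSEPR shapes keyed by (regions of electron density, lone pairs).
VSEPR = {
    (2, 0): ("linear", 180.0),
    (3, 0): ("trigonal planar", 120.0),
    (4, 0): ("tetrahedral", 109.5),
    (4, 1): ("trigonal pyramidal", 107.0),  # e.g. NH3 (approximate angle)
    (4, 2): ("bent", 104.5),                # e.g. H2O (approximate angle)
}

def geometry(regions: int, lone_pairs: int) -> tuple:
    return VSEPR[(regions, lone_pairs)]

print(geometry(4, 0))  # methane: ('tetrahedral', 109.5)
print(geometry(4, 2))  # water:   ('bent', 104.5)
```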
69 Water and ammonia have similar angles even though they are not the same.
71 Assignment: Write a detailed three dollar summary of what you learned (a paragraph with a topic sentence and three supporting sentences). Turn to page 224 and complete # 8 – 14, then turn them in. Honors chemistry homework: Page 225 #
72 Test: next week Tuesday or Thursday, depending on your class. Homework requirement: learn all terms and concepts covered on this topic. Make sure you have all assignments between pages 224 and 227 completed and turned in by your test date.
Fluorine is the 13th most common element and is widespread in the environment, although it only exists as compounds (fluorides) with other elements, the commonest being calcium fluoride.
At the beginning of the 20th century it was noticed that the inhabitants of some areas of the USA had mottled tooth enamel (fluorosis). Investigators discovered high amounts of naturally occurring calcium fluoride in the drinking water. Children in these areas appeared to have less tooth decay and hence the notion got around that fluoride protected children’s teeth. Studies in the USA and other countries including the UK followed. These seemed to confirm the association between fluoride water content and caries reduction.
Fluoride is believed to protect teeth by replacing hydroxyapatite with the more resistant fluorapatite during the growing period up to age 12. Fluoride also has strong antibacterial activity. Theoretically at least, fluoride should offer some protection against tooth decay.
And many studies support this conclusion. For instance, a seven year study published in 1981 examined 5 year olds in 4 urban communities in England. They found an excellent inverse correlation between fluoride content of the water supplies and the incidence of caries.
Or Does It?
Although proponents of fluoridation speak of volumes of such research over many years, in reality much of it demonstrating a protective effect is weak by today’s standards. The NHS Centre for Reviews and Dissemination, University of York, (NHSCRD) in their systematic review published in 2000 included 214 studies. But they noted that “the quality of studies was low to moderate.” In spite of this they concluded that there was a beneficial reduction in caries.
In 1988, Philip Sutton, a former senior lecturer in Dental Science investigated claims by the proponents of fluoridation that 128 studies confirm caries reduction of 50% - 75%. He found that none of the studies made any attempt to avoid bias, 34 of the studies didn’t exist, 20 were about something else, and 51 were too poor scientifically to consider. Of the 23 that were left, none showed fluoridation to be beneficial in any scientifically acceptable way.
There is also much evidence which negates the positive findings.
A 1982 study covering several countries showed caries reduction of between 17% and over 50% in unfluoridated areas. This was confirmed by the World Health Organisation and the US National Institute for Dental Research in its study of 39,000 children.
In 1948, North Shields, which has little or no naturally occurring fluoride, was compared to South Shields, where the water has a natural fluoride content of 1.4 parts per million. Dental caries was found to be the same in both towns. All that fluoride did was to delay the onset of caries by several years according to the research.
The discoverer of streptomycin, Professor Albert Schatz agrees that fluoridation simply delays the appearance of caries because it delays the eruption of teeth. “Fluoridated children develop the same amount of tooth decay...The only difference is that caries start developing approximately 1.2 years later.”
The lowest rates of caries in Canada are to be found in British Columbia where 11% of the population have fluoridated water compared to 40 - 70% in the rest of Canada.
73% of the Republic of Ireland’s population live in fluoridated areas. From 1972 - 1992 the rate of decayed, missing and filled teeth (DMFT) in 12 year olds fell from 5.4 to 1.9. Yet they are only 6th in the European league table. First is unfluoridated Finland (1975-1991 DMFT fell from 7.5 to 1.2). In second place is unfluoridated Denmark (1978-1992 DMFT fell from 6.4 to 1.3).
Some studies show that fluoride causes an increase in decay. An Indian survey of 400,000 children found higher decay in fluoridated areas and a survey of 20,000 Japanese students found higher rates of decay.
In some places fluoridation was practised but then halted. Kuopio in Finland stopped fluoridation after 33 years. The result? Over the following 6-year period teeth got better. There was a similar story in two towns in the former East Germany. Because this finding was so unexpected, a further survey was carried out in 2 other towns: 3 years after stopping fluoridation, DMFT fell by 38.5% in one and 20.6% in the other.
It’s Toxic & Accumulates
Britain and Ireland only allow hexafluorosilicic acid or hexafluorosilicate to be used for water fluoridation. These contain a form of silicon which has been linked to cancer. The fluorosilicates aren't pure either: they contain a number of contaminants which include lead, arsenic and mercury. According to toxicologist Professor Phyllis Mullenix, "the 'fifty years' of studies about fluoride safety do not exist."
There is no dispute that fluoride is potentially toxic and that the effect is cumulative. In fact the Journal of the American Dental Association stated back in 1936 that “fluoride at the 1ppm concentration is as toxic as arsenic and lead.” In those days the dental profession were keen to remove fluoride from water. They have since reversed their position, no longer considering it toxic at that level.
In 1942 the editor of the Journal of the American Medical Association described fluorides as “general protoplasmic poisons.” And in 1950 the pharmacists reference book US Dispensary described fluorides as “violent poisons to all living tissue”. As recently as 1984 a toxicology reference book gave fluoride a toxicology rating of 4 (very toxic). They go on: “the fact is that fluoride is more toxic than lead and just slightly less toxic than arsenic.”
Yet the US Environmental Protection Agency set the maximum contaminant level for lead at 0.015ppm and for fluoride at 4.0ppm. That’s 266 times higher. Does that make any sense?
More than 100 fatal acute fluoride intoxications were reported between 1935 and 1981.
Fluoride & Cancer
It should come as no surprise that fluoride could have potentially detrimental effects on health. Fluoride has been linked to genetic damage. One study found that just 1ppm inhibited DNA repair and damaged chromosomes. Another found "a highly significant increase in mutation." A review of such studies concluded that "the weight of the evidence leads to the conclusion that fluoride exposure results in increased chromosome aberrations". Some of the studies that produced positive results were at 1-5ppm, levels equivalent to human exposure. However, whether fluoride produces chromosome damage in vivo in humans "should be considered unresolved", they stated.
Fluoride inhibits several enzyme systems. It can combine with catalase for instance, to inhibit its activity. Catalase is an essential part of our antioxidant defence system.
The ten largest fluoridated areas in the USA were compared with the ten largest unfluoridated in the 1970’s. Cancer rates were similar before fluoridation. But after 20 years these areas had a cancer death rate 10% higher.
Other epidemiological analysis in the 1980’s found significant correlation between fluoridated areas of the USA and cancer incidence. An interesting finding was that women’s hormonal cancers increased while male hormonal cancers decreased. The authors wondered whether fluoride could act as an environmental hormone. A significant dose response relationship was also found for bone cancers in male teenagers.
A rodent study found that the more fluoride they ingested the higher the incidence of bone cancers they developed.
Because of these findings Dr Perry Cohn surveyed a number of areas of New Jersey. He found the incidence of bone cancers in boys was up to 4.6 times higher in the fluoridated areas.
Some studies also suggest that fluoride has a causal relationship with respiratory, oral and uterine cancers. Of course not all studies find fluoride guilty and the NHSCRD found no clear association with any cancer.
Fluoride & The Brain
Fluoride inhibits the brain enzyme acetylcholinesterase. In 1995 an animal study demonstrated that fluoride affects the central nervous system.
Chinese scientists showed that children in highly fluoridated areas have a lower IQ than those who are fluoride free.
Fluoride may also affect the brain by combining with aluminium to form aluminium fluoride and may increase the absorption of lead.
It also competes with iodine for absorption and was used to treat an overactive thyroid for many years, often at intakes below 1mg a day.
Fluoride has also been shown to accumulate in the pineal gland to inhibit melatonin production in animals. This causes earlier onset of sexual maturity.
Many mood altering drugs like Prozac (fluoxetine), designed to act on the central nervous system, include fluoride in their chemical makeup.
How Much Do We Ingest?
It is possible that something which is toxic at a high dose could be beneficial at a low dose. The 1ppm level is supposed to give children a protective 1mg a day assuming they drink 1 litre a day. However I don’t see any health warnings on our taps not to drink more than a litre a day, and many commercial drinks and juices use fluoridated water.
The UK Department of Health suggests a safe intake is 3mg a day. The official position of the US National Academy of Sciences is that the “crippling daily dose” is 10mg - 20mg a day over a 10 - 20 year period (remember the effects of fluoride are cumulative). So if we take in just 1mg we shouldn’t suffer with bone disease until we’re at least 100 years old. But do we just ingest 1mg a day?
Apart from water, sources of fluoride include bonemeal, bran, beets, yams, sunflower seeds, whey, milk, cheese, garlic, green vegetables, kelp, gelatin and small fish eaten with bones. Sodium silicofluoride spray (an insecticide) remains in the peel of oranges, and many marmalades contain orange peel. The pesticide cryolite, which is over 50% fluorine, is used on apples, raisins, lettuce, tomatoes, potatoes, peaches and most berries. Tea can be a major source of fluoride: a cup of tea may contain up to 0.2mg of fluoride, so many adults get a daily dose from 5 cups. Vegetables cooked in fluoridated water averaged 0.4mg per kg, whereas those cooked in nonfluoridated water averaged only 0.2mg per kg. Then there are toothpastes, gels, flosses and mouthwashes, Teflon-coated cookware (polytetrafluoroethylene), cigarettes and some pharmaceuticals.
Early figures calculated intake to be 1.5mg a day. In the 1970’s it was put at 3.0mg. By the 1990’s the US Dept. of Health put the figure for US fluoridated cities at 6.5mg. Current intake is thought to approach 8mg.
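To see how everyday sources might add up, here is an illustrative tally using the per-source figures quoted above (the toothpaste and cooked-food amounts are assumed for illustration, not taken from the article):

```python
# Rough daily fluoride intake from common sources (illustrative figures).
sources_mg = {
    "drinking water, 1.5 L at 1 ppm":         1.5 * 1.0,  # 1 ppm = 1 mg/L
    "tea, 5 cups at 0.2 mg each":             5 * 0.2,    # figure from the text
    "vegetables cooked in fluoridated water": 0.4,        # assumed portion
    "toothpaste swallowed while brushing":    0.3,        # assumed value
}

total = sum(sources_mg.values())
print(f"estimated total: {total:.1f} mg/day")  # estimated total: 3.2 mg/day
```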
Natick in Massachusetts fluoridated its water supply in 1998. All water bills carry this message: "we recommend that pregnant women, parents of children under 3 and individuals with known fluoride sensitivity consult with their personal physicians before drinking this water."
How Much Do We Excrete?
It’s your kidneys’ job to excrete fluoride. Healthy ones will excrete about 50%. But what if it’s not up to par? This is what the Journal of the American Medical Association had to say in 1972: “Children, the elderly and any person with impaired kidney function are in the high risk group for fluoride poisoning and must be warned to monitor their fluoride intake. Also at high risk are people with immunodeficiencies, diabetes and heart ailments as well as anyone with calcium, magnesium and vitamin C deficiencies. At the level of 0.4ppm renal impairment has been shown.”
How elderly are the elderly? The US Agency for Toxic Substances and Disease Registry reiterated the above statement in 1993 and added: “People over the age of 50 often have decreased fluoride renal clearance.”
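Since the article stresses that fluoride's effects are cumulative and that healthy kidneys excrete about half of the intake, a toy retention model makes the arithmetic concrete (a deliberate oversimplification; real fluoride pharmacokinetics are far more complex than a fixed excretion fraction):

```python
# Toy model: fluoride retained = daily intake x (1 - fraction excreted) x days.
def retained_mg(days: int, intake_mg_per_day: float,
                excreted_fraction: float = 0.5) -> float:
    return days * intake_mg_per_day * (1.0 - excreted_fraction)

# One year at 2 mg/day:
print(retained_mg(365, 2.0))        # healthy kidneys (50% excreted): 365.0 mg
print(retained_mg(365, 2.0, 0.25))  # impaired clearance (25% excreted): 547.5 mg
```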
Fluoride Harms Children
A 5 year study of children under 6 in the USA between 1989 and 1994 found that several hundred children were treated at health care facilities each year because of ingestion of toxic amounts of home-use dental fluoride products i.e. toothpastes, rinses and gels. The frequently cited dose was 5mg per kilogram bodyweight. Outcomes were “generally not serious.”
And what about bathtime? Shampoos, bubble baths and soaps contain sodium lauryl sulphate. It is used by drug companies to increase the absorption of medications that act on the skin. Fluoride can also be absorbed through the skin. Added to bath water, absorption is increased by 9%!
One of the objectives of fluoridation is to even out inequalities in health. But it’s possible that the poorest children will be affected the worst.
The work of Professor Schatz in Chile showed that the more malnourished a child, the more susceptible they were to fluoride toxicity. He believed that high levels of infant mortality there were linked to fluoride ingestion. As a result of his work, fluoridation was stopped in that country, although it was later reinstated.
A diet rich in vitamins and minerals will decrease the intestinal absorption of fluoride. One study found that poorer children had 2.3 times as much dental fluorosis as children from higher income families.
Back in 1952 the Journal of the American Dental Association said that “malnourished infants and children, especially if deficient in calcium intake, may suffer from the effects of water containing fluorine while healthy children would remain unaffected.” This was reaffirmed by Professor Massler of the University of Illinois College of Dentistry in 2000 who said that “lower levels of fluoride ingestion...may not be safe for malnourished infants and children.”
Have poorer children been helped by fluoride? Liverpool has more than twice as many underprivileged children as Gateshead. Yet the rate of dental decay for 5 year olds is the same in each city. What’s more, Gateshead is fluoridated and Liverpool isn’t!
Who Wants Mottled Teeth?
Those in favour of fluoridation do not deny this negative effect. This was reaffirmed by the NHSCRD: “there is a dose response relationship between water fluoride level and the prevalence of fluorosis. Fluorosis appears to occur frequently (48%) at fluoride levels typically used in artificial fluoridation schemes (1ppm). The proportion of fluorosis that is aesthetically concerning is lower (12.5%).”
Is this a minor cosmetic issue or does it indicate toxicity? Surely the latter, since fluoride also accumulates in the bones and suggests enzyme/protein damage. If I were a child I certainly wouldn’t consider permanently stained teeth as just a cosmetic issue.
Fluoride & Bones
Studies published in the 1960s showed that the incidence of osteoporosis was substantially higher in areas where the drinking water contained low levels of fluoride. Another did not support this finding, but found that levels of fluoride higher than those added to the water supply were protective. Fluoride is believed to stimulate bone formation in combination with calcium and vitamin D. It does this by entering into the collagenous matrix of bone to form large hydroxyapatite crystals which are more resistant to osteoclastic attack. However, with skeletal fluorosis the bones may become brittle and more fragile.
Fluoride also seems to be a potent stimulator of osteoblastic bone formation to increase spinal bone mass. However clinical trials have proved disappointing. Vertebral bone densities increased without any decrease in fracture rates and there was an increase in non-vertebral fractures. Even so, many European countries use slow release sodium fluoride as a therapy for osteoporosis.
Only 5 countries in the world fluoridate their water supply to any great degree. Only 2% of the population of Western Europe drink it, and most of those are in England. All supportive studies are of either poor or moderate quality. If the benefits are so obvious, why do so few countries utilise it?
When the idea was first mooted, intakes of fluoride were low. But today we can ingest it from a variety of dental sources, pesticide residues, commercial products and drugs.
Even if fluoride does protect children’s teeth, we don’t need any more than is already in our environment. Dental decay has been falling without the ‘benefit’ of fluoride. Children don’t get decayed teeth because of a shortage of fluoride, but because of nutrition and lifestyle factors. It’s these that need to be addressed. Fluoride ingested by the poorest children will just increase their risk of toxicity.
No doctor would prescribe a drug without a consideration of dosage. And yet when it comes to fluoride, the sky’s the limit, even though fluoride is a known toxin and it accumulates in the body; even though a large percentage of the population will have difficulty excreting it because of health problems or their age.
How do you limit intake to 1mg? Are the Water Police going to raid our homes for the ‘crime’ of drinking more than 1 litre of water a day?
With increasing life expectancy, how many people are going to spend the last decades of their life with bone disease thanks to the accumulation of fluoride over their lifetime?
In short, there is no scientific, medical, ethical or moral case for water fluoridation.
This article was first published in Enzyme Digest No. 64 Spring 2004
Exploring Raspberry Pi: Interfacing to the Real World with Embedded Linux
Description

Expand Raspberry Pi capabilities with fundamental engineering principles
Exploring Raspberry Pi is the innovator's guide to bringing Raspberry Pi to life. This book favors engineering principles over a 'recipe' approach to give you the skills you need to design and build your own projects. You'll understand the fundamental principles in a way that transfers to any type of electronics, electronic modules, or external peripherals, using a "learning by doing" approach that caters to both beginners and experts. The book begins with basic Linux and programming skills, and helps you stock your inventory with common parts and supplies. Next, you'll learn how to make parts work together to achieve the goals of your project, no matter what type of components you use. The companion website provides a full repository that structures all of the code and scripts, along with links to video tutorials and supplementary content that takes you deeper into your project.
The Raspberry Pi's most famous feature is its adaptability. It can be used for thousands of electronic applications, and using the Linux OS expands the functionality even more. This book helps you get the most from your Raspberry Pi, but it also gives you the fundamental engineering skills you need to incorporate any electronics into any project.
- Develop the Linux and programming skills you need to build basic applications
- Build your inventory of parts so you can always "make it work"
- Understand interfacing, controlling, and communicating with almost any component
- Explore advanced applications with video, audio, real-world interactions, and more
Be free to adapt and create with Exploring Raspberry Pi.
Part I Raspberry Pi Basics
Chapter 1 Raspberry Pi Hardware
Chapter 2 Raspberry Pi Software
Chapter 3 Exploring Embedded Linux Systems
Chapter 4 Interfacing Electronics
Chapter 5 Programming on the Raspberry Pi
Part II Interfacing, Controlling, and Communicating
Chapter 6 Interfacing to the Raspberry Pi Input/Outputs
Chapter 7 Cross-Compilation and the Eclipse IDE
Chapter 8 Interfacing to the Raspberry Pi Buses
Chapter 9 Enhancing the Input/Output Interfaces on the RPi
Chapter 10 Interacting with the Physical Environment
Chapter 11 Real-Time Interfacing Using the Arduino
Part III Advanced Interfacing and Interaction
Chapter 12 The Internet of Things
Chapter 13 Wireless Communication and Control
Chapter 14 Raspberry Pi with a Rich User Interface
Chapter 15 Images, Video, and Audio
Chapter 16 Kernel Programming
From Wikipedia, the free encyclopedia
John F. Kennedy
Coat of arms of John F. Kennedy
Date of origin: 1961
Shield: Sable three helmets in profile Or within a bordure per saltire gules and ermine.
Crest and mantle: Upon a torse Or and sable, between two olive branches a cubit sinister arm in armour erect, the hand holding a sheaf of four arrows points upwards all proper; the mantling gules doubled argent.
John Fitzgerald "Jack" Kennedy (May 29, 1917 – November 22, 1963), often referred to by his initials JFK, was the 35th President of the United States, serving from 1961 until his assassination in 1963.
After Kennedy's military service as commander of the Motor Torpedo Boat PT-109 during World War II in the South Pacific, his aspirations turned political. With the encouragement and grooming of his father, Joseph P. Kennedy, Sr., Kennedy represented Massachusetts's 11th congressional district in the U.S. House of Representatives from 1947 to 1953 as a Democrat, and served in the U.S. Senate from 1953 until 1960. Kennedy defeated then Vice President and Republican candidate Richard Nixon in the 1960 U.S. presidential election, one of the closest in American history. He was the second-youngest President (after Theodore Roosevelt), the first President born in the 20th century, and the youngest elected to the office, at the age of 43. Kennedy is the first and only Catholic and the first Irish American president, and is the only president to have won a Pulitzer Prize. Events during his administration include the Bay of Pigs Invasion, the Cuban Missile Crisis, the building of the Berlin Wall, the Space Race, the African American Civil Rights Movement and early stages of the Vietnam War.
Kennedy was assassinated on November 22, 1963, in Dallas, Texas. Lee Harvey Oswald was charged with the crime but was shot and killed two days later by Jack Ruby before he could be put on trial. The FBI, the Warren Commission, and the House Select Committee on Assassinations concluded that Oswald was the assassin, with the HSCA allowing for the probability of conspiracy based on disputed acoustic evidence. The event proved to be an important moment in U.S. history because of its impact on the nation and the ensuing political repercussions. Today, Kennedy continues to rank highly in public opinion ratings of former U.S. presidents.
Early life and education
Kennedy was born at 83 Beals Street in Brookline, Massachusetts on Tuesday, May 29, 1917, at 3:00 p.m., the second son of Joseph P. Kennedy, Sr., and Rose Fitzgerald; Rose, in turn, was the eldest child of John "Honey Fitz" Fitzgerald, a prominent Boston political figure who was the city's mayor and a three-term member of Congress. Kennedy lived in Brookline for his first ten years of life. He attended Brookline's public Edward Devotion School from kindergarten through the beginning of 3rd grade, then Noble and Greenough Lower School and its successor, the Dexter School, a private school for boys, through 4th grade. In September 1927, Kennedy moved with his family to a rented 20-room mansion in Riverdale, Bronx, New York City, then two years later moved five miles (8 km) northeast to a 21-room mansion on a six-acre estate in Bronxville, New York, purchased in May 1929. He was a member of Scout Troop 2 at Bronxville from 1929 to 1931 and was to be the first Boy Scout to become President. Kennedy spent summers with his family at their home in Hyannisport, Massachusetts, also purchased in 1929, and Christmas and Easter holidays with his family at their winter home in Palm Beach, Florida, purchased in 1933. In his primary school years, he attended Riverdale Country School, a private school for boys in Riverdale, for 5th through 7th grade.
For 8th grade in September 1930, the 13-year-old Kennedy was sent fifty miles away to Canterbury School, a lay Catholic boarding school for boys in New Milford, Connecticut. In late April 1931, he had appendicitis requiring an appendectomy, after which he withdrew from Canterbury and recuperated at home.
In September 1931, Kennedy was sent to The Choate School (now Choate Rosemary Hall), an elite boys' boarding school in Wallingford, Connecticut, for his 9th through 12th grade years. His older brother, Joe Jr., was already at Choate, two years ahead of him, a football star and leading student in the school. Jack thus spent his first years at Choate in his brother's shadow. He reacted with rebellious behavior that attracted a coterie. Their most notorious stunt was to explode a toilet seat with a powerful firecracker. In the ensuing chapel assembly the autocratic headmaster, George St. John, brandished the toilet seat and spoke of certain "muckers" who would "spit in our sea." The defiant Jack Kennedy took the cue and named his group "The Muckers Club." Kennedy remained close friends to the end of his life with several of his Choate fellows, especially Kirk LeMoyne "Lem" Billings. Throughout his years at Choate, Kennedy was beset by health problems, culminating in 1934 with his emergency hospitalization at Yale-New Haven Hospital from January until March. In June 1934 he was admitted to the Mayo Clinic in Rochester, Minnesota and diagnosed with colitis. When Kennedy graduated from Choate in June 1935, his superlative in The Brief, the school yearbook (of which he had been business manager), was "Most likely to Succeed."
In September 1935, he sailed on the SS Normandie on his first trip abroad with his parents and his sister Kathleen to London with the intent of studying for a year with Professor Harold Laski at the London School of Economics (LSE) as his elder brother Joe had done. Mystery surrounds his time at LSE and there is uncertainty about how long he spent there before returning to America. In October 1935, Kennedy enrolled late and spent six weeks at Princeton University. He was then hospitalized for two months' observation for possible leukemia at Peter Bent Brigham Hospital in Boston in January and February 1936. He recuperated at the Kennedy winter home in Palm Beach in March and April, spent May and June working as a ranch hand on a 40,000-acre (160 km²) cattle ranch outside Benson, Arizona, and in July and August raced sailboats at the Kennedy summer home in Hyannisport.
In September 1936 he enrolled as a freshman at Harvard College, where he produced that year's annual Freshman Smoker, called by a reviewer "an elaborate entertainment, which included in its cast outstanding personalities of the radio, screen and sports world." He tried out for the football, golf, and swimming teams. He earned a spot on the varsity swim team. He resided in Winthrop House during his sophomore through senior years, again following two years behind his elder brother, Joe. In early July 1937, Kennedy took his convertible, sailed on the SS Washington to France, and spent ten weeks driving with a friend through France, Italy, Germany, Holland, and England. In late June 1938, Kennedy sailed with his father and his brother Joe on the SS Normandie to spend July working with his father, recently appointed U.S. Ambassador to the Court of St. James's by President Roosevelt, at the American embassy in London, and August with his family at a villa near Cannes. From February through September 1939, Kennedy toured Europe, the Soviet Union, the Balkans, and the Middle East to gather background information for his Harvard senior honors thesis. He spent the last ten days of August in Czechoslovakia and Germany before returning to London on September 1, 1939, the day Germany invaded Poland. On September 3, 1939, Kennedy and his family were in attendance at the Strangers' Gallery of the House of Commons to hear speeches in support of the United Kingdom's declaration of war on Germany. Kennedy was sent as his father's representative to help with arrangements for American survivors of the SS Athenia, before flying back to the U.S. on Pan Am's Dixie Clipper from Foynes, Ireland to Port Washington, New York on his first transatlantic flight at the end of September.
In 1940, Kennedy completed his thesis, "Appeasement in Munich," about British participation in the Munich Agreement. He initially intended his thesis to be private, but his father encouraged him to publish it as a book. He graduated cum laude from Harvard with a degree in international affairs in June 1940, and his thesis was published in July 1940 as a book entitled Why England Slept, and became a bestseller. From September to December 1940, Kennedy was enrolled and audited classes at the Stanford Graduate School of Business. In early 1941, he helped his father complete the writing of a memoir of his three years as an American ambassador. In May and June 1941, Kennedy traveled throughout South America.
In the spring of 1941, Kennedy volunteered for the U.S. Army, but was rejected, because of his chronic lower back problems. Nevertheless, in September of that year, the U.S. Navy accepted him, because of the influence of the director of the Office of Naval Intelligence (ONI), a former naval attaché to Joseph Kennedy. As an ensign, Kennedy served in the office which supplied bulletins and briefing information for the Secretary of the Navy. It was during this assignment that the attack on Pearl Harbor occurred. He attended the Naval Reserve Officer Training Corps and Motor Torpedo Boat Squadron Training Center before being assigned for duty in Panama and eventually the Pacific theater. He participated in various commands in the Pacific theater and earned the rank of lieutenant, commanding a patrol torpedo (PT) boat.
On August 2, 1943, Kennedy's boat, the PT-109, along with PT-162 and PT-169, was ordered to continue a nighttime patrol near New Georgia in the Solomon Islands when it was rammed by the Japanese destroyer Amagiri. Kennedy was thrown across the deck, injuring his already-troubled back. Nonetheless, Kennedy gathered his men together and swam, towing a badly burned crewman by using a life jacket strap he clenched in his teeth. He towed the wounded man to an island and later to a second island, from where his crew was subsequently rescued. For these actions, Kennedy received the Navy and Marine Corps Medal under the following citation:
For extremely heroic conduct as Commanding Officer of Motor Torpedo Boat 109 following the collision and sinking of that vessel in the Pacific War Theater on August 1–2, 1943. Unmindful of personal danger, Lieutenant (then Lieutenant, Junior Grade) Kennedy unhesitatingly braved the difficulties and hazards of darkness to direct rescue operations, swimming many hours to secure aid and food after he had succeeded in getting his crew ashore. His outstanding courage, endurance and leadership contributed to the saving of several lives and were in keeping with the highest traditions of the United States Naval Service.
However, General Douglas MacArthur had a different opinion about the event: "Those PT boats carried only one torpedo [sic]. They were under orders to fire it and then get out. They were defenseless. Kennedy hung around, however, and let a Japanese destroyer mow him down. When I heard about it, I talked to his superior officer. He should have been court-martialed."
In October 1943, Kennedy took command of Motor Torpedo Boat PT-59, which was converted from a torpedo boat to a gunboat. On the night of November 2, 1943, the PT-59 and PT-236 took part in the rescue of ambushed Marines on Choiseul Island. Kennedy was honorably discharged in early 1945, just a few months before Japan surrendered. Kennedy's other decorations in World War II included the Purple Heart, American Defense Service Medal, American Campaign Medal, Asiatic-Pacific Campaign Medal with three bronze service stars, and the World War II Victory Medal.
The incident of the PT-109 was popularized when he became president and would be the subject of several magazine articles, books, comic books, TV specials, and a feature-length movie, making the PT-109 one of the most famous U.S. Navy ships of the war. Scale models and even a G.I. Joe figure based on the incident were still being produced in the 2000s. The coconut on which he scrawled a rescue message, given to the Solomon Islander scouts who found him, was kept on his presidential desk and is still at the John F. Kennedy Library.
During his presidency, Kennedy privately admitted to friends that he didn't feel that he deserved the medals he had received, because the PT-109 incident had been the result of a botched military operation that had cost the lives of two members of his crew. When later asked by a reporter how he became a war hero, Kennedy (known for a sense of humor) joked: "It was involuntary. They sank my boat."
In May 2002, a National Geographic expedition led by Robert Ballard found what is believed to be the wreckage of the PT-109 in the Solomon Islands.
Early political career
After World War II, Kennedy had considered the option of becoming a journalist before deciding to run for political office. Prior to the war, he had not strongly considered becoming a politician as a career, because his family, especially his father, had already pinned its political hopes on his elder brother. Joseph, however, was killed in World War II, giving John seniority. When in 1946 U.S. Representative James Michael Curley vacated his seat in an overwhelmingly Democratic district to become mayor of Boston, Kennedy ran for the seat, beating his Republican opponent by a large margin. He was a congressman for six years but had a mixed voting record, often diverging from President Harry S. Truman and the rest of the Democratic Party. In 1952, he defeated incumbent Republican Henry Cabot Lodge, Jr. for the U.S. Senate.
Kennedy married Jacqueline Lee Bouvier on September 12, 1953. Charles L. Bartlett, a journalist, introduced the pair at a dinner party. Kennedy underwent several spinal operations over the following two years, nearly dying (in all he received the Catholic Church's last rites four times during his life) and was often absent from the Senate. During his convalescence in 1956, he published Profiles in Courage, a book describing eight instances in which U.S. Senators risked their careers by standing by their personal beliefs. The book was awarded the Pulitzer Prize for Biography in 1957. From the time of publication, there have been rumors that this work was actually coauthored by his close adviser Ted Sorensen, who had joined his Senate office staff in 1953 and would serve as a speechwriter for Kennedy until his death. In May 2008, Sorensen confirmed these rumors in his autobiography.
In the 1956 presidential election, presidential nominee Adlai Stevenson left the choice of a Vice Presidential nominee to the Democratic convention, and Kennedy finished second in that balloting to Senator Estes Kefauver of Tennessee. Despite this defeat, Kennedy received national exposure from that episode that would prove valuable in subsequent years. His father, Joseph Kennedy, Sr., pointed out that it was just as well that John did not get that nomination, as some people sought to blame anything they could on Catholics, even though it was privately known that any Democrat would have trouble running against Eisenhower in 1956.
The Civil Rights Act of 1957 was put forward by President Eisenhower, but he "conceded" there were aspects of it he didn't understand. This led Southern senators to "emasculate" his bill. Kennedy voted against letting the bill bypass the Senate Judiciary Committee, which was led by Senator James Eastland, a segregationist from Mississippi. Kennedy argued that procedure should be followed and that the bill could be voted on in the full Senate after a motion to discharge by the committee, but his vote was seen by some as appeasement of Southern opponents. Kennedy voted for Title III of the proposed act, which would have given the Attorney General injunctive powers, but Lyndon Johnson agreed to let the provision die as a compromise measure. After consulting two Harvard legal scholars, Kennedy voted for Title IV, the "Jury Trial Amendment", which in cases of criminal contempt called for conviction by jury. Many civil rights advocates at the time criticized the vote as one that would render the Act too weak. A compromise final bill, which Kennedy supported, was passed in September. Staunch segregationists such as senators James Eastland and John McClellan and Mississippi Governor James P. Coleman were early supporters of Kennedy's presidential campaign. In 1958, Kennedy was re-elected to a second term in the United States Senate, defeating his Republican opponent, Boston lawyer Vincent J. Celeste, by a wide margin.
Senator Joseph McCarthy was a friend of the Kennedy family: Joseph Kennedy, Sr. was a leading McCarthy supporter; Robert F. Kennedy worked for McCarthy's subcommittee, and McCarthy dated Patricia Kennedy. In 1954, when the Senate was poised to condemn McCarthy, John Kennedy drafted a speech calling for McCarthy's censure, but never delivered it. When on December 2, 1954, the Senate rendered its highly publicized decision to censure McCarthy, Senator Kennedy was in the hospital. Though absent, Kennedy could have "paired" his vote against that of another senator, but chose not to; nor did he ever indicate, then or later, how he would have voted. The episode damaged Kennedy's support in the liberal community, especially with Eleanor Roosevelt, and lingered through the 1956 and 1960 elections.
1960 presidential election
On January 2, 1960, Kennedy officially declared his intent to run for President of the United States. In the Democratic primary election, he faced challenges from Senator Hubert Humphrey of Minnesota and Senator Wayne Morse of Oregon. Kennedy defeated Humphrey in Wisconsin and West Virginia and Morse in Maryland and Oregon, although Morse's candidacy is often forgotten by historians. He also defeated token opposition (often write-in candidates) in New Hampshire, Indiana, and Nebraska. In West Virginia, Kennedy visited a coal mine and talked to mine workers to win their support; most people in that conservative, mostly Protestant state were deeply suspicious of Kennedy's Roman Catholicism. His victory in West Virginia cemented his credentials as a candidate with broad popular appeal. At the Democratic Convention, he gave the well-known "New Frontier" speech, which represented the changes America and the rest of the world would be going through: "For the problems are not all solved and the battles are not all won—and we stand today on the edge of a New Frontier ... But the New Frontier of which I speak is not a set of promises—it is a set of challenges. It sums up not what I intend to offer the American people, but what I intend to ask of them."
With Humphrey and Morse out of the race, Kennedy's main opponent at the convention in Los Angeles was Senator Lyndon B. Johnson of Texas. Adlai Stevenson, the Democratic nominee in 1952 and 1956, was not officially running but had broad grassroots support inside and outside the convention hall. Senator Stuart Symington of Missouri was also a candidate, as were several favorite sons. On July 13, 1960, the Democratic convention nominated Kennedy as its candidate for President. Kennedy asked Johnson to be his Vice Presidential candidate, despite opposition from many liberal delegates and Kennedy's own staff, including Robert Kennedy. He needed Johnson's strength in the South to win what was considered likely to be the closest election since 1916. Major issues included how to get the economy moving again, Kennedy's Roman Catholicism, Cuba, and whether the Soviet space and missile programs had surpassed those of the U.S. To address fears that the fact that he was Catholic would impact his decision-making, he famously told the Greater Houston Ministerial Association on September 12, 1960, "I am not the Catholic candidate for President. I am the Democratic Party candidate for President who also happens to be a Catholic. I do not speak for my Church on public matters — and the Church does not speak for me." Kennedy also brought up the point of whether one-quarter of Americans were relegated to second-class citizenship just because they were Catholic.
In September and October, Kennedy debated Republican candidate and Vice President Richard Nixon in the first televised U.S. presidential debates in U.S. history. During these programs, Nixon, nursing an injured leg and sporting "five o'clock shadow", looked tense and uncomfortable, while Kennedy appeared relaxed, leading the huge television audience to deem Kennedy the winner. Radio listeners, however, either thought Nixon had won or that the debates were a draw. Nixon did not wear make-up during the initial debate, unlike Kennedy. The debates are now considered a milestone in American political history—the point at which the medium of television began to play a dominant role in national politics. After the first debate Kennedy's campaign gained momentum and he pulled slightly ahead of Nixon in most polls. On Tuesday, November 8, Kennedy defeated Nixon in one of the closest presidential elections of the twentieth century. In the national popular vote Kennedy led Nixon by just two-tenths of one percent (49.7% to 49.5%), while in the Electoral College he won 303 votes to Nixon's 219 (269 were needed to win). Another 14 electors from Mississippi and Alabama refused to support Kennedy because of his support for the civil rights movement; they voted for Senator Harry F. Byrd, Sr. of Virginia.
John F. Kennedy was sworn in as the 35th President at noon on January 20, 1961. In his inaugural address he spoke of the need for all Americans to be active citizens, famously saying, "Ask not what your country can do for you; ask what you can do for your country." He also asked the nations of the world to join together to fight what he called the "common enemies of man: tyranny, poverty, disease, and war itself." He added: "All this will not be finished in the first one hundred days. Nor will it be finished in the first one thousand days, nor in the life of this Administration, nor even perhaps in our lifetime on this planet. But let us begin." In closing, he expanded on his desire for greater internationalism: "Finally, whether you are citizens of America or citizens of the world, ask of us here the same high standards of strength and sacrifice which we ask of you."
President Kennedy's foreign policy was dominated by American-Soviet relations. Much foreign policy revolved around proxy interventions in the context of the early stage Cold War.
John F. Kennedy gave a speech at Saint Anselm College on May 5, 1960, regarding America's conduct in the new realities of the emerging Cold War. Kennedy's speech detailed how American foreign policy should be conducted towards African nations, noting a hint of support for modern African nationalism by saying that "For we, too, founded a new nation on revolt from colonial rule".
Cuba and the Bay of Pigs Invasion
Prior to Kennedy's election to the presidency, the Eisenhower Administration created a plan to overthrow the Fidel Castro regime in Cuba. Central to the plan, which was structured and detailed by the Central Intelligence Agency (CIA) with approval from the US Military but with minimal input from the United States Department of State, was the arming of a counter-revolutionary insurgency composed of anti-Castro Cubans. U.S.-trained Cuban insurgents, led by CIA paramilitary officers from the Special Activities Division, were to invade Cuba and instigate an uprising among the Cuban people in hopes of removing Castro from power. On April 17, 1961, Kennedy ordered the previously planned invasion of Cuba to proceed. With support from the CIA, in what is known as the Bay of Pigs Invasion, 1,500 U.S.-trained Cuban exiles, called "Brigade 2506," returned to the island in the hope of deposing Castro. However, Kennedy ordered the invasion to take place without U.S. air support. By April 19, 1961, the Cuban government had captured or killed the invading exiles, and Kennedy was forced to negotiate for the release of the 1,189 survivors. The failure of the plan originated in a lack of dialogue among the military leadership, which left the landing force with no naval support in the face of organized artillery troops on the island, who easily incapacitated the exiles as they landed on the beach. After twenty months, Cuba released the captured exiles in exchange for $53 million worth of food and medicine. Furthermore, the incident made Castro wary of the U.S. and led him to believe that another invasion would occur.
Cuban Missile Crisis
Kennedy addressing the nation on October 22, 1962 about the buildup of arms on Cuba
The Cuban Missile Crisis began on October 14, 1962, when CIA U-2 spy planes took photographs of a Soviet intermediate-range ballistic missile site under construction in Cuba. The photos were shown to Kennedy on October 16, 1962. The United States would soon be posed with a serious nuclear threat. Kennedy faced a dilemma: if the U.S. attacked the sites, it might lead to nuclear war with the U.S.S.R., but if the U.S. did nothing, it would endure the threat of nuclear weapons being launched from close range. Because the weapons were in such proximity, the U.S. might have been unable to retaliate if they were launched pre-emptively. Another consideration was that the U.S. would appear to the world as weak in its own hemisphere.
Many military officials and cabinet members pressed for an air assault on the missile sites, but Kennedy instead ordered a naval quarantine in which the U.S. Navy inspected all ships arriving in Cuba. He began negotiations with the Soviets and demanded that they remove all offensive weapons being installed in Cuba; until they did, the quarantine would remain in force. A week later, he and Soviet Premier Nikita Khrushchev reached a cordial, lasting agreement: Khrushchev agreed to remove the missiles, subject to U.N. inspections, if the U.S. publicly promised never to invade Cuba and quietly removed its Jupiter missiles stationed in Turkey. The removal of the Jupiter missiles was not a great concession, as they were viewed as obsolete and Kennedy believed the U.S. Navy's Polaris submarines could fill their role. The crisis brought the world closer to nuclear war than at any point before or since. In the end, "the humanity" of the two men prevailed.
Latin America and communism
Arguing that "those who make peaceful revolution impossible, will make violent revolution inevitable," Kennedy sought to contain communism in Latin America by establishing the Alliance for Progress, which sent foreign aid to troubled countries in the region and sought greater human rights standards there. He worked closely with Governor of Puerto Rico Luis Muñoz Marín on the development of the Alliance for Progress, as well as on developments in the autonomy of the Commonwealth of Puerto Rico.
As one of his first presidential acts, Kennedy asked Congress to create the Peace Corps. Through this program, Americans volunteer to help underdeveloped nations in areas such as education, farming, health care, and construction.
The extent of Kennedy's involvement in Vietnam remained classified until the release of the Pentagon Papers in 1971.
In Southeast Asia, Kennedy followed Eisenhower's lead by using limited military action as early as 1961 to fight the Communist forces led by Ho Chi Minh. Proclaiming a fight against the spread of Communism, Kennedy enacted policies providing political, economic, and military support for the unstable French-installed South Vietnamese government, which included sending 16,000 military advisors and U.S. Special Forces to the area. Kennedy also authorized the use of free-fire zones, napalm, defoliants, and jet planes. U.S. involvement in the area escalated until Lyndon Johnson, his successor, directly deployed regular U.S. forces for fighting the Vietnam War.
By July 1963, Kennedy faced a crisis in Vietnam: despite increased U.S. support, the South Vietnamese military was only marginally effective against pro-Communist Viet Minh and Viet Cong forces. Regarding Ngo Dinh Diem, the Catholic President of South Vietnam, as insufficiently anti-Communist, the U.S. gave secret assurances of non-interference for an impending coup d'état. On November 1, 1963, South Vietnamese generals overthrew the Diem government, arresting and soon killing Diem (though the circumstances of his death were obfuscated). Kennedy sanctioned Diem's overthrow. One reason to support the coup was a fear that Diem might negotiate a neutralist coalition government which included Communists, as had occurred in Laos in 1962. Dean Rusk, Secretary of State, remarked "This kind of neutralism...is tantamount to surrender."
During his time in office, Kennedy increased the number of U.S. military personnel in Vietnam from 800 to 16,300. It remains a point of some controversy among historians whether or not Vietnam would have escalated to the point it did had Kennedy served out his full term and been re-elected in 1964. Fueling the debate are statements made by Kennedy and Johnson's Secretary of Defense Robert McNamara that Kennedy was strongly considering pulling out of Vietnam after the 1964 election. In the film "The Fog of War", not only does McNamara say this, but a tape recording of Lyndon Johnson confirms that Kennedy was planning to withdraw from Vietnam, a position Johnson states he strongly disapproved of. Additional evidence is Kennedy's National Security Action Memorandum (NSAM) 263, dated October 11, 1963, which ordered the withdrawal of 1,000 military personnel by the end of 1963. Nevertheless, given the stated reason for the overthrow of the Diem government, such action would have been a policy reversal, but Kennedy had generally been moving in a less hawkish direction in the Cold War since his acclaimed speech on world peace at American University on June 10, 1963. According to historian Lawrence Freedman, Kennedy's statements about withdrawing from Vietnam were "less of a definite decision than a working assumption, based on a hope for stability rather than an expectation of chaos".
After Kennedy's assassination, the new President Lyndon B. Johnson immediately reversed his predecessor's order to withdraw 1,000 military personnel by the end of 1963 with his own NSAM 273 on November 26, 1963.
American University speech
Speech from American University by John F. Kennedy, June 10, 1963. Duration 26:47.
On June 10, 1963, Kennedy delivered the commencement address at American University in Washington, D.C., proclaiming that "The United States, as the world knows, will never start a war. We do not want a war. We do not now expect a war," but cautioning that, "We shall be prepared if others wish it. We shall be alert to try to stop it. But we shall also do our part to build a world of peace where the weak are safe and the strong are just."
West Berlin speech
Speech from the Berlin Wall by John F. Kennedy, June 26, 1963. Duration 9:22.
Under simultaneous and opposing pressures from the Allies and the Soviets, Germany was divided. The Berlin Wall separated West and East Berlin, the latter being under the control of the Soviets. On June 26, 1963, Kennedy visited West Berlin and gave a public speech criticizing communism. Kennedy used the construction of the Berlin Wall as an example of the failures of communism: "Freedom has many difficulties and democracy is not perfect, but we have never had to put a wall up to keep our people in." The speech is known for its famous phrase "Ich bin ein Berliner". Nearly five-sixths of the population was on the street when Kennedy said the famous phrase. He remarked to aides afterwards: "We'll never have another day like this one."
During Kennedy's time in office he encountered problems with the Israeli government regarding the production of nuclear weapons in Dimona. Although the existence of a nuclear plant was initially denied by the Israeli government, David Ben-Gurion, in a speech to the Israeli Knesset on December 21, 1960, stated that the purpose of the nuclear plant established at Beersheba was for "research in problems of arid zones and desert flora and fauna". When Ben-Gurion met with Kennedy in New York, he claimed that Dimona was being developed to provide nuclear power for desalinization and that "for the time being the only purposes [of the nuclear plant] are for peace". Kennedy did not believe this, and in May 1963 sent a letter to Ben-Gurion stating, "this commitment and this support would seriously be jeopardized in the public opinion in this country and the West as a whole if it should be thought that this Government was unable to obtain reliable information on a subject as vital to peace as Israel's efforts in the nuclear field." Ben-Gurion repeated previous reassurances that Dimona was being developed for peaceful purposes, and Israel firmly resisted American pressure to open its nuclear facilities to International Atomic Energy Agency (IAEA) inspections. According to Seymour Hersh, the Israelis set up false control rooms to show American inspectors. Abe Feinberg stated, "It was part of my job to tip them off that Kennedy was insisting on [an inspection]." The State Department argued that if Israel wanted U.S. tanks, it should be prepared in return to accept international supervision of its nuclear program. Kennedy tried to control the arms being sold and given to Israel because the Israelis would not sign the IAEA compacts for the Dimona nuclear site; they would not fully admit its purpose and continued to insist it was for peaceful energy purposes.
In early March 1965, the director of the State Department's Office of Near Eastern Affairs, Rodger P. Davies, had come to the conclusion that Israel was developing nuclear weapons. He reported that the target date for acquisition of a nuclear capability by Israel was 1968-69. A science attache at the embassy in Tel Aviv had concluded that parts of the Dimona facility had been "purposely mothballed" to mislead American scientists during their visit. Dimona was never placed under IAEA safeguards despite efforts made by various U.S. administrations and presidents. On May 1, 1968, Undersecretary of State Katzenbach told President Johnson that Dimona was producing enough plutonium to produce two bombs a year. Attempts to write Israeli adherence to the NPT into contracts for the supply of U.S. weapons continued throughout 1968.
In 1963, the Kennedy administration backed a coup against the government of Iraq headed by General Abdel Karim Kassem, who five years earlier had deposed the Western-allied Iraqi monarchy. The CIA helped the new Ba'ath Party government led by Abdul Salam Arif in ridding the country of suspected leftists and Communists. In the Ba'athist coup, the government used lists of suspected Communists and other leftists provided by the CIA to systematically murder untold numbers of Iraq's educated elite—killings in which Saddam Hussein himself is said to have participated. The victims included hundreds of doctors, teachers, technicians, lawyers, and other professionals as well as military and political figures. According to an op-ed in The New York Times, the U.S. sent arms to the new regime—weapons later used against the same Kurdish insurgents the U.S. had supported against Kassem and then abandoned. American and UK oil and other interests, including Mobil, Bechtel, and British Petroleum, were conducting business in Iraq.
On the occasion of his visit to the Republic of Ireland in 1963, President Kennedy joined with Irish President Éamon de Valera to form The American Irish Foundation. The mission of this organization was to foster connections between Americans of Irish descent and the country of their ancestry. Kennedy furthered these connections of cultural solidarity by accepting a grant of armorial bearings from the Chief Herald of Ireland. Kennedy had near-legendary status in Ireland, due to his ancestral ties to the country. Irish citizens who were alive in 1963 often have very strong memories of Kennedy's momentous visit. He also visited the original cottage at Dunganstown, near New Ross, where previous Kennedys had lived before emigrating to America, and said: "This is where it all began ..." On December 22, 2006, the Irish Department of Justice released declassified police documents indicating that Kennedy was the subject of three death threats during this visit. Though these threats were determined to be hoaxes, security was heightened.
Nuclear Test Ban Treaty
Troubled by the long-term dangers of radioactive contamination and nuclear weapons proliferation, Kennedy pushed for the adoption of a Limited or Partial Test Ban Treaty, which prohibited atomic testing on the ground, in the atmosphere, or underwater, but did not prohibit testing underground. The United States, the United Kingdom, and the Soviet Union were the initial signatories to the treaty. Kennedy signed the treaty into law in August 1963.
Kennedy called his domestic program the "New Frontier". It ambitiously promised federal funding for education, medical care for the elderly, economic aid to rural regions, and government intervention to halt the recession. Kennedy also promised an end to racial discrimination. In 1963, he proposed a tax reform which included income tax cuts, but this was not passed by Congress until 1964, after his death. Few of Kennedy's major programs passed Congress during his lifetime, although, under his successor Johnson, Congress did vote them through in 1964–65.
Kennedy ended a period of tight fiscal policies, loosening monetary policy to keep interest rates down and encourage growth of the economy. Kennedy presided over the first government budget to top the $100 billion mark, in 1962, and his first budget in 1961 led to the country's first non-war, non-recession deficit. The economy, which had been through two recessions in three years and was in one when Kennedy took office, accelerated notably during his brief presidency. Despite low inflation and interest rates, GDP had grown by an average of only 2.2% during the Eisenhower presidency (scarcely more than population growth at the time), and had declined by 1% during Eisenhower's last twelve months in office. Stagnation had taken a toll on the nation's labor market as well: unemployment had risen steadily from under 3% in 1953 to 7% by early 1961.
The economy turned around and prospered during the Kennedy administration. GDP expanded by an average of 5.5% from early 1961 to late 1963, while inflation remained steady at around 1% and unemployment began to ease; industrial production rose by 15% and motor vehicle sales leapt by 40%. This rate of growth in GDP and industry continued until around 1966, and has yet to be repeated for such a sustained period of time.
Federal and military death penalty
As President, Kennedy oversaw the last pre-Furman federal execution, and, as of 2008, the last military execution. Governor of Iowa Harold Hughes, a death penalty opponent, personally contacted Kennedy to request clemency for Victor Feguer, who was sentenced to death by a federal court in Iowa, but Kennedy turned down the request and Feguer was executed on March 15, 1963. Kennedy commuted a death sentence imposed by military court on seaman Jimmie Henderson on February 12, 1962, changing the penalty to life in prison.
On March 22, 1962, Kennedy signed into law HR5143 (PL87-423), abolishing the mandatory death penalty for first degree murder in the District of Columbia, the only remaining jurisdiction in the United States with a mandatory death sentence for first degree murder, replacing it with life imprisonment with parole if the jury could not decide between life imprisonment and the death penalty, or if the jury chose life imprisonment by a unanimous vote. The death penalty in the District of Columbia has not been applied since 1957, and has now been abolished.
The turbulent end of state-sanctioned racial discrimination was one of the most pressing domestic issues of Kennedy's era. The United States Supreme Court had ruled in 1954 in Brown v. Board of Education that racial segregation in public schools was unconstitutional. However, many schools, especially in southern states, did not obey the Supreme Court's judgment. Segregation on buses, in restaurants, movie theaters, bathrooms, and other public places remained. Kennedy supported racial integration and civil rights, and during the 1960 campaign he telephoned Coretta Scott King, wife of the jailed Reverend Martin Luther King, Jr., which perhaps drew some additional black support to his candidacy. John and Robert Kennedy's intervention secured the early release of King from jail.
In September 1962, James Meredith tried to enroll at the University of Mississippi, but he was prevented from doing so by white students and other Mississippians. Robert Kennedy, then Attorney General, responded by sending some 400 U.S. Marshals, while President Kennedy reluctantly sent about 3,000 federal troops after the situation on campus turned violent. Riots at the campus left two dead and dozens injured. Meredith finally enrolled in his first class. Kennedy also assigned federal marshals to protect Freedom Riders.
As President, Kennedy initially believed the grass roots movement for civil rights would only anger many Southern whites and make it even more difficult to pass civil rights laws through Congress, which was dominated by conservative Southern Democrats, and he distanced himself from it. As a result, many civil rights leaders viewed Kennedy as unsupportive of their efforts.
On June 11, 1963, President Kennedy intervened when Alabama Governor George Wallace blocked the doorway to the University of Alabama to stop two African American students, Vivian Malone and James Hood, from enrolling. Wallace moved aside after being confronted by federal marshals, Deputy Attorney General Nicholas Katzenbach and the Alabama National Guard. That evening Kennedy gave his famous civil rights address on national television and radio. Kennedy proposed what would become the Civil Rights Act of 1964.
Kennedy signed the executive order creating the Presidential Commission on the Status of Women in 1961. Commission statistics revealed that women were also experiencing discrimination. Their final report documenting legal and cultural barriers was issued in October 1963, a month before Kennedy's assassination.
In 1963, FBI Director J. Edgar Hoover, who hated civil-rights leader Martin Luther King, Jr. and viewed him as an upstart troublemaker, presented the Kennedy Administration with allegations that some of King's close confidants and advisers were communists. Concerned that the allegations, if made public, would derail the Administration's civil rights initiatives, Robert Kennedy warned King to discontinue the suspect associations, and later felt compelled to issue a written directive authorizing the FBI to wiretap King and other leaders of the Southern Christian Leadership Conference, King's civil rights organization. Although Kennedy only gave written approval for limited wiretapping of King's phones "on a trial basis, for a month or so", Hoover extended the clearance so his men were "unshackled" to look for evidence in any areas of King's life they deemed worthy. The wiretapping continued through June 1966 and was revealed in 1968.
Due to a recession, Kennedy used the power of federal agencies to influence US Steel not to institute a price increase. The Wall Street Journal wrote that the administration had set prices of steel "by naked power, by threats, by agents of the state security police." Yale law professor Charles Reich wrote in The New Republic that the administration had violated civil liberties by calling a grand jury to indict US Steel so quickly.
John F. Kennedy initially proposed an overhaul of American immigration policy that later was to become the Immigration and Nationality Act of 1965, sponsored by Kennedy's brother Senator Edward Kennedy. It dramatically shifted the source of immigration from Northern and Western European countries towards immigration from Latin America and Asia and shifted the emphasis of selection of immigrants towards facilitating family reunification. Kennedy wanted to dismantle the selection of immigrants based on country of origin and saw this as an extension of his civil rights policies.
Kennedy speaking at Rice University on September 12, 1962, committing the United States to put a man on the moon by the end of the 1960s.
Kennedy was eager for the United States to lead the way in the Space Race. Sergei Khrushchev says Kennedy approached his father, Nikita, twice about a "joint venture" in space exploration—in June 1961 and autumn 1963. On the first occasion, the Soviet Union was far ahead of America in terms of space technology. Kennedy first announced the goal of landing a man on the Moon in speaking to a Joint Session of Congress on May 25, 1961, saying
"First, I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him back safely to the earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish."
Kennedy later made a speech at Rice University on September 12, 1962, in which he said
"No nation which expects to be the leader of other nations can expect to stay behind in this race for space."
"We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard."
On November 21, 1962, however, in a Cabinet Room meeting with NASA Administrator James Webb and other officials, Kennedy said
"This is important for political reasons, international political reasons... Because otherwise we shouldn't be spending this kind of money, because I'm not that interested in space. I think it's good, I think we ought to know about it, we're ready to spend reasonable amounts of money. But...we’ve spent fantastic expenditures, we’ve wrecked our budget on all these other domestic programs, and the only justification for it, in my opinion, to do it in the pell-mell fashion is because we hope to beat them [the Soviets] and demonstrate that starting behind, as we did by a couple of years, by God, we passed them. I think it would be a helluva thing for us."
By the time of the second approach to Khrushchev, the Soviet premier was persuaded that cost-sharing would be beneficial and that American space technology was forging ahead: the U.S. had launched a geostationary satellite, and Kennedy had asked Congress to approve more than $25 billion for the Apollo Project.
Khrushchev agreed to a joint venture in late 1963, but Kennedy was assassinated before the agreement could be formalized. On July 20, 1969, almost six years after his death, Project Apollo's goal was finally realized when men landed on the Moon.
Native American relations
Construction of the Kinzua Dam flooded 10,000 acres (4,047 ha) of Seneca nation land that they occupied under the Treaty of 1794, and forced approximately 600 Seneca to relocate to the northern shores upstream of the dam at Salamanca, New York. Kennedy was asked by the American Civil Liberties Union to intervene and halt the project but he declined citing a critical need for flood control. He did express concern for the plight of the Seneca, and directed government agencies to assist in obtaining more land, damages, and assistance to help mitigate their displacement.
President Kennedy was assassinated in Dallas, Texas, at 12:30 p.m. Central Standard Time on November 22, 1963, while on a political trip to Texas to smooth over factions in the Democratic Party between liberals Ralph Yarborough and Don Yarborough (no relation) and conservative John Connally. He was shot once in the upper back and was killed with a final shot to the head. He was pronounced dead at 1:00 p.m. Only 46, President Kennedy died younger than any U.S. president to date. Lee Harvey Oswald, an employee of the Texas School Book Depository from which the shots were suspected to have been fired, was arrested on charges of the murder of a local police officer and was subsequently charged with the assassination of Kennedy. He denied shooting anyone, claiming he was a patsy, but was killed by Jack Ruby on November 24, before he could be indicted or tried. Ruby was then arrested and convicted for the murder of Oswald. Ruby successfully appealed his conviction and death sentence but became ill and died of cancer while the date for his new trial was being set.
President Johnson created the Warren Commission—chaired by Chief Justice Earl Warren—to investigate the assassination, which concluded that Oswald was the lone assassin. The results of this investigation are disputed by many.
On November 25, 1963, John F. Kennedy's body was buried in a small plot (20 ft by 30 ft) in Arlington National Cemetery. Over a period of three years (1964–1966), an estimated 16 million people visited his grave. On March 14, 1967, Kennedy's body was moved to a permanent burial plot and memorial at Arlington National Cemetery. The funeral was officiated by Father John J. Cavanaugh.
The honor guard at JFK's graveside was the 37th Cadet Class of the Irish Army. JFK had been greatly impressed by the Irish cadets on his last official visit to the Republic of Ireland, so much so that Jacqueline Kennedy requested that the Irish Army provide the honor guard at the funeral.
Kennedy's wife, Jacqueline and their two deceased minor children were buried with him later. His brother, Senator Robert Kennedy, was buried nearby in June 1968. In August 2009, his brother, Senator Edward M. Kennedy, was also buried near his two brothers. JFK's grave is lit with an "Eternal Flame." Kennedy and William Howard Taft are the only two U.S. Presidents buried at Arlington.
Administration, Cabinet and judicial appointments 1961–1963
Kennedy appointed the following Justices to the Supreme Court of the United States:
Image, social life and family
John Kennedy met his future wife, Jacqueline Bouvier, when he was a congressman. They were married a year after he was elected senator, on September 12, 1953. Kennedy and his wife were younger in comparison to the presidents and first ladies who preceded them, and both were popular in ways more common to pop singers and movie stars than politicians, influencing fashion trends and becoming the subjects of numerous photo spreads in popular magazines. Although Eisenhower had allowed presidential press conferences to be filmed for television, Kennedy was the first president to ask for them to be broadcast live, and he made good use of the medium. Jacqueline brought new art and furniture to the White House and directed a restoration. They invited a range of artists, writers, and intellectuals to rounds of White House dinners, raising the profile of the arts in America. The Kennedy family is one of the most established political families in the United States, having produced a president, three senators, and multiple other representatives at both the federal and state level. Jack Kennedy's father, Joseph P. Kennedy, was a prominent American businessman and political figure, serving in multiple roles, including Ambassador to the United Kingdom from 1938 to 1940.
Outside on the White House lawn, the Kennedys established a swimming pool and tree house, while Caroline attended a preschool along with 10 other children inside the home.
The president was closely tied to popular culture, emphasized by songs such as "Twisting at the White House." Vaughn Meader's First Family comedy album—an album parodying the President, First Lady, their family and administration—sold about four million copies. On May 19, 1962, Marilyn Monroe, with whom Kennedy likely had a long-term relationship, sang 'Happy Birthday' for the president at a large party in Madison Square Garden. The charisma of Kennedy and his family led to the figurative designation of "Camelot" for his administration, credited by his wife to his affection for the contemporary Broadway musical of the same name.
Behind the glamorous facade, the Kennedys also experienced many personal tragedies. Jacqueline had a miscarriage in 1955 and a stillbirth in 1956. Their newborn son, Patrick Bouvier Kennedy, died in August 1963. Kennedy had two children who survived infancy. One of the fundamental aspects of the Kennedy family is the tragic strain that has run through it, as a result of the violent and untimely deaths of many of its members. John's eldest brother, Joseph P. Kennedy, Jr., who was originally to carry the family's hopes for the presidency, died in World War II at the age of 29. Both John himself and his brother Robert were later assassinated. Edward had two brushes with death, the first in a plane crash and the second in a car accident known as the Chappaquiddick incident; he died at age 77 on August 25, 2009, from the effects of a malignant brain tumor.
Years after his death, it was revealed that in September 1947, at age 30 and while in his first term in Congress, President Kennedy was diagnosed by Sir Daniel Davis at The London Clinic with Addison's disease, a rare endocrine disorder. In 1966, his White House doctor, Janet Travell, revealed that Kennedy also had hypothyroidism. The presence of two endocrine diseases, Addison's Disease and hypothyroidism, raises the possibility that Kennedy had autoimmune polyendocrine syndrome type 2 (APS 2). Details of these and other medical problems were not publicly disclosed during Kennedy's lifetime.
Caroline Bouvier Kennedy was born in 1957 and is the only surviving member of JFK's immediate family. John F. Kennedy, Jr. was born in 1960, just a few weeks after his father was elected. John died in 1999 when the small plane he was piloting crashed en route to Martha's Vineyard, killing him, his wife and his sister-in-law.
In October 1951, during his third term as Massachusetts's 11th district congressman, the then 34-year-old Kennedy embarked on a seven-week Asian trip to India, Japan, Vietnam, and Israel with his then 25-year-old brother Robert (who had just graduated from law school four months earlier) and his then 27-year-old sister Patricia. Because of their eight-year separation in age, the two brothers had previously seen little of each other. This 25,000-mile (40,000 km) trip was the first extended time they had spent together and resulted in their becoming best friends in addition to being brothers. Robert was campaign manager for Kennedy's successful 1952 Senate campaign and later successful 1960 presidential campaign. The two brothers worked closely together from 1957 to 1959 on the Senate Select Committee on Improper Activities in the Labor and Management Field when Robert was its chief counsel. During Kennedy's presidency, Robert served in his cabinet as Attorney General and was his closest advisor.
Kennedy is reported to have had affairs with individuals including Marilyn Monroe, Gunilla von Post, and Mimi Beardsley Alford, author of Once Upon A Secret. Mary Pinchot Meyer, a serious paramour of JFK, claimed she was using LSD to change the awareness of men in power; her supplier was Timothy Leary, the LSD guru.
Kennedy came in third (behind Martin Luther King, Jr. and Mother Teresa) in Gallup's List of Widely Admired People of the twentieth century.
Television became the primary source by which people were kept informed of the events surrounding John F. Kennedy's assassination; newspapers were kept as souvenirs rather than as sources of updated information. In this sense it was the first major "TV news event" of its kind, with television coverage uniting the nation, interpreting what happened, and creating memories of that moment in time. All three major U.S. television networks suspended their regular schedules and switched to all-news coverage from November 22 through November 25, 1963, remaining on the air for no less than 70 hours, making it the longest uninterrupted news event on American television until the September 11 attacks. That record was broken just before 13:00 UTC on September 14, 2001, by which time the networks had been on the air for 72 straight hours covering the attacks on the World Trade Center and the Pentagon. Kennedy's state funeral procession and the murder of Lee Harvey Oswald were broadcast live in America and in other places around the world. The state funeral was the first of three in a span of 12 months; the other two were for General Douglas MacArthur and Herbert Hoover.
The assassination had an effect on many people, not only in the U.S. but around the world. Many vividly remember where they were when first learning of the news that Kennedy was assassinated, as with the Japanese attack on Pearl Harbor on December 7, 1941 before it and the September 11 attacks after it. U.N. Ambassador Adlai Stevenson said of the assassination: "all of us... will bear the grief of his death until the day of ours." Many people have also spoken of the shocking news, compounded by the pall of uncertainty about the identity of the assassin(s), the possible instigators and the causes of the killing as an end to innocence, and in retrospect it has been coalesced with other changes of the tumultuous decade of the 1960s, especially the Vietnam War.
Special Forces have a special bond with Kennedy. "It was President Kennedy who was responsible for the rebuilding of the Special Forces and giving us back our Green Beret," said Forrest Lindley, a writer for the newspaper Stars and Stripes who served with Special Forces in Vietnam. This bond was shown at JFK's funeral. At the commemoration of the 25th anniversary of JFK's death, Gen. Michael D. Healy, the last commander of Special Forces in Vietnam, spoke at Arlington Cemetery. Later, a wreath in the form of the Green Beret would be placed on the grave, continuing a tradition that began the day of his funeral when a sergeant in charge of a detail of Special Forces men guarding the grave placed his beret on the coffin.
Ultimately, the death of President Kennedy and the ensuing confusion surrounding the facts of his assassination are of political and historical importance insofar as they marked a turning point and decline in the faith of the American people in the political establishment—a point made by commentators from Gore Vidal to Arthur M. Schlesinger, Jr. and implied by Oliver Stone in several of his films, such as his landmark 1991 JFK.
Kennedy's continuation of Presidents Harry S. Truman's and Dwight D. Eisenhower's policies of giving economic and military aid to South Vietnam preceded President Johnson's escalation of the conflict. This contributed to a decade of national difficulties and disappointment on the political landscape.
Many of Kennedy's speeches (especially his inaugural address) are considered iconic; and despite his relatively short term in office and the lack of major legislative changes coming to fruition during his term, Americans regularly rank him as one of the best presidents, in the same league as Abraham Lincoln, George Washington, and Franklin D. Roosevelt. Some excerpts of Kennedy's inaugural address are engraved on a plaque at his grave at Arlington.
He was posthumously awarded the Pacem in Terris Award. It was named after a 1963 encyclical letter by Pope John XXIII that calls upon all people of goodwill to secure peace among all nations. Pacem in Terris is Latin for 'Peace on Earth.'
President Kennedy is the only president to have predeceased both his mother and father. He is also the only president to have predeceased a grandparent. His grandmother, Mary Josephine Hannon Fitzgerald, died in 1964, just over eight months after his assassination.
- John F. Kennedy International Airport, American facility (renamed from Idlewild in December 1963) in New York City's Queens County; nation's busiest international gateway
- John F. Kennedy Memorial Airport, American facility in Ashland County, Wisconsin, near the city of Ashland
- John F. Kennedy Memorial Bridge, American seven-lane bridge across the Ohio River; completed in late 1963, the bridge links Kentucky and Indiana
- John F. Kennedy School of Government, American institution (renamed from Harvard Graduate School of Public Administration in 1966)
- John F. Kennedy Space Center, U.S. government installation that manages and operates America's astronaut launch facilities
- John F. Kennedy University, American private educational institution founded in California in 1964; locations in Pleasant Hill, Campbell, Berkeley, and Santa Cruz
- USS John F. Kennedy (CV-67), U.S. Navy aircraft carrier ordered in April 1964, launched May 1967, decommissioned August 2007; nicknamed "Big John"
- John F. Kennedy High Schools in multiple localities
Coat of arms
In 1961, Kennedy was presented with a grant of arms for all the descendants of Patrick Kennedy from the Chief Herald of Ireland. The design of the arms strongly alludes to symbols in the coats of arms of the O'Kennedys of Ormonde and the Fitzgeralds of Desmond, from whom the family is believed to be descended. The crest is an armored hand holding four arrows between two olive branches, elements taken from the coat of arms of the United States of America and also symbolic of Kennedy and his brothers.
Kennedy received a signet ring engraved with his arms for his forty-fourth birthday as a gift from his wife, and the arms were incorporated into the seal of the USS John F. Kennedy. Following his assassination, Kennedy was honored by the Canadian government, which named a mountain, Mount Kennedy, for him; his brother, Robert Kennedy, climbed it in 1965 to plant a banner of the arms at the summit.
President Kennedy comments on the possible prevention of the Cold War
Announcement by John F. Kennedy to go to the moon (0:11m)
- The White House Situation Room reports on the assassination to an airplane with several Cabinet members as it flies to Hawaii, Nov 22, 1963 (MP3 format, 7.5 MB, 33 min.)
- Abraham Zapruder, photographer of the primary film of assassination, the Zapruder film.
- History of the United States (1945–1964)
- John F. Kennedy International Airport
- Jesuit Ivy
- JFK Reloaded, a video game
- Kennedy Curse
- Kennedy Doctrine
- Lincoln Kennedy coincidences urban legend
- List of assassinated American politicians
- List of Presidents of the United States
- List of United States Presidents who died in office
- Operation Northwoods
- Orville Nix, photographer of the second film of assassination
- Peace Corps
- Robert F. Kennedy assassination
- "Senator, you're no Jack Kennedy" retort by Senator Lloyd Bentsen, 1988 VP debate
- This page was last modified on 26 July 2010 at 01:18.
Welcome to this week's Torah portion, Deuteronomy - or D'varim (pronounced "De-var-eem") in Hebrew! The word "d'varim" is the plural form of the word "d'var," which means, "word." This Torah portion actually serves as a recap of the major events in the lives of the Hebrews who were rescued from Egypt...
It just so happens that this particular Torah portion often falls in the Hebrew month of Av (the fifth month on the Hebrew calendar), which is traditionally regarded as the most tragic month. On the first day of Av, Aharon (or Aaron - the first High Priest of Israel) died (see Numbers 33:38), which was regarded as a prophetic omen of the future destruction of both of the Temples on the Ninth of Av. The Ninth of Av, also known as Tishah B'Av, is an annual day of mourning that recalls the many tragedies that have befallen the Jewish people over the centuries, some of which occurred on the ninth day of the Hebrew month of Av.
Tishah B'Av also falls between the two times Moshe received the tablets of the covenant (the first during Shavuot and the second after a period of repentance, during Yom Kippur). This means that just two months after celebrating the Sinai revelation, we believers (YHWH's people) mourn for the destruction of the Temple and the beginning of the long exile of the Jewish people. However, two months later, we celebrate national atonement and the restoration of the covenant during the High Holy Day of Yom Kippur!
This whole period is also prophetic because Shavuot (which falls 50 days after Passover) recalls the ascension of Y'shua and the giving of the Ruach HaKodesh (Holy Spirit). Tishah B'Av foretells of Israel's long exile and the "age of grace" extended to the Gentiles; and Yom Kippur foretells the coming atonement of the Jewish people at the end of the age, when Israel accepts Y'shua as their great High Priest of the New Covenant (Jeremiah 30:24).
Before we resume with our Torah portion in Deuteronomy, we quickly need to back up to Numbers 32 which revealed why the Israelites ended up in the desert for 40 years; and that, because of it, "a new generation" of Israelites would be going into the Promised Land:
Numbers 32: 13 Thus ADONAI's anger blazed against Isra'el, so that he made them wander here and there in the desert forty years, until all the generation that had done evil in the sight of ADONAI had died out.
And this is why we now see Moshe reviewing the entire 40 year history for a "new generation"! He wants them to know and remember so they can learn to appreciate what YHWH did for them and to pass that knowledge along to future generations.
Deuteronomy 1: 3 On the first day of the eleventh month of the fortieth year, Moshe spoke to the people of Isra'el, reviewing everything ADONAI had ordered him to tell them. 4 This was after he had defeated Sichon, king of the Emori, who lived in Heshbon, and 'Og, king of Bashan, who lived in 'Ashtarot, at Edre'i. 5 There, beyond the Yarden, in the land of Mo'av, Moshe took it upon himself to expound this Torah and said: 6 "ADONAI spoke to us in Horev. He said, 'You have lived long enough by this mountain.
7 Turn, get moving and go to the hill-country of the Emori and all the places near there in the 'Aravah, the hill-country, the Sh'felah, the Negev and by the seashore - the land of the Kena'ani, and the L'vanon, as far as the great river, the Euphrates River. 8 I have set the land before you! Go in, and take possession of the land ADONAI swore to give to your ancestors Avraham, Yitz'chak and Ya'akov, and their descendants after them.'
This, of course refers back to Genesis 15, where we first learned that YHWH gave the Land to HIS people:
Genesis 15: 18 That day ADONAI made a covenant with Avram: "I have given this land to your descendants - from the Vadi of Egypt to the great river, the Euphrates River - 19 the territory of the Keni, the K'nizi, the Kadmoni, 20 the Hitti, the P'rizi, the Refa'im, 21 the Emori, the Kena'ani, the Girgashi and the Y'vusi."
Notice in Deuteronomy 1:8 above we see proof positive that YHWH gave the Land of Israel to the Hebrews. In all this time NOBODY has ever bought the Land, nor does the Bible tell us that He took it back and gave it to someone else. Therefore, it still belongs to the Israelites (which includes the "foreigners/aliens" who chose to come out of Egypt with the Hebrews during the Exodus).
Leviticus 25 tells us: 23 "The land is not to be sold in perpetuity, because the land belongs to me - you are only foreigners and temporary residents."
Moshe reiterated YHWH's words in Deuteronomy 1:21: Look! ADONAI your God has placed the land before you. Go up, take possession, as ADONAI, the God of your ancestors, has told you. Don't be afraid, don't be dismayed.
Returning to Moshe's sermon in Deuteronomy, we see that he "pulled no punches" concerning the history of his rebellious people, constantly reminding them of who they are:
Deuteronomy 1: 12 "But you are burdensome, bothersome and quarrelsome!"
Deuteronomy 1: 26 "But you would not go up. Instead you rebelled against the order of ADONAI your God; 27 and in your tents you complained, 'It's because ADONAI hated us that he has brought us out of the land of Egypt, only to hand us over to the Emori to destroy us. 28 What sort of place is it that we're heading for? Our brothers made our courage fail when they said, "The people are bigger and taller than we are; the cities are great and fortified up to the sky; and finally, we have seen 'Anakim there."'
29 "I answered you, 'Don't be fearful, don't be afraid of them. 30 ADONAI your God, who is going ahead of you, will fight on your behalf, just as he accomplished all those things for you in Egypt before your eyes, 31 and likewise in the desert, where you saw how ADONAI your God carried you, like a man carries his child, along the entire way you traveled until you arrived at this place. 32 Yet in this matter you don't trust ADONAI your God, 33 even though he went ahead of you, seeking out places for you to pitch your tents and showing you which way to go, by fire at night and by a cloud during the day.'
34 "ADONAI heard what you were saying, became angry and swore, 35 'Not a single one of these people, this whole evil generation, will see the good land I swore to give to your ancestors, 36 except Kalev the son of Y'funeh -he will see it; I will give him and his descendants the land he walked on, because he has fully followed ADONAI.' 37 "Also, because of you ADONAI was angry with me and said, 'You too will not go in there. 38 Y'hoshua the son of Nun, your assistant -he will go in there. So encourage him, because he will enable Isra'el to take possession of it. 39 Moreover, your little ones, who you said would be taken as booty, and your children who don't yet know good from bad -they will go in there; I will give it to them, and they will have possession of it. 40 But as for yourselves, turn around and head into the desert by the road to the Sea of Suf.'
41 "Then you answered me, 'We have sinned against ADONAI. Now we will go up and fight, in accordance with everything ADONAI our God ordered us.'And every man among you put on his arms, considering it an easy matter to go up into the hill-country. 42 But ADONAI said to me, 'Tell them, "Don't go up, and don't fight, because I am not there with you; if you do, your enemies will defeat you."' 43 So I told you, but you wouldn't listen. Instead, you rebelled against ADONAI's order, took matters into your own hands and went up into the hill-country; 44 where the Emori living in that hill-country came out against you like bees, defeated you in Se'ir and chased you back all the way to Hormah. 45 You returned and cried before ADONAI, but ADONAI neither listened to what you said nor paid you any attention.
Please take a look at verse 39 above, as it refers to the responsibility of parents teaching their young ones about YHWH and His Torah, and the children's ultimate responsibilities and accountability to Him once they become adults. Most of today's youth (including those of many believers) receive no real training of any kind from their parents who are usually too busy "making a living" or thinking only of themselves, or who have fallen into the modern trap of apathy and lethargy and refuse to bother teaching their children about anything, much less YHWH and His Torah. Our "western" society allows children "the right" to decide right from wrong on their own - "morals" that many of them learned from watching TV....
In Deuteronomy 2, we learn that most of those who were rebellious during the 40-year trek through the wilderness had died out:
Deuteronomy 2: 14 The time between our leaving Kadesh-Barnea and our crossing Vadi Zered was thirty-eight years - until the whole generation of men capable of bearing arms had been eliminated from the camp, as ADONAI had sworn they would be.
One cannot help but wonder how many of us today would have been among the dead, had we been among YHWH's people in those days!
Moving on to the Haftarah and B'rit Chadasha portions, we find something very interesting. First, we see Isaiah lamenting the rebelliousness of YHWH's people:
Isaiah 1: 2 "Hear, heaven! Listen, earth! For ADONAI is speaking. "I raised and brought up children, but they rebelled against me. 3 An ox knows its owner and a donkey its master's stall, but Isra'el does not know, my people do not reflect. 4 "Oh, sinful nation, a people weighed down by iniquity, descendants of evildoers, immoral children! They have abandoned ADONAI, spurned the Holy One of Isra'el, turned their backs on him! 5 "Where should I strike you next, as you persist in rebelling? The whole head is sick, the whole heart diseased.
6 From the sole of the foot to the head there is nothing healthy, only wounds, bruises and festering sores that haven't been dressed or bandaged or softened up with oil. 7 "Your land is desolate, your cities are burned to the ground; foreigners devour your land in your presence; it's as desolate as if overwhelmed by floods. 8 The daughter of Tziyon is left like a shack in a vineyard, like a shed in a cucumber field, like a city under siege." 9 If ADONAI-Tzva'ot had not left us a tiny, tiny remnant, we would have become like S'dom, we would have resembled 'Amora.
The above passage is one of the scripture references where many Christians get the idea that YHWH has turned his back on "the Jews" and replaced them with the Gentile church. This, of course, isn't true, as YHWH will not go back on His covenants. For instance:
Genesis 12: 3 I will bless those who bless you, but I will curse anyone who curses you; and by you all the families of the earth will be blessed."
Deuteronomy 7: 6 For you are a people set apart as holy for ADONAI your God. ADONAI your God has chosen you out of all the peoples on the face of the earth to be his own unique treasure. 7 ADONAI didn't set his heart on you or choose you because you numbered more than any other people - on the contrary, you were the fewest of all peoples. 8 Rather, it was because ADONAI loved you, and because he wanted to keep the oath which he had sworn to your ancestors, that ADONAI brought you out with a strong hand and redeemed you from a life of slavery under the hand of Pharaoh king of Egypt.
Jeremiah 31: 35 This is what ADONAI says, who gives the sun as light for the day, who ordained the laws for the moon and stars to provide light for the night, who stirs up the sea until its waves roar -- ADONAI-Tzva'ot is his name: 36 "If these laws leave my presence," says ADONAI, "then the offspring of Isra'el will stop being a nation in my presence forever." 37 This is what ADONAI says: "If the sky above can be measured and the foundations of the earth be fathomed, then I will reject all the offspring of Isra'el for all that they have done," says ADONAI.
Zechariah 8: 22 Yes, many peoples and powerful nations will come to consult ADONAI-Tzva'ot in Yerushalayim and to ask ADONAI's favor.' 23 ADONAI-Tzva'ot says, 'When that time comes, ten men will take hold - speaking all the languages of the nations - will grab hold of the cloak of a Jew and say, "We want to go with you, because we have heard that God is with you."'"
Romans 11: 25. (For I want you to know this) mystery, that blindness of heart has in some measure befallen Israel until the fullness of the Gentiles will come in: 26. And then will all Israel live. As it is written: A deliverer will come from Tsiyon and will turn away iniquity from Ya'akov. 27. And then will they have the covenant that proceeds from me, when I will have forgiven their sins.
NOTE: All Israel refers to those souls who make Teshuva (turn to YHWH) and welcome the Spirit of Mashiyach. Paul does not say or mean that every Jew or Israelite by race will enter into the Malchut Elohim (Kingdom) (see Matt 22:2-14; 25:1-12).
The passage below is one of the scripture references where many Christians get the idea that YHWH no longer requires Torah obedience - BIG MISTAKE!
Isaiah 1: 11 "Why are all those sacrifices offered to me?" asks ADONAI. "I'm fed up with burnt offerings of rams and the fat of fattened animals! I get no pleasure from the blood of bulls, lambs and goats! 12 Yes, you come to appear in my presence; but who asked you to do this, to trample through my courtyards? 13 Stop bringing worthless grain offerings! They are like disgusting incense to me! Rosh-Hodesh, Shabbat, calling convocations - I can't stand evil together with your assemblies! 14 Everything in me hates your Rosh-Hodesh and your festivals; they are a burden to me - I'm tired of putting up with them!
Here YHWH was not saying Torah obedience was abolished; instead, He was saying "Don't do these things if you're NOT going to be obedient!"
Brit Chadasha portion:
Now look at what Yeshua said - which reiterates how mankind is supposed to be!
John 15: 1. I am the Vine of Truth and my Father is the Cultivator. 2. Every branch that is on me that does not give fruit, He takes it away. And that which bears fruit He prunes it that it might produce more fruit. 3. You are already pruned because of the word which I have spoken with you. 4. Abide in me and I in you, as the branch is not able to produce fruit by itself unless it should abide in the vine. Likewise you are also not able unless you abide in me. 5. I am the Vine and you are the branches. Whoever abides in me and I in him, this man will produce plentiful fruit because without me you are not able to do anything.
6. Now unless a man abide in me, he is cast aside like a branch that is withered and they pluck it and place it into the fire that it may burn. 7. Now if you abide in me and my words abide in you, anything that you desire to ask will be given to you. 8. In this the Father is glorified that you bear abundant fruit and that you be my disciples. 9. As my Father has loved me, so too I have loved you. Abide in my love. 10. If you keep my Commandments, you will abide in my love, just as I have kept the Commandments of my Father, and I abide in His love. 11. I have spoken these things with you that my joy may be in you and that your joy might be full in you.
FOOTNOTES FROM THE AENT:
Here Y'shua uses the divine form of "I am" (Ena-na) indicating that YHWH is speaking through him.
Aramaic "shebista" is the word for "branch"; however, the Netzer/branch of Isaiah 11:1-2 is also being pictured here. The Netzer wordplays with haNatzrati "the Nazarene" and "haNetzarim," "the Netzarim" title for disciples; see Acts 24:5.
The "Commandments of my Father" always refers to Torah; see also John 15:5. Y'shua and his Talmidim (disciples) keep his Father's Commandments, but mainstream Christianity is not only anti-Torah; they turned rebellion against Torah into a "fashionable" form of lawlessness. See Daniel 7:24; 2 Thessalonians 2:7; 2 Timothy 2:19; Titus 2:14; 2 Peter 2:21; 1 John 3:4; Hebrews 2:2-4; Romans 4:15; Matthew 7:23; 13:41.
Hebrews 3: 7. Because the Ruach haKodesh has said: Today, if you will hear his voice, 8. and do not harden your hearts to bring him to wrath, like those who provoke, and as in the day of temptation in the wilderness, 9. when your fathers tried my patience, and proved, (and) saw my works forty years. 10. Therefore I was disgusted with that generation, and said: This is a people, whose heart has strayed, and they have not known my ways: 11. so that I swore in my anger, that they should not enter into my rest. 12. Beware, therefore, my Brothers, so that there will not be in any of you an evil heart that does not believe, and you depart from the living Elohim.
13. But look deeply into yourselves all the days, during the day which is called today; and let none of you be hardened, through the deceitfulness of sin.14. For we have part with the Mashiyach, if we endure in this firm confidence, from the beginning to the end: 15 as it is said, Today, if you will hear his voice, and do not harden your hearts, to anger him. 16. But who were they that heard, and angered him? It was not all they, who came out of Egypt under Moshe. 17. And with whom was he disgusted forty years, but with those who sinned, and whose corpses fell in the wilderness? 18. and of whom swore he, that they should not enter into his rest, but of those who did not believe? 19. So we see that they could not enter, because they did not believe.
Hebrews 4: 1. Let us fear, therefore, or else while there is a firm promise of entering into his rest, any among you should be found coming short of entering. 2. For to us also is the announcement, as well as to them: but the Word they heard did not benefit them because it was not combined with the faith of those who heard it. 3. But we who have believed do enter into rest. But as he said, As I have sworn in my wrath that they will not enter into my rest: for behold, the works of Elohim existed from the foundation of the world. 4. As he said of the Shabbat, Elohim rested on the seventh day from all his works. 5. And here again, he said, They will not enter into my rest.
6. Therefore, because there was a place where one and another might enter; and those persons from the first to whom the Good News was delivered did not enter, because they had no faith: 7. again he established another day, a long time afterwards; as above written, that Dawid said, Today, if you will hear his voice, and do not harden your hearts. 8. For if Yehoshua, the son of Nun, had given them rest, he would not have spoken afterwards of another day. 9. For there remains a Shabbat for the people of Elohim. 10. For he who had entered into his rest has also rested from his works as Elohim did from his. 11. Let us, therefore, strive to enter into that rest; or else we fall short, after the way of those who did not believe.
Presentation on theme: "The Bellarmine Jug 1550-present"— Presentation transcript:
1 The Bellarmine Jug, 1550-present. The Glossary of Historic Ceramic Terms defines the jug as "a stoneware jug or bottle decorated with a bearded human face molded onto the neck." Known also as 'Greybeards', 'Bartmannskrugen' or 'Bartmanns'. The bellarmine of the second half of the 16th century was a round-bellied, narrow-necked vessel with a bearded mask, at first collectively called Bartmanner (bearded men) and made at Frechen, near Cologne, in the 16th century.
2 Earliest dated bellarmine, 1550 (Frank Thomas Collection). "It was changed in mockery into the likeness of Cardinal Bellarmine, and became popular with Protestants under the name bellarmine or grey-beard as a coarse retort to the cardinal's unanswerable arguments against Protestantism in his Controversies. It is now obsolete, but many remain." (New Catholic Dictionary) The notion that bellarmines were ever intended to be representations or caricatures of Bellarmino has been satisfactorily and extensively demolished (the Cardinal was only eight years old when this example was made), and the name was clearly a post hoc jest.
3 The jugs originated in the Germanic areas of Europe in the early 1500s, but later turned up in many different areas of the continent. The origin of the jugs is still a mystery, and the connection to St. Robert Bellarmine is also questioned. William Cartwright, an English author and divine, an ardent royalist and disciple of Ben Jonson, had a high reputation as a preacher and author. In addition to his poems, which are now almost entirely forgotten, Cartwright wrote plays, of which The Ordinary (1634) and The Royal Slave (1636) were the most successful. It is not until Cartwright's play The Ordinary in 1634 that the term bellarmine is used to describe the jug, by which time Cardinal Bellarmino had been dead for a dozen years.
4 Grey-white glazed Bellarmine, 1585. Fitzwilliam Museum. Bellarmine with Tudor arms and inscription of Elizabeth I, British Museum.
5 Beardman jugs are stoneware: water-resistant and durable, made from dense, opaque, non-porous clay fired at temperatures of 1200°-1280° C (2192°-2336° F). The clay turns white, buff, gray, or red and is glazed for aesthetic reasons. STONEWARE: Clays which can be fired to within 2% of total vitrification or less are considered stoneware. Stoneware clays are usually blended clay bodies, producing a malleable, strong clay which can be worked on the potter's wheel and fired to a vitreous state. The color and texture of stoneware clays can vary quite a lot.
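The Fahrenheit equivalents quoted for the firing range can be double-checked with a short Python sketch. This is a generic illustration, not part of the source: the function name is mine, and the conversion is the standard formula F = C × 9/5 + 32.

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Stoneware firing range cited above: 1200-1280 degrees C
low_f = c_to_f(1200)   # 2192.0
high_f = c_to_f(1280)  # 2336.0
print(f"{low_f:.0f}-{high_f:.0f} F")
```

Running the sketch prints "2192-2336 F", confirming that the lower bound of the cited range converts to 2192 °F rather than 2191 °F.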
6 Stonewares were imported from Europe to the American Colonies until the end of the Revolutionary War. Germany and England were the largest producers and exporters of stoneware. American production began in the mid 18th century and both imitated and competed with the European imports despite trade restrictions. Large-scale manufacture did not occur until immediately after the Revolutionary War. The large centers in the North spread from New Jersey and New York into New England. The southern centers were concentrated in Philadelphia and eastern Pennsylvania. Over time, more potteries started and began spreading further south. The tradition of salt-glaze and alkaline-glaze stoneware continued there well into the mid 19th century.
7 Jug bodies were made on a potter's wheel. After that the handle was fitted. Relief decorations, including the beard face, were prepared separately in molds. The molds were usually short-lived, especially those for the beard face; sometimes they were used for only a few jugs each, which results in the many different figures shown. The lower part of this beard was damaged, enabling us to see how the face was applied to the jug.
8 Bartmann jug with pewter lid. Bartmann jug from Cologne, 1535-50, stoneware salt-glaze, German Rhineland, late sixteenth century (National Museums and Galleries of Wales). Bartmann jug from Cologne: dark grey clay body with salt glaze; decorated rectangular bearded face mask, foliage, fruit and blossoms, and lion mask roundels; found in London (Museum of London). The greybeard or Bartmann jugs and bottles are now amongst the most popular pieces on display from Cologne, and it seems likely that they were intended for the luxury end of the market. They were skillfully made, and the applied relief ornament is especially striking. The bearded face of a man or mask is applied to the throat or neck of the bulbous jugs, and molded and sprigged intertwined oak leaves and acorns, or rose briars and flowers, adorn the body of the vessels. The applied decoration was made from molds, and the figurative or botanical friezes, portrait roundels and plant ornaments were copied from pattern books. From around 1500 these pieces have turned bases, and many are seen with pewter lids. The decline in Cologne's production of salt glaze was rapid. Resented by the earthenware potters, and meeting hostility to their firings because of the problems caused by the smoke and pollution from the kilns, the stoneware potters were driven out of their workshops in the city center. Following legal proceedings in 1544, bailiffs were sent by the City Fathers to knock down the kilns. Salt-glaze production ended within the city of Cologne, with a total ban in 1556. Many skilled potters migrated to Frechen or Raeren, and took with them their designs and techniques. The German bellarmine jug and the English stein are the most common forms of brown salt-glazed stoneware produced for foreign markets.
9 Salt Firing. The salt glaze characteristic of a beardman jug was formed by throwing common salt into the kiln during firing. The salt (sodium chloride) interacts with the silica and alumina in the clay to form a thin glaze which often has a slightly pitted surface which potters call 'orange peel'.
10 Salt is usually added to the kiln between 2100°F and 2400°F. Here salt is being added in an angle iron; other methods include paper-wrapped "salt burritos" or ladling damp salt into the firebox with a long-handled metal scoop.
11 Today many potters are adding soda ash to achieve effects similar to salt firing.
12 Vocabulary. Spies: holes left in the front and back of the kiln at different levels to enable the potter to view the atmosphere, cones and draw rings during firing. Cones: the most accurate way of determining temperature during a firing; placed in different parts of the kiln, they will also indicate any variation of temperature. Draw Rings: drawing out trial rings of clay from the kiln is a way of judging how the salting is proceeding. The 'draw rings' should be made from the same clays that have been used to make the wares.
13 Cones and Draw Rings, before firing. Cones and Draw Rings, after firing. Watching the cones through the spy hole.
14 Drawing a ring by inserting an iron bar through the ring, then carefully lifting it out of the kiln through the spy hole.
15 Color is added to stoneware by dipping in a slip (liquid clay) before firing. Blue and purple wares were first developed at Raeren from c. 1587: the blue color came from cobalt, and purple from manganese. Siegburg wares are usually off-white. Triple applied crowned oval medallions of the arms of the City of Amsterdam; splashes of cobalt on bearded mask; brown mottled tiger surface; height: 8", base diameter: 4¼"; 1590. Unusual globular light gray and freckled bulbous triple-medallion Bellarmine jug; Holland or Low Countries; body molded from red clay; surface colors under lustrous lead glaze; handle formed of two twisted cords resembling rope and terminating in three thumb-pressed tails; repeating applied oval medallions (1¾" x 2") of flower blossoms over leaves; bearded face mask; splashes of cobalt blue on mask and each of three applied medallions; smooth glazed base with sand particles fused into glaze; height: 6 5/8", base diameter: 2 5/8"; circa 1690.
16 Frechen, Germany, 1600. The lower part of jugs is usually colorless (apart from drips), because the artisan had to hold the jug while dipping it in the slip.
17 The mottled surface may originally have been unintentional, but the Frechen potters soon realized that the effect was seen as a mark of quality and should be produced deliberately, as it was in great demand by foreign markets. Many of these 'getigerten', or 'Tiger-ware', bottles and jugs were given additional value by being embellished with silver mountings when they were imported into England. Bartmann bottle, stoneware salt glaze, 17th century; iron wash under coarse-speckled salt glaze: 'Tiger' ware.
18 Salt-glaze stoneware ‘bellarmine’, medallion showing a man holding a cup and staff. This bottle represents the earliest salt-glaze stoneware made in England.Imports to BritainThe first salt-glaze pots seen in Britain were those imported from Germany as early as the mid-fourteenth century.The majority of the brown stoneware bottles from the Rhineland were shipped to London from the Low Countries, together with the wine and beer that was decanted from the casks into the vessels.During the second half of the sixteenth century, London became a redistribution center for the imported Rhenish stoneware.The goods would be re-exported to all the other British ports, around East Anglia as far north as Newcastle on the east coast, and right round the southern ports, into Bristol in the west and some as far north as Scotland. The vast trade in Rhenish stoneware, supplied from various centers, reached a high point during the seventeenth century. In the first half of that century alone, it is estimated that over twenty million of these mass-produced stoneware vessels were imported to London.
19 Height 21.1 cm. A rare stoneware salt-glaze bottle bearing the insignia of the crown and thistle and the initials 'CR' (Charles II).
20 In Anthony Wood's 'Pocket Almanacs', the entry for 30 December 1677 reads: 'One of the followers of Exeter Coll., when Dr. John Prideaux was rector, as tis said, sent his servitour after 9:00 at night to fetch some ale from the alehouse. When he came home with it under his gowne, the proctor met him and ask'd him what he made out so late and what he had under his gowne. He answered that his master had sent him to the stationer's to borrow Bellarmine and that it was Bellarmine that he had under his arme; and so went home. Whereupon in following times, a bottle with a great belly was called a "Bellarmine", as it is to this day.' Dr. John Prideaux was Rector of Exeter College from 1612 until 1643, so the term bellarmine was in use in the first half of the seventeenth century.
21 John Dwight (fl. 1671-98), English potter, founder of the Fulham pottery. The registration in 1671 of his patent for the "Mystery of transparent earthenware …" is the first certain recorded event of his life. He is considered to have laid the foundation of the pottery industry in England and to have set a standard not excelled elsewhere. There are examples of his work at the Victoria and Albert Museum and the British Museum. Height 26 cm; 1685. This tall 'bellarmine' still contains the remnants of a charm against witchcraft. The unusual proportions of this bellarmine, together with the crude mask and rudimentary seal, suggest that this bottle is not of continental origin. Witches' bottles were commonly used during the seventeenth century and were usually buried under the hearth or threshold as protection against witchcraft. (Jonathan Horne)
22 Bellarmine jugs continue to be excavated today. Jugs have been found in Iceland, Maine, and New Jersey.
23 Near Bath, Maine, near the mouth of the Kennebec River, English colonists, with George Popham as their leader, established Fort St. George in 1607, the same year Jamestown, Virginia, was founded. The Popham colonists abandoned the fort after a year, and the site appears to have been vacant for two centuries. This fact is the major archaeological importance of the Fort St. George site: it means that we can now look at that critical, initial year of English colonization in considerable detail and begin to understand what life in the colony was like.
24 The focus of the Popham project archaeologists during their summer 2000 excavations was to locate a building thought to have been occupied by a man named Raleigh Gilbert.Gilbert was second in command of the colony and probably a distant relative of George Popham. He was also the nephew of, and named after, Sir Walter Raleigh. Artifacts found at Raleigh Gilbert's house point toward its occupant as being a man of high status due to the type and quality of certain objects, like the ones shown here--fragments from a 17th century, German-made stoneware "Bellarmine" jug.
25 This Bartmann jug was excavated in 1610 within the walls of James Fort. It has three medallions around its belly consisting of a coat-of-arms depicting a crowned shield that has been divided into four quarters. The first and third quarters each exhibit a single lion passant, which means that he is walking with his right paw raised. The second and fourth quarters each have two lions passant. In the first quarter, which is the upper left-hand corner of the shield, there is a heraldic device known as a fess with a label on chief. This is the band across the upper third of the escutcheon that is carrying three stylized fleurs-de-lis. It is this label that identifies the medallion as Italian and, more specifically, as representing a member of the Tuscan Anjou party of Guelfs who from medieval times were staunch supporters of the Pope. Guelf coats-of-arms have never before been recorded on German stoneware. Further, there is no documented trade of the ware in Italy, so the Bartmann jug from Pit 1 is extremely rare. It must have been commissioned by an individual, perhaps an Italian merchant, who had trade or other contacts with northwest Europe.
Presentation on theme: "Don’t worry about writing"— Presentation transcript:
1 Key Issue #3 – Where are Agricultural Regions in More Developed Countries?
2 Don't worry about writing. Commercial agriculture in MDCs can be divided into 6 main types: mixed crop and livestock farming; dairying; grain farming; livestock ranching; Mediterranean agriculture; and gardening and fruit culture. The location of each depends largely on climate.
3 MIXED CROP AND LIVESTOCK FARMING. The most common form of agriculture west of the Appalachians in the US and in Western Europe.
4 Characteristics of Mixed Crop and Livestock Farming The biggest characteristic is its integration of crops and livestock.Most of the crops grown are fed to the animals. In turn, the livestock provide manure to grow more crops.A typical mixed farm devotes nearly all land to growing crops, but more than 3/4ths of its income derives from the sale of animal products.In the US, beef, pork, and chickens are the main animals grown on farms.
5 Corn is typically grown and fed to the animals because of its higher yield per acre than other grains. It can also be sold and processed into oil, margarine, and other food products.Soybeans are the 2nd most grown product. They also can be fed directly to the animals or sold for use in human products like tofu, soy milk, or soybean cooking oil.Farmers are able to distribute the work load across the entire year and receive seasonal variations of income.
6 DAIRY FARMING. Dairy farming is the most important type of commercial agriculture practiced on farms near the large urban areas in N. America and Europe. It accounts for about 20% of the total value of all agricultural output. Traditionally, fresh milk was rarely consumed except directly on the farm or in nearby villages. With the rapid growth of cities in MDCs in the 19th century, the demand for milk among urban residents increased. Rising incomes allowed people to buy milk, which was once considered a luxury.
7 Why are dairy farms located near urban areas? Mostly because of transportation issues, since milk is highly perishable. The ring surrounding a city from which milk can be supplied without spoiling is known as the milkshed. Before railroads, the range was 30 miles. Today, it's more like 300 miles.
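Because a milkshed is roughly a circle around the city, a tenfold increase in radius implies a hundredfold increase in the area that can supply milk. A quick sketch (the radii come from the slide; the idealized circular shape is our simplification):

```python
import math

def milkshed_area(radius_miles):
    """Area of an idealized circular milkshed, in square miles."""
    return math.pi * radius_miles ** 2

before_rail = milkshed_area(30)   # pre-railroad range from the slide
today = milkshed_area(300)        # modern range from the slide

# Radius grew 10x, so the supplying area grew 100x.
print(f"before: {before_rail:.0f} sq mi, today: {today:.0f} sq mi")
print(f"area ratio: {today / before_rail:.0f}x")  # -> 100x
```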
9 Regional Variations. Dairies in the eastern US tend to supply milk. To the west, farmers tend to sell their output to processors who make cheese, butter, or evaporated or condensed milk. Ex: 5% of milk is processed in Pennsylvania, while 90+% is processed in Wisconsin. Dairy farmers, like other farmers, tend to sell their products to distributors who then sell to consumers.
10 Problems for Dairy Farmers. Cows require constant attention… milking 2x a day, feeding during winter, etc. The number of farms with milk cows declined in the US by 2/3rds from , due to lack of profitability and excessive workload. However, the number of cows only declined by 1/8th and production actually increased by 1/4th.
15 GRAIN FARMING. Grain is the seed from various grasses, like wheat, corn, barley, oats, millet, rice, and others. Commercial grain agriculture is distinguished from mixed crop and livestock farming because it is grown primarily for human consumption. The most important crop grown is wheat, used to make bread flour. It sells for a higher price and has more uses than other crops, thus making it more profitable to ship remotely. The US is by far the largest commercial grain producer. Commercial grain farms are generally located where it is too dry for mixed crop and livestock agriculture.
16 The McCormick reaper, invented in the 1830s, first permitted large-scale wheat production. Today, the combine machine does three tasks: reaps, threshes, and cleans.Unlike other agricultural products, wheat is grown to a considerable extent for international trade and is the world’s leading export crop.As the US and Canada account for about half of the world’s wheat exports, they are appropriately labeled the world’s “breadbasket.”
23 LIVESTOCK RANCHING. Ranching is the commercial grazing of livestock over an extensive area. It is adapted to semiarid or arid land and is practiced in MDCs, where the vegetation is too sparse and the soil too poor to support crops.
24 MEDITERRANEAN AGRICULTURE. Mediterranean agriculture exists primarily in the lands that border the Mediterranean Sea. Farmers in Southern California, Chile, and South Africa also practice it. Prevailing sea winds provide moisture and keep winter temperatures warm. Summers are hot and dry, but sea breezes provide relief. The land is usually hilly, and mountains frequently plunge directly into the sea. Most crops in the Mediterranean lands are grown for human consumption. Horticulture – which is the growing of fruits, vegetables, and flowers – and tree crops form the commercial base of Mediterranean farming.
25 Around the Mediterranean Sea, the two most important cash crops are olives and grapes. 2/3rds of the world's wine is made in countries bordering the Mediterranean Sea. Despite the importance of grapes and olives, about half of the land is devoted to growing cereals. The rapid growth of urban areas in the US, especially Los Angeles, has converted high-quality agricultural land into housing developments… Thus far, it has been offset with expansions into arid lands, which requires massive irrigation, so it is yet to be seen what problems this may bring.
27 COMMERCIAL GARDENING & FRUIT FARMING. This is the predominant type of agriculture in the Southeast US: the region has a long growing season and humid climate and is accessible to the large markets of the Northeast US. The type of agriculture practiced in the SE is typically called truck farming because "truck" was a Middle English word meaning bartering.
28 Truck farms grow lots of the fresh fruits & vegetables that people demand in MDC’s. They are highly efficient large-scale operations that take full advantage of machines at every stage of the growth process.Truck farmers are willing to experiment to maximize efficiency and are willing to hire migrant workers to keep down costs.Farms tend to specialize in a few crops, and a handful of farms may dominate national output of some fruits and vegetables.
31 PLANTATION FARMING. This is a form of commercial agriculture practiced in the tropics and subtropics, especially in Latin America, Africa, and Asia. Plantations are situated in LDCs, but are owned and operated by Europeans and North Americans and grow crops primarily for MDCs. A plantation is a large farm that specializes in one or two crops like cotton, sugarcane, coffee, rubber, tobacco, etc. They are located in sparsely settled areas, so they import workers and provide them with food, housing, and social services. Until the Civil War, plantations were important in the US South.
41 Key Issue #4 – Why Does Agriculture Vary Among Regions?
42 Three types of reasons help to explain differences among agricultural regions: environmental, cultural, and economic.
43 ENVIRONMENTAL AND CULTURAL FACTORS Regions of distinctive agricultural practices exist in part b/c of differences in climate.Ex: the Middle East is dry, so pastoral nomadism occurs. Central Africa has a tropical climate, so shifting cultivation is the predominant type.The correlation between agriculture and climate is by no means perfect, but clearly some relationship persists between climate and agriculture.
44 ECONOMIC ISSUES FOR SUBSISTENCE FARMERS Two economic issues discussed in earlier chapters influence the choice of crops planted by subsistence farmers…First, b/c of rapid population growth in less developed countries, subsistence farmers must feed an increasing number of people.Second, b/c of adopting the international trade approach to development, subsistence farmers must grow more food for export instead of for direct consumption
45 Subsistence Farming and Population Growth Read Ester Boserup’s explanation of why population growth influences the distribution of types of subsistence farming.Pages
46 Subsistence Farming and International Trade To expand production, subsistence farmers need higher yield seeds, fertilizer, pesticides, and machinery. Some needed supplies can be secured through trade.To generate the funds they need to buy agricultural supplies, LDC’s must produce something they can sell to MDC’s… the LDC’s sell some manufactured goods, but most raise funds through the sale of crops.
47 Consumers in MDC’s are willing to pay high prices for fruits and vegetables that would otherwise be out of season, or for crops such as coffee and tea that cannot be grown there b/c of the climate.The sale of export crops brings a LDC foreign currency, a portion of which can be used to buy agricultural supplies. However, with rapidly growing populations, the money may have to be used to feed the people.
48 Drug Crops. The export crops chosen in some LDCs, especially in Latin America and Asia, are those that can be converted to drugs. Various drugs, such as coca leaf, marijuana, opium, and hashish, have distinctive geographic distributions. Coca leaf is grown in NW S. America, esp. Colombia, Peru, and Bolivia. Most of its processing and distribution is based in Colombia. Mexico grows the majority of the marijuana that reaches the US. Most opium originates in Asia, especially Afghanistan, Myanmar, and Laos. Thailand serves as the transportation hub for distribution to MDCs.
49 Last year, Afghan farmers grew 93% of the world's opium.
50 ECONOMIC ISSUES FOR COMMERCIAL FARMERS. Two economic factors influence the choice of crops or livestock by commercial farmers: access to markets, and overproduction.
51 Access to Markets. B/c the purpose of commercial farming is to sell produce off the farm, the distance from the farm to the market influences the farmer's choice of crop to plant. Geographers use the von Thünen model to help explain the importance of proximity to market in the choice of crops on commercial farms.
52 The von Thünen model shows that a commercial farmer must combine two sets of monetary values to determine the most profitable crop: the value of the yield per hectare (acre), and the cost of transporting the yield per hectare. These calculations demonstrate that farms located closer to the market tend to select crops with higher transportation costs per hectare of output, whereas more distant farms are more likely to select crops that can be transported less expensively.
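This trade-off can be sketched numerically. The crop figures below are invented for illustration; only the structure (per-hectare yield value minus distance-dependent transport cost) comes from the model described above:

```python
# Hypothetical crop numbers, invented for this example.

def locational_rent(yield_t, price, prod_cost, freight, distance_km):
    """Net return per hectare at a given distance from the market."""
    return yield_t * (price - prod_cost) - yield_t * freight * distance_km

CROPS = {
    # name: (yield t/ha, price $/t, production cost $/t, freight $/t/km)
    "market gardening": (40, 100, 60, 1.00),  # high value, expensive to haul
    "grain": (5, 150, 50, 0.25),              # lower value, cheap to haul
}

def best_crop(distance_km):
    """Crop with the highest locational rent at this distance."""
    return max(CROPS, key=lambda name: locational_rent(*CROPS[name], distance_km))

print(best_crop(10))   # near the market -> market gardening
print(best_crop(200))  # far from the market -> grain
```

As the model predicts, the high-value but transport-costly crop wins near the market, while the cheaply shipped crop wins at distance.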
53 von Thünen (the man) noticed that crops were grown in different rings around the cities in the area near his home in Northern Germany: milk and market-oriented gardens were in the first ring (perishable); wood lots, where timber was cut for construction and fuel, were in the second (heavy); the third ring was used for various crops and pasture, often rotated; the outermost ring was devoted exclusively to animal grazing, which requires lots of space.
58 Von Thünen developed a model of agricultural land use. His model was created before industrialization and is based on the following limiting assumptions: the central marketplace is located within what is referred to as an "Isolated State", suggesting a community that is self-sufficient and has no external influences; this "Isolated State" is surrounded by unoccupied, unused land; the land of the State is completely homogeneous, having no rivers, mountains or other obstructions, and the soil, climate and all other factors affecting agriculture are the same; in the "Isolated State" there are no major veins of transportation, that is to say, the farmers in the State transport their own products to the market via oxcart, over land, directly to the central marketplace; and farmers in the State do what they need to earn the greatest profit in the marketplace.
59 Although von Thünen developed the model for a small region with a single market center, it also applies to a national or global scale.
60 Overproduction in Commercial Farming Commercial farmers suffer from low incomes b/c they produce too much food rather than too little as agriculture becomes more efficient.While the food supply in MDC’s has increased dramatically, the demand has remained about the same b/c the markets are already saturated.In MDC’s, consumption of a particular commodity does not change significantly because the price falls… i.e. people do not switch from wheat to corn products just because the price of corn falls.
61 The US government has three policies to attack the problem of excess productive capacity. 1st, farmers are encouraged to avoid planting crops that are in excess supply; if so, the government encourages planting fallow crops like clover to restore nutrients to the soil. 2nd, the government pays farmers when certain commodity prices are low; the government sets a target price and pays farmers the difference from what they receive in the market. 3rd, the government buys surplus production and sells or donates it to foreign governments. In addition, low-income Americans receive food stamps in part to stimulate their purchase of additional food.
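The second policy, a target-price "deficiency payment", can be sketched as follows. The prices and quantity below are hypothetical; only the mechanism (government pays the gap between a target price and the market price) comes from the slide:

```python
# Hypothetical prices and quantity, for illustration only.

def deficiency_payment(target_price, market_price, quantity):
    """Pay the per-unit shortfall, but only when the market is below target."""
    shortfall = max(0.0, target_price - market_price)
    return shortfall * quantity

# Wheat at a $4.00/bushel target but a $3.25 market price, 10,000 bushels:
print(deficiency_payment(4.00, 3.25, 10_000))  # -> 7500.0
# If the market price is above the target, no payment is made:
print(deficiency_payment(4.00, 4.50, 10_000))  # -> 0.0
```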
Associate Editor: Isobel Jane Simpson; University of California, Irvine, US
U.S. dry natural gas production increased from 18 to 27 trillion ft3 between 2005 and 2015 (EIA, 2016b). Use of natural gas offers potential climate benefits compared to coal or oil (EIA, 2016a), but those benefits depend upon the emissions of methane, the primary component of natural gas and a potent greenhouse gas. This study is part of a larger study designed to compare, and possibly reconcile, estimates of methane emissions developed from aircraft “top-down” measurements (Schwietzke et al., 2017) and inventory-based “bottom up” estimates, including the results presented here and studies of production facilities (Bell et al., 2017), gathering compressor stations (Vaughn et al., 2017), and measurements made by downwind techniques (Robertson et al., 2017; Yacovitch et al., 2017) at a variety of facilities.
Gathering pipelines refer to the pipelines that connect wells to gathering compressor stations or processing plants, and connect those facilities to transmission pipelines or distribution systems. Inlet pressures of gathering systems range from 30 to 7,720 kPa (Mitchell et al., 2015), but most gathering pipelines operate at the low end of that pressure range. Gathering pipeline systems consist of pipelines and auxiliary components for operation of the pipelines including pig launchers and receivers, blocking valves, and a variety of other, less common, components (e.g. “knock out bottles” used to remove liquids from pipelines on older systems). Pig launchers/receivers are used to insert/remove cleaning plugs, called “pigs”, into gathering lines to remove water and debris from the pipeline. Block valves are used to isolate sections of pipeline, or reroute the flow of natural gas (SM-S1).
Gathering pipeline network methane emissions originate from three sources:
- Emissions from pipelines between auxiliary equipment. Pipelines are typically underground, although some older systems utilize above-ground pipelines. Underlying causes of pipeline emissions include corrosion, failed joints, and structural stresses caused by settling earth or the traversal of heavy equipment. While many new pipelines are constructed of plastic material, most older gathering lines, as well as some recently-built pipelines, are constructed from steel, and are thus subject to corrosion as the pipeline ages. In addition, even for pipeline constructed of polyethylene, significant infrastructure is still constructed using steel pipe and equipment, such as above-ground auxiliary equipment, road crossings, and other higher-stress areas. These can also exhibit corrosion problems. Pipelines may also be damaged by accidental contact by outside parties.
- Emissions from auxiliary equipment, such as emissions from valve packing, or seals on pig launcher doors. Auxiliary equipment is also called “above ground” equipment by operators.
- Episodic emission from pipeline operations. Episodic emissions are releases of gas that occur for defined, typically short, periods. While gas may be released due to emergency situations arising from mishaps, the two most-common planned episodic emissions for gathering pipelines are the blowdown of lines for maintenance and the blowdown and purging of pig launchers and receivers during pigging operations.
This study measured the first two types of emissions – underground pipelines and auxiliary equipment – and performed an engineering estimation of planned episodic emissions.
The authors are unaware of any recently published studies of gathering pipeline emissions, and as a result, emission factors are unknown for this sector (Heath et al., 2015). EPA’s greenhouse gas inventory (GHGI) uses emission factors based upon measurements of distribution mains from a 1996 GRI/EPA study (GRI/EPA, 1996) to approximate emissions for gathering pipelines. The majority of gathering pipelines are not regulated by the U. S. Pipeline and Hazardous Material Administration because they do not cross state boundaries and are in rural areas that fall below population proximity rules (Pipeline and Hazardous Materials Safety Administration, 2016). Recent studies have characterized emissions for gathering and processing plants (Mitchell et al., 2015) and well pads (Allen et al., 2013; Allen, et al., 2015a; Allen, et al., 2015b), but none of these studies performed measurements on gathering pipelines. Several recent studies have evaluated regional methane emissions using aircraft measurements (Beck et al., 2012; Karion et al., 2015; Peischl et al., 2015), but the methods utilized did not support attribution to specific portions of the gathering infrastructure. Other ground-based leak detection campaigns focused on another type of natural gas pipeline: distribution systems (Phillips et al., 2013; Jackson et al., 2014; Gallagher et al., 2015); this study makes specific comparisons to measurements of distribution mains from a study by Lamb et al. that included most distribution infrastructure between the city gate and the consumer’s meter (Lamb et al., 2015). However, distribution pipelines carry dry market gas to customers and thus operate differently than gathering pipelines carrying raw production gas between wells and processing or compression facilities. In summary, since no recent study has systematically measured methane emissions from gathering pipelines, estimates have been based upon aggregate emission factors from distribution pipeline measurements.
Although limited to one basin, this study represents a first attempt to characterize gathering pipeline methane emissions. While the data are not sufficiently representative to provide methane emission factors at the regional or national level, the study provides initial information about the mix of emission sources and guidance to design future gathering pipeline studies.
The field campaign for this study occurred during a coordinated 4-week campaign in the Fayetteville shale play in Arkansas, USA during September–October 2015 (SM-S2). Measured pipelines, along with wells and compressor stations in the campaign area, are shown in Figure 1. There were approximately 5,650 active wells in the study area, which produced approximately 2.5 billion cubic feet per day (bcfd) at the time of the study (Arkansas Oil and Gas Commission, 2015). All active wells and the associated pipelines in the study area were completed after 2004, and 79% of all active wells went online after 2008 (AOGC, 2016). Natural gas produced in the study area is “sweet and dry” (90–98% methane, 0.5–6% ethane), produces no natural gas liquids, and requires minimal upgrading (i.e. no NGL extraction) to achieve pipeline quality. Water is separated from the gas at the well pads utilizing gravity-type separators, and gas is further dehydrated at the gathering compressor station using glycol dehydrators. The pipelines measured for this study were operated by two study partners. For their systems, the suction side of the gathering compressors operates between 100 and 325 kPa (15–50 psia). Due to the low suction pressures, gathering pipelines between wells and gathering compressor stations are larger in diameter than in many basins – typically 4 to 20 inches (10 to 51 cm) in diameter. Underground segments are constructed largely of polyethylene (commonly known as “poly” pipe), coupled to steel segments for above-ground infrastructure. Pipelines from other operators, which were not measured in this study, vary in configuration, with at least one partner company operating their well-to-compressor pipelines at 1–2.8 MPa (150–400 psia), using smaller diameter steel lines. Considering the entire study area, 69% of well-to-compressor gathering pipelines are plastic, and all measurements were made on this type of pipeline.
Lines between compressor stations and transmission pipelines in the study area, which were not measured in this study, are constructed of steel and operate between 6 and 8 MPa (850–1150 psia).
Gathering pipelines are installed in rights of way (ROW), defined by an easement allowing the operator to access and maintain their pipelines. A ROW segment may contain more than one pipeline, but all ROWs measured in this study contain a single pipeline from a single operator. Partner personnel inspect pipelines at irregular intervals by driving or walking the ROWs. In general, these inspections concentrate on identifying encroachment or damage, and teams do not routinely measure leak volume or mass flow from leaks when found. Teams from one partner may carry a laser gas detector to look for leaks (e.g. a Heath, Inc. RMLD). This partner also does occasional flyovers to look for encroachment and distressed vegetation, a possible sign of a gas leak. The other partner does regular leak surveys only on regulated lines (a small fraction of the total). For non-regulated lines, they walk lines and conduct vegetation control biennially, and during these activities they assess for visible indications of leaks.
The study partners who supported measurement on their pipeline systems operated an estimated 83% of gathering pipelines in the study area at the time of the field campaign. However, measurement was not practical on all partner ROWs. ROWs were excluded for the following reasons: they were too steep to traverse with the measurement equipment, they were covered with un-harvested crops, access was restricted by the landowner, or vegetation growth was too dense to traverse with the available screening equipment (SM-S2). One partner company cuts brush on ROWs every two years, and during the study period only the western half of the study area was sufficiently cleared for measurement. In general, both partners operated their pipelines in a similar fashion, and no differences in operation were identified due to season or location within the basin.
During the measurement campaign, measurement days were allocated to each operator in proportion to the number of wells they operate. Each measurement day, sections of accessible ROWs were selected for measurement. After specific ROWs to be measured were determined each day, the measurement team screened as much of the selected ROWs as possible. Measurements were made on 12 days, traveling an average of 8 km per day with a minimum of 4 km in a day and a maximum of 15 km per day.
Measurement teams screened and measured both pipeline leaks and emissions from auxiliary equipment along the pipeline. Underground pipeline leaks were detected by using a vehicle-based measurement system (VMS) that drove the ROWs looking for methane mixing ratios above background levels. Measurement vehicles were outfitted with a gas collection manifold on the front bumper of the vehicle routed to a Los Gatos Research Ultraportable Greenhouse Gas Analyzer, with a detection threshold of 0.01 ppm over ambient methane mixing ratio (SM-S3). Elevated emissions were further investigated using hand-held equipment including a RMLD-IS laser gas detector and a Detecto PAK Infrared (DP-IR) probe-type detector, both from Heath Consultants, and a Bascom-Turner Gas Sentry instrument sensitive to methane mixing ratios from 100 ppm to 100% CH4. One underground leak was detected and localized using these instruments (see below). The Heath instruments have a self-test feature used daily, but were not calibrated in the field. The Gas Sentry instrument was zeroed in clean air and bump tested daily (Bump Test of Gas Monitors, 2014). The Los Gatos instrument was calibrated daily using calibration gases as specified in the operations manual.
Measurement methods followed the methods utilized in a previous study of distribution systems (Lamb et al., 2015), which were developed for measurement of distribution pipeline leaks, and were supervised by the same scientists. In short, detected pipeline leaks were covered with an impermeable cover which enclosed the leak location and was held against the ground around its edge by weights. High flow methods were utilized to measure the leak rate: The emission gas and air were drawn from the enclosure and methane mixing ratio and total mass flow were measured. Methane mass flow from the leak was then calculated from mass flow and methane enhancements above the background mixing ratio. Measurements were made using an INDACO high flow instrument calibrated daily using zero air and span gas (2.5% CH4 and 100% CH4 in air) and checked at mid-day and the end of the day (see Lamb et al., 2015, SI S-3.1). The flow sensor was checked against an independent air flow meter (TSI VelociCheck 8340) at the beginning and end of each sampling event. Instruments are listed in SM-S3. In this study, only one pipeline leak was detected, and gas emissions occurred from a distinct hole in the ground several cm in diameter (an emission pattern called a ‘gopher hole’ by operational personnel) which was readily enclosed with an impermeable cover of approximately 1 m2 (see Lamb et al., 2015, SI S-3.2). Since the gas in the study area is dry, there were few volatile organic compounds in the gas stream, and thus low risk of poisoning sensors or skewing the methane mixing ratios measured by the INDACO instrument. Uncertainty in the enclosure method, analyzed in SM-S3, is small relative to the uncertainty caused by frequency of the leak count, and was not included in simulation models.
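The leak-quantification arithmetic described above (methane mass flow computed from the total enclosure flow and the methane enhancement above background) can be sketched as follows; the flow rate and mixing ratios below are illustrative assumptions, not study data.

```python
# Sketch of the high-flow leak-rate calculation: methane mass flow is
# derived from the total sampled flow and the methane mixing-ratio
# enhancement above background. All numeric values are illustrative.

CH4_MOLAR_MASS = 16.04    # g/mol
MOLAR_VOLUME = 24.06      # L/mol for an ideal gas at ~20 C, 1 atm

def leak_rate_kg_per_hr(total_flow_lpm, sample_frac, background_frac):
    """Methane mass flow (kg CH4/hr) from a high-flow sample.

    total_flow_lpm  -- total air+gas flow drawn through the enclosure (L/min)
    sample_frac     -- methane volume fraction measured in the sample stream
    background_frac -- ambient methane volume fraction (~1.9 ppm = 1.9e-6)
    """
    enhancement = sample_frac - background_frac          # leak CH4 fraction
    ch4_lpm = total_flow_lpm * enhancement               # L CH4 per minute
    ch4_mol_per_hr = ch4_lpm * 60.0 / MOLAR_VOLUME       # mol CH4 per hour
    return ch4_mol_per_hr * CH4_MOLAR_MASS / 1000.0      # kg CH4 per hour

# Example: 300 L/min total flow with 2.1% CH4 measured in the sample stream.
rate = leak_rate_kg_per_hr(300.0, 0.021, 1.9e-6)
```

The conversion assumes near-ambient temperature and pressure at the sample point; a field instrument would apply its own internal corrections.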
While screening the ROWs, measurement vehicles would periodically arrive at auxiliary equipment (block valve and/or pig launcher) and measurement staff would survey the components with the INDACO instrument as described above, to quantify detected methane emissions sources (SM-S3).
Due to the limited scope of the study, measurement results presented here should not be construed as sufficient to develop emission factors for gathering pipelines in general. However, study measurements provide insight into the mix of emissions, and associated mathematical models provide guidance on the measurement requirements necessary to develop nationally-applicable emission factors.
Study area estimates
Monte Carlo methods (Ross, 2006) were utilized to estimate total emissions for the study area. Field measurements were utilized to model emissions, and emission drivers – commonly called activity data – were developed from public data and non-public partner data provided to the study team (SM-S4). Activity data came from the two study partners, who provided both data and access to gathering lines, and from one data partner who provided information on company equipment but did not provide access. Together, the study team had activity data for 98% of gathering pipeline length in the study area, as estimated from active well count (AOGC, 2016) – a level of completeness unique to this study. The available activity data are summarized in Table 1. All companies provided pipeline lengths and material type.
Table 1. Available activity data (✓ = provided by the company; E = estimated by the study team).

| | Study Partner 1 | Study Partner 2 | Data Partner | Summary Information |
|---|---|---|---|---|
| Pipeline Length | ✓ | ✓ | ✓ | 4683 km |
| Pipeline Type | ✓ | ✓ | ✓ | 69% Polyethylene |
| Pig Launchers | ✓ | E | ✓ | 3539 [3342 to 3753] |
| Block Valves | E | E | ✓ | 2322 [2250 to 2404] |
For auxiliary equipment, emissions were modeled exclusively using measurements made in the field campaign. Auxiliary equipment counts were available from one study partner and the data partner, and the study team estimated auxiliary equipment counts for the other study partner utilizing satellite imaging (SM-S4).
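As a rough illustration of this Monte Carlo approach, the sketch below bootstrap-resamples a set of per-location emission rates and scales the resampled mean by an equipment count. The per-location rates are placeholders invented for illustration; only the pig launcher count comes from Table 1.

```python
import random

# Bootstrap Monte Carlo sketch for auxiliary-equipment emissions:
# resample measured per-location rates with replacement, then scale the
# resampled mean by the population count. Rates below are illustrative.

measured_rates = [0.0, 0.0, 0.005, 0.01, 0.02, 0.087]  # kg CH4/hr per location
equipment_count = 3539                                  # pig launchers (Table 1)

def simulate_total(rates, count, n_iter=10_000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_iter):
        # Resample a mean per-location rate, then scale to the population.
        sample = [rng.choice(rates) for _ in rates]
        totals.append(count * sum(sample) / len(sample))
    totals.sort()
    mean = sum(totals) / len(totals)
    ci = (totals[int(0.025 * n_iter)], totals[int(0.975 * n_iter)])
    return mean, ci

mean, (lo, hi) = simulate_total(measured_rates, equipment_count)
```

With only a handful of measurements the resulting interval is wide, which is why the study's actual simulation draws on the full set of 98 measured emission points (SM-S4).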
Two sources of uncertainty exist for emissions from pipeline leaks. First, it is unknown if the measured emission rate is representative of the mean emission rate of possible leaks within the study area. Therefore, this emission rate is modeled using a lognormal distribution. To develop the parameters for the distribution, the mean of the lognormal distribution was set to the size of the single leak observed in the field campaign and the standard deviation was estimated by analogy to leaks measured on distribution mains (Lamb et al., 2015). The development of the lognormal distribution, and comparison to the assumption of a triangular distribution, is described in SM-S4. Second, uncertainty also exists in the frequency at which leaks occur within the pipeline system. This uncertainty was modeled by analyzing the probability of finding one event (the observed leak) assuming a range of possible, but unknown, leak counts within the study population. This uncertainty analysis follows the method used by a previous study to characterize the frequency of rare large emitters in the transmission and storage sector (Zimmerle et al., 2015). For this study, we are interested in the probability of finding one pipeline leak while surveying 96 km of pipeline randomly selected from the total population of 3948 km of pipeline that could be screened for leaks. Given the number of leaks in the total population, the probability of identifying one leak is given by the hypergeometric distribution. Combining the probabilities for all possible total leak populations results in the probability distribution shown in Figure 2 (SM-S4). For sample sizes that are small relative to the population, the mode matches the leak frequency from the field campaign, but the distribution has a strong upward skew which shifts the mean leak frequency above the frequency seen in the field campaign. 
In practical terms, this distribution indicates that there is a substantial probability that the number of leaks found in a small survey is an underestimate of the mean leak frequency. Skew becomes less pronounced as the sampled proportion of the population increases. For the sample size in this study, the upward skew results in a mean leak frequency twice that observed in the field campaign (50 km/leak, versus the observed 96 km/leak) and a wide, asymmetric, 95% confidence interval (CI) of 18 to 425 km/leak. This analysis provides an estimate of the uncertainty inherent in finding rare events given a limited sample size. The same distribution is also utilized to analyze the coverage required in future pipeline studies to provide an upper bound on emissions from gathering pipeline leaks.
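The hypergeometric reasoning above can be reproduced directly: treating the screenable pipeline as 3948 one-km segments of which 96 were sampled, the sketch below computes the probability of observing exactly one leaking segment for each candidate total leak count, then combines the candidates. The flat prior over leak counts is an assumption made here for illustration, not a statement about the study's exact weighting.

```python
from math import comb

# Uncertainty in leak frequency: probability of seeing exactly one leak
# in a 96 km sample from 3948 km, for each possible total leak count K.

N, n, observed = 3948, 96, 1   # population km, sampled km, leaks found

def likelihood(K):
    """P(exactly `observed` leaks in the sample | K leaks in population)."""
    if K < observed:
        return 0.0
    return comb(K, observed) * comb(N - K, n - observed) / comb(N, n)

ks = range(0, 301)                       # plausible total leak counts
weights = [likelihood(K) for K in ks]
total = sum(weights)
posterior = [w / total for w in weights]  # flat-prior normalization

mean_K = sum(K * p for K, p in zip(ks, posterior))
mode_K = max(ks, key=likelihood)
mean_km_per_leak = N / mean_K            # km per leak at the posterior mean
```

Consistent with the text, the mode lands near the observed frequency (3948/41 ≈ 96 km/leak) while the upward-skewed mean sits near 50 km/leak, i.e. roughly twice the observed frequency.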
In addition to steady state emissions from gathering lines and auxiliary equipment, there are additional episodic emissions when pig launchers and receivers are vented during launch and receive operations. These emissions were not measured due to the high instantaneous emissions rate during venting. Instead, emissions from each pig launch/receive event were calculated based upon geometry of vessel, pressure before release, average ground temperature, and gas composition (SM-S4).
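The per-event vent calculation described above can be approximated with the ideal gas law; the barrel dimensions, pressures, temperature, and methane fraction below are illustrative assumptions rather than study values.

```python
from math import pi

# Ideal-gas sketch of the methane vented when a pig launcher barrel is
# blown down to atmosphere. Vessel geometry, pressure, temperature, and
# methane fraction are illustrative assumptions, not study inputs.

R = 8.314                     # J/(mol K)
CH4_MOLAR_MASS_KG = 16.04e-3  # kg/mol

def vent_mass_kg(diameter_m, length_m, p_abs_pa, p_atm_pa, temp_k, ch4_frac):
    """Methane mass (kg) released when the barrel vents to atmosphere."""
    volume = pi * (diameter_m / 2.0) ** 2 * length_m      # barrel volume, m^3
    moles_released = (p_abs_pa - p_atm_pa) * volume / (R * temp_k)
    return moles_released * ch4_frac * CH4_MOLAR_MASS_KG

# Example: 0.3 m x 2 m barrel at 300 kPa absolute, 15 C ground temperature,
# 95% methane by volume.
mass = vent_mass_kg(0.3, 2.0, 300e3, 101.325e3, 288.15, 0.95)
```

Per-event releases on this order are consistent with the study's finding that planned pigging emissions are a small fraction of total pipeline-related emissions.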
The study estimate is compared with the EPA’s greenhouse gas inventory (GHGI), and greenhouse gas reporting program (GHGRP), as well as measurements of distribution mains made in a recent study (Lamb et al., 2015). We localize emissions estimates to the study area by utilizing activity estimates developed in this study combined with emission factors from the GHGI, the GHGRP, and the Lamb study. Since these methods/sources stratify pipelines by material (steel or plastic), pipeline length by material type was estimated for all pipelines in the study area. Since the GHGRP and GHGI do not call out emissions from auxiliary equipment as a separate emissions source, and the auxiliary equipment on distribution systems differs from that on gathering systems, comparisons focus exclusively on pipeline leaks.
Finally, an empirical 95% confidence interval (CI) is utilized throughout, defined as the 2.5%/97.5% percentiles for two-sided analyses, and 0%/95% when discussing pipeline screening guidelines for future studies.
Results and discussion
We first consider measurement results for the field campaign, which are summarized in Table 2, and detailed in the SM spreadsheet. The field campaign surveyed 95 auxiliary equipment locations and detected 98 total leaks, of which 72% originated from valve packing. While the underlying cause of each leak is unknown, field operators report that valve packing must often be loosened prior to operating a valve during pigging operations or to allow a blocking valve to be turned by hand, and it is possible the packing was not re-tightened sufficiently after the operation was complete, resulting in a fugitive emission. The remaining detected leaks were from pig launcher doors (13%), flanges (12%), and gauges (2%). A total of 0.83 kg CH4/hr of emissions were measured, with valves contributing 49%, pig launcher doors 47%, flanges 3% and gauges 1%. There was no statistical difference in auxiliary equipment emissions between the two partner companies (SM-S4). This study did not detect any failures of auxiliary equipment releasing gas at high rates, nor did it estimate the frequency at which such failures may occur.
Table 2. Field campaign measurement summary.

| Emission Source | Locations Screened | Locations with Detected Methane Enhancements | Locations with Measurable Emissions | Measured Methane Emission Rate (kg CH4/hr) |
|---|---|---|---|---|
| Pipeline leaks | 96 km | 1 | 1 | 4.0 |
A single underground pipeline emission, measured at 4.0 kg CH4/hr, was found while screening a total of 96 km of pipeline. This raises the question of how effective the VMS was in detecting underground pipeline leaks. While the detection efficacy of the VMS could not be assessed with controlled studies in gathering pipeline conditions, there is high confidence in use of the method since it has been utilized successfully in recent distribution pipeline studies. However, to assess the chance that the VMS “missed a leak,” the study conducted a qualitative post-campaign analysis of the VMS’s detection sensitivity. All methane enhancements seen by the VMS are summarized in Figure 3a. For the single pipeline leak identified in this study (4 kg CH4/hr), the VMS noted a maximum methane mixing ratio of 11,160 ppm, in a clearly defined peak, and methane enhancements were above 10 ppm up to 37 m away from the emission source, as seen in Figure 3b. To determine if the VMS would have detected smaller emission rates, the mixing ratios recorded by the VMS were reviewed for locations when the VMS was within 50 m of identified emissions from above-ground auxiliary equipment. Since these sources were independently screened and measured, reviewing atmospheric mixing ratios seen by the VMS provides an independent check of the VMS’s capabilities. Qualitatively, a review would expect to see elevated methane mixing ratios – defined here as 3 ppm above the background mixing ratio of 1.9 ppm – when the VMS was near auxiliary equipment emissions. An example, shown in Figure 3c, indicates that the VMS detected an enhancement when 7 m from a 0.087 kg CH4/hr emission source, and peaked at 36 ppm when 1.2 m away from the emission source. Additional examples are provided in SM-S3. 
This qualitative analysis indicates that the VMS would likely have identified pipeline methane emissions one to two orders of magnitude smaller than the single underground pipeline leak detected during the study, assuming the gas was emitted to atmosphere within the ROW and/or upwind of the VMS. Therefore, it is a reasonable assumption that either (a) the single leak detected here is the only underground pipeline leak in the ROWs measured during the study, or, (b) any undetected leaks were substantially smaller than 4 kg CH4/hr.
Using methods described earlier, our analysis indicates that planned episodic emissions are small relative to other gathering pipeline emission sources: There were 13 pigging operations during the measurement campaign, which contributed an estimated 31 kg of emitted methane, or 1.3% of the 2430 kg (4.8 kg CH4/hr) of measured methane emissions from pipelines and auxiliary equipment during the same period. No pipeline blowdowns occurred during the field study. Therefore, to simplify the analysis presented here, planned episodic emissions are not included in the analysis below but are reported in SM-S4. Unplanned episodic emissions (e.g. a pipeline breach) were not analyzed in this study.
Estimated gathering pipeline emissions for the study area
Table 3 summarizes the simulated methane emissions for gathering pipeline systems in the study area, termed the “study estimate”, which was developed using the Monte Carlo methods described earlier. Simulation results estimate total study area methane emissions to be 402 [95 to 1065] kg CH4/hr. Underground pipeline leaks dominate the total, contributing 93% [79% to 98%] of mean estimated methane emissions. Additionally, the uncertainty in leak frequency – number of pipeline leaks per km of pipeline – dominates the confidence interval.
Table 3. Study model estimate of gathering pipeline emissions in the study area.

| Emission Component | Mean (kg CH4/hr) | 95% Confidence Interval | Mean Fraction of Emissions | Confidence Interval for Fraction of Emissions |
|---|---|---|---|---|
| Pig Launchers | 15 | +15% / –14% | 6.0% | 1% to 16% |
| Block Valves | 4 | +15% / –14% | 1.5% | 0.4% to 4% |
| Pipeline Leaks | 382 | +173% / –80% | 93% | 79% to 98% |
| Study Area Total | 402 | +165% / –76% | 100% | – |
Due to the number of auxiliary components measured and the number of leak measurements, the CIs for auxiliary equipment emissions are much tighter (approximately ±15%). Auxiliary equipment contributes, on average, 7% [2% to 21%] of total emissions. Most emissions detected on auxiliary equipment could be eliminated by screening for emissions after maintenance operations and tightening valve packing or seal latches on pig launchers. However, it should be emphasized that such control actions would eliminate only 7% of gathering pipeline emissions based upon this study’s results. Emission rates for auxiliary equipment across the entire basin are significantly below those of other infrastructure in the gathering sector. For example, a 2015 national study (Marchese et al., 2015) measured 13 gathering compressor stations in Arkansas and found an average facility-level emission rate of 99 kg CH4/hr, which is larger than the estimated mean emissions from all auxiliary pipeline equipment in the basin. Given an estimated 120 compressor stations in the study area, and assuming that no auxiliary equipment components have undetected major malfunctions, measurements completed here indicate that auxiliary equipment emissions are nearly negligible relative to other gathering emission sources.
In contrast, the 382 [75 to 1045] kg CH4/hr estimated for pipeline leaks is not negligible. The measured leak, 4 kg CH4/hr, approaches the facility-level emission rate of the lowest-emitting gathering stations measured in Arkansas in the Marchese study (7.5 ± 2.3 kg CH4/hr). With due caution given the small sample size available here, pipeline leaks are comparable to other infrastructure, suggesting future measurement and analysis of gathering pipelines should focus on pipeline leak detection and measurement.
The study estimate is compared to other studies in Figure 4 (SM-S4 & SM-S5). The comparison utilizes activity data developed in this study and emission factors from the GHGRP (US CFR, n.d.), the 2015 GHGI (EPA, 2015), and recent emissions data for distribution mains (Lamb et al., 2015). Since all methods utilize this study’s activity estimate, comparisons focus only on differences in emission rates for the mix of pipeline equipment in the study area. Since GHGRP emission factors are provided without CIs, only the mean estimate is shown. The probability distribution of the GHGI emission factors was estimated from the 90% CIs listed in the GRI/EPA report used to develop the emission factors (GRI/EPA, 1996).
The CI of the GHGI-based estimate overlaps the CI of the study estimate, and the GHGRP-based estimate falls within the CI of the study estimate. Therefore, this study provides no evidence of issues with the GHGI and GHGRP emission factors for the study area. Since the infrastructure in this basin is newer than most basins, and wet gas production may have different impacts on gathering line emissions, the agreement noted here should not be construed as representative of other basins.
The comparison with the distribution estimate is included because past revisions of the GHGI have utilized distribution mains as a source for gathering line emission factors. In this comparison, confidence intervals of the study estimate do not overlap with emissions estimated using emission factors from (Lamb et al., 2015). Therefore, measurements performed here indicate that emission factors based upon new distribution pipeline measurements should not be utilized to estimate gathering pipeline emissions. Instead, additional measurements should be made on a representative sample of gathering pipelines.
Pipeline screening guidelines for future studies
The current study indicates that pipeline leaks are rare events in the study area. The uncertainty analysis presented above provides a conceptual model to understand how the frequency of these rare events contributes to uncertainty in the resulting emissions estimates. Using this conceptual model, it is possible to pose the question: What size of field campaign would be necessary to constrain uncertainty associated with estimates of pipeline leak emissions to a desired fraction of total basin emissions?
To exercise this conceptual model, it is first necessary to define a frequency range over which pipeline leaks might occur. Given that range, it is possible to explore the fraction of a basin that would need to be screened and measured to meet the desired constraint on emissions estimates. Leak surveys are occasionally completed for operators, but unfortunately are seldom published. To estimate the range of frequencies, the authors contacted several organizations which had done recent leak surveys, and several agreed to provide data under the condition of confidentiality. In all cases, leaks were detected, but not measured:
- A leak detection survey of 595 km of an old gathering system in Pennsylvania indicated approximately 0.3 km per leak, of which 10% were large enough to be audible (Abele, 2016).
- A helicopter survey (with an unknown lower detection limit) of a variety of pipeline types found 16,000 leaks in 225,000 km of survey, or ≈14 km per leak.
- An operator managing 790 km of newer, low-pressure pipeline reports “less than 5 underground leaks” in two years. Assuming all leaks remained unreported for six months, this would translate into a leak frequency of ≈160 km per leak.
These qualitative data indicate leak frequencies ranging from 0.3 to 160 km/leak. The current study’s observation of 96 km/leak lies within the reported range, and our estimated CI (18–425 km/leak) includes the low-frequency end (160 km/leak) but not the high-frequency end of the range (0.3 km/leak). This is unsurprising, as the pipelines measured in this study are typically no more than 10 years old; the data above indicate that high leak frequencies occur in regions with older pipelines, where corrosion and/or other damage may be more prevalent.
Figure 5 shows simulation results for five leak frequencies for a basin pipeline length similar to that sampled in this campaign – approximately 4000 km of gathering pipeline. The simulation assumes a leak emission factor of 4 kg CH4/hr for all pipeline leaks. We also assume that total emissions from all sources in this hypothetical basin can be estimated using Peischl’s measurement of the eastern Fayetteville shale (Peischl et al., 2015). The bounding question is: Assuming a leak frequency is available a priori from other data (e.g. leak surveys), what fraction of the gathering pipelines in the study area would need to be measured to constrain uncertainty in the resulting pipeline leak emissions estimate to within 1% of the region’s total emissions? For this analysis we compare the upper, one-sided, 95% confidence limit of our leak estimate to the mean Peischl estimate of total study area emissions (SM-S6).
Figure 5 provides the upper 95% confidence limit as a fraction of the Peischl estimate for a range of leak frequencies. In areas where leaks occur less frequently than 1 leak per 100 km of pipeline, a field campaign measuring 5% of the basin pipeline would constrain any underestimate of emissions from gathering pipelines to be less than 1% of total basin emissions. The current study measured 2.4% of the basin and found 1 leak in 96 km of pipeline. Therefore, the uncertainty analysis indicates that if approximately twice the pipeline length of this study (≈200 km) were measured, and no more than two pipeline leaks were found, the upper bound on emissions would be in error by no more than 1% of total study area emissions. For basins with higher leak frequencies, pipeline emissions account for a larger fraction of total emissions, and relatively more pipeline must be measured to reduce uncertainty in the total leak count. For example, for areas with leak frequencies of 1 leak in 2 km, 25% of the pipeline network must be measured to constrain uncertainty to within 1% of total basin emissions.
Field measurements indicate that above-ground equipment exhibits emissions that are small relative to other sources in the gathering system within the study area. Underground pipeline leaks are more challenging to detect, isolate and measure than auxiliary equipment, but study results show that a single underground leak can dominate total emissions. Gas mixing ratios near the leak location may also exceed lower explosive limits, providing a safety incentive to find and fix these issues. Assuming the observations of this study hold for other basins, these data suggest future emissions studies should focus on detecting underground pipeline leaks and devote relatively fewer resources to characterizing above-ground auxiliary equipment.
Field campaign experience in this study also suggests that emissions from underground leaks can be characterized with random screening of pipeline systems, but the fraction of the pipeline length to screen is strongly dependent upon the number of leaks found. Establishing an a priori “estimated leak frequency” for a gathering system, potentially through periodic screening, would provide system-specific guidance on how much of the pipeline system would need to be subjected to leak detection and measurement in order to constrain uncertainty in emissions estimates to be less than a given fraction of total emissions in the basin or system.
Data Accessibility Statement
Datasets produced in this work are available as online supplementary material accompanying this publication.
The supplemental files for this article can be found as follows:
- S1. Description of Gathering Lines and Auxiliary Equipment. DOI: https://doi.org/10.1525/elementa.258.s1
- S2. Study Area Definition and Pipeline Selection. DOI: https://doi.org/10.1525/elementa.258.s1
- S3. Measurement Equipment used in Study. DOI: https://doi.org/10.1525/elementa.258.s1
- S4. Measurement and Modeling Methods. DOI: https://doi.org/10.1525/elementa.258.s1
- S5. Results & Study Comparisons. DOI: https://doi.org/10.1525/elementa.258.s1
- S6. Future Gathering Pipeline Measurements. DOI: https://doi.org/10.1525/elementa.258.s1
- Dataset S1. Gathering Pipeline SM Data.xlsx. DOI: https://doi.org/10.1525/elementa.258.s2
Archaeologists from the New York State Museum uncovered the foundation remains of a small house along North Country Road in Rocky Point, New York, in 1991. The house was occupied during parts of the eighteenth and nineteenth centuries, then left abandoned in the wilderness for roughly 150 years. The site was rediscovered during a cultural resources survey, performed by archaeologists for the New York State Department of Transportation, in advance of proposed highway improvements to New York State Route 25A. The small archaeological site, which consisted of a house foundation measuring 11 x 13 feet and associated archaeological deposits, was identified as the home of Betsey Prince through census data and deeds for adjacent properties.
Betsey Prince was listed as the head of a household in the 1820 Federal census. Her household was one of four composed entirely of free people of color and located on North Country Road in Rocky Point in the early nineteenth century. The household was documented as early as 1790 (and was likely inhabited even earlier), but the occupants were variously identified as Prince, Prince Jessup, Rice Jessup, Betty Jessup, Betty or Betsey Prince, and Elizabeth Jessup in Federal census data, deeds, a tax document, and a probate inventory. In addition to the variety of names, the inhabitants and neighbors of the Betsey Prince site were identified with varying racial labels: “colored,” “negro,” “black,” “mulatto,” and “mustey.” For the sake of consistency, they will be referred to here as free black people.
The archaeological site was determined eligible for listing on the National Register of Historic Places because it could provide information about people who lived in the late eighteenth and early nineteenth centuries that we know little about – free black people. The site was excavated by archaeologists because impending plans to widen New York State Route 25A would destroy it. The artifacts (stored at the New York State Museum) provide evidence of the everyday lives of the people who lived at the Betsey Prince site. Archival research aided in connecting names and identities with the site. Together, these resources provide the basis for a narrative of lifeways for a group that was marginal to history, but integral to the functioning of a rural, early American economy.
The Gradual Emancipation Act of 1799 set New York State on a course of prolonged abolition of slavery. Although the promise of freedom was made, many people of color remained legally enslaved in New York until 1827. During this time, many small and large white households held enslaved Africans, and some such households were listed near the free black settlement at Rocky Point. The presence of the free black settlement would have been conspicuous among the predominantly white communities of rural Long Island. However, they were part of a diverse non-white population, which included captive Africans and Indians, and people of color who were both recently freed and born free (on Long Island, or elsewhere and relocated to Long Island from various places, including New York City, New Jersey, Connecticut, and the Caribbean).
How black people negotiated their identities at this time is certainly difficult to understand. The variety of racial categories mentioned above suggests a lack of consistency in how people were both perceived and classified. The inconsistencies in names may point to the biases of census takers, tax assessors, and government clerks, or may be indicative of individual representation. Perhaps different names were given under different circumstances. It is therefore important to consider the role people of color played in constructing their own identities in early America, as it was not uncommon for black people to change their names more than once.
This socio-historical context is essential for interpreting the data from the Betsey Prince archaeological site. Working within a framework that recognizes racism, segregation, the complexities of identity formation, and the struggle for civil liberties will produce insight into the active lives of the site’s occupants. As such, the documents and archaeological evidence from the Betsey Prince site offer a unique opportunity to investigate identity construction through social interactions, labor, domestic activities, and gender.
Soon after Europeans settled New Amsterdam and New England, Africans were present and noted in histories of the rural settlements. For Long Island, early-twentieth-century historian Peter Ross notes that:
On broad lines it may be asserted that each owner of the soil, as soon as he was wealthy enough, in early times bought at least one slave to aid in its cultivation, and that as wealth increased it became quite fashionable to have one or more negroes as domestic servants as well as farmhands.
The accounts of the earliest enslaved Africans on Long Island provide little information beyond presence. Indeed, many of the English settlers of eastern Long Island were involved in triangular trade with the West Indies and Africa, and therefore were responsible for the entry of a significant portion of enslaved Africans to Long Island. Due to its length, Long Island had an interesting relationship with both the colony of New Amsterdam (later New York) and the New England colonies. This position allowed for movement of trade goods and people (free and enslaved) across the Island, further west to New York City and New Jersey, and north to New England.
The exact origins or African identities of captive Africans that came to Long Island are difficult to determine, but some patterns are visible in trade routes. Research on the trade of captive Africans to New York City suggests that West Africans (from the Senegambia, Sierra Leone and Liberia, the Gold Coast, the Bight of Benin, and the Niger Delta) were the largest African group imported to New York during the period of English rule in the seventeenth century. Without providing direct evidence for the origins of captive Africans, shipping records indicate a continued direct trade between New York and the West African coast. This trade was supplemented by a provisional trade between the Northeast and the Caribbean, which resulted in the importation of captive Africans (from West and Central Africa) via the West Indies.
Once Africans arrived in southern New England, they established relationships with people of other backgrounds and ethnicities. Local historians have long recognized an historical link between the historic African and indigenous populations in the area, and some local descendants of African and Native American ancestry recognize a racially-mixed heritage in their ancestral history.
By the eighteenth century, the non-white population included free and enslaved people of African, indigenous, and mixed-ancestry. The number of enslaved individuals residing in Suffolk County was highest between the years 1749 and 1790, after which attitudes about slavery began to change. Although the total number of black people in Suffolk County was also at its height near 1790, this population declined throughout the nineteenth century.
The 1790 Federal census provides an interesting glimpse of the African presence in Suffolk County, as it is the only Federal census taken prior to the passage of manumission legislation in New York State. People of color comprised 13.5% of the population of Suffolk County at this time; nearly half of this group (49%) was enslaved. All enslaved people were listed as members of white households during this census. A simple comparison of the enslaved and free populations in 1790 produces misleading results. The average number of enslaved individuals per slaveholding household amounts to 2.2, but this calculation may not be an accurate indicator of the enslaved experience. In fact, approximately one-third of the enslaved population was listed in white-headed households that contained 5 or more enslaved individuals (with or without free people of color as well). Perhaps this latter calculation helps to produce a more realistic reflection of slaveholding patterns in Suffolk County.
The low number of enslaved Africans in the majority of Long Island households has been misinterpreted by historians to suggest the insignificance of slavery to Long Island social and economic life. As a result, a paternalistic depiction of northern rural slavery emerged, in which enslaved Africans were described as "members" of the white slave-owning households. The social, political, and economic conditions for free and enslaved black people were probably more complex than paternalism suggests. The majority of slaveholding households included 2 enslaved individuals, and many of these households contained free Africans as well. In fact, 83% of the free African population resided in white-headed households (some of which contained enslaved Africans). In addition, the census indicates that households that contained higher numbers of free and enslaved Africans were listed near each other. Furthermore, many white property owners had both free and enslaved laborers working for them. This perspective has strong implications for the creation of social networks among black people within and outside white households.
For the Town of Brookhaven, non-white people comprised 16% of the population in 1790. About 46% of this non-white Brookhaven population was enslaved, and 40% of the enslaved population was one of 5 or more enslaved Africans listed in a white slaveholding household. The remaining 54% of Brookhaven's black population was listed as "free"; however, most of these individuals were listed as residents of white households. Of a total of 275 free non-white people, only 17% were listed in households comprised exclusively of free non-white persons.
The Records of the Town of Brookhaven contain approximately 70 manumission records between the years 1798 and 1826, and the 1820 and 1830 Federal census rolls indicate that many of these freed people took the surnames of their former owners.
Amid contemporary debates about citizenship and freedom, this was a time of changing ideas about slavery. Africans and African Americans suffered many restrictions on their civil liberties. While enslaved black people were slowly gaining legal freedom, others were losing the rights they had attained. Racism was taking a new, institutionalized form, and this likely contributed to the disappearance or demise of the black community at Rocky Point. With this context in mind, the Betsey Prince site could provide important information about a group of people who were carving out an existence despite economic, political, and social challenges during the early nineteenth century.
The Black Settlement at Rocky Point
Rocky Point is located in the northern portion of the Town of Brookhaven. Although Europeans had arrived in the Town of Brookhaven by the mid-seventeenth century, the territory of present-day Rocky Point remained unsettled until the eighteenth century. Noah Hallock, who built a farmstead at Hallock Landing around 1721, is perhaps the earliest documented settler. The earliest settlements were established near the North Shore at Hallock Landing and Rocky Point Landing. By the nineteenth century, homesteads were evident on North Country Road.
As mentioned above, the Betsey Prince archaeological site was identified by archaeologists through census data and deeds for adjacent properties. Further documentary research indicates that this site was originally inhabited by a free black person named Prince Jessup. A man named Prince is identified as “Negro” in the 1800 Federal census, which states that he was the head of a household consisting of eight “other free persons.” He is identified next to two other households consisting of free persons of non-white or racially-mixed heritage. The Prince household is identified again in the 1810 Federal census, and Prince, under the name of Rice Jessup, remains the head of a household consisting of eight “other free persons, except Indians, not taxed,” next to two similarly represented households.
The census data indicate that the Jessup household was located within a small succession of free black or racially-mixed heritage households between approximately 1790 and 1850. Some of the names of the other residents surrounding the Jessup household in Rocky Point (over a 60 year period) include Jonah Miller, Mineus Lyman, Benjamin Davis, and Titus Sell. The documents suggest that these men were farmers and laborers who acquired property from whites, but the acquisition of their property was not as clearly documented as their loss or sale of property. Miller, Lyman, Davis, Sell, the Jessups, and others were variably identified as “colored,” “negro,” “black,” “mulatto,” and “mustey”, and their names were often reported without concern for consistency.
These individuals represent a group that is invisible in history and the landscape. Nonetheless, their presence was real and unique to Suffolk County. The Miller, Davis, Sell, and Jessup families resided consecutively and owned property in the Town of Brookhaven during the period of gradual emancipation in New York (between 1799 and approximately 1830). At least one of these residents was enslaved and his manumission was recorded in the Records of the Town of Brookhaven in the early nineteenth century. Some of these residents were also among the earliest members of the Mount Sinai Congregational Church, located almost four miles to the west of Rocky Point.
The free black community at Rocky Point was not spatially isolated, but situated among the contemporary white households on North Country Road, many of which included free and enslaved people of color residing in their homes. Church records and account books indicate that these free black individuals attended the same churches, frequented the same stores, sought care from the same doctors, and worked alongside white residents and their laborers. These were not privileged members of the community. Indeed the social relationship between these individuals and their white neighbors was probably multifarious, and documents provide little evidence of such complexities. It is apparent, however, that these individuals were distinct from their enslaved neighbors in the manner of legal freedom, and their documented activities suggest a moderate degree of social mobility in the larger, albeit racialized, rural society.
Jonah Miller was perhaps at the center of the free black community at Rocky Point. Probably one of the earliest black property owners in the community, Miller was a farmer who accumulated a large amount of property between the late eighteenth and early nineteenth centuries. There is no indication that he was enslaved in the eighteenth century, although at least one local historian has suggested that his last name may have been acquired from a previous slaveowner, possibly one of the Millers of Miller Place located three miles to the west. In Brookhaven in 1776, Richard Miller held seven enslaved black people, while Timothy and William Miller each held one enslaved black person, but if there was a relationship between Jonah Miller and one of these men, it remains unknown.
The first mention of Miller in the records is the registration of his cattle earmark with the Town of Brookhaven in 1789. In 1799 and 1810, Miller was the only black property owner in the Town of Brookhaven to be taxed. His property included one house (valued at $50) and 150 acres of land (valued at $300) in 1799, and he was taxed on property valued at $300 again in 1810. His prominence in the community is suggested by his serving as a witness for several financial transactions for his black neighbors, being a member of Mount Sinai Congregational Church, and having at least one child attend school.
While Jonah Miller’s economic and social activities seem to be well-documented, this is not the case for his neighbors. In particular, there are few documents highlighting the activities of inhabitants at the Betsey Prince site. For these individuals, the story is uncovered through archaeology.
Archaeology at the Betsey Prince Site
In 1993, archaeologists with the NYSM uncovered the foundation of a house, the remains of a brick chimney, the cellar hole, a storage pit dug into the base of the cellar, a small midden in the rear yard, and three additional artifact concentrations in the yard (Figure 1).
Figure 1: Excavations of the Betsey Prince site were completed by archaeologists from the New York State Museum in 1993. Courtesy of the New York State Museum.
The house consists of two rooms: a main room, which measured roughly 11 x 13 feet, and a 6 x 8 foot kitchen wing with fireplace located west of the main room (Figure 2). The house foundation and chimney base consisted of unmortared fieldstone boulders, and the chimney was brick. The remainder of the house was likely wood frame and clapboard construction, as was typical of the New England building tradition on eastern Long Island. Seven thousand ninety-five artifacts (excluding brick, mortar, and shell fragments) were recovered at the site.
Figure 2: Plan of the house at the Betsey Prince site. Courtesy of the New York State Museum.
A review of the documents available for the site makes it apparent that the site was occupied by Prince and Elizabeth Jessup, a married or common-law couple. Prince was likely a laborer who found work locally. In 1810, as many as eight free people of color were living in the small, two-room house. Prince's property included a lot of 6 acres, 1 house, and 1 barn valued at $100 in 1815. Some small-scale agriculture was probably practiced at the site. Prince died shortly after his property was evaluated for tax purposes, and his wife Elizabeth left her mark on a document appointing an administrator to his estate in 1816. A probate inventory enumerated the valuable items in Prince Jessup's estate in 1816.
In such a small household, it is difficult to identify individual activities, particularly those that are engendered. As mentioned earlier, eight people lived in the house in 1810. How does one understand domestic activities in such a small, shared space? This analysis benefits from the perception of the household as landscape. Following a multiscalar approach, the terms landscape, space, and built environment can be used interchangeably. Landscape represents unique and shared experiences. It is material, but it is also meaningful and complex. As such, material landscapes “shape and reflect social relations.” The household, lot, and yard space are considered meaningfully constructed space representative of the larger landscape.
Most of the recovered artifacts were used in food preparation, storage, and consumption, and were ceramic wares. One hundred seventeen ceramic vessels were identified, including 15-20 storage/dairy vessels, 22-29 kitchen vessels, 30-33 tablewares, 49 teawares, and one additional vessel. In addition, 44 buttons, 19 other personal items, 21 pieces of tobacco pipes, 12 tools, 326 architectural artifacts, and various other items, including faunal material, were collected (Figures 3-6). Most of the artifacts were recovered from within the dwelling. Outside the structure, domestic disposal was limited to specific areas of the yard.
Very little information about diet is available from the faunal remains at the site, but it seems that the site's inhabitants relied mostly on animal products and bivalve mollusks (clams and oysters). A probate inventory for the Jessup estate indicates that fourteen fowl, three geese, and one cow were present on the small farm, and only sixteen fragments of animal bone were recovered archaeologically, fifteen of which were identified as kitchen and/or unidentified bone. The paucity of animal bone may be indicative of the acidic conditions of Long Island soil, recovery methods at the site, or consumption practices by the site's inhabitants. The abundance of shell recovered from the site suggests that the household diet may have been more dependent on maritime resources, despite the distance of more than two miles from the coast. African American and Native American men were frequently employed at sea during the early historic period on Long Island, as were many white residents. Perhaps someone living at or near the Jessup household labored at sea and returned with clams and oysters for household consumption.
The assemblage of teawares at the Betsey Prince site raises questions about the social and economic status of the household (Figures 7, 8). The teaware collection began as a non-matched set of creamwares, fine red earthenwares, Chinese porcelain, Jackfield, and red stoneware, but a later preference for polychrome pearlwares was demonstrated. By the nineteenth century, tea drinking was a common practice in American households. Tea serving and consumption is associated with domesticity, and it can be representative of social interactions.
Figures 7 and 8: A collection of ceramics from the Betsey Prince site, some of which are teawares. Courtesy of the New York State Museum.
One scholar notes the use of medicinal teas by root doctors, midwives, and healers, “brewed from medicinal herbs or substances, salves, or whiskey-based ‘home made bitters’.” Tea cups, tea bowls, and other teawares were also used by African American spiritual leaders as vessels for conjure. Women in colonial New England were recognized for their knowledge of roots and herbal remedies. Perhaps knowledge of healing practices and recipes were shared among people of African, indigenous, and even European descent. If the teawares were used for a purpose other than traditional English tea drinking, then the influence of African, European, and indigenous healing practices may have been demonstrated in the alternative uses of these items.
The teawares could also demonstrate social interaction or social connectedness, and thus serve to identify a point of social congregation. This type of activity was observed through the recovery of matched tea sets at the Harriet Tubman home in Auburn, New York, where Tubman established a home for the aged and served others.
Is the predominance of teawares at the site suggestive of Betsey Jessup’s activities? Could these items be viewed as representative of gendered activities? Some scholars have warned against the association of specific items with certain genders. This trend of identifying separate “spheres of activities” is essentialist and as problematic as trying to identify ethnic markers. However, multiple lines of evidence can provide a better foundation for understanding gendered activities. In the case of the Betsey Prince site, this can be accomplished by comparing the probate inventory with the archaeological record.
The probate inventory enumerated additional material that is indicative of household activities, including a hand saw, square, four chisels, a drawing knife, five axes, a grindstone, a pounding barrel, and a ladder. Similar items recovered during excavations include a horse shoe, a horse shoe nail, a whetstone, and five chisel fragments. In the late eighteenth and early nineteenth centuries people of color in rural Long Island performed various skilled and unskilled tasks for a living. The cutting of cordwood was a common practice in the wooded interior portions of Brookhaven Town, and the material represented in Jessup’s probate inventory suggests he may have engaged in this type of labor. In addition, the four or more chisels, drawing knife, and grindstone/whetstone mentioned in the archival and archaeological records suggest skilled woodworking was an activity performed at the Jessup household or by its members.
An interesting aspect of the probate inventory is the paucity of ceramic and glass items appraised. Four bottles, two stone jugs, one jug, and a two gallon stone jug are the only vessels listed for the storage or preparation of food and/or beverage. This presents a different image of household items than what was recovered during archaeological investigation. As with most documents, the probate inventory provides insight into what was valuable to the appraisers, and may not necessarily reflect what was valued by the owner. However, it is interesting that the extensive collection of teawares was not appraised. If Betsey Jessup owned the ceramic and glass items, then it is possible that her property was excluded from the appraisal. Perhaps the appraisers distinguished between his and her property, or perhaps the appraisers did not assess the ceramic and glass wares as valuable items. Alternatively, the presentation of the estate to the appraisers may not have included the ceramic and glass items. Betsey may have kept these items concealed when her husband's estate was appraised. It is also possible that she acquired her collection of teawares after her husband's death. Many of the wares, however, would have been out of date by then. A range of factors contributes to the presence or absence of items in probate inventories, but when paired with the archaeological record, the inventory becomes far more useful for understanding the material assemblage with greater accuracy.
The comparison between the documentary and archaeological records provides subtle hints into the gendered activities at the Betsey Prince site. This is particularly useful at a site such as this one, comprised of a small space that was shared by a husband, wife, and several other individuals. But what is the purpose of separating male from female activities? This analysis is only useful when it is understood within its socio-historical context. Political, economic, racial, ethnic, and class conditions also shape each person’s experiences and lifeways, in the past and in the present. These factors should be considered in an attempt at understanding gendered lifeways at an archaeological site. After all, it is not the artifacts that define a person’s identity in the past, but rather the political, economic, and social experiences that impact a person’s identity. As such, the artifacts must be understood simply as the material residue of social experience.
The “Search” for an African American Archaeological Context
Several categories of material culture have become associated with African American archaeological sites. These items – including blue beads, modified pieces of ceramics, and cowrie shells, to name a few – have been found on free and enslaved sites throughout the Americas, and these items have been regarded by many as representative of an African presence. These artifacts are significant within the context of an archaeological site, but the search for these items is essentialist if they are considered apart from each other, or separate from their placement within the site. When found in specific contexts, these items demonstrate African American meaning. Often identified in caches or bundles, these items have been found in hidden locations of African American homes or workplaces, such as within walls, hidden beneath floor boards, within root cellars, in hearths, below doorways and windows, and in room corners.
Caches or bundles have been found on African American sites containing items such as “crystals, pebbles, gnarled roots, pieces of iron, metal, ivory, wooden rings, and crab claws, usually wrapped in leaves or a cloth.” From an assortment of narratives collected by the Works Progress Administration (WPA) that discuss conjuring, healing, and divining, two scholars have identified “pins, nails, buttons, beads, coins, white ceramics, crystals, jewelry, stones, cloth, and their contexts” as the material evidence of these activities. Not all of these items are found, and some caches contain previously unmentioned artifacts. The diversity of items recovered from similar spiritual and/or healing bundles in various locations may be reflective of the diversity of resources available, ingenuity, the particularity of the conjuring or healing activity, and the heterogeneity of African American people.
Working with the field notes, photographs, and artifacts from the Betsey Prince site, the author investigated the possibilities of identifying potential caches at the site. This research was conducted between 2006 and 2008, more than ten years following the excavation of the site. Certain locations in African American houses have been identified as sensitive zones for potentially identifying African American caches or bundles. The excavation field notes and artifact catalog for the Betsey Prince site were consulted to explore the conditions of these locations, where possible, and the potential to interpret the data as reminiscent of archaeological material recovered from other known African American contexts. Unfortunately, the locations of windows and doors could not be identified at the Betsey Prince site. However, the chimney base/sill, cellar floor, and cellar storage pit were all thoroughly investigated by the NYSM, and the data were subsequently explored for possible caches. One context seems similar to caches found in sub-floor pits at slave houses in Virginia, but problems with chronological comparisons and clear contexts of intentional placement make these resemblances tentative at best.
The search for Africanisms in African American culture is a politically charged process of attempting to identify authenticity. In his study of black music, Paul Gilroy makes the point of distinguishing the challenges of two main approaches to interpreting identity formation within the African diaspora: the essentialist approach of ethnic absolutism inherent in Africentrism and its pluralist critique. Residing at an intermediary position within this debate, he argues that “the unabashedly hybrid character of …black Atlantic cultures continually confounds any simplistic (essentialist or anti-essentialist) understanding of the relationship between racial identity and racial non-identity, between folk cultural ethnicity and pop cultural betrayal.” Without denying the importance of African symbols and meaning to the formation of black Atlantic identities and cultures, Gilroy argues that the complex experiences of movement and culture contact have contributed to a hybrid Atlantic identity that emphasizes agency in the selection of cultural features.
Theresa Singleton provides a review of the debates surrounding the interpretation of African American culture in anthropology and archaeology. She notes that although the identification of ethnic markers may provide an initial reference point, “all too often these artifacts become the primary focus of the archaeological discussion, and the interpretation of other objects is not given much consideration.” As an alternative, she suggests that researchers explore the possibilities wherein artifacts and practices may have acquired new meaning, as in appropriation.
The concept of ethnogenic bricolage presents a useful theoretical framework for understanding the processes of identity formation and the creation of an African American culture. This framework emphasizes hybridity, as well as multiple uses or meanings for artifacts, as they may be demonstrated archaeologically. For instance, an English teaware recovered from an African American or Native American context may have been traditionally interpreted as a sign of acculturation. But the item may have had other meanings or uses in different contexts or to different people. It is therefore important to consider the dynamic formation of culture and the various ways to demonstrate tradition, resistance, and appropriation materially.
Unlike acculturation and creolization studies, which are dependent on the recognition of an imbalance of power, this concept of ethnogenic bricolage emphasizes agency within a dynamic culture. Christopher Fennell states that "in a process of ethnogenic bricolage, individuals of different cultural heritages interact over time to formulate new social networks with new repertoires of key symbols, communicative domains, and cultural practices. Those new symbols are created and developed over time in large part through engagements with the multiple elements of abbreviated, multivalent symbols from each of the contributing cultural groups." This method attempts to understand why certain cultural traits are selected in the creation of the new blended culture. Its usefulness in accomplishing this goal, however, is yet to be determined.
Perhaps the hybrid composition of early America, which was marked by power shifts and struggle, is best understood as a location of dynamic and contested cultural production. It is likely that the members of the indigenous, African, and Euro-American populations on Long Island, consciously or unconsciously, participated in the formation of a blended culture. By exploring the alternative explanations for the uses of the teawares at the Betsey Prince site, this analysis has investigated hybridity and multiple meanings archaeologically.
Into the Twentieth Century: What Happened to the Betsey Prince Site?
Federal census data, historic maps, and property deeds are useful for reconstructing the demographic changes that occurred in the black neighborhood at Rocky Point in the late nineteenth and early twentieth centuries. Although the black neighborhood at Rocky Point remained small for a short time after Elizabeth Jessup and Jonah Miller died, the residents disappeared from the area by 1920. Meanwhile, black settlements were growing in other parts of northern Brookhaven Town.
The 1830 Federal census, the last census to identify Betty Prince as a head of household, lists her household alongside the households of Benjamin Davis and Jonah Miller. Jonah Miller died before the 1840 census, leaving Benjamin Davis, James Day, and nearby Lydia Phillips documented in the neighborhood. At this time Elizabeth Jessup may have lived with James Day, a seaman whose household included one woman between the ages of 55-100, according to the 1840 census.
Although the Betsey Prince site was abandoned by 1840, the African American community continued. Over the next 80 years, a small number of African American households made the Rocky Point area their home, including the households of David Smith, Abraham Tobias, Theodore Tredwell, James Douglass, and William White. Benjamin Davis lived in his home until he died (approximately 1875). As late as the 1870 census, he was listed as a 90-year-old laborer.
In 1848, Setauket's Bethel AME Church was founded approximately 12 miles to the west. For Setauket, "the church was the religious and educational center for the black community." Black people had lived in Setauket since the English settled the area. As early as 1815 the Laurel Hill Cemetery was established on Christian Avenue for people of color to bury their dead. The beginnings of the black church and community can be traced at least to that time. The establishment of a church is considered central to the construction of community identity, and served to represent the freedom of Africans in the nineteenth century. Indeed, throughout Long Island, African American settlements were often located geographically around an African Methodist Episcopal, AME Zion, or Baptist Church.
Many current black residents of Setauket can trace their ancestry to the earliest black people who made the area their home. Hart, Sells, Tobias, Brewster, and Smith are among the oldest surnames of Brookhaven Town's, and especially Setauket's, black families. Theodore Green is one such Setauket man who could trace his ancestry to Titus Sells, Rachel Holland Hart, and Abraham Jones Tobias. For members of Green's family and the extended community, family histories maintain that their ancestors were of African and Native American descent. Free and enslaved Africans intermarried with Setauket, Shinnecock, and Montaukett Indians. For these families, and many other black Long Islanders, their identities comprise African, indigenous, and even European heritages.
By the late nineteenth century, many southern African Americans had migrated to New York City, and by the early twentieth century, there was a growing population of southern African Americans in eastern Long Island. Like Africans and African Americans before them, these individuals migrated north in search of employment and better opportunities for themselves and their families. Southern black people migrated to Long Island and lived and worked alongside the descendants of the earliest African inhabitants (enslaved and free). Many attended the churches and joined the organizations that were previously established by Long Island African Americans in the nineteenth century. New churches and organizations were established as small groups of African-Americans settled throughout eastern Long Island.
The 1900 Federal census indicates an influx of German-born residents to the Rocky Point area, and most of these immigrants settled on North Country Road where the early-nineteenth-century black residents had lived. By this time, only one black resident remained in the area, James Douglass, who probably resided in the Benjamin Davis/Theodore Tredwell house. The census suggests Douglass, a 70-year-old woodcutter, lived there alone, although he had been married for 25 years.
James Douglass is replaced by William White in the 1910 Federal census. White, a 39-year-old laborer of “odd jobs,” lived with his wife Molly, four children, and his mother Ansaline Brewster (74 years old). Identified as mulatto, theirs is the last household of color recorded prior to the significant development changes to Rocky Point that would erase the memories of the black neighborhood.
By 1920, the area was primarily inhabited by German immigrants. No black or mulatto people are listed in the 1920 census or identified on a 1917 map in this area. According to the Federal census, William H. White and his family moved to Crystal Brook Hollow Road in Mount Sinai, west of Rocky Point. It is around this time that real estate companies, such as the Chauncey Real Estate Company, began to buy vast pieces of land along North Country Road in Rocky Point precisely where the eighteenth- and nineteenth-century free black people settled. These parcels were then sold to the Radio Corporation of America. Some of these deeds hint at the early-nineteenth-century presence as they mention Jonah Road as a boundary marker, or the Betsey Prince site as property belonging to the “heirs of Elizabeth Jessup.” The rural landscape of Rocky Point made it an ideal setting for the Radio Corporation of America’s installation of a field of towers over 6,200 acres.
For African Americans, life on Long Island during the twentieth century was constantly impacted by racism. In the 1910s and 1920s, the Ku Klux Klan was revived and active throughout Long Island. In Suffolk County, membership was high, and the organization was pervasive in churches, civic organizations, and politics. The political strength of the Klan has been demonstrated in patterns of community segregation, and exclusionary housing practices continued through the middle of the twentieth century. Organized efforts, aimed at “preserving existing housing patterns,” were instituted in suburban models (e.g., Levittown) and red-lining practices (i.e., limited financing to non-white homeowners). African American Long Islanders had limited residential options during this period. They purchased land where permitted and established cohesive communities, some of which remain today.
For Long Island, and other parts of the American northeast, historians and archaeologists alike are exploring the histories of people of color in the colonial and early historic periods. Historical archaeology is uniquely positioned to contribute to this research, to create a more complete image of the past. It may supplement the historical record, or it may challenge it. In many cases, historical archaeologists are interested in illuminating the histories of people who have been identified by Eric Wolf as “without” history. Through the investigation of archaeological remains, documents, and oral testimony, archaeologists can produce a narrative of lifeways at a site that may not have been achieved from the use of just one line of evidence.
For the Betsey Prince site, historical archaeology presents a more complete history of the site, and positions that site within the larger history of Rocky Point. Although the African American history of Rocky Point remains buried in the landscape, an investigation into archived documents, Federal census data, and archaeology has provided the basis for understanding the material dimensions of the site’s inhabitants. Prince Jessup, who lived at the site, was a farmer and/or laborer who died around 1815. The probate inventory from his estate indicated that he owned a collection of tools for woodworking and farming, furniture, domestic items, and farm animals. The archaeological record, however, presented evidence of an affinity for teawares, which were noticeably absent from the probate inventory. Together, these resources provide a more complete picture of the assemblage at the Betsey Prince site, and are suggestive of gendered activities.
The documentary record provided the evidence for identifying the occupants of the Betsey Prince site as black, and for establishing the social, historical, and political conditions of the time. With this context secured, there was little need to “authenticate” the site as African American archaeologically. An African American archaeological context was explored, and possibilities for distinctly African American material culture were identified, but problems with chronology and the identification of intentional placement makes comparisons between this and other sites tentative. In future work at African American archaeological sites, special attention should be paid to sensitive zones within structures during excavation to determine intentional placement.
Archived documents and Federal census data were also used to understand settlement patterning, household composition, and demographics within and around the historic black enclave in Rocky Point. The archaeological record indicates abandonment of the Betsey Prince site by the mid-nineteenth century, and the documentary record is useful in exploring why the site and the black settlement disappeared, providing the evidence for changes in demographics, settlement patterns, and land use into the twentieth century.
On Long Island, as in other parts of the country, current demographics, settlement patterns, and levels of wealth can be understood as the product of history. As such, historical archaeology will play an important role in expanding our knowledge of these processes, while highlighting culture contact, African American history, and the creation of early American societies.
Many thanks are owed to Mark LoRusso and Andrea Lain at the New York State Museum. Mark LoRusso was the Principal Investigator of the site in the 1990s and graciously shared his notes and memories of the excavations. Andrea Lain facilitated access to the collections for my master’s thesis research.
Mark LoRusso, A Cultural Resource Survey Report for Data Recovery Investigations of the Betsey Prince Site (NYSM #10625) and the Prince-Miller Site (NYSM #10626) PIN 0327.67.121 and PIN 0327.78.101, NY 25A, Town of Brookhaven, Suffolk County, New York for the New York State Department of Transportation (Anthropological Survey, New York State Museum, 1998); Mark LoRusso, “The Betsey Prince Site: An Early Free Black Domestic Site on Long Island,” in Nineteenth- and Early Twentieth-Century Domestic Site Archaeology in New York State (New York State Museum Bulletin 495, 2000), 195.
LoRusso, “The Betsey Prince site,” 199.
Allison Manfra, “Race and Ethnicity in Early America Reflected through Evidence from the Betsey Prince Archaeological Site, Long Island, New York” (master’s paper, Syracuse University, 2008); U.S. Bureau of the Census, Population Schedules, 1790-1850; Land deeds (Suffolk County Clerk, Suffolk County Center, Riverhead, New York [SCC] 1818: Deed Liber [DL]84:858, 1833:DL S:265); Brookhaven Tax Receipt for Prince (William Floyd Estate, Mastic, New York, 1815); Probate Inventory for the Prince Jessup Estate (Surrogate’s Court, Suffolk County Center, Riverhead, New York, 1816: Liber D:97).
Manfra, 19.
LoRusso, “The Betsey Prince site,” 195.
Ira Berlin and Leslie M. Harris, Slavery in New York (New York, NY: Plenum Press, 2005), 16-17; Graham Russell Hodges, Root and Branch: African Americans in New York and East Jersey, 1613-1863 (Chapel Hill, NC: University of North Carolina, 1999), 25; Edna Greene Medford, The New York African Burial History Final Report, prepared by Howard University for the General Services Administration Northeastern and Caribbean Region, November 2004, 209-210.
Manfra, 25.
James Oliver Horton, Free People of Color: Inside the African American Community (Washington, DC: Smithsonian Institution Press, 1993), 155.
Peter Ross, A History of Long Island from the Earliest Settlement to the Present Time (New York, NY: Lewis Publishing, 1902), 121.
Helen Wortis, “Blacks on Long Island: Population Growth in the Colonial Period,” Journal of Long Island History 9 (1974); Richard Shannon Moss, Slavery on Long Island: a Study in Local Institutional and Early African-American Communal Life (New York, NY: Garland Publishing, 1993); Lynda R. Day, Making a Way to Freedom: A History of African-Americans on Long Island (Interlaken, NY: Empire State Books, 1997).
Medford, 60-61.
Ibid., 90-93.
Ibid., 83-89.
Day, 89-91; John Strong, The Algonquin Peoples of Long Island from Earliest Times to 1700 (Interlaken, NY: Empire State Books, 1997), 281-282; Theodore Green, personal communication, ca. 2006.
Anne Hartell, “Slavery on Long Island,” Nassau County Historical Journal (Fall 1943): 56; Grania Bolton Marcus, “Discovering the African American Experience on Long Island,” in Exploring African-American History on Long Island and Beyond (Hempstead, NY: Long Island Studies Institute, 1995).
James Moore, “Slavery in the 18th century New York Hinterland: the Spatial Dimension.” Paper presented at the 38th Annual Conference on Historical and Underwater Archaeology, York, England, January 2005.
Robert K. Fitts, “The Landscapes of Northern Bondage,” Historical Archaeology 30, No. 2 (1996): 54.
Records of the Town of Brookhaven, Suffolk County, New York (Port Jefferson, NY: Times Steam Job, 1888).
Mildred H. Gillie, Kate W. Strong, Margaret S. Davis, and Osborn Shaw, Historical Sketches of Settlements and Villages of Northern Brookhaven Town, 1655-1955 (Bellport, NY: U. S. Press, 1955); Dagmar von Bernewitz, Rocky Point: A Historical Perspective, (Long Island, NY: Dag and Rob, 1997).
Nineteenth-century settlement patterns can be seen on the following maps: F. H. Gerdes, North Side of Long Island from Mount Misery to Friar’s Head (United States Coastal and Geodetic Survey, Washington, DC: 1838); J. Chace, Map of Suffolk County, Long Island, New York (John Duglas, Pennsylvania: 1858); F. W. Beers, Atlas of Long Island, New York (F. W. Beers, Comstock, and Cline, New York: 1873).
LoRusso, “The Betsey Prince site,” 199.
Manfra, 9, 18.
Records of the Town of Brookhaven, 150.
The Mount Sinai Congregational Church was established in 1789.
Sarah and Jonah Miller, Mineus Lyman, and Betty Sells are listed as members of the Mount Sinai Congregational Church, (Mount Sinai Congregational Church membership records, Mount Sinai Congregational Church, Mount Sinai, New York); account books list consumption and exchanges in Suffolk County (Suffolk County Historical Society, Riverhead, New York); Titus Sells and Black Benjamin were the parents of children enrolled in Miller Place Schools, c. 1830 (U.S. Bureau of the Census, Population Schedules, 1830). Some of these references are discussed in Manfra 2008, and some were reprinted in Grania Bolton Marcus’s Discovering the African-American Experience in Suffolk County, 1620-1860 (Mattituck, NY: Amereon House, 1995).
Calendar of Historical Manuscripts in the Office of the Secretary of State, Albany, New York (Ridgewood, NJ: Gregg Press, 1968).
Records of the Town of Brookhaven, Suffolk County, New York, Book C (New York, NY: Derrydale Press, 1931), 43; Brookhaven Assessment List, 1799, Suffolk County Historical Society, Riverhead, New York; Brookhaven Tax List, 1810, Suffolk County Historical Society, Riverhead, New York.
LoRusso, A Cultural Resources Survey, 70.
The archaeological data from the Betsey Prince site was re-examined for a 2008 MA paper in Historical Archaeology at Syracuse University. Further archival research was conducted for this project, and during the process, the site was identified as occupied by Prince and Elizabeth Jessup. Allison Joyce Manfra, “Race and Ethnicity in Early America Reflected through Evidence from the Betsey Prince Archaeological Site, Long Island, New York” (master’s paper, Syracuse University, 2008).
Brookhaven Tax Assessment, 1815, William Floyd Estate, Mastic, New York.
Deborah L. Rotman, “Introduction: Exploring Shared Spaces and Divided Places on the American Historical Landscape,” in Shared Spaces and Divided Places: Material Dimensions of Gender Relations and the American Historical Landscape, (Knoxville, TN: The University of Tennessee Press, 2003), 3.
James Delle, An Archaeology of Social Space: Analyzing Coffee Plantations in Jamaica’s Blue Ridge Mountains, (New York, NY: Plenum Press, 1998), 14, quoted in Rotman, 4.
LoRusso, A Cultural Resources Survey, 43.
LoRusso, A Cultural Resources Survey, 78.
Rodris Roth, “Tea-drinking in Eighteenth Century America: Its Etiquette and Equipage,” in Material Life in America, 1600-1860 (Boston, MA: Northeastern University Press, 1988).
Diana diZerega Wall, “Family Meals and Evening Parties: Constructing Domesticity in Nineteenth-Century Middle-Class New York,” in Lines That Divide: Historical Archaeologies of Race, Class, and Gender (Knoxville, TN: University of Tennessee Press, 2000).
Douglas Armstrong, personal communication, 2008.
Laurie A. Wilkie, “Secret and Sacred: Contextualizing the Artifacts of African American Magic and Religion,” Historical Archaeology 31, no. 4 (1997), 85.
Newbell Niles Puckett, Folk Beliefs of the Southern Negro (Montclair, NJ: Patterson Smith, 1968); Wilkie, 85.
Jane C. Beck, “Traditional Folk Medicine in Vermont,” Annual Proceedings of the Dublin Seminar for New England Folklife 15 (1992), 39.
Douglas Armstrong, personal communication, 2008.
Elizabeth Brumfiel, “Methods in Feminist and Gender Archaeology: A Feeling for Difference– and Likeness,” in Handbook of Gender in Archaeology (Oxford, MA: AltaMira Press, 2006), 42.
Moss, xiv.
Mark P. Leone, Cheryl Janifer LaRoche, and Jennifer J. Babiarz, “The Archaeology of Black Americans in Recent Times,” Annual Review of Anthropology 34, 582; for a review of African American archaeology, see Charles E. Orser, “Archaeology of the African Diaspora,” Annual Review of Anthropology 27, 63-82 and Theresa Singleton, “Archaeology of Slavery in North America,” Annual Review of Anthropology 24, 119-40.
Mark P. Leone and Gladys-Marie Fry, “1999 Conjuring in the Big House Kitchen: An Interpretation of African American Belief Systems Based on the Uses of Archaeology and Folklore Sources,” Journal of American Folklore 112, no. 445 (1999), 372-403.
Leone and Fry, 379.
Leone and Fry, 382.
Leone and Fry, 376-377.
Manfra, 64-65, 68-73.
Ibid., 72-73, 82.
Paul Gilroy, The Black Atlantic: Modernity and Double Consciousness (Cambridge, MA: Harvard University, 1993).
Gilroy, 99.
Theresa A. Singleton, “African Diaspora Archaeology in Dialogue,” in Afro-Atlantic Dialogues: Anthropology in the Diaspora (Santa Fe, NM: School of American Research Press, 2006), 262.
Christopher C. Fennell, Crossroads and Cosmologies: Diasporas and Ethnogenesis in the New World (Gainesville, FL: University Press of Florida, 2007).
Fennell, 129-30.
United States Census Bureau population schedules for 1850-80, 1900, 1910.
Theodore A. Green, “The Hart-Sells Connection,” in William Sydney Mount: Family, Friends, and Ideas: Essays by Members of the William Sydney Mount Project, Three Village Historical Society (Setauket, NY: Three Village Historical Society, 1999), 67.
Marcus, 178.
Day, 54.
Theodore Green, personal communication, ca. 2006.
Ralph Watkins, “A Survey of the African American Presence in the History of the Downstate New York Area,” in Afro-Americans in New York Life and History 15, no. 1 (1991), 53-79.
Day, 109.
E. Belcher Hyde, Atlas of a Part of Suffolk County, Long Island, New York: North Side- Sound Shore (Brooklyn, NY: E. Belcher Hyde, 1917).
Natalie Aurucci Stiefel, Looking Back at Rocky Point: In the Shadow of the Radio Towers, Volume 1 - 20th Century (Rocky Point, NY: Amron Copy and Printing Center, 2003).
Property deed, SCC 1898: DL 566:468.
Property deed, SCC 1912: DL 829:153.
Steve Wick, Heaven and Earth: the Last Farmers of the North Fork (New York, NY: St. Martin’s Press, 1996), 90.
Wick, 90; Jane S. Gombieski, “Klokards, Kleagles, Kludds, and Kluxers: The Ku Klux Klan in Suffolk County, 1915-1928, Part One,” Long Island Historical Journal 6, no. 1 (1993), 41-62.
Bernie Bookbinder, Long Island: People and Places, Past and Present (New York, NY: Harry N. Abrams, 1998), 192; The Klan’s role in community segregation in Freeport is mentioned in Rosalyn Baxandall and Elizabeth Ewen’s Picture Windows (New York, NY: Basic Books, 2000), 30.
About 10 years of multidisciplinary research was conducted at the archaeological site of Sylvester Manor on Shelter Island, adding to our knowledge of culture contact and labor on a northern plantation. The results were published in The Historical Archaeology of Sylvester Manor, a special issue of Northeast Historical Archaeology 36 (2007). More recent research by the Hofstra University’s Center for Public Archaeology at King Manor in Queens and Lloyd Manor in Lloyd Harbor will also add to our understanding of labor and culture contact in New York.
Eric R. Wolf, Europe and the People Without History (Berkeley, CA: University of California Press, 1997).
Brother Frederick Douglass was one of the greatest and most influential African Americans of the nineteenth century (and of American history). He stood alone as not only a heroic abolitionist, but an audacious freedom fighter in general. From his birth in Maryland in 1818 to his passing in 1895 in Washington, D.C., Frederick Douglass personified excellence, courage, strength, and intellectual greatness. Traveling the world to oppose the abominable institution of slavery was a defining part of his life. His voice was powerful and stirred people to advance the creed of equality. His autobiography, Narrative of the Life of Frederick Douglass, an American Slave (1845), and his second book, My Bondage and My Freedom (1855), outlined the extensive scope of his life story. He also worked relentlessly during the Civil War to defeat the nefarious Confederacy, which brutalized and enslaved black people and whose own documents condoned slavery. That is why he organized speeches and rallies to encourage black Americans to join the Union Army in fighting for the noble cause of freedom, and he lived to see the Union victorious. After the Civil War, Frederick Douglass defended the rights not only of black people, but of women, immigrants, and the oppressed in general. Land reform, the abolition of the death penalty, peace, and the other causes that he fought for galvanized future generations. He was a leader, an ambassador, and an early civil rights advocate. Frederick Douglass was in fact totally American: he was honest enough to expose the hypocrisies of America while simultaneously being inspired to seek a better America. He drew inspiration from many heroes, including Anna Douglass, a trailblazing black woman whose insights and magnanimous courage ought never to be forgotten. Frederick Douglass’ contributions to our world are clear and transcendent.
Now, it is time to celebrate 200 years after his birth and to be inspired to fight for the justice that he continuously advocated in an interminable fashion. “Agitate” was his call for change, and we must always agitate for peace, for justice, for righteousness, and for human freedom unequivocally.
Frederick Douglass was born in February 1818 on the Eastern Shore of the Chesapeake Bay in Talbot County, Maryland. He was born a slave. His mother, Harriet Bailey, gave him the name Frederick Augustus Washington Bailey; only later did he take the surname Douglass. Some believe that his grandmother’s cabin east of Tappers Corner was the place of his birth. He later wrote of his mother getting him to sleep. Back then, it was common for enslaved families to be split apart, so Frederick Douglass experienced an early separation from his mother. Frederick then lived with his maternal grandmother, Betty Bailey. By 6 years old, he was separated from his grandmother and moved to the Wye House plantation, where Aaron Anthony worked as an overseer. Douglass’ mother died when he was only 10. When Anthony died as well, Douglass was given to Lucretia Auld, the wife of Thomas Auld, who sent him to serve Thomas’ brother Hugh Auld in Baltimore. By 12 years old, he was being taught the alphabet by Sophia, Hugh Auld’s wife. Douglass said that Sophia was a kind and tender-hearted woman who treated him “as she supposed one human being ought to treat another.” Hugh disapproved of the tutoring, because he believed that literacy would encourage slaves to desire freedom. So Sophia stopped teaching Frederick Douglass; she even snatched a newspaper away from him, having come to believe that education and slavery were incompatible.
Douglass, according to his autobiography, learned to read from white children in the neighborhood and by observing the writings of the men with whom he worked. Douglass continued, secretly, to teach himself how to read and write. He later often said, "Knowledge is the pathway from slavery to freedom." Frederick Douglass soon read newspapers, pamphlets, political materials, and books of many types, and was inspired to oppose slavery. He said that reading the anthology The Columbian Orator at the age of 12 defined his views on human rights and freedom. The Columbian Orator, first published in 1797, contained essays, dialogues, and speeches; it was a classroom reader used to help students learn reading and grammar. William Freeland was the person to whom Douglass was sent next. On Freeland’s plantation, Douglass taught fellow slaves to read the New Testament at a weekly Sunday school. Word spread, and more than 40 slaves attended the reading lessons. This went on for six months until other plantation owners, armed with clubs and stones, broke up the congregation permanently one Sunday. In 1833, Thomas Auld took Douglass back from Hugh (as a means to punish Hugh, according to Douglass’s account). Thomas Auld sent Douglass to work for Edward Covey, a poor farmer known for brutally whipping the enslaved people in his charge. Covey whipped Douglass regularly, nearly breaking him psychologically. The 16-year-old Frederick Douglass then fought back in self-defense and won the physical confrontation against Covey. Covey never tried to beat him again.
Frederick Douglass tried to escape multiple times. He tried to escape from Freeland and was caught; he escaped from Covey and was caught again. This was in 1836. In 1837, Frederick Douglass met and fell in love with Anna Murray, a heroic free black woman who lived in Baltimore and was about five years older than he was. On September 3, 1838, Frederick Douglass escaped successfully to freedom. First, he boarded a train of the newly merged Philadelphia, Wilmington and Baltimore (P.W.&B.) Railroad heading toward the Northern cities. He boarded a short distance east of the previous temporary P.W.&B. train depot in the recently developed neighborhood between the modern neighborhoods of Harbor East and Little Italy. The depot was located at President and Fleet streets, east of the Basin of the Baltimore harbor on the Northwest Branch of the Patapsco River. The depot was later replaced by the historic President Street Station, constructed in 1849-1850, which was also noted as a site of other slave escapes along one of the many routes of the famous “Underground Railroad” and during the Civil War. Douglass then reached Havre de Grace in Harford County, Maryland, in the northeast corner of the state, on the southwest shore of the Susquehanna River, which flows into the Chesapeake Bay.
This was some 20 miles from the free state of Pennsylvania, but to him it was easier to go first through Delaware, another slave state. Douglass wore a sailor’s uniform given to him by Anna Murray, who also gave part of her savings to cover his travel costs. He carried identification papers and protection papers that he had obtained from a free black seaman. Douglass crossed the wide Susquehanna River by the railroad’s steam-ferry at Havre de Grace to Perryville on the opposite shore in Cecil County, then continued by train across state lines into Wilmington, Delaware, a large port at the head of Delaware Bay. From there, because the rail line was not yet completed, he went by steamboat along the Delaware River further northeast to the city of Philadelphia, Pennsylvania, an anti-slavery stronghold filled with free black people and Quakers. Frederick Douglass then traveled on to the safe house of the noted abolitionist David Ruggles in New York City. Douglass was filled with joy when he arrived in New York City, feeling that he had entered a new world as a free black man. He sent for Anna Murray to follow him north, and she brought the necessary basics for them to set up a home. They were married on September 15, 1838, by a black Presbyterian minister, just 11 days after Douglass arrived in New York City. At first, they adopted Johnson as their married name to divert attention.
Life as an Abolitionist
Frederick Douglass and Anna Douglass settled in New Bedford, Massachusetts, where they stayed with Nathan and Mary Johnson. Afterwards, the couple adopted Douglass as their married name; Frederick had read “The Lady of the Lake” and was inspired to take the name of the poem’s main character. Finding the local white Methodist church segregated, he was disappointed, so he joined the African Methodist Episcopal Zion Church, an independent black denomination first formed in New York City. Sojourner Truth and Harriet Tubman were members of that church too. By 1839, Frederick Douglass was a licensed preacher, and he showed great oratorical skill. He held many positions in the church, including steward, Sunday school superintendent, and sexton. In 1840, he gave a speech in Elmira, New York, a station on the Underground Railroad (a black congregation was later founded there, and by 1940 it was the region’s largest church). Frederick Douglass worked hard. He joined many organizations in New Bedford and attended numerous abolitionist meetings. He subscribed to William Lloyd Garrison’s weekly journal, The Liberator, and later worked with Garrison, the famous abolitionist who promoted The Liberator nationwide. Garrison agreed with Douglass’ anti-colonization views in 1839. In 1841, Douglass heard Garrison speak at a meeting of the Bristol Anti-Slavery Society; Douglass told his own story and was encouraged to become an anti-slavery lecturer, so he went on to speak nationwide and worldwide. Days later, he spoke at the Massachusetts Anti-Slavery Society’s annual convention in Nantucket. He was 23 years old, and he gave an eloquent speech about his life as a slave.
In 1843, he joined the American Anti-Slavery Society’s Hundred Conventions project, a six-month tour of the East and the Midwest alongside other speakers. During the tour, slavery supporters constantly accosted Douglass. At a lecture in Pendleton, Indiana, an angry mob assaulted him; the Hardys, a local Quaker family, rescued him. The mob broke his hand, which never healed properly and caused him pain for the rest of his life. A stone marker in Falls Park in the Pendleton Historic District describes the event. Frederick Douglass wrote his first autobiography, “Narrative of the Life of Frederick Douglass, an American Slave,” published in 1845. Many racists of the time did not believe that a black man could produce such eloquent literature, but they were wrong. The book was positively reviewed and became an immediate bestseller in America: within three years it was reprinted nine times, with 11,000 copies circulating in the United States, and it was translated into French and Dutch for publication in Europe. He published three versions of his autobiography during his lifetime (and revised the third), each expanding on the previous one. The 1845 Narrative was his biggest seller. Frederick Douglass gained his legal freedom in 1846. In 1855, Douglass published My Bondage and My Freedom. In 1881, after the Civil War, Douglass published Life and Times of Frederick Douglass, which he revised in 1892.
Frederick Douglass traveled to Ireland and Great Britain to oppose slavery, and he wrote about being treated much better there than in America. He sailed on the Cambria to Liverpool on August 16, 1845, just as the Irish Potato Famine was beginning. In Ireland he befriended the Irish nationalist Daniel O’Connell, who was a great inspiration to him. He spoke in churches and chapels across Ireland and Britain, and he dined and traveled where he pleased, without segregation, though it is important to note that the British Empire practiced racism worldwide. In 1846 he met Thomas Clarkson, one of the last living British abolitionists who had persuaded Parliament to abolish slavery in Great Britain’s colonies. Supporters such as Anna Richardson and her sister-in-law Ellen of Newcastle upon Tyne raised funds to buy Frederick Douglass his freedom. Some of his supporters wanted him to stay in England, but his wife was in Massachusetts and three million of his black brethren remained in bondage in America. So he returned to America in the spring of 1847, at about 29 years old, soon after the death of Daniel O’Connell.
Historical plaques commemorating Douglass’ visits were unveiled in the 21st century in Cork and Waterford, Ireland, and in London. The third plaque adorns Nell Gwynn House, South Kensington, in London, where Douglass stayed with the British abolitionist George Thompson. Back in America by 1847, Frederick Douglass published his first abolitionist newspaper, the North Star, from the basement of the Memorial AME Zion Church in Rochester, New York. The North Star's motto was "Right is of no Sex – Truth is of no Color – God is the Father of us all, and we are all brethren." The AME Church and the North Star vigorously opposed the mostly white American Colonization Society and its proposal to send black people back to Africa. This and Douglass's later abolitionist newspapers were mainly funded by English supporters, who gave Douglass five hundred pounds to use as he chose. Douglass soon split with Garrison over an ideological disagreement: Garrison viewed the Constitution as pro-slavery (which it was; Garrison publicly burned copies of it), so he wanted to disengage from politics, even favoring separation from the slaveholding states. Douglass wanted to engage in politics in order to abolish slavery in America completely.
Frederick Douglass wanted to abolish the institution of slavery and make that change within the Constitution. In September 1848, Douglass sent an open letter to Thomas Auld, who had once owned him, criticizing him for his conduct. In one passage, he asked Auld how he would feel if members of his own family were enslaved. Of course, slavery is evil. Frederick Douglass also supported women’s rights. He was the only African American to attend the Seneca Falls Convention, the first women’s rights convention, in upstate New York. Elizabeth Cady Stanton led the assembly in promoting women’s suffrage, or giving women the right to vote. James and Lucretia Mott opposed the suffrage resolution, but Douglass supported it, saying that he could not accept the right to vote as a black man if women could not also claim that right. He suggested that the world would be a better place if women were involved in the political sphere: “…In this denial of the right to participate in government, not merely the degradation of woman and the perpetuation of a great injustice happens, but the maiming and repudiation of one-half of the moral and intellectual power of the government of the world…” After these words, the attendees passed the resolution to fight for women’s right to vote. His opinion as the prominent editor of the paper likely carried weight, and he stated the position of the North Star explicitly: "[w]e hold woman to be justly entitled to all we claim for man." This letter, written a week after the convention, reaffirmed the first part of the paper's slogan, "right is of no sex."
Frederick Douglass supported the 15th Amendment to give black men the right to vote, but Stanton opposed it since it didn't give women the right to vote. Douglass argued that in the political climate of the late 1800's it would be impossible to win the vote for black men and women at the same time. So, Douglass wanted to pass the 15th Amendment first and then fight for women's suffrage afterwards, forming truly universal suffrage. Stanton wanted to attach women's suffrage to that of black men so that her cause would be carried to success. Douglass thought such a strategy was too risky, that there was barely enough support for black men's suffrage. He feared that linking the cause of women's suffrage to that of black men would result in failure for both. Douglass argued that white women, already empowered by their social connections to fathers, husbands, and brothers, at least vicariously had the vote. African-American women, he believed, would have the same degree of empowerment as white women once African-American men had the vote. Douglass assured the American women that at no time had he ever argued against women's right to vote.
More on the Antebellum Period
Meanwhile, in 1851, Douglass merged the North Star with Gerrit Smith's Liberty Party Paper to form Frederick Douglass' Paper, which was published until 1860. In his famous speech of July 5, 1852, he criticized the hypocrisy of a nation that celebrated freedom on the 4th of July while practicing slavery. It was an address to the ladies of the Rochester Anti-Slavery Sewing Society. This speech eventually became known as "What to the slave is the 4th of July?"; one biographer called it "perhaps the greatest antislavery oration ever given." In 1853, he was a prominent attendee of the radical abolitionist National African American Convention in Rochester, NY. His was one of five names attached to the address of the convention to the people of the United States published under the title, The Claims of Our Common Cause, along with Amos Noë Freeman, James Monroe Whitfield, Henry O. Wagoner, and Vashon. He promoted education for African Americans and fought for school desegregation in the North. During the 1850's, he wanted schools to be open to all children regardless of color. He met the abolitionists John Brown and George DeBaptiste on March 12, 1859, at William Webb's house in Detroit. Douglass met Brown again later. Douglass agreed with Brown on fighting slavery, but disagreed with the raid on Harpers Ferry, since in his view it would enrage the American public. John Brown carried out the raid anyway and was martyred for the cause of human freedom. After the raid, Douglass fled for a time to Canada, fearing guilt by association as well as arrest as a co-conspirator. Years later, Douglass shared a stage in Harpers Ferry with Andrew Hunter, the prosecutor who secured Brown's conviction and execution.
In March 1860, while Douglass was once again traveling in England, his youngest daughter Annie died in Rochester, New York. Douglass sailed back from England the following month, traveling through Canada to avoid detection. He was photographed constantly, and he wanted his image to refute the stereotypes of the blackface minstrelsy of that era. He used religious imagery to promote freedom, and he converted to Christianity. He publicly opposed preachers who supported slavery. Between 1846 and 1848, he called on churches in the United Kingdom not to support any American church that permitted slavery. Ministers in Belfast refused to admit slaveholders into their churches. He continued to criticize Thomas Auld for his brutality. Frederick Douglass's theological views in essence were the ancestor of modern non-denominational liberation theology, since he overtly promoted liberation using spirituality as a vehicle. The fireplace mantle at his home features busts of two of his favorite philosophers, David Friedrich Strauss, author of "The Life of Jesus," and Ludwig Feuerbach, author of "The Essence of Christianity." In addition to several Bibles and books about various religions in the library, images of angels and Jesus are displayed, as well as interior and exterior photographs of Washington's Metropolitan African Methodist Episcopal Church. Throughout his life, Douglass had linked that individual experience with social reform. Like other Christian abolitionists, he followed practices such as abstaining from tobacco, alcohol and other substances that he believed corrupted the body and soul.
The Civil War
By the era of the Civil War, Frederick Douglass was one of the most famous black men in America. He used speeches to promote black freedom and women's rights, speaking his mind eloquently to crowds in many locations. He was loved by people in England and Ireland. Douglass and other abolitionists wanted to use the Civil War to end slavery once and for all. He fought for African Americans to serve in the Union army to fight for their freedom, and he publicized these views in newspapers and many speeches. In August 1861, Frederick Douglass gave an account of the First Battle of Bull Run and said that some black people were already in Confederate ranks. Weeks later, Douglass brought up the subject again, quoting a witness who saw black Confederates with muskets on their shoulders and bullets in their pockets. Black Confederates are traitors to black people, period. Douglass conferred with President Abraham Lincoln in 1863 on the treatment of black soldiers, and with President Andrew Johnson on the subject of black suffrage. President Lincoln's Emancipation Proclamation took effect on January 1, 1863. It declared the freedom of every slave in Confederate-held territory. Slaves in Union-held areas and in Northern states were freed by the adoption of the 13th Amendment on December 6, 1865. Douglass described the spirit of those awaiting the proclamation: "We were waiting and listening as for a bolt from the sky ... we were watching ... by the dim light of the stars for the dawn of a new day ... we were longing for the answer to the agonizing prayers of centuries."
During the United States presidential election of 1864, Douglass supported John C. Frémont, the candidate of the abolitionist Radical Democracy Party. Douglass was disappointed that President Lincoln didn't publicly endorse suffrage, or voting rights, for black freedmen. Douglass said that since African American men were fighting for the Union in the American Civil War, they deserved the right to vote. Frederick Douglass is completely right. The Civil War continued and Douglass fought for equality for our people. He made plans with Lincoln to move liberated slaves out of the South. During the war, Frederick Douglass was a recruiter for the 54th Massachusetts Infantry Regiment. His son, Charles Douglass, joined the 54th Massachusetts Regiment, but was ill for much of his service. Lewis Douglass fought at the Battle of Fort Wagner. Another son, Frederick Douglass Jr., also served as a recruiter.
After the Civil War, the ratification of the 13th Amendment came about in 1865, outlawing slavery. The 14th Amendment provided for citizenship and equal protection under the law. The 15th Amendment protected all citizens from being discriminated against in voting because of race. On April 14, 1876, Frederick Douglass delivered the keynote speech at the unveiling of the Emancipation Memorial in Washington's Lincoln Park. In that speech, Douglass spoke honestly about Abraham Lincoln, addressing both Lincoln's positive and negative attributes. He called Lincoln "the white man's president." Douglass criticized Lincoln's tardiness in joining the cause of emancipation, noting that Lincoln initially opposed the expansion of slavery but did not support its elimination. But Douglass also asked, "Can any colored man, or any white man friendly to the freedom of all men, ever forget the night which followed the first day of January 1863, when the world was to see if Abraham Lincoln would prove to be as good as his word?" Douglass also said: "Though Mr. Lincoln shared the prejudices of his white fellow-countrymen against the Negro, it is hardly necessary to say that in his heart of hearts he loathed and hated slavery...."
The crowd, roused by his speech, gave Douglass a standing ovation. Lincoln's widow Mary Lincoln supposedly gave Lincoln's favorite walking-stick to Douglass in appreciation. That walking-stick still rests in Douglass's final residence, "Cedar Hill", now preserved as the Frederick Douglass National Historic Site.
The Reconstruction Era
After the Civil War, Frederick Douglass continued to fight for equality for African Americans and women. He was highly prominent in social activism and received many political appointments. One was as president of the Reconstruction-era Freedman's Savings Bank, which was used to help newly freed African Americans. He was also chargé d'affaires for the Dominican Republic, but resigned that position after two years because of disagreements with U.S. government policy. During Reconstruction, white racist insurgents arose in the South after the war. They organized first in secret vigilante groups like the Ku Klux Klan. Armed insurgency took different forms. There were powerful paramilitary groups like the White League and the Red Shirts, both active during the 1870's in the Deep South. They operated as "the military arm of the Democratic Party," attacking Republican officeholders and disrupting elections. Back then, the Democrats were more reactionary and the Republicans were more progressive; today, it is the opposite. More than 10 years after the end of the Civil War, the Democrats regained political power in every state of the former Confederacy and began to reassert white supremacy. They enforced this by a combination of violence, late 19th-century laws imposing segregation and a concerted effort to disfranchise African Americans. New labor and criminal laws also limited their freedom.
Frederick Douglass responded by supporting the presidential campaign of Ulysses S. Grant in 1868. In 1870, Douglass started his last newspaper, the New National Era, attempting to hold his country to its commitment to equality. President Grant sent a Congressionally sponsored commission, accompanied by Douglass, on a mission to the West Indies to investigate whether the annexation of Santo Domingo would be good for the United States. Grant believed annexation would help relieve the violent situation in the South by allowing African Americans their own state. Douglass and the commission favored annexation; however, Congress remained opposed. Douglass criticized Senator Charles Sumner, who opposed annexation, stating that if Sumner continued to oppose annexation he would "regard him as the worst foe the colored race has on this continent." Obviously, I believe in the independence of Santo Domingo (without colonialism and without imperialism), not annexation into the American Empire. In 1872, Frederick Douglass became the first African American nominated for Vice President of the United States, as Victoria Woodhull's running mate on the Equal Rights Party ticket. He was nominated without his knowledge. Douglass neither campaigned for the ticket nor acknowledged that he had been nominated. In that year, he was presidential elector at large for the State of New York, and took that state's votes to Washington, D.C.
His home on South Avenue in Rochester, New York, later burned down; arson was suspected, and a complete issue of the North Star was lost. Douglass then moved to Washington, D.C. During the Reconstruction era, Frederick Douglass spoke nationwide. He promoted work and the exercise of voting rights (since we must vote for the right person too). For over 25 years after the Civil War, he spoke to counter the racism that was prevalent in unions. In a speech delivered on November 15, 1867, Douglass said: "A man's rights rest in three boxes. The ballot box, jury box and the cartridge box. Let no man be kept from the ballot box because of his color. Let no woman be kept from the ballot box because of her sex." Douglass spoke at many colleges around the country, including Bates College in Lewiston, Maine, in 1873.
More on his life
Frederick Douglass and Anna Douglass had five children: Rosetta Douglass, Lewis Henry Douglass, Frederick Douglass Jr., Charles Remond Douglass, and Annie Douglass (who died at the age of 10). Charles and Rosetta helped produce his newspapers. Anna Douglass was a lifelong advocate for freedom for black people, and she supported her husband's public work. Douglass worked with Julia Griffiths and Ottilie Assing as allies. Anna Douglass passed away in 1882. In 1884, Frederick Douglass married again, to the suffragist and abolitionist Helen Pitts. She was from Honeoye, New York. Pitts was the daughter of Gideon Pitts Jr., an abolitionist colleague and friend of Douglass. A graduate of Mount Holyoke College (then called Mount Holyoke Female Seminary), Pitts worked on a radical feminist publication named Alpha while living in Washington, D.C. The marriage provoked a storm of controversy because Pitts was white. Pitts' family stopped talking to her. Elizabeth Cady Stanton congratulated the couple.
His Later Years in Washington, D.C.
He spent his later years in Washington, D.C. The Freedman's Savings Bank went bankrupt on June 29, 1874, only a few months after Douglass became its president in late March. During this same economic crisis, his final newspaper, The New National Era, failed in September. When Republican Rutherford B. Hayes was President, Douglass accepted an appointment as United States Marshal for the District of Columbia, which helped assure his family's financial security. In 1877, Douglass visited Thomas Auld, who was on his deathbed, and the two men reconciled. Douglass had earlier met Auld's daughter, Amanda Auld Sears, who had requested the meeting years prior. She had subsequently attended and cheered one of Douglass's speeches, and her father complimented her for reaching out to Douglass. The visit brought Douglass closure, though many criticized it. In 1877, Douglass bought the house which was to be the family's final home in Washington, D.C., on a hill above the Anacostia River. He and Anna named it Cedar Hill. They expanded the house from 14 to 21 rooms, including a china closet. In 1878, Douglass purchased adjoining lots and expanded the property to 15 acres (61,000 m²). The home is now preserved as the Frederick Douglass National Historic Site. The final edition of his autobiography, The Life and Times of Frederick Douglass, was published in 1881. He also received another political appointment, as Recorder of Deeds for the District of Columbia. Frederick Douglass
worked with the activist Ida B. Wells too. Ida B. Wells was a great hero who fought lynching and believed in black human justice. Douglass continued to speak, and he went abroad. He traveled with Helen to England, Ireland, France, Italy, Egypt, and Greece from 1886 to 1887. He advocated Irish Home Rule and supported Charles Stewart Parnell in Ireland. At the 1888 Republican National Convention, Frederick Douglass became the first African American to receive a vote for President of the United States in a major party's roll call vote. In that same year, he spoke at Claflin College, a black college in Orangeburg, South Carolina, and the oldest such institution in the state.
Many African Americans, known as the Exodusters, escaped the Klan and racially discriminatory laws in the South by moving to large northern cities and to places like Kansas. In Kansas, many people formed all-black towns as a way to have a greater level of freedom and autonomy. Douglass didn't favor this, but autonomous black communities are not equivalent to forcing every black person out of America. He didn't agree with the Back-to-Africa movement, believing that it was similar to the American Colonization Society he had fought in his youth. In 1892, at an Indianapolis conference convened by Bishop Henry McNeal Turner, Douglass spoke out against the separatist movements, urging blacks to stick it out. He had made similar speeches as early as 1879, and he was criticized by fellow leaders and even booed by some audiences for this position. Speaking in Baltimore in 1894, Douglass said, "I hope and trust all will come out right in the end, but the immediate future looks dark and troubled. I cannot shut my eyes to the ugly facts before me."
President Harrison appointed Douglass to be the United States's minister resident and consul-general to the Republic of Haiti and chargé d'affaires for Santo Domingo in 1889, but Douglass resigned the commission in July 1891. In 1893, Haiti made Douglass a co-commissioner of its pavilion at the World's Columbian Exposition in Chicago.
In 1892, Douglass constructed rental housing for black human beings, now known as Douglass Place, in the Fells Point area of Baltimore. The complex still exists, and in 2003 was listed on the National Register of Historic Places.
On February 20, 1895, Frederick Douglass attended a meeting of the National Council of Women in Washington, D.C. During that meeting, he was brought to the platform and received a standing ovation. When he returned home, Frederick Douglass passed away of a massive heart attack. He was 77 years old. His funeral was held at the Metropolitan African Methodist Episcopal Church. Thousands of people were there, passing by his coffin to show their respect. Douglass had attended many churches in D.C., and he had a pew at the Metropolitan AME Church. He donated two standing candelabras when that church moved into a new building in 1886, and he also gave many lectures there. His last major speech was "The Lesson of the Hour." Douglass's coffin was transported back to Rochester, New York, where he had lived for 25 years, longer than anywhere else in his life. He was buried next to Anna in the Douglass family plot of Mount Hope Cemetery. Helen joined them in 1903. He belongs to the ages. The Frederick Douglass Memorial Bridge, also called the South Capitol Street Bridge, was built in 1950 in his honor. His home in Anacostia (in Washington, D.C.) became part of the National Park System, and in 1988 it was designated the Frederick Douglass National Historic Site. In 1965, the U.S. Postal Service honored Douglass with a stamp in the Prominent Americans series. The Frederick Douglass Institute is a West Chester University program for advancing multicultural studies across the curriculum and for deepening the intellectual heritage of Frederick Douglass.
On June 19, 2013, a statue of Douglass by Maryland artist Steven Weitzman was unveiled in the United States Capitol Visitor Center as part of the National Statuary Hall Collection, the first statue representing the District of Columbia. On September 15, 2014, under the leadership of Governor Martin O'Malley, a portrait of Frederick Douglass was unveiled at his official residence in Annapolis, MD. This painting, by artist Simmie Knox, is the first portrait of an African American to grace the walls of Government House. Commissioned by Eddie C. Brown, founder of Brown Capital Management, LLC, the painting was presented at a reception by the Governor. On October 18, 2016, the Council of the District of Columbia voted that the city's name as a prospective state is to be "Washington, D.C.", with "D.C." standing for "Douglass Commonwealth."
Legacy and Conclusion
Frederick Douglass lived 77 years on this Earth. He courageously fought against slavery, and he stood up for the rights of women. Frederick Douglass was one of the most revolutionary black men in human history. He was constantly giving speeches, inspiring black Union troops, and defending the rights of immigrants. Frederick Douglass united with fellow freedom fighters overseas, and he always had a love of black people. His first wife Anna Douglass was a fellow activist who always defended him, and she heroically advanced the principle of human freedom. Always working, Frederick Douglass never wavered in his fundamental views. He not only disagreed with slavery. He opposed the death penalty, he wanted women to have the right to vote, and he desired total equality in all functions of society. Firm in his views and compassionate in his spirit, Frederick Douglass made magnificent contributions to the overall black freedom movement. A legacy of power and monumental social change outlines the characteristics of the life of Frederick Douglass. He was a father and a man who saw wrong and sought to eliminate it. Frederick Douglass was totally inspired to make change, and he lived to ensure that future generations would have a better existence than those of the past.
Rest in Power Brother Frederick Douglass.
The 2018 Winter Olympics
During this winter of 2018, the Winter Olympics is upon us. It is a showcase of creative talent, international competition, and tons of athletic, talented human beings who desire to express themselves in the highest possible fashion. For long decades, the winter games have brought people together, motivated excellence, and inspired future generations on the importance of mutual teamwork. The Winter Olympics in Pyeongchang, South Korea is being shown by NBC, and the coverage has reached tons of people globally. The motto of the XXIII Olympic Winter Games is the Korean phrase Hanadoen Yeoljeong (하나된 열정), meaning "Passion. Connected." Lasting from February 9 to February 25, 2018, the games will showcase tons of dedicated performers. The stadium is called Pyeongchang Olympic Stadium. This is South Korea's second Olympic Games and its first Winter Olympic Games; Seoul hosted the Summer Games in 1988. People have prepared for this moment for a long time. Political issues are involved in these affairs too, as some North Korean athletes will perform with South Korean athletes as one team. Pence desires to advance future stronger sanctions against North Korea over the nuclear crisis in that region of the world. The 2018 Winter Olympics will have 102 events in 15 sports, making this the first Winter Olympics to have more than 100 medal events. The sports in the Winter Olympics include the following: alpine skiing, biathlon, bobsleigh, cross-country skiing, curling, figure skating, freestyle skiing, ice hockey, luge, Nordic combined, short track speed skating, skeleton, ski jumping, snowboarding, and speed skating. The six nations making their Winter Olympics debuts are Ecuador, Eritrea, Kosovo, Malaysia, Nigeria, and Singapore.
The Winter Olympics has a long history. It started during the early 20th century, inspired by the ancient Olympic Games held in Olympia, Greece from the 8th century B.C. to the 4th century A.D. Those ancient Olympic games included wrestling, boxing, discus throwing, track and field races, and other sporting events. The ancient Olympics ended around the time of the fall of the Roman Empire. One predecessor of the Winter Olympics was the Nordic Games, created by General Viktor Gustaf Balck and held in Stockholm, Sweden in 1901, 1903, and 1905, and every fourth year thereafter until 1926. Balck was a close friend of Olympic founder Pierre de Coubertin and a charter member of the IOC, the International Olympic Committee. Winter sports like figure skating were included at the 1908 Summer Olympics in London. There were plans to include winter events in future Summer Games, yet a separate Winter Olympic Games came into being. The 1920 Summer Olympics, held in Antwerp, Belgium, had figure skating and an ice hockey tournament. Germany, Austria, Hungary, Bulgaria, and Turkey were banned from competing in the Games because of World War I; those nations were part of the Central Powers, who were defeated by the Allied Powers during the first world war. The first Winter Olympic Games was held in Chamonix, France in 1924. In 1925, the IOC decided to create a separate Olympic Winter Games going forward. The second Winter Olympic Games took place in St. Moritz, Switzerland. In 1928, Sonja Henie of Norway won the figure skating competition at the age of 15, becoming the youngest Olympic champion in history, a record that held for 70 years.
Later, the next Winter Olympics occurred in Lake Placid, New York in 1932; 17 nations and 252 athletes participated. No Winter Games were held during World War II because of the invasions and destruction of that war. St. Moritz hosted the Winter Games again in 1948. The Winter Games were held in Oslo in 1952, and in Austria (in Innsbruck) in 1964. The first Olympics broadcast in color was held in Grenoble, France in 1968; 37 nations took part and 1,158 athletes competed in 35 events. Frenchman Jean-Claude Killy became only the second person to win all the men's alpine skiing events. The 1972 Olympics took place in Sapporo, Japan, the first held outside of North America or Europe. I remember the 1994 Olympics in Lillehammer, Norway, back when I was a young child (I was in the fifth grade as a 10 year old). The women's figure skating competition drew media attention when American skater Nancy Kerrigan was injured on January 6, 1994, in an assault planned by the ex-husband of opponent Tonya Harding. Both skaters competed in the Games, but the gold medal was controversially won by Oksana Baiul, who became Ukraine's first Olympic champion; Kerrigan won silver. The last Winter Olympics was held in Sochi, Russia in 2014, with a record 2,800 participants from 88 countries. The Games were the most expensive so far, with a cost of £30 billion (USD 51 billion). Following their disappointing performance at the 2010 Games, and an investment of £600 million in elite sport, the Russian athletes initially topped the medal table. However, dozens of Russian athletes were later stripped of medals because of doping. Doping is evil and wrong, period.
On the snow, Norwegian biathlete Ole Einar Bjørndalen took two golds to bring his total tally of Olympic medals to 13, overtaking his compatriot Bjørn Dæhlie to become the most decorated Winter Olympian of all time. Another Norwegian, cross-country skier Marit Bjørgen, took three golds: her total of ten Olympic medals tied her as the female Winter Olympian with the most medals, alongside Raisa Smetanina and Stefania Belmondo. Snowboarder Ayumu Hirano became the youngest medalist on snow at the Winter Games when he took a silver in the halfpipe competition at the age of 15. On ice, the Dutch dominated the speed skating events, taking 23 medals, four clean sweeps of the podium places and at least one medal in each of the 12 medal events. Ireen Wüst was their most successful competitor, taking two golds and three silvers. In figure skating, Yuzuru Hanyu became the first skater to break the 100-point barrier in the short program on the way to winning the gold medal. Among the sledding disciplines, luger Armin Zöggeler took a bronze, becoming the first Winter Olympian to secure a medal in six consecutive Games. There are Winter Paralympic Games too, and we honor and respect the athletes who participate in the Paralympic Games as well.
There were many Winter Olympic legends who inspired the lives of many worldwide. The 1980 USA men's hockey team made a miracle by defeating the Soviet team; their roster included Mike Eruzione, Jim Craig, Mark Johnson, and others. Sonja Henie of Norway won 3 gold medals as a figure skater. She was 15 when she won her first Olympic gold, and she innovated skating choreography. Eric Heiden of America won 5 gold medals in speed skating. America's Apolo Anton Ohno is one of the most decorated American Winter Olympic athletes of all time. He competed in 3 Olympic Games and won eight medals, including 2 golds, in short track speed skating. He was born in Seattle. Bonnie Blair from America was a great speed skating sprinter. She won 3 consecutive gold medals in the 500 meters between 1988 and 1994, and won 6 medals in all. Bjorn Daehlie of Norway won 8 gold medals and 12 total medals. Jazmine Fenlator, Elana Meyers Taylor, Lauryn Williams, P.K. Subban, Maé-Bérénice Méité, and other black people performed greatly in winter games as well. Vonetta Flowers is a Sister who won a gold medal during the 2002 Winter Olympics in Salt Lake City, Utah. Debi Thomas was a medal-winning figure skater of the Winter Olympics, and she was inducted into the Figure Skating Hall of Fame in the year 2000.
New History being Made
There is tons of history being made during the 2018 Winter Olympics. There are tons of black people and people of color participating in the Winter Olympics. This Olympics has the largest contingent of black athletes and coaches in Winter Games history. This event totally destroys the false stereotype that we (who are black) don't like sports relating to the winter weather. There are 10 black American and 11 Asian American athletes on the record 242-member U.S. team. There are 3 Caribbean and Sub-Saharan African nations in the parade too. Jamaica will be represented with its first women's bobsled team and its first skeleton athlete. Much news coverage talks about Nigeria's first women's bobsled team. The Sisters on the team are gorgeous, humble, and willing to compete to victory. Their names are Akuoma Omeoga, Seun Adigun (the captain of Team Nigeria and a former track and field star), and Ngozi Onwumere. Adigun said that Nigerians are excited about their country being represented, and that she wanted to fill a void for Nigeria, for people from the great continent of Africa, and for women in general. All three women worked hard. The team raised more than $75,000 in a GoFundMe campaign to pay for helmets, uniforms, travel, and their first sled (called the Maeflower). Serena Williams and others have praised them. Elana Meyers Taylor and Lolo Jones are women of color on the U.S. bobsled team.
This history is not new. Sixteen years ago, Vonetta Flowers became the first African American athlete to win an Olympic gold medal in the Winter Olympics when her two-person bobsled team finished first at the Winter Games in Salt Lake City, Utah. Figure skater Debi Thomas won a bronze medal at the 1988 Winter Games in Calgary. Shani Davis won gold medals at the 2006 and 2010 Winter Games in Turin, Italy, and Vancouver, Canada. Today, Maame Biney is a Sister who is a star in the speedskating world (she is the first black woman Olympic short track speedskater, and she is 18 years old). Erin Jackson is the first black American woman long track Olympic speedskater. Imani Griffin, a 28-year-old long track skater from Winston-Salem, N.C., makes his Olympic debut. Anthony Barthell, from High Point, N.C., coaches the U.S. short track team.
More on the 2018 Winter Olympics
The area of Pyeongchang hosting the Olympics features many developments. The Winter Olympics has shown much excitement. We have massive tensions between North Korea and South Korea. Mike Pence (the Vice President of America) said that he wants to tell the truth about North Korea at every location he visits overseas. Yet, Pence omits the brutal police brutality, discrimination, economic deprivation, sexism, xenophobia, and other evils in America. When North Korea violates human rights with its Stalinism (which is not representative of true socialism, which is progressive), that is wrong. Also, it is wrong for the current occupant of the White House to advocate for Muslim bans and for him to call for a nonsensical border wall across the southern U.S./Mexico border. This is the same male (Trump) who called those in Congress who wouldn't clap for him (during his 2018 State of the Union address) un-American and treasonous. Trump is totally wrong for that statement. That shows Trump's extremism and disgraceful mentality. North Korean leader Kim Jong Un sent his sister as a delegate to the South Korean Winter Olympics. She is the first member of the Kim dynasty to travel to South Korea since the end of the Korean War (which was a very destructive war filled with firebombings of Korean cities and other atrocities) back in 1953. Trump's advocacy for possible nuclear strikes against North Korea is ruthlessly wrong. The only solution to resolve the Korean peninsula foreign policy crisis is a political solution (in the form of a progressive negotiated settlement) so the peoples of the Korean peninsula can have true peace without war. Mike Pence sat close to Kim Jong Un's sister, but they didn't speak or look at each other. Kim Jong Un's sister shook hands with the leader of South Korea. Kim Yo-jong is the name of Kim Jong Un's sister.
Many volunteers have helped the American athletes with their Olympic coats and other clothing for dealing with the cold weather.
The opening ceremony of the 2018 Winter Olympics was spectacular. There were thousands of athletes including the USA Team and the unified historic Korean team. There were fireworks and the celebration of Korean culture.
During the early part of the Winter Games, Switzerland (with Alina Muller and other players) beat the unified Korean team 5-0. Carlijn Achtereekte won the women's 3,000m gold in speed skating, leading a Dutch one-two-three: Ireen Wust took silver while Antoinette De Jong landed bronze. Hyojun Lim of South Korea won the men's 1,500m gold in short-track speed skating, prompting brilliant scenes inside the Gangneung Ice Arena; the host country's first medal came in its best sport. Meanwhile, the first medal for an “Olympic Athlete from Russia” went to Semen Elistratov, who landed bronze. Laura Dahlmeier won biathlon gold: Germany's Dahlmeier cruised to victory in the 7.5km sprint, landing gold with a time of just over 21 minutes. Norway's Marte Olsbu took silver while the Czech Republic's Veronika Vitkova took bronze. Team USA got its first gold medal from the 17-year-old snowboarder Redmond “Red” Gerard, who won the men's slopestyle event and became the youngest American man to win an Olympic winter gold medal since 1928. Chris Mazdzer won the first men's singles luge medal in U.S. history.
About the Winter Olympics, Shaun White won the USA's 100th all-time Winter Olympic gold medal. On February 14, he won his third Olympic gold medal in the men's halfpipe event with a score of 97.75, with Ayumu Hirano of Japan taking the silver medal and Scotty James of Australia taking the bronze. Eric Frenzel of Germany won back-to-back gold medals in the normal hill Nordic combined event. Stina Nilsson of Sweden won the women's cross country sprint race; an American, Jessica Diggins, made the six-woman final. Mikaela Shiffrin (a 22-year-old American) won the Olympic gold medal in the women's giant slalom. Shiffrin's combined time of 2:20.41 over both runs was 0.39 seconds better than that of Norway's Ragnhild Mowinckel, who took the silver. Federica Brignone finished 0.46 off the pace to win bronze, becoming the first Italian woman in 16 years to win an alpine skiing medal, in a race initially scheduled for Monday but twice postponed due to extreme weather conditions. “There were moments when I thought, ‘I don’t know if I’m good enough to do this’, and then there were moments when I thought ‘Who cares? You gotta try. You’re here’,” Shiffrin said. “It’s an incredible feeling to know that my best effort is good enough.”
For almost a century, the Winter Olympics has inspired people all across the globe. There has been excellence shown, international competition, and global respect. Since the 1920's, athletes and others have shown their greatness. The winter weather can never stifle sports excellence. Records have been set, and more and more black people (and other people of color) have been a part of the Winter Olympic Games too. The excitement of skating, bobsled, and other sports definitely inspires our imagination. Legendary athletes don't just exist in the past. They exist in the present too, and they will continue to flourish throughout the future as well. Human beings will shine without question, filled with enthusiasm, resolve, and undeniable dexterity. Also, it is very important to acknowledge those in the Paralympics as being monumental in their talent too.
Cabrillo National Monument (CABR) in southern California was established to memorialize Juan Rodriguez Cabrillo and his 1542 voyage of exploration. It also protects the Old Point Loma Lighthouse and a variety of natural resources, including an area of intertidal habitats (CABR Figure 1). CABR was proclaimed October 14, 1913, and was transferred from the War Department to the National Park Service on August 10, 1933. Its boundaries have changed several times, on February 2, 1959, September 28, 1974, and July 3, 2000. CABR currently encompasses 64.73 ha (159.94 acres), all of which is under federal administration (CABR Figure 2).
CABR is located in San Diego County at the southern tip of Point Loma, a north-south trending peninsula that separates San Diego Bay on the east from the Pacific Ocean to the west. Downtown San Diego is just over 8 km (5 miles) to the northeast, across the bay. North Island Naval Air Station and the city of Coronado, at the north end of the Silver Strand, are just east of the southern tip of Point Loma. A broad coastal plain, marked by flat-topped mesas dissected by west-flowing ephemeral streams and rivers, extends from the shoreline east to foothills of the Peninsular Ranges.
A geologic resource evaluation scoping session, coordinated by the Geologic Resources Division of the National Park Service (NPS), was held for CABR during May 2008 (KellerLynn 2008). The initial paleontological resource inventory and summary was prepared by Koch and Santucci (2003). Other references that describe the paleontology and geology of CABR or its immediate vicinity at the end of Point Loma include Berry (1922), Stephens (1929), Webb (1937), Matsumoto (1959, 1960), Anderson (1962), Valentine (1961), Valentine and Meade (1961), Sliter (1968), Bukry and Kennedy (1969), Kennedy and Moore (1971), Bowersox (1974), Kern and Warme (1974), Ku and Kern (1974), Kennedy (1975a, 1975b), Wilson (1976), Kern (1977), Dawson (1978), Nilsen and Abbott (1979), Sundberg (1979), Popenoe and Saul (1987), Bannon et al. (1989), Kern and Rockwell (1992), Saul and Popenoe (1992), Bukry (1993), Muhs et al. (1994, 2002, 2003), Abbott (1999), Rahman and Droser (2003), Hunt et al. (2006), von Dassow and Droser (2006), Kennedy and Tan (2008), Taylor (2008), and Squires and Saul (2009). Point Loma as a whole is both extensively fossiliferous and extensively documented in the literature.
The geologic history recorded in the sedimentary rocks exposed at CABR is confined to the Late Cretaceous and Pleistocene (see the appendix for a geologic time scale). The San Diego area is part of the Peninsular Ranges terrane (distinct block of continental crust). It is sometimes reported that the terrane was not accreted to the North American craton until the middle of the Cenozoic (Morris et al. 1986; Lund and Bottjer 1992; Ford and Kirkland 2001), but it now appears that the terrane had accreted by the time CABR’s Late Cretaceous rocks were being deposited (Grove and Bebout 1995; Tan and Kodama 1998; Symons et al. 2003; Vaughn et al. 2005).
The Upper Cretaceous rocks of CABR were deposited on a submarine sediment fan, in part by west-flowing sediment gravity flows (Nilsen and Abbott 1979; Bartling and Abbott 1983). The water depth at this time may have been 900 to 1000 m (2,900 to 3,300 ft) (Almgren 1973). Sediment came from the ancestral Peninsular Ranges to the east (Nilsen and Abbott 1979), which during Late Cretaceous time formed an Andean-style mountain range just landward of an active subduction zone. The ancient Farallon oceanic plate was being subducted beneath the North American continental plate, causing regional uplift and exposure of older plutonic (granitic) rocks (Abbott 1999). This mountain-building episode, known as the Laramide Orogeny (Girty 1987), also affected large areas of western North America as far east as present-day Colorado. Meanwhile, the ancient Point Loma submarine fan was depositing sediment into a deepening marine basin. The fan prograded across the floor of this basin and over time formed a thick accumulation of turbidite sandstones, middle-fan channel-fill sandstones and mudstones, and inner-fan channel-fill conglomerates (Nilsen and Abbott 1979).
A hiatus of approximately 70 million years separates the Cretaceous rocks of CABR from the much younger veneer of Pleistocene landforms and unconsolidated deposits. Although only partially documented within CABR, the Upper Cretaceous–Eocene geology of San Diego is related to the Upper Cretaceous–Eocene geology of Channel Islands National Park (CHIS; see the CHIS summary in this report for more details). The islands, which are part of a small crustal block that has moved north and rotated clockwise since the Late Cretaceous, would have been a short distance south of San Diego during the Late Cretaceous, and record the deposition of a similar submarine fan (Bartling and Abbott 1983). Later, during the Eocene, the block had moved north far enough to juxtapose the future site of the islands with the San Diego area, resulting in the accumulation of the same type of conglomerates (Bartling and Abbott 1983).
The Pleistocene geology of the San Diego area includes a series of uplifted marine terraces; CABR has excellent examples of some of these terraces (T. Deméré, pers. comm., November 2011). Sixteen marine terraces and associated deposits are known from San Diego County, ranging from perhaps 1.29 Ma (million years) to 80,000 years old (Kern and Rockwell 1992). Several terraces are exposed within CABR, including, from oldest to youngest, the Linda Vista, Nestor, and Bird Rock terraces (KellerLynn 2008). They date to approximately 855,000 years ago (Linda Vista), 120,000 years ago (Nestor), and 80,000 years ago (Bird Rock) (Kern 1977; Wehmiller et al. 1977; Kennedy et al. 1982; Kern and Rockwell 1992). The various terraces are now much higher than their original elevations; the San Diego region is being uplifted at an average rate of 0.13–0.14 m (5–6 in) per thousand years (Kern and Rockwell 1992). The beginning of the Holocene coincided with the arrival of humans to the area; humans were present in the Carlsbad area 55 km (34 miles) to the north by the Early Holocene (Rick and Erlandson 2000).
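The terrace ages and the average uplift rate quoted above lend themselves to a quick back-of-envelope consistency check. The sketch below is illustrative only (the function name is not from any cited source): it multiplies the 0.13–0.14 m per thousand years rate of Kern and Rockwell (1992) by each terrace's approximate age to estimate the total uplift since each terrace formed.

```python
# Back-of-envelope check: total uplift = rate x age, using the average
# uplift rate of 0.13-0.14 m per thousand years (Kern and Rockwell 1992)
# and the approximate terrace ages quoted in the text. Illustrative only.

def expected_uplift_m(age_kyr, rate_m_per_kyr):
    """Uplift (m) accumulated over age_kyr thousand years at a constant rate."""
    return age_kyr * rate_m_per_kyr

terrace_ages_kyr = {
    "Linda Vista": 855,  # ~855,000 years old
    "Nestor": 120,       # ~120,000 years old
    "Bird Rock": 80,     # ~80,000 years old
}

for name, age in terrace_ages_kyr.items():
    lo = expected_uplift_m(age, 0.13)
    hi = expected_uplift_m(age, 0.14)
    print(f"{name}: {lo:.1f}-{hi:.1f} m of uplift since formation")
```

At that rate the Linda Vista Terrace would have risen roughly 111–120 m since it formed, consistent with its present elevation of about 120 m (390 ft).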
Geologic units exposed within CABR include, from oldest to youngest: the Point Loma Formation and Cabrillo Formation of the Rosario Group (Upper Cretaceous); lower–middle Pleistocene paralic (coastal) deposits; and two sets of upper Pleistocene paralic deposits (Kennedy 1975b; Kennedy and Tan 2008). The terminology for the paralic deposits has changed over time. Kennedy (1975a, 1975b) included the oldest paralic deposits in the Lindavista Formation, and the younger paralic deposits in the Bay Point Formation. Later, Kennedy and Tan (2008) attached no formal names to the deposits, and differentiated two units within Kennedy’s (1975a, 1975b) Bay Point Formation. The older formation names are still widely used, although the city of San Diego has switched to using the generic terms (T. Deméré, pers. comm., November 2011). Because the pre-existing NPS digital geological map of CABR uses the generic terms, this document will use these terms as well to maintain continuity. However, the older terms will be described. The Point Loma Formation, Cabrillo Formation, and upper Pleistocene paralic deposits are fossiliferous within CABR (CABR Table 1), and specimens from the monument are held in museum collections.
The fossils of CABR present opportunities for education, interpretation, and continued or future scientific research in the monument. Fossils have been described since the late 19th century from Point Loma. Cooper (1894) discussed some of the early collecting. Among the 19th century material Cooper noted is a specimen of the coiled ammonite Heteroceras found associated with a coal shaft north of CABR on the west coast of the peninsula, and a specimen of the straight ammonite Baculites chicoensis collected on the surface near the lighthouse. Several species of Cretaceous invertebrates were named during this early period. Cooper (1894) named the bivalves Crassatella lomana, Corbula triangulata, and Crenella santana, and the gastropods Cerithium fairbanksi, Stomatia intermedia, Calliostoma kempiana, Siphonaria capuloides, and Tornatella normalis from Point Loma, and Anderson (1902) named the gastropod Haliotis lomaensis from the Upper Cretaceous Cabrillo Formation (then referred to as the “Chico Formation” because of the similarity of these rocks to Cretaceous rocks in northern California).
Recalling the “flying ammonite” (see the Park Collections section), another large ammonite has been found recently near the southwest corner of the monument, just within or just outside of the boundary (T. Deméré, pers. comm., November 2011). Fossils constantly erode from the Pleistocene marine terrace deposits of the monument (T. Deméré, pers. comm., November 2011).
The Point Loma Formation is exposed along the west side of Point Loma. It was named from a locality just outside of CABR, near the extreme southern tip of the peninsula below the new Point Loma lighthouse (Kennedy and Moore 1971; Kennedy 1975a). Approximately 83 m (270 ft) of the formation is exposed above sea level at this locality, with at least another 190 m (620 ft) present below low tide. This unit is composed of interbedded layers of dusky yellow sandstone and olive-gray clay-rich shale, with beds about 30 cm (12 in) thick (Kennedy 1975a). On the south end of Point Loma, the lower half includes interbedded sandstone and mudstone, and the upper half is mostly mudstone (Kern and Warme 1974). The contact with the overlying Cabrillo Formation is conformable (Kennedy 1975a). These two formations were not differentiated until 1971 (Kennedy and Moore 1971); before this, these two units were called the Rosario Formation. Fossils in the formation indicate a Late Cretaceous age (middle or late Campanian to early Maastrichtian) (Kennedy 1975a). Fossils from just north of CABR, at the Point Loma Waste Water Treatment Plant, date to about 75.5 to 74.5 Ma, and slightly younger fossils are present in the steep slope behind the plant (Bukry 1993).
The Point Loma Formation formed as a submarine fan (Bannon et al. 1989) that accumulated on the outer continental shelf, slope, and rise (Kennedy and Moore 1971). It includes mudstones interpreted as representing the continental slope and basin plain, and sandstones representing lagoonal, shelf, fan lobe, and fan channel settings (Nilsen and Abbott 1979). Foraminifera (amoeba-like protists that form “shells”) from strata exposed at the tip of Point Loma indicate a bathyal environment (broadly, the continental slope), with some specimens of sublittoral foraminifera transported downslope from the adjacent continental shelf (Sliter 1968). The formation’s foraminiferal assemblage has been compared to that of the modern assemblage on the continental slope and in deep basins in the eastern Pacific Ocean (Sliter 1975). The formation’s trace fossils also generally indicate bathyal settings (Kern and Warme 1974). Limited biological disturbances of the sediment (bioturbation) indicate that the oxygen content of the water was low (Sliter 1975).
Marine microfossils and molluscs are well represented in the Point Loma Formation, although many other types of fossils have been found. Single-celled organisms are represented by coccoliths (structural plates from some types of algae, also known as calcareous nannofossils) (Bukry 1993, 1994) and foraminifera (Anderson 1962; Sliter 1968, 1975). Terrestrial plants are represented by a cycad leaf recovered from CABR (Koch and Santucci 2003), angiosperm leaves (T. Deméré, pers. comm., February 2012), and wood (Kern and Warme 1974; Nilsen and Abbott 1979). Marine invertebrates are represented by bryozoans (moss animals) (Taylor 2008), brachiopods (lamp shells) (Nilsen and Abbott 1979), bivalves (Sundberg 1981; Saul and Popenoe 1992; Squires and Saul 2009), ammonites (including coiled and straight [Baculites] forms; Matsumoto 1959, 1960), gastropods (Popenoe and Saul 1987; Loch 1989; Saul 1988), scaphopods (tusk shells) (Coombs and Deméré 1996), crabs (Bishop 1988), ostracodes (seed shrimp) (Coombs and Deméré 1996), echinoids (sea urchins) (Sundberg 1979; Coombs and Deméré 1996), and trace fossils (Kern and Warme 1974), including possible worm tubes (Sliter 1975). Most trace fossils are found in mudstone (Kern and Warme 1974). The vertebrate assemblage includes sharks (Coombs and Deméré 1996), holocephalians (ratfish and related cartilaginous fish) (T. Deméré, pers. comm., November 2011), ray-finned fish (Coombs and Deméré 1996), mosasaurs (KellerLynn 2008), and dinosaurs, including hadrosaurs (Ford 1999) and the armored dinosaur Aletopelta coombsi (the first dinosaur named from California), which is either an ankylosaurid (Ford and Kirkland 2001) or a nodosaurid (Coombs and Deméré 1996; Hawakaya et al. 2005).
The Point Loma Formation is fossiliferous within CABR. The most unusual fossil is a large cycad leaf, collected by R. A. Cerutti and B. O. Riney in 1994 from a tide pool (Koch and Santucci 2003). It is currently on display at the San Diego Natural History Museum (SDNHM 48361, from SDNHM locality 3774) (T. Deméré, pers. comm., November 2011). Abundant invertebrate trace fossils, particularly of the trace genera Ophiomorpha and Thalassinoides, can be found in outcrops around the monument’s tide pools (CABR Figure 4) (B. Pister, CABR Chief of Natural and Cultural Resources Management, pers. comm., December 2011; D. Vaughn, Senior Project Geologist, Geotechnical Exploration Inc., and CABR volunteer, pers. comm., December 2011). These trace fossils are found in densities of 2 to 10 per m2; wave action continually exposes new trace fossils and erodes previously exposed examples (D. Vaughn, pers. comm., December 2011). Greater Point Loma has been an important area for Point Loma Formation fossils, including coccoliths (Bukry 1993), foraminifera (Sliter 1968), bryozoans (Taylor 2008), bivalves (Saul and Popenoe 1992; Squires and Saul 2009), ammonites (Matsumoto 1960; Bannon et al. 1989), gastropods (Popenoe and Saul 1987), and trace fossils (Kern and Warme 1974). Large ammonites with attached bivalves, and a partial mosasaur lower jaw have been found north of the monument boundary (Koch and Santucci 2003; KellerLynn 2008), as well as one of the rare hadrosaur specimens (Hilton 2003; T. Deméré, pers. comm., November 2011).
The Cabrillo Formation is exposed in central CABR (Kennedy 1975b; Kennedy and Tan 2008). It was named from a locality 250 m (820 ft) east of the new Point Loma lighthouse, where it is 81 m (270 ft) thick and composed of structureless (massive) sandstone and cross-bedded conglomerate. Farther north, the formation reaches a thickness of 170 m (560 ft) (Kennedy 1975a). Sandstone-dominated and conglomerate-dominated sections can be differentiated, and both are present at CABR (Kennedy 1975b; Kennedy and Tan 2008). Mudstones are also included in the formation (Dawson 1978; Nilsen and Abbott 1979). The unit has a Maastrichtian age (Taylor 2008).
The Cabrillo Formation represents continued deposition of the submarine fan that was active during deposition of the Point Loma Formation (Nilsen and Abbott 1979; Girty 1987; Bannon et al. 1989). New facies include mudstones from turbidity currents and conglomerate from fan channel fills (Nilsen and Abbott 1979). The lowest part of the formation was part of the inner section of the submarine fan (Girty 1987; Bannon et al. 1989).
The Cabrillo Formation is not generally as fossiliferous as the Point Loma Formation. Fossils reported from the formation include coccoliths (Bukry and Kennedy 1969), foraminifera (Anderson 1962), wood fragments (Dawson 1978), corals (Dawson 1978; Sundberg 1979), bryozoans (Taylor 2008), brachiopods (Dawson 1978), bivalves (Dawson 1978; Bannon et al. 1989; Kennedy and Shiller 2011), ammonites, gastropods (Dawson 1978; Bannon et al. 1989), echinoderms (Dawson 1978), and a shark tooth (Koch and Santucci 2003; Hunt et al. 2006). Many fossils discovered in the Cabrillo Formation are reworked, from the underlying Point Loma Formation or from older deposits of the Cabrillo Formation itself (Dawson 1978; T. Deméré, pers. comm., November 2011). For example, fossils reported by Dawson (1978) primarily came from rip-up clasts (rock fragments) of siltstone, meaning they had already undergone some lithification (becoming stone) once before. The only fossils not found in clasts were rare examples of abraded and damaged shells, belonging to robust shallow-water bivalves like Coralliochama orcutti, Ostrea sp., and Spondylus striatus (Dawson 1978). Similarly, a single bivalve shell was the only fossil reported by Kennedy (1975a) that was not reworked.
The Cabrillo Formation is fossiliferous within CABR. Dawson (1978) collected fossils from a series of localities along the sea cliffs on the east side of Point Loma, extending into CABR (SDNHM locality 2823; T. Deméré, pers. comm., November 2011). A shark tooth, from the genus Squalicorax, was found in a clast recovered from a channel cut into sandstone (Koch and Santucci 2003) either just within or just outside of northern CABR (SDNHM specimen 35963, SDNHM locality 3272; T. Deméré, pers. comm., November 2011). It too is probably reworked (T. Deméré, pers. comm., November 2011). Point Loma in general has yielded many fossils from the Cabrillo Formation. Aside from Dawson’s (1978) finds and the shark tooth, coccoliths (Bukry and Kennedy 1969), foraminifera (Anderson 1962), and bryozoans (Taylor 2008) have also been reported from the formation on the peninsula.
The upper Pleistocene paralic deposits of CABR are associated with two major marine terraces: the older Nestor Terrace, at elevations of about 22 to 23 m (72 to 75 ft) above sea level, and the younger Bird Rock Terrace, at about 9 to 11 m (30 to 36 ft) above sea level. Around San Diego, these deposits consist of siltstone, sandstone, and conglomerate, deposited in strandline, beach, and estuarine settings, and as colluvium (sediment transported by gravity, such as around slopes and cliffs) (Kennedy and Tan 2008). These deposits, particularly those associated with the Nestor Terrace, are also known as the Bay Point Formation (see for example Kennedy 1975a, 1975b). At CABR, the Bay Point Formation is complex and includes marine sediments grading up to nonmarine sediments (T. Deméré, pers. comm., November 2011). There are no Pleistocene deposits younger than the Bird Rock Terrace that can be mapped at a scale of 1:24,000 at CABR (Kennedy 1975b; Kennedy and Tan 2008).
The Nestor and Bird Rock terraces were formed during sea-level highstands that date to approximately 120,000 and 80,000 years ago, respectively (Wehmiller et al. 1977; Kennedy et al. 1982; Muhs et al. 1994, 2002). The marine terraces cut during these highstands are found around the world (Muhs et al. 1994). The Nestor Terrace highstand was perhaps 6 m (20 ft) higher than the modern sea level (Muhs et al. 1994), and appears to be associated with water temperatures similar to those found off of San Diego today (Muhs et al. 2006), or slightly warmer (Kennedy et al. 1982; Kennedy 1999). During Nestor Terrace time, Point Loma was an island (Kern 1977). The Bird Rock Terrace sea level highstand was 2 m (7 ft) or more lower than the modern sea level (Kern and Rockwell 1992), and the water was cooler (Kennedy et al. 1982; Kennedy 1999; Muhs et al. 2006; Kennedy and Rockwell 2009). An intermediate 100,000-year-old terrace is not as well preserved (Muhs et al. 1994), but is present on Point Loma (Muhs et al. 2003). These terraces were all formed during the last interglacial complex, preceding the most recent extensive glaciation (Muhs et al. 2002). Older publications sometimes refer to them as being from the Sangamon (Kern et al. 1971; Moore 1972), a term derived from an interglacial soil in Sangamon Co., Illinois, but no longer used for marine deposits of the last interglacial period.
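Given a terrace's present elevation, the sea level at which it was cut, and its age, an apparent uplift rate follows from simple arithmetic: rate = (present elevation − sea level at formation) / age. The minimal sketch below uses midpoint figures quoted in the text; the helper name is illustrative, not from the cited sources.

```python
# Apparent uplift rate = (present elevation - sea level at formation) / age,
# using midpoint elevations and the highstand estimates quoted in the text.
# Illustrative sketch only.

def uplift_rate_m_per_kyr(elev_m, paleo_sea_level_m, age_kyr):
    """Apparent uplift rate in m per thousand years."""
    return (elev_m - paleo_sea_level_m) / age_kyr

# Nestor Terrace: ~22-23 m elevation today, cut ~6 m above modern sea level,
# ~120,000 years ago.
nestor = uplift_rate_m_per_kyr(22.5, 6.0, 120.0)    # ~0.14 m/kyr

# Bird Rock Terrace: ~9-11 m elevation today, cut ~2 m below modern sea
# level, ~80,000 years ago.
bird_rock = uplift_rate_m_per_kyr(10.0, -2.0, 80.0)  # ~0.15 m/kyr

print(f"Nestor: {nestor:.3f} m/kyr; Bird Rock: {bird_rock:.3f} m/kyr")
```

Both values fall near the published average of 0.13–0.14 m per thousand years (Kern and Rockwell 1992), which is one way the terrace elevations, highstand estimates, and ages can be cross-checked against each other.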
The upper Pleistocene paralic deposits of the San Diego area are extensively fossiliferous. Approximately 275 species have been identified from the Nestor Terrace (Kern 1977), and more than 250 from the Bird Rock Terrace (Kennedy and Shiller 2011). The Nestor Terrace assemblage includes corals, chitons (sea cradles), bivalves, gastropods, scaphopods, barnacles (Kern 1977), crabs (T. Deméré, pers. comm., November 2011), echinoids, and worm tubes (Kern 1977). The Bird Rock Terrace assemblage includes coralline algae, corals, bryozoans, brachiopods, chitons, bivalves, gastropods, scaphopods, polychaete worms, barnacles, decapod crustaceans (crabs, lobsters, and allies), echinoids, sponge borings, shark teeth, stingray teeth and stingers, and bony fish otoliths and miscellaneous bones (Kennedy and Shiller 2011). The Bay Point Formation (in the broad sense) as a whole has a fossil assemblage including foraminifera (Kern 1971; Bowersox 1974; Kennedy 1975a), sponges, corals, brachiopods (Kennedy and Shiller 2011), chitons, bivalves (Kern et al. 1971), scaphopods (Valentine 1959), ostracodes (Holden 1968; Kern 1971; Bowersox 1974; Kennedy 1975a), echinoids (Valentine 1959; Kennedy and Shiller 2011), worm tubes, the ray Myliobatis (Kern et al. 1971), and shark teeth. Rare land mammals, including tapirs (Jefferson 1989), ground sloths, mastodons, mammoths, horses, and camels (T. Deméré, pers. comm., November 2011), have been found in nonmarine sediments broadly assigned to the Bay Point Formation. Reworked Pliocene fossils have also been reported by Stephens (1929) and Kern et al. (1971).
The upper Pleistocene deposits of CABR are very fossiliferous. The SDNHM has four fossil localities from the Nestor Terrace within CABR (SDNHM localities 58, 121, 457, and 5635) (T. Deméré, pers. comm., November 2011). Bivalve and gastropod fossils from localities 58 and 121 were reported as far back as 1929 (Stephens 1929). Locality 58 has yielded the gastropod Littorina, locality 121 yielded 13 taxa of bivalves and gastropods, and locality 457 yielded the coral Balanophyllia elegans, the chitons Lepidozona californiensis and Stenoplax conspicua, 41 taxa of bivalves and gastropods, the barnacle Tetraclita sp., and the crab Pachygrapsus crassipes, according to the SDNHM online collections database. SDNHM locality 5635 is the same as San Diego State University (SDSU) locality F2521, a Nestor Terrace locality discussed by Kern (1977). SDSU locality F2525 yielded 5 chiton species, 13 bivalve species, 38 gastropod species, the barnacle Tetraclita rubescens, and the echinoids Dendraster excentricus and Strongylocentrotus sp. (Kern 1977). The records of the SDNHM for locality 5635 add sponges, bryozoans, decapod crustaceans, and bony fish to the list. SDNHM locality 5635 has unusual fossils from a more protected facies of the unit, and warrants additional investigation (T. Deméré, pers. comm., November 2011).
Fossils from the younger Bird Rock Terrace have been recovered from locations just north of CABR on Point Loma and as far north as Ocean Beach (Kennedy and Rockwell 2009; Kennedy and Shiller 2011). Fossils include sponges, corals, bryozoans, brachiopods, chitons, bivalves, gastropods, scaphopods, polychaete worms, barnacles, decapod crustaceans, echinoids, shark and ray teeth, and bony fish remains (Webb 1937; Kennedy and Shiller 2011). Muhs et al. (1994, 2002) obtained uranium-thorium dates on corals from the Bird Rock and Nestor terraces north and south of CABR on Point Loma. Corals that have yielded 100,000-year dates have also been recorded by Muhs et al. (2003). The solitary coral Balanophyllia elegans is often used for dating the terraces (Muhs et al. 1994, 2002). Gastropods from the Nestor Terrace on Point Loma have been found encrusted by contemporaneous bryozoans, worms, barnacles, and other gastropods (Rahman and Droser 2003). Worm-encrusted Pleistocene bivalves have also been found around the peninsula (von Dassow and Droser 2006). Other accounts of late Pleistocene fossils from Point Loma include Berry (1922), Valentine (1961), Valentine and Meade (1961), Bowersox (1974), and numerous unpublished paleontological monitoring and mitigation reports for the City of San Diego Development Services Department.
Fossils have not yet been documented from the lower–middle Pleistocene paralic deposits within CABR. However, this unit is known to preserve fossils elsewhere in the San Diego area, and future field investigations within the monument may recover fossils from it.
The lower–middle Pleistocene paralic deposits of CABR and the vicinity were formed on wave-cut platforms and preserved by uplift (Kennedy and Tan 2008). Within CABR, they are found in relatively small areas at the higher elevations of the monument (Kennedy 1975b; Kennedy and Tan 2008). These deposits have also been referred to as the Lindavista Formation (Hanna 1926; Kennedy 1975a), and are associated with the Linda Vista Terrace, which is found at about 120 m (390 ft) above sea level (Moore 1972; Kern and Rockwell 1992). Deposits on the terrace are about 2 to 10 m (7 to 33 ft) thick (Moore 1972), and date to a highstand that occurred approximately 855,000 years ago (Kern and Rockwell 1992). The Lindavista Formation is noted for the hematite cement that binds its sediments and provides some resistance to erosion (Hanna 1926; Kennedy 1975a).
The fossil assemblage of the Lindavista Formation is not as diverse as that of the younger terrace units discussed previously, and it is not known to be fossiliferous at CABR. Fossils include bivalves, gastropods, barnacles, echinoids, and worm tubes (Kennedy 1973). The best material is known from the Tierrasanta community near Murphy Canyon, over 23 km (14 miles) to the northeast (Kennedy 1973). The assemblage suggests littoral to shallow sublittoral depths, with material derived from two main habitats, an “exposed open coast sandy beach, and a cobble or rocky bottom” (Kennedy 1973). Rare remains of marine vertebrates, including shark teeth and baleen whale ribs, have been recovered from the Lindavista Formation in Mira Mesa, over 27 km (17 miles) northeast of CABR (T. Deméré, pers. comm., February 2012). Land animals are not known from the Lindavista Formation, but slightly older rocks in eastern San Diego County have yielded extensive vertebrate fossils, as part of what is known as the Vallecito Creek Local Fauna. Thousands of vertebrate specimens from over 2,000 localities are known, including remains of sharks, bony fish, frogs, turtles, lizards, and diverse birds and mammals (Cassiliano 1999). These fossils are indicative of the vertebrates that existed at about the time of the formation of the Linda Vista Terrace.
Cultural artifacts make up almost all of CABR’s collections (B. Pister, pers. comm., January 2012). Formerly, the Visitor Center had a cast of a fossil known informally as the “flying ammonite” on display. The actual fossil was collected near CABR from the rocky beach on the east side of Point Loma in 1975 and is in the collections of the Natural History Museum of Los Angeles County, Los Angeles (LACM) (Wilson 1976). It received its nickname because a helicopter was used to recover it from the boulder beach where it was found. It is a specimen of the species Pachydiscus catarinae; a much smaller individual was found within it, and possibly represents a juvenile that was being brooded inside the adult’s shell, in a similar manner to the modern paper nautilus (Wilson 1976).
Several institutions have records of fossil localities within CABR: the LACM (G. Kennedy, Brian F. Smith & Associates, Inc., pers. comm., March 2012); the San Diego Natural History Museum (SDNHM) (Stephens 1929; Dawson 1978; T. Deméré, pers. comm., November 2011); San Diego State University (SDSU) (Kern 1977); and the University of California Museum of Paleontology, Berkeley (UCMP), which has fossils collected from CABR that were previously at USGS-Menlo Park (G. Kennedy, pers. comm., March 2012). SDSU fossils from CABR can be found in the collections of the LACM and SDNHM. Additionally, Cooper’s (1894) fossils, some of which may be from CABR, are probably at the California Academy of Science (CAS) in San Francisco, and fossils collected from the tip of Point Loma for various graduate projects may have been at UCLA at one time (G. Kennedy, pers. comm., March 2012); UCLA collections are now at LACM.
The LACM has fossils from three Pleistocene-age terrace localities collected by Thomas Deméré for the then-San Diego State College (now University): LACM IP5139, 5140, and 5142 (G. Kennedy, pers. comm., March 2012). Specimens from CABR are on display at the SDNHM, including the cycad leaf from the Point Loma Formation (Koch and Santucci 2003). SDNHM collections include 1,765 specimens from CABR localities: 145 specimens of the gastropod Littorina from locality 58, on the Nestor Terrace; 81 specimens of bivalves and gastropods from locality 121, on the Nestor Terrace; 169 specimens of marine invertebrates, mostly bivalves and gastropods, from locality 457 on the Nestor Terrace; 211 specimens of marine invertebrates from locality 2823, in the Cabrillo Formation; the Squalicorax tooth from locality 3272, in the Cabrillo Formation; the cycad leaf from locality 3774, in the Point Loma Formation; and 1,157 specimens of marine animals, mostly bivalves and gastropods, from locality 5635 on the Nestor Terrace. Locality 5635 is another site which was originally collected as an SDSU site (T. Deméré, pers. comm., November 2011). The UCMP has in its collections fossils from USGS-Menlo Park site USGS M6705, another Pleistocene site from the east side of Point Loma in CABR (G. Kennedy, pers. comm., March 2012).
Paleontological Resource Management, Preliminary Recommendations
- Coastal erosion and landslides are important threats to paleontological resources at CABR. Little can be done to mitigate erosion, because fossils that erode from the sea cliffs are removed almost immediately by coastal processes (KellerLynn 2008). Thomas Deméré (pers. comm., November 2011) suggested that preparing detailed descriptions of measured stratigraphic sections (bed-by-bed descriptions of exposures) could effectively summarize the distribution of fossiliferous zones in the sedimentary rocks of CABR, which could help focus management efforts.
- Invertebrate trace fossils are located in the tide pool area, incurring inevitable human contact and erosion. Some kind of interpretation, possibly combined with blocking off some of the fossils, has been proposed in the past. The continuous wave erosion makes protection unfeasible, but an interpretative sign or placard (that could be moved as erosion dictates) might be a way to enhance the visitor experience.
- The monument should consider future field inventories for paleontological resources to more fully document in situ occurrences of fossils. The monument may consider a formal site documentation and condition assessment for significant fossil localities. Monitoring of significant sites should be undertaken at least once a year in the future. A Geologic Resource Monitoring Manual by the Geological Society of America and NPS Geologic Resources Division includes a section on paleontological resource monitoring (Santucci et al. 2009).
- Monument staff should be encouraged to observe exposed sedimentary rocks and associated eroded deposits for fossil material while conducting their usual duties. Staff should photo-document and monitor any occurrences of paleontological resources that may be observed in situ. Fossils and their associated geologic context (surrounding rock) should be documented but left in place unless they are subject to imminent degradation by artificially accelerated natural processes or direct human impacts. [The monument may want to consider establishing some sort of protocol for salvaging fossils in imminent danger of erosion.] When opportunities arise to observe paleontological resources in the field and take part in paleontological field studies with trained paleontologists, monument staff should take advantage of them.
- Fossils found in a cultural context should be documented like other fossils, but will also require the input of an archeologist. Any fossil with cultural context may be culturally sensitive as well (e.g., subject to NAGPRA) and should be regarded as such until otherwise established. The Geologic Resources Division can coordinate additional documentation/research of such material.
- Future infrastructure developments or archeological excavations should consider scheduling site monitoring by a trained paleontologist in order to document and protect fossil resources.
- Contact the NPS Geologic Resources Division for technical assistance with paleontological resource management issues.
Last revised 08-Sep-14
- In the beginning, God created the heavens and the earth
- Adam and Eve
- c.1800 B.C. The death of Joseph
- c.1445 B.C. The Exodus from Egypt
- c.1405 B.C. The death of Moses. The books of Genesis, Exodus, and Leviticus were written by Moses sometime between the Exodus and his death. Numbers and Deuteronomy were written during Moses's last days. Deuteronomy includes his farewell address, and an account of his death.
- c.1400 B.C. The conquest of the Promised Land. The book of Joshua contains eyewitness accounts, so it was written around this time.
- c.1380-1050 B.C. - The period of the Judges
|Israel and Judah|Notes|
|---|---|
|c.1050 - 1010 B.C.||
|c.1010 - 970 B.C.||
|c.970 - 931 B.C.||
||The construction of the temple in Jerusalem|
||Rehoboam becomes king. The northern tribes revolt and the kingdom is divided. From here on, the northern tribes are called Israel and the southern tribes are called Judah. Now Jeroboam is king of Israel, and Rehoboam is king of Judah.|

|Israel (northern kingdom)|Judah (southern kingdom)|
|---|---|
|c.931 - 910 B.C.|c.931 - 914 B.C.; c.914 - 912 B.C.|
|c.909 - 886 B.C.|c.911 - 871 B.C.|
|c.885 - 874 B.C.; c.873 - 853 B.C.; c.853 - 852 B.C.|c.871 - 848 B.C.|

- c.850 B.C. - The death of Elijah, prophet in Israel

|Israel (northern kingdom)|Judah (southern kingdom)|
|---|---|
|c.852 - 841 B.C.|c.848 - 841 B.C.|
|c.842 - 813 B.C.|c.840 - 835 B.C.|
|c.813 - 797 B.C.|c.835 - 796 B.C.|
|c.797 - 782 B.C.|c.796 - 767 B.C.|

- c.793-740 B.C. - The ministry of Amos in both kingdoms
- c.793-753 B.C. - Jonah lived during Jeroboam II's reign, and ministered to Nineveh, the capital of Assyria

|Israel (northern kingdom)|Judah (southern kingdom)|
|---|---|
|c.793 - 753 B.C.|c.767 - 739 B.C.|

- c.750-700 B.C. - The ministry of Micah
- c.740-700 B.C. - The ministry of Isaiah in Judah

|Israel (northern kingdom)|Judah (southern kingdom)|
|---|---|
|c.747 - 742 B.C.; c.742 - 740 B.C.|c.739 - 734 B.C.|
|c.740 - 731 B.C.; c.731 - 722 B.C.|c.734 - 728 B.C.; c.728 - 699 B.C.|

- c.722 B.C. - The ministry of Hosea in Israel
- c.722 B.C. - The nation of Israel is destroyed by Assyria
- c.699 - 643 B.C. - Manasseh
- c.650 B.C. - Nahum preaches to Nineveh. They do not repent.
- c.642 - 640 B.C. - Amon
- c.640-615 B.C. - Habakkuk is written
- c.640 - 609 B.C. - The reforms of King Josiah. Zephaniah is written during this period.
- 627-586 B.C. - The ministry of Jeremiah
- 612 B.C. - Nineveh, the capital of Assyria, falls to Babylon.
- c.609 B.C. - Jehoahaz
- c.609 - 598 B.C. - Jehoiakim
- 605 B.C. - The Babylonian conquest of Judah begins. Daniel is exiled, and serves in the Babylonian king's court.
- 597 B.C. - Ezekiel is exiled to Babylon and begins his prophetic ministry, which lasts at least 23 years
- c.597 B.C. - Jehoiachin
- c.597 - 587 B.C. - Zedekiah
- 586 B.C. - The fall of Jerusalem and the beginning of the Babylonian captivity. The event is commemorated in Lamentations. Obadiah writes his prophecy against the Edomites, who had helped Babylon against the Israelites.
- 539 B.C. - Babylon defeated by Persia. Daniel given a position in Persia's court.
- 538 B.C. - The people of Judah begin their return from exile in Babylon. Soon thereafter, 1 and 2 Chronicles are written, covering the history of Judah for the previous 500 years.
- 515 B.C. - The temple is rebuilt, at the urging of Haggai and Zechariah. Malachi writes shortly thereafter.
- 480 B.C. - Esther, a Jewish exile in Persia, becomes queen of the Persian Empire
- 458 B.C. - Another group of exiles returns, led by Ezra
- 445 B.C. - The last group of exiles returns with Nehemiah, and they rebuild the wall of Jerusalem
Dates are taken from the book introductions in the English Standard Version Classic Reference Bible. I also used this chronology for dates of kings. Where there was a conflict with the ESV, I went with the ESV. This causes some gaps. There are several competing chronologies for the kings, but the farthest they are ever off from each other is about 10 years. I'm attempting to provide an overall view and a feeling for the order of events, so this is fine for my purposes. The dates marked with a "c." are approximate. The ones without that mark are more certain.
Circumstellar habitable zone
In astronomy and astrobiology, the circumstellar habitable zone (CHZ), or simply the habitable zone, is the range of orbits around a star within which a planetary surface can support liquid water given sufficient atmospheric pressure. The bounds of the CHZ are based on Earth's position in the Solar System and the amount of radiant energy it receives from the Sun. Due to the importance of liquid water to Earth's biosphere, the nature of the CHZ and the objects within it may be instrumental in determining the scope and distribution of Earth-like extraterrestrial life and intelligence.
The habitable zone is also called the Goldilocks zone, a metaphor of the children's fairy tale of "Goldilocks and the Three Bears", in which a little girl chooses from sets of three items, ignoring the ones that are too extreme (large or small, hot or cold, etc.), and settling on the one in the middle, which is "just right".
Since the concept was first presented in 1953, many stars have been confirmed to possess a CHZ planet, including some systems that consist of multiple CHZ planets. Most such planets, being super-Earths or gas giants, are more massive than Earth, because such planets are easier to detect. On November 4, 2013, astronomers reported, based on Kepler data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs in the Milky Way. 11 billion of these may be orbiting Sun-like stars. Proxima Centauri b, located about 4.2 light-years (1.3 parsecs) from Earth in the constellation of Centaurus, is the nearest known exoplanet, and is orbiting in the habitable zone of its star. The CHZ is also of particular interest to the emerging field of habitability of natural satellites, because planetary-mass moons in the CHZ might outnumber planets.
In subsequent decades, the CHZ concept began to be challenged as a primary criterion for life, so the concept is still evolving. Since the discovery of evidence for extraterrestrial liquid water, substantial quantities of it are now thought to occur outside the circumstellar habitable zone. The concept of deep biospheres, like Earth's, that exist independently of stellar energy, is now generally accepted in astrobiology, given the large amount of liquid water known to exist within the lithospheres and asthenospheres of the Solar System. Sustained by other energy sources, such as tidal heating or radioactive decay, or pressurized by non-atmospheric means, liquid water may be found even on rogue planets, or their moons. Liquid water can also exist at a wider range of temperatures and pressures as a solution, for example with sodium chlorides in seawater on Earth, chlorides and sulphates on equatorial Mars, or ammoniates, due to its different colligative properties. In addition, other circumstellar zones, where non-water solvents favorable to hypothetical life based on alternative biochemistries could exist in liquid form at the surface, have been proposed.
An estimate of the range of distances from the Sun allowing the existence of liquid water appears in Newton's Principia (Book III, Section 1, corol. 4). The concept of a circumstellar habitable zone was first introduced in 1953 by Hubertus Strughold, who in his treatise The Green and the Red Planet: A Physiological Study of the Possibility of Life on Mars, coined the term "ecosphere" and referred to various "zones" in which life could emerge. In the same year, Harlow Shapley wrote "Liquid Water Belt", which described the same theory in further scientific detail. Both works stressed the importance of liquid water to life. Su-Shu Huang, an American astrophysicist, first introduced the term "habitable zone" in 1959 to refer to the area around a star where liquid water could exist on a sufficiently large body, and was the first to introduce it in the context of planetary habitability and extraterrestrial life. A major early contributor to habitable zone theory, Huang argued in 1960 that circumstellar habitable zones, and by extension extraterrestrial life, would be uncommon in multiple star systems, given the gravitational instabilities of those systems.
The theory of habitable zones was further developed in 1964 by Stephen H. Dole in his book Habitable Planets for Man, in which he discussed the concept of circumstellar habitable zone as well as various other determinants of planetary habitability, eventually estimating the number of habitable planets in the Milky Way at about 600 million. At the same time, science-fiction author Isaac Asimov introduced the concept of a circumstellar habitable zone to the general public through his various explorations of space colonization. The term "Goldilocks zone" emerged in the 1970s, referencing specifically a region around a star whose temperature is "just right" for water to be present in the liquid phase. In 1993, astronomer James Kasting introduced the term "circumstellar habitable zone" to refer more precisely to the region then (and still) known as the habitable zone. Kasting was the first to present a detailed model for the habitable zone for exoplanets.
An update to habitable zone theory came in 2000, when astronomers Peter Ward and Donald Brownlee introduced the idea of the "galactic habitable zone", which they later developed with Guillermo Gonzalez. The galactic habitable zone, defined as the region where life is most likely to emerge in a galaxy, encompasses those regions close enough to a galactic center that stars there are enriched with heavier elements, but not so close that star systems, planetary orbits, and the emergence of life would be frequently disrupted by the intense radiation and enormous gravitational forces commonly found at galactic centers.
Subsequently, some astrobiologists have proposed that the concept be extended to other solvents, including dihydrogen, sulfuric acid, dinitrogen, formamide, and methane, among others, which would support hypothetical life forms that use an alternative biochemistry. In 2013, further developments in habitable zone theory were made with the proposal of a circumplanetary habitable zone, also known as the "habitable edge", to encompass the region around a planet where the orbits of natural satellites would not be disrupted, and at the same time tidal heating from the planet would not cause liquid water to boil away.
Whether a body is in the circumstellar habitable zone of its host star is dependent on the radius of the planet's orbit (for natural satellites, the host planet's orbit), the mass of the body itself, and the radiative flux of the host star. Given the large spread in the masses of planets within a circumstellar habitable zone, coupled with the discovery of super-Earth planets which can sustain thicker atmospheres and stronger magnetic fields than Earth, circumstellar habitable zones are now split into two separate regions—a "conservative habitable zone" in which lower-mass planets like Earth or Venus can remain habitable, complemented by a larger "extended habitable zone" in which super-Earth planets, with stronger greenhouse effects, can have the right temperature for liquid water to exist at the surface.
The inner edge of the HZ is the distance where a runaway greenhouse effect vaporizes the whole water reservoir and, as a second effect, induces the photodissociation of water vapor and the loss of hydrogen to space. The outer edge of the HZ is the distance from the star where adding more carbon dioxide to the atmosphere fails to keep the surface of the planet above the freezing point.
Solar System estimates
Estimates for the habitable zone within the Solar System range from 0.38 to 10.0 astronomical units, though arriving at these estimates has been challenging for a variety of reasons. Numerous planetary mass objects orbit within, or close to, this range and as such receive sufficient sunlight to raise temperatures above the freezing point of water. However, their atmospheric conditions vary substantially. The aphelion of Venus, for example, touches the inner edge of the zone and while atmospheric pressure at the surface is sufficient for liquid water, a strong greenhouse effect raises surface temperatures to 462 °C (864 °F), at which water can only exist as vapour. The entire orbits of the Moon, Mars, and numerous asteroids also lie within various estimates of the habitable zone. Only at Mars' lowest elevations (less than 30% of the planet's surface) are atmospheric pressure and temperature sufficient for water, if present, to exist in liquid form for short periods. At Hellas Basin, for example, atmospheric pressures can reach 1,115 Pa and temperatures above zero Celsius (around the triple point for water) for 70 days in the Martian year. Despite indirect evidence in the form of seasonal flows on warm Martian slopes, no confirmation has been made of the presence of liquid water there. While other objects orbit partly within this zone, including comets, Ceres is the only one of planetary mass. A combination of low mass and an inability to mitigate evaporation and atmosphere loss against the solar wind make it impossible for these bodies to sustain liquid water on their surface. Most estimates, therefore, are inferred from the effect that a repositioned orbit would have on the habitability of Earth or Venus.
According to extended habitable zone theory, planetary mass objects with atmospheres capable of inducing sufficient radiative forcing could possess liquid water farther out from the Sun. Such objects could include those whose atmospheres contain a high component of greenhouse gas and terrestrial planets much more massive than Earth (super-Earth class planets), that have retained atmospheres with surface pressures of up to 100 kbar. There are no examples of such objects in the Solar System to study; not enough is known about the nature of atmospheres of these kinds of extrasolar objects, and the net temperature effect of such atmospheres including induced albedo, anti-greenhouse or other possible heat sources cannot be determined by their position in the habitable zone.
For reference, the average distance from the Sun of some major bodies within the various estimates of the habitable zone are: Mercury, 0.39 AU; Venus, 0.72 AU; Earth, 1.00 AU; Mars, 1.52 AU; Vesta, 2.36 AU; Ceres, 2.77 AU; Jupiter, 5.20 AU; Saturn, 9.58 AU.
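As a rough cross-check, these distances can be compared against one published estimate. The sketch below is only an illustration, using the conservative Kasting et al. (1993) limits of 0.95–1.37 AU; it confirms that, of the bodies listed, only Earth's mean distance falls inside that particular zone:

```python
# Mean orbital distances (AU) quoted above.
BODIES = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Vesta": 2.36, "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58,
}

# Conservative habitable-zone bounds from Kasting et al. (1993).
INNER_AU, OUTER_AU = 0.95, 1.37

in_zone = [name for name, au in BODIES.items() if INNER_AU <= au <= OUTER_AU]
print(in_zone)  # ['Earth']
```

Under a wider estimate such as the optimistic Kasting limits (0.84–1.67 AU), Mars at 1.52 AU would join the list, which is exactly why the table below spans such a large range of proposed edges.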
|Inner edge (AU)|Outer edge (AU)|Year|Notes|
|---|---|---|---|
|0.725|1.24|Dole 1964|Used optically thin atmospheres and fixed albedos. Places the aphelion of Venus just inside the zone.|
||1.385–1.398|Budyko 1969|Based on studies of ice albedo feedback models to determine the point at which Earth would experience global glaciation. This estimate was supported in studies by Sellers 1969 and North 1975.|
|0.88–0.912||Rasool and De Bergh 1970|Based on studies of Venus's atmosphere, Rasool and De Bergh concluded that this is the minimum distance at which Earth would have formed stable oceans.|
|0.95|1.01|Hart et al. 1979|Based on computer modelling and simulations of the evolution of Earth's atmospheric composition and surface temperature. This estimate has often been cited by subsequent publications.|
||3.0|Fogg 1992|Used the carbon cycle to estimate the outer edge of the circumstellar habitable zone.|
|0.95|1.37|Kasting et al. 1993|Founded the most common working definition of the habitable zone used today. Assumes that CO2 and H2O are the key greenhouse gases, as they are for the Earth. Argued that the habitable zone is wide because of the carbonate-silicate cycle. Noted the cooling effect of cloud albedo. Table shows conservative limits; optimistic limits were 0.84–1.67 AU.|
||2.0|Spiegel et al. 2010|Proposed that seasonal liquid water is possible to this limit when combining high obliquity and orbital eccentricity.|
|0.75||Abe et al. 2011|Found that land-dominated "desert planets" with water at the poles could exist closer to the Sun than watery planets like Earth.|
||10|Pierrehumbert and Gaidos 2011|Terrestrial planets that accrete tens-to-thousands of bars of primordial hydrogen from the protoplanetary disc may be habitable at distances that extend as far out as 10 AU in our solar system.|
|0.77–0.87|1.02–1.18|Vladilo et al. 2013|Inner edge of circumstellar habitable zone is closer and outer edge is farther for higher atmospheric pressures; determined minimum atmospheric pressure required to be 15 millibar.|
|0.99|1.68|Kopparapu et al. 2013|Revised estimates of the Kasting et al. (1993) formulation using updated runaway greenhouse and water loss algorithms. According to this measure, Earth is at the inner edge of the HZ and close to, but just outside, the runaway greenhouse limit. This applies to a planet with Earth-like atmospheric composition and pressure.|
|0.38||Zsom et al. 2013|Estimate based on various possible combinations of atmospheric composition, pressure and relative humidity of the planet's atmosphere.|
|0.95||Leconte et al. 2013|Using 3-D models, these authors computed an inner edge of 0.95 AU for our solar system.|
|0.95|2.4|Ramirez and Kaltenegger 2017|An expansion of the classical carbon dioxide-water vapor habitable zone assuming a volcanic hydrogen atmospheric concentration of 50%.|
Astronomers use stellar flux and the inverse-square law to extrapolate circumstellar habitable zone models created for the Solar System to other stars. For example, although the Solar System has a circumstellar habitable zone centered at 1.34 AU from the Sun, a star with 0.25 times the luminosity of the Sun would have a habitable zone centered at √0.25 = 0.5 times that distance from the star, corresponding to a distance of 0.67 AU. Various complicating factors, though, including the individual characteristics of stars themselves, mean that extrasolar extrapolation of the CHZ concept is more complex.
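This scaling can be sketched in a few lines of Python. The 0.95 and 1.37 AU solar-system bounds are the conservative Kasting et al. (1993) limits; the function is only the first-order inverse-square approximation, before any of the complicating factors discussed below:

```python
import math

# Conservative solar-system habitable-zone bounds in AU (Kasting et al. 1993).
SOLAR_INNER_AU = 0.95
SOLAR_OUTER_AU = 1.37

def habitable_zone(luminosity_solar: float) -> tuple[float, float]:
    """Scale the solar-system habitable zone to a star of the given
    luminosity (in solar units) via the inverse-square law:
    d = d_sun * sqrt(L / L_sun)."""
    scale = math.sqrt(luminosity_solar)
    return SOLAR_INNER_AU * scale, SOLAR_OUTER_AU * scale

# A star with one quarter of the Sun's luminosity has its zone
# at half the solar distances.
inner, outer = habitable_zone(0.25)
print(f"{inner:.3f}-{outer:.3f} AU")  # 0.475-0.685 AU
```

Because only the stellar flux enters, the same function applies unchanged to any star once its bolometric luminosity is known; spectral-type effects such as those discussed in the next section are deliberately ignored here.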
Spectral types and star-system characteristics
Some scientists argue that the concept of a circumstellar habitable zone is actually limited to stars in certain types of systems or of certain spectral types. Binary systems, for example, have circumstellar habitable zones that differ from those of single-star planetary systems, in addition to the orbital stability concerns inherent with a three-body configuration. If the Solar System were such a binary system, the outer limits of the resulting circumstellar habitable zone could extend as far as 2.4 AU.
With regard to spectral types, Zoltán Balog proposes that O-type stars cannot form planets due to the photoevaporation caused by their strong ultraviolet emissions. Studying ultraviolet emissions, Andrea Buccino found that only 40% of stars studied (including the Sun) had overlapping liquid water and ultraviolet habitable zones. Stars smaller than the Sun, on the other hand, have distinct impediments to habitability. For example, Michael Hart proposed that only main-sequence stars of spectral class K0 or brighter could offer habitable zones, an idea which has evolved in modern times into the concept of a tidal locking radius for red dwarfs. Within this radius, which is coincident with the red-dwarf habitable zone, it has been suggested that the volcanism caused by tidal heating could cause a "tidal Venus" planet with high temperatures and no environment hospitable to life.
Others maintain that circumstellar habitable zones are more common, and that it is indeed possible for water to exist on planets orbiting cooler stars. Climate modelling from 2013 supports the idea that red dwarf stars can support planets with relatively constant temperatures over their surfaces in spite of tidal locking. Astronomy professor Eric Agol argues that even white dwarfs may support a relatively brief habitable zone through planetary migration. At the same time, others have written in similar support of semi-stable, temporary habitable zones around brown dwarfs. Also, a habitable zone in the outer parts of stellar systems may exist during the pre-main-sequence phase of stellar evolution, especially around M-dwarfs, potentially lasting for billion-year timescales.
Circumstellar habitable zones change over time with stellar evolution. For example, hot O-type stars, which may remain on the main sequence for fewer than 10 million years, would have rapidly changing habitable zones not conducive to the development of life. Red dwarf stars, on the other hand, which can live for hundreds of billions of years on the main sequence, would have planets with ample time for life to develop and evolve. Even while stars are on the main sequence, though, their energy output steadily increases, pushing their habitable zones farther out; our Sun, for example, was 75% as bright in the Archaean as it is now, and in the future, continued increases in energy output will put Earth outside the Sun's habitable zone, even before it reaches the red giant phase. In order to deal with this increase in luminosity, the concept of a continuously habitable zone has been introduced. As the name suggests, the continuously habitable zone is a region around a star in which planetary-mass bodies can sustain liquid water for a given period of time. Like the general circumstellar habitable zone, the continuously habitable zone of a star is divided into a conservative and extended region.
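The effect of this brightening on the zone's location can be illustrated with the same inverse-square scaling used earlier. The 0.95 and 1.37 AU bounds are again the conservative Kasting et al. (1993) limits, and treating luminosity as a simple input fraction is an illustrative assumption, not a stellar-evolution model:

```python
import math

INNER_AU, OUTER_AU = 0.95, 1.37  # present-day conservative bounds (Kasting et al. 1993)

def hz_bounds(lum_fraction: float) -> tuple[float, float]:
    """Habitable-zone bounds (AU) for the Sun shining at the given
    fraction of its present luminosity, using inverse-square scaling."""
    s = math.sqrt(lum_fraction)
    return INNER_AU * s, OUTER_AU * s

# Archaean Sun at ~75% of present brightness: the whole zone sat closer in,
# yet Earth at 1 AU was still inside the outer edge.
inner, outer = hz_bounds(0.75)
print(f"Archaean zone: {inner:.2f}-{outer:.2f} AU")

# Luminosity increase at which the inner edge sweeps past Earth's 1 AU orbit:
exit_lum = (1.0 / INNER_AU) ** 2
print(f"Earth exits the zone once the Sun is ~{exit_lum:.2f}x its present brightness")
```

This is why a continuously habitable zone is narrower than any snapshot of the habitable zone: only the overlap of the bounds across the chosen time span counts.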
In red dwarf systems, gigantic stellar flares which could double a star's brightness in minutes and huge starspots which can cover 20% of the star's surface area, have the potential to strip an otherwise habitable planet of its atmosphere and water. As with more massive stars, though, stellar evolution changes their nature and energy flux, so by about 1.2 billion years of age, red dwarfs generally become sufficiently constant to allow for the development of life.
Once a star has evolved sufficiently to become a red giant, its circumstellar habitable zone will change dramatically from its main-sequence size. For example, the Sun is expected to engulf the previously-habitable Earth as a red giant. However, once a red giant star reaches the horizontal branch, it achieves a new equilibrium and can sustain a new circumstellar habitable zone, which in the case of the Sun would range from 7 to 22 AU. At that stage, Saturn's moon Titan would likely be habitable in Earth's temperature sense. Given that this new equilibrium lasts for about 1 Gyr, and that life on Earth emerged no later than 0.7 Gyr after the formation of the Solar System, life could conceivably develop on planetary mass objects in the habitable zone of red giants. However, around such a helium-burning star, important life processes like photosynthesis could only happen around planets where the atmosphere has carbon dioxide, as by the time a solar-mass star becomes a red giant, planetary-mass bodies would have already absorbed much of their free carbon dioxide. Moreover, as Ramirez and Kaltenegger (2016) showed, intense stellar winds would completely remove the atmospheres of such smaller planetary bodies, rendering them uninhabitable anyway. Thus, Titan would not be habitable even after the Sun becomes a red giant. Nevertheless, life need not originate during this stage of stellar evolution for it to be detected. Once the star becomes a red giant, and the habitable zone extends outward, the icy surface would melt, forming a temporary atmosphere that can be searched for signs of life that may have been thriving before the start of the red giant stage.
A planet's atmospheric conditions influence its ability to retain heat, so that the location of the habitable zone is also specific to each type of planet: desert planets (also known as dry planets), with very little water, will have less water vapor in the atmosphere than Earth and so have a reduced greenhouse effect, meaning that a desert planet could maintain oases of water closer to its star than Earth is to the Sun. The lack of water also means there is less ice to reflect heat into space, so the outer edge of desert-planet habitable zones is further out.
A planet cannot have a hydrosphere—a key ingredient for the formation of carbon-based life—unless there is a source for water within its stellar system. The origin of water on Earth is still not completely understood; possible sources include the result of impacts with icy bodies, outgassing, mineralization, leakage from hydrous minerals from the lithosphere, and photolysis. For an extrasolar system, an icy body from beyond the frost line could migrate into the habitable zone of its star, creating an ocean planet with seas hundreds of kilometers deep such as GJ 1214 b or Kepler-22b may be.
Maintenance of liquid surface water also requires a sufficiently thick atmosphere. Possible origins of terrestrial atmospheres are currently theorised to include outgassing, impact degassing and ingassing. Atmospheres are thought to be maintained through similar processes, along with biogeochemical cycles and the mitigation of atmospheric escape. In a 2013 study led by Italian astronomer Giovanni Vladilo, it was shown that the size of the circumstellar habitable zone increased with greater atmospheric pressure. Below an atmospheric pressure of about 15 millibars, it was found that habitability could not be maintained because even a small shift in pressure or temperature could render water unable to remain liquid.
Although traditional definitions of the habitable zone assume that carbon dioxide and water vapor are the most important greenhouse gases (as they are on the Earth), a study led by Ramses Ramirez and co-author Lisa Kaltenegger has shown that the size of the habitable zone is greatly increased if prodigious volcanic outgassing of hydrogen is also included along with the carbon dioxide and water vapor. The outer edge in our solar system would extend out as far as 2.4 AU in that case. Similar increases in the size of the habitable zone were computed for other stellar systems. An earlier study by Ray Pierrehumbert and Eric Gaidos had eliminated the CO2-H2O concept entirely, arguing that young planets could accrete many tens to hundreds of bars of hydrogen from the protoplanetary disc, providing enough of a greenhouse effect to extend the solar system outer edge to 10 AU. In this case, though, the hydrogen is not continuously replenished by volcanism, and is lost within millions to tens-of-millions of years.
In the case of planets orbiting in the CHZs of red dwarf stars, the extremely close distances to the stars cause tidal locking, an important factor in habitability. For a tidally locked planet, the sidereal day is as long as the orbital period, causing one side to permanently face the host star and the other side to face away. In the past, such tidal locking was thought to cause extreme heat on the star-facing side and bitter cold on the opposite side, making many red dwarf planets uninhabitable; however, three-dimensional climate models in 2013 showed that the side of a red dwarf planet facing the host star could have extensive cloud cover, increasing its Bond albedo and significantly reducing the temperature differences between the two sides.
Planetary-mass natural satellites have the potential to be habitable as well. However, these bodies need to fulfill additional parameters, in particular being located within the circumplanetary habitable zones of their host planets. More specifically, moons need to be far enough from their host giant planets that they are not transformed by tidal heating into volcanic worlds like Io, but must still remain within the Hill radius of the planet so that they are not pulled out of orbit of their host planet. Red dwarfs that have masses less than 20% of that of the Sun cannot have habitable moons around giant planets, as the small size of the circumstellar habitable zone would put a habitable moon so close to the star that it would be stripped from its host planet. In such a system, a moon close enough to its host planet to maintain its orbit would have tidal heating so intense as to eliminate any prospects of habitability.
A planetary object that orbits a star with high orbital eccentricity may spend only some of its year in the CHZ and experience a large variation in temperature and atmospheric pressure. This would result in dramatic seasonal phase shifts where liquid water may exist only intermittently. It is possible that subsurface habitats could be insulated from such changes and that extremophiles on or near the surface might survive through adaptations such as hibernation (cryptobiosis) and/or hyperthermostability. Tardigrades, for example, can survive in a dehydrated state at temperatures between 0.150 K (−273 °C) and 424 K (151 °C). Life on a planetary object orbiting outside the CHZ might hibernate on the cold side as the planet approaches apastron, where the planet is coolest, and become active on approach to periastron, when the planet is sufficiently warm.
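Whether an eccentric orbit is only partly inside the habitable zone can be checked from its perihelion and aphelion distances. The sketch below is a hypothetical illustration; the 0.95–1.67 AU zone boundaries are assumed values, not taken from the text.

```python
def orbit_extremes(a_au, e):
    """Perihelion and aphelion distances (AU) for semi-major axis a
    and eccentricity e: a(1-e) and a(1+e)."""
    return a_au * (1 - e), a_au * (1 + e)

def crosses_hz(a_au, e, hz_inner, hz_outer):
    """True if the orbit overlaps the habitable zone but is not wholly
    contained in it, so the planet moves in and out over one year."""
    peri, apo = orbit_extremes(a_au, e)
    inside_somewhere = peri < hz_outer and apo > hz_inner
    entirely_inside = peri >= hz_inner and apo <= hz_outer
    return inside_somewhere and not entirely_inside

# Example: a = 1.0 AU, e = 0.6 gives perihelion 0.4 AU and aphelion 1.6 AU,
# so the planet dips inside the assumed 0.95-1.67 AU zone only part-time.
```

A full time-in-zone fraction would additionally require solving Kepler's equation, since an eccentric planet moves fastest near perihelion; this check captures only the geometry.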
Among exoplanets, a 2015 review concluded that Kepler-62f, Kepler-186f and Kepler-442b were likely the best candidates for being potentially habitable. These are at distances of 1,200, 490 and 1,120 light-years away, respectively. Of these, Kepler-186f is similar in size to Earth, with a radius 1.2 times that of Earth, and it is located towards the outer edge of the habitable zone around its red dwarf star. Among the nearest terrestrial exoplanet candidates, Tau Ceti e is 11.9 light-years away. It lies at the inner edge of its system's habitable zone, giving it an estimated average surface temperature of 68 °C (154 °F).
Studies that have attempted to estimate the number of terrestrial planets within the circumstellar habitable zone tend to reflect the availability of scientific data. A 2013 study by Ravi Kumar Kopparapu put ηe, the fraction of stars with planets in the CHZ, at 0.48, meaning that there may be roughly 95–180 billion habitable planets in the Milky Way. However, this is merely a statistical prediction; only a small fraction of these possible planets have yet been discovered.
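The arithmetic behind the quoted range is a simple product of the CHZ fraction and the galaxy's star count. The star counts below are assumed round numbers for illustration, not values from the study.

```python
eta_e = 0.48                          # fraction of stars with a planet in the CHZ (Kopparapu 2013)
stars_low, stars_high = 200e9, 375e9  # assumed range of Milky Way star counts

planets_low = eta_e * stars_low       # about 96 billion
planets_high = eta_e * stars_high     # 180 billion
```

With these assumed star counts the product spans roughly 96–180 billion, consistent with the 95–180 billion range quoted in the text.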
Previous studies have been more conservative. In 2011, Seth Borenstein concluded that there are roughly 500 million habitable planets in the Milky Way. NASA's Jet Propulsion Laboratory 2011 study, based on observations from the Kepler mission, raised the number somewhat, estimating that about "1.4 to 2.7 percent" of all stars of spectral class F, G, and K are expected to have planets in their CHZs.
The first discoveries of extrasolar planets in the CHZ occurred just a few years after the first extrasolar planets were discovered. However, these early detections were all gas-giant-sized, and many were in eccentric orbits. Despite this, studies indicate the possibility of large, Earth-like moons around these planets supporting liquid water. One of the first discoveries was 70 Virginis b, a gas giant initially nicknamed "Goldilocks" because it was deemed neither "too hot" nor "too cold". Later study revealed temperatures analogous to Venus, ruling out any potential for liquid water. 16 Cygni Bb, also discovered in 1996, has an extremely eccentric orbit that spends only part of its time in the CHZ; such an orbit would cause extreme seasonal effects. In spite of this, simulations have suggested that a sufficiently large companion could support surface water year-round.
Gliese 876 b, discovered in 1998, and Gliese 876 c, discovered in 2001, are both gas giants discovered in the habitable zone around Gliese 876 that may also have large moons. Another gas giant, Upsilon Andromedae d, was discovered in 1999 orbiting in Upsilon Andromedae's habitable zone.
Announced on April 4, 2001, HD 28185 b is a gas giant found to orbit entirely within its star's circumstellar habitable zone and has a low orbital eccentricity, comparable to that of Mars in the Solar System. Tidal interactions suggest it could harbor habitable Earth-mass satellites in orbit around it for many billions of years, though it is unclear whether such satellites could form in the first place.
HD 69830 d, a gas giant with 17 times the mass of Earth, was found in 2006 orbiting within the circumstellar habitable zone of HD 69830, 41 light-years away from Earth. The following year, 55 Cancri f was discovered within the CHZ of its host star 55 Cancri A. Hypothetical satellites with sufficient mass and a suitable composition are thought to be able to support liquid water at their surfaces.
Though in theory such giant planets could possess moons, the technology did not exist to detect moons around them, and no extrasolar moons had been detected. Planets within the zone with the potential for solid surfaces were therefore of much greater interest.
The 2007 discovery of Gliese 581 c, the first super-Earth in the circumstellar habitable zone, created significant interest in the system by the scientific community, although the planet was later found to have extreme surface conditions that may resemble Venus. Gliese 581 d, another planet in the same system and thought to be a better candidate for habitability, was also announced in 2007. Its existence was later disconfirmed in 2014. Gliese 581 g, yet another planet thought to have been discovered in the circumstellar habitable zone of the system, was considered to be more habitable than both Gliese 581 c and d. However, its existence was also disconfirmed in 2014.
Discovered in August 2011, HD 85512 b was initially speculated to be habitable, but the new circumstellar habitable zone criteria devised by Kopparapu et al. in 2013 place the planet outside the circumstellar habitable zone. With an increase in the frequency of exoplanet discovery, the Earth Similarity Index was devised in October 2011 as a way of comparing planetary properties, such as surface temperature and density, to those of Earth in order to better gauge the habitability of extrasolar bodies.
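The Earth Similarity Index is published as a weighted geometric mean of per-property similarity terms, equal to 1.0 for Earth itself. The sketch below follows that general form; the example properties and weights used here are illustrative assumptions, not the official parameter set.

```python
def esi(values, earth_values, weights):
    """Earth Similarity Index sketch: product over properties of
    (1 - |x - x0| / (x + x0)) ** (w / n), where x0 is Earth's value.
    Returns 1.0 when every property matches Earth exactly."""
    n = len(values)
    result = 1.0
    for x, x0, w in zip(values, earth_values, weights):
        result *= (1 - abs(x - x0) / (x + x0)) ** (w / n)
    return result

# Illustrative two-property example: surface temperature (K) and
# bulk density (g/cm^3), with invented weights.
example = esi([288.0, 5.51], [288.0, 5.51], [5.58, 1.07])
```

Because each factor lies between 0 and 1, any deviation from Earth's value can only lower the index, which is what makes it usable as a single-number ranking of candidates.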
Kepler-22b, discovered in December 2011 by the Kepler space probe, is the first transiting exoplanet discovered in the habitable zone of a Sun-like star. With a radius 2.4 times that of Earth, Kepler-22b has been predicted by some to be an ocean planet. Gliese 667 Cc, discovered in 2011 but announced in 2012, is a super-Earth orbiting in the circumstellar habitable zone of Gliese 667 C.
Gliese 163 c, discovered in September 2012 in orbit around the red dwarf Gliese 163, is located 49 light-years from Earth. The planet has 6.9 Earth masses and 1.8–2.4 Earth radii, and with its close orbit receives 40 percent more stellar radiation than Earth, leading to surface temperatures of about 60 °C. HD 40307 g, a candidate planet tentatively discovered in November 2012, is in the circumstellar habitable zone of HD 40307. In December 2012, Tau Ceti e and Tau Ceti f were found in the circumstellar habitable zone of Tau Ceti, a Sun-like star 12 light-years away. Although more massive than Earth, they are among the least massive planets found to date orbiting in the habitable zone; however, Tau Ceti f, like HD 85512 b, did not fit the new circumstellar habitable zone criteria established by the 2013 Kopparapu study.
Earth-sized planets and Solar analogs
Recent discoveries have uncovered planets that are thought to be similar in size or mass to Earth. While there is no universal definition of "Earth-sized", ranges are typically defined by mass. The lower range used in many definitions of the super-Earth class is 1.9 Earth masses; likewise, sub-Earths range up to the size of Venus (~0.815 Earth masses). An upper limit of 1.5 Earth radii is also considered, given that above 1.5 R⊕ the average planet density rapidly decreases with increasing radius, indicating these planets have a large fraction of volatiles by volume overlying a rocky core. A truly Earth-like planet, an Earth analog or "Earth twin", would need to meet many conditions beyond size and mass; such properties are not observable using current technology.
A solar analog (or "solar twin") is a star that resembles the Sun. To date, no solar twin that exactly matches the Sun has been found; however, some stars are nearly identical to the Sun and are therefore considered solar twins. An exact solar twin would be a G2V star with a temperature of 5,778 K, an age of 4.6 billion years, the correct metallicity, and a solar luminosity variation of 0.1%. Stars 4.6 billion years old are in their most stable state, and proper metallicity and size are also very important to low luminosity variation.
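A hypothetical filter for these solar-twin criteria might look like the following. The tolerance windows are invented for illustration, since the text does not specify how close a match must be.

```python
def is_solar_twin(temp_k, age_gyr, metallicity_dex, lum_variation):
    """Rough solar-twin check against the criteria in the text.
    All tolerance widths here are assumed, illustrative values."""
    return (abs(temp_k - 5778) < 50          # near the G2V temperature of 5,778 K
            and abs(age_gyr - 4.6) < 1.0     # near the Sun's 4.6 Gyr age
            and abs(metallicity_dex) < 0.05  # near-solar metallicity [Fe/H]
            and lum_variation <= 0.001)      # <=0.1% luminosity variation

# The Sun itself passes; a cool red dwarf does not.
```

In practice a survey would also compare mass, surface gravity and chromospheric activity, but the predicate structure would be the same.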
Using data collected by NASA's Kepler Space observatory and the W. M. Keck Observatory, scientists have estimated that 22% of solar-type stars in the Milky Way galaxy have Earth-sized planets in their habitable zone.
On 7 January 2013, astronomers from the Kepler team announced the discovery of Kepler-69c (formerly KOI-172.02), an Earth-size exoplanet candidate (1.7 times the radius of Earth) orbiting Kepler-69, a star similar to our Sun, in the CHZ and expected to offer habitable conditions. The discovery of two planets orbiting in the habitable zone of Kepler-62, by the Kepler team was announced on April 19, 2013. The planets, named Kepler-62e and Kepler-62f, are likely solid planets with sizes 1.6 and 1.4 times the radius of Earth, respectively.
Kepler-186f, whose discovery was announced in April 2014, has a radius estimated at 1.1 times that of Earth, making it the closest in size to Earth of any exoplanet confirmed by the transit method, though its mass remains unknown and its parent star is not a solar analog.
On 6 January 2015, NASA announced the 1000th confirmed exoplanet discovered by the Kepler Space Telescope. Three of the newly confirmed exoplanets were found to orbit within habitable zones of their host stars: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth. Announced on 16 January, K2-3d, a planet of 1.5 Earth radii, was found orbiting within the habitable zone of K2-3, receiving 1.4 times the intensity of visible light that Earth does.
Kepler-452b, announced on 23 July 2015, is 50% bigger than Earth, is likely rocky, and takes approximately 385 Earth days to orbit within the habitable zone of its G-class (solar analog) star Kepler-452.
The discovery of a system of three tidally-locked planets orbiting the habitable zone of an ultracool dwarf star, TRAPPIST-1, was announced in May 2016. The discovery is considered significant because it greatly increases the possibility of smaller, cooler, more numerous and closer stars possessing habitable planets.
Announced on 20 April 2017, LHS 1140 b is a dense super-Earth 39 light-years away, with 6.6 times Earth's mass and 1.4 times its radius; its star has 15% the mass of the Sun but much less observable stellar flare activity than most M dwarfs. The planet is one of the few observable by both transit and radial velocity, so its mass is confirmed and its atmosphere may be studied.
At 11 light-years away, Ross 128 b, the second-closest such planet, was announced in November 2017 following a decade-long radial velocity study of the relatively "quiet" red dwarf star Ross 128. At 1.35 times Earth's mass, it is roughly Earth-sized and likely rocky in composition.
Notable exoplanets found by the Kepler Space Telescope: Kepler-62e, Kepler-62f, Kepler-186f, Kepler-296e, Kepler-296f, Kepler-438b, Kepler-440b, Kepler-442b (announced January 6, 2015).
Habitability outside the CHZ
Liquid-water environments have been found to exist in the absence of atmospheric pressure, and at temperatures outside the CHZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans.
Outside the CHZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice.
With some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. Indeed, on Earth itself living organisms may be found more than 6 kilometres below the surface.
Another possibility is that outside the CHZ organisms may use alternative biochemistries that do not require water at all. Astrobiologist Christopher McKay has suggested that methane (CH4) may be a solvent conducive to the development of "cryolife", with the Sun's "methane habitable zone" being centered on 1,610,000,000 km (1.0×10⁹ mi; 11 AU) from the star. This distance is coincident with the location of Titan, whose lakes and rain of methane make it an ideal location to find McKay's proposed cryolife. In addition, testing of a number of organisms has found some are capable of surviving in extra-CHZ conditions.
Significance for complex and intelligent life
The Rare Earth hypothesis argues that complex and intelligent life is uncommon and that the CHZ is one of many critical factors. According to Ward & Brownlee (2004) and others, a CHZ orbit and surface water are not only a primary requirement to sustain life but also a prerequisite for the secondary conditions required for multicellular life to emerge and evolve. The secondary habitability factors are both geological (the role of surface water in sustaining necessary plate tectonics) and biochemical (the role of radiant energy in supporting photosynthesis for necessary atmospheric oxygenation). But others, such as Ian Stewart and Jack Cohen in their 2002 book Evolving the Alien, argue that complex intelligent life may arise outside the CHZ. Such life may have evolved in subsurface environments, from alternative biochemistries, or even from nuclear reactions.
On Earth, several complex multicellular life forms (eukaryotes) have been identified with the potential to survive conditions that might exist outside the conservative habitable zone. Geothermal energy sustains ancient hydrothermal vent ecosystems, supporting large complex life forms such as Riftia pachyptila. Similar environments may be found in oceans pressurised beneath solid crusts, such as those of Europa and Enceladus, outside of the habitable zone. Numerous microorganisms have been tested in simulated conditions and in low Earth orbit, including eukaryotes. An animal example is the tardigrade Milnesium tardigradum, which can withstand extreme temperatures well above the boiling point of water and the cold vacuum of outer space. In addition, the lichens Rhizocarpon geographicum and Xanthoria elegans have been found to survive in an environment where the atmospheric pressure is far too low for surface liquid water and where the radiant energy is also much lower than that which most plants require to photosynthesize. The fungi Cryomyces antarcticus and Cryomyces minteri are also able to survive and reproduce in Mars-like conditions.
Species, including humans, known to possess animal cognition require large amounts of energy, and have adapted to specific conditions, including an abundance of atmospheric oxygen and the availability of large quantities of chemical energy synthesized from radiant energy. If humans are to colonize other planets, true Earth analogs in the CHZ are most likely to provide the closest natural habitat; this concept was the basis of Stephen H. Dole's 1964 study. With suitable temperature, gravity, atmospheric pressure and the presence of water, the necessity of spacesuits or space habitat analogues on the surface may be eliminated and complex Earth life can thrive.
Planets in the CHZ remain of paramount interest to researchers looking for intelligent life elsewhere in the universe. The Drake equation, sometimes used to estimate the number of intelligent civilizations in our galaxy, contains the factor or parameter ne, which is the average number of planetary-mass objects orbiting within the CHZ of each star. A low value lends support to the Rare Earth hypothesis, which posits that intelligent life is a rarity in the Universe, whereas a high value provides evidence for the Copernican mediocrity principle, the view that habitability—and therefore life—is common throughout the Universe. A 1971 NASA report by Drake and Bernard Oliver proposed the "water hole", based on the spectral absorption lines of the hydrogen and hydroxyl components of water, as a good, obvious band for communication with extraterrestrial intelligence that has since been widely adopted by astronomers involved in the search for extraterrestrial intelligence. According to Jill Tarter, Margaret Turnbull and many others, CHZ candidates are the priority targets to narrow waterhole searches and the Allen Telescope Array now extends Project Phoenix to such candidates.
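The Drake equation itself is a simple product of factors, with ne as the CHZ-dependent term discussed above. Below is a minimal sketch; every numerical input in the example is a placeholder assumption for illustration, not an estimate from the text.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* * fp * ne * fl * fi * fc * L, the expected
    number of communicating civilizations in the galaxy. n_e is the average
    number of habitable-zone planets per star with planets."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder inputs: 10 stars formed per year, half with planets,
# n_e = 0.48 (the Kopparapu figure), optimistic unity values for the
# life/intelligence/communication fractions, and a 1,000-year lifetime.
N = drake(10, 0.5, 0.48, 1, 1, 1, 1000)
```

Because the result is linear in ne, halving the CHZ occurrence rate halves the predicted number of civilizations, which is why the parameter matters so much to SETI target selection.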
Because the CHZ is considered the most likely habitat for intelligent life, METI efforts have also been focused on systems likely to have planets there. The 2001 Teen Age Message and the 2003 Cosmic Call 2, for example, were sent to the 47 Ursae Majoris system, known to contain three Jupiter-mass planets and possibly with a terrestrial planet in the CHZ. The Teen Age Message was also directed to the 55 Cancri system, which has a gas giant in its CHZ. A Message from Earth in 2008, and Hello From Earth in 2009, were directed to the Gliese 581 system, containing three planets in the CHZ—Gliese 581 c, d, and the unconfirmed g.
- Su-Shu Huang, American Scientist 47, 3, pp. 397–402 (1959)
- Dole, Stephen H (1964). Habitable Planets for Man. Blaisdell Publishing Company. p. 103.
- J. F. Kasting, D. P. Whitmire, R. T. Reynolds, Icarus 101, 108 (1993).
- Kopparapu, Ravi Kumar (2013). "A revised estimate of the occurrence rate of terrestrial planets in the habitable zones around kepler m-dwarfs". The Astrophysical Journal Letters. 767 (1): L8. arXiv: . Bibcode:2013ApJ...767L...8K. doi:10.1088/2041-8205/767/1/L8.
- Cruz, Maria; Coontz, Robert (2013). "Exoplanets - Introduction to Special Issue". Science. 340 (6132): 565. doi:10.1126/science.340.6132.565. Retrieved 18 May 2013.
- Huggett, Richard J. (1995). Geoecology: An Evolutionary Approach. Routledge, Chapman & Hall. p. 10. ISBN 978-0-415-08689-9.
- Overbye, Dennis (January 6, 2015). "As Ranks of Goldilocks Planets Grow, Astronomers Consider What's Next". New York Times. Retrieved January 6, 2015.
- Overbye, Dennis (November 4, 2013). "Far-Off Planets Like the Earth Dot the Galaxy". New York Times. Retrieved November 5, 2013.
- Petigura, Eric A.; Howard, Andrew W.; Marcy, Geoffrey W. (October 31, 2013). "Prevalence of Earth-size planets orbiting Sun-like stars". Proceedings of the National Academy of Sciences of the United States of America. 110: 19273–19278. arXiv: . Bibcode:2013PNAS..11019273P. doi:10.1073/pnas.1319909110. PMC . PMID 24191033. Retrieved November 5, 2013.
- Khan, Amina (November 4, 2013). "Milky Way may host billions of Earth-size planets". Los Angeles Times. Retrieved November 5, 2013.
- Anglada-Escudé, Guillem; et al. (2016). "A terrestrial planet candidate in a temperate orbit around Proxima Centauri". Nature. 536: 437–440. arXiv: . Bibcode:2016Natur.536..437A. doi:10.1038/nature19106. PMID 27558064.
- Schirber, Michael (26 Oct 2009). "Detecting Life-Friendly Moons". Astrobiology Magazine. NASA. Retrieved 9 May 2013.
- Lammer, H.; Bredehöft, J. H.; Coustenis, A.; Khodachenko, M. L.; et al. (2009). "What makes a planet habitable?" (PDF). The Astronomy and Astrophysics Review. 17: 181–249. Bibcode:2009A&ARv..17..181L. doi:10.1007/s00159-009-0019-z. Archived from the original (PDF) on 2016-06-02. Retrieved 2016-05-03.
- Edwards, Katrina J.; Becker, Keir; Colwell, Frederick (2012). "The Deep, Dark Energy Biosphere: Intraterrestrial Life on Earth". Annual Review of Earth and Planetary Sciences. 40 (1): 551–568. Bibcode:2012AREPS..40..551E. doi:10.1146/annurev-earth-042711-105500. ISSN 0084-6597.
- Cowen, Ron (2008-06-07). "A Shifty Moon". Science News.
- Bryner, Jeanna (24 June 2009). "Ocean Hidden Inside Saturn's Moon". Space.com. TechMediaNetwork. Retrieved 22 April 2013.
- Abbot, D. S.; Switzer, E. R. (2011). "The Steppenwolf: A Proposal for a Habitable Planet in Interstellar Space". The Astrophysical Journal. 735 (2): L27. arXiv: . Bibcode:2011ApJ...735L..27A. doi:10.1088/2041-8205/735/2/L27.
- "Rogue Planets Could Harbor Life in Interstellar Space, Say Astrobiologists". MIT Technology Review. MIT Technology Review. 9 February 2011. Retrieved 24 June 2013.
- Wall, Mike (28 September 2015). "Salty Water Flows on Mars Today, Boosting Odds for Life". Space.com. Retrieved 2015-09-28.
- Sun, Jiming; Clark, Bryan K.; Torquato, Salvatore; Car, Roberto (2015). "The phase diagram of high-pressure superionic ice". Nature Communications. 6: 8156. Bibcode:2015NatCo...6E8156S. doi:10.1038/ncomms9156. ISSN 2041-1723. PMC . PMID 26315260.
- Villard, Ray (November 18, 2011). "Alien Life May Live in Various Habitable Zones : Discovery News". News.discovery.com. Discovery Communications LLC. Retrieved April 22, 2013.
- 3rd Edition (1728), trans Bruce, I
- Strughold, Hubertus (1953). The Green and Red Planet: A Physiological Study of the Possibility of Life on Mars. University of New Mexico Press.
- Kasting, James (2010). How to Find a Habitable Planet. Princeton University Press. p. 127. ISBN 978-0-691-13805-3. Retrieved 4 May 2013.
- Kasting, James F.; Whitmire, Daniel P.; Reynolds, Ray T. (January 1993). "Habitable Zones around Main Sequence Stars". Icarus. 101 (1): 108–118. Bibcode:1993Icar..101..108K. doi:10.1006/icar.1993.1010. PMID 11536936.
- Huang, Su-Shu (1966). Extraterrestrial life: An Anthology and Bibliography. National Research Council (U.S.). Study Group on Biology and the Exploration of Mars. Washington, D. C.: National Academy of Sciences. pp. 87–93.
- Huang, Su-Shu (April 1960). "Life-Supporting Regions in the Vicinity of Binary Systems". Publications of the Astronomical Society of the Pacific. 72 (425): 106–114. Bibcode:1960PASP...72..106H. doi:10.1086/127489.
- Gilster, Paul (2004). Centauri Dreams: Imagining and Planning Interstellar Exploration. Springer. p. 40. ISBN 978-0-387-00436-5.
- "The Goldilocks Zone" (Press release). NASA. October 2, 2003. Retrieved April 22, 2013.
- Seager, Sara (2013). "Exoplanet Habitability". Science. 340 (577). Bibcode:2013Sci...340..577S. doi:10.1126/science.1232226.
- Brownlee, Donald; Ward, Peter (2004). Rare Earth: Why Complex Life Is Uncommon in the Universe. New York: Copernicus. ISBN 0-387-95289-6.
- Gonzalez, Guillermo; Brownlee, Donald; Ward, Peter (July 2001). "The Galactic Habitable Zone I. Galactic Chemical Evolution". Icarus. 152 (1): 185–200. arXiv: . Bibcode:2001Icar..152..185G. doi:10.1006/icar.2001.6617.
- Hadhazy, Adam (April 3, 2013). "The 'Habitable Edge' of Exomoons". Astrobiology Magazine. NASA. Retrieved April 22, 2013.
- Fogg, M. J. (1992). "An Estimate of the Prevalence of Biocompatible and Habitable Planets". Journal of the British Interplanetary Society. 45 (1): 3–12. Bibcode:1992JBIS...45....3F. PMID 11539465.
- Redd, Nola Taylor (25 August 2011). "Greenhouse Effect Could Extend Habitable Zone". Astrobiology Magazine. NASA. Retrieved 25 June 2013.
- Zsom, Andras; Seager, Sara; De Wit, Julien (2013). "Towards the Minimum Inner Edge Distance of the Habitable Zone". The Astrophysical Journal. 778: 109. arXiv: [astro-ph.EP]. Bibcode:2013ApJ...778..109Z. doi:10.1088/0004-637X/778/2/109.
- Pierrehumbert, Raymond; Gaidos, Eric (2011). "Hydrogen Greenhouse Planets Beyond the Habitable Zone". The Astrophysical Journal Letters. 734. arXiv: [astro-ph.EP]. Bibcode:2011ApJ...734L..13P. doi:10.1088/2041-8205/734/1/L13.
- Ramirez, Ramses; Kaltenegger, Lisa (2017). "A Volcanic Hydrogen Habitable Zone". The Astrophysical Journal Letters. 837. arXiv: [astro-ph.EP]. Bibcode:2017ApJ...837L...4R. doi:10.3847/2041-8213/aa60c8.
- "Stellar habitable zone calculator". University of Washington. Retrieved 17 December 2015.
- "Venus". Case Western Reserve University. 13 September 2006. Archived from the original on 2012-04-26. Retrieved 2011-12-21.
- Sharp, Tim. "Atmosphere of the Moon". Space.com. TechMediaNetwork. Retrieved April 23, 2013.
- Bolonkin, Alexander A. (2009). Artificial Environments on Mars. Berlin Heidelberg: Springer. pp. 599–625. ISBN 978-3-642-03629-3.
- Haberle, Robert M.; McKay, Christopher P.; Schaeffer, James; Cabrol, Nathalie A.; Grin, Edmon A.; Zent, Aaron P.; Quinn, Richard (2001). "On the possibility of liquid water on present-day Mars". Journal of Geophysical Research. 106 (E10): 23317. Bibcode:2001JGR...10623317H. doi:10.1029/2000JE001360. ISSN 0148-0227.
- Mann, Adam (February 18, 2014). "Strange Dark Streaks on Mars Get More and More Mysterious". Wired. Retrieved February 18, 2014.
- "NASA Finds Possible Signs of Flowing Water on Mars". voanews.com. Retrieved August 5, 2011.
- "Is Mars Weeping Salty Tears?". news.sciencemag.org. Archived from the original on August 14, 2011. Retrieved August 5, 2011.
- Webster, Guy; Brown, Dwayne (December 10, 2013). "NASA Mars Spacecraft Reveals a More Dynamic Red Planet". NASA. Retrieved December 10, 2013.
- A'Hearn, Michael F.; Feldman, Paul D. (1992). "Water vaporization on Ceres". Icarus. 98 (1): 54–60. Bibcode:1992Icar...98...54A. doi:10.1016/0019-1035(92)90206-M.
- Budyko, M. I. (1969). "The effect of solar radiation variations on the climate of the Earth". Tellus. 21 (5): 611–619. doi:10.1111/j.2153-3490.1969.tb00466.x.
- Sellers, William D. (June 1969). "A Global Climatic Model Based on the Energy Balance of the Earth-Atmosphere System". Journal of Applied Meteorology. 8 (3): 392–400. Bibcode:1969JApMe...8..392S. doi:10.1175/1520-0450(1969)008<0392:AGCMBO>2.0.CO;2.
- North, Gerald R. (November 1975). "Theory of Energy-Balance Climate Models". Journal of the Atmospheric Sciences. 32 (11): 2033–2043. Bibcode:1975JAtS...32.2033N. doi:10.1175/1520-0469(1975)032<2033:TOEBCM>2.0.CO;2.
- Rasool, I.; De Bergh, C. (Jun 1970). "The Runaway Greenhouse and the Accumulation of CO2 in the Venus Atmosphere" (PDF). Nature. 226 (5250): 1037–1039. Bibcode:1970Natur.226.1037R. doi:10.1038/2261037a0. ISSN 0028-0836. PMID 16057644.
- Hart, M. H. (1979). "Habitable zones about main sequence stars". Icarus. 37: 351–357. Bibcode:1979Icar...37..351H. doi:10.1016/0019-1035(79)90141-6.
- Spiegel, D. S.; Raymond, S. N.; Dressing, C. D.; Scharf, C. A.; Mitchell, J. L. (2010). "Generalized Milankovitch Cycles and Long-Term Climatic Habitability". The Astrophysical Journal. 721 (2): 1308–1318. arXiv: . Bibcode:2010ApJ...721.1308S. doi:10.1088/0004-637X/721/2/1308.
- Abe, Y.; Abe-Ouchi, A.; Sleep, N. H.; Zahnle, K. J. (2011). "Habitable Zone Limits for Dry Planets". Astrobiology. 11 (5): 443–460. Bibcode:2011AsBio..11..443A. doi:10.1089/ast.2010.0545. PMID 21707386.
- Vladilo, Giovanni; Murante, Giuseppe; Silva, Laura; Provenzale, Antonello; Ferri, Gaia; Ragazzini, Gregorio (March 2013). "The habitable zone of Earth-like planets with different levels of atmospheric pressure". The Astrophysical Journal. 767 (1): 65–?. arXiv: . Bibcode:2013ApJ...767...65V. doi:10.1088/0004-637X/767/1/65.
- Leconte, Jeremy; Forget, Francois; Charnay, Benjamin; Wordsworth, Robin; Pottier, Alizee (2013). "Increased insolation threshold for runaway greenhouse processes on Earth like planets". Nature. 504: 268. arXiv: [astro-ph.EP]. Bibcode:2013Natur.504..268L. doi:10.1038/nature12827.
- Cuntz, Manfred (2013). "S-Type and P-Type Habitability in Stellar Binary Systems: A Comprehensive Approach. I. Method and Applications". The Astrophysical Journal. 780: 14. arXiv: [astro-ph.EP]. Bibcode:2014ApJ...780...14C. doi:10.1088/0004-637X/780/1/14.
- Forget, F.; Pierrehumbert, RT (1997). "Warming Early Mars with Carbon Dioxide Clouds That Scatter Infrared Radiation". Science. 278 (5341): 1273–6. Bibcode:1997Sci...278.1273F. doi:10.1126/science.278.5341.1273. PMID 9360920.
- Mischna, M; Kasting, JF; Pavlov, A; Freedman, R (2000). "Influence of Carbon Dioxide Clouds on Early Martian Climate". Icarus. 145 (2): 546–54. Bibcode:2000Icar..145..546M. doi:10.1006/icar.2000.6380. PMID 11543507.
- Vu, Linda. "Planets Prefer Safe Neighborhoods" (Press release). Spitzer.caltech.edu. NASA/Caltech. Retrieved April 22, 2013.
- Buccino, Andrea P.; Lemarchand, Guillermo A.; Mauas, Pablo J.D. (2006). "Ultraviolet radiation constraints around the circumstellar habitable zones". Icarus. 183 (2): 491–503. arXiv: . Bibcode:2006Icar..183..491B. doi:10.1016/j.icarus.2006.03.007.
- Barnes, Rory; Heller, René (March 2013). "Habitable Planets Around White and Brown Dwarfs: The Perils of a Cooling Primary". Astrobiology. 13 (3): 279–291. arXiv: . Bibcode:2013AsBio..13..279B. doi:10.1089/ast.2012.0867. PMC . PMID 23537137.
- Yang, J.; Cowan, N. B.; Abbot, D. S. (2013). "Stabilizing Cloud Feedback Dramatically Expands the Habitable Zone of Tidally Locked Planets". The Astrophysical Journal. 771 (2): L45. arXiv: . Bibcode:2013ApJ...771L..45Y. doi:10.1088/2041-8205/771/2/L45.
- Agol, Eric (April 2011). "Transit Surveys for Earths in the Habitable Zones of White Dwarfs". The Astrophysical Journal Letters. 731 (2): 1–5. arXiv: . Bibcode:2011ApJ...731L..31A. doi:10.1088/2041-8205/731/2/L31.
- Ramirez, Ramses; Kaltenegger, Lisa (2014). "Habitable Zones of Pre-Main-Sequence Stars". The Astrophysical Journal Letters. 797. arXiv: [astro-ph.EP]. Bibcode:2014ApJ...797L..25R. doi:10.1088/2041-8205/797/2/L25.
- Carroll, Bradley; Ostlie, Dale (2007). An Introduction to Modern Astrophysics (2 ed.).
- Richmond, Michael (November 10, 2004). "Late stages of evolution for low-mass stars". Rochester Institute of Technology. Retrieved 2007-09-19.
- Guo, J.; Zhang, F.; Chen, X.; Han, Z. (2009). "Probability distribution of terrestrial planets in habitable zones around host stars". Astrophysics and Space Science. 323 (4): 367–373. arXiv: . Bibcode:2009Ap&SS.323..367G. doi:10.1007/s10509-009-0081-z.
- Kasting, J.F.; Ackerman, T.P. (1986). "Climatic Consequences of Very High Carbon Dioxide Levels in the Earth's Early Atmosphere". Science. 234 (4782): 1383–1385. doi:10.1126/science.11539665. PMID 11539665.
- Franck, S.; von Bloh, W.; Bounama, C.; Steffen, M.; Schönberner, D.; Schellnhuber, H.-J. (2002). "Habitable Zones and the Number of Gaia's Sisters" (PDF). In Montesinos, Benjamin; Giménez, Alvaro; Guinan, Edward F. ASP Conference Series. The Evolving Sun and its Influence on Planetary Environments. Astronomical Society of the Pacific. pp. 261–272. Bibcode:2002ASPC..269..261F. ISBN 1-58381-109-5. Retrieved April 26, 2013.
- Croswell, Ken (January 27, 2001). "Red, willing and able" (Full reprint). New Scientist. Retrieved August 5, 2007.
- Alekseev, I. Y.; Kozlova, O. V. (2002). "Starspots and active regions on the emission red dwarf star LQ Hydrae". Astronomy and Astrophysics. 396: 203–211. Bibcode:2002A&A...396..203A. doi:10.1051/0004-6361:20021424.
- Alpert, Mark (November 7, 2005). "Red Star Rising". Scientific American. Retrieved January 19, 2013.
- Research Corporation (December 19, 2006). "Andrew West: 'Fewer flares, starspots for older dwarf stars'". EarthSky. Retrieved April 27, 2013.
- Cain, Fraser; Gay, Pamela (2007). "AstronomyCast episode 40: American Astronomical Society Meeting, May 2007". Universe Today. Retrieved 2007-06-17.[permanent dead link]
- Ray Villard (27 July 2009). "Living in a Dying Solar System, Part 1". Astrobiology. Retrieved 8 April 2016.
- Christensen, Bill (April 1, 2005). "Red Giants and Planets to Live On". Space.com. TechMediaNetwork. Retrieved April 27, 2013.
- Ramirez, Ramses; Kaltenegger, Lisa (2016). "Habitable Zones of Post-Main Sequence Stars". The Astrophysical Journal. 823. arXiv: [astro-ph.EP]. Bibcode:2016ApJ...823....6R. doi:10.3847/0004-637X/823/1/6.
- Lopez, B.; Schneider, J.; Danchi, W. C. (2005). "Can Life Develop in the Expanded Habitable Zones around Red Giant Stars?". The Astrophysical Journal. 627 (2): 974–985. arXiv: . Bibcode:2005ApJ...627..974L. doi:10.1086/430416.
- Lorenz, Ralph D.; Lunine, Jonathan I.; McKay, Christopher P. (1997). "Titan under a red giant sun: A new kind of "habitable" moon". Geophysical Research Letters. 24 (22): 2905–2908. Bibcode:1997GeoRL..24.2905L. doi:10.1029/97GL52843. ISSN 0094-8276. PMID 11542268.
- Voisey, Jon (February 23, 2011). "Plausibility Check – Habitable Planets around Red Giants". Universe Today. Retrieved April 27, 2013.
- Alien Life More Likely on 'Dune' Planets Archived December 2, 2013, at the Wayback Machine., 09/01/11, Charles Q. Choi, Astrobiology Magazine
- Habitable Zone Limits for Dry Planets, Yutaka Abe, Ayako Abe-Ouchi, Norman H. Sleep, and Kevin J. Zahnle. Astrobiology. June 2011, 11(5): 443–460. doi:10.1089/ast.2010.0545
- Drake, Michael J. (April 2005). "Origin of water in the terrestrial planets". Meteoritics & Planetary Science. John Wiley & Sons. 40 (4): 519–527. Bibcode:2005M&PS...40..519D. doi:10.1111/j.1945-5100.2005.tb00960.x.
- Drake, Michael J.; et al. (August 2005). "Origin of water in the terrestrial planets". Asteroids, Comets, and Meteors (IAU S229). 229th Symposium of the International Astronomical Union. 1. Búzios, Rio de Janeiro, Brazil: Cambridge University Press. pp. 381–394. Bibcode:2006IAUS..229..381D. doi:10.1017/S1743921305006861. ISBN 978-0-521-85200-5.
- Kuchner, Marc (2003). "Volatile-rich Earth-Mass Planets in the Habitable Zone". Astrophysical Journal. 596: L105–L108. arXiv: . Bibcode:2003ApJ...596L.105K. doi:10.1086/378397.
- Charbonneau, David; Zachory K. Berta; Jonathan Irwin; Christopher J. Burke; Philip Nutzman; Lars A. Buchhave; Christophe Lovis; Xavier Bonfils; et al. (2009). "A super-Earth transiting a nearby low-mass star". Nature. 462 (17 December 2009): 891–894. arXiv: . Bibcode:2009Natur.462..891C. doi:10.1038/nature08679. PMID 20016595. Retrieved 2009-12-15.
- Kuchner, Seager; Hier-Majumder, M.; Militzer, C. A. (2007). "Mass–radius relationships for solid exoplanets". The Astrophysical Journal. 669 (2): 1279–1297. arXiv: . Bibcode:2007ApJ...669.1279S. doi:10.1086/521346.
- Vastag, Brian (December 5, 2011). "Newest alien planet is just the right temperature for life". The Washington Post. Retrieved April 27, 2013.
- Robinson, Tyler D.; Catling, David C. (2012). "An Analytic Radiative-Convective Model for Planetary Atmospheres". The Astrophysical Journal. 757 (1): 104. arXiv: . Bibcode:2012ApJ...757..104R. doi:10.1088/0004-637X/757/1/104.
- Shizgal, B. D.; Arkos, G. G. (1996). "Nonthermal escape of the atmospheres of Venus, Earth, and Mars". Reviews of Geophysics. 34 (4): 483–505. Bibcode:1996RvGeo..34..483S. doi:10.1029/96RG02213.
- Chaplin, Martin (April 8, 2013). "Water Phase Diagram". Ices. London South Bank University. Retrieved April 27, 2013.
- D.P. Hamilton; J.A. Burns (1992). "Orbital stability zones about asteroids. II - The destabilizing effects of eccentric orbits and of solar radiation". Icarus. 96 (1): 43–64. Bibcode:1992Icar...96...43H. doi:10.1016/0019-1035(92)90005-R.
- Becquerel P. (1950). "La suspension de la vie au dessous de 1/20 K absolu par demagnetization adiabatique de l'alun de fer dans le vide les plus eléve". C. R. Hebd. Séances Acad. Sci. Paris (in French). 231: 261–263.
- Horikawa, Daiki D. (2012). Alexander V. Altenbach, Joan M. Bernhard & Joseph Seckbach, ed. Anoxia Evidence for Eukaryote Survival and Paleontological Strategies (21 ed.). Springer Netherlands. pp. 205–217. ISBN 978-94-007-1895-1. Retrieved 21 January 2012.
- Kane, Stephen R.; Gelino, Dawn M. (2012). "The Habitable Zone and Extreme Planetary Orbits". Astrobiology. 12 (10): 940–945. arXiv: . Bibcode:2012AsBio..12..940K. doi:10.1089/ast.2011.0798. PMID 23035897.
- Paul Gilster; Andrew LePage (2015-01-30). "A Review of the Best Habitable Planet Candidates". Centauri Dreams, Tau Zero Foundation. Retrieved 2015-07-24.
- Giovanni F. Bignami (2015). The Mystery of the Seven Spheres: How Homo sapiens will Conquer Space. Springer. ISBN 9783319170046., Page 110
- Wethington, Nicholos (September 16, 2008). "How Many Stars are in the Milky Way?". Universe Today. Retrieved April 21, 2013.
- Torres, Abel Mendez (April 26, 2013). "Ten potentially habitable exoplanets now". Habitable Exoplanets Catalog. University of Puerto Rico. Retrieved April 29, 2013.
- Borenstein, Seth (19 February 2011). "Cosmic census finds crowd of planets in our galaxy". Associated Press. Retrieved 24 April 2011.
- Choi, Charles Q. (21 March 2011). "New Estimate for Alien Earths: 2 Billion in Our Galaxy Alone". Space.com. Retrieved 2011-04-24.
- Catanzarite, J.; Shao, M. (2011). "The Occurrence Rate of Earth Analog Planets Orbiting Sun-Like Stars". The Astrophysical Journal. 738 (2): 151. arXiv: . Bibcode:2011ApJ...738..151C. doi:10.1088/0004-637X/738/2/151.
- Williams, D.; Pollard, D. (2002). "Earth-like worlds on eccentric orbits: excursions beyond the habitable zone". International Journal of Astrobiology. Cambridge University Press. 1 (1): 61–69. Bibcode:2002IJAsB...1...61W. doi:10.1017/S1473550402001064.
- "70 Virginis b". Extrasolar Planet Guide. Extrasolar.net. Archived from the original on 2012-06-19. Retrieved 2009-04-02.
- Williams, D.; Pollard, D. (2002). "Earth-like worlds on eccentric orbits: excursions beyond the habitable zone". International Journal of Astrobiology. 1 (1): 61–69. Bibcode:2002IJAsB...1...61W. doi:10.1017/S1473550402001064.
- Sudarsky, David; et al. (2003). "Theoretical Spectra and Atmospheres of Extrasolar Giant Planets". The Astrophysical Journal. 588 (2): 1121–1148. arXiv: . Bibcode:2003ApJ...588.1121S. doi:10.1086/374331.
- Jones, B. W.; Sleep, P. N.; Underwood, D. R. (2006). "Habitability of Known Exoplanetary Systems Based on Measured Stellar Properties". The Astrophysical Journal. 649 (2): 1010–1019. arXiv: . Bibcode:2006ApJ...649.1010J. doi:10.1086/506557.
- Butler, R. P.; Wright, J. T.; Marcy, G. W.; Fischer, D. A.; Vogt, S. S.; Tinney, C. G.; Jones, H. R. A.; Carter, B. D.; Johnson, J. A.; McCarthy, C.; Penny, A. J. (2006). "Catalog of Nearby Exoplanets". The Astrophysical Journal. 646: 505–522. arXiv: . Bibcode:2006ApJ...646..505B. doi:10.1086/504701.
- Barnes, J. W.; O’Brien, D. P. (2002). "Stability of Satellites around Close‐in Extrasolar Giant Planets". The Astrophysical Journal. 575: 1087–1093. arXiv: . Bibcode:2002ApJ...575.1087B. doi:10.1086/341477.
- Canup, R. M.; Ward, W. R. (2006). "A common mass scaling for satellite systems of gaseous planets". Nature. 441 (7095): 834–839. Bibcode:2006Natur.441..834C. doi:10.1038/nature04860. PMID 16778883.
- Lovis; et al. (2006). "An extrasolar planetary system with three Neptune-mass planets". Nature. 441 (7091): 305–309. arXiv: . Bibcode:2006Natur.441..305L. doi:10.1038/nature04828. PMID 16710412.
- "Astronomers Discover Record Fifth Planet Around Nearby Star 55 Cancri". Sciencedaily.com. November 6, 2007. Archived from the original on 26 September 2008. Retrieved 2008-09-14.
- Fischer, Debra A.; et al. (2008). "Five Planets Orbiting 55 Cancri". The Astrophysical Journal. 675 (1): 790–801. arXiv: . Bibcode:2008ApJ...675..790F. doi:10.1086/525512.
- Ian Sample, science correspondent (7 November 2007). "Could this be Earth's near twin? Introducing planet 55 Cancri f". London: The Guardian. Archived from the original on 2 October 2008. Retrieved 17 October 2008.
- Than, Ker (2007-02-24). "Planet Hunters Edge Closer to Their Holy Grail". space.com. Retrieved 2007-04-29.
- Robertson, Paul; Mahadevan, Suvrath; Endl, Michael; Roy, Arpita (3 July 2014). "Stellar activity masquerading as planets in the habitable zone of the M dwarf Gliese 581". Science. 345: 440–444. arXiv: . Bibcode:2014Sci...345..440R. doi:10.1126/science.1253253.
- "Researchers find potentially habitable planet" (in French). maxisciences.com. Retrieved 2011-08-31.
- Schulze-Makuch, D.; Méndez, A.; Fairén, A. G.; Von Paris, P.; Turse, C.; Boyer, G.; Davila, A. F.; António, M. R. D. S.; Catling, D.; Irwin, L. N. (2011). "A Two-Tiered Approach to Assessing the Habitability of Exoplanets". Astrobiology. 11 (10): 1041–1052. Bibcode:2011AsBio..11.1041S. doi:10.1089/ast.2010.0592. PMID 22017274.
- "Kepler 22-b: Earth-like planet confirmed". BBC. December 5, 2011. Retrieved May 2, 2013.
- Scharf, Caleb A. (2011-12-08). "You Can't Always Tell an Exoplanet by Its Size". Scientific American. Retrieved 2012-09-20.: "If it [Kepler-22b] had a similar composition to Earth, then we're looking at a world in excess of about 40 Earth masses".
- Anglada-Escude, Guillem; Arriagada, Pamela; Vogt, Steven; Rivera, Eugenio J.; Butler, R. Paul; Crane, Jeffrey D.; Shectman, Stephen A.; Thompson, Ian B.; Minniti, Dante (2012). "A planetary system around the nearby M dwarf GJ 667C with at least one super-Earth in its habitable zone". The Astrophysical Journal. 751: L16. arXiv: [astro-ph.EP]. Bibcode:2012ApJ...751L..16A. doi:10.1088/2041-8205/751/1/L16.
- Staff (September 20, 2012). "LHS 188 -- High proper-motion Star". Centre de données astronomiques de Strasbourg (Strasbourg astronomical Data Center). Retrieved September 20, 2012.
- Méndez, Abel (August 29, 2012). "A Hot Potential Habitable Exoplanet around Gliese 163". University of Puerto Rico at Arecibo (Planetary Habitability Laboratory). Retrieved September 20, 2012.
- Redd (September 20, 2012). "Newfound Alien Planet a Top Contender to Host Life". Space.com. Retrieved September 20, 2012.
- "A Hot Potential Habitable Exoplanet around Gliese 163". Spacedaily.com. Retrieved 2013-02-10.
- Tuomi, Mikko; Anglada-Escude, Guillem; Gerlach, Enrico; Jones, Hugh R. R.; Reiners, Ansgar; Rivera, Eugenio J.; Vogt, Steven S.; Butler, Paul (2012). "Habitable-zone super-Earth candidate in a six-planet system around the K2.5V star HD 40307". Astronomy and Astrophysics. 549: A48. arXiv: . Bibcode:2013A&A...549A..48T. doi:10.1051/0004-6361/201220268.
- Aron, Jacob (December 19, 2012). "Nearby Tau Ceti may host two planets suited to life". New Scientist. Reed Business Information. Retrieved April 1, 2013.
- Tuomi, M.; Jones, H. R. A.; Jenkins, J. S.; Tinney, C. G.; Butler, R. P.; Vogt, S. S.; Barnes, J. R.; Wittenmyer, R. A.; o’Toole, S.; Horner, J.; Bailey, J.; Carter, B. D.; Wright, D. J.; Salter, G. S.; Pinfield, D. (2013). "Signals embedded in the radial velocity noise". Astronomy & Astrophysics. 551: A79. arXiv: . Bibcode:2013A&A...551A..79T. doi:10.1051/0004-6361/201220509.
- Torres, Abel Mendez (May 1, 2013). "The Habitable Exoplanets Catalog". Habitable Exoplanets Catalog. University of Puerto Rico. Retrieved May 1, 2013.
- Lauren M. Weiss, and Geoffrey W. Marcy. "The mass-radius relation for 65 exoplanets smaller than 4 Earth radii"
- "Solar Variability and Terrestrial Climate". NASA Science. 2013-01-08.
- "Stellar Luminosity Calculator". University of Nebraska-Lincoln astronomy education group.
- Council, National Research (18 September 2012). "The Effects of Solar Variability on Earth's Climate: A Workshop Report". doi:10.17226/13519.
- Most of Earth's twins aren't identical, or even close!, By Ethan. June 5, 2013.
- "Are there oceans on other planets?". National Oceanic and Atmospheric Administration. 6 July 2017. Retrieved 2017-10-03.
- Moskowitz, Clara (January 9, 2013). "Most Earth-Like Alien Planet Possibly Found". Space.com. Retrieved January 9, 2013.
- Barclay, Thomas; Burke, Christopher J.; Howell, Steve B.; Rowe, Jason F.; Huber, Daniel; Isaacson, Howard; Jenkins, Jon M.; Kolbl, Rea; Marcy, Geoffrey W. (2013). "A Super-Earth-Sized Planet Orbiting in or Near the Habitable Zone Around a Sun-Like Star". The Astrophysical Journal. 768 (2): 101. arXiv: . Bibcode:2013ApJ...768..101B. doi:10.1088/0004-637X/768/2/101.
- Johnson, Michele; Harrington, J.D. (18 April 2013). "NASA's Kepler Discovers Its Smallest 'Habitable Zone' Planets to Date". NASA. Retrieved 18 April 2013.
- Overbye, Dennis (18 April 2013). "Two Promising Places to Live, 1,200 Light-Years from Earth". New York Times. Retrieved 18 April 2013.
- Borucki, William J.; et al. (18 April 2013). "Kepler-62: A Five-Planet System with Planets of 1.4 and 1.6 Earth Radii in the Habitable Zone". Science Express. 340 (6132): 587–90. arXiv: . Bibcode:2013Sci...340..587B. doi:10.1126/science.1234702. PMID 23599262. Retrieved 18 April 2013.
- Chang, Kenneth (17 April 2014). "Scientists Find an 'Earth Twin,' or Maybe a Cousin". New York Times. Retrieved 17 April 2014.
- Chang, Alicia (17 April 2014). "Astronomers spot most Earth-like planet yet". AP News. Retrieved 17 April 2014.
- Morelle, Rebecca (17 April 2014). "'Most Earth-like planet yet' spotted by Kepler". BBC News. Retrieved 17 April 2014.
- Clavin, Whitney; Chou, Felicia; Johnson, Michele (6 January 2015). "NASA's Kepler Marks 1,000th Exoplanet Discovery, Uncovers More Small Worlds in Habitable Zones". NASA. Retrieved 6 January 2015.
- Jensen, Mari N. (16 January 2015). "Three nearly Earth-size planets found orbiting nearby star: One in 'Goldilocks' zone". Science Daily. Retrieved 25 July 2015.
- Jenkins, Jon M.; Twicken, Joseph D.; Batalha, Natalie M.; Caldwell, Douglas A.; Cochran, William D.; Endl, Michael; Latham, David W.; Esquerdo, Gilbert A.; Seader, Shawn; Bieryla, Allyson; Petigura, Erik; Ciardi, David R.; Marcy, Geoffrey W.; Isaacson, Howard; Huber, Daniel; Rowe, Jason F.; Torres, Guillermo; Bryson, Stephen T.; Buchhave, Lars; Ramirez, Ivan; Wolfgang, Angie; Li, Jie; Campbell, Jennifer R.; Tenenbaum, Peter; Sanderfer, Dwight; Henze, Christopher E.; Catanzarite, Joseph H.; Gilliland, Ronald L.; Borucki, William J. (23 July 2015). "Discovery and Validation of Kepler-452b: A 1.6 R⨁ Super Earth Exoplanet in the Habitable Zone of a G2 Star". The Astronomical Journal. 150 (2): 56. arXiv: . Bibcode:2015AJ....150...56J. doi:10.1088/0004-6256/150/2/56. ISSN 1538-3881. Retrieved 24 July 2015.
- "NASA telescope discovers Earth-like planet in star's habitable zone". BNO News. 23 July 2015. Retrieved 23 July 2015.
- "Three Potentially Habitable Worlds Found Around Nearby Ultracool Dwarf Star". European Southern Observatory. 2 May 2016.
- Dittmann, Jason A.; Irwin, Jonathan M.; Charbonneau, David; Bonfils, Xavier; Astudillo-Defru, Nicola; Haywood, Raphaëlle D.; Berta-Thompson, Zachory K.; Newton, Elisabeth R.; Rodriguez, Joseph E.; Winters, Jennifer G.; Tan, Thiam-Guan; Almenara, Jose-Manuel; Bouchy, François; Delfosse, Xavier; Forveille, Thierry; Lovis, Christophe; Murgas, Felipe; Pepe, Francesco; Santos, Nuno C.; Udry, Stephane; Wünsche, Anaël; Esquerdo, Gilbert A.; Latham, David W.; Dressing, Courtney D. (2017). "A temperate rocky super-Earth transiting a nearby cool star". Nature. 544 (7650): 333. arXiv: . Bibcode:2017Natur.544..333D. doi:10.1038/nature22055.
- Torres, Abel (2012-06-12). "Liquid Water in the Solar System". Retrieved 2013-12-15.
- Munro, Margaret (2013), "Miners deep underground in northern Ontario find the oldest water ever known", National Post, retrieved 2013-10-06
- Davies, Paul (2013), The Origin of Life II: How did it begin? (PDF), retrieved 2013-10-06[permanent dead link]
- Taylor, Geoffrey (1996), "Life Underground" (PDF), Planetary Science Research Discoveries, retrieved 2013-10-06
- Doyle, Alister (4 March 2013), "Deep underground, worms and "zombie microbes" rule", Reuters, retrieved 2013-10-06
- Nicholson, W. L.; Moeller, R.; Horneck, G.; PROTECT Team (2012). "Transcriptomic Responses of Germinating Bacillus subtilis Spores Exposed to 1.5 Years of Space and Simulated Martian Conditions on the EXPOSE-E Experiment PROTECT". Astrobiology. 12 (5): 469–86. Bibcode:2012AsBio..12..469N. doi:10.1089/ast.2011.0748. PMID 22680693.
- Decker, Heinz; Holde, Kensal E. (2011). "Oxygen and the Exploration of the Universe (article) (book:Oxygen and the Evolution of Life)": 157–168. doi:10.1007/978-3-642-13179-0_9. ISBN 978-3-642-13178-3.
- Stewart, Ian; Cohen, Jack (2002). Evolving the Alien. Ebury Press. ISBN 978-0-09-187927-3.
- Goldsmith, Donald; Owen, Tobias (1992). The Search for Life in the Universe (2 ed.). Addison-Wesley. p. 247. ISBN 0-201-56949-3.
- Vaclav Smil (2003). The Earth's Biosphere: Evolution, Dynamics, and Change. MIT Press. p. 166. ISBN 978-0-262-69298-4.
- Reynolds, R.T.; McKay, C.P.; Kasting, J.F. (1987). "Europa, Tidally Heated Oceans, and Habitable Zones Around Giant Planets". Advances in Space Research. 7 (5): 125–132. Bibcode:1987AdSpR...7..125R. doi:10.1016/0273-1177(87)90364-4.
- Guidetti, R.; Jönsson, K.I. (2002). "Long-term anhydrobiotic survival in semi-terrestrial micrometazoans". Journal of Zoology. 257 (2): 181–187. doi:10.1017/S095283690200078X.
- Baldwin, Emily (26 April 2012). "Lichen survives harsh Mars environment". Skymania News. Retrieved 27 April 2012.
- de Vera, J.-P.; Kohler, Ulrich (26 April 2012). "The adaptation potential of extremophiles to Martian surface conditions and its implication for the habitability of Mars" (PDF). European Geosciences Union. Archived from the original (PDF) on 8 June 2012. Retrieved 27 April 2012.
- Onofri, Silvano; de Vera, Jean-Pierre; Zucconi, Laura; Selbmann, Laura; Scalzi, Giuliano; Venkateswaran, Kasthuri J.; Rabbow, Elke; de la Torre, Rosa; Horneck, Gerda (2015). "Survival of Antarctic Cryptoendolithic Fungi in Simulated Martian Conditions On Board the International Space Station". Astrobiology. 15 (12): 1052–1059. Bibcode:2015AsBio..15.1052O. doi:10.1089/ast.2015.1324. ISSN 1531-1074. PMID 26684504.
- Isler, K.; van Schaik, C. P (2006). "Metabolic costs of brain size evolution". Biology Letters. 2 (4): 557–560. doi:10.1098/rsbl.2006.0538. ISSN 1744-9561. PMC . PMID 17148287.
- Palca, Joe (September 29, 2010). "'Goldilocks' Planet's Temperature Just Right For Life". NPR. NPR. Retrieved April 5, 2011.
- "Project Cyclops: A design study of a system for detecting extraterrestrial intelligent life" (PDF). NASA. 1971. Retrieved June 28, 2009.
- Joseph A. Angelo (2007). Life in the Universe. Infobase Publishing. p. 163. ISBN 978-1-4381-0892-6. Retrieved 26 June 2013.
- Turnbull, Margaret C.; Tarter, Jill C. (2003). "Target Selection for SETI. I. A Catalog of Nearby Habitable Stellar Systems". The Astrophysical Journal Supplement Series. 145 (1): 181–198. arXiv: . Bibcode:2003ApJS..145..181T. doi:10.1086/345779.
- Siemion, Andrew P. V.; Demorest, Paul; Korpela, Eric; Maddalena, Ron J.; Werthimer, Dan; Cobb, Jeff; Howard, Andrew W.; Langston, Glen; Lebofsky, Matt (2013). "A 1.1 to 1.9 GHz SETI Survey of the Kepler Field: I. A Search for Narrow-band Emission from Select Targets". The Astrophysical Journal. 767 (1): 94. arXiv: . Bibcode:2013ApJ...767...94S. doi:10.1088/0004-637X/767/1/94.
- Wall, Mike (2011). "HabStars: Speeding Up In the Zone". Retrieved 2013-06-26
- Zaitsev, A. L. (June 2004). "Transmission and reasonable signal searches in the Universe". Horizons of the Universe Передача и поиски разумных сигналов во Вселенной. Plenary presentation at the National Astronomical Conference WAC-2004 "Horizons of the Universe", Moscow, Moscow State University, June 7, 2004 (in Russian). Moscow. Retrieved 2013-06-30.
- Grinspoon, David (12 December 2007). "Who Speaks for Earth?". Seedmagazine.com. Retrieved 2012-08-21.
- P. C. Gregory; D. A. Fischer (2010). "A Bayesian periodogram finds evidence for three planets in 47 Ursae Majoris". Monthly Notices of the Royal Astronomical Society. 403 (2): 731–747. arXiv: . Bibcode:2010MNRAS.403..731G. doi:10.1111/j.1365-2966.2009.16233.x.
- B. Jones; Underwood, David R.; et al. (2005). "Prospects for Habitable "Earths" in Known Exoplanetary Systems". Astrophysical Journal. 622 (2): 1091–1101. arXiv: . Bibcode:2005ApJ...622.1091J. doi:10.1086/428108.
- Moore, Matthew (October 9, 2008). "Messages from Earth sent to distant planet by Bebo". London: .telegraph.co.uk. Archived from the original on 11 October 2008. Retrieved 2008-10-09.
|Look up habitable zone in Wiktionary, the free dictionary.|
|Wikimedia Commons has media related to Habitable zone.|
- "Circumstellar Habitable Zone Simulator". Astronomy Education at the University of Nebraska-Lincoln.
- "The Habitable Exoplanets Catalog". PHL/University of Puerto Rico at Arecibo.
- "The Habitable Zone Gallery".
- "Stars and Habitable Planets". SolStation. Archived from the original on 2011-06-28.
- Nikos Prantzos (2006). "On the Galactic Habitable Zone". Space Science Reviews. 135: 313–322. arXiv: . Bibcode:2008SSRv..135..313P. doi:10.1007/s11214-007-9236-9.
- Interstellar Real Estate: Location, Location, Location – Defining the Habitable Zone
- "Exoplanets in relation to host star's current habitable zone". www.planetarybiology.com.
- "exoExplorer: a free Windows application for visualizing exoplanet environments in 3D". www.planetarybiology.com.
- Shiga, David (November 19, 2009). "Why the universe may be teeming with aliens". New Scientist.
- Simmons; et al. "The New Worlds Observer: a mission for high-resolution spectroscopy of extra-solar terrestrial planets" (PDF). New Worlds.
- Cockell, Charles S.; Herbst, Tom; Léger, Alain; Absil, O.; Beichman, Charles; Benz, Willy; Brack, Andre; Chazelas, Bruno; Chelli, Alain (2009). "Darwin – an experimental astronomy mission to search for extrasolar planets" (PDF). Experimental Astronomy. 23: 435–461. Bibcode:2009ExA....23..435C. doi:10.1007/s10686-008-9121-x.
- Atkinson, Nancy (March 19, 2009). "JWST Will Provide Capability to Search for Biomarkers on Earth-like Worlds". Universe Today.
Quebec (i/kwɨˈbɛk/ or /kɨˈbɛk/; French: Québec [kebɛk]) is a province in east-central Canada. It is the only Canadian province with a predominantly French-speaking population, and the only one whose sole provincial official language is French. Quebec is Canada's largest province by area and its second-largest administrative division; only the territory of Nunavut is larger. It is bordered to the west by the province of Ontario, James Bay and Hudson Bay, to the north by Hudson Strait and Ungava Bay, to the east by the Gulf of Saint Lawrence and the provinces of Newfoundland and Labrador and New Brunswick, and to the south by the US states of Maine, New Hampshire, Vermont, and New York. It also shares maritime borders with Nunavut, Prince Edward Island, and Nova Scotia.

Quebec is Canada's second most populous province, after Ontario. Most inhabitants live in urban areas near the Saint Lawrence River between Montreal and Quebec City, the capital. English-speaking communities and English-language institutions are concentrated in the west of the island of Montreal but are also significantly present in the Outaouais, Eastern Townships, and Gaspé regions. The Nord-du-Québec region, occupying the northern half of the province, is sparsely populated and inhabited primarily by Aboriginal peoples.

Quebec independence debates have played a large role in the province's politics. Parti Québécois governments held referendums on sovereignty in 1980 and 1995; both were rejected, the latter by a very narrow margin. In 2006, the House of Commons of Canada passed a symbolic motion recognizing the "Québécois as a nation within a united Canada."

While the province's substantial natural resources have long been the mainstay of its economy, sectors of the knowledge economy such as aerospace, information and communication technologies, biotechnology, and the pharmaceutical industry also play leading roles. Together these industries have made Quebec an economically influential province within Canada, second only to Ontario in economic output.

The name "Québec", which comes from the Algonquin word kébec meaning "where the river narrows", originally referred to the area around Quebec City where the Saint Lawrence River narrows to a cliff-lined gap. Early variations in the spelling of the name included Québecq (Levasseur, 1601) and Kébec (Lescarbot, 1609). The French explorer Samuel de Champlain chose the name Québec in 1608 for the colonial outpost he would use as the administrative seat of the French colony of New France. The province is sometimes referred to as "La belle province" ("The beautiful province").
Located in the eastern part of Canada and, from a historical and political perspective, part of Central Canada, Quebec occupies a territory nearly three times the size of France or Texas, most of which is very sparsely populated. The landscape varies greatly from one region to another due to the composition of the ground, the climate (latitude and altitude), and proximity to water. The Saint Lawrence Lowland (south) and the Canadian Shield (north) are the two main topographic regions, and they are radically different.
Quebec has one of the world's largest reserves of fresh water, which covers 12% of its surface. The province holds 3% of the world's renewable fresh water but only 0.1% of its population. More than half a million lakes, including 30 with an area greater than 250 square kilometres (97 sq mi), and 4,500 rivers drain into the Atlantic Ocean through the Gulf of Saint Lawrence, and into the Arctic Ocean via James, Hudson, and Ungava bays. The largest inland body of water is the Caniapiscau Reservoir, created as part of the James Bay Project to produce hydroelectric power. Lake Mistassini is the largest natural lake in Quebec.
The Saint Lawrence River has some of the world's largest inland Atlantic ports at Montreal (the province's largest city), Trois-Rivières, and Quebec City (the capital). Its access to the Atlantic Ocean and to the interior of North America made it the base of early French exploration and settlement in the 17th and 18th centuries. Since 1959, the Saint Lawrence Seaway has provided a navigable link between the Atlantic Ocean and the Great Lakes. Northeast of Quebec City, the river broadens into the world's largest estuary, the feeding site of numerous species of whales, fish, and sea birds, before emptying into the Gulf of Saint Lawrence. This marine environment sustains fisheries and smaller ports in the Lower Saint Lawrence (Bas-Saint-Laurent), Lower North Shore (Côte-Nord), and Gaspé (Gaspésie) regions of the province. The Saint Lawrence River and its estuary have formed the basis of Quebec's development through the centuries, and many tributary rivers testify to the exploration of the land, among them the Ashuapmushuan, Chaudière, Gatineau, Manicouagan, Ottawa, Richelieu, Rupert, Saguenay, and Saint-François.
Quebec has three main climate regions. Southern and western Quebec, including most of the major population centres, have a humid continental climate (Köppen climate classification Dfb) with four distinct seasons: warm to occasionally hot and humid summers, and often very cold and snowy winters. The main climatic influences come from western and northern Canada, moving eastward, and from the southern and central United States, moving northward. Because of the influence of storm systems from both the core of North America and the Atlantic Ocean, precipitation is abundant throughout the year, with most areas receiving more than 1,000 millimetres (39 in) of precipitation, including over 300 centimetres (120 in) of snow in many areas. During the summer, severe weather patterns (such as tornadoes and severe thunderstorms) occur occasionally. Most of central Quebec has a subarctic climate (Köppen Dfc). Winters there are long, very cold, and snowy, among the coldest in eastern Canada, while summers are warm but very short due to the higher latitude and the greater influence of Arctic air masses. Precipitation is also somewhat less than farther south, except at some of the higher elevations. The northern regions of Quebec have an arctic climate (Köppen ET), with very cold winters and short, much cooler summers. The primary influences in this region are the Arctic Ocean currents (such as the Labrador Current) and continental air masses from the High Arctic.
The four seasons in Quebec are spring, summer, autumn and winter, with conditions differing by region. The seasons are distinguished by daylight, temperature, and the amount of rain and snow.
Daily sunshine lasts about eight hours in December, the time of year when days are shortest. From the temperate zones to the territories of the Far North, daylight varies with latitude, as do phenomena such as the Northern Lights and the midnight sun.
Quebec is divided into four climatic zones: arctic, subarctic, humid continental and eastern maritime. From south to north, average temperatures range between 25 °C (77 °F) and 5 °C (41 °F) in summer and between −10 °C (14 °F) and −25 °C (−13 °F) in winter. During spells of intense heat or cold, temperatures can reach 35 °C (95 °F) in summer and −40 °C (−40 °F) in winter, and perceived temperatures can differ further depending on the Humidex or wind chill.
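The wind chill mentioned above combines air temperature and wind speed into a "feels like" temperature. A minimal sketch of the standard Environment Canada wind chill index; the formula and its coefficients are outside reference values, not taken from this article:

```python
def wind_chill(temp_c: float, wind_kmh: float) -> float:
    """Environment Canada wind chill index.

    Valid for air temperatures <= 10 C and wind speeds >= 4.8 km/h
    (wind measured at a height of 10 m).
    """
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c + (0.3965 * temp_c - 11.37) * v

# A -25 C day with a 30 km/h wind "feels like" roughly -39 C.
print(round(wind_chill(-25, 30)))
```

This illustrates why a −25 °C Quebec winter day can feel well below −35 °C once wind is taken into account.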
The all-time record for winter precipitation was set in the winter of 2007–2008, when the Quebec City area received more than five metres of snow; the average amount received per winter is around three metres. In March 1971, however, the "Century's Snowstorm" dropped from 40 centimetres (16 in) of snow in Montreal to 80 centimetres (31 in) at Mont Apica within 24 hours across many regions of southern Quebec. The winter of 2010, by contrast, was the warmest and driest recorded in more than 60 years.
The large land wildlife is mainly composed of the white-tailed deer, the moose, the muskox, the caribou, the American black bear and the polar bear. Mid-sized land animals include the cougar, the coyote, the eastern wolf, the bobcat, the Arctic fox and the fox. The small animals seen most commonly include the eastern gray squirrel, the snowshoe hare, the groundhog, the skunk, the raccoon, the chipmunk and the Canadian beaver.
The biodiversity of the estuary and Gulf of Saint Lawrence includes an aquatic mammal wildlife, most of which travels upriver through the estuary and the Saguenay–St. Lawrence Marine Park as far as Île d'Orléans (French for Orleans Island): the blue whale, the beluga, the minke whale and the harp seal (an earless seal). Among the Nordic marine animals, two are particularly notable: the walrus and the narwhal.
Inland waters are populated by small to large freshwater fish, such as the largemouth bass, the American pickerel, the walleye, the Atlantic sturgeon (Acipenser oxyrinchus), the muskellunge, the Atlantic cod, the Arctic char, the brook trout, the Atlantic tomcod (Microgadus tomcod), the Atlantic salmon and the rainbow trout.
The total forest area of Quebec is estimated at 750,300 square kilometres (289,700 sq mi). From Abitibi-Témiscamingue to the North Shore, the forest is composed primarily of conifers such as the balsam fir (Abies balsamea), the jack pine, the white spruce, the black spruce and the tamarack. Some deciduous species, such as the yellow birch, appear approaching the river in the south. The deciduous forest of the Saint Lawrence Lowlands is mostly composed of species such as the sugar maple, the red maple, the white ash, the American beech, the butternut (white walnut), the American elm, the basswood, the bitternut hickory and the northern red oak, as well as some conifers such as the eastern white pine and the northern whitecedar. The distribution areas of the paper birch, the trembling aspen and the mountain ash cover more than half of Quebec's territory.
At the time of first European contact and later colonization, Algonquian, Iroquois and Inuit tribes were the peoples who inhabited what is now Quebec. Their lifestyles and cultures reflected the land on which they lived. Seven Algonquian groups lived nomadic lives based on hunting, gathering, and fishing in the rugged terrain of the Canadian Shield: (James Bay Cree, Innu, Algonquins) and Appalachian Mountains (Mi'kmaq, Abenaki). St. Lawrence Iroquoians, a branch of the Iroquois, lived more settled lives, planting squash and maize in the fertile soils of the St. Lawrence Valley. They appear to have been later supplanted by the Mohawk tribe. The Inuit continue to fish and hunt whale and seal in the harsh Arctic climate along the coasts of Hudson and Ungava Bay. These people traded fur and food and sometimes warred with each other.
Basque whalers and fishermen traded furs with Saguenay natives throughout the 16th century. The first French explorer to reach Quebec was Jacques Cartier, who planted a cross in 1534 at either Gaspé or Old Fort Bay on the Lower North Shore. He sailed into the St. Lawrence River in 1535 and established an ill-fated colony near present-day Quebec City at the site of Stadacona, a village of the St. Lawrence Iroquoians. Linguists and archaeologists have determined these people were distinct from the Iroquoian nations encountered by later French and Europeans, such as the five nations of the Haudenosaunee. Their language was Laurentian, one of the Iroquoian family. By the late 16th century, they had disappeared from the St. Lawrence Valley.
Government and politics
The Lieutenant Governor represents the Queen of Canada and acts as the province's head of state. The head of government is the premier (called premier ministre in French) who leads the largest party in the unicameral National Assembly, or Assemblée Nationale, from which the Executive Council of Quebec is appointed.
The government of Quebec awards an order of merit called the National Order of Quebec. It is inspired in part by the French Legion of Honour. It is conferred upon men and women born or living in Quebec (but non-Quebecers can be inducted as well) for outstanding achievements.
The government of Quebec draws the majority of its revenue from a progressive income tax, a 9.5% sales tax and various other taxes (such as carbon, corporate and capital gains taxes), as well as transfer payments and direct payments from the federal government. Quebec is the most heavily taxed jurisdiction in North America.
In the 2011 census, Quebec had a population of 7,903,001 living in 3,395,343 of its 3,685,926 total dwellings, a 4.7% change from its 2006 population of 7,546,131. With a land area of 1,356,547.02 km2 (523,765.73 sq mi), it had a population density of 5.8/km2 (15.1/sq mi) in 2011. In 2013, Statistics Canada estimated the province's population to be 8,155,334.
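The density and growth figures quoted above follow directly from the census counts; a quick sketch verifying the arithmetic:

```python
# 2011 and 2006 census counts for Quebec, as cited in the text.
population_2011 = 7_903_001
population_2006 = 7_546_131
land_area_km2 = 1_356_547.02

# People per square kilometre, and five-year percentage change.
density = population_2011 / land_area_km2
growth = (population_2011 - population_2006) / population_2006 * 100

print(f"{density:.1f}/km2, {growth:.1f}% growth")  # 5.8/km2, 4.7% growth
```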
At 1.69 children per woman, Quebec's 2011 fertility rate is above the Canada-wide rate of 1.61, and is higher than it was at the turn of the 21st century. However, it is still below the replacement fertility rate of 2.1. This contrasts with its fertility rates before 1960, which were among the highest of any industrialized society. Although Quebec is home to only 24% of the population of Canada, the number of international adoptions in Quebec is the highest of all provinces of Canada. In 2001, 42% of international adoptions in Canada were carried out in Quebec. By 2012, the population of Quebec reached 8 million, and it is projected to reach 9.2 million in 2056. Life expectancy in Quebec reached a new high in 2011, with an expectancy of 78.6 years for men and 83.2 years for women; this ranked as the third-longest life expectancy among Canadian provinces, behind those of British Columbia and Ontario.
Nearly 9% of the population of Quebec belongs to a visible minority group. This is a lower percentage than that of British Columbia, Ontario, Alberta, and Manitoba but higher than that of the other five provinces. Most visible minorities in Quebec live in or near Montreal.
Quebec is unique among the provinces in its overwhelmingly Roman Catholic population. This is a legacy of colonial times when only Roman Catholics were permitted to settle in New France. The 2001 census showed the population to be 90.3 percent Christian (in contrast to 77 percent for the whole country) with 83.4 percent Catholic Christian (including 83.2 percent Roman Catholic); 4.7 percent Protestant Christian (including 1.2 percent Anglican, 0.7 percent United Church; and 0.5 percent Baptist); 1.4 percent Orthodox Christian (including 0.7 percent Greek Orthodox); and 0.8 percent other Christian; as well as 1.5 percent Muslim; 1.3 percent Jewish; 0.6 percent Buddhist; 0.3 percent Hindu; and 0.1 percent Sikh. An additional 5.8 percent of the population said they had no religious affiliation (including 5.6 percent who stated that they had no religion at all).
French is the main language of 80% of Quebec residents. Altogether, 94% of the total population can speak it.
The official language of Quebec is French. Quebec is the only Canadian province whose population is mainly francophone; 6,102,210 people (78.1 percent of the population) recorded it as their sole native language in the 2011 Census, and 6,249,085 (80.0%) recorded that they spoke it most often at home. Knowledge of French is widespread even among those who do not speak it natively; in 2011, about 94.4 percent of the total population reported being able to speak French, alone or in combination with other languages, while 47.3% reported being able to speak English.
In 2011, 599,230 people (7.7 percent of the population) in Quebec declared English to be their mother tongue, and 767,415 (9.8 percent) used it most often as their home language. The English-speaking community, or Anglophones, is entitled to services in English in the areas of justice, health, and education; services in English are offered in municipalities in which more than half the residents have English as their mother tongue. Allophones, people whose mother tongue is neither French nor English, made up 12.3 percent (961,700) of the population according to the 2011 census, though a smaller figure, 554,400 (7.1 percent), actually used these languages most often in the home.
A considerable number of Quebec residents consider themselves to be bilingual in French and English. In Quebec, about 42.6 percent of the population (3,328,725 people) report knowing both languages; this is the highest proportion of bilinguals of any Canadian province. In contrast, in the rest of Canada, in 2006 only about 10.2 percent (2,430,990) of the population had knowledge of both of the country's official languages. Altogether, 17.5% of Canadians are bilingual in French and English.
Other common mother tongues in 2011 were Creoles (0.8%), Chinese (0.6%), Greek (0.5%), Portuguese (0.5%), Romanian (0.4%), Vietnamese (0.3%), and Russian (0.3%). In addition, 152,820 people (2.0%) reported having more than one native language.
English is not designated an official language by Quebec law. However, both English and French are required by the Constitution Act, 1867 for the enactment of laws and regulations and any person may use English or French in the National Assembly and the courts of Quebec. The books and records of the National Assembly must also be kept in both languages. Until 1969, Quebec was the only officially bilingual province in Canada and most public institutions functioned in both languages. English was also used in the legislature, government commissions and courts.
Since the 1970s, languages other than French on commercial signs have been permitted only if French is given marked prominence. This law has been the subject of periodic controversy since its inception. The written forms of French place-names in Canada retain their diacritics such as accent marks over vowels in English text. Legitimate exceptions are Montreal and Quebec. However, the accented forms are increasingly evident in some publications. The Canadian Style states that Montréal and Québec (the city) must retain their accents in English federal documents.
Quebec has an advanced, market-based, open economy. In 2009, its gross domestic product (GDP) of US$32,408 per capita at purchasing power parity put the province on par with Japan, Italy and Spain, but below the Canadian average of US$37,830 per capita. Quebec's economy ranks as the 37th largest in the world, just behind Greece, and 28th for GDP per capita.
The economy of Quebec represents 20.36% of the total GDP of Canada. Like most industrialized economies, it is based mainly on the services sector. Quebec's economy has traditionally been fueled by abundant natural resources, a well-developed infrastructure, and average productivity. The provincial GDP in 2010 was C$319.348 billion, making Quebec the second largest economy in Canada.
The credit rating of Quebec is currently Aa2 according to Moody's and A+ according to S&P. The Quebec economy has changed dramatically in recent years: between 1995 and 2001, Moody's rated Quebec A2, the worst rating in the province's history. The provincial debt reached 47% of GDP in 2011, approximately C$129 billion or C$16,642 per inhabitant. The government of Quebec has announced it will reduce the provincial debt by 25% by 2025.
The Institut national de la recherche scientifique helps to advance scientific knowledge and to train a new generation of students in various scientific and technological sectors. More than one million Quebecers work in science and technology, a sector that represents more than 30% of Quebec's GDP.
Quebec's economy has undergone tremendous changes over the last decade. Firmly grounded in the knowledge economy, Quebec has one of the highest growth rates of gross domestic product (GDP) in Canada. The knowledge sector represents about 30.9% of Quebec's GDP. Quebec's R&D spending is growing faster than that of other Canadian provinces: in 2011 it was equal to 2.63% of GDP, above the European Union average of 1.84%, and was expected to reach the Lisbon Strategy target of devoting 3% of GDP to research and development by 2013. The percentage spent on research and development (R&D) is the highest in Canada and higher than the averages for the Organisation for Economic Co-operation and Development and the G7 countries. Approximately 1.1 million Quebecers work in the field of science and technology.
[Image: a mockup of a Bombardier CSeries under development by Bombardier Aerospace.]

Since 1856, Quebec has established itself as a pioneer of the modern aerospace industry. The province has over 260 aerospace companies that employ about 43,000 people; approximately 62% of the Canadian aerospace industry is based in Quebec.
Quebec is also a major player in several leading-edge industries, including aerospace, information technologies, software and multimedia. Approximately 60% of the production of the Canadian aerospace industry comes from Quebec, where sales totaled C$12.4 billion in 2009. Quebec is one of North America's leading high-tech players; this vast sector encompasses approximately 7,300 businesses and employs more than 145,000 people.
Approximately 180,000 Quebeckers currently work in different fields of information technology. Approximately 52% of Canadian companies in these sectors are based in Quebec, mainly in Montreal and Quebec City. There are currently approximately 115 telecommunications companies established in the province, such as Motorola and Ericsson. About 60,000 people currently work in computer software development, and approximately 12,900 people work in over 110 companies such as IBM, CMC, and Matrox. Quebec is also a leader in the multimedia sector; several companies, such as Ubisoft, have settled in the province since the late 1990s.
The mining industry accounted for approximately 6.3% of Quebec's GDP. It employs approximately 50,000 people in 158 different companies.
The pulp and paper industries generate annual shipments valued at more than $14 billion. The forest products industry ranks second in exports, with shipments valued at almost $11 billion. It is also the main, and in some circumstances only, source of manufacturing activity in more than 250 municipalities in the province. The forest industry has slowed in recent years because of the softwood lumber dispute. This industry employs 68,000 people in several regions of Quebec. This industry accounted for 3.1% of Quebec's GDP.
The agri-food industry plays an important role in the economy of Quebec. It accounts for 8% of Quebec's GDP and generates $19.2 billion. The industry provides 487,000 jobs in agriculture, fisheries, the manufacturing of food, beverages and tobacco, and food distribution.
In 2010, Quebec exports declined by 0.6% compared with the previous year. Exports to the United States remained fairly stable, while those to Europe surged by 46.3% and sales to Asia fell by 12.8%. The unemployment rate in Quebec is around 7%.
Several prominent Quebec companies work within the international market: the pulp and paper producers Cascades and AbitibiBowater, the milk producer Agropur, the transportation manufacturer Bombardier, the information technology company CGI, the Cirque du Soleil, the Couche-Tard convenience stores, the security company Garda, the energy distributor Gaz Métro, the marketing firm Cossette Communication Group, the media and telecommunications company Quebecor, the accounting firm Raymond Chabot Grant Thornton, the Saputo dairy empire and the Vachon bakery, and the engineering and construction group SNC-Lavalin.
The abundance of natural resources gives Quebec an advantageous position on the world market. Quebec stands out particularly in the mining sector, ranking among the top ten areas to do business in mining, and is also notable for the exploitation of its forest resources.
Quebec is remarkable for the natural resources of its vast territory. It has about 30 mines, 158 exploration companies and fifteen primary processing industries. Many metallic minerals are exploited, the principal ones being gold, iron, copper and zinc. Many other substances are extracted, including titanium, asbestos, silver, magnesium, nickel and many other metals and industrial minerals. However, only 40% of the mineral potential of Quebec is currently known. In 2003, the value of mineral production in Quebec reached 3.7 billion Canadian dollars. Moreover, as a major centre of diamond exploration, Quebec has seen an increase in mineral exploration since 2002, particularly in the Northwest as well as in the Otish Mountains and the Torngat Mountains.
The vast majority (90.5%) of Quebec's forests are publicly owned. Forests cover more than half of Quebec's territory, for a total area of nearly 761,100 square kilometres (293,900 sq mi). The Quebec forest area covers seven degrees of latitude.
More than a million lakes and rivers cover Quebec, occupying 21% of the total area of its territory. The aquatic environment is composed of 12.1% of fresh water and 9.2% of saltwater (percentage of total QC area).
Tourism plays an important role in the economy of Quebec. Tourism represents 2.5% of Quebec's GDP and nearly 400,000 people are employed in the tourism sector. Nearly 30,000 businesses are related to this industry, of which 70% are located outside of Montreal and Quebec City. In 2011, Quebec welcomed 26 million foreign tourists, most of them from the United States, France, the United Kingdom, Germany, Mexico and Japan.
The province of Quebec has 22 tourist regions, each with its own geography, history and culture. The capital, Quebec City, is the only fortified city in North America and has its own European cachet. The oldest Francophone city in North America, Quebec City was named a World Heritage Site by UNESCO in 1985 and celebrated its 400th anniversary in 2008. Montreal is the only Francophone metropolis in North America and, by population, the second largest Francophone city in the world after Paris. This major centre of 3.6 million inhabitants is a tapestry of cultures from the world over, with its many neighbourhoods including Chinatown, the Latin Quarter, the Gay Village, Little Italy, Le Plateau-Mont-Royal, the Quartier International and Old Montreal. Montreal has a rich architectural heritage, along with many cultural activities, sports events and festivals.
The province of Quebec has over 400 museums, including the Musée des beaux-arts de Montréal, the oldest museum in Canada, Montreal's largest, and one of the country's most prominent art institutions.
Quebec is also a religious tourism destination. The Basilique Sainte-Anne-de-Beaupré and the Oratoire Saint-Joseph du Mont-Royal are the most popular religious sites in the province. In 2005, the Oratory was added to the List of National Historic Sites of Canada on the occasion of its 100th anniversary. Quebec has over 130 churches and cathedrals, all of which bear witness to the many origins of those who colonized the region.
The banana is an edible fruit – botanically a berry – produced by several kinds of large herbaceous flowering plants in the genus Musa. In some countries, bananas used for cooking may be called plantains, in contrast to dessert bananas. The fruit is variable in size, color, and firmness, but is usually elongated and curved, with soft flesh rich in starch covered with a rind, which may be green, yellow, red, purple, or brown when ripe. The fruits grow in clusters hanging from the top of the plant. Almost all modern edible parthenocarpic (seedless) bananas come from two wild species – Musa acuminata and Musa balbisiana. The scientific names of most cultivated bananas are Musa acuminata, Musa balbisiana, and Musa × paradisiaca for the hybrid Musa acuminata × M. balbisiana, depending on their genomic constitution. The old scientific name Musa sapientum is no longer used.
Musa species are native to tropical Indomalaya and Australia, and are likely to have been first domesticated in Papua New Guinea. They are grown in 135 countries, primarily for their fruit, and to a lesser extent to make fiber, banana wine, and banana beer and as ornamental plants. The world's largest producers of bananas in 2016 were India and China, which together accounted for 28% of total production.
Worldwide, there is no sharp distinction between "bananas" and "plantains". Especially in the Americas and Europe, "banana" usually refers to soft, sweet, dessert bananas, particularly those of the Cavendish group, which are the main exports from banana-growing countries. By contrast, Musa cultivars with firmer, starchier fruit are called "plantains". In other regions, such as Southeast Asia, many more kinds of banana are grown and eaten, so the binary distinction is not useful and is not made in local languages.
The term "banana" is also used as the common name for the plants that produce the fruit. This can extend to other members of the genus Musa, such as the scarlet banana (Musa coccinea), the pink banana (Musa velutina), and the Fe'i bananas. It can also refer to members of the genus Ensete, such as the snow banana (Ensete glaucum) and the economically important false banana (Ensete ventricosum). Both genera are in the banana family, Musaceae.
The banana plant is the largest herbaceous flowering plant. All the above-ground parts of a banana plant grow from a structure usually called a "corm". Plants are normally tall and fairly sturdy, and are often mistaken for trees, but what appears to be a trunk is actually a "false stem" or pseudostem. Bananas grow in a wide variety of soils, as long as the soil is at least 60 cm deep, has good drainage and is not compacted. The leaves of banana plants are composed of a "stalk" (petiole) and a blade (lamina). The base of the petiole widens to form a sheath; the tightly packed sheaths make up the pseudostem, which is all that supports the plant. The edges of the sheath meet when it is first produced, making it tubular. As new growth occurs in the centre of the pseudostem the edges are forced apart. Cultivated banana plants vary in height depending on the variety and growing conditions. Most are around 5 m (16 ft) tall, with a range from 'Dwarf Cavendish' plants at around 3 m (10 ft) to 'Gros Michel' at 7 m (23 ft) or more. Leaves are spirally arranged and may grow 2.7 metres (8.9 ft) long and 60 cm (2.0 ft) wide. They are easily torn by the wind, resulting in the familiar frond look.
When a banana plant is mature, the corm stops producing new leaves and begins to form a flower spike or inflorescence. A stem develops which grows up inside the pseudostem, carrying the immature inflorescence until eventually it emerges at the top. Each pseudostem normally produces a single inflorescence, also known as the "banana heart". (More are sometimes produced; an exceptional plant in the Philippines produced five.) After fruiting, the pseudostem dies, but offshoots will normally have developed from the base, so that the plant as a whole is perennial. In the plantation system of cultivation, only one of the offshoots will be allowed to develop in order to maintain spacing. The inflorescence contains many bracts (sometimes incorrectly referred to as petals) between rows of flowers. The female flowers (which can develop into fruit) appear in rows further up the stem (closer to the leaves) from the rows of male flowers. The ovary is inferior, meaning that the tiny petals and other flower parts appear at the tip of the ovary.
The banana fruits develop from the banana heart, in a large hanging cluster, made up of tiers (called "hands"), with up to 20 fruit to a tier. The hanging cluster is known as a bunch, comprising 3–20 tiers, or commercially as a "banana stem", and can weigh 30–50 kilograms (66–110 lb). Individual banana fruits (commonly known as a banana or "finger") average 125 grams (0.276 lb), of which approximately 75% is water and 25% dry matter (nutrient table, lower right).
The fruit has been described as a "leathery berry". There is a protective outer layer (a peel or skin) with numerous long, thin strings (the phloem bundles), which run lengthwise between the skin and the edible inner portion. The inner part of the common yellow dessert variety can be split lengthwise into three sections that correspond to the inner portions of the three carpels by manually deforming the unopened fruit. In cultivated varieties, the seeds are diminished nearly to non-existence; their remnants are tiny black specks in the interior of the fruit.
Bananas are naturally slightly radioactive, more so than most other fruits, because of their potassium content and the small amounts of the isotope potassium-40 found in naturally occurring potassium. The banana equivalent dose of radiation is sometimes used in nuclear communication to compare radiation levels and exposures.
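The banana equivalent dose can be sketched with a back-of-envelope calculation. The potassium content, specific activity, and dose coefficient below are typical reference values rather than figures from this article, and the result ignores the body's potassium homeostasis (which arguably makes the real added dose close to zero):

```python
# Assumed reference values (not from the source text):
potassium_g = 0.45          # ~0.45 g of potassium in a typical banana
activity_per_g = 31.0       # natural potassium emits ~31 Bq/g via K-40
dose_coeff_sv_per_bq = 6.2e-9  # ingestion dose coefficient for K-40

activity_bq = potassium_g * activity_per_g       # ~14 Bq of K-40 per banana
dose_usv = activity_bq * dose_coeff_sv_per_bq * 1e6

print(f"{dose_usv:.2f} microsieverts")  # close to the ~0.1 uSv BED figure
```

The result lands near the commonly cited banana equivalent dose of about 0.1 microsieverts.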
The genus Musa was created by Carl Linnaeus in 1753. The name may be derived from Antonius Musa, physician to the Emperor Augustus, or Linnaeus may have adapted the Arabic word for banana, mauz. The old biological name Musa sapientum = "Muse of the wise" arose because of homophony in Latin with the classical Muses.
Musa is in the family Musaceae. The APG III system assigns Musaceae to the order Zingiberales, part of the commelinid clade of the monocotyledonous flowering plants. Some 70 species of Musa were recognized by the World Checklist of Selected Plant Families as of January 2013; several produce edible fruit, while others are cultivated as ornamentals.
The classification of cultivated bananas has long been a problematic issue for taxonomists. Linnaeus originally placed bananas into two species based only on their uses as food: Musa sapientum for dessert bananas and Musa paradisiaca for plantains. More species names were added, but this approach proved to be inadequate for the number of cultivars in the primary center of diversity of the genus, Southeast Asia. Many of these cultivars were given names that were later discovered to be synonyms.
In a series of papers published from 1947 onwards, Ernest Cheesman showed that Linnaeus's Musa sapientum and Musa paradisiaca were cultivars and descendants of two wild seed-producing species, Musa acuminata and Musa balbisiana, both first described by Luigi Aloysius Colla. Cheesman recommended the abolition of Linnaeus's species in favor of reclassifying bananas according to three morphologically distinct groups of cultivars – those primarily exhibiting the botanical characteristics of Musa balbisiana, those primarily exhibiting the botanical characteristics of Musa acuminata, and those with characteristics of both. Researchers Norman Simmonds and Ken Shepherd proposed a genome-based nomenclature system in 1955. This system eliminated almost all the difficulties and inconsistencies of the earlier classification of bananas based on assigning scientific names to cultivated varieties. Despite this, the original names are still recognized by some authorities today, leading to confusion.
The accepted scientific names for most groups of cultivated bananas are Musa acuminata Colla and Musa balbisiana Colla for the ancestral species, and Musa × paradisiaca L. for the hybrid M. acuminata × M. balbisiana.
Synonyms of M. × paradisiaca include:
- a large number of subspecific and varietal names of M. × paradisiaca, including M. p. subsp. sapientum (L.) Kuntze
- Musa × dacca Horan.
- Musa × sapidisiaca K.C.Jacob, nom. superfl.
- Musa × sapientum L., and a large number of its varietal names, including M. × sapientum var. paradisiaca (L.) Baker, nom. illeg.
Generally, modern classifications of banana cultivars follow Simmonds and Shepherd's system. Cultivars are placed in groups based on the number of chromosomes they have and which species they are derived from. Thus the Latundan banana is placed in the AAB Group, showing that it is a triploid derived from both M. acuminata (A) and M. balbisiana (B). For a list of the cultivars classified under this system, see "List of banana cultivars".
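Simmonds and Shepherd's genome codes are mechanical to interpret: the code length gives the ploidy, and each letter counts one copy of a parent genome. A small illustrative helper (the function name and output format are my own, not an established API):

```python
from collections import Counter

def describe_genome_group(group: str) -> dict:
    """Interpret a Simmonds-Shepherd genome group code such as 'AAB'.

    'A' counts copies of the M. acuminata genome, 'B' copies of the
    M. balbisiana genome; the length of the code is the ploidy level.
    """
    ploidy_names = {2: "diploid", 3: "triploid", 4: "tetraploid"}
    counts = Counter(group.upper())
    return {
        "ploidy": ploidy_names.get(len(group), f"{len(group)}-ploid"),
        "acuminata_copies": counts.get("A", 0),
        "balbisiana_copies": counts.get("B", 0),
    }

# The Latundan banana belongs to the AAB Group:
print(describe_genome_group("AAB"))
```

Applied to "AAB", this reports a triploid with two acuminata genomes and one balbisiana genome, matching the Latundan example in the text.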
Bananas and plantains
In regions such as North America and Europe, Musa fruits offered for sale can be divided into "bananas" and "plantains", based on their intended use as food. Thus the banana producer and distributor Chiquita produces publicity material for the American market which says that "a plantain is not a banana". The stated differences are that plantains are more starchy and less sweet; they are eaten cooked rather than raw; they have thicker skin, which may be green, yellow or black; and they can be used at any stage of ripeness. Linnaeus made the same distinction between plantains and bananas when first naming two "species" of Musa. Members of the "plantain subgroup" of banana cultivars, most important as food in West Africa and Latin America, correspond to the Chiquita description, having long pointed fruit. They are described by Ploetz et al. as "true" plantains, distinct from other cooking bananas. The cooking bananas of East Africa belong to a different group, the East African Highland bananas, so would not qualify as "true" plantains on this definition.
An alternative approach divides bananas into dessert bananas and cooking bananas, with plantains being one of the subgroups of cooking bananas. Triploid cultivars derived solely from M. acuminata are examples of "dessert bananas", whereas triploid cultivars derived from the hybrid between M. acuminata and M. balbisiana (in particular the plantain subgroup of the AAB Group) are "plantains". Small farmers in Colombia grow a much wider range of cultivars than large commercial plantations. A study of these cultivars showed that they could be placed into at least three groups based on their characteristics: dessert bananas, non-plantain cooking bananas, and plantains, although there were overlaps between dessert and cooking bananas.
In Southeast Asia – the center of diversity for bananas, both wild and cultivated – the distinction between "bananas" and "plantains" does not work, according to Valmayor et al. Many bananas are used both raw and cooked. There are starchy cooking bananas which are smaller than those eaten raw. The range of colors, sizes and shapes is far wider than in those grown or sold in Africa, Europe or the Americas. Southeast Asian languages do not make the distinction between "bananas" and "plantains" that is made in English (and Spanish). Thus both Cavendish cultivars, the classic yellow dessert bananas, and Saba cultivars, used mainly for cooking, are called pisang in Malaysia and Indonesia, kluai in Thailand and chuoi in Vietnam. Fe'i bananas, grown and eaten in the islands of the Pacific, are derived from entirely different wild species than traditional bananas and plantains. Most Fe'i bananas are cooked, but Karat bananas, which are short and squat with bright red skins, very different from the usual yellow dessert bananas, are eaten raw.
In summary, in commerce in Europe and the Americas (although not in small-scale cultivation), it is possible to distinguish between "bananas", which are eaten raw, and "plantains", which are cooked. In other regions of the world, particularly India, Southeast Asia and the islands of the Pacific, there are many more kinds of banana and the two-fold distinction is not useful and not made in local languages. Plantains are one of many kinds of cooking bananas, which are not always distinct from dessert bananas.
Farmers in Southeast Asia and Papua New Guinea first domesticated bananas. Recent archaeological and palaeoenvironmental evidence at Kuk Swamp in the Western Highlands Province of Papua New Guinea suggests that banana cultivation there goes back to at least 5000 BCE, and possibly to 8000 BCE. It is likely that other species were later and independently domesticated elsewhere in Southeast Asia. Southeast Asia is the region of primary diversity of the banana. Areas of secondary diversity are found in Africa, indicating a long history of banana cultivation in the region.
Phytolith discoveries in Cameroon dating to the first millennium BCE triggered an as yet unresolved debate about the date of first cultivation in Africa. There is linguistic evidence that bananas were known in Madagascar around that time. Otherwise, the earliest evidence indicates that African cultivation dates to no earlier than the late 6th century CE. It is likely, however, that bananas were brought at least to Madagascar, if not to the East African coast, during the phase of Malagasy colonization of the island from South East Asia c. 400 CE.
The banana may also have been present in isolated locations elsewhere in the Middle East on the eve of Islam. In 650, Islamic conquerors brought the banana to Palestine, and the spread of Islam was followed by far-reaching diffusion. There are numerous references to the fruit in Islamic texts (such as poems and hadiths) beginning in the 9th century. By the 10th century the banana appears in texts from Palestine and Egypt. From there it diffused into North Africa and Muslim Iberia. During the Middle Ages, bananas from Granada were considered among the best in the Arab world. Today, banana consumption increases significantly in Islamic countries during Ramadan, the month of daylight fasting.
Bananas were certainly grown in the Christian Kingdom of Cyprus by the late medieval period. Writing in 1458, the Italian traveller and writer Gabriele Capodilista wrote favourably of the extensive farm produce of the estates at Episkopi, near modern-day Limassol, including the region's banana plantations.
There are fuzzy bananas whose skins are bubblegum pink; green-and-white striped bananas with pulp the color of orange sherbet; bananas that, when cooked, taste like strawberries. The Double Mahoi plant can produce two bunches at once. The Chinese name of the aromatic Go San Heong banana means 'You can smell it from the next mountain.' The fingers on one banana plant grow fused; another produces bunches of a thousand fingers, each only an inch long.
— Mike Peed, The New Yorker
Plantation cultivation in the Caribbean, Central and South America
In the 15th and 16th centuries, Portuguese colonists started banana plantations in the Atlantic Islands, Brazil, and western Africa. North Americans began consuming bananas on a small scale, at very high prices, shortly after the American Civil War, though it was only in the 1880s that the fruit became more widespread. As late as the Victorian Era, bananas were not widely known in Europe, although they were available. Jules Verne introduced bananas to his readers with detailed descriptions in Around the World in Eighty Days (1872).
The earliest modern plantations originated in Jamaica and the related Western Caribbean Zone, including most of Central America. These plantations combined modern transportation networks of steamships and railroads with the development of refrigeration, which allowed more time between harvesting and ripening. North American shippers like Lorenzo Dow Baker and Andrew Preston, the founders of the Boston Fruit Company, started this process in the 1870s, but railroad builders like Minor C. Keith also participated, eventually culminating in multinational giant corporations such as today's Chiquita Brands International and Dole. These companies were monopolistic and vertically integrated (meaning they controlled growing, processing, shipping and marketing), and usually used political manipulation to build enclave economies (economies that were internally self-sufficient, virtually tax exempt, and export-oriented, contributing very little to the host economy). Their political maneuvers, which gave rise to the term banana republic for states like Honduras and Guatemala, included working with local elites and their rivalries to influence politics, or playing on the international interests of the United States, especially during the Cold War, to keep the political climate favorable to their interests.
Peasant cultivation for export in the Caribbean
The vast majority of the world's bananas today are cultivated for family consumption or for sale on local markets. India is the world leader in this sort of production, but many other Asian and African countries where climate and soil conditions allow cultivation also host large populations of banana growers who sell at least some of their crop.
In the Caribbean, however, peasant-sector growers produce bananas for the world market. The Windward Islands are notable for growing bananas, largely Cavendish cultivars, for an international market, generally in Europe but also in North America. In the Caribbean, and especially in Dominica where this sort of cultivation is widespread, holdings are in the 1–2 acre range. In many cases the farmer earns additional money from other crops, from labor outside the farm, and from a share of the earnings of relatives living overseas. This style of cultivation was popular in the islands because bananas required little labor input and brought welcome extra income. Banana crops are, however, vulnerable to destruction by high winds, such as tropical storms or cyclones.
After the signing of the NAFTA agreements in the 1990s, however, the tide turned against peasant producers. Their costs of production were relatively high and the end of favorable tariff and other supports, especially in the European Economic Community, made it difficult for peasant producers to compete with bananas grown on large plantations by the well-capitalized firms like Chiquita and Dole. Not only did the large companies have access to cheap labor in the areas they worked, but they were better able to afford modern agronomic advances such as fertilization. The "dollar banana" produced by these concerns made the profit margins for peasant bananas unsustainable.
Caribbean countries have sought to redress this problem by providing government supported agronomic services and helping to organize producers' cooperatives. They have also been supporters of the Fair Trade movement which seeks to balance the inequities in the world trade in commodities.
Most farms supply local consumption. Cooking bananas represent a major food source and a major income source for smallhold farmers. In east Africa, highland bananas are of greatest importance as a staple food crop. In countries such as Uganda, Burundi, and Rwanda per capita consumption has been estimated at 45 kilograms (99 lb) per year, the highest in the world.
All widely cultivated bananas today descend from the two wild bananas Musa acuminata and Musa balbisiana. While the original wild bananas contained large seeds, diploid or polyploid cultivars (some being hybrids) with tiny seeds are preferred for human raw fruit consumption. These are propagated asexually from offshoots. The plant is allowed to produce two shoots at a time; a larger one for immediate fruiting and a smaller "sucker" or "follower" to produce fruit in 6–8 months. The life of a banana plantation is 25 years or longer, during which time the individual stools or planting sites may move slightly from their original positions as lateral rhizome formation dictates.
Cultivated bananas are parthenocarpic, i.e. the flesh of the fruit swells and ripens without its seeds being fertilized and developing. Lacking viable seeds, propagation typically involves farmers removing and transplanting part of the underground stem (called a corm). Usually this is done by carefully removing a sucker (a vertical shoot that develops from the base of the banana pseudostem) with some roots intact. However, small sympodial corms (suckers that have not yet elongated) are easier to transplant and can be left out of the ground for up to two weeks; they require minimal care and can be shipped in bulk.
It is not necessary to include the corm or root structure to propagate bananas; severed suckers without root material can be propagated in damp sand, although this takes somewhat longer.
In some countries, commercial propagation occurs by means of tissue culture. This method is preferred since it ensures disease-free planting material. When using vegetative parts such as suckers for propagation, there is a risk of transmitting diseases (especially the devastating Panama disease).
As a non-seasonal crop, bananas are available fresh year-round.
In global commerce in 2009, by far the most important cultivars belonged to the triploid AAA group of Musa acuminata, commonly referred to as Cavendish group bananas. They accounted for the majority of banana exports, despite only coming into existence in 1836. The cultivars Dwarf Cavendish and Grand Nain (Chiquita Banana) gained popularity in the 1950s after the previous mass-produced cultivar, Gros Michel (also an AAA group cultivar), became commercially unviable due to Panama disease, caused by the fungus Fusarium oxysporum which attacks the roots of the banana plant. Cavendish cultivars are resistant to Panama disease, but in 2013 there were fears that the Black sigatoka fungus would in turn make Cavendish bananas unviable.
Ease of transport and shelf life rather than superior taste make the Dwarf Cavendish the main export banana.
Even though it is no longer viable for large scale cultivation, Gros Michel is not extinct and is still grown in areas where Panama disease is not found. Likewise, Dwarf Cavendish and Grand Nain are in no danger of extinction, but they may leave supermarket shelves if disease makes it impossible to supply the global market. It is unclear if any existing cultivar can replace Cavendish bananas, so various hybridisation and genetic engineering programs are attempting to create a disease-resistant, mass-market banana.
Export bananas are picked green, and ripen in special rooms upon arrival in the destination country. These rooms are air-tight and filled with ethylene gas to induce ripening. The vivid yellow color consumers normally associate with supermarket bananas is, in fact, caused by the artificial ripening process. Flavor and texture are also affected by ripening temperature. Bananas are refrigerated to between 13.5 and 15 °C (56.3 and 59.0 °F) during transport. At lower temperatures, ripening permanently stalls, and the bananas turn gray as cell walls break down. The skin of ripe bananas quickly blackens in the 4 °C (39 °F) environment of a domestic refrigerator, although the fruit inside remains unaffected.
"Tree-ripened" Cavendish bananas have a greenish-yellow appearance which changes to a brownish-yellow as they ripen further. Although both flavor and texture of tree-ripened bananas is generally regarded as superior to any type of green-picked fruit, this reduces shelf life to only 7–10 days.
Bananas can be ordered by the retailer "ungassed" (i.e. not treated with ethylene), and may show up at the supermarket fully green. Guineos verdes (green bananas) that have not been gassed will never fully ripen before becoming rotten. Rather than being eaten fresh, these bananas can be used for cooking, as seen in Jamaican cuisine.
A 2008 study reported that ripe bananas fluoresce when exposed to ultraviolet light. This property is attributed to the degradation of chlorophyll leading to the accumulation of a fluorescent product in the skin of the fruit. The chlorophyll breakdown product is stabilized by a propionate ester group. Banana-plant leaves also fluoresce in the same way. Green bananas do not fluoresce. The study suggested that this allows animals which can see light in the ultraviolet spectrum (tetrachromats and pentachromats) to more easily detect ripened bananas.
Storage and transport
Bananas must be transported over long distances from the tropics to world markets. To obtain maximum shelf life, the fruit is harvested before it is mature. It requires careful handling, rapid transport to ports, cooling, and refrigerated shipping. The goal is to prevent the bananas from producing their natural ripening agent, ethylene. This technology allows storage and transport for 3–4 weeks at 13 °C (55 °F). On arrival, bananas are held at about 17 °C (63 °F) and treated with a low concentration of ethylene. After a few days, the fruit begins to ripen and is distributed for final sale. Unripe bananas cannot be held in home refrigerators because they suffer from the cold. Ripe bananas can be held for a few days at home. If bananas are too green, they can be put in a brown paper bag with an apple or tomato overnight to speed up the ripening process.
Carbon dioxide (which bananas produce) and ethylene absorbents extend fruit life even at high temperatures. This effect can be exploited by packing bananas in a polyethylene bag together with an ethylene absorbent, e.g. potassium permanganate, on an inert carrier. The bag is then sealed with a band or string. This treatment has been shown to more than double the fruit's lifespan, extending it to 3–4 weeks without the need for refrigeration.
Production and export
Source: FAOSTAT of the United Nations. Note: some countries produce statistics distinguishing between banana and plantain production, but four of the top six producers do not, requiring comparisons using the combined total for bananas and plantains.
In 2016, world production of bananas and plantains was 148 million tonnes, led by India and China with a combined total (only for bananas) of 28% of global production (table). Other major producers were the Philippines, Ecuador, Indonesia, and Brazil, together accounting for 20% of the world total of bananas and plantains (table).
As reported for 2013, total world exports were 20 million tonnes of bananas and 859,000 tonnes of plantains. Ecuador and the Philippines were the leading exporters with 5.4 and 3.3 million tonnes, respectively, and the Dominican Republic was the leading exporter of plantains with 210,350 tonnes.
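The export shares implied by these 2013 figures can be checked with a little arithmetic (the totals below are taken from the text, not a live data source):

```python
# Rough arithmetic check of the 2013 banana export figures quoted above.
# Quantities are in million tonnes (Mt), as stated in the text.
total_banana_exports = 20.0
exports_mt = {"Ecuador": 5.4, "Philippines": 3.3}

for country, mt in exports_mt.items():
    share = 100 * mt / total_banana_exports
    print(f"{country}: {mt} Mt = {share:.1f}% of world banana exports")
# Ecuador: 5.4 Mt = 27.0%; Philippines: 3.3 Mt = 16.5%
```

So the two leading exporters together accounted for roughly 44% of world banana exports that year.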
Bananas and plantains constitute a major staple food crop for millions of people in developing countries. In most tropical countries, green (unripe) bananas used for cooking represent the main cultivars. Most producers are small-scale farmers either for home consumption or local markets. Because bananas and plantains produce fruit year-round, they provide a valuable food source during the hunger season (when the food from one annual/semi-annual harvest has been consumed, and the next is still to come). Bananas and plantains are important for global food security.
Pests, diseases, and natural disasters
While in no danger of outright extinction, the most common edible banana cultivar Cavendish (extremely popular in Europe and the Americas) could become unviable for large-scale cultivation in the next 10–20 years. Its predecessor 'Gros Michel', discovered in the 1820s, suffered this fate. Like almost all bananas, Cavendish lacks genetic diversity, which makes it vulnerable to diseases, threatening both commercial cultivation and small-scale subsistence farming. Some commentators remarked that those variants which could replace what much of the world considers a "typical banana" are so different that most people would not consider them the same fruit, and blame the decline of the banana on monogenetic cultivation driven by short-term commercial motives.
Panama disease is caused by a fusarium soil fungus (Race 1), which enters the plants through the roots and travels with water into the trunk and leaves, producing gels and gums that cut off the flow of water and nutrients, causing the plant to wilt and exposing the rest of the plant to lethal amounts of sunlight. Prior to 1960, almost all commercial banana production centered on "Gros Michel", which was highly susceptible. Cavendish was chosen as the replacement for Gros Michel because, among resistant cultivars, it produces the highest-quality fruit. However, more care is required for shipping the Cavendish, and its quality compared to Gros Michel is debated.
A deadly new form of Panama disease is now infecting Cavendish. Because all Cavendish plants are genetically identical clones, the cultivar cannot evolve disease resistance. Researchers are examining hundreds of wild varieties for resistance.
Tropical race 4
Tropical race 4 (TR4), a reinvigorated strain of Panama disease, was first discovered in 1993. This virulent form of fusarium wilt has wiped out Cavendish in several southeast Asian countries. It has yet to reach the Americas; however, the soil-based fungi can easily be carried on boots, clothing, or tools. This is how TR4 travels and will be its most likely route into Latin America. Cavendish is highly susceptible to TR4, and over time Cavendish will almost certainly be eliminated from commercial production by this disease. The only known defense to TR4 is genetic resistance, which remains undiscovered as of 2018.
Black sigatoka is a fungal leaf spot disease first observed in Fiji in 1963 or 1964. Black sigatoka (also known as black leaf streak) has spread to banana plantations throughout the tropics from infected banana leaves that were used as packing material. It affects all main cultivars of bananas and plantains (including the Cavendish cultivars), impeding photosynthesis by blackening parts of the leaves, eventually killing the entire leaf. Starved for energy, fruit production falls by 50% or more, and the bananas that do grow ripen prematurely, making them unsuitable for export. The fungus has shown ever-increasing resistance to treatment, with the current expense of treating 1 hectare (2.5 acres) exceeding $1,000 per year. In addition to the expense, there is the question of how long intensive spraying can be environmentally justified. Several resistant cultivars of banana have been developed, but none has yet received commercial acceptance due to taste and texture issues.
In East Africa
With the arrival of black sigatoka, banana production in eastern Africa fell by over 40%. For example, during the 1970s, Uganda produced 15 to 20 tonnes (15 to 20 long tons; 17 to 22 short tons) of bananas per hectare. Today, production has fallen to only 6 tonnes (5.9 long tons; 6.6 short tons) per hectare.
The situation has started to improve as new disease-resistant cultivars have been developed by the International Institute of Tropical Agriculture and the National Agricultural Research Organisation of Uganda (NARO), such as FHIA-17 (known in Uganda as Kabana 3). These new cultivars taste different from the traditionally grown bananas, which has slowed their acceptance by local farmers. However, by adding mulch and manure to the soil around the base of the plant, farmers have substantially increased the yields of the new cultivars in the areas where they have been tried.
The International Institute of Tropical Agriculture and NARO, funded by the Rockefeller Foundation and CGIAR, have started trials of genetically modified bananas resistant to both Black sigatoka and banana weevils, developing cultivars specifically for smallholder and subsistence farmers.
Banana bunchy top virus
Banana bunchy top virus (BBTV) jumps from plant to plant using aphids. It stunts leaves, resulting in a "bunched" appearance. Generally, an infected plant does not produce fruit, although mild strains exist which allow some production. These mild strains are often mistaken for malnourishment, or a disease other than BBTV. There is no cure; however, its effect can be minimized by planting only tissue-cultured plants (in vitro propagation), controlling aphids, and immediately removing and destroying infected plants.
Banana bacterial wilt
Banana bacterial wilt (BBW) is a bacterial disease caused by Xanthomonas campestris pv. musacearum. Originally identified on a close relative of bananas, Ensete ventricosum, in Ethiopia in the 1960s, BBW appeared in Uganda in 2001, affecting all banana cultivars. Since then BBW has been diagnosed across Central and East Africa, including the banana-growing regions of Rwanda, the Democratic Republic of the Congo, Tanzania, Kenya, Burundi, and Uganda.
Nutritional value per 100 g (3.5 oz): energy 371 kJ (89 kcal); dietary fiber 2.6 g. Values are for the edible portion; percentages are roughly approximated using US recommendations for adults. Source: USDA Nutrient Database.
Raw bananas (not including the peel) are 75% water, 23% carbohydrates, 1% protein, and contain negligible fat (table). In a 100 gram amount, bananas supply 89 Calories and are a rich source of vitamin B6, providing 31% of the US recommended Daily Value, and contain moderate amounts of vitamin C, manganese and dietary fiber (table).
Although bananas are commonly thought to supply exceptional potassium content, their actual potassium content is relatively low per typical food serving at only 8% of the US recommended Daily Value (table). Vegetables with higher potassium content than raw dessert bananas (358 mg per 100 grams) include raw spinach (558 mg per 100 grams), baked potatoes without skin (391 mg per 100 grams), cooked soybeans (539 mg per 100 grams), grilled portabella mushrooms (437 mg per 100 grams) and processed tomato sauces (413–439 mg per 100 grams). Raw plantains contain 499 mg potassium per 100 grams. Dehydrated dessert bananas or banana powder contain 1491 mg potassium per 100 grams.
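The percentages above follow directly from the per-100 g figures. A quick arithmetic check (assuming the US adult Daily Value of 4,700 mg for potassium, which is consistent with the 8% figure quoted in the text):

```python
# Back-of-envelope check of the potassium %DV figures quoted above.
# Assumption: US adult Daily Value for potassium = 4,700 mg.
DAILY_VALUE_MG = 4700

foods_mg_per_100g = {
    "raw dessert banana": 358,
    "raw spinach": 558,
    "baked potato (no skin)": 391,
    "cooked soybeans": 539,
    "raw plantain": 499,
}

for food, mg in foods_mg_per_100g.items():
    pct = 100 * mg / DAILY_VALUE_MG
    print(f"{food}: {mg} mg/100 g = {pct:.0f}% DV")
# raw dessert banana: 358 mg/100 g = 8% DV, matching the text
```

Note that these are per-100 g comparisons; an actual serving of each food may weigh considerably more or less than 100 g.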
Food and cooking
Bananas are a staple starch for many tropical populations. Depending upon cultivar and ripeness, the flesh can vary in taste from starchy to sweet, and texture from firm to mushy. Both the skin and inner part can be eaten raw or cooked. The primary component of the aroma of fresh bananas is isoamyl acetate (also known as banana oil), which, along with several other compounds such as butyl acetate and isobutyl acetate, is a significant contributor to banana flavor.
During the ripening process, bananas produce the gas ethylene, which acts as a plant hormone and indirectly affects the flavor. Among other things, ethylene stimulates the formation of amylase, an enzyme that breaks down starch into sugar, influencing the taste of bananas. The greener, less ripe bananas contain higher levels of starch and, consequently, have a "starchier" taste. On the other hand, yellow bananas taste sweeter due to higher sugar concentrations. Furthermore, ethylene signals the production of pectinase, an enzyme which breaks down the pectin between the cells of the banana, causing the banana to soften as it ripens.
Bananas are eaten deep fried, baked in their skin in a split bamboo, or steamed in glutinous rice wrapped in a banana leaf. Bananas can be made into jam. Banana pancakes are popular amongst backpackers and other travelers in South Asia and Southeast Asia. This has elicited the expression Banana Pancake Trail for those places in Asia that cater to this group of travelers. Banana chips are a snack produced from sliced dehydrated or fried banana or plantain, which have a dark brown color and an intense banana taste. Dried bananas are also ground to make banana flour. Extracting juice is difficult, because when a banana is compressed, it simply turns to pulp. Bananas feature prominently in Philippine cuisine, being part of traditional dishes and desserts like maruya, turón, and halo-halo or saba con yelo. Most of these dishes use the Saba or Cardaba banana cultivar. Bananas are also commonly used in cuisine in the South-Indian state of Kerala, where they are steamed (puzhungiyathu), made into curries, fried into chips (upperi), or fried in batter (pazhampori). Pisang goreng, bananas fried with batter similar to the Filipino maruya or Kerala pazhampori, is a popular dessert in Malaysia, Singapore, and Indonesia. A similar dish is known in the United Kingdom and United States as banana fritters.
Banana hearts are used as a vegetable in South Asian and Southeast Asian cuisine, either raw or steamed with dips or cooked in soups, curries and fried foods. The flavor resembles that of artichoke. As with artichokes, both the fleshy part of the bracts and the heart are edible.
Banana leaves are large, flexible, and waterproof. They are often used as ecologically friendly disposable food containers or as "plates" in South Asia and several Southeast Asian countries. In Indonesian cuisine, banana leaves are employed in the cooking methods called pepes and botok, in which banana-leaf packages containing food ingredients and spices are steamed, boiled, or grilled on charcoal. In the South Indian states of Tamil Nadu, Karnataka, Andhra Pradesh and Kerala, food is traditionally served on a banana leaf on every special occasion, with a banana served as part of the meal. Steamed with dishes, the leaves impart a subtle sweet flavor. They often serve as a wrapping for grilling food; the leaves contain the juices, protect food from burning, and add a subtle flavor. In Tamil Nadu, leaves are fully dried and used as packing material for foodstuffs, and are also made into cups to hold liquid foods. In Central American countries, banana leaves are often used as wrappers for tamales.
Banana fiber harvested from the pseudostems and leaves of the plant has been used for textiles in Asia since at least the 13th century. Both fruit-bearing and fibrous varieties of the banana plant have been used. In the Japanese system Kijōka-bashōfu, leaves and shoots are cut from the plant periodically to ensure softness. Harvested shoots are first boiled in lye to prepare fibers for yarn-making. These banana shoots produce fibers of varying degrees of softness, yielding yarns and textiles with differing qualities for specific uses. For example, the outermost fibers of the shoots are the coarsest, and are suitable for tablecloths, while the softest innermost fibers are desirable for kimono and kamishimo. This traditional Japanese cloth-making process requires many steps, all performed by hand.
In a Nepalese system the trunk is harvested instead, and small pieces are subjected to a softening process, mechanical fiber extraction, bleaching and drying. After that, the fibers are sent to the Kathmandu Valley for use in rugs with a silk-like texture. These banana fiber rugs are woven by traditional Nepalese hand-knotting methods, and are sold with RugMark certification.
In India, a banana fiber separator machine has been developed, which takes the agricultural waste of local banana harvests and extracts strands of the fiber.
Banana fiber is also used in the production of banana paper. Banana paper is made from two different parts of the plant: the bark, used mainly for artistic purposes, or the fibers of the stem and unusable fruits. The paper is made either by hand or by an industrial process.
- The song "Yes! We Have No Bananas" was written by Frank Silver and Irving Cohn and originally released in 1923; for many decades, it was the best-selling sheet music in history. Since then the song has been rerecorded several times and has been particularly popular during banana shortages.
- A person slipping on a banana peel has been a staple of physical comedy for generations. An American comedy recording from 1910 features a popular character of the time, "Uncle Josh", claiming to describe his own such incident:
Now I don't think much of the man that throws a banana peelin' on the sidewalk, and I don't think much of the banana peel that throws a man on the sidewalk neither ... my foot hit the bananer peelin' and I went up in the air, and I come down ker-plunk, jist as I was pickin' myself up a little boy come runnin' across the street ... he says, "Oh mister, won't you please do that agin? My little brother didn't see you do it."
- The poet Bashō is named after the Japanese word for a banana plant. The "bashō" planted in his garden by a grateful student became a source of inspiration to his poetry, as well as a symbol of his life and home.
- The cover artwork for the debut album of The Velvet Underground features a banana made by Andy Warhol. On the original vinyl LP version, the design allowed the listener to "peel" this banana to find a pink, peeled phallic banana on the inside.
Religion and popular beliefs
Bananas feature prominently in all important Hindu festivals and occasions. Traditionally in Tamil marriages, banana plants are tied on both sides of the entrance of houses to bless the newlyweds to be useful to each other. The banana is one of three fruits with this significance, the others being mango and jackfruit.
In Thailand, it is believed that certain banana plants may be inhabited by a spirit, Nang Tani, a type of ghost related to trees and similar plants that manifests itself as a young woman. People often tie a length of colored satin cloth around the pseudostem of such banana plants.
There is a long racist history of describing people of African descent as being more like monkeys than humans, and due to the assumption in popular culture that monkeys like bananas, bananas have been used in symbolic acts of hate speech. In April 2014, during a match at Villarreal's stadium, El Madrigal, Dani Alves was targeted by Villareal supporter David Campaya Lleo, who threw a banana at him. Alves picked up the banana, peeled it and took a bite, and the meme went viral on social media in support of him. Racist taunts are an ongoing problem in football. Bananas were hung from nooses around the campus of American University in May 2017 after the student body elected its first black woman student government president.
- The large leaves may be used as umbrellas.
- Banana peel may be capable of extracting heavy-metal contamination from river water, similar to other purification materials. In 2007, banana peel powder was tested as a means of filtration for heavy metals and radionuclides occurring in water produced by the nuclear and fertilizer industries (cadmium contaminant is present in phosphates). When added and thoroughly mixed for 40 minutes, the powder can remove roughly 65% of heavy metals, and the treatment can be repeated.
- Waste bananas can be used to feed livestock.
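The 65% removal figure above invites a simple back-of-envelope estimate. Assuming, purely for illustration, that each repeated treatment removes the same proportion of whatever metal remains (the cited study does not state this), the residual fraction after n passes is (1 - 0.65)^n:

```python
# Back-of-envelope sketch (not from the cited study): if each 40-minute
# banana-peel treatment removes ~65% of the heavy metals, and each
# repetition is assumed equally effective on what remains, the fraction
# of the original load left after n passes is (1 - 0.65)**n.
removal_per_pass = 0.65

def fraction_remaining(n_passes):
    """Fraction of the original heavy-metal load left after n passes."""
    return (1.0 - removal_per_pass) ** n_passes

for n in range(1, 4):
    print(f"after {n} pass(es): {fraction_remaining(n):.1%} remains")
```

Under that assumption, three passes would leave only about 4% of the original contamination.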
Kilawin na pusô ng saging, a Filipino dish using banana flowers
Kaeng yuak is a northern Thai curry made with the core of the banana plant
Banana inflorescence, partially opened
- "Banana from 'Fruits of Warm Climates' by Julia Morton". Hort.purdue.edu. Archived from the original on 2009-04-15. Retrieved 2009-04-16.
- Armstrong, Wayne P. "Identification Of Major Fruit Types". Wayne's Word: An On-Line Textbook of Natural History. Archived from the original on November 20, 2011. Retrieved 2013-08-17.
- "Banana". Merriam-Webster Online Dictionary. Retrieved 2013-01-04.
- "Tracing antiquity of banana cultivation in Papua New Guinea". The Australia & Pacific Science Foundation. Archived from the original on 2007-08-29. Retrieved 2007-09-18.
- Nelson, Ploetz & Kepler 2006.
- "Where bananas are grown". ProMusa. 2013. Retrieved 24 October 2016.
- Picq, Claudine & INIBAP, eds. (2000). Bananas (PDF) (English ed.). Montpellier: International Network for the Improvement of Banana and Plantains/International Plant Genetic Resources Institute. ISBN 978-2-910810-37-5. Retrieved 2013-01-31.
- Stover & Simmonds 1987, pp. 5–9.
- Stover & Simmonds 1987, p. 212.
- Stover & Simmonds 1987, pp. 13–17.
- Nelson, Ploetz & Kepler 2006, p. 26.
- Ploetz et al. 2007, p. 12.
- "Banana Plant Growing Info". Greenearth. Retrieved 2008-12-20.
- Stover & Simmonds 1987, pp. 9–13.
- Angolo, A. (May 15, 2008). "Banana plant with five hearts is instant hit in Negros Occ". ABS-CBN Broadcasting Corporation. Retrieved 2008-05-17.
- Stover & Simmonds 1987, pp. 244–247.
- Office of the Gene Technology Regulator 2008.
- Smith, James P. (1977). Vascular Plant Families. Eureka, Calif.: Mad River Press. ISBN 978-0-916422-07-3.
- Warkentin, Jon (2004). "How to make a Banana Split" (Microsoft Word). University of Manitoba. Retrieved 2014-07-21.
- Simmonds, N.W. (1962). "Where our bananas come from". New Scientist. Reed Business Information. 16 (307): 36–39. ISSN 0262-4079. Retrieved 2011-06-11.
- Brodsky, Allen B (1978). CRC Handbook on Radiation Measurement and Protection. 1. West Palm Beach, FL: CRC Press. p. 620 Table A.3.7.12. ISBN 978-0-8493-3756-7.
- Cass, Stephen & Wu, Corinna (June 4, 2007). "Everything Emits Radiation—Even You: The millirems pour in from bananas, bomb tests, the air, bedmates..." Discover: Science, Technology, and the Future. Retrieved 2011-09-05.
- "banana dose « Physical Insights". Enochthered.wordpress.com. July 25, 2007. Retrieved 2011-10-02.
- "Banana". Online Etymology Dictionary. Retrieved 2010-08-05.
- Search for "Musa", "World Checklist of Selected Plant Families". Royal Botanic Gardens, Kew. Retrieved 2013-01-06.
- Hyam, R. & Pankhurst, R.J. (1995). Plants and their names : a concise dictionary. Oxford: Oxford University Press. p. 329. ISBN 978-0-19-866189-4.
- Bailey, Liberty Hyde (1916). The Standard Cyclopedia of Horticulture. Macmillan. pp. 2076–2079.
- Valmayor et al. 2000.
- Constantine, D.R. "Musa paradisiaca". Archived from the original on 2008-09-05. Retrieved 2014-09-05.
- Porcher, Michel H. (July 19, 2002). "Sorting Musa names". The University of Melbourne. Retrieved 2011-01-11.
- "Musa paradisiaca". World Checklist of Selected Plant Families. Royal Botanic Gardens, Kew. Retrieved 2013-01-06.
- d’Hont, A. L.; Denoeud, F.; Aury, J. M.; Baurens, F. C.; Carreel, F. O.; Garsmeur, O.; Noel, B.; Bocs, S. P.; Droc, G. T.; Rouard, M.; Da Silva, C.; Jabbari, K.; Cardi, C. L.; Poulain, J.; Souquet, M. N.; Labadie, K.; Jourda, C.; Lengellé, J.; Rodier-Goud, M.; Alberti, A.; Bernard, M.; Correa, M.; Ayyampalayam, S.; McKain, M. R.; Leebens-Mack, J.; Burgess, D.; Freeling, M.; Mbéguié-a-Mbéguié, D.; Chabannes, M. & Wicker, T. (2012). "The banana (Musa acuminata) genome and the evolution of monocotyledonous plants". Nature. 488 (7410): 213–217. doi:10.1038/nature11241. PMID 22801500.
- "Our plantains: What is a plantain?". Chiquita. Retrieved 2013-02-02.
- Valmayor et al. 2000, p. 2.
- Ploetz et al. 2007, pp. 18–19.
- Office of the Gene Technology Regulator 2008, p. 1.
- Stover & Simmonds (1987, p. 183). "The Horn and French group of plantain cultivars (AAB) are preferred for cooking purposes over ABB cooking bananas ... As a result the AAB plantains fetch a higher price than the ABB cooking bananas."
- Qi, Baoxiu; Moore, Keith G. & Orchard, John (2000). "Effect of Cooking on Banana and Plantain Texture". Journal of Agricultural and Food Chemistry. 48 (9): 4221–4226. doi:10.1021/jf991301z. PMID 10995341.
- Gibert, Olivier; Dufour, Dominique; Giraldo, Andrés; Sánchez, Teresa; Reynes, Max; Pain, Jean-Pierre; González, Alonso; Fernández, Alejandro & Díaz, Alberto (2009). "Differentiation between Cooking Bananas and Dessert Bananas. 1. Morphological and Compositional Characterization of Cultivated Colombian Musaceae (Musa sp.) in Relation to Consumer Preferences". Journal of Agricultural and Food Chemistry. 57 (17): 7857–7869. doi:10.1021/jf901788x. PMID 19691321.
- Valmayor et al. 2000, pp. 8–12.
- Englberger, Lois (2003). "Carotenoid-rich bananas in Micronesia" (PDF). InfoMusa. 12 (2): 2–5. Retrieved 2013-01-22.
- de Langhe, Edmond & de Maret, Pierre (2004). "Tracking the banana: its significance in early agriculture". In Hather, Jon G. The Prehistory of Food: Appetites for Change. Routledge. p. 372. ISBN 978-0-203-20338-5.
- Denham, T.P.; Haberle, S.G.; Lentfer, C.; Fullagar, R.; Field, J.; Therin, M.; Porch, N. & Winsborough, B. (2003). "Origins of Agriculture at Kuk Swamp in the Highlands of New Guinea". Science. 301 (5630): 189–193. doi:10.1126/science.1085255. PMID 12817084.
- Ploetz et al. 2007, p. 7.
- Watson, Andrew (1983). Agricultural innovation in the early Islamic world. New York: Cambridge University Press. p. 54. ISBN 978-0-521-24711-5.
- Mbida, V.M.; Van Neer, W.; Doutrelepont, H. & Vrydaghs, L. (2000). "Evidence for banana cultivation and animal husbandry during the first millennium BCE in the forest of southern Cameroon" (PDF). Journal of Archeological Science. 27 (2): 151–162. doi:10.1006/jasc.1999.0447.
- Zeller, Friedrich J. (2005). "Herkunft, Diversität und Züchtung der Banane und kultivierter Zitrusarten (Origin, diversity and breeding of banana and cultivated citrus)" (PDF). Journal of Agriculture and Rural Development in the Tropics and Subtropics, Supplement 81 (in German). Retrieved 2014-09-05.
- Lejju, B. Julius; Robertshaw, Peter & Taylor, David (2005). "Africa's earliest bananas?" (PDF). Journal of Archeological Science. Archived from the original (PDF) on 2007-12-02.
- Randrianja, Solofo & Ellis, Stephen (2009). Madagascar: A Short History. University of Chicago Press. ISBN 978-1-85065-947-1.
- Haroon, Jasim Uddin (September 10, 2008). "Banana consumption on rise during Ramadan". The Financial Express. Retrieved 2014-09-05.
- Jennings, Ronald (1992). Christians and Muslims in Ottoman Cyprus and the Mediterranean World, 1571–1640. New York: NYU Press. p. 189. ISBN 978-0-8147-4181-8.
- Gibson, Arthur C. "Bananas and plantains". UCLA. Archived from the original on November 10, 2012. Retrieved September 5, 2014.
- Peed, Mike (January 10, 2011). "We Have No Bananas: Can Scientists Defeat a Devastating Blight?". The New Yorker. pp. 28–34. Retrieved 2011-01-13.
- "Phora Ltd. – History of Banana". Phora-sotoby.com. Archived from the original on 2009-04-16. Retrieved 2009-04-16.
- Koeppel, Dan (2008). Banana: The Fate of the Fruit that Changed the World. New York: Hudson Street Press. pp. 51–53. ISBN 978-0-452-29008-2.
- "Big-business greed killing the banana – Independent". The New Zealand Herald. May 24, 2008. p. A19.
- Office of the Gene Technology Regulator 2008, pp. 7–8.
- Stover & Simmonds 1987, pp. 206–207.
- Castle, Matt (August 24, 2009). "The Unfortunate Sex Life of the Banana". DamnInteresting.com.
- "How bananas are grown". Banana Link. Retrieved 2 September 2016.
- "Banana History – The history of bananas as food". Homecooking.about.com. May 5, 2011. Retrieved 2011-10-02.
- Holmes, Bob (April 20, 2013). "Go Bananas". New Scientist. 218 (2913): 9–41. (Also at Holmes, Bob (April 20, 2013). "Nana from heaven? How our favourite fruit came to be". New Scientist. Retrieved 2013-04-19. Subscription required.)
- "Are bananas about to become extinct?". Retrieved 2012-12-13.
- Ding, Phebe; Ahmad, S.H.; Razak, A.R.A.; Shaari, N. & Mohamed, M.T.M. (2007). "Plastid ultrastructure, chlorophyll contents, and colour expression during ripening of Cavendish banana (Musa acuminata 'Williams') at 17°C and 27°C" (PDF). New Zealand Journal of Crop and Horticultural Science. 35 (2): 201–210. doi:10.1080/01140670709510186. Retrieved 2011-07-16.
- Kirschner, Chanie (January 21, 2016). "4 ways to use green bananas that won't ripen". Mother Nature Network. Retrieved April 30, 2017.
- Moser, Simone; Müller, Thomas; Ebert, Marc-Olivier; Jockusch, Steffen; Turro, Nicholas J. & Kräutler, Bernhard (2008). "Blue luminescence of ripening bananas" (PDF). Angewandte Chemie International Edition. 47 (46): 8954–8957. doi:10.1002/anie.200803189. PMID 18850621. Retrieved 2014-05-16.
- Arias, Pedro (2003). The World Banana Economy, 1985–2002. Food & Agriculture Organization. ISBN 9789251050576.
- "How to Ripen Bananas". Chiquita. Retrieved 2009-08-15.
- Scott, K.J.; McGlasson, W.B. & Roberts, E.A. (1970). "Potassium Permanganate as an Ethylene Absorbent in Polyethylene Bags to Delay the Ripening of Bananas During Storage". Australian Journal of Experimental Agriculture and Animal Husbandry. 10 (43): 237. doi:10.1071/EA9700237.
- Scott, K.J.; Blake, J.R.; Stracha, G.; Tugwell, B.L. & McGlasson, W.B. (1971). "Transport of Bananas at Ambient Temperatures using Polyethylene Bags". Tropical Agriculture (Trinidad). 48: 163–165.
- Scott, K.J. & Gandanegara, S. (1974). "Effect of Temperature on the Storage Life of bananas Held in Polyethylene Bags with an Ethylene Absorbent". Tropical Agriculture (Trinidad). 51: 23–26.
- "Banana and plantain production in 2016, Crops/Regions/World list/Production Quantity (pick lists)". UN Food and Agriculture Organization, Corporate Statistical Database (FAOSTAT). 2017. Retrieved 6 January 2018.
- "Banana and plantain exports in 2013, Crops and livestock products/Regions/World list/Export quantity (pick lists)". UN Food and Agriculture Organization, Corporate Statistical Database (FAOSTAT). 2017. Retrieved 6 January 2018.
- d'Hont, A; Denoeud, F; Aury, J. M; Baurens, F. C; Carreel, F; Garsmeur, O; Noel, B; Bocs, S; Droc, G; Rouard, M; Da Silva, C; Jabbari, K; Cardi, C; Poulain, J; Souquet, M; Labadie, K; Jourda, C; Lengellé, J; Rodier-Goud, M; Alberti, A; Bernard, M; Correa, M; Ayyampalayam, S; McKain, M. R; Leebens-Mack, J; Burgess, D; Freeling, M; Mbéguié-a-Mbéguié, D; Chabannes, M; et al. (2012). "The banana (Musa acuminata) genome and the evolution of monocotyledonous plants". Nature. 488 (7410): 213–7. doi:10.1038/nature11241. PMID 22801500.
- "A future with no bananas?". New Scientist. May 13, 2006. Retrieved 2006-12-09.
- Montpellier, Emile Frison (February 8, 2003). "Rescuing the banana". New Scientist. Retrieved 2006-12-09.
- Barker, C.L. (November 2008). "Conservation: Peeling Away". National Geographic Magazine.
- "Risk assessment of Eastern African Highland Bananas and Plantains against TR4" (PDF). International Banana Symposium. 2012. Archived from the original (PDF) on April 7, 2014. Retrieved April 6, 2014.
- Tushemereirwe, W.; Kangire, A.; Ssekiwoko, F.; Offord, L.C.; Crozier, J.; Boa, E.; Rutherford, M. & Smith, J.J. (2004). "First report of Xanthomonas campestris pv. musacearum on banana in Uganda". Plant Pathology. 53 (6): 802. doi:10.1111/j.1365-3059.2004.01090.x.
- Bradbury, J.F. & Yiguro, D. (1968). "Bacterial wilt of Enset (Ensete ventricosa) incited by Xanthomonas musacearum". Phytopathology. 58: 111–112.
- Mwangi, M.; Bandyopadhyay, R.; Ragama, P. & Tushemereirwe, R.K. (2007). "Assessment of banana planting practices and cultivar tolerance in relation to management of soilborne Xanthomonas campestris pv. musacearum". Crop Protection. 26 (8): 1203–1208. doi:10.1016/j.cropro.2006.10.017.
- Kraft S (4 August 2011). "Bananas! Eating Healthy Will Cost You; Potassium Alone $380 Per Year". Medical News Today. Retrieved 25 October 2014.
- "Ranking of potassium content per 100 grams in common fruits and vegetables". United States Department of Agriculture, National Nutrient Database for Standard Reference, Release 28. November 2016. Retrieved 6 May 2017.
- Taylor, J.S. & Erkek, E. (2004). "Latex allergy: diagnosis and management". Dermatologic Therapy. 17 (4): 289–301. doi:10.1111/j.1396-0296.2004.04024.x. PMID 15327474.
- Fahlbusch, Karl-Georg; Hammerschmidt, Franz-Josef; Panten, Johannes; Pickenhagen, Wilhelm; Schatkowski, Dietmar; Bauer, Kurt; Garbe, Dorothea & Surburg, Horst (2000). "Flavors and Fragrances". Ullmann's Encyclopedia of Industrial Chemistry. 15. Wiley-VCH Verlag GmbH & Co. KGaA. p. 82. doi:10.1002/14356007.a11_141. ISBN 978-3-527-30673-2.
- Mui, Winnie W. Y.; Durance, Timothy D. & Scaman, Christine H. (2002). "Flavor and Texture of Banana Chips Dried by Combinations of Hot Air, Vacuum, and Microwave Processing". Journal of Agricultural and Food Chemistry. 50 (7): 1883–1889. doi:10.1021/jf011218n. "Isoamyl acetate (9.6%) imparts the characteristic aroma typical of fresh bananas (13, 17−20), while butyl acetate (8.1%) and isobutyl acetate (1.4%) are considered to be character impact compounds of banana flavor."
- Salmon, B.; Martin, G. J.; Remaud, G. & Fourel, F. (November–December 1996). "Compositional and Isotopic Studies of Fruit Flavours. Part I. The Banana Aroma". Flavour and Fragrance Journal. 11 (6): 353–359. doi:10.1002/(SICI)1099-1026(199611)11:6<353::AID-FFJ596>3.0.CO;2-9.
- "Fruit Ripening". Retrieved 2010-02-17.
- "Ethylene Process". Archived from the original on 2010-03-24. Retrieved 2010-02-17.
- Manmadhan, Prema (February 28, 2011). "Pazham Pachadi". The Hindu. Chennai, India. Retrieved 2014-01-03.
- Pereira, Ignatius (April 13, 2013). "The taste of Kerala". The Hindu. Chennai, India. Retrieved 2014-01-03.
- Manmadhan, Prema (February 28, 2011). "A snack & a snare". The Hindu. Chennai, India. Retrieved 2014-01-03.
- Plant Breeding Abstracts. Commonwealth Agricultural Bureaux. 1949. p. 162.
- Solomon, C (1998). Encyclopedia of Asian Food (Periplus ed.). Australia: New Holland Publishers. ISBN 0-85561-688-1. Archived from the original on June 3, 2008. Retrieved 2008-05-17.
- Fried banana flowers. Duda Online (December 14, 2009). Retrieved on 2011-10-02.
- Molly Watson. "Banana Flowers". About.com. Retrieved 2014-05-13. See also the link on that page for Banana Flower Salad.
- "Banana". Hortpurdue.edu. Archived from the original on April 15, 2009. Retrieved 2009-04-16.
- Hendrickx, Katrien. The Origins of Banana-fibre Cloth in the Ryukyus, Japan. Leuven University Press. p. 188. ISBN 9789058676146.
- "Traditional Crafts of Japan – Kijoka Banana Fiber Cloth". Association for the Promotion of Traditional Craft Industries. Archived from the original on November 4, 2006. Retrieved December 11, 2006.
- "An Entrepreneur Story – Turning Waste from Banana Harvests into Silk Fiber for the Textile Industry". InfoDev. 5 January 2009.
- Gupta, K. M. (2014-11-13). Engineering Materials: Research, Applications and Advances. CRC Press. ISBN 9781482257984.
- Shaw A (1987). ""Yes! We have No Bananas"/"Charleston" (1923)". The Jazz Age: Popular Music in 1920s. Oxford University Press. p. 132. ISBN 9780195060829.
- Dan Koeppel (2005). "Can This Fruit Be Saved?". Popular Science. Bonnier Corporation. 267 (2): 60–70.
- Stewart, Cal. "Collected Works of Cal Stewart part 2". Uncle Josh in a Department Store (1910). The Internet Archive. Retrieved 2010-11-17.
- Matsuo Basho: the Master Haiku Poet, Kodansha Europe, ISBN 0-87011-553-7
- Bill DeMain (December 11, 2011). "The Stories Behind 11 Classic Album Covers". mental_floss. Archived from the original on October 28, 2012. Retrieved January 6, 2013.
- "Banana Tree Prai Lady Ghost". Thailand-amulets.net. 2012-03-19. Retrieved 2012-08-26.
- "Spirits". Thaiworldview.com. Retrieved 2012-08-26.
- "Pontianak- South East Asian Vampire". Castleofspirits.com. Retrieved 2014-05-13.
- Hund, Wulf D.; Mills, Charles W (29 February 2016). "Comparing Black People to Monkeys has a Long, Dark Simian History". Huffington Post.
- "In the Fight Against Racism, No Bananas, No Monkeys, Please!". RioOnWatch. 6 May 2014.
- "Dani Alves: Joven que lanzó un plátano a Dani Alves quedó en libertad con cargos". La Prensa, Peru. 30 April 2014. Retrieved 9 March 2015.
- "Dani Alves: Barcelona defender eats banana after it lands on pitch". BBC Sport. 28 April 2014. Retrieved 29 April 2014.
- Evans, Richard (August 22, 2016). "Throwing bananas at black sportsmen has been recognised as racism across Europe for decades".
- McGowan, Tom (May 5, 2014). "Bananas and monkey chants: Is racism endemic in Spanish football? - CNN". CNN.
- Fortin, Jacey (3 May 2017). "F.B.I. Helping American University Investigate Bananas Found Hanging From Nooses". The New York Times.
- "Miscellaneous Symbols and Pictographs" (PDF). Retrieved 2015-04-28.
- Minard, Anne (March 11, 2011). "Is That a Banana in Your Water?". National Geographic. Archived from the original on April 26, 2011. Retrieved 2011-03-15.
- Castro, Renata S. D.; Caetano, LaéRcio; Ferreira, Guilherme; Padilha, Pedro M.; Saeki, Margarida J.; Zara, Luiz F.; Martines, Marco Antonio U. & Castro, Gustavo R. (2011). "Banana Peel Applied to the Solid Phase Extraction of Copper and Lead from River Water: Preconcentration of Metal Ions with a Fruit Waste". Industrial & Engineering Chemistry Research. 50 (6): 3446–3451. doi:10.1021/ie101499e.
- Heuzé V., Tran G., Archimède H., Renaudeau D., Lessire M., 2016. Banana fruits. Feedipedia, a programme by INRA, CIRAD, AFZ and FAO. https://www.feedipedia.org/node/683 Last updated on March 25, 2016, 10:36
- Nelson, S.C.; Ploetz, R.C. & Kepler, A.K. (2006). "Musa species (bananas and plantains)". In Elevitch, C.R. Species Profiles for Pacific Island Agroforestry (PDF). Hōlualoa, Hawai'i: Permanent Agriculture Resources (PAR). Retrieved 2013-01-10.
- Office of the Gene Technology Regulator (2008). The Biology of Musa L. (banana) (PDF). Australian Government. Retrieved 2013-01-30.
- Ploetz, R.C.; Kepler, A.K.; Daniells, J. & Nelson, S.C. (2007). "Banana and Plantain: An Overview with Emphasis on Pacific Island Cultivars". In Elevitch, C.R. Species Profiles for Pacific Island Agroforestry (PDF). Hōlualoa, Hawai'i: Permanent Agriculture Resources (PAR). Retrieved 2013-01-10.
- Stover, R.H. & Simmonds, N.W. (1987). Bananas (3rd ed.). Harlow, England: Longman. ISBN 978-0-582-46357-8.
- Valmayor, Ramón V.; Jamaluddin, S.H.; Silayoi, B.; Kusumo, S.; Danh, L.D.; Pascua, O.C. & Espino, R.R.C. (2000). Banana cultivar names and synonyms in Southeast Asia (PDF). Los Baños, Philippines: International Network for Improvement of Banana and Plantain – Asia and the Pacific Office. ISBN 978-971-91751-2-4. Archived from the original (PDF) on 2013-01-08. Retrieved 2013-01-08.
Presentation on theme: "Warm-Up: January 26, 2015 A 24.0 kg mass is attached to the bottom of a vertical spring, causing it to stretch 15.0 cm. What is the spring constant? What."— Presentation transcript:
1 Warm-Up: January 26, 2015. A 24.0 kg mass is attached to the bottom of a vertical spring, causing it to stretch 15.0 cm. What is the spring constant? What is the final potential energy stored in the spring?
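A quick Python check of this warm-up (not part of the original slides): at rest the spring force balances gravity, so k = mg/x, and the stored energy is U = (1/2)kx².

```python
# Worked check of the January 26 warm-up: the hanging mass is in
# equilibrium, so k*x = m*g gives k = m*g/x, and the spring's stored
# potential energy is U = (1/2)*k*x**2.
m = 24.0       # kg
g = 9.80       # m/s^2
x = 0.150      # m (15.0 cm)

k = m * g / x
U = 0.5 * k * x**2
print(f"k = {k:.0f} N/m, U = {U:.1f} J")   # k = 1568 N/m, U = 17.6 J
```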
2 First Semester Grading Scale: 65-75: B; 55-65: C; 45-55: D; 0-45: F
4 Oscillatory Motion and Waves OpenStax Chapter 16
5 Hooke’s Law Revisited. When an elastic object is deformed, it experiences a restoring force. Elastic means that it is capable of returning to its original shape/size. Includes springs, plastic rulers, rubber bands, guitar strings, etc. The restoring force is given by Hooke’s Law, F = -kx.
6 Force Constant. The force constant, k, is related to the rigidity (or stiffness) of a system. Larger k → greater restoring force → stiffer system. Measured in N/m. It is the slope of a Force vs. Displacement graph (if the graph is linear).
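The "slope of a Force vs. Displacement graph" idea can be sketched numerically. The data below are made up for illustration; the fit is a least-squares line through the origin, k = Σ(F·x) / Σ(x²):

```python
# Illustrative only: estimating a spring's force constant k as the slope
# of a (hypothetical) force-vs-displacement data set, using a
# least-squares fit through the origin: k = sum(F*x) / sum(x**2).
displacements_m = [0.010, 0.020, 0.030, 0.040]   # made-up measurements
forces_N        = [0.52,  0.99,  1.51,  2.02]    # made-up measurements

k = sum(F * x for F, x in zip(forces_N, displacements_m)) / \
    sum(x * x for x in displacements_m)
print(f"estimated force constant: {k:.1f} N/m")   # close to 50 N/m
```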
8 You-Try 16.1. What is the force constant for the suspension system of a car that settles 1.20 cm when an 80.0 kg person gets in?
9 You-Try 16.1. What is the force constant for the suspension system of a car that settles 1.20 cm when an 80.0 kg person gets in? 6.53×10^4 N/m
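You-Try 16.1 worked out in Python (an added check, not from the slides): the person's weight mg compresses the suspension by x, so Hooke's law gives k = F/x = mg/x.

```python
# You-Try 16.1: the added weight m*g settles the suspension by x,
# so the force constant is k = m*g/x.
m = 80.0       # kg
g = 9.80       # m/s^2
x = 0.0120     # m (1.20 cm)

k = m * g / x
print(f"k = {k:.3g} N/m")   # about 6.53e4 N/m, matching the slide
```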
10 Energy in Hooke’s Law. Work must be done in any deformation. Assuming no energy is lost to heat, sound, etc., then all work is transferred to potential energy. Since the force increases linearly with displacement, we can easily calculate the potential energy: PE = (1/2)kx^2.
12 You-Try 16.2. How much energy is stored in the spring of a tranquilizer gun that has a force constant of 50.0 N/m and is compressed m? If you neglect friction and the mass of the spring, at what speed will a 2.00 g projectile be ejected from the gun? 0.563 J; 23.7 m/s
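The compression value is missing from this transcript, but the second part of You-Try 16.2 needs only the stored energy: with friction and spring mass neglected, all of U becomes kinetic energy, (1/2)mv² = U, so v = √(2U/m). A quick check (added, not from the slides):

```python
import math

# Second part of You-Try 16.2: all stored spring energy U becomes the
# projectile's kinetic energy, (1/2)*m*v**2 = U, so v = sqrt(2*U/m).
U = 0.563      # J, the stored energy from the first part
m = 2.00e-3    # kg (2.00 g projectile)

v = math.sqrt(2.0 * U / m)
print(f"v = {v:.1f} m/s")   # 23.7 m/s, matching the slide's answer
```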
13 Assignment. Read Section 16.1 of OpenStax textbook (online): pages
15 Warm-Up: January 27, 2015. A force of 20.0 N is applied to the tip of a ruler, causing it to deflect 4.00 cm. What is the force constant of the ruler?
16 Today’s Lab. You will be assigned to a group. Your group’s goal is to determine the relationship between the following: the mass at the end of the string; the length of the string; the period of an oscillation (the time it takes to complete one oscillation); the angle of oscillation. Materials allowed: string (and scissors to cut the string); masses; rulers/metersticks/protractor; stopwatch (cell phone). Be sure to record everything in your lab notebook!
29 Warm-Up: January 29, 2015. A stroboscope is set to flash every 8.00×10^-5 s. What is the frequency of the flashes?
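This warm-up is a one-liner (an added check, not from the slides): frequency is the reciprocal of the period, f = 1/T.

```python
# January 29 warm-up: frequency is the reciprocal of the period, f = 1/T.
T = 8.00e-5            # s between flashes
f = 1.0 / T
print(f"f = {f:.0f} Hz")   # 12500 Hz (12.5 kHz)
```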
30 Simple Harmonic Motion: A Special Periodic Motion. OpenStax section 16.3. AP Physics 1
31 Simple Harmonic Motion. Simple harmonic motion is oscillatory motion for a system where the net force can be described by Hooke’s Law. Pendulums are sometimes considered simple harmonic motion – only for small angles. Equal/symmetric displacement on either side of the equilibrium position. The maximum displacement from equilibrium is called the amplitude.
33 Importance of SHM. Period and frequency are independent of amplitude, so simple harmonic oscillators can be used as clocks. They are a good analogue for waves, including invisible ones (sound, electromagnetic).
34 Period and Frequency For simple harmonic oscillators: What do period and frequency not depend on?
35 You-Try 16.4. If the shock absorbers in a car go bad, then the car will oscillate at the least provocation, such as when going over bumps in the road and after stopping. Calculate the frequency and period of these oscillations for such a car if the car’s mass (including its load) is 900. kg and the force constant of the suspension system is 6.53×10^4 N/m. f = 1.36 Hz; T = 0.738 s
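You-Try 16.4 worked out in Python (added, not from the slides): for a simple harmonic oscillator f = (1/2π)√(k/m) and T = 1/f.

```python
import math

# You-Try 16.4: for a mass-spring oscillator,
# f = (1/(2*pi)) * sqrt(k/m) and T = 1/f.
m = 900.0      # kg, car plus load
k = 6.53e4     # N/m, suspension force constant

f = math.sqrt(k / m) / (2.0 * math.pi)
T = 1.0 / f
print(f"f = {f:.2f} Hz, T = {T:.3f} s")   # 1.36 Hz, 0.738 s
```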
39 Think-Pair-Share. Suppose you pluck a banjo string. You hear a single note that starts out loud and slowly quiets over time. Describe what happens to the period, frequency, and amplitude of the sound waves as the volume decreases.
40 You-Try #18. A diver on a diving board is undergoing simple harmonic motion. Her mass is 55.0 kg and the period of her motion is s. The next diver is a male whose period of simple harmonic motion is 1.05 s. What is his mass if the mass of the board is negligible?
43 Warm-Up: January 30, 2015. If the spring constant of a simple harmonic oscillator is doubled, by what factor will the mass of the system need to change in order for the frequency of the motion to remain the same?
44 Lab Reports Due. One person from each group should collect lab notebooks from all group members and turn them in.
57 Warm-Up: February 2, 2015. Punxsutawney Phil, seer of seers, prognosticator of prognosticators, has an extremely accurate pendulum clock in his secret lair. The period of this clock is exactly six weeks. What is the length of this pendulum clock? Is your answer realistic? Why or why not?
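The groundhog warm-up worked out in Python (added, not from the slides): for a simple pendulum T = 2π√(L/g), so L = g(T/2π)². The result answers the "realistic?" part by itself.

```python
import math

# February 2 warm-up: a simple pendulum has T = 2*pi*sqrt(L/g),
# so the required length is L = g * (T/(2*pi))**2.
g = 9.80                         # m/s^2
T = 6 * 7 * 24 * 3600            # six weeks, in seconds

L = g * (T / (2.0 * math.pi)) ** 2
print(f"L = {L:.2e} m")          # ~3.3e12 m, about 22 Earth-Sun distances
```

A pendulum about 3×10^12 m long is clearly not realistic: that is roughly 22 times the Earth-Sun distance.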
58 American Mathematics Competition. New date and time: Wednesday, February 25. Time TBD.
62 Simple Harmonic Oscillators. Energy is conserved. Maximum speed occurs at the equilibrium position.
63 Pendulums (small θ). Energy is conserved. Maximum speed occurs at the equilibrium position.
64 You-Try 16.6. Suppose that a car is 900. kg and has a suspension system that has a force constant of kN/m. The car hits a bump and bounces with an amplitude of m. What is its maximum velocity (assuming no damping)? 0.852 m/s
65 You-Try 36. Near the top of the Citigroup Center building in New York City, there is an object with mass of 4.00×10^5 kg on springs that have adjustable force constants. Its function is to dampen wind-driven oscillations of the building by oscillating at the same frequency as the building is being driven; the driving force is transferred to the object, which oscillates instead of the entire building. What effective force constant should the springs have to make the object oscillate with a period of s? What energy is stored in the springs for a 2.00 m displacement from equilibrium?
70 You-Try 40. A ladybug sits 12.0 cm from the center of a Beatles album spinning at 33.3 rpm. What is the maximum velocity of its shadow on the wall behind the turntable, if illuminated parallel to the record by the parallel rays of the setting sun? 0.419 m/s
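You-Try 40 worked out in Python (added, not from the slides): the shadow undergoes SHM whose amplitude equals the ladybug's radius r, and its maximum speed equals the bug's constant circular speed, v_max = ωr.

```python
import math

# You-Try 40: the shadow's SHM has amplitude r and angular frequency
# equal to the turntable's rotation rate, so v_max = omega * r.
rpm = 33.3
r = 0.120                            # m

omega = rpm * 2.0 * math.pi / 60.0   # rad/s
v_max = omega * r
print(f"v_max = {v_max:.3f} m/s")    # 0.419 m/s, matching the slide
```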
73 Warm-Up: February 3, 2015. The device pictured entertains infants while keeping them from wandering. The child bounces in a harness suspended from a door frame by a spring. If the spring stretches m while supporting an 8.00 kg child, what is its spring constant? What is the time for one complete bounce? What is the child’s maximum velocity if the amplitude of her bounce is m?
76 Friction is Real! Friction is not always negligible. Damping is the slowing and stopping of oscillations, caused by a non-conservative force (such as friction). Damping is sometimes part of a design (such as a car’s shock absorbers). For small damping, the amplitude slowly decreases while period and frequency are nearly unchanged.
77 Damping. Non-conservative work removes mechanical energy (usually to thermal energy).
78 Large Damping. Large damping causes the period to increase and the frequency to decrease. Very large damping prevents oscillation – the system just returns to equilibrium. Critical damping is the amount of damping that returns a system to equilibrium as quickly as possible. Overdamped systems return to equilibrium more slowly than critically damped ones.
79 Think-Pair-Share. Which is critical damping? Which is overdamping?
80 Applications of Critical Damping: car shock absorbers; bathroom scale.
81 You-Try 16.7. Suppose a kg object is connected to a spring as shown, but there is simple friction between the object and the surface, and the coefficient of kinetic friction is equal to . The force constant of the spring is 50.0 N/m. Use g = 9.80 m/s^2. What is the frictional force between the surfaces? What total distance does the object travel if it is released from rest m from equilibrium?
83 Warm-Up: February 4, 2015. A novelty clock has a kg mass object bouncing on a spring that has a force constant of 1.25 N/m. What is the maximum velocity of the object if the object bounces 3.00 cm above and below the equilibrium position? How many joules of kinetic energy does the object have at its maximum velocity?
85 Forced Oscillations and Resonance. OpenStax section 16.8. AP Physics 1
86 Think-Pair-Share. What do you have to do to swing high on a swing?
87 Natural Frequency. The natural frequency is the frequency at which a system would oscillate if there were no driving and no damping force. If you drive a system at a frequency equal to its natural frequency, its amplitude will increase. This is called resonance. A system being driven at its natural frequency is said to resonate.
88 Resonance. The highest amplitude oscillations occur when the system is driven at its natural frequency and there is minimal damping.
89 Think-Pair-Share. A famous trick involves a performer singing a note toward a crystal glass until the glass shatters. Explain why the trick works in terms of resonance and natural frequency.
91 You-Try 46. A suspension bridge oscillates with an effective force constant of 1.00×10^8 N/m. How much energy is needed to make it oscillate with an amplitude of m? If soldiers march across the bridge with a cadence equal to the bridge’s natural frequency and impart 1.00×10^4 J of energy each second, how long does it take for the bridge’s oscillations to go from m to m amplitude?
94 Warm-Up: February 5, 2015. How much energy must the shock absorbers of a 1200 kg car dissipate in order to damp a bounce that initially has a velocity of m/s at the equilibrium position? Assume the car returns to its original vertical position.
99 Waves. A wave is a disturbance that propagates, or moves from the place it was created. Waves carry energy, not matter. Similar to simple harmonic motion, waves have a period, frequency, and amplitude. Waves also have a wave velocity, the velocity at which the disturbance moves, and a wavelength, λ, the distance between identical parts of the wave.
104 Transverse vs. Longitudinal. In transverse waves, also called shear waves, the direction of energy transfer and the direction of displacement are perpendicular. Examples: strings on musical instruments, light. In longitudinal waves, also called compressional waves, the direction of energy transfer and the direction of displacement are parallel. Example: sound. Some waves, such as ocean waves, are a combination of transverse and longitudinal.
105 You-Try 52. What is the wavelength of the waves you create in a swimming pool if you splash your hand at a rate of 2.00 Hz and the waves propagate at m/s?
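The wave speed for You-Try 52 is missing from this transcript, so the 0.800 m/s below is purely an illustrative stand-in, not the problem's actual value. The relation itself is v = fλ, hence λ = v/f:

```python
# You-Try 52 sketch: wavelength follows from v = f * wavelength,
# so wavelength = v / f. The speed here is a HYPOTHETICAL stand-in,
# since the slide's value was lost in transcription.
f = 2.00             # Hz, rate of hand splashes
v = 0.800            # m/s (hypothetical value, for illustration only)

wavelength = v / f
print(f"wavelength = {wavelength:.3f} m")
```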
106 Superposition and Interference. OpenStax section 16.10. AP Physics 1
107 Superposition. Most real-world waves are combinations of simple waves. When two or more waves arrive at the same point, their disturbances are added together. This is called superposition. In constructive interference, crest meets crest, and trough meets trough, and the resultant is a wave with a larger amplitude. In destructive interference, crest meets trough, and the resulting amplitude is smaller than either original amplitude. Amplitude is zero for pure destructive interference.
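Superposition of two equal-frequency waves can be checked numerically (an added sketch, not from the slides): in phase, the amplitudes add; half a cycle out of phase, they subtract.

```python
import math

# Superposition sketch: two same-frequency sinusoids at one point add.
# The peak value of A1*sin(w*t) + A2*sin(w*t + phi) is
# sqrt(A1**2 + A2**2 + 2*A1*A2*cos(phi)).
A1, A2 = 1.0, 0.6   # illustrative amplitudes

def resultant_amplitude(phase_diff):
    """Amplitude of the sum of two sinusoids differing in phase."""
    return math.sqrt(A1**2 + A2**2 + 2*A1*A2*math.cos(phase_diff))

print(f"{resultant_amplitude(0.0):.1f}")      # 1.6 (constructive: A1 + A2)
print(f"{resultant_amplitude(math.pi):.1f}")  # 0.4 (destructive: A1 - A2)
```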
114 Standing Waves. Sometimes waves superimpose in a way that causes an apparent lack of sideways motion. These waves are called standing waves. Standing waves have points that do not move, called nodes. The points that move the most are called antinodes.
122 Energy in Waves: Intensity. OpenStax section 16.11. AP Physics 1
123 Wave Energy. Wave energy is related to wave amplitude. The intensity, I, of a wave is the power, P, carried through an area A: I = P/A.
124 Intensity. Valid for any flow of energy. Has units of W/m^2. Other intensity units include decibels; 90 decibels corresponds to 10^-3 W/m^2.
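The decibel figure on this slide can be verified (an added check, not from the slides): sound intensity level is β = 10·log10(I/I0) with reference intensity I0 = 10^-12 W/m², the usual threshold of hearing. 90 dB then corresponds to 10^-3 W/m².

```python
import math

# Sound intensity level: beta = 10*log10(I/I0), with I0 = 1e-12 W/m^2
# (the conventional threshold of hearing).
I0 = 1e-12                       # W/m^2

def intensity_from_dB(beta):
    return I0 * 10.0 ** (beta / 10.0)

def dB_from_intensity(I):
    return 10.0 * math.log10(I / I0)

print(f"{intensity_from_dB(90.0):.0e} W/m^2")   # 90 dB -> 1e-03 W/m^2
print(f"{dB_from_intensity(1e-3):.1f} dB")      # 1e-3 W/m^2 -> 90.0 dB
```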
125 You-Try 16.9. The average intensity of sunlight on Earth’s surface is about 700. W/m^2. Calculate the amount of energy that falls on a solar collector having an area of m^2 in 4.00 hours. What intensity would such sunlight have if concentrated by a magnifying glass onto an area 200. times smaller than its own?
|
<urn:uuid:bdc276e6-b9e6-4e7e-8bbe-052d951ee907>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812841.74/warc/CC-MAIN-20180219211247-20180219231247-00021.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.910648763179779,
"score": 3.265625,
"token_count": 2808,
"url": "http://slideplayer.com/slide/4047662/"
}
|
Renewable energy is energy which comes from natural resources such as sunlight, wind, rain, tides, and geothermal heat, which are renewable (naturally replenished). In 2008, about 19% of global final energy consumption came from renewables, with 13% coming from traditional biomass, which is mainly used for heating, and 3.2% from hydroelectricity. New renewables (small hydro, modern biomass, wind, solar, geothermal, and biofuels) accounted for another 2.7% and are growing very rapidly. The share of renewables in electricity generation is around 18%, with 15% of global electricity coming from hydroelectricity and 3% from new renewables.
Wind power is growing at the rate of 30% annually, with a worldwide installed capacity of 158 gigawatts (GW) in 2009, and is widely used in Europe, Asia, and the United States. At the end of 2009, cumulative global photovoltaic (PV) installations surpassed 21 GW and PV power stations are popular in Germany and Spain. Solar thermal power stations operate in the USA and Spain, and the largest of these is the 354 megawatt (MW) SEGS power plant in the Mojave Desert. The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the USA.
While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas, where energy is often crucial in human development. Globally, an estimated 3 million households get power from small solar PV systems. Micro-hydro systems configured into village-scale or county-scale mini-grids serve many areas. More than 30 million rural households get lighting and cooking from biogas made in household-scale digesters. Biomass cookstoves are used by 160 million households.
Climate change concerns, coupled with high oil prices, peak oil, and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization. New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.
|2008 worldwide renewable-energy sources|
Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:
Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
Renewable energy replaces conventional fuels in four distinct areas: power generation, hot water/space heating, transport fuels, and rural (off-grid) energy services:
Power generation. Renewable energy provides 18 percent of total electricity generation worldwide. Renewable power generators are spread across many countries, and wind power alone already provides a significant share of electricity in some areas: for example, 14 percent in the U.S. state of Iowa, 40 percent in the northern German state of Schleswig-Holstein, and 20 percent in Denmark. Some countries get most of their power from renewables, including Iceland (100 percent), Brazil (85 percent), Austria (62 percent), New Zealand (65 percent), and Sweden (54 percent).
Heating. Solar hot water makes an important contribution in many countries, most notably in China, which now has 70 percent of the global total (180 GWth). Most of these systems are installed on multi-family apartment buildings and meet a portion of the hot water needs of an estimated 50–60 million households in China. Worldwide, total installed solar water heating systems meet a portion of the water heating needs of over 70 million households. The use of biomass for heating continues to grow as well. In Sweden, national use of biomass energy has surpassed that of oil. Direct geothermal for heating is also growing rapidly.
Transport fuels. Renewable biofuels have contributed to a significant decline in oil consumption in the United States since 2006. The 93 billion liters of biofuels produced worldwide in 2009 displaced the equivalent of an estimated 68 billion liters of gasoline, equal to about 5 percent of world gasoline production.
Mainstream forms of renewable energy
|The adoption of wind power has been increasing.|
See also: Wind power, Wind farm, and Wind power in the United States
Airflows can be used to run wind turbines. Modern wind turbines range from around 600 kW to 5 MW of rated power, although turbines with rated output of 1.5–3 MW have become the most common for commercial use; the power output of a turbine is a function of the cube of the wind speed, so as wind speed increases, power output increases dramatically. Areas where winds are stronger and more constant, such as offshore and high altitude sites, are preferred locations for wind farms. Typical capacity factors are 20-40%, with values at the upper end of the range in particularly favourable sites.
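The cube-law dependence described above has a large practical effect, which a few lines make concrete:

```python
# Wind turbine power output scales with the cube of wind speed: P ∝ v**3.
def power_ratio(v_new: float, v_ref: float) -> float:
    """Ratio of power outputs at two wind speeds, all else equal."""
    return (v_new / v_ref) ** 3

# Doubling the wind speed yields eight times the power:
print(power_ratio(12.0, 6.0))  # 8.0
# A 25% drop in wind speed costs nearly 60% of the power:
print(power_ratio(6.0, 8.0))   # 0.421875
```

This is why sites with stronger, steadier winds (offshore, high altitude) are so strongly preferred.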
Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand. This could require large amounts of land to be used for wind turbines, particularly in areas of higher wind resources. Offshore resources experience mean wind speeds about 90% greater than those on land, so offshore resources could contribute substantially more energy.
Wind power is renewable and produces no greenhouse gases, such as carbon dioxide and methane, during operation.
Energy in water can be harnessed and used. Since water is about 800 times denser than air, even a slow flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. There are many forms of water energy:
Hydroelectric energy is a term usually reserved for large-scale hydroelectric dams. Examples are the Grand Coulee Dam in Washington State and the Akosombo Dam in Ghana.
Micro hydro systems are hydroelectric power installations that typically produce up to 100 kW of power. They are often used in water rich areas as a remote-area power supply (RAPS). There are many of these installations around the world, including several delivering around 50 kW in the Solomon Islands.
Damless hydro systems derive kinetic energy from rivers and oceans without using a dam.
Ocean energy describes all the technologies to harness energy from the ocean and the sea. This includes marine current power, ocean thermal energy conversion, and tidal power.
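The density advantage noted above can be quantified with the standard kinetic power-density relation P/A = ½ρv³ (not stated in the text, but implicit in the comparison):

```python
# Kinetic power density of a moving fluid: P/A = 0.5 * rho * v**3.
# Densities are standard reference values (sea-level air, fresh water).
def power_density(rho_kg_m3: float, v_m_s: float) -> float:
    return 0.5 * rho_kg_m3 * v_m_s ** 3

water = power_density(1000.0, 1.0)  # a 1 m/s stream of water
air = power_density(1.225, 1.0)     # a 1 m/s breeze
print(water / air)  # ~816: the "about 800 times" density advantage
```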
See also: Solar energy, Solar power, and Solar thermal energy
|Monocrystalline solar cell.|
Solar energy is the energy derived from the sun in the form of solar radiation. Solar powered electrical generation relies on photovoltaics and heat engines. A partial list of other solar applications includes space heating and cooling through solar architecture, daylighting, solar hot water, solar cooking, and high temperature process heat for industrial purposes.
Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert and distribute solar energy. Active solar techniques include the use of photovoltaic panels and solar thermal collectors to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air.
Biomass (plant material) is a renewable energy source because the energy it contains comes from the sun. Through the process of photosynthesis, plants capture the sun's energy. When the plants are burned, they release the sun's energy they contain. In this way, biomass functions as a sort of natural battery for storing solar energy. As long as biomass is produced sustainably, with only as much used as is grown, the battery will last indefinitely.
In general there are two main approaches to using plants for energy production: growing plants specifically for energy use, and using the residues from plants that are used for other things. The best approaches vary from region to region according to climate, soils and geography.
|Information on pump regarding ethanol fuel blend up to 10%, California.|
Liquid biofuel is usually either bioalcohol such as bioethanol or an oil such as biodiesel.
Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. With advanced technology being developed, cellulosic biomass, such as trees and grasses, are also used as feedstocks for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the USA and in Brazil.
Biodiesel is made from vegetable oils, animal fats or recycled greases. Biodiesel can be used as a fuel for vehicles in its pure form, but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe.
Biofuels provided 1.8% of the world's transport fuel in 2008.
Main articles: Geothermal energy, Geothermal heat pump, and Renewable energy in Iceland
|Krafla Geothermal Station in northeast Iceland|
Geothermal energy is energy obtained by tapping the heat of the earth itself, both from kilometers deep into the Earth's crust in volcanically active locations of the globe or from shallow depths, as in geothermal heat pumps in most locations of the planet. It is expensive to build a power station but operating costs are low resulting in low energy costs for suitable sites. Ultimately, this energy derives from heat in the Earth's core.
Three types of power plants are used to generate power from geothermal energy: dry steam, flash, and binary. Dry steam plants take steam out of fractures in the ground and use it to directly drive a turbine that spins a generator. Flash plants take hot water, usually at temperatures over 200 °C, out of the ground, allow it to boil as it rises to the surface, separate the steam phase in steam/water separators, and then run the steam through a turbine. In binary plants, the hot water flows through heat exchangers, boiling an organic fluid that spins the turbine. The condensed steam and remaining geothermal fluid from all three types of plants are injected back into the hot rock to pick up more heat.
The geothermal energy from the core of the Earth is closer to the surface in some areas than in others. Where hot underground steam or water can be tapped and brought to the surface it may be used to generate electricity. Such geothermal power sources exist in certain geologically unstable parts of the world such as Chile, Iceland, New Zealand, United States, the Philippines and Italy. The two most prominent areas for this in the United States are in the Yellowstone basin and in northern California. Iceland produced 170 MW geothermal power and heated 86% of all houses in the year 2000 through geothermal energy. Some 8000 MW of capacity is operational in total.
There is also the potential to generate geothermal energy from hot dry rocks. Holes at least 3 km deep are drilled into the earth. Some of these holes pump water into the earth, while other holes pump hot water out. The heat resource consists of hot underground radiogenic granite rocks, which heat up when there is enough sediment between the rock and the Earth's surface. Several companies in Australia are exploring this technology.
Renewable energy commercialization
Main article: Renewable energy commercialization
Growth of renewables
During the five years from the end of 2004 through 2009, worldwide renewable energy capacity grew at rates of 10–60 percent annually for many technologies. For wind power and many other renewable technologies, growth accelerated in 2009 relative to the previous four years. More wind power capacity was added during 2009 than any other renewable technology. However, grid-connected PV increased the fastest of all renewables technologies, with a 60-percent annual average growth rate for the five-year period.
Selected renewable energy indicators
Selected global indicators 2007 2008 2009
Investment in new renewable capacity (annual) 104 130 150 billion USD
Existing renewables power capacity, including large-scale hydro 1,070 1,140 1,230 GWe
Existing renewables power capacity, excluding large hydro 240 280 305 GWe
Wind power capacity (existing) 94 121 159 GWe
Solar PV capacity (grid-connected) 7.6 13.5 21 GWe
Solar hot water capacity 126 149 180 GWth
Ethanol production (annual) 50 69 76 billion liters
Biodiesel production (annual) 10 15 17 billion liters
Countries with policy targets for renewable energy use 68 75 85
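The growth rates quoted in the text can be recovered from this table as compound annual growth rates (CAGR):

```python
# Compound annual growth rate from the table's start/end values:
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Grid-connected solar PV, 7.6 GWe (2007) -> 21 GWe (2009):
print(round(cagr(7.6, 21, 2), 3))  # ~0.662, i.e. over 60% per year
# Wind power capacity, 94 GWe (2007) -> 159 GWe (2009):
print(round(cagr(94, 159, 2), 3))  # ~0.301, matching the ~30% annual rate
```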
Scientists have advanced a plan to power 100% of the world's energy with wind, hydroelectric, and solar power by the year 2030, recommending renewable energy subsidies and a price on carbon reflecting its cost for flood and related expenses.
All forms of energy are expensive, but as time progresses, renewable energy generally gets cheaper, while fossil fuels generally get more expensive. Al Gore has explained that renewable energy technologies are declining in price for three main reasons:
First, once the renewable infrastructure is built, the fuel is free forever. Unlike carbon-based fuels, the wind and the sun and the earth itself provide fuel that is free, in amounts that are effectively limitless.
Second, while fossil fuel technologies are more mature, renewable energy technologies are being rapidly improved. So innovation and ingenuity give us the ability to constantly increase the efficiency of renewable energy and continually reduce its cost.
Third, once the world makes a clear commitment to shifting toward renewable energy, the volume of production will itself sharply reduce the cost of each windmill and each solar panel, while adding yet more incentives for additional research and development to further speed up the innovation process.
Wind power market
See also: List of onshore wind farms and List of offshore wind farms
At the end of 2009, worldwide wind farm capacity was 159,213 MW, representing an increase of 31 percent during the year, and wind power supplied some 1.3% of global electricity consumption. Wind power accounts for approximately 19% of electricity use in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland.
Top 10 wind power countries
Country Total capacity end 2009 (MW) Total capacity June 2010 (MW)
United States 35,159 36,300
China 26,010 33,800
Germany 25,777 26,400
Spain 19,149 19,500
India 10,925 12,100
Italy 4,850 5,300
France 4,521 5,000
United Kingdom 4,092 4,600
Portugal 3,535 3,800
Denmark 3,497 3,700
Rest of world 21,698 24,500
Total 159,213 175,000
As of November 2010, the Roscoe Wind Farm (781 MW) is the world's largest wind farm. As of September 2010, the Thanet Offshore Wind Project in United Kingdom is the largest offshore wind farm in the world at 300 MW, followed by Horns Rev II (209 MW) in Denmark. The United Kingdom is the world's leading generator of offshore wind power, followed by Denmark.
New generation of solar thermal plants
See also: Solar power plants in the Mojave Desert
Large solar thermal power stations include the 354 megawatt (MW) Solar Energy Generating Systems power plant in the USA, Solnova Solar Power Station (Spain, 150 MW), Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW), and the PS10 solar power tower (Spain, 11 MW).
The solar thermal power industry is growing rapidly with 1.2 GW under construction as of April 2009 and another 13.9 GW announced globally through 2014. Spain is the epicenter of solar thermal power development with 22 projects for 1,037 MW under construction, all of which are projected to come online by the end of 2010. In the United States, 5,600 MW of solar thermal power projects have been announced. In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.
Main article: List of photovoltaic power stations
Photovoltaic production has been increasing by an average of some 20 percent each year since 2002, making it a fast-growing energy technology. At the end of 2009, the cumulative global PV installations surpassed 21,000 megawatts.
As of November 2010, the largest photovoltaic (PV) power plants in the world are the Finsterwalde Solar Park (Germany, 80.7 MW), Sarnia Photovoltaic Power Plant (Canada, 80 MW), Olmedilla Photovoltaic Park (Spain, 60 MW), the Strasskirchen Solar Park (Germany, 54 MW), the Lieberose Photovoltaic Park (Germany, 53 MW), and the Puertollano Photovoltaic Park (Spain, 50 MW). Many of these plants are integrated with agriculture and some use innovative tracking systems that follow the sun's daily path across the sky to generate more electricity than conventional fixed-mounted systems. There are no fuel costs or emissions during operation of the power stations.
Topaz Solar Farm is a proposed 550 MW solar photovoltaic power plant which is to be built northwest of California Valley in the USA at a cost of over $1 billion. High Plains Ranch is a proposed 250 MW solar photovoltaic power plant which is to be built on the Carrizo Plain, northwest of California Valley.
However, when it comes to renewable energy systems and PV, it is not just large systems that matter. Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.
Use of ethanol for transportation
Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter. Brazil's ethanol fuel program uses modern equipment and cheap sugar cane as feedstock, and the residual cane-waste (bagasse) is used for process heat and power. There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10, and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, DaimlerChrysler, and GM are among the automobile companies that sell “flexible-fuel” cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). By mid-2006, there were approximately six million E85-compatible vehicles on U.S. roads. The challenge is to expand the market for biofuels beyond the farm states where they have been most popular to date. Flex-fuel vehicles are assisting in this transition because they allow drivers to choose different fuels based on price and availability. The Energy Policy Act of 2005, which calls for 7.5 billion gallons of biofuels to be used annually by 2012, will also help to expand the market.
Geothermal energy commercialization
The International Geothermal Association (IGA) has reported that 10,715 megawatts (MW) of geothermal power in 24 countries is online, which is expected to generate 67,246 GWh of electricity in 2010. This represents a 20% increase in geothermal power online capacity since 2005. IGA projects this will grow to 18,500 MW by 2015, due to the large number of projects presently under consideration, often in areas previously assumed to have little exploitable resource.
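The IGA figures above imply a high capacity factor, consistent with geothermal's role as a baseload source:

```python
# Implied capacity factor = annual energy / (capacity * hours in a year).
capacity_gw = 10715 / 1000  # 10,715 MW of geothermal power online
annual_gwh = 67246          # expected generation in 2010 (GWh)
hours_per_year = 8760

capacity_factor = annual_gwh / (capacity_gw * hours_per_year)
print(round(capacity_factor, 2))  # 0.72 -- near-baseload operation
```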
In 2010, the United States led the world in geothermal electricity production with 3,086 MW of installed capacity from 77 power plants; the largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California. The Philippines follows the US as the second highest producer of geothermal power in the world, with 1,904 MW of capacity online; geothermal power makes up approximately 18% of the country's electricity generation.
Geothermal (ground source) heat pumps represented an estimated 30 GWth of installed capacity at the end of 2008, with other direct uses of geothermal heat (i.e., for space heating, agricultural drying and other uses) reaching an estimated 15 GWth. As of 2008, at least 76 countries use direct geothermal energy in some form.
Wave farms expansion
Portugal now has the world's first commercial wave farm, the Agucadoura Wave Park, officially opened in September 2008. The farm uses three Pelamis P-750 machines generating 2.25 MW. Initial costs are put at €8.5 million. A second phase of the project is now planned to increase the installed capacity to 21 MW using a further 25 Pelamis machines.
Funding for a wave farm in Scotland was announced in February 2007 by the Scottish Government, at a cost of over £4 million, as part of a £13 million funding package for ocean power in Scotland. The farm will be the world's largest, with a capacity of 3 MW generated by four Pelamis machines.
Main article: Renewable energy in developing countries
Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.
Biomass cookstoves are used by 40 percent of the world’s population. These stoves are being manufactured in factories and workshops worldwide, and more than 160 million households now use them. More than 30 million rural households get lighting and cooking from biogas made in household-scale digesters. An estimated 3 million households get power from small solar PV systems. Micro-hydro systems configured into village-scale or county-scale mini-grids serve many areas.
Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts, are sold in Kenya annually.
Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty alleviation by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.
Industry and policy trends
See also: Renewable energy industry and Renewable energy policy
Global renewable energy investment growth (1995-2007)
Global revenues for solar photovoltaics, wind power, and biofuels expanded from $76 billion in 2007 to $115 billion in 2008. New global investments in clean energy technologies expanded by 4.7 percent from $148 billion in 2007 to $155 billion in 2008. U.S. President Barack Obama's American Recovery and Reinvestment Act of 2009 includes more than $70 billion in direct spending and tax credits for clean energy and associated transportation programs. Clean Edge suggests that the commercialization of clean energy will help countries around the world pull out of the current economic malaise. Leading renewable energy companies include First Solar, Gamesa, GE Energy, Q-Cells, Sharp Solar, Siemens, SunOpta, Suntech, and Vestas.
The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed on January 26, 2009, by 75 countries signing the charter of IRENA. As of March 2010, IRENA had 143 member states, all considered founding members, of which 14 had also ratified its statute.
Renewable energy policy targets exist in some 73 countries around the world, and public policies to promote renewable energy use have become more common in recent years. At least 64 countries have some type of policy to promote renewable power generation. Mandates for solar hot water in new construction are becoming more common at both national and local levels. Mandates for blending biofuels into vehicle fuels have been enacted in 17 countries.
New and emerging renewable energy technologies
New and emerging renewable energy technologies are still under development and include cellulosic ethanol, hot-dry-rock geothermal power, and ocean energy. These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.
See also: Cellulosic ethanol commercialization
Companies such as Iogen, Broin, and Abengoa are building refineries that can process biomass and turn it into ethanol, while companies such as Diversa, Novozymes, and Dyadic are producing enzymes which could enable a cellulosic ethanol future. The shift from food crop feedstocks to waste residues and native grasses offers significant opportunities for a range of players, from farmers to biotechnology firms, and from project developers to investors.
Selected Commercial Cellulosic Ethanol Plants in the U.S.
(Operational or under construction)
Company Location Feedstock
Abengoa Bioenergy Hugoton, KS Wheat straw
BlueFire Ethanol Irvine, CA Multiple sources
Gulf Coast Energy Mossy Head, FL Wood waste
Mascoma Lansing, MI Wood
POET LLC Emmetsburg, IA Corn cobs
Range Fuels Treutlen County, GA Wood waste
SunOpta Little Falls, MN Wood chips
Xethanol Auburndale, FL Citrus peels
Systems to harvest utility-scale electrical power from ocean waves have recently been gaining momentum as a viable technology. The potential for this technology is considered promising, especially on west-facing coasts with latitudes between 40 and 60 degrees:
In the United Kingdom, for example, the Carbon Trust recently estimated the extent of the economically viable offshore resource at 55 TWh per year, about 14% of current national demand. Across Europe, the technologically achievable resource has been estimated to be at least 280 TWh per year. In 2003, the U.S. Electric Power Research Institute (EPRI) estimated the viable resource in the United States at 255 TWh per year (6% of demand).
The world's first commercial tidal power station was installed in 2007 in the narrows of Strangford Lough in Ireland. The 1.2 megawatt underwater tidal electricity generator, part of Northern Ireland's Environment & Renewable Energy Fund scheme, takes advantage of the fast tidal flow (up to 4 metres per second) in the lough. Although the generator is powerful enough to power a thousand homes, the turbine has minimal environmental impact, as it is almost entirely submerged, and the rotors pose no danger to wildlife as they turn quite slowly.
Ocean thermal energy conversion (OTEC) uses the temperature difference that exists between deep and shallow waters to run a heat engine.
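Because the ocean temperature difference is small, the thermodynamic ceiling on OTEC efficiency is low. A sketch using the Carnot limit, with typical tropical temperatures assumed for illustration (the text gives no figures):

```python
# Carnot limit for a heat engine: eta = 1 - T_cold / T_hot (temperatures in K).
# 25 °C surface water and 5 °C deep water are assumed illustrative values.
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(round(carnot_efficiency(25.0, 5.0), 3))  # 0.067 -- under 7% at best
```

Real OTEC plants achieve only a fraction of this limit, which is why they need very large water flows.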
Enhanced Geothermal Systems
Main article: Enhanced Geothermal Systems
Enhanced Geothermal Systems (EGS) are a new type of geothermal power technology that does not require natural convective hydrothermal resources. The vast majority of geothermal energy within drilling reach is in dry and non-porous rock. EGS technologies "enhance" and/or create geothermal resources in this "hot dry rock" (HDR) through hydraulic stimulation.
EGS / HDR technologies, like hydrothermal geothermal, are expected to be baseload resources which produce power 24 hours a day like a fossil plant. Distinct from hydrothermal, HDR / EGS may be feasible anywhere in the world, depending on the economic limits of drill depth. Good locations are over deep granite covered by a thick (3–5 km) layer of insulating sediments which slow heat loss.
There are HDR and EGS systems currently being developed and tested in France, Australia, Japan, Germany, the U.S. and Switzerland. The largest EGS project in the world is a 25 megawatt demonstration plant currently being developed in the Cooper Basin, Australia. The Cooper Basin has the potential to generate 5,000–10,000 MW.
Nanotechnology thin-film solar panels
Solar power panels that use nanotechnology, which can create circuits out of individual silicon molecules, may cost half as much as traditional photovoltaic cells, according to executives and investors involved in developing the products. Nanosolar has secured more than $100 million from investors to build a factory for nanotechnology thin-film solar panels.
Renewable energy debate
Main article: Renewable energy debate
Renewable electricity production, from sources such as wind power and solar power, is sometimes criticized for being variable or intermittent. However, the International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.
There have been "not in my back yard" (NIMBY) concerns relating to the visual and other impacts of some wind farms, with local residents sometimes fighting or blocking construction. In the USA, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive and there are many examples of community wind farm developments. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.
|
<urn:uuid:ef720016-cdfe-4551-a8f0-973fb02e1a19>
|
{
"dataset": "HuggingFaceFW/fineweb-edu",
"dump": "CC-MAIN-2018-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814393.4/warc/CC-MAIN-20180223035527-20180223055527-00021.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9325766563415527,
"score": 3.640625,
"token_count": 6257,
"url": "http://profilefacts.blogspot.com/2011/01/renewable-energy.html"
}
|
SeaSat Mission — the world's first satellite mission dedicated to oceanography
SeaSat (also referred to as SeaSat-A prior to launch and SeaSat-1 after launch) is a pioneering Earth observation experimental mission of NASA/JPL; the first ever civilian spaceborne imaging radar instrument (SAR) was flown on SeaSat in 1978. During its brief 110-day lifetime (end of mission due to a malfunction), SeaSat collected more information about the oceans than had been acquired in the previous 100 years of shipboard research. It established satellite oceanography and proved the viability of imaging radar for studying our planet. Most importantly, it spawned many subsequent Earth remote sensing satellites and instruments at JPL and elsewhere that track changes in Earth's oceans, land and ice. Its advances were also subsequently applied to missions to other planets.
The SeaSat program had three main objectives:
1) to demonstrate techniques to monitor Earth's oceanographic phenomena and features from space on a global scale
2) to provide timely oceanographic data to scientists studying marine phenomena, and to users of the oceans as a resource (ocean shippers, fishermen, marine geologists, etc.)
3) to determine the key features of an operational full-time ocean-monitoring system.
The SeaSat mission pioneered satellite oceanography and proved the viability of imaging radar for studying our planet. The SAR instrument provided a wealth of information on such diverse ocean phenomena as surface waves, internal waves, currents, upwelling, shoals, sea ice, wind, and rainfall. The SAR did not produce the first global view of ocean circulation: oceanographers had global maps of the mean surface geostrophic currents (commonly called the ocean circulation) years before SeaSat. The ALT instrument, however, produced the first global view of the variability of the surface geostrophic currents. Beyond the oceans, SeaSat's synthetic aperture radar instrument provided spectacular images of Earth's land surfaces. Even though the satellite was short-lived and the SeaSat program was discontinued, it demonstrated the immense potential of the SAR observation technology, generating great interest in satellite active microwave remote sensing. SeaSat SAR observations amounted to a total area of about 126 million km2 (of the northern hemisphere), including multiple coverage of many regions. 1) 2) 3) 4) 5) 6) 7)
Background: NASA began planning for the Seasat satellite mission in 1972, the first multisensor spacecraft dedicated specifically to ocean observations. Specific objectives were to collect data on sea-surface winds, sea-surface temperatures, wave heights, ocean topography, internal waves, atmospheric water, and sea ice properties.
Requirements for Seasat were generated by a User Working Group (UWG), which included the Office of the Oceanographer of the U.S. Navy, Fleet Numerical Weather Center in Monterey, CA, Navy Surface Weapons Center in Dahlgren, VA, Naval Research Laboratory, the Johns Hopkins University Applied Physics Laboratory (APL), the Office of Naval Research, and the Navy/NOAA Joint Ice Center. NOAA was represented on the UWG by the many NOAA laboratories around the nation, including the NOAA Atlantic Oceanographic and Meteorological Laboratory (AOML) in Miami, FL, the NOAA weather center in Suitland, MD, the NOAA Pacific Marine Environmental Laboratory in Seattle, WA, and NOAA's Marine Fisheries office in Bay St Louis, MS; the Defense Mapping Agency, United States Geological Survey (USGS), the U.S. Coast Guard, the Department of the Interior and the Department of Agriculture (AgriStars program) were also represented on the UWG.
The SeaSat project was managed by JPL for NASA, with significant participation from NASA's Goddard Space Flight Center, Greenbelt, MD; NASA's Wallops Flight Facility, Wallops Island, VA; NASA's Langley Research Center, Hampton, VA; NASA's Glenn Research Center, Cleveland, Ohio; Johns Hopkins University Applied Physics Laboratory, Laurel, MD; Lockheed Missiles and Space Systems, Sunnyvale, CA; and the National Oceanic and Atmospheric Administration (NOAA), Washington, D.C.
The SeaSat-A Project is a proof-of-concept mission whose objectives include demonstration of techniques for global monitoring of oceanographic phenomena and features, provision of oceanographic data for both application and scientific users, and the determination of key features of an operational ocean dynamics monitoring system.
The specific mission objectives are:
• Provide an evaluation of sensor capabilities to measure the following geophysical parameters:
- Wave heights
- Wave length and direction
- Surface wind speed and direction
- Ocean surface temperature
- Atmospheric water content (liquid and vapor)
- Sea ice morphology and dynamics
• Provide oceanographic data for participating users and, following geophysical evaluation, for distribution to the general user community:
- Predictions of wave height, directional spectra and wind fields for ship routing, ship design, storm damage avoidance, coastal disaster warning, coastal protection and development, and deep water port development.
- Maps of current patterns and temperatures for ship routing, fishing, pollution dispersion and iceberg hazard avoidance.
- Charts of ice fields and leads for navigation and weather prediction.
- Charts of the ocean geoid fine structure.
• Determine key features of an operational ocean dynamics monitoring system including:
- Sensor operation
- Global sampling
- Production of geophysical data records
- Near real-time data handling
- User operations interaction
- Precision orbit determination
• Demonstrate the economic and social benefits of user agency products.
The program was designed as a US Interagency program with NASA as the lead agency. All data analysis would be funded by agencies other than NASA. Commercial use was an important part of the program from the very beginning.
1) Relationship between NASA and User Agencies - General (from the SeaSat Program Plan - 1973): (Ref. 9)
• The program will be closely supported by and of considerable benefit to a substantial number of agencies and government and commercial organizations.
• The program has been considered since its inception to be a joint NASA/Interagency Cooperative effort ... requiring interagency support and cooperation.
• Large-scale data analyses required by the user community will be funded and performed in the interested organizations.
2) Relationship Between NASA and User Agencies - Particular (from Mission Operation Report):
• Naval Fleet Numerical Weather Central will process and distribute real time weather data to support weather forecasting and maritime experiments.
• NOAA Environmental Data Service will archive and distribute geophysical data processed by NASA for non-real-time studies.
• NOAA, Navy, Coast Guard, Geological Survey, and the NSF (National Science Foundation) will fund scientific experiments based on SeaSat data.
• Experimental teams composed of scientists supported by these agencies will evaluate the geophysical performance of the SeaSat instruments.
Unfortunately, this co-operation was mostly unfunded and never happened. NOAA never received congressional approval and funding for its part of the mission. After the early failure of the satellite, funds programmed for satellite operations were reprogrammed by W. Stanley Wilson to fund a SeaSat Data Utilization Project.
SeaSat managers, with advice from scientists, released data to all with a demonstrated interest in evaluating data. This led to very rapid understanding of measurement accuracies, usefulness of data, and scientific and applied results.
1) Commercial users were involved initially through the SeaSat UWG (User's Working Group)
2) Extensive cost-benefit studies commissioned by NASA indicated significant economic benefits from SeaSat
3) As a result, commercial users were interested in documenting usefulness of SeaSat data for their operations
4) Commercial demonstration program begun in 1977 with 15 companies participating.— SeaSat Success Statement.
1) 16 users completed 18 two-year studies at no cost to NASA beyond cost of supplying data. They found:
- SeaSat data can significantly improve forecast of winds, waves, and the location of storms
- Improved forecasts lead to better routing of ships with substantial savings of operating costs
- SeaSat data are useful for establishing the climatology of offshore areas leading to better selection of offshore equipment
- Fish catch and oceanic conditions are correlated, hence SeaSat data lead to more efficient fisheries.
2) Usefulness of the results has led to a continuation of the program.
- Real time satellite data are now being provided to commercial users through the Fleet Numerical Oceanographic Center.—SeaSat Success Statement.
Government Agencies Involved in the SeaSat Program:
• DOC (Department of Commerce)
- NOAA (National Oceanic and Atmospheric Administration)
- Maritime Administration
• DoD (Department of Defense)
- Director of Defense Research and Engineering
- NRL (Naval Research Laboratory)
- DMA (Defense Mapping Agency)
- Fleet Numerical Weather Center
- Naval Oceanographic Office
- Coastal Engineering Research Center
- Corps of Engineers.
• DOT (Department of Transportation)
- Coast Guard
• NSF (National Science Foundation)
• DOI (Department of Interior)
- Geodetic Survey
- Geological Survey
• DOA (Department of Agriculture).
• Initial requirements: ''Oceanography From Space" conference at Woods Hole, August 1964
• Improved requirements: "The Terrestrial Environment: Solid Earth and Ocean Physics: Application of Space and Astronomic Technique" at the Williamstown Conference, MA, August 1969
• SeaSat Users Working Group Formed, 1972
• SeaSat Phase-A reports: Applied Physics Laboratory, Goddard Space Flight Center, Jet Propulsion Laboratory, July 1973
• SeaSat Phase-B Reports (same laboratories), August 1974
• Program and Project Start: January 1975
• Launch and Satellite Operation Period: 26 June - 10 October 1978
• SeaSat Data Utilization Project: 1979-1982.
• Program Office: NASA Office of Applications, Special Programs Division; later the Office of Space and Terrestrial Applications, Earth and Oceans Division.
• Project Office: Jet Propulsion Laboratory
• Satellite Prime Contractor: Lockheed Missiles and Space Company, Sunnyvale, California.
• Data Processing:
- Sensor and Geophysical Data Records: Jet Propulsion Laboratory
- Operational and Precise Orbits: Goddard Space Flight Center
- Real time processing: Navy Fleet Numerical Weather Central
- Archives: NOAA National Environmental Satellite Data and Information Service and Jet Propulsion Laboratory Pilot Ocean Data System
• Geophysical Evaluation
- Initially by experiment teams funded by user agencies
- After end of mission by the SeaSat Data Utilization Project.
Figure 1: Overview of the functional SeaSat-A organization (image credit: NASA)
Table 1: Overview of the SeaSat program history -25 years after the launch of SeaSat 11)
The spacecraft was designed and developed by LMSC (Lockheed Missiles and Space Company) as prime contractor and by Ball Aerospace Systems of Boulder, CO. The satellite utilized the Agena upper stage to provide satellite bus functions, including power, telemetry (S-band), attitude control, and command and control functions. A sensor package containing the mission's five experiments was attached to the Agena, as were the experiments' antenna systems. Seasat was three-axis stabilized using momentum wheels and horizon sensors. The vehicle was oriented with the SAR and other antennas remaining nadir pointing and the Agena rocket nozzle and solar panels zenith pointing. S/C size: 21 m length, 1.5 m diameter, total S/C mass = 2290 kg. The spacecraft design life was 1 year, with expendables (including orbit-adjust capability) sized for three years. 12) 13) 14)
Application: Ice and ocean monitoring (sea-surface winds, sea-surface temperatures, wave heights, internal waves, sea-ice features, ocean features, ocean topography, and the marine geoid), land use, geology, forestry, and mapping.
Figure 2: Artist's view of the deployed SeaSat-A spacecraft in orbit (image credit: NASA/JPL)
Figure 3: Alternate view of SeaSat (image credit: Lockheed Martin, NASA)
Figure 4: Line drawing of the SeaSat spacecraft (image credit: DLR)
The Agena, as the second stage of the Atlas-F/Agena launch vehicle, serves as the satellite bus providing attitude control, power, guidance, telemetry and command functions. The sensor module is tailored specifically for the SeaSat payload of five microwave instruments and their antennas. Together, the two modules are ~ 21 m long with a maximum diameter of 1.5 m without appendages deployed. Atop the Atlas booster rocket, the entire satellite is enclosed within a 3 m diameter nose fairing which matches the diameter of the Atlas. After burnout of the Agena stage and injection into the nominal orbit, SeaSat has a mass of nearly 2300 kg. 15)
In orbit, the satellite appears to "stand on end" (Figure 6) like a pencil, the sensor and communications antennas pointing toward nadir and the Agena rocket nozzle and solar panels pointing opposite toward space. The dominant feature of the SeaSat spacecraft is the SAR antenna, a 2.1 m x 10.7 m planar array deployed perpendicular to the satellite body.
ACS (Attitude Control Subsystem): The spacecraft is 3-axis stabilized using a momentum wheel/horizon sensing system to accurately point the sensors at Earth's surface. Hot gas jets provide thrust for adjusting the orbit and for attitude control during Agena burn and orbit adjustment periods.
Following orbital insertion, ACS orients the spacecraft from nose-forward to nose-down and provides stabilization during deployment of the antennas and solar arrays. These functions are performed using hydrazine reaction control thrusters for attitude control and a gyro reference unit as one attitude reference, augmented by horizon sensors for a short period prior to nose-down.
The payload pointing requirements include control to an accuracy of 0.5º in roll, pitch and yaw and telemetered data on the spacecraft orientation to an accuracy of 0.2º in all axes. Scanwheels provide pitch and roll references viewing the Earth's horizon and pitch and roll fine control. The yaw attitude is maintained by gyrocompassing. Sun sensor data is used to determine accurately the yaw orientation, but is not used for control. The scanwheels are mounted at the lower end of the sensor module near all of the critical antennas. The pitch momentum wheel and roll reaction wheel are located in a support structure above the sensor module. Excess momentum accumulated in the wheels is removed by providing adjustable torque on the satellite using electromagnets which interact with the Earth's magnetic field.
EPS (Electrical Power Subsystem): EPS was designed to provide power at 28 ±4 VDC to the spacecraft subsystems and to the payload using solar arrays and rechargeable batteries. The basic design philosophy was to provide functional redundancy. A capability was provided for component isolation (removal from circuit), cross-strapping, charge control (automatic and manual); in addition, system protection was implemented via bypass functions by commandable relays (Ref. 18).
The primary energy source for the spacecraft was the SA (Solar Array) which consisted of two wings mounted on either side of the aft rack. With the vehicle in the normal orbital attitude and the SA deployed in the X-Y plane, the wing axis lay 40º ahead (toward the direction of flight) of the +Y axis and 40º behind the -Y axis. The wings tracked the sun through 360º about this axis using error signals generated by the sun sensors located on each solar array wing. The signals generated by the sun sensors were processed in the SADE (Solar Array Drive Electronics) which provided power to control the array drive motor speed.
During periods of eclipse, the array was driven by a fixed angular rate by signals from the SADE. In addition, the rotation direction and rate could be controlled by commands. Each wing contained 11 panels. The average power output capability varied during the life of the spacecraft due to the seasonal intensity of the sun, the angle to the sun (β angle), eclipse periods, and various factors which degrade the power output capability of the solar cells. During full sun, the SA supplied power to all the loads as well as for charging the 2 type 40 NiCd batteries. The batteries supplied the total spacecraft load requirements during eclipse and supplied the surge loads when they exceeded the instantaneous capability of the SA.
Power of ~1000 W was provided at the beginning of the mission, varying throughout the mission to ~ 700 W. The average on-orbit power was about 700 W. The solar panels were rotatable on one axis; they made up an area of 14.5 m2 of solar cells.
Figure 5: Block diagram of the EPS (image credit: NASA/JPL, Ref. 18)
RF communications: The data collected by the sensors are converted from analog to digital, except for that of the SAR instrument. Data are transmitted from the satellite in three separate streams: a 25 kbit/s real-time stream containing instrument data from ALT, SASS, SMMR, and VIRR and all engineering subsystem data, an 800 kbit/s playback stream of recorded real-time data, and a 20 MHz analog SAR instrument data stream, receivable only in real-time by specially equipped tracking stations.
An onboard data storage capacity of ~ 350 Mbit is provided - the equivalent of more than two full orbits of measurements from all sensors with the exception of the SAR instrument. SAR data is not recorded.
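As a quick consistency check (a sketch using only the round figures quoted above), the two-orbit claim for the 25 kbit/s real-time stream can be compared against the ~350 Mbit recorder:

```python
# Rough sizing check of SeaSat's onboard recorder (figures from the text):
# the 25 kbit/s real-time multiplex of ALT, SASS, SMMR, VIRR and engineering
# data, recorded over two ~101-minute orbits, versus the ~350 Mbit store.
REALTIME_RATE_BPS = 25_000        # real-time stream, bit/s
ORBIT_PERIOD_S = 101 * 60         # orbital period, s
STORE_BITS = 350e6                # onboard storage, bit

two_orbits_bits = REALTIME_RATE_BPS * 2 * ORBIT_PERIOD_S
print(f"Two orbits of low-rate data: {two_orbits_bits/1e6:.0f} Mbit")  # ~303 Mbit
print(f"Fits in store: {two_orbits_bits < STORE_BITS}")
```

About 303 Mbit for two orbits, so the stated "more than two full orbits" capacity is consistent with the quoted rates.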
Redundant S-band transmitters and receivers, functioning as transponders, provide the communications link for engineering and payload data. A separate S-band transmitter (5 W) with its own helical antenna provides the SAR downlink in real-time.
In addition to the primary tracking information from SeaSat's S-band communication system, two independent tracking systems aid in navigation and orbit determination. Laser tracking signals originate from ground sites and are reflected from an array of retroreflectors on the spacecraft.
A dual-frequency beacon transmits ultrastable carriers to a ground tracking network, TRANET. The TRANET, operated by DoD (Department of Defense), receives the dual-frequency Doppler beacon from SeaSat. The tracking measurements are used to supplement the STDN S-band tracking for orbit determination. Onboard equipment includes an ultrastable transmitter radiating at 162 MHz and at 324 MHz. SeaSat uses this frequency also as a source for satellite data timing.
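The point of transmitting two coherent frequencies is that the ionosphere perturbs the measurement in proportion to 1/f²; receiving both 162 MHz and 324 MHz lets the ground network form a combination in which that first-order dispersive term cancels. A minimal sketch of the standard combination, with invented measurement values for illustration:

```python
F1, F2 = 162e6, 324e6  # SeaSat TRANET beacon frequencies, Hz

def iono_free(obs1, obs2, f1=F1, f2=F2):
    """First-order ionosphere-free combination of two observables
    that each contain a dispersive term A / f**2."""
    return (f1**2 * obs1 - f2**2 * obs2) / (f1**2 - f2**2)

# Simulate: a true (non-dispersive) observable plus an ionospheric term A/f^2.
true_value = 123.456   # arbitrary units, invented for the demo
A = 4.0e18             # invented dispersive coefficient
obs1 = true_value + A / F1**2
obs2 = true_value + A / F2**2

recovered = iono_free(obs1, obs2)
print(recovered)  # ~123.456: the 1/f^2 term cancels out
```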
Sensor module and payload accommodation:
The sensor module is a platform for the operation of the five sensors to achieve the mission objectives within the required resolution and accuracy. The sensors are located in positions relative to one another and to the beacon, laser retroreflector and communication antennas so that each has an unobstructed field of view and each achieves the required pointing and scan angle. The mounting positions were also selected to prevent electromagnetic interference between multiple radiating sources.
The sensor module's primary structure is a 25.4 cm diameter aluminum alloy tubular mast to which equipment mounts are attached.
Two scanwheel assemblies are mounted near the forward end on the tubular supports to give each unit a clear view of Earth's horizon.
The ALT (Radar Altimeter) is mounted at the end of the mast structure - nearest to Earth - the 1 m diameter reflector antenna and RF unit on the forward end and the signal processor to the side. The ring of the corner cube quartz reflectors for the laser tracking system surrounds the altimeter antenna and RF electronics module.
The SASS (Microwave Scatterometer) and Doppler beacon transmitter for the TRANET tracking system are mounted in a support structure on the side of the mast. Four slotted array stick antennas for the SASS are stowed against the structure and each is deployed separately. The TRANET antenna is attached to a deployable boom which also supports one of the two S-band communication antennas. The second is deployed on a separate boom.
The VIRR (Visible and Infrared Radiometer) consists of a scanner mounted on a deployable boom and electronics on the mast tube.
The SMMR (Scanning Multifrequency Microwave Radiometer) is mounted as a single unit on the side of the sensor module structure. The unit includes a fixed offset parabolic reflector, scan mechanism and a digital processor.
The SAR (Synthetic Aperture Radar) antenna and the electronics are installed near the base of the sensor module. The huge SAR sensor antenna is in eight segments, folded during launch and deployed to form a flat rectangular array with an area of 23 m2. The SAR downlink transmitter is mounted on the mast and its helical antenna is deployed on a short boom.
Figure 6: Illustration of the deployed SeaSat spacecraft on orbit (image credit: NASA)
Launch: The SeaSat spacecraft was launched on June 27 (UTC), 1978 on an Atlas-F/Agena launch vehicle from VAFB (Vandenberg Air Force Base), CA, USA.
Orbit: Non-sun-synchronous near-circular polar orbit, inclination = 108º, apogee = 799 km, perigee = 775 km, period = 101 minutes, repeat cycle of 17 days (subcycle of 3 days).
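The quoted orbit numbers can be cross-checked with Kepler's third law; a sketch using the mean of the stated perigee and apogee and a mean Earth radius:

```python
import math

MU = 3.986004418e14   # Earth gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m

def period_minutes(mean_altitude_m):
    # Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
    a = R_EARTH + mean_altitude_m          # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

# Mean of the stated 775 km perigee and 799 km apogee altitudes:
T = period_minutes((775e3 + 799e3) / 2)
print(f"{T:.1f} min")  # ~100.4 min, consistent with the stated 101-minute period
```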
SeaSat operated successfully from late June to early October 1978 when it experienced a malfunction. The end of the SeaSat mission occurred on October 9 (UTC), 1978 - due to an abrupt power system failure in the Agena bus that was used as a part of the spacecraft. The loss of power was caused by a massive and progressive short in one of the slip ring assemblies that was used to connect the rotating solar arrays into the power subsystem. The most likely cause of this short was the initiation of an arc between adjacent slip ring brush assemblies. The triggering mechanism of this arc could have been either a wire-to-brush assembly contact, a brush-to-brush contact, or a momentary short caused by a contaminant that bridged internal components of opposite electrical polarity. 16) 17) 18)
Mission duration: 70 days (data generation) of 105 operational days (1503 orbits). During SAR operations, approximately 42 hours of SAR data were collected.
• Imagery obtained from the Seasat SAR clearly demonstrated its sensitivity to surface roughness, slope, and land-water boundaries. Seasat images have been used to determine the directional spectra of ocean waves, surface manifestations of internal waves, polar ice-cover motion, geological structural features, soil moisture boundaries, vegetation characteristics, urban land-use patterns, and other geoscientific features of interest.
• Despite its overall technological and scientific success, Seasat's relatively short lifetime precluded the acquisition of a seasonal data set. Moreover, the Seasat SAR was a single-parameter instrument using a fixed wavelength, polarization, and incidence angle. While the near-nadir incidence angle was ideal for acquiring strong ocean returns, it produced severe geometric layover distortions on terrain images of high-relief regions.
Table 2: Collection of some events/items during the SeaSat-1 mission
Figure 7: A sample SeaSat-1 SAR image of the Los Angeles metropolitan area observed in 1978 (image credit: NASA/JPL, Ref. 6)
Figure 8: Internal waves and shallow subsea features imaged by SAR near Cape Cod, Massachusetts. Both were generated by tidal currents in the region; the image was acquired on Aug. 27, 1978 (image credit: NASA) 19)
Legend to Figure 8: SeaSat of NASA/JPL was the first satellite mission designed specifically to observe the ocean. Launched in 1978, it suffered a mission-ending power failure after 105 days of operation. But in that short time, SeaSat collected more information about the ocean than had been acquired in the previous hundred years of shipboard research.
Figure 9: SAR image of the mouth of the Columbia River and the Oregon coastline. ASF Granule SS_00638_STD_F0914 captured August 10, 1978 (image credit: NASA) 20)
The complete catalog of SeaSat images has been processed digitally and is freely available from the Alaska Satellite Facility.
The SeaSat archive is located at ASF (Alaska Satellite Facility), a NASA SAR/DACC (Synthetic Aperture Radar/Distributed Active Archive Center) at UAF (University of Alaska Fairbanks). As of October 2013, the SAR/DACC archive exceeds 1.5 PB (1015 Byte). 21) 22)
Starting in the summer of 2012, ASF undertook the significant challenge of developing a SeaSat telemetry decoder in order to create raw data files suitable for focusing by a SAR correlator. In this case, that means processable by ROI, the Repeat Orbit Interferometry package developed at Jet Propulsion Laboratory. In addition to creating the range lines out of minor frames, the decoder must interpret the 18 fields in the headers to create a metadata file describing the state of the satellite when the data was collected.
Sensor complement: (SAR, SMMR, ALT, SASS, VIRR, LRR)
The sensor complement consisted of active and passive instruments to achieve an all-weather capability. A new era of spaceborne oceanography was ushered in with the SeaSat sensor complement. All sensors operated at the same time, over the same region of the ocean, providing a truly synoptic view of the parameters important to the understanding of the dynamics of our ocean. 23) 24)
SAR (Synthetic Aperture Radar):
The SAR instrument features: HH polarization, look angle = 20º; pixel size = 25 x 25 m (spatial resolution on the surface at 4 looks); radiometric resolution = 5 bit raw data. Sensor transmission frequency: 1.275 GHz (L-band); wavelength= 23.5 cm; swath width=100 km. Antenna: 1024-element phased array antenna of size 10.74 m x 2.16 m; PRF= 1464 to 1640 Hz; pulse duration = 33.4 µs; bandwidth (linear FM) = 19.077 MHz; transmitted peak power = 1 kW (nominal). 25) 26) 27) 28) 29)
The planar antenna array consisted of eight, 1.3 m x 2.16 m rigid and structurally identical fiberglass honeycomb panels. The panels were hinged together in series, but were individually supported by a deployable tripod substructure that governed the deployment of the truss and provided the interface of the antenna structure with the spacecraft.
The Seasat SAR sensor is regarded as the first imaging SAR system used in Earth orbit. The SAR antenna is mounted on the S/C with its boresight oriented at 20º from the vertical direction (look angle), pointing to the right of the flight path. The antenna beamwidth measures 6.2º in elevation and 1º in azimuth. A footprint of 100 km x 15 km (3 dB contour) is provided. The swath extends from 290 km to 390 km to the right of the S/C ground track (Figure 10). The received radar echoes are downlinked in S-band (analog data link at 2.265 GHz) to a total of five ground receiving stations in real-time located at: Goldstone, CA, Fairbanks, AK, Merrit Island, FL, Shoe Cove, Newfoundland, and Oakhanger, UK. No high-rate onboard recording capability of SAR data was available at the time.
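The quoted 25 m x 25 m pixel size follows, to first order, from the chirp bandwidth, look angle, antenna length and number of looks. A sketch using the textbook stripmap-SAR approximations (flat Earth, incidence angle taken equal to the look angle):

```python
import math

C = 299_792_458.0          # speed of light, m/s
BANDWIDTH = 19.077e6       # chirp bandwidth, Hz
LOOK_ANGLE = math.radians(20)
ANTENNA_LEN = 10.74        # along-track antenna length, m
N_LOOKS = 4

# Slant-range resolution of a pulse-compressed chirp: c / (2B)
slant_res = C / (2 * BANDWIDTH)                      # ~7.9 m
# Projected onto the ground (flat-Earth approximation)
ground_range_res = slant_res / math.sin(LOOK_ANGLE)  # ~23 m
# Single-look azimuth resolution of a stripmap SAR is L/2;
# averaging N looks trades azimuth resolution for speckle reduction.
azimuth_res = (ANTENNA_LEN / 2) * N_LOOKS            # ~21.5 m

print(ground_range_res, azimuth_res)  # both near the quoted 25 m x 25 m pixel
```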
Table 3: Performance characteristics of SAR instrument
Figure 10: Illustration of the SeaSat SAR viewing geometry (image credit: NASA/JPL)
Table 4: SeaSat SAR image products
The SAR instrument had a mass of 147 kg and a power consumption of 216 W (1000 W peak power). The instrument could only be operated for about 10 minutes per orbit.
SMMR (Scanning Multichannel Microwave Radiometer):
The SMMR is a five-frequency instrument of Nimbus-7 mission heritage. The instrument was designed and built at JPL. Objectives: Monitoring sea surface temperatures, wind speeds, rain rate, atmospheric water content (mapping of columnar water vapor distribution over the global oceans) and ice conditions. SMMR is a multispectral, dual-polarization microwave radiometer observing at the following frequencies: 6.6 GHz (45.4 mm), 10.7 GHz (28 mm), 18.0 GHz (16.6 mm), 21.0 GHz (14.2 mm), and 37.0 GHz (8.1 mm). Six Dicke-type radiometers were utilized. Those operating at the four longest wavelengths measured alternate polarizations during successive scans of the antenna; the others operated continuously for each polarization.
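The wavelengths listed alongside the channel frequencies are simply c/f; a quick check of all five channels:

```python
C = 299_792_458.0  # speed of light, m/s

# (frequency in GHz, wavelength in mm as quoted in the text)
channels = [(6.6, 45.4), (10.7, 28.0), (18.0, 16.6), (21.0, 14.2), (37.0, 8.1)]
computed_mm = [C / (f_ghz * 1e9) * 1e3 for f_ghz, _ in channels]

for (f_ghz, quoted), wl in zip(channels, computed_mm):
    print(f"{f_ghz:5.1f} GHz -> {wl:5.2f} mm (quoted {quoted} mm)")
```

Each computed value agrees with the quoted wavelength to within rounding.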
The SMMR instrument consisted of five hardware elements:
• The antenna assembly consisting of the reflector, fabricated of graphite epoxy, and the feedhorn
• The scan mechanism, including momentum compensation devices
• An RF module containing the input and reference switching networks, the mixer-IF preamplifiers, and the Gunn local oscillators
• An electronics module containing the main IF amplifiers, all the post-detection electronics, and the power supplies for the scan and data subsystems
• A power supply module which contains the dc-to-dc converters and regulators for the rest of the instrument.
The antenna was a parabolic reflector offset from the nadir by 42º. Motion of the antenna reflector provided observations from within a conical volume along the ground track of the spacecraft. SMMR had a swath width of about 600 km and the spatial resolution ranged from about 22 km at 37 GHz to about 100 km at 6.6 GHz. The absolute accuracy of sea surface temperature obtained was 2 K with a relative accuracy of 0.5 K. The accuracy of the wind speed measurements was 2 m/s for winds ranging from 7 to about 50 m/s. An identical instrument was flown on Nimbus-7 (launch Oct. 24, 1978). 30) 31) 32) 33)
The SMMR instrument had a mass of 53.9 kg and a power consumption of ~60 W.
Figure 11: Illustration of the SMMR instrument (image credit: JPL)
ALT (Radar Altimeter):
ALT builds on the heritage of the S-193 altimeter flown on Skylab and of the altimeter flown on the GEOS-3 mission. Objective: Determination of sea surface profiles, currents, wind speeds and wave heights (first attempt to achieve 10 cm altitude precision from orbit).
Goals of the ALT Experiment:
• Measure the height of the satellite above the ocean surface with an accuracy of ± 10 cm once per second.
• Measure wave height at the subsatellite point with an accuracy of ± 10% or 0.5 m, whichever is greater, once per second for wave heights from 1 to 20 m.
• Measure the backscatter coefficient with an accuracy of ± 1.0 dB.
• Combine altimeter measurements of height with an accurate ephemeris to determine the marine geoid, currents, tides, storm surges, etc.
The ALT instrument was a Ku-band compressed pulse radar altimeter (first use of the full-deramp technique). With this new full-deramp technique no compression filter is required in the receiver. From SeaSat onwards, all altimeters have been using this technique, achieving a significant improvement in the resolution. The ALT instrument was designed and developed by JHU/APL. 34) 35) 36) 37) 38) 39)
Two of its unique features were a linear FM transmitter with a 320 MHz bandwidth, which yielded a 3.125 ns time-delay resolution, and microprocessor-implemented closed-loop range tracking, automatic gain control, and real-time estimation of significant wave height. This instrument flew the first microprocessor (8080-based controller/tracker) in space. The altimeter operated at 13.56 GHz (Ku-band, chirp signal at 2 kW peak power) using a 1-m parabolic antenna pointed off nadir and had a swath width which varied from 2.4 to 12 km, depending on sea state. ALT operated in chirp pulse mode with a 3.2 µs uncompressed pulse width and 3.125 ns compressed pulse width. The precision of the height measurement was 10 cm (rms). The estimate of significant wave height was accurate to 0.5 m or 10%, whichever was greater, the ocean backscatter coefficient had an accuracy of 1 dB.
In the SeaSat design the number of echo samples is increased (compared to GEOS-3). The samples are spaced 3.125 ns apart to encompass the anticipated spread in ocean return for wave heights up to 20 m. In this case waveform sampling is implemented by a bank of filters with 312.5 kHz bandwidth and spacing. In contrast with previous designs, the samples are an integral part of the altitude tracking process and are used in such a way that the system adapts as a function of wave height to optimize tracker performance. The altitude tracking loop is closed in two parts: a coarse adjustment of the local oscillator pulse timing in 12.5 ns steps, and a fine adjustment.
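The quoted altimeter numbers are tied together by the usual pulse-compression relations: time resolution 1/B, range-gate size c/(2B), and the time-bandwidth product as the compression ratio. A quick check against the values in the text:

```python
C = 299_792_458.0    # speed of light, m/s
B = 320e6            # chirp bandwidth, Hz
T_PULSE = 3.2e-6     # uncompressed pulse length, s

time_res = 1 / B                  # 3.125 ns, matching the stated sample spacing
range_res = C * time_res / 2      # ~0.47 m per range gate
compression_ratio = T_PULSE * B   # time-bandwidth product: 1024

print(time_res * 1e9, range_res, compression_ratio)
```

So the 320 MHz chirp directly yields the 3.125 ns resolution quoted above, and the 3.2 µs/3.125 ns pair corresponds to a pulse compression ratio of 1024.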
Calibration: The ALT instrument was calibrated for height bias using four overflight passes of Bermuda that were supported by the Bermuda laser. The estimated height bias was 0.0 ± 0.07 m. 40)
Figure 12: Illustration of the radar altimeter on SeaSat
The ALT instrument had a mass of 93.8 kg and a power consumption of 177 W.
SASS (Seasat-A Scatterometer System):
SASS (of S-193 heritage on Skylab) is a fan-beam dual-polarized Doppler scatterometer with the objective of radar backscatter measurements (sigma naught) over ocean surfaces for estimation of the wind field. Pulse transmit frequency of 14.599 GHz (Ku-band). SASS illuminated the sea surface with four fan-shaped beams (two orthogonal beams, each 500 km wide, on each side of the ground track). Doppler filters were used to discriminate resolution cells in the long dimension of the fan beam, resulting in 500 km swaths on either side of the satellite. The high wind swaths added an additional 250 km to each side. The spatial resolution was 50 km over a region of 200 to 700 km on either side of the spacecraft. The experimental SASS instrument first demonstrated the ability to accurately infer vector winds over the ocean's surface from a spaceborne platform. 41) 42) 43)
Note: The S-193 scatterometer on Skylab was also known by the name of RADSCAT.
Figure 13: Some parameters of the SASS instrument
Figure 14: Viewing geometry of the SASS instrument (image credit: NASA/JPL)
SASS was a proof-of-concept experiment for measuring ocean surface wind vectors under day/night near-all-weather conditions. The physical basis for this remote sensing technique is the generation of capillary waves on the ocean surface by the friction velocity of the wind. The amplitude of these cm-wavelength ocean waves is in equilibrium with the local wind, and the two-dimensional wave spectrum is highly anisotropic with the wind direction. The ocean radar backscatter results from Bragg scattering from these capillary waves, and the normalized radar cross section (σ0) grows approximately as a power law of wind speed.
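The retrieval principle can be illustrated by inverting a toy power-law model. The coefficients below are invented placeholders, not the actual SASS geophysical model function, which also depends on incidence angle, polarization, and wind direction:

```python
import math

# Illustrative power-law model sigma0 = a * U^b relating normalized radar
# cross section to wind speed U (m/s). The coefficients A and B are made-up
# placeholders for demonstration only.
A, B = 0.002, 1.8

def sigma0_db(wind_speed_ms):
    """Normalized radar cross section (dB) for a given wind speed."""
    return 10.0 * math.log10(A * wind_speed_ms ** B)

def wind_speed(s0_db):
    """Invert the power law to recover wind speed from sigma0 (dB)."""
    s0_linear = 10.0 ** (s0_db / 10.0)
    return (s0_linear / A) ** (1.0 / B)

# Round trip: recover the wind speed used to generate the backscatter.
u = wind_speed(sigma0_db(10.0))
print(f"retrieved wind speed: {u:.2f} m/s")
```

In practice each wind vector cell is observed from two near-orthogonal azimuths, and the directional ambiguity of the solution must still be resolved (dealiased).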
The scatterometer on SeaSat was the primary means of measuring ocean surface wind speed and direction. Nonetheless, patterns of SAR-measured normalized radar cross section clearly showed spatial structures associated with variations in wind speed and direction. In the last five years, it has become more apparent that SAR imagery can be used to make high spatial resolution estimates of wind speed. 44)
Given our experience over the last 25 years, SeaSat was clearly a dramatically visionary satellite system. It provided the precursors to many subsequent spaceborne instruments. The SeaSat SAR was designed to provide ocean surface wave images from which ocean wave spectra could be derived. However, the imagery clearly showed features associated with variations in wind speed and direction. Since that time, a new generation of calibrated SARs have been launched which makes it possible to use what we learned from SeaSat to produce, on a routine basis and in nearly real-time, high-resolution SAR wind fields (Ref. 44).
The SASS instrument had a total mass of 103 kg (electronics assembly of 59 kg, each antenna had a mass of 11 kg), power consumption of 100 W (peak).
VIRR (Visible and Infrared Radiometer):
VIRR is a supporting instrument on Seasat (of SR heritage on NOAA-1) with the objective to provide images of visual reflection and thermal infrared emission from oceanic, coastal, and atmospheric features that might aid in interpreting the data from the other Seasat sensors (also some quantitative measurements of SST and cloud top height). Scanning is accomplished by a rotating mirror mounted at 45º to the optical axis of the collecting telescope (scan angle=±51.2º). VIRR uses a 12.7 cm diameter Cassegrain-type telescope, focusing the radiation onto a field stop. A relay optical system transmits the radiation to a dichroic beamsplitter, which separates it into the visible and infrared wavelengths. The swath of the VIRR is about 2280 km wide, centered on nadir. 45) 46)
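The quoted swath is consistent with the ±51.2º scan angle and the orbit altitude. A minimal spherical-Earth check (the 800 km altitude is a nominal assumption):

```python
import math

# Ground swath of a cross-track scanner from scan angle and orbit height,
# assuming a spherical Earth. Nominal values; the text quotes ~2280 km.
RE = 6371e3                  # mean Earth radius, m
H = 800e3                    # approximate SeaSat altitude, m (assumption)
SCAN_HALF_ANGLE = math.radians(51.2)

# Earth-central angle to the edge of the scan, from the law of sines in the
# triangle Earth-center / satellite / surface point.
gamma = math.asin((RE + H) / RE * math.sin(SCAN_HALF_ANGLE)) - SCAN_HALF_ANGLE
swath_km = 2 * RE * gamma / 1e3
print(f"swath: {swath_km:.0f} km")   # ~2250 km, close to the quoted 2280 km
```

A flat-Earth estimate (2·H·tan 51.2º ≈ 1990 km) underestimates the swath; Earth curvature accounts for the difference.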
Table 5: VIRR instrument parameters
The VIRR instrument had a mass of 8.1 kg and a power consumption of 7.3 W.
LRR (Laser Retro-Reflector):
A device to support precision orbit determination for Seasat. Laser corner-cube reflectors, composed of 96 fused silica 3.75 cm hexagonal corner cube retroreflectors, and ground-based laser systems were used to obtain precise satellite tracking information. The retroreflector array was configured as a single ring of cube corners 1.27 m in diameter. Sixteen of the cube corners were tilted away from the axis of the ring by an angle of 25º and the remaining 80 cubes by an angle of 50º. Because of the great distance of the array from the center of mass of the satellite, the range correction varied from 5.28 m at zenith to 3.08 m near the horizon.
When illuminated by laser light pulses from the ground, each retroreflector cube in the array reflected the light pulses back to a telescope/receiver on the ground. A digital counter recorded the time of flight of the laser light pulses from the ground to the satellite and back to the ground. Range was determined from this time of flight with an accuracy of a few centimeters. The data were essential for accurate calculation of the satellite's orbit (ephemeris). NASA, USAF, SAO (Smithsonian Astrophysical Observatory) and foreign laser tracking stations tracked this satellite.
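The range computation itself is simple: halve the product of time of flight and the speed of light, then apply the array-to-center-of-mass correction quoted above. A minimal sketch (the 800 km slant range below is a hypothetical example, not a measured value):

```python
C = 299_792_458.0   # speed of light, m/s

def laser_range(round_trip_s, array_correction_m):
    """One-way station-to-satellite range from laser time of flight.

    round_trip_s: pulse time of flight ground -> satellite -> ground.
    array_correction_m: offset from the retroreflector array to the
    satellite's center of mass (5.28 m at zenith to 3.08 m near the
    horizon for SeaSat, per the text).
    """
    return C * round_trip_s / 2.0 + array_correction_m

# Hypothetical near-zenith pass with an 800 km slant range.
t = 2 * 800e3 / C                         # round-trip time for 800 km
print(f"{laser_range(t, 5.28):.2f} m")    # 800 km plus the 5.28 m correction
```

A few centimeters of range accuracy corresponds to timing the round trip to a few hundred picoseconds.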
The Seasat mission was controlled from the real-time mission operations facility located at NASA/GSFC. Spacecraft data were received and recorded by the tracking stations of STDN (Spaceflight Tracking Data Network) and transmitted to GSFC. There, data were sorted, merged, time tagged, and recorded on magnetic tape, which were then shipped to the IDPS (Instrument Data Processing System) at JPL. Data from SeaSat's five onboard sensors were individually managed by the following centers:
• ALT (Radar Altimeter): Wallops Flight Center, VA
• SMMR (Scanning Multichannel Microwave Radiometer) and SAR: JPL
• SASS (SeaSat-A Scatterometer System): LaRC (Langley Research Center)
• VIRR (Visible and Infrared Radiometer): GSFC
The received SAR echoes were downlinked in S-band (analog data link at 2.265 GHz) to a total of five ground receiving stations in real time (no onboard high-rate recording capability existed at the time; SAR data were acquired only when the satellite was within sight of a ground station) located at: Goldstone, CA; Fairbanks, AK; Merritt Island, FL; Shoe Cove, Newfoundland (provided by CCRS); and Oakhanger, UK (provided by ESA).
The downlinked analog SAR data were recorded at the receiving stations on film using a cathode ray tube. The data were then processed to pictures using analog Fourier Optical techniques in what is known as an "Optical SAR Processor." SAR echo data is effectively a microwave hologram of the illuminated area, so by recording this data on film, optical processing becomes the natural approach to forming an image of the ground.
Early SAR data users were hampered by enormous amounts of data and very limited computing power to analyze the data. Until 1978, SAR images were formed using analog techniques, incorporating optical lenses and photographic film (initially, over 95% of the SeaSat data was processed in a survey mode using optical laser techniques).
Also in 1978, the first SAR image was reconstructed on a digital computer (a slow process with the computing power available at the time, but good-quality imagery was generated with this technique). This SAR processor was developed by MDA (MacDonald Dettwiler) of Richmond, BC, Canada, to process SeaSat SAR data. These early digital SAR processors required all the processing power a system could offer; they were installed on mainframe computers or on large dedicated hardware. A typical digital SAR scene of 100 km x 100 km required 6 magnetic tapes recorded at 1600 BPI. By contrast, today's SAR images (since the late 1990s) can be formed on relatively inexpensive equipment such as a workstation or a PC. 47) 48)
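Quick arithmetic puts the six-tape figure in perspective, assuming standard 2400-foot 9-track reels (an assumption; the text does not state the reel length):

```python
# Rough upper-bound capacity of the 6-tape SAR scene, assuming standard
# 2400-foot 9-track reels recorded at 1600 BPI (for 9-track tape, one byte
# per frame across the tracks, so 1600 bytes per inch). Real capacity was
# lower once inter-record gaps are subtracted.
TAPE_LENGTH_IN = 2400 * 12        # 2400 ft expressed in inches
BPI = 1600                        # recording density, bytes per inch

bytes_per_tape = TAPE_LENGTH_IN * BPI   # ~46 MB per reel
scene_bytes = 6 * bytes_per_tape        # ~276 MB per 100 km x 100 km scene
print(f"{bytes_per_tape / 1e6:.1f} MB per tape, "
      f"{scene_bytes / 1e6:.0f} MB per scene")
```

A few hundred megabytes per scene was a formidable volume for late-1970s mainframes, which explains why over 95% of the data was initially processed optically.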
Figure 15: Coverage map of SeaSat SAR data (image credit: NASA/JPL)
Figure 16: Illustration of a more recent SeaSat swath coverage map (image credit: Alaska Satellite Facility) 49)
Figure 17: SeaSat ocean data distribution plan (image credit NASA)
SeaSat Success Statement (Ref. 8)
Table 6: SeaSat instruments
Table 7: Availability of SeaSat data (from date of first good science data (after engineering assessment) to end of mission)
Geophysical Evaluation of SeaSat Data:
• Evaluation was based on open release of interim geophysical data and competition among groups of users.
- Interim geophysical data were distributed to all with a demonstrated interest in evaluating the data (no exclusive use of data).
• Evaluations initially included comparisons with surface observations but later expanded to include intercomparisons among instruments.
- Surface observations came from two large oceanographic experiments (Jasin and Goasex) and from buoys, ships, aircraft, and other spacecraft.
- The same variables were often measured by different instruments on SeaSat and the results could be intercompared; e.g., wind speed was measured by four instruments.
• Results of evaluation were discussed at workshops.
- 13 workshops and 2 colloquiums were held between January 1979 and May 1981.
• Results were then published in scientific journals.
- 6 special issues of journals plus dozens of papers were published.
Special SeaSat Journal Issues:
• Science, Vol. 204, No. 4400, pp. 1405-1424, June 29, 1979
• Journal of Oceanic Engineering, Vol. OE-5, No.2, April 1980
• Journal of Astronautical Sciences, Vol. XXVIII, No.4, October-December 1980
• Journal of Geophysical Research, Oceans and Atmospheres, Vol. 87, No. C5, 3173-3438, April 30,1982
• Journal of Geophysical Research, Oceans and Atmospheres, Vol. 88, No. C3, 1529-1952 February 28, 1983
• Marine Geodesy, Vol. 8, No. 1-4, 1-402, September 1984.
Figure 18: Comparison of three wind-speed measurements (image credit: NASA)
Figure 19: SeaSat Altimeter data (image credit: NASA)
Altimeter Experiment Goals and Results
• Measure the height of the satellite above the ocean surface with an accuracy of ± 10 cm once per second.
• Measure wave height at the subsatellite point with an accuracy of ± 10% or 0.5 m, whichever is greater, once per second for wave heights from 1 to 20 m.
• Measure the backscatter coefficient with an accuracy of ± 1.0 dB.
• Combine altimeter measurements of height with an accurate ephemeris to determine the marine geoid, currents, tides, storm surges, etc.
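The last goal reduces to one relation: sea surface height is the ephemeris height minus the corrected altimeter range. A minimal sketch with hypothetical numbers (the values below are illustrative, not SeaSat measurements):

```python
def sea_surface_height(ephemeris_height_m, altimeter_range_m, corrections_m=0.0):
    """Sea surface height above the reference ellipsoid.

    ephemeris_height_m: satellite height from precision orbit determination.
    altimeter_range_m: altimeter range to the sea surface.
    corrections_m: sum of path-delay corrections (ionosphere, wet/dry
    troposphere, sea-state bias), simplified here to a single term.
    """
    return ephemeris_height_m - (altimeter_range_m + corrections_m)

# Hypothetical numbers: ~800 km orbit height and range, with a 2.4 m
# combined path-delay correction.
ssh = sea_surface_height(800_000.0, 799_970.0, corrections_m=2.4)
print(f"sea surface height: {ssh:.1f} m")
```

Because the result is a small difference of two very large numbers, decimeter-level sea surface topography demands comparable accuracy in both the range and the ephemeris, which is why precision orbit determination was central to the mission.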
Table 8: SeaSat Altimeter observation parameters
• Altimeter data were used to make detailed maps of the marine geoid with an accuracy of ± 1 m and a resolution of around 100 km.
- These maps have revolutionized our knowledge of the marine geoid, and are being used to study plate tectonics, mantle convection, and the marine lithosphere.
• Tides over entire ocean basins were directly observed from space for the first time.
• Mean and variable ocean surface currents were mapped globally.
- The maps of variable currents show new information.
• Ice sheets in Greenland and along the Antarctic coast were profiled with great accuracy.
• Wave heights were mapped globally for the first time, and the propagation of waves across ocean basins studied.
Figure 20: Measurement of ocean circulation by satellite altimetry (image credit: NASA)
Figure 21: Mean sea surface topography based on SeaSat altimeter data (image credit: NASA)
Figure 22: Map of surface gravity and earthquakes (image credit: Haxby et al., 1984)
Figure 23: The marine geoid observed by the altimeter on SeaSat. The relief is due to variations in Earth's gravity field resulting from the uneven distribution of mass within the Earth. Trenches, seamounts, transform faults, and oceanic ridges are clearly seen. The surface is displayed as though it were illuminated by light shining from the northwest, thus accentuating the features on the surface (image credit: Lamont-Doherty Geological Observatory of Columbia University)
Figure 24: The correlation between sea level (top line) and bathymetry (bottom line), image credit: NASA
Figure 25: Mesoscale sea height variability from SeaSat collinear altimeter data (image credit: Cheney, Marsh and Beckley, 1983)
Figure 26: Smoothed mean sea level from hydrography; contours in meters relative to the 2000 decibar surface (image credit: NASA)
Figure 27: Smoothed mean sea level from satellite data; contours in meters relative to the geoid (image credit: Tai and Wunsch, 1984)
Figure 28: Change in average geostrophic currents around Antarctica from altimetry data in the period July-October 1978 (image credit: NASA)
Figure 29: Surface elevation contours of southern Greenland derived from SeaSat radar altimetry data. Major drainage basins are delineated based on surface topography. The contour interval is 50 m above sea level (image credit: NASA/GSFC)
Precision orbit determination: Goals and Results
• Determine the best attainable precision and accuracy of the SeaSat ephemeris with a goal of submeter accuracy for the vertical component.
- The SeaSat ephemeris is now the most accurately known of any satellite in its class.
- The vertical component of the ephemeris is known with an accuracy of 50-70 cm globally, and improvements in accuracy are expected.
• Define methods for improved orbit determination.
- Altimeter and tracking data have been combined to yield new and more accurate techniques for orbit determination.
- SeaSat plus geodetic satellite data have led to an improved gravity field and better tracking station locations.
• Provide a precise ephemeris required to exploit fully the altimeter data for studies of sea-surface topography.
- The ephemeris was calculated with sufficient accuracy to enable the altimeter data to be used for studies of tides and currents.
Figure 30: Evolution of the accuracy of the SeaSat ephemeris (image credit: NASA)
Scatterometer: Goals and Results
• Provide closely spaced observations of surface wind speed and direction, from which vector winds can be derived on a global basis.
- Winds were measured on a 50 km grid.
- Global maps of vector winds were produced. These were the first global maps of wind measured from space.
- Observed wind field is being used to test usefulness of surface winds for weather forecasts and to test theories of atmospheric turbulences.
Table 9: SeaSat Scatterometer measurement parameters
SeaSat Scatterometer Marine Wind Analysis:
Figure 31: Streamlines for dealiased winds depicting the EDZ (Equatorial Divergence Zone) and a wave on the ITCZ (Intertropical Convergence Zone), image credit: NASA
Figure 32: Global wind field at the sea surface observed by the SeaSat scatterometer on 14 September 1978 together with the northern extent of the sea-ice pack around Antarctica seen by the same instrument (image credit: NASA Jet Propulsion Laboratory)
Figure 33: Gridded wind vector field from SeaSat scatterometer winds, 3-day mean, Sept. 6-8, 1978 (image credit: JPL, AES (Canada), UCLA, dBS, NASA)
Figure 34: Scatterometer wind speed measurements, 3-day mean: Rev. 1015 (Sept. 5, 1978, 23:04 UTC) through Rev. 1060 (Sept. 9, 1978, 04:18 UTC), image credit: JPL, AES (Canada), UCLA, dBS, NASA
SMMR (Scanning Multichannel Microwave Radiometer): Goals and Results
• Obtain all weather measurements of ocean surface temperature and wind speed.
- The instrument has produced the first global maps of surface temperature together with maps of wind speed.
• Obtain integrated liquid water and water vapor content in the atmosphere.
- Water vapor measurements proved to be at least as accurate as radiosondes.
- Liquid water in clouds and rain were mapped globally.
• Provide necessary corrections for the SeaSat altimeter and scatterometer measurements.
- Accurate corrections were provided for both instruments.
Table 10: SeaSat SMMR measurement parameters
Figure 35: SeaSat-A SMMR theater of observation (image credit: NASA)
Figure 36: Various SMMR parameter illustrations (image credit: NASA)
Figure 37: Sea level humidity is related to integrated water vapor (image credit: NASA/JPL, W. T. Liu)
Figure 38: SeaSat SMMR observations of temperature, water vapor, wind speed, and latent heat flux (image credit: NASA/JPL,W. T. Liu, 1984)
SAR (Synthetic Aperture Radar): Goals and Results
• Provide high resolution imagery of oceans, ice, and land.
- The radar produced images of a swath 100 km wide with 25 m resolution.
- 100 million km2 mapped, including nearly all of North America and western Europe.
• Obtain radar imagery of ocean wave patterns in deep water for purposes of deriving ocean wave directional spectra.
- Waves were observed under a variety of conditions.
• Obtain ocean wave patterns and water-land interaction data in coastal regions.
- Wave refraction, diffraction, and breaking were observed.
• Obtain radar imagery of sea and fresh water ice and snow cover.
- Extensive coverage of Arctic sea ice was obtained.
• Demonstrate the environmental monitoring capability of the sensor under day-night and all weather conditions.
- Ice types and levels, ships, shoals, oceanic eddies, and many other features were monitored under all conditions.
Table 11: SeaSat SAR observation parameters
Figure 39: Gulf of California. Internal waves observed by SAR (image credit: NASA/JPL)
Figure 40: A warm-core ring off Delaware Bay (image credit: NASA/JPL)
Legend to Figure 40: On the right of the image is the western portion of a warm-core ring about 100 km southeast of Delaware Bay. The ring was repeatedly observed during four other overpasses (revolutions 1275, 1318, 1404, and 1447) and some characteristics common to all of its images have been identified (Lichy et al., 1981). The area within the ring generally has a higher intensity than the surrounding area with a boundary characterized by concentric curvilinear lines (from E1 and F1 to H9 and J9). These lines are bright in the southern part of the ring and dark in the northern part. Small-scale patches of high and low intensities (the mottled texture) surround the ring and wave-like patterns (J2 to J5) occur near the ring center. The cause of these features is not understood; however, they are most likely associated with the current shears and temperature contrasts produced by the ring.
Figure 41: Coastal eddies off Point Arena, California (image credit: NASA/JPL)
Legend to Figure 41: This image of the coastal waters south of Point Arena (at B2), California, shows a large-scale, tongue-like feature protruding from the coast into the ocean (from H5, through C10, to H11). The feature is characterized by small-scale swirls (H5 to J7) and filaments of low and high image intensities (H8 to E11). Another feature of smaller size and weaker signatures can be seen south of Point Arena (B3 to C6/D6). Off the coast of California, the annual upwelling season usually ends in August. Then begins the process of intense exchange between the cold coastal water resulting from the upwelling and the relatively warm offshore water. This process produces various eddies and plumes (Sverdrup et al., 1942, p. 725) as shown on the image.
Figure 42: SAR image of the Misteriosa Bank in the Caribbean Sea (image credit: NASA/JPL)
Legend to Figure 42: The Misteriosa Bank is a part of the Cayman Ridge located between Mexico and the Cayman Islands south of the Yucatan Channel; it has depths ranging from 9 to 12 fathoms. The isolated position of the Misteriosa Bank, distant from any coastal areas, makes this image unique among the images in this section. The shape of the bank (see bathymetry map) is delineated on the SAR image by a thin dark line on the southern side of the bank (A15, B16, and C15) and a bright line on the northern side (A15 to C15). The Misteriosa Bank was included in three other passes (revolutions 400, 1440, and 1483), but was imaged only on the latter two. Easterly winds of 2.5 to 5 m/s occurred during revolution 400, while weak easterly winds of 0 to 2.5 m/s occurred during the latter two passes.
Figure 43: SAR image of the Nantucket Shoals (image credit: NASA/JPL)
Legend to Figure 43: The Nantucket Shoals are shallow-water areas to the south and east of Nantucket Island, south of Cape Cod, and are characterized by ridges and shoals separated by deeper channels. The surface expressions on this image, the only SAR image taken of this area, reflect closely the bathymetric patterns shown on the map, with the more intense and distinct patterns occurring over areas shallower than 10 fathoms (18 meters) (e.g., C4, D4, and E4).
Figure 44: Arctic ice observed by the SAR instrument (image credit: NASA/JPL)
Legend to Figure 44: The large feature with a unique radar backscatter is Fletcher's Ice Island, commonly referred to as T-3 (B4 on Image 49A, and B5 on Image 49B), which is a tabular block of ice, 7 kilometers by 12 kilometers in area, calved from the Ellesmere Island ice shelf. The high radar backscatter may be due to the ice island's regular pattern of low, corrugated ridges and scattered deposits of rock debris (Rodahl, 1954). Fletcher's Ice Island was discovered in 1946 and has been tracked continuously since then, remaining within the anticyclonic gyre of the Beaufort Sea. It was imaged by SeaSat on eight separate passes from August 16 to October 6, 1978, traveling 157 km over 61 days in a south-westward direction. The positions have been confirmed by the tracking of T-3 from a NOAA satellite.
Figure 45: Ice deformation in the central Arctic Pack (image credit: NASA/JPL)
Figure 46: SeaSat SAR image of the eastern Beaufort Sea in the Fall of 1978. Ice conditions at the edge of the central Arctic Pack (image credit: NASA/JPL)
Figure 47: SAR image of the western Beaufort Sea ice margin on Oct. 8, 1978. Overlay: TB, SMMR radiance, 37 GHz vertical polarization (image credit: NASA/JPL)
Figure 48: Penetration of alluvium (Mojave Desert, CA). Preliminary investigations seem to indicate that radar is imaging buried features (image credit: NASA/JPL)
Figure 49: SAR image of the Rio Lacuntum Region (Mexico/Guatemala Border). Processed to highlight folded geologic features under tropical forest (image credit: NASA/JPL)
VIRR (Visible and Infrared Radiometer): Goals and Results
• Provide low resolution visible and infrared images of oceans, coasts, and clouds to aid in interpreting microwave data.
- Early failure of the instrument limited its usefulness.
- 62 days of images were used to help interpret data from other SeaSat sensors.
• Determine sea-surface temperature with an accuracy of ± 1.5ºC.
- Accuracy was about ± 1.7ºC.
Figure 50: Antarctic ocean-air-sea ice interactions from SeaSat (image credit: NASA/GSFC)
Figure 51: Combining Altimeter and SAR data to study Arctic ice (image credit: NASA)
SeaSat Success Statement: Commercial Demonstration
• Commercial users were involved initially through SeaSat UWG (User's Working Group).
• Extensive cost-benefit studies commissioned by NASA indicated significant economic benefits from SeaSat.
• As a result, commercial users were interested in documenting usefulness of SeaSat data for their operations.
• Commercial demonstration program begun in 1977 with 15 companies participating.
Commercial Demonstration Goals
• Evaluate impact of SeaSat data on selected commercial operations.
• Provide experimental evidence to help refine earlier estimates of benefits to:
- Offshore oil and gas operations
- Marine transportation
- Deep ocean mining
- Marine fisheries
- Marine safety
- Marine forecasting.
• Begin technology transfer process and accelerate rate at which benefits are obtained.
Figure 52: The SeaSat NRT (Near-Real-Time) data distribution network (image credit: NASA)
Commercial Demonstration Results:
• 16 users completed 18 two-year studies at no cost to NASA beyond the cost of supplying data. They found:
- SeaSat data can significantly improve the forecast of winds, waves, and the location of storms.
- Improved forecasts lead to better routing of ships with substantial savings of operating costs.
- SeaSat data are useful for establishing the climatology of offshore areas leading to better selection of offshore equipment.
• Fish catch and oceanic conditions were correlated, hence SeaSat data lead to more efficient fisheries.
- Real-time satellite data were provided to commercial users through FNOC (Fleet Numerical Oceanographic Center). FNOC distributed data only for a short period of time and is no longer distributing data to commercial users.
Results of Data Analysis Program
• Demonstrated that widespread distribution of interim geophysical data is essential for rapid progress in developing geophysical algorithms for processing satellite data.
- Subsets of SeaSat data were quickly made available to all with a demonstrated interest in the data (no exclusive use of data).
• Showed that global, simultaneous measurements of the same variable by different techniques are important for understanding remotely sensed observations.
- Wind speed was measured 4 ways, wave height 3 ways, temperature 2 ways, ice type 3 ways.
• Proved that accurate observations of oceanic variables can be used for scientific studies of the ocean, air-sea interaction, and climate.
Significant Results of the SeaSat Mission
• Demonstrated the ability to routinely monitor the oceans from space for both scientific and operational uses.
• Demonstrated key elements of facilities for processing and distributing data for science and applications.
- The task is difficult and requires the cooperation of many agencies.
- NASA, by taking the lead in processing and distributing data, greatly accelerated the application of data by users. (Had the users processed data as originally planned, the process would have been much slower).
• Demonstrated the usefulness of real-time data for those who operate at sea.
- 11 corporations and 2 government agencies evaluated the usefulness of SeaSat data for their operations.
• SeaSat laid the foundation for many further programs both domestic and foreign.
The SeaSat Legacy
SeaSat has strongly influenced the space programs and oceanographic studies of many countries:
• Canada: RADARSAT, a proposed satellite for SAR surveys of land and ice.
• Europe: European Association of Remote Sensing Laboratories formed to take advantage of SeaSat data;
- ERS-1 (European Remote-Sensing Satellite-1), a satellite program very similar to SeaSat
- POSEIDON project in France for satellite altimetry
- Spacelab test of advanced spaceborne radar for oceanography.
• Japan: JERS-1 (Japan Earth Resources Satellite-1), a SAR (Synthetic Aperture Radar) satellite program
- MOS-2, a proposed radar satellite for oceanography.
• Soviet Union: Development of a series of SeaSat-like spacecraft.
• USA: GEOSAT (Geodetic/Geophysical Satellite), a Navy satellite program to complete SeaSat altimetry mission
- NROSS (Navy Remote Ocean Sensing System), a Navy oceanography satellite similar to SeaSat
- TOPEX (Topographic Experiment Mission), a proposed advanced altimeter satellite for ocean circulation
- SSM/I (Special Sensor Microwave/Imager), a DoD multifrequency microwave radiometer based on SeaSat
- SIR (Shuttle Imaging Radar)-A,-B, -C missions based on SeaSat.
Looking back, the main results of the mission were: 50)
1) ALT (Altimeter) observations of oceanic currents, bathymetry, and waves:
• First global maps of variability of surface geostrophic currents
• First global maps with useful resolution of oceanic bathymetry. This led the US Navy to classify later measurements from GEOSAT.
• First global maps of significant wave height.
2) SASS (SeaSat-A Scatterometer System) observations of ocean winds:
• First global maps of oceanic surface wind speed and direction, leading to much improved weather forecasts when scatterometer data became available from later satellites.
3) SMMR (Scanning Multifrequency Microwave Radiometer) measurements of sea-surface temperature, wind speed, and liquid water.
• First global maps of atmospheric water vapor and liquid water.
• First observations of latent heat flux through the sea surface (evaporation).
4) SAR (Synthetic Aperture Radar) results:
• First large-area maps of sea-ice movement in the Arctic.
• First maps of sub-sand features and old river basins in the Sahara.
• Note: SAR did not produce "first global view of ocean circulation."
The data from ALT, SASS and SMMR were far more interesting scientifically than the data from SAR, and they led to a revolution in understanding oceanic processes.
5) Open release of all data to all interested scientists and engineers led to a far faster understanding of the measurements than had ever before been achieved. Before SeaSat, data were released to a small number of experts on instrument teams that took years to publish reports (Nimbus-7, launched at the same time as SeaSat, took nearly a decade to produce the first global maps of chlorophyll, and the maps had glaring errors).
6) When phenomena (e.g. wind speed) are measured by several different instruments at the same time and place (e.g. SMMR, ALT, SASS) instrument errors and accuracies can be determined far faster than if observed by only one instrument.
• SeaSat was extraordinarily successful in demonstrating the ability to observe the ocean from space and the importance of the observations to oceanography.
• Like the UK Challenger expedition of 1872-76, SeaSat has set forth a new and fruitful view of the oceans.
1) A. Buis, "Seafaring Satellite Sets 25 Year Trend," Sept. 1, 2003, URL: http://www.nasa.gov/vision/earth/lookingatearth/Seasat_25.html
2) K. B. Katsaros, R. A. Brown, "Legacy of the Seasat Mission for Studies of the Atmosphere and Air-Sea-Ice Interactions," BAMS (Bulletin of the American Meteorological Society), Volume 72, No 7, July 1991, pp. 967-981, URL: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281991%29072%3C0967%3ALOTSMF%3E2.0.CO%3B2
3) D. L. Evans, W. Alpers, A. Cazenave, C. Elachi, T. Farr, D. Glackin, B. Holt, L. Jones, W. Timothy Liu, W. McCandless, Y. Menard, R. Moore, Eni Njoku, "Seasat - A 25-year legacy of success," Remote Sensing of Environment, Vol. 94, No 3, Feb. 15, 2005, pp. 384-404, ISSN 0034-4257, URL: http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/40868/1/03-3010.pdf
4) B. Holt, R. Kwok, "Sea Ice Geophysical Measurements from SeaSat to the Present, with an Emphasis on Ice Motion: Brief Review and Look Ahead," Workshop, Dec. 1-5, 2003, ESA/ESRIN, URL: http://earth.esa.int/workshops/cmasar_2003/papers/E21holt.pdf
5) Richard L. Crout, "SeaSat's legacy," Oct. 2003, URL: http://www.allbusiness.com/science-technology/earth-atmospheric-science/16091783-1.html
8) Robert H. Stewart, "Seasat: Results of the Mission," BAMS (Bulletin of the American Meteorological Society), Vol. 69, No 12, December 1988, pp. 1441-1447, URL: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281988%29069%3C1441%3ASROTM%3E2.0.CO%3B2
9) Robert H. Stewart, "Post Launch Mission Operations Report No. E-655-78-01," NASA/JPL, April 11, 1985. —This is the official NASA document on the results of the SeaSat mission, signed by the top officials responsible for the mission at NASA HQ. The document was used by NASA to inform Congress that funds for the mission were well spent.
10) The mission objectives were formulated in the 1970s. They were approved in June 1978 by: L. R. Greenwood, Director Environmental Observations Division, Office of Space and Terrestrial Applications; Anthony J. Calio, Associate Administrator for Space and Terrestrial Applications; S. W. McCandless Jr., SeaSat-A Program Manager, Office of Space and Terrestrial Applications.
11) Samuel W. (Walt) McCandless, Jr. "The Origin, Evolution and Legacy of SeaSat," Proceedings of IGARSS (IEEE International Geoscience and Remote Sensing Symposium), Toulouse, France, July 21-25, 2003
13) W. F. Havens, H. Ohtakay, "Attitude Determination System for a Nadir-Pointing Satellite," Journal of Guidance, Control, and Dynamics, Vol. 1, No 5, 1978, pp. 352-358
15) "NASA Press Kit of SeaSat-A," Release No 78-77, May 26, 1978, URL: http://www.scribd.com/doc/48929892/Seasat-A-Press-Kit
17) Rick Obenschain, "SeaSat - Lessons Learned ... And Not Learned," URL: http://klabs.org/mapld04/tutorials/mishaps/presentations/7_seasat_obenschain.ppt
18) "Report of the SeaSat Failure Review Board," Dec. 21, 1978, URL: http://www.klabs.org/richcontent/Reports/Failure_Reports/seasat/seasat.pdf
19) Alan Buis, Carol Rasmussen, "NASA Historic Earth Images Still Hold Research Value," NASA, March 18, 2014, URL: http://www.nasa.gov/jpl/historic-earth-images-hold-research-value/#.Uyklh852H5o
20) "ASF (Alaska Satellite Facility)," URL: https://www.asf.alaska.edu/seasat/about/#prettyPhoto
23) D. E. Barrick, C. T. Swift, "The Seasat Microwave Instruments in Historical Perspective," IEEE Journal of Oceanic Engineering, Vol. OE-5, 1980, pp. 74-79
25) Lee-Lueng Fu, Ben Holt, "Seasat Views Oceans and Sea Ice With Synthetic Aperture Radar," JPL publication 81-120, February 15, 1982
26) Ch. Elachi, "Spaceborne Imaging Radar: Geologic and Oceanographic Applications," Science, Vol. 209, No. 4461, September 5, 1980, pp. 1073-1082
27) R. L. Jordan, "The Seasat-A synthetic-aperture radar system," IEEE Journal of Oceanic Engineering, Vol. OE-5, pp. 154-164, 1980
28) John. F. Vesecky, Robert H. Stewart, "The observation of ocean surface phenomena using imagery from the Seasat synthetic aperture radar: An assessment," Journal of Geophysical Research, Vol. 87, C3, April 1982, pp. 3397-3430, DOI: 10.1029/JC087iC05p03397, URL of abstract: http://onlinelibrary.wiley.com/doi/10.1029/JC087iC05p03397/abstract
29) Samuel W. (Walt) McCandless, Jr., Christopher R. Jackson, "Chapter 1: Principles of Synthetic Aperture Radar," SAR Marine User's Manual, 2004, URL: http://www.sarusersmanual.com/ManualPDF/NOAASARManual_CH01_pg001-024.pdf
30) E. Njoku, et al., "The Seasat Scanning Multichannel Microwave Radiometer (SMMR): instrument description and performance," IEEE Journal of Oceanic Engineering, Vol.-5, Issue 2, 1980, pp. 100-115
31) P. N. Swanson, A. L. Riley, "The SeaSAT Scanning Multichannel Microwave Radiometer (SMMR): Radiometric calibration algorithm development and performance," IEEE Journal of Ocean Engineering, Vol 5 No.2, 1980, pp. 116-124
32) Scanning Multi-channel Microwave Radiometer (SMMR)," NSIDC, URL: http://nsidc.org/data/docs/daac/smmr_instrument.gd.html
33) Werner Alpers, "Ocean Surface Wave Imaging from SeaSat to Envisat," Proceedings of IGARSS (IEEE International Geoscience and Remote Sensing Symposium), Toulouse, France, July 21-25, 2003
34) W. Townsend, "An initial assessment of the performance achieved by the Seasat-1 radar altimeter," IEEE Journal of Oceanic. Eng., Vol. OE-5, pp. 80-92, 1980
35) L. S. Fedor, G. S. Brown, "Wave height and wind speed measurements from the SeaSat radar altimeter," Journal of Geophysical Research, Vol. 87, 1982, pp. 3254-3260
36) R. Cheney, J. Marsh, B. Beckle, "Global mesoscale variability from collinear tracks of SeaSat altimeter data," Journal of Geophysical Research, Vol. 88, 1983, pp. 4343-4354
37) J. Lorell, E. Colquitt, R. J. Anderle, "Ionospheric Correction for SeaSat Altimeter Height Measurement," Journal of Geophysical Research, Vol. 87, C5, 1982, pp. 3207-3212
38) R. F. Gasparovic, R. K. Raney, R. C. Beal, "Ocean Remote Sensing Research and Applications at APL," Johns Hopkins APL Technical Digest, Vol. 20. No 4, 1999, pp. 600-610, URL: http://www.jhuapl.edu/techdigest/TD/td2004/gaspar.pdf
39) B. C. Douglas, R. W. Agreen, D. T. Sandwell, "Observing Global Ocean Circulation with SeaSat Altimeter Data," Marine Geodesy, Vol 8, No 1-4, 1984, URL: http://www.luau.ucsd.edu/sandwell/publications/11.pdf
40) R. Kolenkiewicz, C. F. Martin, "Seasat altimeter height calibration," Journal of Geophysical Research, Vol. 87, 1982, pp. 3189-3197
41) J. W. Johnson, et al., "Seasat-A satellite scatterometer instrument evaluation," IEEE Journal of Oceanic Eng.,Vol. OE-5, pp. 138-144, 1980
42) W. L. Jones, L. C. Schroeder, D. H. Boggs, E. M. Bracalente, R. A. Brown, G. J. Dome, W. J. Pierson, F. J. Wentz, "The SeaSat-A Satellite Scatterometer, The Geophysical Evaluation of Remotely Sensed Wind Vectors Over the Ocean," Journal of Geophysical Research, Vol. 87, No. C5, pp. 3297-3317, April 1982
43) R. K. Moore, W. L. Jones, "Satellite Scatterometer Wind Vector Measurements - the Legacy of the Seasat Satellite Scatterometer," IEEE Geoscience and Remote Sensing Society Newsletter, Sept. 2004, pp. 18-36
44) Frank Monaldo, "SeaSat Sees the Winds with SAR," Proceedings of IGARSS (IEEE International Geoscience and Remote Sensing Symposium), Toulouse, France, July 21-25, 2003
45) P. McClain, R. Marks, G. Cunningham, A, McCulloch, "Visible and Infrared Radiometer on Seasat-1," IEEE Journal on Oceanic Engineering, Vol. OE-5, No. 2, April 1980, pp 164-168
47) J. R. Bennett, I. G. Cumming, R. A. Deane, "The Digital Processing of SeaSat Synthetic Aperture Radar Data," Proceedings of the IEEE International Radar Conference, 1980 pp. 168-174
48) H. Nohmi, S. Kato, N. Ito, T. Yanase,H. Kashihara, K. Naito, S. Hanaki, "Digital Processing of Spaceborne Synthetic Aperture Radar Data," Proceedings of ACRS (Asia Conference on Remote Sensing) 1980, Nov. 5-7, 1980, Bangkok, Thailand, URL: http://www.a-a-r-s.org/acrs/proceeding/ACRS1980/Papers/DCA80-1.htm
50) Information was kindly provided by Robert H. Stewart (retired, formerly of Scripps Institution of Oceanography and NASA/JPL — and deeply involved with Seasat from 1974 to 1984)
The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: "Observation of the Earth and Its Environment: Survey of Missions and Sensors" (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates.
Health Status: Mental Health
In recognition of the need to highlight issues of quality of life and rights of people with mental disorders, in 1992 the National Mental Health Strategy was developed and in 1996 mental health was designated as a National Health Priority Area.
Types of mental health disorders
The prevalence of mental disorder was similar for men and women (17% and 18% respectively). However, there were differences in the prevalence of mental disorders of different types among men and women and at different ages. Women were more likely to have experienced anxiety disorders (12% for women compared to 7% for men) and affective disorders (7% compared to 4%). On the other hand, men were more than twice as likely as women to have had a substance use disorder (11% compared to 4%).
The prevalence of anxiety disorders for women aged 18-44 ranged between 12% and 15%. Women aged 45-54 had the highest rate of anxiety disorders, 16%, which steadily declined in older age groups to 5% for those aged over 64. For men, the prevalence of anxiety disorders varied little with age until age 55, after which it declined. The prevalence of affective disorders was highest at 11% for women aged 18-24, more than three times the rate for men of this age. For women, the prevalence of affective disorders generally declined with age, while for men, rates increased in the middle years before declining after age 55.
Men aged 18-24 had the highest rate of substance use disorders, particularly from excessive alcohol intake, with more than one in five being affected (22%). The equivalent rate for women in this age group was half this (11%). For men and women, the prevalence of substance use disorders declined steadily with age. Alcohol use disorders were about three times more common than any other substance use disorder (7% compared to 2%).
The presence of a mental disorder may predispose individuals to other disorders. For example, people who experience social phobia may also experience depression and alcohol dependence. People with an affective disorder were the most likely to have more than one mental disorder. Of those with an affective disorder, 61% also had an anxiety or substance use disorder. In comparison, 43% of those with an anxiety disorder also had an affective or substance use disorder and 31% of those with a substance use disorder had an affective or anxiety disorder.
PREVALENCE OF MENTAL DISORDERS(a), 1997
(a) During the 12 months prior to interview.
Source: Mental Health and Wellbeing Profile of Adults, Australia 1997 (cat. no. 4326.0).
Impact on daily life
People with a mental disorder (or physical condition) are not necessarily restricted in their day to day activities. However, the presence of mental and/or physical conditions in combination often increases the likelihood of disability, compounding the difficulties that these people face.
The 1997 National Survey of Mental Health and Wellbeing used the Brief Disability Questionnaire (BDQ), based on a standard international questionnaire, as a measure of disability. The BDQ asks participants whether they are limited in a number of activities, and whether they have cut down or stopped activities they were expected to do as part of their routine. Participants were then allocated a score characterising them as having a mild, moderate or severe disability, or none.
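The sum-and-threshold scoring the BDQ uses can be sketched as follows. The cut-off values below are hypothetical, chosen purely for illustration; the survey's actual scoring rules are not given here.

```python
# Hypothetical sketch of a BDQ-style disability score: sum the per-item
# limitation ratings, then map the total to a category.
# The cutoffs are illustrative assumptions, not the survey's real values.
def bdq_category(item_scores, cutoffs=(1, 5, 8)):
    """item_scores: per-item limitation ratings (0 = not limited).
    cutoffs: lower bounds for (mild, moderate, severe) -- assumed values."""
    total = sum(item_scores)
    if total >= cutoffs[2]:
        return "severe"
    if total >= cutoffs[1]:
        return "moderate"
    if total >= cutoffs[0]:
        return "mild"
    return "none"

print(bdq_category([0, 0, 0]))        # no reported limitation
print(bdq_category([2, 1, 0]))        # a few mild limitations
print(bdq_category([2, 2, 2, 2, 2]))  # limited on every item
```

The point of the sketch is only the structure (itemised responses collapsed to an ordinal category), which is what allows the disability-status cross-tabulations reported below.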
People who reported physical conditions only were more likely to have a disability than those who reported mental disorders only (55% compared to 30%). This may partly reflect the emphasis the BDQ places on the physical rather than the mental aspects of disability. Even so, adults with mental disorders were on average more likely to be disabled than adults in general (44% compared to 34%).
MENTAL DISORDERS AND PHYSICAL CONDITIONS(a) BY DISABILITY STATUS(b), 1997
(b) During the four weeks prior to interview, according to the Brief Disability Questionnaire.
Source: Mental Health and Wellbeing Profile of Adults, Australia, 1997 (cat. no. 4326.0).
Health service use
Some people experience a mental disorder once and fully recover. For others, it recurs throughout their lives or in episodes. The vast majority of mental illnesses can be treated, provided sufferers have access to appropriate care and services2.
Of those with mental disorders in 1997, 38% used a health service for their mental health problems in the previous 12 months. Women were more likely than men to use health services (46% of women compared to 29% of men). The most commonly used health service for both men and women was consulting a general practitioner (22% and 37% respectively).
The likelihood of using health services for a mental health problem was closely related to the type of mental disorder. Of those with affective disorders only, 56% used health services, compared to 28% of those with anxiety only and 14% of those with substance use disorders only. Those with combinations of mental disorders were the most likely to use services for mental health problems (66%).
For those with a disability, service use for mental disorders increased with the severity of the disability. A small proportion of people with no mental disorders also used services for mental health problems (5%). These people may have consulted a health professional for a sub-clinical mental disorder such as stress, or for a mental disorder not included in the analysis of the National Survey of Mental Health and Wellbeing.
Overall, the proportion of people with a mental disorder decreased as the number of people living in the household increased. This may reflect the difficulties that some of these people have in forming and maintaining relationships.
After adjusting for age, the prevalence of mental disorder was highest for both men and women living alone. This was the case for anxiety, affective and substance use disorders individually.
Age standardised rates were higher among people who were separated or divorced (24% of men and 27% of women) compared to people who were married, widowed or never married. In particular, people who were separated or divorced had higher rates of anxiety or affective disorders (18% and 12% respectively) than the other groups. People who had never married also had higher rates of mental disorder than those who were married. In particular, this group had higher rates of substance use disorders (14%).
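Age standardisation, as used in the comparisons above, reweights each group's age-specific rates by a common standard population so that differences in age structure do not distort the comparison. A minimal sketch of direct standardisation follows; the age bands and population counts are invented for illustration.

```python
# Direct age standardisation: weight each age-specific rate by the share
# of that age band in a chosen standard population. All numbers invented.
def age_standardised_rate(rates, standard_pop):
    """rates: age-specific prevalences, one per band.
    standard_pop: standard-population count for each band."""
    total = sum(standard_pop)
    return sum(r * n for r, n in zip(rates, standard_pop)) / total

# Hypothetical prevalences in bands 18-34, 35-54, 55+ ...
rates = [0.20, 0.15, 0.08]
# ... weighted by a hypothetical standard population.
standard = [300, 400, 300]
print(age_standardised_rate(rates, standard))
```

Two groups with identical age-specific rates but different age mixes receive the same standardised rate, which is exactly the property the marital-status and labour-force comparisons rely on.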
People with mental disorders not only find it more difficult to obtain jobs (see Australian Social Trends 1997, Employment of people with a handicap), but unemployment may also contribute to their disorder. Higher unemployment rates among people with mental disorders could be the result of a combination of factors including the disabling effects of mental disorders, lack of training and negative employer attitudes.
After adjusting for age, rates of mental disorders were highest for men and women who were unemployed or not in the labour force. In particular, unemployed people had relatively high rates of substance use disorders (19% of men and 11% of women) compared to employed people and people not in the labour force. It is unclear whether substance use predisposes people to unemployment, unemployment predisposes people to substance use, or both.
Unemployed women also had relatively high rates of anxiety disorders (20%) compared to employed women and women not in the labour force.
PROPORTION OF PEOPLE WITH A MENTAL DISORDER(a) BY LABOUR FORCE STATUS, 1997
Source: Mental Health and Wellbeing Profile of Adults, Australia 1997, (cat. no. 4326.0).
Suicide rates are thought to be higher among people with mental disorders. However, the incidence of suicide among people with mental disorders is not known.
Results from the 1997 Survey of Mental Health and Wellbeing indicate that people with a mental disorder were nearly four times as likely to have thought about suicide since the age of 18 as people without a mental disorder (37% compared to 9%). Furthermore, they were nearly 7 times more likely to have attempted suicide (10% compared to 1.5%).
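The "times as likely" figures above are simple prevalence ratios: the proportion affected in one group divided by the proportion affected in the comparison group. Reproducing the survey's rounded percentages:

```python
# Prevalence ratio: prevalence in the group of interest divided by
# prevalence in the comparison group, using the rounded figures above.
def prevalence_ratio(p_group, p_comparison):
    return p_group / p_comparison

# Suicidal thoughts since age 18: 37% (mental disorder) vs 9% (none).
print(prevalence_ratio(0.37, 0.09))
# Suicide attempts: 10% vs 1.5%.
print(prevalence_ratio(0.10, 0.015))
```

The first ratio comes out a little over four and the second a little under seven, matching the "nearly four times" and "nearly 7 times" statements in the text.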
1 World Health Organisation (WHO) 1992, The ICD-10 Classification of Mental and Behavioural Disorders, Clinical Descriptions and Diagnostic Guidelines, WHO, Geneva.
2 National Mental Health Strategy brochure 1997, Mental illness - facts, AGPS, Canberra.
Kilauea has been monitored ever since, making it one of the best-studied volcanoes. Still, there is much we do not understand about the inner workings of this volcano. Many members of the Hawaii Center for Volcanology are working on Kilauea or have gained insights into the nature of volcanoes from visiting it. Unlike most other active volcanoes, Kilauea is approachable.
Volcanic History of Kilauea Volcano
Kilauea stands just under 4,200 feet tall at its highest point. Kilauea has a 165 m deep circular caldera at its summit that measures 3 x 5 km (or 6 x 6 km, including the outermost ring faults). It is said that Kilauea is the home of Pele, the volcano goddess of ancient Hawaiian legends. ... Bulletin of Volcanology, 57, 440-450. Wallace and Delaney, Deformation of Kilauea volcano during 1982-1983, a transition period. Journal of Geophysical Research, 100, 8201-8219.
Journal of Volcanology and Geothermal Research
Inflation along Kilauea's Southwest Rift Zone in 2006. Journal of Volcanology and Geothermal Research 177 (2008) 418–424. ... The current eruption sequence of Kilauea began in 1983 and has not involved the SWRZ. Output at Kilauea is fed by magma rising from a deep source to a shallow storage chamber (Eaton and Murata, 1960; Ryan, 1988; Tilling and Dvorak, 1993; Wright and Klein, 2006).
How to become a Volcanologist?
Volcanologists there predict, monitor, and closely study the eruptions of Kilauea and the nearby Mauna Loa volcanoes. ... Besides teaching occasional volcanology courses, most volcanologists teach other traditional courses such as petrology, geochemistry, and geophysics. University faculty have to be on campus most of the year, and thus they tend to do volcanic fieldwork during summer vacations.
Journal of Volcanology and Geothermal Research
Local earthquake tomography with the inclusion of full topography and its application to Kilauea volcano, Hawai'i. Journal of Volcanology and Geothermal Research 316 (2016) 12 – 21. ... We present both synthetic and real data tests based on the P- and S-wave arrival time data for Kilauea volcano in Hawai'i. A total of 33,768 events with 515,711 P-picks and 272,217 S-picks recorded by 35 stations at the Hawaiian Volcano Observatory are used in these tests.
Announcing the 2016 NASA Planetary Volcanology Workshop
Analogs to Volcanic Features and Processes in Satellite and Rover Images. This field workshop is sponsored by NASA, and will be based on Kilauea Volcano, Hawai‘i. ... This workshop is intended for NASA-funded senior graduate students who already have a background in volcanology and are currently working on Mars volcanology problems, but are in need of field experience on real volcanoes.
Global Volcanism Program | Educational Resources
A volcanologist from the Institute of Volcanology in Petropavlovsk, shielded from the intense heat in a reflective suit, extracts a glowing sample of lava from a flank vent of Kliuchevskoi volcano in 1983. ... Hawaiian Volcano Observatory scientists conduct an electronic-distance measurement (EDM) survey on the rim of Kilauea caldera in 1988, with snow-capped Mauna Loa in the background. The procedure uses a laser beam, which is reflected back to the EDM instrument from a distant cluster of reflectors.
Volcanologist runs away from an aa flow! Why did the aa cross the road? ... 11. Logarithmic plot of flow length vs eruption rate (m3/sec). Lines are theoretical predictions, whereas symbols represent real data from Etna, Kilauea and Mauna Loa. In this example, Anja will show us how to estimate flow velocities from measurements made on lava channels, such as this one on the Hapaimanu flow on Mauna Loa.
Scientific specialty: volcanology
TITLE: Magma-tectonic interactions at Kilauea volcano...
In particular, Kilauea's southern flank is sliding seaward along a large crustal detachment fault (decollement), located at the interface between the volcano and the preexisting ocean floor at about 9-12 km depth, that occasionally produces large-magnitude and destructive earthquakes. ... KEYWORDS: 8485 VOLCANOLOGY Remote sensing of volcanoes, 8400 VOLCANOLOGY, 8488 VOLCANOLOGY Volcanic hazards and risks, 7280 SEISMOLOGY Volcano seismology.
Volcanology and Geothermal Energy. ... 6.6). The stillactive shield volcanoes of Kilauea and Mauna Loa are believed to be made up of a below sealevel mass of submarine pillow basalt that is interbedded with and overlain by hyaloclastite deposits and subaerial basalt flows. Hill and Zucca (1987) report that under the Kilauea and Mauna Loa shields, the Mohorovicic ? 232 ? Fig.
Department of Geosciences | Volcanology
Volcanology research at UAF comprises a diverse set of disciplines, and active collaborations among different research groups, including Seismology, Remote Sensing, Infrasound, and Atmospheric Sciences. The UAF/GI volcanology group comprises active members of the Alaska Volcano Observatory in partnership with the US Geological Survey and the State of Alaska Division of Geological and Geophysical Surveys (ADGGS).
Continuous Tracking of Lava Effusion Rate in a
Journal of Volcanology and Geothermal Research 135(1):29-49 Zablocki CJ (1978) Applications of the VLF induction method for studying some volcanic processes of Kilauea volcano, Hawaii. Journal of Volcanology and Geothermal Research 3(1):155-195 Zebker HA, Rosen P, Hensley S, Mouginis-Mark PJ (1996) Analysis of active lava flows on Kilauea volcano, Hawaii, using SIR-C radar correlation measurements.
Volcanology, Geochemistry & Petrology
Earth & Planetary Surface Processes. Geodynamics. Volcanology, Geochemistry & Petrology. ... Volcanology, Geochemistry & Petrology. Earthquakes and Seismology. Mineral & Energy Resources.
Hawaiian Oral Tradition Clarifies 400 Years
We volcanologists used to think that these same events, except for the ʻAilāʻau flow, had all occurred in 1790, in fact within a few days or weeks in that year, so we had incorrectly telescoped 400 years. ... Swanson, D. A. (2008).
Paleomagnetic constraints on fault motion... - CaltechAUTHORS
Riley, Colleen M. and Diehl, Jimmy F. and Kirschvink, Joseph L. and Ripperdan, Robert L. (1999) Paleomagnetic constraints on fault motion in the Hilina Fault System, south flank of Kilauea Volcano, Hawaii. Journal of Volcanology and Geothermal Research, 94 (1-4). pp. 233-249.
Volcanology Field Camp
Iceland is a volcanic wonderland ( Iceland Field Camp pics.)This course in volcanology will explore Iceland from the south coast where the Mid-Atlantic Ridge comes ashore to the highlands near the focus of the Iceland mantle plume. ... Course Information: Prerequisites: Mineralogy, and petrology required; stratigraphy, structural geology, and volcanology helpful but not required.
Volcanology, University of Oregon (Architecture of the...)
It was renamed Volcanology in 1968 and converted into classrooms and offices in 1969 following the opening of the new infirmary, the Student Health Center. Timelines. ... Sculpture, by Harold Balasz, presents symbols, mostly associated with sciences. 1968. The building is renamed Volcanology and converted to classroom and offices. Interior alterations by Banks Upshaw (1969). Its function is replaced by the new Student Health Services Building.
Petrology and Geochemistry of the Ongoing Pu'u 'O'o Eruption...
The current eruption of Kilauea Volcano on the island of Hawai'i has been closely monitored and studied since its inception in 1983. This laboratory exercise utilizes the excitement of an ongoing eruption to demonstrate magmatic processes (crystal fractionation and magma mixing) in an ... This lab is designed for an undergraduate course in petrology, although it could be used for a geochemistry or volcanology course at the undergraduate or graduate level. Skills and concepts that students must have mastered.
img12052014_0024. 14-97h GEOS 363 Volcanology, new course.
USF :: Volcanology
USF volcanology is contributing to a new National Academy of Sciences panel on the state of volcano science in the US. Chuck Connor is working on the report, and Steve McNutt is liaison to the committee from the NAS crustal geodynamics group. ... REUTERS: ‘Ring of Fire’ volcano risk, the last obstacle for Japan nuclear plants - USF volcanologists, weigh in on the risks to nuclear power plants in Japan from caldera-forming eruptions.
Volcanology | Geophysical Institute
Volcanology facilities at the GI and UAF comprise geophysical networks, satellite receiving facilities, experimental petrology and the Advanced Instrumentation Lab in the College of Natural Sciences and Mathematics. The GI is a partner agency in the Alaska Volcano Observatory. ... We apply this real-world experience to research projects around the globe, and train the next generation of volcanologists through mentoring and research programs for undergraduate and graduate students.
Robert A. Zierenberg | UC Davis Earth and Planetary Sciences
Rob Zierenberg on Kilauea Volcano, the Big Island, Hawaii. ... GEL 138: Introductory Volcanology. Recent Publications. Zierenberg, RA, P Schiffman, GH Barfod, CE Lesher, NE Marks, JB Lowenstern, AK Mortensen, EC Pope, DK Bird, MH Reed, GO Frioleifsson and WA Elders (2013) Composition and origin of rhyolite melt intersected by drilling in the Krafla geothermal field, Iceland.
Kilauea is an active volcano in the Hawaiian Islands, one of five shield volcanoes that together form the Island of Hawaiʻi. In Hawaiian, the word kilauea means "spewing" or "much spreading", in reference to the mountain's frequent outpouring of lava. Issuing lava continuously since January 1983, Kilauea is currently the most active volcano on the planet, an invaluable resource for volcanologists, and also the planet's most visited active volcano.
GPS Spotlight : Station Details | Kilauea Eruption
I said volcanology. He replied, "Not many jobs there either, but let's see what we can do." ... I've felt strong earthquakes (up to M6.7), seen several fissure eruptions, witnessed lava fountaining, and stood on the rim of the largest lava lake in the world, among many other unique experiences. Perhaps the best part about Hawaii, and Kilauea in particular, is that it is so dynamic.
Volcanology :: | The University of New Mexico
Volcanology. EPS 450L / 550L (4).
In the past, Kilauea erupted almost every day since 1983, but now Kilauea is in its dormant stage. Tiltmeters and seismographs helped give warning that it was going to erupt, so no one was hurt during the 36-day eruption, the largest eruption ever recorded for Kilauea. FACTS: 1) Estimated age of the first eruption is 300,000-600,000 years ago. 2) Volume of Kilauea volcano is 25,000-35,000 km3.
Frank A. Perret
The plan was set into action and by 1911, Jaggar and Perret established the first observation station on the rim of the Halema’uma’u crater. A year later, the Hawaiian Volcano Observatory was built at the edge of the Kilauea caldera. ... For more information about Perret's work at Kilauea, please see Braving Kilauea. Perret’s expertise in volcanology did not go unnoticed.
Hawaiian "Hot Spot"
Kilauea: youngest volcano on the big island. Of hawaii.
Volcanology of Hawai'i
Watch where the "lava" chooses to flow. It flows most quickly down the steepest slopes and slows down when it hits a flatter surface, such as your driveway. Volcanology of Hawai'i. ... Kilauea volcano on Hawai'i Island is a good example of that kind of eruption. Or, an eruption can be extremely explosive, such as the eruption of Mount St. Helens in the 1980s that pulverized one entire side of the mountain. These explosions can be very violent and dangerous.
Virtual Tour | Kilauea
Kilauea. Video of the Kilauea Volcano.
Journal of Volcanology and Geothermal Research
This Special Issue on the "Volcanology of Erebus volcano, Antarctica" has the aim of showcasing a wide-ranging selection of the most recent scientific research on the volcano, but it also marks and commemorates two notable events. The first is the 100th anniversary of the first ascent of the volcano and the second is the International Polar Year (IPY 2007-2008), which is still underway at the time of writing.
Volcanology at Arizona State University
© 2007 - Amanda Clarke - Volcanology Group. School of Earth and Space Exploration - Arizona State University.
How Volcanoes Work - Hawaiian eruptions
The Kilauea summit caldera and east rift system are evident on the above map-view and 3D images. The blue-to-purple regions descending down the southeastern slope of Kilauea (far right) are lava flows generated during the Pu'u O'o eruptive series, through early 1994. ... They can occur in short spurts, or last for hours on end. One of the most spectacular fire fountaining events ever recorded on Kilauea produced a lava spray 580 m high at the Kilauea Iki vent in 1959. However, this is dwarfed by the 1600 m fire fountain...
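Treating fountain clasts ballistically, the ejection velocity needed to reach the 580 m fountain height quoted above follows from v = sqrt(2gh). Air drag is neglected here, so this is only a lower-bound sketch.

```python
import math

# Minimum ejection velocity for a ballistic clast to reach height h,
# neglecting air resistance: v = sqrt(2 * g * h).
G = 9.81  # gravitational acceleration, m/s^2

def ejection_velocity(height_m):
    """Lower-bound launch speed (m/s) to reach height_m metres."""
    return math.sqrt(2 * G * height_m)

print(f"Kilauea Iki 1959, 580 m fountain: {ejection_velocity(580):.0f} m/s")
print(f"1600 m fountain:                  {ejection_velocity(1600):.0f} m/s")
```

The 580 m fountain implies launch speeds on the order of 100 m/s; the 1600 m case quoted for comparison requires well over 170 m/s, which illustrates why such events are exceptional.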
EENS 4680 Volcanology
EENS 4680 Volcanology Volcanology The study of volcanoes including volcanic landforms, eruptive mechanisms, and tectonic environments. Pre-requistites: Approval of instructor. credit hours: 3.
Our days spent on the summit of Kilauea volcano were predictably rainy on the windward side and breezy and sunny over by Halemaumau, which is still spectacularly gushing gas. A hike across Kilauea Iki was ethereal in the fog, with lots of steam from heated rainwater coming up around us. The yellow sulfur deposits can be seen at the Sulfur Banks walk. I like the last few pics taken from the airplane from Hilo to Honolulu (left side airplane!).
Kilauea Area Map.
Volcanology and Igneous Petrology.
Kilauea Point Albatross Population Data
Graph: Number of Laysan Albatross Nests and Chicks at Kilauea Point.
Volcanology. Title Filter.
Volcanology - Earth and Space Sciences
Volcanology. Research. ESS in the News.
Volcano Information Center (VIC)
The purpose of the Volcano Information Center (VIC) is to provide links to websites that are resources for data not contained in VIC and to inform the user about general volcanology in an organized way, including features of volcanoes, volcanic eruptions and volcanic hazards. ... Price: $35.00. Second Book. Out of the Crater: Chronicles of a Volcanologist. by Richard V. Fisher, 1999.
PHYSICAL VOLCANOLOGY. The physical volcanologists on board the JOIDES Resolution during Leg 183 sought to determine the types of eruptive activity that formed the volcanic rocks and volcaniclastic sediments recovered in the cores. This was accomplished by describing the rocks and identifying features that are diagnostic of specific physical processes to produce an integrated picture of the style of volcanism and environmental setting of each site drilled.
Volcanology | University of Texas Libraries
Volcanology. Earthquake and Volcano Deformation. Segall, Paul. ... Fundamentals of Physical Volcanology. Parfitt, Elisabeth Ann; Wilson, Lionel. 2008. Malden, MA: Blackwell Pub.
VERY Preliminary Schedule – ERTH 130V – Geologic Field...
Two pre-field-trip meetings @SBCC Day 1 (Jan 2) – Arrive in Hilo, Set up Day 2 (Jan 3) – Jaggar Museum (Don Swanson lecture), Kilauea Caldera, Keanakako’i Ash, Sulphur Springs, Steaming Bluffs, Keanakako’i Crater and 1974 flow, spatter ramparts, tree molds, Evening “After Dark in the Park” lecture. Day 3 (Jan 4) – Kilauea Iki Crater, Pu’u Pua’i Cone, Pu’u Pua’i Ash Day 4 (Jan 5) – Meet with Christina Neal of HVO, Mauna Ulu, Mapping project for 1969 flow...
The discipline of Volcanology has its origins in this Zone, and, since then this complex volcanic area is the test site for volcanological hypothesis and theories. Lyell based his theory on vertical movements of the earth surface on the observation of lithodomes (shellfish holes) in columns of the ancient Roman market ''The Serapeum'' in Pozzuoli (Phlegrean field).
Crater of the still active Kilauea Volcano
Crater of the still active Kilauea Volcano (note the steam in the middle of the picture and the solidified lava lake). Black sand beach. The big island has numerous huge lava flows. Kilauea is still pouring lava into the Pacific and all the land visible here is fairly recently deposited black lava. Signs warn that it is dangerous to proceed beyond this point.
Cochise College P | Kilauea crater floor-D Kilauea 3864
Department of Geology
Rodriguez, L. and Smith, A.L., 2013, “Field Guide: Volcanic Evolution of Montserrat: From the Silver Hills to the Soufriere Hills Current Eruption”, 22-26 March, 2013 SE GSA Meeting post-meeting field trip, 51 p. Henney, L.A., L.A. Rodriguez, I.M. Watson, 2012, “A comparison of SO2 retrieval techniques using mini-UV spectrometers and ASTER imagery at Lascar volcano, Chile”, Bulletin of Volcanology, doi: 10.1007/s00445-011-0552-2. Rodriguez, L.A., 2010, “VEPP: Volcanic activity and monitoring of Pu`u `O`o, Kilauea...
Get /c/en/philippine_institute_of_volcanology_and_seismology in JSON format. ... Philippine Institute of Volcanology and Seismology is an instance of Organisation.
Hawaii Workshop | Recent Advances in Volcanology Workshop
Recent Advances in Volcanology Workshop. Participants of the Volcanoes Workshop in Hawaii approach a lava flow entering the sea. Burned park ranger station.
Determining | Kilauea
I will be focusing on the Hawaiian volcanoes of Kilauea, Loihi, Mauna Kea, and Mauna Loa, all located on the big island of Hawaii. I will be using compositional data for magmas, represented by glasses, from the four volcanoes and a petrological method to calculate the pressures of partial crystallization of these magmas. ... Bulletin of Volcanology, 57: 602-630.
MHC Volcanologist Leads Trip
MHC Volcanologist Leads Her Last Five College Field Trip. High on Geology Professors Godchaux (fourth from right) and Mike Rhodes (top) with their students climbing a boulder at the base of the dome of Mount St. Helens in 1990. The professors will take students on a volcanology field trip to the Cascade mountain range this summer. Retiring Geology Professor Martha Godchaux is going out, characteristically, with a volcanic bang.
The distribution of the recolonizing organisms on Kilauea...
b. Explain how immigration and competition changed the community structure from year 1 to year 9. The community continued to change after year 9 of this study. c. Describe the expected distribution of the five original types of organisms on Kilauea in another 20 years. Explain your reasoning.
Department of Geology and Planetary Science
Past volcanology students. ... Jeff Byrnes. Dissertation Title: Lava flow field emplacement studies of Mauna Ulu (Kilauea Volcano, Hawaii, USA) and Venus using field and remote sensing analyses.
Plumes from the Kilauea Volcano in Hawai’i « CIMSS Satellite...
Activity from Kilauea then continued for several weeks; GOES-11 (GOES-West) 0.63 µm visible imagery from 07 April 2008 (above) showed the hazy signature of a long volcanic plume (composed primarily of steam, but possibly containing small amounts of ash) streaming southwestward from Hawai’i. With the typical northeasterly trade winds that often persist over that region, this was the common scenario seen on many days during late March into early April.
Panoramic view of Kilauea Iki Crater from near the trailhead. Notice the lightly worn "trail" across the crater floor.
Silica coatings on the 1974 Kilauea flow: new SEM and SIMS results and implications for Mars. ... We focus on a suite of samples from the 1974 Kilauea pahoehoe flow, collected in 2003. The chemistry and morphology of these coatings were previously presented. Here we present new morphological, spectral and isotopic analyses of the coating suite.
Environmental Geology lecture outline - Volcanology.
Environmental Geology lecture outline - Volcanology. We can start with some initial questions to ponder. What are the compositional variations and physical properties of lavas?
VOLCANOLOGY. Course Name: VOLCANOLOGY.
Volcanology Field Trip to Guatemala / n3.jpg
Volcanology Field Trip to Guatemala: JR and Stan.
The Great Crater at Kilauea
Serial: Appletons' journal: a magazine of general literature. Title: The Great Crater at Kilauea [Volume 10, Issue 232, Aug 30, 1873; pp. 266-268]. Author: Vincent, Frank, Jr. Collection: Making of America Journal Articles.
Kilauea Volcano Tour. Contact Lehua Hawaiian Adventures for additional information. Visit the Smoldering Remains of Kilauea. Drive through miles of black molten lava and into Volcanoes National Park where you will experience the sight and smell of the volcano crater.
Kilauea | Ke Alaka'i
Kilauea. Lava continues to flow on Big Island. Lava flows from Kilauea volcano on the Big Island have been on the move since June 27 and are threatening to force Puna district residents to pack up their belongings and leave their houses.
kilauea surface seismic map
eCite - Sedimentology and volcanology of the Tawallah Group
Sedimentology and volcanology of the Tawallah Group...
WHOI Through the Lens
Kilauea is the youngest and southeasternmost volcano on the Big Island of Hawai`i. Topographically, Kilauea appears as only a bulge on the southeastern flank of Mauna Loa, and so for many years Kilauea was thought to be a mere satellite of its giant neighbor, not a separate volcano. However, research over the past few decades shows clearly that Kilauea has its own magma-plumbing system, extending to the surface from more than 60 km deep in the earth.
Kilauea Volcano Photo Caption
This image was taken by an instrument (called a Thematic Mapper Simulator) from an airplane (a NASA C-130) flying over the Kilauea Crater. This image looks different from aerial photographs. This is because all the data acquired by this instrument (called the Thermal Infrared Multispectral Scanner, or TIMS) is in the thermal infrared part of the electromagnetic spectrum.
Journal of Volcanology and Geothermal Research
348 P. Bani et al. / Journal of Volcanology and Geothermal Research 188 (2009) 347–357. Fig. 1. Map showing Ambae, located roughly in the geographic centre of the Vanuatu archipelago (left), and of the island, including Voui crater lake (right). Insets show photographs of Voui, with its usual blue colour (July 2005; courtesy S. Cronin) and its recent red colour (June 2006; courtesy P. Metois).
Kilauea. 1. Seared trunks poked twenty feet through the pumice crust, skinned and ground to a point.
Research Focus Areas: Igneous petrology, volcanology, and undergraduate geoscience education. Current Projects: Volcanology and Petrology of Tertiary volcanic rocks associated with the Bald Mountain Volcanic Complex, central Oregon. Petrology of Pleistocene ash-flow tuffs and silicic domes related to Newberry Volcano in central Oregon. Developing active-learning strategies for undergraduate geoscience courses.
kilauea area map
Madison@ Kilauea, 1998 (earth, wind, fire). Jennah@UMCP, 1999 (cap, gown, diploma).
Welcome to web pages of S. Can Genc
Neogene and younger magmatic activity of NW and Western Anatolia. Volcanology and petrology of the Bodrum peninsula Neogene magmatism. Petrology of the Leucitic mafic volcanism along the Izmir-Ankara suture zone (Central and Western Pontides).
Volcanology: Geology, not Mr. Spock
"Volcanology and Geochemistry of Pliocene and Quaternary..."
Signatures have been redacted for privacy and security measures. Repository Citation. Dickson, Loretta D., "Volcanology and Geochemistry of Pliocene and Quaternary Basalts on Citadel Mountain, Lunar Crater Volcanic Field, Pancake Range, Nevada" (1997). UNLV Theses, Dissertations, Professional Papers, and Capstones.
SESSION 10 - EM applications on seismology and volcanology
SESSION 10 - EM applications on seismology and volcanology.
People - School of Earth and Climate Sciences
Daniel Belknap. Professor School of Earth and Climate Sciences Cooperating Appointment, Climate Change Institute. Sean Birkel. Research Assistant Professor School of Earth and Climate Sciences and Climate Change Institute. Harold Borns. Professor Eme...
Boudreau - Home Page
EOS 49S - Great Geologic Controversies. EOS 402S - Volcanology: Geology of the Hawaiian Islands. EOS 569 - Theoretical Geochemistry. EOS 585 - Layered Intrusions.
Home Page - Ron Morton. PhD-Carleton University, Ottawa; Volcanology-Economic Geology. Introductory Geology-Geol 1110. Physical Volcanology.
VEMG Table of Contents Page
Volcanology Research @ UNH Home Page. VEMG News Page. VEMG Press Release 1. ... VEMG Search Page. Volcanology Reprints.
FS11: Great Crack
Field stop 11: southwest rift / great crack. On day three, the Eau Claire crew began the day by visiting the Southwest Rift Zone. Kilauea volcano developed two rift zones -- the East Rift Zone and the Southwest Rift Zone. Aerial view of the uppermost Southwest Rift Zone looking towards the northeast. Photo by J.D. Griggs on March 4, 1985.
Brian Hausback CSUS :: home
Geology 114 Volcanology. Geology 212 Geologic Remote Imaging. Geology 240D Field Volcanology. Lituya Bay. Fluxgate Magnetometer.
Sir William Hamilton and the Beginnings of Volcanology
Today, the beginnings of volcanology. The Honors College at the University of Houston presents this program about the machines that make our civilization run, and the people whose ingenuity created them. How do you study a volcano without dying? That’s a serious question. In 79 AD, Vesuvius claimed the life of the great ancient naturalist Pliny the Elder. Mount St. Helens killed volcanologist David Johnston in 1980.
GEOL 50623 - Volcanology
GEOL 50623 - Volcanology. Prerequisite: GEOL 50233, or permission of instructor. Two hours lecture and one three hour laboratory period per week.
Dynamic coupling of Kilauea and Mauna Loa, Hawai’i
Six months prior a similar inflation began at neighboring Kilauea. A numerical model that integrates GPS measurements of volcano deformation with asthenospheric and crustal magma flow demonstrates that Mauna Loa and Kilauea may be dynamically linked ... Streamlines (thin red arrows) show that Mauna Loa and Kilauea capture different parts of the melt source. Thick red arrows above the porous zone schematically represent vertical magma flow through Mauna Loa's and Kilauea's lithospheric plumbing system.
Education in Russia for foreign citizens: Educational...
25.00.04 - Petrology, Volcanology; 25.00.06 - Lithology; 25.00.07 - Hydrogeology; 25.00.08 - Engineering Geology, Study of Frozen Soil Conditions and Mineralogical Soil Study; 25.00.10 - Geophysics, Geophysical Methods of Exploration of Mineral Deposits; 25.00.23 - Physical Geography and Biogeography, Geography of Soils and Geochemistry of Landscapes; 25.00.24 - Economic, Social and Political Geography; 25.00.36
Mike's Volcano Page
OK general purpose volcanology site; obviously emphasizing hazards. YVO - Yellowstone Volcano Observatory. "Monitoring the Largest Volcanic System in North America" (actually only true if we're talking about Quaternary systems). ... Terrific site by one of the "Masters" of volcanology. Information for Future Volcanologists. Nice site by Bill Leeman of Rice University.
CWU Geological Sciences - Volcanology - Student...
Volcanology. Student responsibilities. Attendance is required for all labs and lectures.
The top 2 most dangerous supervolcanoes worldwide
Kilauea, Hawaii. 1247 m. Stromboli, Italy.
Petroglyphs near Kilauea. Many concentric circles, no spirals. turtle? paddle?
Geochemistry, Petrology and Volcanology Faculty
Experimental volcanology We use measurements of lava rheology, heat capacity and thermal diffusivity to understand physical and chemical controls on the style of volcanic eruptions (explosive or gentle), and what the morphology of lava flows on Earth and other planets can tell us about the rheology of the erupting lava. ... Sampling active lava flows at Kilauea volcano, Hawai’i.
14-Mar-02: 1983 Eruption of Kilauea
14-Mar-02: 1983 Eruption of Kilauea.
Colgate Classes Webs - Spring 2008
GEOL220 - Volcanology - KHarpp. EDUC - Student Teachers - BRegenspan. Colgate Web.
Charles H. Langmuir | Department of Earth and Planetary...
Charles H. Langmuir. Higgins Professor of Geochemistry. The solid earth geochemical cycle, petrology, volcanology, ocean ridges, convergent margins, ocean islands, composition and evolution of the earth's mantle. Assistant: Lisa McCaig.
Education. Doctorate in Earth Science (Volcanology): University of Perugia, Italy. Laurea degree in Geology: University of Rome "La Sapienza", Italy.
Volcanology | Themes | ObieMAPS
Oberlin collections, courses, faculty/staff members, and study away programs associated with the theme of Volcanology.
Journal of Volcanology and Geothermal Research
T. Husain et al. / Journal of Volcanology and Geothermal Research 285 (2014) 100–117. Description: Flow rate in 3D geometry. Flow velocity of fluid for 3D geometry. Area of conduit for a 3D geometry. Average fluid velocity given by the Hagen–Poiseuille flow equation. Radius of conduit. Normal force applied on a particle in contact with another in PFC2D. Normal contact bond stiffness. Overlap in the normal direction between 2 contacting particles in.
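The variable list above references the Hagen–Poiseuille equation for flow through a conduit. As a minimal sketch (not code from the paper), the laminar volumetric flow rate through a cylindrical conduit is Q = πr⁴ΔP/(8μL); the magma-conduit values below are hypothetical:

```python
import math

def hagen_poiseuille_flow(radius_m, dp_pa, viscosity_pa_s, length_m):
    """Laminar volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L)
    for a Newtonian fluid in a cylindrical conduit."""
    return math.pi * radius_m**4 * dp_pa / (8.0 * viscosity_pa_s * length_m)

# Hypothetical values: 5 m conduit radius, 10 MPa driving pressure,
# 1e4 Pa*s magma viscosity, 1 km conduit length.
q = hagen_poiseuille_flow(5.0, 10e6, 1e4, 1000.0)  # m^3/s
```

With these inputs Q comes out to a few hundred cubic metres per second; the strong r⁴ dependence is why small changes in conduit radius dominate the eruption rate.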
Fall 2017 Spring 2018 Spring 2019 Spring 2020. Geo 201 Geo 204 Geo 212 Geo 211. Structural Geology Petrology Volcanology GIS and Remote Sensing. Geo 203 Geo 209 Geo 306 Geol 311. Mineralogy Geochron and Paleontology Glaciers & Climate Change Advanced GIS.
Journal of Volcanology and Geothermal Research
Westward subduction of the Atlantic plate at ~2.2 cm/yr has given rise to the Lesser Antilles, a 750-km-long intra-oceanic north–south-. L. Ruzie, M. Moreira / Journal of Volcanology and Geothermal Research 192 (2010) 142–150. 143. more prolonged period of eruptive activity, which culminated in Strombolian-type eruptions in July 1996 (Christenson, 2000).
Hawaii, Big Island
Kilauea "summit" crater. Kilauea lava explodes when hitting the ocean. Explosions (at dusk). This violently rising plume spawns a tornado every few minutes. ... Fiddlehead in rainforest near the crater. Kahili Ginger Flower. Kilauea Iki crater (a nearby crater that last erupted in 1959).
POR - Station Metadata #2, KILAUEA FLD 17 113, HAWAII
Kilauea fld 17 113, hawaii. Station Metadata. Color indicates dates with an observation for the specific element.
Hawaiian volcanoes: Kilauea, Mauna Loa, Hualalai. Eruptions in Hawaiian volcanoes are usually preceded by... By Jenn Kil. The Pu'u 'O'o–Kupaianaha eruption of Kilauea. Kilauea is the youngest volcano and one of the world's most active. Three main areas of eruption: the summit and two rift zones. Most eruptions are relatively gentle; lava flows downslope.
Professor Martha Schoene
For on-line quizzes, exploration questions and links to the Web’s best geology sites visit http://www.prenhall.com/lutgens. Grand Teton National Park, Wyoming. Kilauea Volcano, Hawaii.
Kunlun Infrared Sky Survey
Check out our Meeting at Kilauea Military Camp. March 2015.
Geosciences Laboratory | Planetary Science Institute
Collections include NASA CD-ROMs of Clementine, Galileo, Magellan, Mars Pathfinder, Mars Global Surveyor, Viking, Voyager, Geologic Remote Sensing Field Experiment, and Volcanology (Kilauea, Mauna Loa, and Kamchatka) data; selected prints of Lunar Orbiter, Mariner 9 (Mars) and 10 (Mercury), Viking Orbiter, and Voyager images; aerial photographs and LANDSAT data of volcanic landforms in the U.S. (CA, ID, HI, and OR) and.
This dissertation is divided into three chapters, each to be submitted to peer-reviewed journals, addressing the volcanology, petrology, and petrogenesis of rhyolitic volcanics in southwestern Idaho. Chapter 1: Conflicting ...
Austin Peay State University : Dr. Lindsay Szramek
Areas of Expertise: petrology, mineralogy, volcanology.
Print Friendly Search Results
Series. : Developments in volcanology, v. 10. Developments in volcanology ; 10. Subject Term. : Cokuntu.
GE151j: Introduction to Volcanoes and Volcanology
Introduction to Volcanoes & Volcanology. Course Syllabus January, 2005. (Photo of Guagua Pichincha, Ecuador, in October, 1999, courtesy of Josh Morris, Colby '96.) ... 4 January. Why volcanoes? An introduction to Plate Tectonics VIDEO: Eruptive Phenomena of Kilauea's East Rift Zone, Hawai'i (55 min.) * * Video synopsis papers due at THE BEGINNING OF CLASS on THE FOLLOWING CLASS DAY. 5 January.
The Journal of Geology: Home
Since 1893, The Journal of Geology has promoted the systematic philosophical and fundamental study of geology.
New Page 1
These so-called corn rocks were considered special enough by their creators to be built into the walls of homes constructed shortly after the eruption of Sunset Crater. Had I been in the mood at Kilauea in 1969, I could have created corn rocks, too, by laying ears at the base of the active hornito, and then retrieving solidified spatter that had splashed down on and molded around the ears.
Baylor University || Department of Geosciences || Igneous...
Igneous Petrology & Volcanology. Over the past twenty five years, the Department of Geology has sponsored student research in diverse volcanic provinces such as the Cenozoic volcanic fields of West Texas and the San Juans of Colorado, volcanic rocks of the Oregon Coastal Range and the Quaternary Central Oregon Cascades.
The Dream City: The Volcano of Kilauea
THE VOLCANO OF KILAUEA - Between the Chinese Theatre and the Ferris Wheel stood the cyclorama of the greatest active volcano in the northern hemisphere. In front of the pavilion was a heroic statue of Pele, the Hawaiian goddess of fire, made by Mrs. Copp, the sculptor, and under the canopy a choir of Kanaka musicians sang to the public, evoking much applause.
12-Dec-08: Why does Kilauea flow, but Mt. St. Helens explode?
12-Dec-08: Why does Kilauea flow, but Mt. St. Helens explode?
Mineralogy, Geochemistry and Volcanology of Volcanic Tuff...
Jordan Journal of Civil Engineering, Volume 8, No. 2, 2014. Mineralogy, Geochemistry and Volcanology of Volcanic Tuff Rocks from Jabal Huliat Al-Gran, South of Jordan (New Occurrence). Reyad A. Al Dwairi. 1)*. ... In: Basaltic Rocks of Various Tectonic Settings, Special Issue of the Geochemical Journal, 28, 542-558, Japan. Al-Malabeh, A. (2003). "Geochemistry and Volcanology of Jabal Al-Rufiyat, Strombolian Monogenic Volcano, Jordan".
Lava flows devastate neighborhoods on the flanks of Kilauea. The "fire pit", or active crater, of Halemaumau in Kilauea caldera sometimes holds a lake of bubbling, glowing lava. The side of Halemaumau Crater has collapsed due to mass wasting. Mineral encrustations grow around the vents of fumaroles, marked by rising steam. In 1959, a spectacular eruption formed Pu'u Pua'i Cone in Kilauea Iki, a large pit crater near the caldera of Kilauea Volcano.
Moving. Colliding plates create trenches or subduction zones. Earthquakes & volcanoes. Less dense. Kilauea, Hawai’i.
BATTLE ROBOTS Camps (Kilauea Rec. Center)
2004 Summer LEGO® BUILDER-PROGRAMMER, CAR RACING, MARS MISSION, and. BATTLE ROBOTS Camps (Kilauea Rec. Center). We are offering 4 exciting LEGO® Mindstorm™ robotics based summer camps this year –.
GEOL 620 - Volcanology - Acalog ACMS
GEOL 620 - Volcanology. Credits: 2. Examines processes associated with active volcanoes as revealed by volcanic deposits.
In July 2013 I attended a volcanology conference in Kagoshima Japan organized by IAVCEI (the International Association for Volcanology and Chemistry of the Earth's Interior). I took some time off before the meeting, stopping in Narita, Matsumoto, and Kyoto. After the meeting I briefly visited Unzen and Aso volcanoes.
Introduction to Volcanology (3 credits). Fall 2015.
About – Volcanology
Fisher, R.V.: Out of the Crater: Chronicles of a Volcanologist.
Volcanologists venture to treacherous volcanoes the world over in the pursuit of their science. ... He writes about the cultural rewards and challenges of conducting research in isolated areas of such countries as Argentina, Mexico, and China. And he discusses the early influences that steered him toward volcanology--including his army experiences as a witness to two atom-bomb explosions at Bikini atoll.
Lu'au: Program | Willamette University | Ka Nani A'o Kilauea
Ka Nani A'o Kilauea. Co-ed ʻAuana, taught by Courtney Lai. In the song, Weldon Kekauoha describes the first time he sees Pele's home, Kilauea. From the windy cliffs overlooking Halemaʻumaʻu to the sweet fragrance of the lehua blossom, this song beautifully describes the wondrous natural beauty of Pele's cherished home. Hanohano Ka Lei Pikake.
Kilauea Crater. Preparation for our hiking. A road destroyed by lava. ... Volcanoes National Park visitor center. ... Kilauea Crater and Mauna Loa.
These data have indicated that the Earth’s lithosphere is divided into a series of plates that move across the surface of our planet. Our observations in class today are intended to help you better understand the various types of plate boundaries and the important geological features associated with these boundaries. Around the room there are four maps entitled Geography, Geochronology, Seismology, and Volcanology.
Research interests: Volcanology, Tephrochronology, Volcanic hazard, Volcano-tectonic deformations. Recent publications: 1. Melekestsev I.V., Braitseva O.A., Ponomareva V.V. 1989. Prediction of Volcanic Hazards on the Basis of the Study of Dynamics of Volcanic Activity, Kamchatka. In: Volcanic Hazards Assessment and Monitoring: IAVCEI Proceedings in Volcanology I. Berlin - ...Tokyo. Springer-Verlag. P. 10-35.
Images from the field
Euconocephalus nasutus: a non-native conocephaline katydid. An acoustic trap used to collect Ormia ochracea. Wildlife park at Kilauea Lighthouse, Kauai. North coast of Kauai.
Volcanology (GEOL 3118) | GW Expert Finder
Volcanology (GEOL 3118) Course. Overview. participant.
METU | Geological Engineering | Courses
8.0. GEOE541. VOLCANOLOGY.
Courses/Teaching: GEOL 205 - Volcanic Hazards, Surveillance and Prediction GEOL 320 - Mineralogy and Crystallography GEOL 321 - Optical Mineralogy GEOL 322 - Introduction to Geochemistry GEOL 325 - Igneous and Metamorphic Petrology GEOL 552 - Volcanology and Volcanic Hazard Assessment.
Dr. William K. Hart--Miami University Geology
Dr. William K. Hart is a professor of geology at Miami University and director of the Miami Field Station. His research interests are in petrology, volcanology, geochemistry, and crust/mantle evolution. ... Igneous Petrology, Geochemistry, Volcanology, Crust/Mantle Evolution, Tephrostratigraphy.
Listing of /v1/AUTH_opentopography/PC_Bulk/HI09_Big_Island
Volcanology of Kozuf Mountain in the Republic of Macedonia
The University of Texas at El Paso Library
Entries 2 Found. 1. Philippine Institute of Volcanology. : Government Documents. 1992. 1. 2. Philippine Institute of Volcanology and Seismology -- See Also the earlier heading Philippine Institute of Volcanology. 1. Save Marked Records Save All On Page.
VIEPS: Volcanology, Geochemistry and Geochronology
Atmospheric and Oceanic Science Basin Analysis, Palaeontology and Petroleum Geology Economic Geology and Metallogeny Environmental and Regolith Processes Geological Field Skills Igneous Petrology Information Technology Methodologies Remote Sensing, Imaging and Modelling Structural Geology Volcanology, Geochemistry and Geochronology Miscellaneous.
Volcanoville: Predicting Eruptions. Research project: Eruption news report or volcanologist interview. Time: goal ... Imagine that you are a volcanologist being interviewed about how people can be warned before eruptions happen. Your interview should answer the following questions: • What is the technique called?
Index of /~csmart/Observing/Lectures/animations/04 Volcanoes
Eruption of Kilauea.mov. Hot Spots - Seamounts and Guyots.swf. ... hot spot volcanoes.swf. kilauea9812-1.mpeg.
The Perkins Geology Museum at The University of Vermont...
PGM Collection catalog and archive search. Search Results for Kilauea lava. Click on Picture to view larger images and details. ... There are 0 pictures for 1 Collection Items. PGM Collection catalog listing only. Search Results for Kilauea lava.
youngest subaerial volcano, Kilauea, will be used as the starting point because it currently sits near the center of the hot spot. Students will first measure the distances from Kilauea to the respective volcanoes on the Kea trend (Mauna Kea, Kohala, Haleakala, West Maui, and East Molokai) using a map of the Hawaiian Islands (Figure 1). They will then place this information in a chart (Figure 2) that also shows the ages of these respective volcanoes.
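The chart exercise reduces to a unit conversion: distance from Kilauea divided by age gives an average plate speed, and 1 km/Ma equals 0.1 cm/yr. A minimal sketch with hypothetical distance/age pairs (the real numbers come from the map and the ages in the chart):

```python
# Hypothetical (distance from Kilauea in km, age in Ma) pairs,
# standing in for the values students measure from the map.
volcanoes = {
    "Mauna Kea": (54.0, 0.6),
    "Haleakala": (182.0, 1.9),
    "East Molokai": (256.0, 2.6),
}

def plate_speed_cm_per_yr(distance_km, age_ma):
    # km -> cm is 1e5; Ma -> yr is 1e6, so 1 km/Ma == 0.1 cm/yr.
    return distance_km * 1e5 / (age_ma * 1e6)

speeds = {name: plate_speed_cm_per_yr(d, a) for name, (d, a) in volcanoes.items()}
```

Roughly consistent speeds across the chain are the expected signature of a plate moving over a stationary hot spot.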
Fundamentals of Physical Volcanology, Elisabeth A. Parfitt and Lionel Wilson, 2008, Blackwell Publishing, Malden, Massachusetts, Oxford, and Carlton, Victoria, xxi + 230 p., ISBN 978-63205443-5, USD 69.95. ... Introduction to Physical Volcanology is not a comprehensive book about volcanology. It avoids aspects of geochemistry, petrology, and stratigraphic associations that are significant components of the field.
Jonathan M. Lees - Dept. of Geological Sciences, UNC...
89.304 home page | Pahoehoe and scoria, Kilauea, Hawaii
Pahoehoe and scoria, Kilauea, Hawaii. Igneous and Metamorphic Petrology 89.504. Mafic dike cutting pegmatite and tonalite.
1886 Athenæum 14 Aug. 210/3 The Progress in volcanology...
Volcanology: volca'nology. f. volcano sb. + -(o)logy. = vulcanology. The science or scientific study of volcanoes. 1886 Athenæum 14 Aug. ... 1889 Pall Mall G. 23 Oct. 3/2 Students..will find comparatively little that is new to them, as volcanology, in this..easy-going volume. This definition was modified from the Oxford English Dictionary.
Project MUSE - Voices of Fire
Mokuna / Chapter 7 Aloha Kilauea, ka ‘Aina Aloha (Cherished Is Kilauea, the Beloved Land): Remembering, Reclaiming, Recovering, and Retelling—Pele and Hi‘iaka Mo‘olelo as Hawaiian Literary Nationalism. ‘O Puna lehua ‘ula i ka hapapa / I ‘ula i ka papa ka lehua o Puna / Ke kui ‘ia maila e na wahine o ka Lua e / Mai ka Lua au i hele mai nei, mai Kilauea / Aloha Kilauea, ka ‘aina a ke...
Baylor University || Department of Geosciences || Dr. Don...
Dr. Parker teaches advanced courses in Igneous Petrology, Analytical Geochemistry, and Volcanology, as well as undergraduate Petrology and introductory geology classes. Selected Publications. White, J.C., Parker, D.F., and Ren, M., 2009, The origin of trachyte and pantellerite from Pantelleria, Italy: Insights from thermodynamic modeling: Journal of Volcanology and Geothermal Research, v. 179, p. 33-55.
Journal of Volcanology and Geothermal Research. ... Volcanology and Seismology, a cover-to-cover translation of Vulkanologiya i Seismologiya. Zeitschrift fur Vulkanologie.
Ben Edwards | Volcanology and Igneous Petrology
Ben Edwards. Volcanology and Igneous Petrology. Search. Main menu.
Philippine Institute of Volcanology and Seismology...
GIS Analysis and cartographic presentation of a site selection problem. Rowena Bassi QUIAMBAO Senior Science Research Specialist Philippine Institute of Volcanology and Seismology (PHIVOLCS) C.P. Garcia Street, University of the Philippines Campus. Diliman, Quezon City 1101 Philippines. T: (632) 426 14 68 to 79 F: (632) 920 70 58, 927 45 24 email@example.com.
About Us - NIU - Geology and Environmental Geosciences
Igneous Petrology & Volcanology.
Little Ice Age | Volcanology
Volcanology. C.D. Miller started dating Dome Peak in Washington state in the 1960s by using ash layers (Grove, 1988). Between February and March of 1600, the Huaynaputina volcano erupted in Peru, releasing 19.2 cubic kilometers or more of sediment into the atmosphere, darkening the sun and moon from the South Pole to Greenland.
GEOL4553/5553 Dr. Glen S. Mattioli. Spring Semester 2009. Suggested Volcanology Research Topics. ... The role of volcanic eruptions in climate – ENSO The effect of volcanism on sea level. Volcanic hazards – how are they determined? Hazard mapping and the relationship between volcanologists and the public The pathology of volcano-induced injury and death.
The hawaiian islands – tectonic plate movement
Procedure 2: 1. Using the data from Table 1, place the age next to each of the following volcanoes on the Hawaii Map: Kauai, Oahu, Molokai, Maui and Hawaii (Kilauea Volcano).
QELP Data Set 073
Kilauea volcano on the Big Island is active today, and other centers on the Big Island and on Maui have erupted recently. As one progresses towards the west-northwest, the volcanoes of the Hawai'ian Islands get progressively older (see data table). ... Distances from the active Kilauea volcanic center (measured parallel to the Hawai'i-Emperor chain) and ages of each volcano and seamount are given in the data table (Clague and Dalrymple 1989).
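Because distance from Kilauea grows roughly linearly with age along the Hawai'i-Emperor chain, the slope of a distance-versus-age line estimates the mean plate speed. A sketch with hypothetical (distance, age) pairs standing in for the published data table:

```python
# Hypothetical (distance_km, age_Ma) pairs; the fitted slope is the
# mean plate speed in km/Ma (1 km/Ma == 0.1 cm/yr).
points = [(0.0, 0.0), (54.0, 0.6), (256.0, 2.6), (444.0, 5.1)]

def fit_slope(points):
    """Ordinary least-squares slope of distance (y) against age (x)."""
    n = len(points)
    mean_age = sum(a for _, a in points) / n
    mean_dist = sum(d for d, _ in points) / n
    num = sum((a - mean_age) * (d - mean_dist) for d, a in points)
    den = sum((a - mean_age) ** 2 for _, a in points)
    return num / den

speed_km_per_ma = fit_slope(points)
speed_cm_per_yr = speed_km_per_ma * 0.1
```

Fitting a single slope, rather than averaging per-volcano ratios, downweights the scatter that individual age determinations introduce.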
Hawai'i Plants and Animals
'Ohi'a trees grow in an amazing variety of situations. This one is on recent lava of Kilauea. 'Ohi'a is the most common tree in the remaining native forests. The endangered nene is the Hawai'i state bird. ... Two honu (Green Sea Turtles) at Kaloko-Honokohau National Historical Park in Kona. Kilauea Point National Wildlife Refuge on northern Kaua'i hosts sea birds (white dots) like the red-tailed tropic bird (lower center). 'Ua'u kani informational sign at Kilauea Point.
Historical Chapters. A Geophysical Laboratory is Born. Braving Kilauea. The Beginnings of X-ray Crystallography. Adventures on Katmai.
Kilauea summit (Halemaumau) webcam at HVO. Kilauea summit (Halemaumau) lava lake thermal webcam. Photo & Video Chronology. Big Island earthquakes.
Ingrid Ukstins Peate | Department of Earth & Environmental...
Volcanology, Igneous Petrology, Planetary Geology. My research involves utilizing a multi-disciplinary approach to understanding explosive volcanic systems - magma petrogenesis, eruption and emplacement mechanisms of both mafic and silicic pyroclastic deposits, and the holistic interpretation of volcanic stratigraphic sequences containing effusive, explosive and reworked volcanic material.
The Hawaiian Archipelago, by Isabella L. Bird : contents
Letter V. Volcano of Kilauea, Jan. 31. ... Crater House, Kilauea. June 4th.
AMWG Diagnostic Plots | SD.2006-Kilauea3_KS and png
SD.2006-Kilauea3_KS and png. Set 5 Chemistry - Tracer profile comparison with NOAA aircraft campaigns.
Keller, G.V., Grose, L.T., Murray, J.C., and Skokan, C.K., 1978, Results of an experimental drill at the summit of Kilauea Volcano: Journal of Volcanology and Geothermal Research, vol. 5, p. 345-385. Skokan, C.K., and Stoyer, C.H.,1978, A review of geophysical methods for groundwater exploration, on file at Colorado Water Congress.
Dr. Sawyer's webcams
Halema`uma`u, Kilauea Volcano, Hawaii Volcano Observatory. USA Home.
Favorite Links | Best volcanology/petrology links on the www
volcanology USGS. volcano. Smithsonian Institute.
Petrogenesis of Two New Eucrites from Northwest Africa. Petrologic and spatial analysis of volcanic ballistics from the 1790 explosive eruption of Kilauea, Hawai’i. David Colander.
Paleontology | Geology Student Research
Volcanology. ... Volcanology. Geology Links.
About the Professor
The teacher inspired me to study geology in college. I graduated with a B.S. in geology from CWRU in 2002. Subsequently, I went to graduate school, where I studied physical volcanology. In graduate school, you really can study anything. Surprisingly, the University at Buffalo in New York is a home for studying volcanoes. Not the place you'd expect to study the hottest landforms on Earth (they have more volcanology professors than any university in Hawaii).
Eisenhower National Clearinghouse
Also included are links to government agencies and research institutions related to volcanoes, a guide to basic volcano hazards, and even some volcano humor. Site visitors learn that the primary focus of volcanology is to provide scientific and educational information that can lead to hazard mitigation. (Author/JMJ).
University of Cincinnati News: Photos from 2002 UC Volcano...
First Days on the Big Island. Field Work around the Kilauea Caldera. ... Lava Flows at Sunrise Rising before dawn, Lisa Ventre captured this scene of the Kilauea volcano lava flows. The UC students will compare these flows with those left by older volcanoes.
Publications | Minerals in Aqueous Environments
Chemtob, S. M. and G. R. Rossman (2014) Timescales and mechanisms of formation of opaque silica coatings on fresh basalts at Kilauea Volcano, Hawai’i. Journal of Volcanology and Geothermal Research, 286, 41-54. Chemtob, S. M., G. R. Rossman, and J. F. Stebbins (2012) Natural hydrous amorphous silica: quantitation of network speciation and hydroxyl content by 29Si MAS NMR and vibrational spectroscopy.
Hawaii Pacific University
Andrew Greene teaches environmental science and geology at Hawaii Pacific University. His research focuses on volcanology and geochemistry of active and extinct volcanoes from hotspots, flood basalt provinces, and volcanic island arcs. ... Dr. Greene samples and studies active volcanism on Kilauea Volcano in Hawaii and participates in ocean drilling expeditions to study large submarine eruptions.
Kilauea Volcano = 1500 tons of SO2 daily. C. Some Important Characteristics. air pressure / barometer.
Isaacs Art Center
PrivacyPolicy. Our Commitment To Privacy. Your privacy is important to us. To better protect your privacy we provide this notice explaining our online information practices and the choices you can make about the way your information is collected and ...
Global Warming Misinformation - Volcanoes Emit More...
Emissions of CO2 by human activities, including fossil fuel burning, cement production, and gas flaring, amount to about 27 billion tonnes per year (30 billion tons) [ ( Marland, et al., 2006) - The reference gives the amount of released carbon (C), rather than CO2, through 2003.]. Human activities release more than 130 times the amount of CO2 emitted by volcanoes--the equivalent of more than 8,000 additional volcanoes like Kilauea (Kilauea emits about 3.3 million tonnes/year)!
Tuesday Nov 14
form terrestrial planets. 5. No, the stars would have died by now. What age would radiometric dating give for a chunk of recently solidified lava from Kilauea, an active volcano in Hawaii? 1. Zero. 2. The half life of potassium-40 (1.25 billion years).
Department of Earth Sciences: James H. Dieterich
Parsons, T., Toda, S., Stein, R.S., Barka, A., Dieterich, J.H ., 2000, Heightened odds of large earthquakes near Istanbul : An interaction based probability calculation , Science, 28, 661-665. Cayol, V., Dieterich, J.H., Okamura, A.T., Miklius, A ., 2000, High magma storage rates before the 1983 eruption of Kilauea, Hawaii, Science , 288 , 2343-2346. Dieterich, J.H. , Cayol, V., Okubo, P., 2000, The use of earthquake rate changes as a stress meter at Kilauea volcano, Nature , 408 , 457-460.
Bathymetry and active or dormant volcanoes
Rifts in Mauna Loa and Kilauea
Magma reservoir and conduits in Kilauea
Geology, ecology, and human dimensions of mount st
The eruption of Mount St. Helens on May 18, 1980, was a globally-transformative event for volcanology, ecosystem science, and human engagement with volcanoes. Public interest in the volcano, its ever-changing landscape, and the broader societal context tell us that, even after 30 years, this is a vibrant place for learning and teaching. The 1980 and subsequent geophysical events have taught us a great deal about many poorly-known processes and...
2000 American Geophysical Union (AGU) Poster Session. 2007 Journal of Volcanology & Geothermal Research (JVGR). 2008 SPIE: TIMS MSI Data and Lava Tube Thermal Modeling. 2009 SPIE: Code for "A nonparametric characterization of the geometry of spectra in hyperspace".
Patty and I went to Hawaii Volcanoes National Park, summer 2002. There's a great lava flow from Kilauea Volcano in the background. Flowing lava from the molten center of the earth is about as cool as it gets... I took this picture, of the same lava flow. I took it right off of the USGS's website on Kilauea. While doing my postdoc in Idaho I worked on a genetic engineering project designed to find ways to reduce the susceptibility of potato crops to soil nematode damage.
Graduate Programs - Geological and... - CSU, Chico
Highlighted Courses. Volcanology.
Volcano Fields - MicrobeWiki
According to the Kilauea Volcano Microbial Observatory (1), Carbon monoxide oxidation and methanotrophy are two major processes that shape the chemical makeup of a volcano field. Due to the high levels of sulfur gases found in volcanic chemical makeup, sulfur metabolism is utilized by microbes of the Thiobacillus, Thiosphaera, and other sulfur metabolizing bacteria.
(U). Highlight figures
Video: Eruption of Kilauea, 1959-1960 (explosive phases) 65). Video: Mount Shasta: Composite volcano. (J). Sedimentary Rocks.
Science - Spokane Community College
A four-year or graduate-level degree in the earth sciences can lead to careers as science educators at the K/12 and collegiate levels as well as researchers in a variety of subdisciplines such as volcanology, marine geology, paleontology, seismology, tectonics, mineralogy, hydrology, soils, engineering geology, and geologic hazards. ... February 16, 2010, 7 PM Historic and Cataclysmic Eruptions of Kilauea Volcano Dr. Don Swanson, United States Geologic Survey, Hawaii Volcanoes Observatory.
Hawai’i Volcanoes National Park Factsheet
· Kilauea and Mauna Loa are the two volcanoes on the property. · Kilauea has been actively erupting since 1983. · Mauna Loa last erupted in 1984 · Was once joined with Haleakala where they were both part of Hawaii National park. · The park is listed in poor condition · Invasive species are throughout the area · The park lacks the funding that they need · They average between 1.3 and 1.5 million visitors annually.
Leda Casey, Lecturer of Geology :: Indiana University Kokomo
She has also worked in the environmental consulting industry. While much of her academic career and professional experience has centered on the field of environmental geology, Leda has a wide range of interests in geology including paleontology, glaciology, volcanology, energy and mineral resources, as well as the scholarship of teaching and learning.
Volcano Vista High School: Home Page
The STEAM Journal
NASA Earth Observing System (EOS) IDS Volcanology Team. NASA Facts: Volcanoes and Global Climate Change. NGDC Natural Hazards Data.
Dr. Loretta Dickson
Philpotts, A. R. and Dickson, L. D., 2000, The formation of plagioclase chains during the convective transfer in basaltic magma: NATURE, vol. 406, p. 59-61. Dickson, L. D., 1997, Volcanology and geochemistry of Pliocene and Quaternary basalts on Citadel Mountain, Lunar Crater Volcanic Field, Pancake Range, Nevada [Master’s thesis]: University of Nevada, Las Vegas, Nevada, 146 p.
Volcano Observatories. Volcanology (Web Notes). Dunyadaki gercek zamanl? deprem aktiviteleri. · Near-Real Time Earthquake Bulletin (USGS). ... · Volcano Research Center (VRC-ERI, Univ. Tokyo). · Volcanology (Web Notes). · Volcano Monitoring Techniques (VolcanoWorld).
Chapter 10 Figures
Total Chlorine Estimates for Past and Future. Figure 10.12. Kilauea Crater, Hawaii.
Hawaii Volcano Field study
Kilauea Crater, the Halemaumau Fire Pit, Sulfur Banks, Puu Oo Oo, and Steam Vents. Hike along the Crater, to the present eruption site and into lava tubes and a tour of the Geological Survey’s Volcano Observatory will originate from our base in the Park. 2. A drive down the Chain of Craters Road to observe recent lava beach formation and shoreline erosion with sea arches, sea stacks and black, green and white sand beaches.
Kilauea Facts The Kilauea volcano in comparison to the other volcanoes located on the big island of Hawaii, is the youngest with the oldest rocks dating back to approximately 23,000 years ago, of the volcano’s first eruption could be dated back to 300,000 to 600,000 years ago. ... Chirico, Giuseppe D., Massimiliano Favalli, Paolo Papale, Enzo Boschi, Maria Teresa Pareschi, and Arthur Mamou-Mani. "Lava Flow Hazard at Nyiragongo Volcano, DRC." Bulletin of Volcanology 71.4 (2009): 375-87.
Toba, Sumatra, Indonesia
With the intense media interest in the volcanic activity in Indonesia, this is a reminder that the contact details of the responsible warning authority for volcanic crises can be found at the web site of the World Organization of Volcano Observatories, a commission of IAVCEI. The web address is http://www.wovo.org/. Additionally, Last year, the Directorate of Volcanology and Geological Hazard Mitigation (informally, the Volcanological Survey of Indonesia)...
Design of seismic acquisition system for volcanology
Design of seismic acquisition system for volcanology. Normandino Carreras27, Antoni Manuel Lazaro27, Spartacus Gomariz27.
ARSC Resources | ARSC
Areas of research include ice, ocean, and atmospheric coupled modeling; regional climate modeling; global climate change; permafrost, hydrology, and arctic engineering; magnetospheric, ionospheric, and upper atmospheric physics; volcanology and geology; petroleum and mineral engineering; as well as arctic biology.
16th IVS Galapagos | GEO1
Ecuador Geology and Volcanology Lecture Part 2.
Introduction to Volcanology.
Genevieve Robert | Geology | Bates College
Dr. Robert is an experimental volcanologist. She studies the physical and chemical properties of volcanic materials and how they influence the eruptive behavior of volcanoes. Genevieve measures the viscosity of lava she creates in the lab by melting rocks collected from both active and ancient volcanoes. Genevieve’s teaching interests include mineralogy, igneous and metamorphic petrology, volcanology, planetary geology, and magmatic ore deposits.
LINKS. Thermal. JPL Team Members: TIMS and ASTER EOS Volcanology Studies. ... About ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). NASA EOS IDS Volcanology.
Although we were never able to get close enough to view the active flows we did see an spectacular array of volcanic features including pristine aa and pahoehoe flows, tree casts and spatter ramparts, Pele's hair and reticulite, cinder cones and littoral cones, and ultramafic xenoliths. The students also participated in gravity surveys around Kilauea caldera.
Apache is functioning normally.
My interests include field-based igneous petrology, geochemistry, volcanology, geochronology and geodynamics. My research specialty concerns the mantle origin, eruptive history and geochemical evolution of alkaline volcanism associated with continental rift zones and hotspots in the southwestern Pacific (Antarctica and New Zealand).
Matthew E. Brueseke Associate Professor Department of...
My geologic interests are broad, but primarily lie in the fields of igneous petrology, volcanology, and tectonics. These interests are focused on understanding: how magmas form and are modified; relationships between volcanism and tectonism; temporal, spatial, and mass relationships between magmatism and precious metal mineralization; physical volcanology of silicic magmatic products and their eruptive systems; volcanic stratigraphy, including tephrostratigraphy.
Volcanic Hazards: Tephra, including volcanic ash
Kilauea Volcano, Hawai`i; lava fountain. Mount St. Helens Tephra: block. ... Kilauea Tephra: Pele's hair. Tephra consists of a wide range of rock particles (size, shape, density, and chemical composition), including combinations of pumice, glass shards, crystals from different types of minerals, and shattered rocks of all types (igneous, sedimentary, and metamorphic).
Spring 2016. 2115 Volcanology (Solid Earth) 2625 Ocean Acidification (Ocean) 2620 Topics in Gulf of Maine Oceanography (2000-level non-lab) 3020 Earth Climate History (3000-level Senior Seminar) 3115 Mineral Science (3000-level Research Experience) 3140 Tectonics and Climate (3000-level Senior Seminar).
Brian K. Hornbuckle
Crack in surface of Kilauea Iki Crater, Hawai'i Volcanoes National Park. August, 2010.
Journal of Volcanology and Geothermal Research
Approximately, ten E–W trending grabens currently exist, which include the Edremit, Bergama, O.I. Ece et al. / Journal of Volcanology and Geothermal Research 255 (2013) 57–78. 59. Gokova, Buyuk Menderes, Kucuk Menderes, Simav, and Gediz grabens.
Lahars: Pyroclastic flows: Tephra: 1. Cinder cone volcano: cone-shaped with steep sides. made from explosive erupti Examples: Paricutin in Mexico. 2. Shield volcano: broad, flat dome-shaped volcanoes. built up of ma Examples: Mauna Loa and Kilauea in Hawaii - the world’s largest shield volcano is Mauna Loa (10.5 miles high).
The Whitman College Geology Home Page
Kirsten P. Nicolaysen, Ph.D.; Associate Professor, Igneous Petrology, Mineralogy and Volcanology; (509) 527-4934. Liz Philips, Geology Technician; (509) 527-5696. Kevin R. Pogue, Ph.D.; Professor, Department Chair, Structural Geology; (509) 527-5955.
Environment & Sustainability | School of the Earth, Ocean and...
Petrology-Geochemistry-Volcanology Laboratory. Tectonics and Sedimentation Laboratory. Donation Opportunities.
Volcanism is a Planetary Thermal regulatory Mechanism
Mauna Loa and Kilauea show shield building stage, subaerial substage (2c) Kilauea Caldera from the air Recent eruptions of Kilauea A’a lava flows (A’a is blocky, rough, jagged, with a spiny surface) Pahoehoe Flows (smooth, billowy, ropy surface) Maps and Photos of Recent Kilauea Eruptions. 1983 – present Pu’u O’o eruptions are most voluminous from the East Rift of Kilauea in 500 years Mauna Kea represents the Capping Stage (3) Review Volcanic History The Hawaiian Islands Haleakala Volcano – East Maui...
IU Northwest: Professor Kristin T. Huysken
Kristin T. Huysken Assistant Professor of Geology Classes — Website Marram 236. (219) 980-6739. Education: Ph.D. Michigan State University, 1996 Igneous Petrology and Volcanology M.S. Michigan State University, 1993 Igneous Petrology and Volcanology B.S. Central Michigan University, 1990 Geology.
Geology | Math, Science, & Engineering
Take an SMCC Geology class! Geology is more than just rocks—it’s the study of the earth and the materials that compose it, along with its structures, processes, and organisms. It also includes topics such as planetary geology, geophysics, geochemistry, petroleum geology, volcanology, oceanography, hydrogeology, and meteorology. At SMCC we offer courses in Physical Geology, Historical Geology, and Geologic Disasters and the Environment.
Earth's highest volcanoes are shield volcanoes. Most (not all) form on the ocean floor, forming volcanic islands or submerged volcanic peaks (seamounts). Example = Hawaii (Mauna Loa, Kilauea) , Iceland, Galapagos Islands. 3. Cinder Cone (Fig 9.16) - - Forms as a result of eruption and build-up of mostly loose, cinder-sized pyroclastic material from a gas-rich basaltic magma.
In 1808 he settled in Paris and published the findings of his New World expedition in Voyage de Humboldt et Bonpland (23 vol., 1805–1834), often cited by the title of Part I, Voyage aux regions equinoxiales du nouveau continent. Humboldt established the use of isotherms in map making, studied the origin and course of tropical storms, the increase in magnetic intensity from the equator toward the poles, and volcanology.
Paleoceanography at GSO
Paleoceanography/paleoclimatology at GSO stretches from the high latitude North Atlantic to the equatorial Pacific to ice cores from Antarctica. GSO faculty research interests include: micropaleontology, stratigraphy, sedimentary mineralogy and chemistry, and environmental magnetism. In addition, the volcanology group studies volcanogenic sediments in the deep sea. For additional information visit the Geological Oceanography page.
[iris-bulk] opportunity to request SAFOD Phase III physical...
Previous message: [iris-bulk] (Job) Professor, Volcanology - Earth Observatory of Singapore.
Volcanoes and Earthquakes : Natural Hazards
Volcanoes and Earthquakes. August 19, 2009 Plume from Kilauea's Halema'uma'u Crater.
Volcano Web Site, Educational Resources for K-16
Answers a lot of frequently asked volcano questions. Relates Dante's Peak (the movie) with the real Dante's Peak and how the movie relates to the job of a volcanologist. ... Hood, Kilauea, and Yellowstone National Park and poses questions about future potential volcanic activity. Background information is provided under the subheadings Volcanology, Analyzing Volcanoes, Living with Volcanoes, and Volcanoes and the Earth.
Geology 496: 2013 Hawaii Field Course Projects
Njos. Physical Volcanology of Basaltic Lava. Lisa. Romano. ... Kilynn. Sandberg. Relationship between the Koae Fault and Kilauea Volcano. Ashley. Steffen.
Kilauea Volcano, Hawai`i. 1960. Lava flows.
Earth and Planetary Science Letters, 415, 90-99. Elliot, D. H., & Fleming, T. H. (2008). Physical volcanology and geological relationships of the Jurassic Ferrar large igneous province, Antarctica. Journal of Volcanology and Geothermal Research, 172(1), 20-37.
TYPE OF FAULTING: right-lateral s California’s largest ever! San Francisco, 190 San Francisco Aftermath The Geography Volcano Cla active: has erupted in recorded history. (Kilauea, Hi, Mt. Etna, Italy, Mt. Lassen) dormant: has not been seen to erupt in history, but shows ev Volcanoes: Composite cones (stratovolcano) pointed, steep-sided, tall volcanoes “Composite”: layers of pyroclastics and lava (mostly felsic) Explosive and dangerous; found in subduction zones Landforms
Usually stratovolcanoes. Examples include: Mount Pinatubo (Philippines); Mount St. Helens (Washington State, USA). Predicting volcanic eruptions is one of the chief concerns of volcanology. When a volcano erupts, little can be done to prevent property damage in the surrounding area. But many lives can be saved if people in the area are evacuated before the eruption begins.
EWU | Faculty and Staff
Interests: mineralogy, igneous and metamorphic petrology, volcanology.
Geology Central - Geology Links: Volcanoes
Lesson 1: Geographical Context - Mount Rainier An...
The International Association of Volcanology and Chemistry of the Earth’s Interior designated 16 volcanoes to be studied during the International Decade for Natural Disaster Reduction. Which of the following is not a decade volcano? a. Popocatepetl, Mexico b. Mauna Loa, USA c. Santorini, Greece d. Sakurajima, Japan.
Jeff Noblett | PhD., Professor of Geology
Volcanology, SONY DSC. Volcanoes New Mexico intrusive in rhyolite and basalt. ... His primary field of research is Igneous and Metamorphic Petrology with a focus on Volcanology. His early work was on the Clarno Formation, a package of Eocene volcanics in Oregon, which developed above a subduction zone. Clarnoflows on the John Day River.
Ecology of Hawaii, Itinerary
Views from this perch are the most spectacular in all Hawaii Nei. Jan 5: We will drive to Kilauea National Wildlife Refuge and join a refuge staff member for a behind-the-scenes tour of the excellent habitat restoration work being performed here to promote the breeding reintroduction of many of Hawaii's sea birds. We will have lunch, then travel to the Maha'ulepu coastline for an afternoon of swimming and snorkeling.
Fukushima disaster cleanup
The Fukushima disaster cleanup is an ongoing attempt to limit radioactive contamination from the three nuclear reactors involved in the Fukushima Daiichi nuclear disaster which followed the earthquake and tsunami on 11 March 2011. The affected reactors were adjacent to one another and accident management was made much more difficult because of the number of simultaneous hazards concentrated in a small area. Failure of emergency power following the tsunami resulted in loss of coolant from each reactor, hydrogen explosions damaging the reactor buildings, and water draining from open-air spent fuel pools. Plant workers were put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units.
Automated cooling systems were installed within 3 months of the accident. A fabric cover was built to protect the buildings from storms and heavy rainfall. New detectors were installed at the plant to track emissions of xenon gas. Filters were installed to prevent contaminants from escaping the plant into the surrounding area or atmosphere. Cement has been laid on the seabed to keep contaminants from accidentally entering the ocean.
Initial reports indicated that no strontium had been released into the area by the accident. In September 2013, however, the level of strontium-90 detected in a drainage ditch near a water storage tank, from which around 300 tons of water had leaked, was reported to have exceeded the threshold set by the government.
Decommissioning the plant is estimated to cost tens of billions of dollars and to last 30–40 years. Initial fears of deep soil contamination have eased: the contamination proved less serious than expected, and current crops are safe for human consumption. In July and August 2013, however, it was discovered that radioactive groundwater had been leaking into the sea.
Initially, TEPCO did not put forward a strategy to regain control of the situation in the reactors. Helmut Hirsch, a German physicist and nuclear expert, said "they are improvising with tools that were not intended for this type of situation". However, on 17 April 2011, TEPCO put forward the broad basis of a plan which included: (1) reaching "cold shutdown in about six to nine months;" (2) "restoring stable cooling to the reactors and spent fuel pools in about three months;" (3) putting "special covers" on Units 1, 3, and 4 starting in June; (4) installing "additional storage containers for the radioactive water that has been pooling in the turbine basements and outside trenches;" (5) using radio-controlled equipment to clean up the site; and (6) using silt fences to limit ocean contamination. Previously, TEPCO had publicly committed to installing new emergency generators 20 m above sea level, twice the height of the generators destroyed by the 11 March tsunami. Toshiba and Hitachi had both proposed plans for shuttering the facility.
Cold shutdown was accomplished on December 11, 2011. From that point cooling was no longer required, but maintenance was still required to control large water leaks. Long term plans for Units 5 and 6 have not been announced, "but they too may need to be decommissioned".
On 5 May 2011, workers were able to enter reactor buildings for the first time since the accident. The workers began to install air filtration systems to clean air of radioactive materials to allow additional workers to install water cooling systems.
Scope of cleanup
Japanese reactor maker Toshiba said it could decommission the earthquake-damaged Fukushima nuclear power plant in about 10 years, a third quicker than the American Three Mile Island plant. As a comparison, at Three Mile Island the vessel of the partially melted core was first opened 11 years after the accident, with cleanup activities taking several more years.
TEPCO announced it restored the automated cooling systems in the damaged reactors in about three months, and had the reactors put into cold shutdown status in six months.
First estimates included costs as high as ¥1 trillion (US$13 billion), as cited by the Japanese Prime Minister at the time, Yoshihiko Noda (野田 佳彦). However, this estimate was made before the scope of the problem was known, and the contamination turned out to be less severe than feared: no strontium is detectable in the soil, and though the crops grown in the year of the disaster were contaminated, the crops the area produces now are safe for human consumption.
Japan's economy, trade, and industry ministry estimated in 2016 the total cost of dealing with the Fukushima disaster at ¥21.5 trillion (US$187 billion), more than twice the previous estimate of ¥11 trillion (US$96 billion). Compensation for victims of the disaster was expected to rise from ¥5.4 trillion (US$47 billion) to ¥7.9 trillion (US$69 billion), decontamination costs from ¥2.5 trillion (US$22 billion) to ¥4 trillion (US$35 billion), costs for interim storage of radioactive material from ¥1.1 trillion (US$10 billion) to ¥1.6 trillion (US$14 billion), and costs of decommissioning the reactors from ¥2 trillion (US$17 billion) to ¥8 trillion (US$69 billion).
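As a quick arithmetic check, the revised component figures quoted above do add up to the ministry's ¥21.5 trillion headline total; the sketch below simply verifies the sum (figures in trillion yen, taken from the paragraph above):

```python
# Cross-check of the ministry's revised 2016 cost estimate.
# All component figures (trillion yen) come from the text above.
revised = {
    "compensation": 7.9,
    "decontamination": 4.0,
    "interim_storage": 1.6,
    "decommissioning": 8.0,
}
total = sum(revised.values())
print(f"Revised total: ¥{total:.1f} trillion")  # Revised total: ¥21.5 trillion
```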
Working conditions at the plant
There has been concern that the plant would be dangerous for workers. Two workers suffered skin burns from radiation, but no serious injuries or fatalities have been documented to have been caused by radiation at Fukushima Dai-ichi.
Unskilled workforce systematically employed at Japanese nuclear power plants
The disaster at Fukushima revealed the systematic use by Japanese nuclear power plants of unskilled laborers on short contracts. These workers are paid by the day and hired day-to-day through questionable agencies and firms. Data provided by NISA indicate that 80 percent of the workforce at commercial nuclear power plants is hired on temporary contracts; at Fukushima the figure was even higher, at 89 percent. The practice had gone on for decades: unemployed people gathered in parks in the morning and were picked up and taken to the nuclear power plants, where they received contracts of a few months to do the most dangerous unskilled labor. Once the work was finished, these people were expected to disappear.
Workers in dorms exposed to radiation
Two shelters for people working at the Fukushima site were not designated as radiation management zones, although radiation levels inside them exceeded the legal limits. As a consequence, the workers housed there were not paid the extra "danger allowance" given to workers in designated radiation management zones. The shelters had been constructed by the Toshiba Corporation and the Kajima Corporation about 2 kilometers west of the damaged reactors, just outside the plant compound but still quite near reactors 1 to 4, after the shelters inside the compound became overcrowded. On 7 October 2011, radiation levels in the Toshiba building were between 2 and 16 microsieverts per hour; in the Kajima dorm they were 2 to 8.5 microsieverts per hour. The Industrial Safety and Health Law on the prevention of health damage through ionizing radiation caps the accumulated dose in radiation management zones at 1.3 millisieverts over three months, corresponding to a maximum of 2.6 microsieverts per hour. Radiation levels in both dorms were higher, though these doses are still well below the level that affects human health. Under the law, the "business operator" is responsible for "managing radiation dosage and the prevention of contamination". Toshiba and Kajima said that TEPCO was responsible, but a TEPCO official commented that "from the perspective of protecting workers from radiation, the business operators (that constructed the shelters) are managing radiation dosage and the prevention of contamination", thereby suggesting that Toshiba and Kajima had to take care of the zone management.
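The 2.6 µSv/h figure follows from the 1.3 mSv quarterly cap if one assumes an occupancy of 500 working hours per three-month period (roughly 40 hours a week for 12.5 weeks). That occupancy figure is an inference, not stated in the text; under that assumption the conversion is:

```python
# Deriving the 2.6 µSv/h maximum from the statutory quarterly limit.
# Assumption (not in the source text): 500 working hours per quarter,
# roughly 40 h/week over 12.5 weeks.
quarterly_limit_usv = 1.3 * 1000   # 1.3 mSv expressed in microsieverts
working_hours_per_quarter = 500
hourly_limit = quarterly_limit_usv / working_hours_per_quarter
print(f"Maximum dose rate: {hourly_limit:.1f} µSv/h")  # Maximum dose rate: 2.6 µSv/h
```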
Preventing hydrogen explosions
On 26 September 2011, after the discovery of hydrogen in a pipe leading to the containment vessel of reactor no. 1, NISA instructed TEPCO to check whether hydrogen was building up in reactors no. 2 and 3 as well. TEPCO announced that hydrogen would be measured in reactor no. 1 before any nitrogen was injected to prevent explosions; if hydrogen were detected at the other reactors, nitrogen injections would follow there too.
After hydrogen concentrations between 61 and 63 percent were found in pipes of the reactor no. 1 containment, nitrogen injections began on 8 October. On 10 October TEPCO announced that the concentrations were by then low enough to prevent explosions, and that even if they rose again they would not exceed 4 percent, the lowest level posing an explosion risk. On the evening of 9 October, two holes were drilled into the pipe to install a filter for radioactive substances inside the containment vessel, two weeks behind the schedule TEPCO had set for itself. The filter was to be put into operation as soon as possible.
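The go/no-go logic implied above is a simple threshold test: hydrogen in air becomes explosive at roughly 4% by volume (its lower explosive limit), so any measured concentration is compared against that figure. A minimal sketch of that check, using the values quoted in the text:

```python
# Threshold check against hydrogen's lower explosive limit (~4% by volume),
# the figure quoted in the text as the lowest level posing an explosion risk.
H2_LOWER_EXPLOSIVE_LIMIT = 4.0  # percent by volume

def explosion_risk(h2_percent: float) -> bool:
    """Return True if the measured H2 concentration poses an explosion risk."""
    return h2_percent >= H2_LOWER_EXPLOSIVE_LIMIT

# Values from the text: 61-63% was found in the pipes before nitrogen injection.
print(explosion_risk(63.0))  # True  -> nitrogen injection needed
print(explosion_risk(1.5))   # False -> safely below the 4% threshold
```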
Investigations inside the reactors
On 19 January 2012 the interior of the primary containment vessel of reactor 2 was inspected with an industrial endoscope. The device, 8.5 millimeters in diameter, was equipped with a 360-degree camera and a thermometer to measure the temperature of the air and cooling water inside, in an attempt to calibrate the existing temperature instrumentation, which could have an error margin of 20 degrees. The endoscope was inserted through a hole 2.5 meters above the floor on which the vessel sits; the whole procedure lasted 70 minutes. The photos showed parts of the walls and pipes inside the containment vessel, but they were unclear and blurred, most likely because of water vapor and the radiation inside. According to TEPCO the photos showed no serious damage. The temperature measured inside was 44.7 degrees Celsius, not far from the 42.6 degrees measured outside the vessel.
Inspections of the suppression chambers of reactors no. 2 and 3
On 14 March 2012, for the first time since the accidents, six workers were sent into the basements of reactors no. 2 and 3 to examine the suppression chambers. Behind the door of the suppression chamber in the no. 2 building, 160 millisieverts per hour was measured. The door to the suppression chamber in the no. 3 reactor building was damaged and could not be opened; in front of it, the radiation level was 75 millisieverts per hour. Access to the suppression chambers is vital for repairing the containment structures before the reactors can be decommissioned. Because of the high radiation levels, TEPCO said this work should be done with robots, since these places could be hostile to humans. TEPCO released video footage of the work at the suppression chambers of the no. 2 and 3 reactors.
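To put those dose rates in perspective, one can estimate how long a worker could remain at each spot before exhausting Japan's temporarily raised emergency dose limit of 250 mSv for Fukushima workers. The 250 mSv figure is brought in here for illustration only; it does not appear in the text above:

```python
# Rough stay-time estimate for the dose rates quoted in the text.
# Assumption (not in the source text): Japan's temporarily raised
# emergency dose limit of 250 mSv for Fukushima plant workers.
EMERGENCY_LIMIT_MSV = 250.0

def max_stay_hours(dose_rate_msv_per_h: float) -> float:
    """Hours until the emergency dose limit would be fully exhausted."""
    return EMERGENCY_LIMIT_MSV / dose_rate_msv_per_h

print(f"{max_stay_hours(160):.2f} h at the no. 2 chamber door")  # 1.56 h
print(f"{max_stay_hours(75):.2f} h at the no. 3 chamber door")   # 3.33 h
```

In practice workers would be rotated out long before reaching the limit, so these figures are upper bounds, not work schedules.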
On 26 and 27 March 2012 the inside of the containment vessel of reactor 2 was inspected with a 20-meter-long endoscope, which carried a dosimeter into the vessel to measure the radiation levels inside. At the bottom of the primary containment structure, 60 centimeters of water was found, instead of the 3 meters expected at that place. The radiation level measured was 72.9 sieverts per hour; because of this, the endoscope could only function for a few hours there. For reactors no. 1 and 3, no endoscopic survey was planned at that time, because the radiation levels at those places were too high for humans.
Management of contaminated water
Continued cooling of the melted reactor cores is required in order to remove excess heat. Because the integrity of the reactor vessels was damaged, radioactive water accumulated inside the reactor and turbine buildings. To treat this contaminated water, TEPCO installed radioactive water treatment systems.
The Japanese government had initially requested the assistance of the Russian floating water decontamination plant Landysh to process the radioactive water from the damaged reactors, but negotiations with the Russian government proceeded extremely slowly, and it is unclear whether the plant was ever sent to Fukushima. Landysh was built by Russia with funding from Japan to process liquid wastes produced during the decommissioning of nuclear submarines.
As of early September 2011 the operating rate of the filtering system exceeded the target of 90 percent for the first time. 85,000 tons of water had been decontaminated by 11 September, with over 100,000 tons of wastewater remaining to be treated at the time. However, the nuclear waste generated by the filters had already filled almost 70 percent of the 800 cubic meters of storage space then available. TEPCO had to work out how to cool the reactors with less than 15 tons of water per day in order to keep the growth of wastewater and nuclear waste at manageable levels.
Installation of circulating water cooling system
In order to remove the decay heat of the severely damaged cores of Units 1–3, TEPCO injected cooling water into the reactors. As the reactors appear to have holes around the bottom, the water dissolved the water-soluble fission products and then accumulated in the basement of the turbine building (the adjacent diagram, #2) through leaks from the water-injected reactor buildings (#1). Since the accumulated radioactive water was a risk, TEPCO tried to transfer it.
As the accumulated water in the basement (see the tunnel below diagram #2) of the turbine buildings of Units 2 and 3 was radioactive, TEPCO needed to remove it. They had initially planned to pump the water to the condenser (the large black vessel in diagram #1), but had to abandon that plan after discovering that the condensers of both units were already full of water. Pumps capable of processing 10–25 tons of water per hour were used to transfer condenser water into other storage tanks, freeing up condenser storage for the water in the basements. However, since both the storage tanks and the condensers were nearly full, TEPCO also considered using floating tanker ships as a temporary storage location for the radioactive water. Regardless of the availability of offshore storage for radioactively contaminated water, TEPCO decided on 5 April to discharge 11,500 tons of its least contaminated water (which still contained approximately 100 times the legal limit for radioactivity) into the sea in order to free up storage space. At the same time, on 5 April, TEPCO began pumping water from the condensers of Units 1–3 to their respective condensate storage tanks to make room for the trench water (see below).
Removal of accumulated water in seawater piping trench
The Fukushima Daiichi NPS has several seawater piping trenches, originally designed to house pipes and cables running from the Unit 2–4 turbine buildings to the seaside; the trenches do not connect directly to the sea. Inside the trenches, radioactive contaminated water has been accumulating since the accident. Due to the risk of soil or ocean contamination from these trenches, TEPCO has been trying to remove the accumulated water by pumping it back into the turbine buildings, as well as backfilling the trenches to reduce or prevent further incursion of contaminated water.
On 5 July 2013, TEPCO found 9 kBq/L of cesium-134 and 18 kBq/L of cesium-137 in a sample taken from a monitoring well close to the coastline. Compared with samples taken three days earlier, the levels were 90 times higher; the cause was unknown. The monitoring well is situated close to another monitoring well that had previously leaked radioactive water into the sea in April 2011. A sample of groundwater from another well, situated about 100 meters south of the first, showed that the radioactivity had risen 18-fold over the course of four days, with 1.7 kBq/L of strontium and other radioactive substances. A day later the readings in the first well had risen further, to 11 kBq/L of cesium-134 and 22 kBq/L of cesium-137. TEPCO did not know the reasons for the higher readings, but monitoring was to be intensified.
More than a month after the groundwater contamination was discovered, TEPCO started to contain the radioactive groundwater. The company assumed that the radioactivity had escaped early in the disaster in 2011, but NRA experts had serious doubts about this assumption: according to them, other sources could not be excluded. Numerous pipes ran everywhere on the reactor grounds to cool the reactors and decontaminate the water used, and leaks could be anywhere. TEPCO's solution redirected the groundwater flows, which could have spread the radioactive contamination further. Besides that, TEPCO had plans for pumping up groundwater. At that time the turbine buildings of Units 2 and 3 contained 5,000 and 6,000 cubic meters of radioactive water respectively; with wells in contact with the turbine buildings, this could spread the radioactivity into the ground. The NRA announced that it would form a task force to find the leaks and to block the flow of groundwater to the coastline, because it suspected that the groundwater was leaking into the sea.
Tritiated water treatment
In January 2014 it was made public that a total of 875 terabecquerels (TBq) of tritium was present on the site of Fukushima Daiichi, and that it would take 59 years to safely discharge this amount of tritium to the sea. According to data that TEPCO submitted to the tritium task force (of the Ministry of Economy, Trade and Industry), the 400,000 tonnes of contaminated water stored in tanks at the site contained a total of about 817 TBq of tritium; a further 58 TBq was contained in water outside the tanks, e.g. in the reactor buildings. According to further data submitted by TEPCO, the amount of tritium in the contaminated water is increasing by approximately 230 TBq per year. This followed a report made public in December 2013 stating that "Tritium could be separated theoretically, but there is no practical separation technology on an industrial scale."
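Any such discharge schedule depends on the annual release limit assumed, a figure the article does not give. As a rough sketch (the 22 TBq/yr limit below is a hypothetical value chosen only for illustration, not a figure from the source), the duration can be estimated from the inventory, the limit, and tritium's roughly 12.3-year half-life; natural decay during discharge shortens the schedule compared with a naive division:

```python
import math

def years_to_discharge(inventory_tbq, annual_limit_tbq, half_life_yr=12.32):
    """Years until a tritium inventory reaches zero, assuming discharge at a
    constant annual rate D while the inventory also decays naturally.
    Solves dN/dt = -lam*N - D  =>  t = (1/lam) * ln(1 + lam*N0/D)."""
    lam = math.log(2) / half_life_yr        # decay constant, 1/yr
    return math.log(1 + lam * inventory_tbq / annual_limit_tbq) / lam

# 875 TBq on site (from the article); 22 TBq/yr is a hypothetical limit.
naive = 875 / 22                            # ignoring decay: ~39.8 years
with_decay = years_to_discharge(875, 22)    # ~20.9 years
```

Under these assumptions decay roughly halves the required time, which illustrates why the half-life matters as much as the release limit in such estimates.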
Timeline of contaminated water treatment
- 2011
- On March 27
- TEPCO announced that radioactive water had accumulated in the basement of the Unit 2 turbine building.
- On March 28
- The Japanese Nuclear Safety Commission advised TEPCO to take all possible measures to prevent the water accumulated in the Unit 2 turbine building from leaking into the ground and the sea (hereinafter called "the JNSC advice").
- On April 2
- TEPCO announced the outflow of fluid containing radioactive materials to the ocean from areas near the intake channel of Unit 2. The source was a 20 cm crack in the concrete sidewall of the pit, which appeared to have been created by the earthquake. TEPCO attempted to inject fresh concrete, polymeric water absorbent, sawdust, and shredded newspapers into the crack; however, this approach failed to slow the leak. After an investigation of the water flow, TEPCO began to inject sodium silicate on April 5th, and the outflow was stopped on April 6th. The total amount and radioactivity of the outflow from the crack were estimated at approximately 520 m³ and approximately 4.7 PBq, respectively.
- On April 17
- TEPCO announced the Roadmap towards Restoration from the Accident at Fukushima Daiichi Nuclear Power Station.
- On April 27
- In order to prevent the outflow of the highly radioactive water in the turbine building of Unit 2, the water had been transferred to the Centralized Radiation Waste Treatment Facility since April 19th. TEPCO planned to install facilities for processing the stored water and to reuse the treated water by injecting it into the reactors.
- On May 11
- TEPCO investigated possible leakage of radioactive water to the outside from around the intake canal of Unit 3 in response to the employees' report of water flowing into the pit via power cable pipe lines.
- On May 23
- The Nuclear and Industrial Safety Agency began to use the term "contaminated water" for water with a high concentration of radioactive materials.
- On June 17
- TEPCO began the operation of the cesium adsorption apparatus (Kurion) and the decontamination apparatus (AREVA).
- On August 17
- TEPCO began the (test) operation of SARRY, which is the second cesium adsorption apparatus (TOSHIBA).
- On August 28
- Two TEPCO workers at the plant were accidentally exposed to radiation while replacing parts of the contaminated water processing system. The following Wednesday, 31 August, two other workers were sprayed with highly contaminated water when it splashed from a container with a leaking valve that did not close; they were exposed to 0.16 and 0.14 millisieverts. The latter worker was wearing a raincoat. No immediate symptoms were found.
- On December 21
- TEPCO announced Mid-and-long-Term Roadmap towards the Decommissioning of Fukushima Daiichi Nuclear Power Units 1-4.
- 2012
- On April 5
- A leaking pipe was found at 1:00 AM. The leakage stopped an hour after the valves were closed. 12,000 liters of water with high levels of radioactive strontium were lost; according to TEPCO, much of this water escaped through a nearby sewer system into the ocean. Investigations were to reveal how much water was lost into the ocean and how the joint could fail. A similar leakage at the same facility had happened on 26 March 2012.
- 2013 (the year the contaminated water became a social problem)
- On March 30
- TEPCO began the operation of ALPS, which is the multi-nuclide removal equipment.
- On July 22
- In announcing the situation of the seawater and groundwater, TEPCO admitted that contaminated groundwater had been leaking into the ocean since March 2011.
- On July 27
- TEPCO announced that extremely high levels of tritium and cesium had been found in a pit containing about 5,000 cubic meters of water on the sea side of the Unit 2 reactor building: 8.7 MBq/liter of tritium and 2.35 GBq/liter of cesium were measured. The NRA was concerned that leaks from the pit could release high tritium levels into the sea and that water was still flowing from the reactor into the turbine building and into the pit. However, TEPCO believed that this pollution dated from the first days of the accident in 2011 and had stayed in place. Nevertheless, TEPCO would monitor the site for leaks and seal the soil around the pit.
- On May 30
- The Government of Japan decided on a policy to prevent groundwater from flowing into the reactor buildings. A frozen soil wall (land-side impermeable wall) was scheduled to be introduced to block the flow of groundwater and prevent it from mixing with contaminated water.
- On August 19
- Contaminated water leakage from a flange-type tank was found in the H4 area. The incident was eventually given a provisional rating of Level 3 on the eight-level INES scale by the NRA. In response, the NRA recommended that TEPCO replace the flange-type tanks, which are prone to leaking, with welded-type tanks.
- On August 28
- A subcontractor employee was contaminated on his face, head and chest while transferring water from the damaged tank. After decontamination, 5,000 cpm was still measured on his head; the readings from before decontamination were not released. The man was released, but ordered to undergo a whole-body radiation count later.
- On September 2
- It was reported that radiation near another tank had been measured at 1.8 Sv/h, 18 times higher than previously thought. TEPCO had initially recorded radiation of about 100 mSv/h, but later admitted that this was because the equipment being used could only read measurements up to that level; the latest reading came from a more advanced device capable of measuring higher levels. The buildup of water at the site was close to becoming unmanageable, and experts said that TEPCO would soon be left with no choice but to release the water into the ocean or evaporate it.
- On September 3
- The Nuclear Emergency Response Headquarters published "the Government’s Decision on Addressing the Contaminated Water Issue at TEPCO’s Fukushima Daiichi NPS".
- On September 9
- TEPCO started cleaning the draining ditch on the north side of the leaking tank, one day before Tokyo was selected as host of the 2020 Olympic Games. Radiation monitoring data were reportedly masked for some time after that day.
- On September 12
- Contaminated water leakage from storage tanks was found in the H4 area.
Cooling the reactors with recirculated and decontaminated water from the basements proved to be a success, but as a consequence radioactive waste piled up in the temporary storage facility at the plant. In the first week of October TEPCO decided to use the SARRY decontamination system built by Toshiba Corporation and to keep the Kurion/Areva system as backup.
By 27 September, after three months of operation, some 4,700 drums of radioactive waste had piled up at the plant. The Kurion and SARRY systems both used zeolites to concentrate cesium; once the zeolite was saturated, the vessels containing it became nuclear waste. By then, 210 Kurion-made vessels with a total volume of 307 cubic meters, each measuring 0.9 meters in diameter and 2.3 meters in height, had accumulated at the plant. The Areva filters used sand to absorb radioactive materials, and chemicals were used to reactivate the filters; in this way, 581 cubic meters of highly contaminated sludge were produced.
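The reported vessel count and dimensions are internally consistent: treating each Kurion vessel as a cylinder reproduces the stated total of 307 cubic meters.

```python
import math

# Reported figures: 210 vessels, each 0.9 m in diameter and 2.3 m high.
vessel_volume = math.pi * (0.9 / 2) ** 2 * 2.3   # ~1.46 m^3 per vessel
total_volume = 210 * vessel_volume               # ~307 m^3, matching the article
```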
According to Professor Akio Koyama of the Kyoto University Research Reactor Institute, the highly contaminated water was believed to contain 10 gigabecquerels per liter; if this is concentrated into polluted sludge and zeolites, the density could increase 10,000-fold. Such densities could not be dealt with using conventional systems.
Spent fuel pools
On August 16, 2011, TEPCO announced the installation of desalination equipment for the spent fuel pools of reactors 2, 3 and 4. These pools had been cooled with seawater for some time, and TEPCO feared the salt would corrode the stainless steel pipes and pool wall liners. The Unit 4 spent fuel pool was the first to have the equipment installed, followed by the spent fuel pools of reactors 2 and 3. TEPCO expected to remove 96% of the salt in the spent fuel pools within two months.
Unit 4 spent fuel removal
On December 22, 2014, TEPCO crews completed the removal of all fuel assemblies from the spent fuel pool of reactor 4. 1,331 spent fuel assemblies were moved to the ground-level common spent fuel pool, and 204 unused fuel assemblies were moved to the spent fuel pool of reactor 6 (Unit 4 was out of service for refueling at the time of the 2011 accident, so its spent fuel pool contained a number of unused new fuel assemblies).
On 10 April 2011, TEPCO began using remote-controlled, unmanned heavy equipment to remove debris from around reactors 1–4. The debris and rubble, caused by the hydrogen explosions at reactors 1 and 3, was impeding recovery operations both by being physically in the way and by emitting high radioactivity. The debris was to be placed into containers and kept at the plant.
Proposed building protections
Because the monsoon season begins in June in Japan, it became urgent to protect the damaged reactor buildings from storms, typhoons and heavy rainfall. As a short-term solution, TEPCO envisaged applying light covers over the remaining structures above the damaged reactors. In mid-June, TEPCO released its plan to use automated cranes to move the cover structures into place over the reactors. This strategy was an attempt to keep as many people away from the reactors as possible while still covering the damaged buildings.
On 18 March, Reuters reported that Hidehiko Nishiyama, Japan's nuclear agency spokesman, when asked about burying the reactors in sand and concrete, said: "That solution is in the back of our minds, but we are focused on cooling the reactors down." Considered a last-ditch effort since it would not provide cooling, such a plan would require massive reinforcement under the floor, as for the Chernobyl Nuclear Power Plant sarcophagus.
Scrapping reactors Daiichi 1, 2, 3, 4
On 7 September 2011, TEPCO president Toshio Nishizawa said that the four damaged reactors would be scrapped. The announcement came at a session of the Fukushima Prefectural Assembly, which was investigating the accident at the plant. Whether the six other remaining reactors (Daiichi 5 and 6, Daini 1–4) should also be decommissioned would be decided based on the opinions of the local municipalities.
On 28 October 2011, the Japanese Atomic Energy Commission presented a timetable in a draft report on how to scrap the Fukushima reactors. Within 10 years, a start should be made on retrieving the melted fuel from the reactors. First, the containment vessels of reactors 1, 2 and 3 should be repaired, then all should be filled with water to prevent radiation releases. Decommissioning would take more than 30 years, because the pressure vessels of the reactors are also damaged. In the accident at Three Mile Island in 1979, some 70 percent of the fuel rods had melted; there, the retrieval of the fuel was started in 1985 and completed in 1990. The work at Fukushima was expected to take significantly longer because of the far greater damage and the fact that four reactors would need to be decommissioned at the same time.
After discussions started in August 2011, a panel of experts of Japan's Atomic Energy Commission completed a schedule for scrapping the damaged reactors on 9 November. Their conclusions were:
- The scrapping will take 30 years or longer.
- First, the containment vessels needed to be repaired, then filled with water to block radiation.
- The reactors should be in a state of stable cold shutdown.
- Three years later, a start would be made to take all spent fuel from the 4 damaged reactors to a pool within the compound.
- Within 10 years, the removal of the melted fuel inside the reactors could begin.
This scheme was partly based on the experience gained from the 1979 Three Mile Island accident. However, in Fukushima with three meltdowns at one site, the damage was much more extensive. It could take 30 years or more to remove the nuclear fuel, dismantle the reactors, and remove all the buildings. Research institutions all over the world were asked to participate in the construction of a research-site to examine the removal of fuel and other nuclear wastes. The official publication of the report was planned at the end of 2011.
Protection systems installed
Since the disaster, TEPCO has installed sensors, a fabric cover over the reactors and additional filters to reduce the emission of contaminants.
Sensors for xenon and temperature changes to detect critical reactions
After the detection of radioactive xenon gas in the containment vessel of the no. 2 reactor on 1 and 2 November 2011, TEPCO was not able to determine whether this came from a sustained fission process or only from spontaneous fission. TEPCO therefore installed detection devices for radioactive xenon to single out any occurrence of nuclear criticality. In addition, TEPCO installed temperature sensors to monitor temperature changes in the reactors, another indicator of possible critical fission reactions.
On 20 September the Japanese government and TEPCO announced the installation of new filters to reduce the amount of radioactive substances released into the air. In the last week of September 2011 these filters were to be installed at reactors 1, 2 and 3, so that gases from the reactors would be decontaminated before being released into the air. The construction of the polyester shield over the no. 1 reactor was to be completed by mid-October. In the first half of September the amount of radioactive substances released from the plant was about 200 megabecquerels per hour; according to TEPCO, that was about one four-millionth of the level of the initial stages of the accident in March.
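Taken together, the two figures imply an initial release rate; a quick check of the arithmetic:

```python
# Mid-September 2011 release rate, stated to be roughly one four-millionth
# of the rate during the initial stage of the accident in March.
current_rate_bq_per_h = 200e6            # 200 MBq/h
ratio = 4_000_000                        # "one four-millionth"
initial_rate_bq_per_h = current_rate_bq_per_h * ratio   # 8e14 Bq/h
initial_rate_tbq_per_h = initial_rate_bq_per_h / 1e12   # ~800 TBq/h
```

That is, the stated fraction implies an initial release on the order of 800 terabecquerels per hour.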
Fabric cover over Unit 1
An effort has been undertaken to fit the three damaged reactor buildings with fabric covers and filters to limit the release of radioactive contamination. On 6 April 2011, sources told Kyodo News that a major construction firm was studying the idea, and that construction would not "start until June". The plan has been criticized for potentially having only "limited effects in blocking the release of radioactive substances into the environment". On 14 May, TEPCO announced that it had begun to clear debris to create space to install a cover over the building of reactor 1. In June, a large crane was erected near reactor 1 to begin construction of the fabric cover. From mid-August to mid-September 2011, a rectangular steel frame entirely surrounding the reactor building was constructed. Starting 9 September, the crane was used to attach polyester panels to the frame. On 20 September 2011, TEPCO announced that it hoped to complete the construction of the polyester shield over the no. 1 reactor within three weeks; by that time the steel frame for the fabric cover had been completed. By 7 October, the roof of the structure was being added; on 9 October, the walls of the cover appeared to be in place, and by 13 October the roof had been completed.
Metal cover over Unit 3
In June 2016, preparation work began to install a metal cover over the Unit 3 reactor building. In conjunction with this, a crane is to be installed to assist with the removal of the fuel rods from the storage pool. After inspection and cleaning, the removed fuel is expected to be stored in the site's communal storage facility.
Cleanup of neighboring areas
Significant efforts are being taken to clean up radioactive material that escaped the plant. This effort combines washing down buildings and scraping away topsoil. It has been hampered by the volume of material to be removed and the lack of adequate storage facilities.
There is also a concern that washing surfaces will merely move the radioactive material without eliminating it.
After an earlier decontamination plan, which covered only areas with radiation levels above 5 millisieverts per year, had raised protests, the Japanese government revealed a revised decontamination plan on 10 October 2011 in a meeting with experts. This plan included:
- all areas with radiation levels above 1 millisievert per year would be cleaned.
- no-entry zones and evacuation zones designated by the government would be the responsibility of the government.
- the rest of the areas would be cleaned by local authorities.
- in areas with radiation levels above 20 millisievert per year, decontamination would be done step by step.
- within two years, radiation levels in areas with between 5 and 20 millisieverts per year were to be cut by 60%.
- the Japanese government would help local authorities with disposing of the enormous amount of radioactive waste.
On 19 December 2011 the Japanese Ministry of Environment published more details about these decontamination plans: the work would be subsidized in 102 villages and towns. Opposition to the plan came from cattle farmers in Iwate Prefecture and the tourist industry in the city of Aizuwakamatsu, out of fear that cattle sales might drop or tourism might suffer if the areas were labeled as contaminated. Areas with lower readings complained that their decontamination would not be funded.
In a Reuters story from August 2013, it was noted "[m]any have given up hope of ever returning to live in the shadow of the Fukushima nuclear plant. A survey in June showed that a third of the former residents of Iitate, a lush village famed for its fresh produce before the disaster, never want to move back. Half of those said they would prefer to be compensated enough to move elsewhere in Japan to farm." In addition, despite being allowed to return home, some residents say the lack of an economy continues to make the area de facto unlivable. Compensation payments to those who have been evacuated stop when they are allowed to return home; however, as of August 2013 decontamination of the area had progressed more slowly than expected. There have also been revelations of additional leaks (see above: storage tanks leaking contaminated water).
Cementing the seabed near the water intake
On 22 February 2012 TEPCO started cementing the seabed near the plant to prevent the spread of radioactive materials into the sea. Some 70,000 square meters of seabed around the cooling water intake would be covered with a 60-centimeter-thick layer of cement. The work was to be finished within four months and to prevent the spread of contaminated mud and sand at that location for at least 50 years.
New definition of the no-entry zones introduced
On 18 December 2011 Fukushima Gov. Yuhei Sato and representatives of 11 other municipal governments near the plant were notified, at a meeting in the city of Fukushima with the three ministers in charge of handling the crisis (Yukio Edano, minister of Economy, Trade and Industry; Goshi Hosono, nuclear disaster minister; and Tatsuo Hirano, minister in charge of reconstruction), of the government's plan to redesign the classification of the no-entry zones around the Fukushima nuclear plant. From 1 April 2012 a three-level system would be introduced by the Japanese government:
a) no-entry zones, with an annual radiation exposure of 50 millisieverts or more
- at these places habitation would be prohibited
b) zones with annual radiation exposures between 20 and 50 millisieverts
- here former residents could return, but with restrictions.
c) zones with exposures of less than 20 millisievert per year
- in these zones the residents would be allowed to return to their houses.
Decontamination efforts were planned in line with this newly designed order, to help people return to places where radiation levels would be relatively low.
Costs of the clean-up operations
By mid-December 2011 the local authorities in Fukushima had already spent around 1.7 billion yen (US$21 million) on decontamination works in the cities of Fukushima and Date and the village of Kawauchi. The total clean-up costs were estimated at around 420 billion yen (~US$5.2 billion). Only 184.3 billion yen was reserved for the clean-up in the September supplementary budget of Fukushima Prefecture, along with some funds in the central government's third supplementary budget of 2011. Whenever needed, the central government would be asked for extra funding.
In 2016, University of Oxford researcher and author Peter Wynn Kirby wrote that the government had allocated the equivalent of US$15 billion for the regional cleanup and described the josen (decontamination) process, with "provisional storage areas (kari-kari-okiba) ... [and] more secure, though still temporary, storage depots (kari-okiba)". Kirby opined that the effort would be better called "transcontamination", because it moved the contaminated material around without long-term safe storage planned or executed. He also saw little progress on handling the more intensely radioactive waste of the destroyed power-plant site itself, or on the larger issue of the national nuclear program's waste, particularly given Japan's earthquake risk relative to secure long-term storage.
Lessons learned to date
The Fukushima Daiichi nuclear disaster revealed the dangers of building multiple nuclear reactor units close to one another. This proximity triggered the parallel, chain-reaction accidents that led to hydrogen explosions blowing the roofs off reactor buildings and water evaporating from open-air spent fuel pools—a situation that was potentially more dangerous than the loss of reactor cooling itself. Because of the proximity of the reactors, Plant Director Masao Yoshida "was put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units".
- Staff (24 March 2007). "Official: Workers touched water with radiation 10,000 times normal". CNN Wire. Retrieved 27 March 2011., records(2011) pp.249-250
- "TEPCO halts work to remove radioactive water". NHK WORLD English. 30 March 2011. Archived from the original on May 11, 2011.
- "Fukushima's radioactive water to be pumped into 'Mega Float'". Gizmodo. 30 March 2011. Retrieved 2 June 2011.
- "TEPCO may use floating island to hold tainted water". E.nikkei.com. 2 April 2011. Retrieved 7 April 2011.
- "Increase your water play pool, use of Mega-Float for fishing". Google. Retrieved 24 April 2011.
- Westall, Sylvia (4 April 2011). "Japan to dump 11,500 metric tons of radioactive water The wastewater facility had 11,500 tons of water stored (by 10 April 8900 tons had been pumped into the sea". Reuters. Retrieved 24 April 2011.
- asahi.com(朝日新聞社):Radiation fallout from Fukushima plant will take "months" to stop - English. Asahi.com (4 April 2011). Retrieved on 30 April 2011.
- TEPCO:Seawater Piping Trench
- The Mainichi Shimbun (08 July 2013) Groundwater contamination level soars at Fukushima plant
- The Mainichi Shimbun (10 July 2013)Cesium readings further climb in groundwater at Fukushima plant
- The Mainichi Shimbun (11 July 2013) Radioactive water at Fukushima plant 'strongly suspected' of seeping into sea: NRA Archived 2013-07-14 at the Wayback Machine.
- The Asahi Shimbun (12 July 2013)Strontium detected in well on seaward side of Fukushima plant Archived 2013-07-16 at the Wayback Machine.
- The Asahi Shimbun (13 July 2013) TEPCO's plan to halt spread of radioactive water based on shaky theory Archived 2013-07-17 at the Wayback Machine.
- 875,000,000,000,000 Bq of Tritium contained in total contaminated water / Over 60 times much as safety limit – Fukushima Diary
- Total Tritium in contaminated water increasing by 330 [sic] Trillion Bq per year / Beyond discharge-able amount – Fukushima Diary
- JP Gov “No drastic technology to remove Tritium was found in internationally collected knowledge” Fukushima Diary
- Japanese nuclear firm admits error on radiation reading The Guardian, 27 March 2011.
- Photo, Press Release(TEPCO:2011.4.2)
- Press Release (TEPCO:2011.4.6), records(2011) pp.253-256
- Press Release (TEPCO:2011.4.21)
- Roadmap towards Restoration from the Accident at Fukushima Daiichi Nuclear Power Station (TEPCO:2011.4.17)
TEPCO revised this roadmap many times in 2011 as below.
May 17th, Jun.17th, Jul.19th, Aug.17th, Sep. 20th, Oct.17th, Nov. 17th
- Installation Plan of the Water Treatment Facility (TEPCO:2011.4.27)
- Possible leakage of water including radioactive materials to the outside from around the intake canal of Unit 3 (TEPCO:2011.5.11)
- Appendix1 (NISA:2011.5.25)
- Soramoto(2014) p.9,
Recovery and processing of radioactive accumulated water at Fukushima Daiichi NPS
- Jaif (31 August 2011) 2 workers showered with highly radioactive water Archived 2011-10-11 at the Wayback Machine.
- Mid-and-long-Term Roadmap towards the Decommissioning of Fukushima Daiichi Nuclear Power Units 1-4, TEPCO (TEPCO:2011.12.21)
- NHK-world (5 April 2012) Strontium at Fukushima plant flows into sea[permanent dead link]
- 原子力規制委員会の施行に伴う関係政令の閣議決定について 原子力規制委員会設置法の施行日を定める政令要綱 (2012.9.11)
- Hot test started for the multi-nuclide removal equipment (ALPS) Announcements (TEPCO:2013.3.30)
- On the day before, Japan's Upper House election was held.
- Increases in the Concentration of Radioactive Materials in Seawater and Groundwater on the Ocean Side of the Site: Current Situation and Countermeasures (TEPCO reference material)
- The Asahi Shimbun (28 July 2013) Extremely high tritium level found in water in pit at Fukushima plant Archived 2013-08-01 at the Wayback Machine.
- 地下水の流入抑制のための対策 in 汚染水処理対策委員会(第3回) (2013.5.30)
- Land-side Impermeable Wall (Frozen Soil Wall)
- Contaminated Water Leakage from the Tank in the H4 Area
- Background information and Press Release on INES provisional rating on contaminated water leakage from a water tank at Fukushima Daiichi NPS
- NRA committee(2013.8.28)
- The Fukushima Diary (28 August 2013) Fukushima worker had contamination over head, face and chest on transferring water from the leaking tank
- TEPCO (Japanese) Handout 130828 07
- Government’s Decision on Addressing the Contaminated Water Issue at TEPCO’s Fukushima Daiichi NPS
- The Prime Minister Shinzo Abe's presentation of Tokyo's bid in the International Olympic Committee in Buenos Aires was gotten plenty of attention.
- Contaminated Water Leakage from the Tank in the H4 Area
- The Mainichi Daily news (3 October 2011) Radioactive waste piles up at Fukushima nuclear plant as disposal method remains in limbo Archived 2011-10-05 at the Wayback Machine.
- Jaif (August 16, 2011)Desalinisation of spent fuel pools Archived 2011-08-18 at the Wayback Machine.
- "FUEL REMOVAL FROM UNIT 4 REACTOR BUILDING COMPLETED AT FUKUSHIMA DAIICHI". www.tepco.co.jp. Retrieved 30 April 2011.
- NHK, "TEPCO Uses Unmanned Equipment To Remove Rubble", 10 April 2011.
- "TEPCO unveils plan to seal Fukushima reactors". The Guardian. London. 15 June 2011. Retrieved 6 August 2011.
- Saoshiro, Shinichi (18 March 2011). "Japan weighs need to bury nuclear plant; tries to restore power". Reuters. Retrieved 18 March 2011.
- Alleyne, Richard (19 March 2011). "Japan nuclear crisis: scientists consider burying Fukushima in a 'Chernobyl sarcophagus'". The Daily Telegraph. London.
- JAIF (7 September 2011) Nishizawa:TEPCO to scrap Fukushima reactors Archived 2012-04-19 at the Wayback Machine.
- NHK-world (28 October 2011) Fuel retrieval at Fukushima to start in 10 years Archived 2011-10-28 at the Wayback Machine.
- JAIF (29 October 2011) Earthquake-report 249: Fuel retrieval at Fukushima to start in 10 years Archived 2013-05-17 at the Wayback Machine.
- JAIF (10 November 2011) Earthquake-report 261[permanent dead link]
- NHK-world (9 November 2011) Commission releases report on scrapping N-plant Archived 2011-12-13 at the Wayback Machine.
- NHK-world (10 November 2011) TEPCO to monitor xenon at Fukushima plant Archived 2011-11-12 at the Wayback Machine.
- JAIF (20 September 2011 Earthquake-report 211: A new plan set to reduce radiation emissions
- "Reactor feared in meltdown, radiation spreads". ABC News. 30 March 2011. Retrieved 30 March 2011.
- Radiation-shielding sheets to be installed in September at earliest, Kyodo News. English.kyodonews.jp. Retrieved on 30 April 2011.
- "TEPCO to cover No.1 reactor building". NHK. 14 May 2011. Archived from the original on May 13, 2011. Retrieved 2 June 2011.
- Based on archived video of the plant from TEPCO
- JAIF (20 September 2011)Steel Frame for Unit 1 Reactor Building Cover is Complete Archived 2012-04-19 at the Wayback Machine.
- "Test run for Fukushima Daiichi 3 cover installation". World Nuclear News. 13 June 2016. Retrieved 14 June 2016.
- Hot-spot cleanups hampered by public resistance to local disposal sites - The Mainichi Daily News Archived August 27, 2011, at the Wayback Machine.
- No quick way to remove radioactive substances from soil: experts - The Mainichi Daily News Archived August 27, 2011, at the Wayback Machine.
- JAIF (10 October 2011) Earthquake-report 231: Decontamination-plan compiled Archived 2011-11-06 at the Wayback Machine.
- JAIF (20 December 2011) Earthquake report 296: Govt to designate nuclear clean-up areas Archived 2012-01-03 at the Wayback Machine.
- Knight, Sophie (14 August 2013). "Japan's nuclear clean-up: costly, complex and at risk of failing | Reuters". In.reuters.com. Retrieved 2013-12-24.
- https://archive.is/20130821145939/http://mainichi.jp/english/english/newsselect/news/20130808p2a00m0na013000c.html. Archived from the original on August 21, 2013. Retrieved August 21, 2013. Missing or empty
- NHK-world (22 February 2012)Seabed near nuke plant to be covered with cement Archived November 27, 2012, at the Wayback Machine.
- The Daily Yomiuri (18 December 2011) Govt speeds rezoning of contaminated areas
- The Mainichi Daily News (21 December 2011) Fukushima local decontamination costs bust estimates Archived December 21, 2011, at the Wayback Machine.
- Kirby, Peter Wynn, "Playing Pass the Parcel With Fukushima", New York Times OpEd, March 7, 2016. Retrieved 2016-03-07.
- Yoichi Funabashi and Kay Kitazawa (March 1, 2012). "Fukushima in review: A complex disaster, a disastrous response". Bulletin of the Atomic Scientists.
- 電気新聞, ed. (2011). 東日本大震災の記録 - 原子力事故と計画停電 -. (社)日本電気協会新聞部.
- Management of contaminated water
- The Committee on countermeasures for contaminated water treatment (2013), Preventative and Multilayered Measures for Contaminated Water Treatment at the Fukushima Daiichi Nuclear Power Station of Tokyo Electric Power Company - Through completeness of comprehensive risk management - (PDF)
- Tritiated Water Task Force (2016), Tritiated Water Task Force Report (PDF)
- METI (2016), Important Stories on Decommissioning-Fukushima Daiichi Nuclear Power Station, now and in the future (PDF)
- Answers to Frequently Asked Questions About Cleanup Activities at Three Mile Island, Unit 2, NUREG, 1984
- 空本 誠喜 (2014). 汚染水との闘い −福島第一原発・危機の深層−. ちくま新書. 筑摩書房.
- PM Information on contaminated water leakage at TEPCO's Fukushima Daiichi Nuclear Power Station, Prime Minister of Japan and His Cabinet
- MOFA Information on contaminated water leakage at TEPCO’s Fukushima Daiichi Nuclear Power Station, Ministry of Foreign Affairs
- TEPCO News Releases, Tokyo Electric Power Company
- NRA, Japan, Nuclear Regulation Authority
- NISA, Nuclear and Industrial Safety Agency, former organization
- Fukushima Diary News site of a concerned Japanese man in Europe
- Decommissioning plan of Fukushima Daiichi Nuclear Power Station
- Mid-and-Long-Term Roadmap towards the Decommissioning of TEPCO's Fukushima Daiichi Nuclear Power Station Units 1-4
Presentation on theme: "UNDERSTANDING THE SCRIPTURES"— Presentation transcript:
1 UNDERSTANDING THE SCRIPTURES Chapter 8: The Law
2 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) ANTICIPATORY SET Read the episode with the golden calf (cf. Ex 32:35). Write for a few minutes about something surprising within this story. Briefly share responses.
3 1. The Golden Calf and the Levitical Priesthood (pp. 150–153)
BASIC QUESTIONS
- How did the Israelites abandon their covenant with God?
- What prevented God from destroying the nation of Israel?
- What was the origin of the Levitical priesthood?
KEY IDEAS
- With Aaron's help, the Israelites abandoned God and sinfully worshiped an idol.
- God was going to destroy the Israelites, but Moses interceded on their behalf.
- The Levites were given the priesthood because they had attacked idolaters.
4 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) FOCUS QUESTIONS Where was Moses as the Israelites were getting into trouble? He was alone with God on Mt. Sinai. Why did Aaron demand the people give him their gold earrings? Perhaps he was trying to deter them from wanting an idol since it would cost them valuable jewelry. What claim did the Israelites make about the golden calf? They said, “These are your gods... who brought you up out of the land of Egypt!” (Ex 32: 4). Extension: They knew the one true God had brought them out of Egypt, having defeated the false gods.
5 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) FOCUS QUESTIONS Why were the people attracted to the worship of idols? They were probably attracted out of sensuality. They "sat down to eat and drink, and rose up to play" (Ex 32:6). In other words, they had a feast, drank, and danced. What laws of the Decalogue did the Israelites break when they worshiped the molten calf? They broke the First Commandment by making a graven image and worshiping it. Inferred also is a breaking of the Sixth Commandment against sexual immorality.
6 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) FOCUS QUESTIONS What wording did God use to show he and the people were disowning each other? God spoke of “your [Moses’] people” whom “you [Moses]” brought out of Egypt. God no longer referred to them as his people, but Moses’. What did God offer to Moses? God offered to destroy the Israelites and raise up a new people for Moses to lead. In other words, he would have been a New Abraham. Despite his knowledge of what the Israelites were doing in the camp, how did Moses react to having seen their revelry, and what did this action mean? Moses smashed the two stone tablets of the Decalogue. This symbolized the covenant had been broken.
7 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) GUIDED EXERCISE Conduct a think / pair / share to understand the meaning of Exodus 32:11–14 with respect to the use of words like repenting and evil in light of St. Augustine’s excerpt from City of God (p. 102).
8 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) GUIDED EXERCISE: Personal Journaling
Aaron and the Israelites who wanted to worship the golden calf led the nation into sin. It is good for each person to examine his or her life to recognize those situations in which he or she is leading others (e.g., family members, coworkers, friends) into sinful practices.
For your eyes only, write for five minutes responding to the following question: Which actions of mine tend to lead others to sin?
As a class, brainstorm ways people can avoid leading others into sin.
9 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) FOCUS QUESTIONS Which tribe of the Israelites had remained loyal to God? The Tribe of Levi had remained loyal. About how many idolaters did the Levites slay in the Hebrews' camp? They slew about 3,000 (cf. Ex 32:28). How was God's relationship with Israel to be different than first envisioned? Initially, Israel was to be a nation of priests who would lead the rest of the nations of the world to the Lord. However, because of the Israelites' sin, the Lord made the Levites mediators of his Chosen People, but he remained a God of steadfast love and forgiveness.
10 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) CLOSURE Write a paragraph describing how Israel went from a nation of priests to a nation that needed priests to intercede for them.
11 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) HOMEWORK ASSIGNMENT
Study Questions 1–7 (p. 165)
Workbook Questions 1–13
Read "After the Fall" through "In the Wilderness" (pp. 154–158)
12 1. The Golden Calf and the Levitical Priesthood (pp. 150–153) ALTERNATIVE ASSESSMENT
The Israelites failed to sacrifice their desires in favor of the will of God. In the same way, people often fail to understand the life of a Christian as a call to sacrifice one's desires for the will of God.
Take part in a brief class discussion using the following question: Who are today's heroes, and why are they heroic?
Then free write in response to the following statement: True heroism consists of personal sacrifice for a cause or the good of others, and it may include the sacrifice of one's life.
Share responses.
13 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) ANTICIPATORY SET
Read the first three paragraphs of page 154. Do a think / pair / share using the following question: How did the Hebrew priesthood change as a result of the worship of the golden calf?
14 2. The Consequences of Israel's Unfaithfulness (pp. 154–158)
BASIC QUESTIONS
- What was the function of the Levitical priesthood?
- What was the tabernacle?
- What was the aim of the laws after the idolatry (golden calf)?
- How did Israel respond to God's plan to settle in Canaan?
KEY IDEAS
- The Levites were given the priesthood that would have belonged to all the people; they were thus mediators between the Israelites and God.
- God dwelt with Israel in the earthly tabernacle designed after the heavenly Temple.
- The Levitical laws taught Israel humility and holiness.
- The Israelites rejected God's plan to settle in the land of Canaan, so God had them wander in the desert for the rest of their lives.
15 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) FOCUS QUESTIONS According to the Catechism, no. 1539, what was the role of the Levitical priesthood? Levitical priests offered to God, on behalf of the Hebrews, gifts and sacrifices for sins. How was the work of the Levitical priesthood under the Mosaic Covenant (to "proclaim the Word of God and to restore communion with God by sacrifices and prayer") like the Catholic priesthood and the Mass in the New Covenant? The Mass, presided over by a Catholic priest, consists of the proclamation of the Word of God and a re‑presentation of the Sacrifice of Christ, which restored communion between God and mankind. According to the Catechism, no. 1540, what was the imperfection of Israel's worship? It was "powerless to bring about salvation, and so needed to repeat its sacrifices ceaselessly, which only the sacrifice of Christ could accomplish."
16 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) FOCUS QUESTIONS Why did God give the Israelites laws to separate them from other peoples? The aim was to teach them they were different from other people, to prevent them from being infected by paganism. Why were the Israelites commanded to sacrifice often? The frequent sacrifices represented a daily killing of one or another false god; thus, the sacrifices reminded the Chosen People of their dependence on the one true God. How was the giving of the Law similar to a teenager being grounded? Many of God’s laws appear to have been punishments, but their aim was always to rehabilitate the Israelites. Good parents tighten the rules when a teenager gets into trouble to prevent him or her from making the same or similar bad choices.
17 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) GUIDED EXERCISE
Israel was severely punished for having worshiped a false god, the golden calf. Some non-Catholics might view the practice of venerating statues of the Blessed Virgin Mary as a form of idol worship.
Read the Catechism, no (p. 166), and then take part in a class discussion using the following question: How is veneration of a visual representation of the Blessed Virgin Mary substantially different from idol worship?
18 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) GUIDED EXERCISE
There exist utopian political philosophies promoting the idea of a heaven on earth; modern examples include Marxism and materialism. Some scientists are working on means to extend a person's lifespan indefinitely.
Free write in response to this question: What would be some characteristics of a heaven on earth?
Share answers aloud. Then discuss this question: Why is it impossible to attain complete happiness on earth?
19 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) FOCUS QUESTIONS What is the Book of Leviticus? Its name is Latin for "of the Levites"; the Hebrews called it the Manual for Priests. It was an instruction book for the Levitical priesthood. What is the intention of the Levitical laws? The purpose behind the laws in Leviticus was to teach Israel how to be a holy people. What happened to the sons of Aaron when they did not worship God in the prescribed manner? God killed them by fire.
20 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) FOCUS QUESTIONS What was the purpose of the prohibition of some kinds of foods? The prohibitions made Israel different to help them remember they were to be a holy people belonging to God. What provision was made for disobeying God’s laws? There were offerings to make atonement for sins. What was unique about the way Israel was to be governed? They were not to be governed by a king but by God himself. What is the Book of Numbers? It is called Numbers in English because it numbers, or is a census of, all the tribes of Israel. Its Hebrew name is In the Wilderness because it chronicles Israel’s forty years’ wandering. It is a history of Israel’s failure to live up to the Law.
21 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) FOCUS QUESTIONS How long did a journey from Mt. Sinai to the Promised Land usually last? It usually lasted about eleven days. How was the land of Canaan described? It was “a land which flows with milk and honey” (Nm 14: 8), that is, a rich agricultural land. What did most of the Israelite spies sent into Canaan think were the chances of conquering the territory? They said it was impossible. How did the people of Israel respond to the spies’ report? They turned against Moses and complained about being brought out of Egypt only to die. Which scouts had faith Israel could possess Canaan? Joshua and Caleb believed God could deliver this land to them.
22 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) FOCUS QUESTIONS How did the people respond to Caleb and Joshua’s confidence? They wanted to stone them. How did God give the Israelites what they wished? Since they would rather have died in the wilderness, God allowed them to remain there until all the adults had died. What was Moses’ sin? Moses struck the rock twice out of anger to bring forth water. Rather than recognize God gave them water, this implied Moses provided it.
23 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) CLOSURE Free write for five minutes about God’s love for Israel even after her fall into idolatry.
24 2. The Consequences of Israel's Unfaithfulness (pp. 154–158) HOMEWORK ASSIGNMENT
Study Questions 8–19 (p. 165)
Practical Exercises 1–2 (p. 165)
Workbook Questions 14–33
Read "The Constitution of Israel" through "The Tabernacle in the Wilderness" (pp. 159–162)
25 2. The Consequences of Israel’s Unfaithfulness (pp. 154–158) ALTERNATIVE ASSESSMENT Conduct a think / pair / share to compare and contrast the sins of Adam and Eve with the golden calf.
26 3. Deuteronomy (pp. 159–162) ANTICIPATORY SET
Opening prayer on Deuteronomy 30:11–20. Discuss this question: Which passages in this reading are temporal, i.e., applying to the specific situation of Israel at the time, and which are universal, i.e., applicable to all people of all times?
27 3. Deuteronomy (pp. 159–162) BASIC QUESTION
What is the Book of Deuteronomy?
KEY IDEA
The Book of Deuteronomy is the book of laws Moses gave to the nation of Israel as its constitution before its entry into the Promised Land; it was an imperfect law that made concessions to the Israelites' hard hearts and was amended as times changed.
28 3. Deuteronomy (pp. 159–162) FOCUS QUESTIONS What does it mean to say the Hebrew people "began to play the harlot with the daughters of Moab"? Moabite women introduced the Israelites to the worship of Baal of Peor, which may have included prostitution at their temples. What did God promise Phinehas? Because he punished idolaters, God promised the office of high priest would always belong to his descendants. What is the Book of Deuteronomy? It is the last book of the Pentateuch; its name means "second law," a law given not directly by God but by Moses. What was the importance of Deuteronomy? It was a new constitution, or set of laws, that established Israel not as a nation of priests but as a nation‑state.
29 3. Deuteronomy (pp. 159–162) GUIDED EXERCISE
Conduct a focused reading of the paragraph "The Old Law..." (p. 150) using the following question: What is the relationship between the New Law and the Old Law?
Have the students repeat the Anticipatory Set from Chapter 2 (p. 24) about how Christ fulfills, refines, surpasses, and leads the Old Law to its perfection. Assign each group to a different passage than the first time.
30 3. Deuteronomy (pp. 159–162)FOCUS QUESTIONS What were some of the concessions, or lower laws, Moses gave because of the Israelites’ hardness of heart? These concessions include divorce and genocide in the conquest of the Holy Land (since Israel had proved unable to live alongside idolaters). What did Ezekiel mean when he said, “I gave them statutes that were not good and ordinances by which they could not have life”? In the Book of Deuteronomy, God allowed Israel to do what they wanted within the context of the Law so they would experience the consequences of their evil deeds.
31 3. Deuteronomy (pp. 159–162)GUIDED EXERCISE Read the Catechism, no (p. 166). The Mosaic Law contains many truths accessible to reason that people can—but often do not—read in their own hearts. In your assigned group, explain how your Commandment accords with reason and how one could discover this obligation naturally, i.e., “reason to it.”
32 3. Deuteronomy (pp. 159–162)FOCUS QUESTIONS To whom was the Book of Deuteronomy addressed? It was given by Moses to the Israelites. When was the Book of Deuteronomy given? It was given when the Israelites had reached the River Jordan and were about to cross into the Promised Land. Why was the Book of Deuteronomy given? It was a constitution, or series of laws, for the nation to be founded.
33 3. Deuteronomy (pp. 159–162)FOCUS QUESTIONS What is the Great Commandment? It is to love God with all one’s heart, soul, and might. What is the heart of the Book of Deuteronomy? It is Chapters 12–26, the new law for the land God gave them. Why was Moses laid in a secret grave? God gave Moses a secret grave so the Israelites could not turn Moses’ resting place into a site for idolatrous worship.
34 3. Deuteronomy (pp. 159–162)CLOSURE Free write for five minutes about the Sacred Author, his aim, and the nature of the Book of Deuteronomy.
36 3. Deuteronomy (pp. 159–162)ALTERNATIVE ASSESSMENT To remember the steps in the founding of Israel, use the table on page 160 comparing Israel and the United States of America to write a paragraph comparing the founding of these two nations.
Boreal Forest Region: Alberta, Canada
Mighty rivers drain north and east from the Rocky Mountains into the watershed of the Arctic Ocean.
Look at any map of Alberta and you will see them: The Athabasca, Smoky, Peace, Chinchaga and Hay, tracing sinuous patterns across the vast northern half of the province, a lightly populated and little-known region of dark forests and muskegs.
This is the Boreal Forest Region which comprises 48 percent of Alberta.
The boreal forest is a critical ecosystem. The tar sands deposits lie in the boreal plains ecozone, which covers 183 million acres (74 million hectares) and extends across British Columbia, Northwest Territories, Alberta, Saskatchewan, and Manitoba. Forest cover is predominantly coniferous, and black spruce, white spruce, jack pine, and tamarack are principal species. Hardwoods, particularly trembling aspen, white birch, and balsam poplar, are well represented and are often mixed with conifers. This is one of the most productive forest areas not only in Canada, but in the entire world.
Approximately 35 percent of the boreal plains is composed of wetlands, including bogs, fens, swamps, marshes, and shallow open-water ponds. Some areas of the boreal plains have 85 to 95 percent wetland ground coverage, and these wetland complexes can span as much as 120,000 acres (48,500 hectares). These extensive wetland and water areas combine with complex uplands to create a diverse mosaic of bird habitats. Most of these wetlands are connected through surface and groundwater hydrology and are highly susceptible to damage from tar sands development.
The tar sands cover 141,000 sq km of Alberta. Twenty per cent of this area, holding a share of the estimated 173 billion recoverable barrels, is close enough to the surface to be strip-mined.
These incredible pictures show the bleak landscape of bitumen, sand and clay created by the frantic pursuit of 173 billion barrels of untouched oil.
Strip-mining is done by removing the forest and the peaty soil beneath; gas-heated water is then forced through the tar sand to melt and separate bitumen from the sand and clay. It takes four barrels of water to retrieve one barrel of oil, creating large tailings ponds of oily, toxic water that cover vast expanses.
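To put that 4:1 water-to-oil ratio in perspective, a back-of-the-envelope calculation is easy to sketch. The inputs (four barrels of water per barrel of oil, 173 billion recoverable barrels) are the article's own figures; the total derived here is purely illustrative, not an engineering estimate.

```python
# Rough arithmetic behind the water demand implied by the article's figures.
WATER_BARRELS_PER_OIL_BARREL = 4      # article: "four barrels of water ... one barrel of oil"
RECOVERABLE_OIL_BARRELS = 173e9       # article: 173 billion recoverable barrels

# Total process water if every recoverable barrel were extracted this way.
water_needed = WATER_BARRELS_PER_OIL_BARREL * RECOVERABLE_OIL_BARRELS
print(f"Water required: {water_needed:.3g} barrels")  # → Water required: 6.92e+11 barrels
```

That is on the order of 700 billion barrels of water, which is why the tailings ponds described above grow so large.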
Lawyer and environmentalist Polly Higgins said: ‘Runaway climate change becomes almost inevitable if the tar sands continue.
'The tar sands mining should be classified as an act of ecocide and rendered illegal under international law. This is, in effect, a crime against humanity.'
Once this landscape was a pristine wilderness roamed by deer; now it is 'the most destructive industrial project on earth'.
Lush green forests once blanketed an area of the tar sands larger than England at Fort McMurray in Alberta, Canada, where blackened earth now stands in what environmentalists have dubbed the most destructive industrial project on earth. The boreal forest, once home to grizzly bears, moose and bison, is vanishing at a rate second only to Amazon deforestation.
Read more: http://www.dailymail.co.uk/news/article-2219240/Tar-Sands-Canada-worlds-largest-oil-reserve-173billion-untouched-barrels.html#ixzz29iRMOGBH
Enbridge Inc. is a Calgary, Alberta based company focused on three core businesses: crude oil and liquids pipelines, natural gas transportation and distribution, and green energy.
Enbridge operates the world's longest crude oil and liquids pipeline system, located in both Canada and USA. It owns and operates Enbridge Pipelines Inc. and a variety of affiliated pipelines in Canada and the U.S., and has an approximate 27% interest in Enbridge Energy Partners, L.P. (NYSE: EEP) which owns the Lakehead System in the U.S. These pipeline systems have operated for over 60 years and now comprise approximately 13,500 kilometres (8,400 mi) of pipeline, delivering more than 2 million barrels (320,000 m3) per day of crude oil and liquids.
|Oil pipelines stretching thousands of miles|
Using data from Enbridge's own reports, the Polaris Institute calculated that 804 spills occurred on Enbridge pipelines between 1999 and 2010. These spills released approximately 168,645 barrels (26,812.4 m3) of crude oil into the environment.
|Enbridge Oil spill Yellowstone National Park, 2010|
On July 4, 2002, an Enbridge pipeline ruptured in a marsh near the town of Cohasset, Minnesota, in Itasca County, spilling 6,000 barrels (950 m3) of crude oil. In 2006, there were 67 reportable spills totaling 5,663 barrels (900.3 m3) on Enbridge's energy transportation and distribution systems; in 2007, there were 65 reportable spills totaling 13,777 barrels (2,190.4 m3).
On March 18, 2006, approximately 613 barrels (97.5 m3) of crude oil were released when a pump failed at Enbridge's Willmar terminal in Saskatchewan. According to Enbridge, roughly half the oil was recovered.
On January 1, 2007, an Enbridge pipeline that runs from Superior, Wisconsin to near Whitewater, Wisconsin cracked open and spilled ~50,000 US gallons (190 m3) of crude oil onto farmland and into a drainage ditch. The same pipeline was struck by construction crews on February 2, 2007, in Rusk County, Wisconsin, spilling ~201,000 US gallons (760 m3) of crude, of which only about 87,000 gallons were recovered. Some of the oil filled a hole more than 20 feet (6.1 m) deep and contaminated the local water table.
In 2009, Enbridge Energy Partners, a US affiliate of Enbridge Inc., agreed to pay $1.1 million to settle a lawsuit brought against the company by the state of Wisconsin for 545 environmental violations.
In January 2009, an Enbridge pipeline leaked about 4,000 barrels (640 m3) of oil southeast of Fort McMurray, at the company's Cheecham Terminal tank farm.
On January 2, 2010, Enbridge's Line 2 ruptured near Neche, North Dakota, releasing about 3,784 barrels of crude oil, of which only 2,237 barrels were recovered.
On Monday, July 26, 2010, a leaking pipeline spilled an estimated 843,444 US gallons (3,192.78 m3) of crude oil into Talmadge Creek, a tributary of the Kalamazoo River, near Marshall in southwest Michigan. A United States Environmental Protection Agency update on the Kalamazoo River spill concluded the pipeline rupture "caused the largest inland oil spill in Midwest history" and put the cost of the cleanup at $36.7 million (US) as of November 14, 2011. The cleanup remained unfinished as of July 2012.
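The spill figures above mix barrels, US gallons and cubic metres. As a quick sanity check on those conversions, here is a minimal sketch using the standard definitions (1 oil barrel = 42 US gallons; 1 US gallon = 3.785411784 litres); the function names are illustrative, not from any source quoted here.

```python
# Standard volume definitions used to cross-check the figures above.
M3_PER_GALLON = 3.785411784e-3      # 1 US gallon = 3.785411784 litres
GALLONS_PER_BARREL = 42             # 1 oil barrel = 42 US gallons
M3_PER_BARREL = GALLONS_PER_BARREL * M3_PER_GALLON  # ~0.158987 m^3

def gallons_to_m3(gallons):
    return gallons * M3_PER_GALLON

def barrels_to_m3(barrels):
    return barrels * M3_PER_BARREL

# Kalamazoo estimate quoted above: 843,444 US gallons ~ 3,192.78 m^3
print(round(gallons_to_m3(843_444), 2))  # 3192.78
# Cohasset, 2002: 6,000 barrels ~ 950 m^3 (954 m^3 before rounding)
print(round(barrels_to_m3(6_000)))       # 954
```

The Kalamazoo figure reproduces the 3,192.78 m3 quoted in the reports above exactly, which suggests the sources converted from gallons at this standard rate.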
Long involvement in Canada's tar sands has been central to Koch Industries' evolution and positions the billionaire brothers for a new oil boom.
Bitumen from Canada's tar sands is dirtier and thicker than conventional oil. Extracting and processing this unconventional fossil fuel creates far more greenhouse gases than drilling for the light, sweet oil most Americans are familiar with. Environmentalists, supported by many scientists, want tighter regulations imposed on this crude to minimize its role in the U.S. economy as part of a larger effort to move beyond petroleum.
Mining operations in the Athabasca oil sands. Image shows the Athabasca River about 600m from the tailings pond. NASA Earth Observatory photo, 2009.
Koch Industries declined to answer any questions for this story.
The controversy over the Kochs and the pipeline was sparked by an InsideClimate News report from February. That analysis, also published on Reuters.com and later cited by various news organizations, found that Flint Hills is deeply involved in the Canada-Alberta oil sands trade and is well positioned to benefit if more heavy crude is exported to the United States.
|A geyser of oil from a broken pipe in Burnaby in 2007 released 234,000 liters (approx. 1,500 barrels) before it was stopped.|
|Refinery at Texas City|
The Koch brothers have donated millions to Republican candidates and conservative movements, bankrolling groups involved in Tea Party causes and in campaigns to deny climate change science and the need for cleaner energy. Through their Flint Hills subsidiary, they underwrote the failed 2010 ballot initiative that would have suspended California's landmark law capping greenhouse gases.
|Bullet causes oil spill in Alaska|
Out on the Tar Sands Mainline: Mapping Enbridge’s Dirty Web of Pipelines
May 2010 (partially updated, March 2012).
The Polaris Institute
For more information on the Polaris Institute’s energy campaign please visit www.tarsandswatch.org
The James River (also known as the Jim River or the Dakota River) is a tributary of the Missouri River, approximately 710 mi (1,143 km) long, draining an area of 20,653 square miles (53,490 km2) in the U.S. states of North Dakota and South Dakota. The river provides the main drainage of the flat lowland area of the Dakotas between the two plateau regions known as the Coteau du Missouri and the Coteau des Prairies. This narrow area was formed by the lobe of a glacier during the last ice age, and as a consequence the watershed of the river is slender and it has few major tributaries for a river of its length.
The river rises in Wells County, North Dakota, approximately 10 mi (16 km) northwest of Fessenden. It flows briefly east towards New Rockford, then generally south-southeast through eastern North Dakota, past Jamestown, where it is first impounded by the Jamestown Dam to form a large reservoir, and then joined by the Pipestem River. It enters northeastern South Dakota in Brown County, where it is impounded to form two reservoirs northeast of Aberdeen. At Columbia, it is joined by the Elm River. Flowing southward across eastern South Dakota, it passes Huron and Mitchell, where it is joined by Firesteel Creek. South of Mitchell, it flows southeast and joins the Missouri just east of Yankton.
Originally called "E-ta-zi-po-ka-se Wakpa," literally "unnavigable river" by the Dakota tribes, the river was named Rivière aux Jacques (literally, "James River" in English) by French explorers. By the time Dakota Territory was incorporated, it was being called the James River. This name was provided by Thomas L. Rosser, a former Confederate general who helped to build the Northern Pacific Railroad across North Dakota. A Virginian, he named the river and the settlement of Jamestown, North Dakota, after the English colony of Jamestown, Virginia. (The coincidence of the old French name "Jacques" directly translating as "James" in English is noted.) However, the Dakota Territory Organic Act of 1861 renamed it the Dakota River. The new name failed to attain popular usage and the river retains its pre-1861 name.
HYDROLOGY OF PRAIRIE POTHOLES IN NORTH DAKOTA
By CHARLES E. SLOAN
GEOLOGICAL SURVEY PROFESSIONAL PAPER 585-C
Prairie potholes (sloughs) are water-holding depressions of glacial origin in the prairies of the Northern United States and southern Canada. Water is supplied to the potholes by precipitation on the water surface, basin runoff, and seepage inflow of ground water. Depletion of pothole water results from evapotranspiration, overflow, and seepage outflow. Since potholes generally do not overflow, seepage outflow is the principal way in which dissolved salts can be removed. Salinity of pothole water is therefore a good indication of the seepage balance. Net seepage outflow results in fresh to brackish waters that constitute ephemeral to semipermanent ponds, whereas net seepage inflow results in brackish to saline waters that constitute semipermanent to permanent ponds.
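The water budget described above can be sketched as a simple mass balance: gains (precipitation, basin runoff, seepage inflow) against losses (evapotranspiration, overflow, seepage outflow), with the seepage balance driving salinity. This is an illustrative toy model, not USGS methodology; all names and figures are hypothetical.

```python
# Hypothetical seasonal water budget for a prairie pothole, in cm of
# water depth; every number here is illustrative, not measured data.
def storage_change(precip, basin_runoff, seepage_in,
                   evapotranspiration, overflow, seepage_out):
    gains = precip + basin_runoff + seepage_in
    losses = evapotranspiration + overflow + seepage_out
    return gains - losses

def pond_class(seepage_in, seepage_out):
    # Since potholes rarely overflow, seepage outflow is the only route
    # by which dissolved salts leave, so net seepage sets the salinity.
    if seepage_out > seepage_in:
        return "fresh-to-brackish (ephemeral to semipermanent pond)"
    return "brackish-to-saline (semipermanent to permanent pond)"

print(storage_change(30, 15, 5, 60, 0, 10))  # -20 -> net seasonal drawdown
print(pond_class(5, 10))                     # net outflow flushes salts
```

The two functions mirror the two sentences of the paragraph above: the first is the supply/depletion budget, the second is Sloan's observation that salinity is a good indicator of the seepage balance.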
The most conspicuous glacial feature of North Dakota is the Coteau du Missouri, defined by the U.S. Geographic Board (1933) as a "narrow plateau beginning in the northwest corner of North Dakota between the Missouri River and River des Lacs and Souris River and running southeast and south, with its southern limit not well defined; and its western escarpment forming the bluffs of the Missouri." Winters (1967) discusses boundaries of the Coteau and other definitions that are in use. For this study the Coteau du Missouri is defined as that region of dead-ice moraine which lacks a well-integrated drainage system and lies between the Missouri escarpment on the northeast and the well-drained ground moraine adjacent to the Missouri River on the southwest. Figure 4 is a generalized map showing selected physical subdivisions of North Dakota including the Coteau du Missouri.
Glacial drift west of the Coteau du Missouri consists mainly of thin ground moraine that is discontinuous and patchy near the Missouri River. West of the Missouri River, the drift is very discontinuous and, in many places, consists only of scattered erratics.
Coteau du Missouri
The Coteau du Missouri, or Missouri Plateau, is a large plateau that stretches along the eastern side of the valley of the Missouri River in central North Dakota and north-central South Dakota in the United States. In Canada, the corresponding upland, the Missouri Coteau, is classified as part of the Great Plains Province (Alberta Plateau Region), extending across the southeast corner of Saskatchewan and the southwest corner of Alberta. Historically, the Canadian portion was known as Palliser's Triangle, regarded as an extension of the Great American Desert and unsuitable for agriculture, so designated by the geographer and explorer John Palliser. The terrain of the Missouri Coteau features low hummocky, undulating, rolling hills, potholes, and grasslands.
The plateau is poorly drained and is interspersed with glacial kettle lakes. It is traversed by several broad sags marking the ancient stream valleys of the eastern continuations of the Grand, Moreau, Cheyenne, Bad, and White rivers.
To the east of the plateau, the lowland valley of the James River was formed by a glacial lobe during the most recent ice age, separating the plateau from the Coteau des Prairies farther east.
Agriculturally the plateau is a grain and livestock region.
|Enbridge XL Pipeline Route (proposed)|
Cottonwood Lake Study Area North Dakota Wetlands
The Cottonwood Lake Study Area is located in Stutsman County, North Dakota, about 35 miles northwest of the Northern Prairie Wildlife Research Center (NPWRC) headquarters near Jamestown.
Today the Cottonwood Lake Study Area is internationally recognized as one of the most intensively studied wetland complexes in North America. More than 80 scientific publications, graduate theses, and presentations at scientific conferences resulting from these studies provide the bulk of information currently available to guide wetland management in the prairie pothole region of the U.S. and Canada. According to Euliss, one of the greatest contributions of the Cottonwood Lake effort is that it "provides invaluable baseline data on the hydrological, chemical, and biological attributes upon which to base comparisons with ongoing research, including studies on wetland restoration and wetland monitoring." In addition, the understanding of the interrelation of hydrological, chemical, and biological processes revealed by research at the site provides the scientific foundation that allows wetland managers to understand the outcome of different management options.
Pothole C1 (fig. 11) is in the Cottonwood Lake area of the SW¼, sec. 32, T. 142 N., R. 66 W., about 12 miles west of Buchanan, N. Dak. Pothole C1 is surrounded by surficial glacial till in high-relief stagnation moraine. The pothole is in a deep basin that shows evidence of a high-water mark about 6 feet above the highest water level that occurred during the study. A well-developed wave-cut platform that is very stony surrounds the pothole. The pothole is brackish and semipermanent according to the classification of Stewart and Kantrud (1969).
The Coteau des Prairies is a plateau approximately 200 miles in length and 100 miles in width (320 by 160 km), rising from the prairie flatlands in eastern South Dakota, southwestern Minnesota, and northwestern Iowa in the United States.
|Prairie Coteau of North Dakota.|
The southeast portion of the Coteau comprises one of the distinct regions of Minnesota, known as Buffalo Ridge.
The flatiron-shaped plateau was named by early French explorers from New France (Quebec), Coteau meaning "slope" in French.
The plateau is composed of thick glacial deposits, the remnants of many repeated glaciations, reaching a composite thickness of approximately 900 feet (275 m). They are underlain by a small ridge of resistant Cretaceous shale. During the last (Pleistocene) Ice Age, two lobes of the glacier appear to have parted around the pre-existing plateau and further deepened the lowlands flanking the plateau.
The plateau has numerous small glacial lakes and is drained by the Big Sioux River in South Dakota and the Cottonwood River in Minnesota. Pipestone deposits on the plateau have been quarried for hundreds of years by Native Americans, who use the prized, brownish-red mineral to make their sacred peace pipes. The quarries are located at Pipestone National Monument in the southwest corner of Minnesota and in adjacent Minnehaha County, South Dakota.
|Depth and location of Ogallala Aquifer|
As may be seen from the map at left, the ecologically sensitive areas under discussion begin adjacent to the tar sands deposits in Alberta, Canada, continue into the high prairie potholes of North Dakota, through the James River watershed, Cottonwood Lake and the Coteau du Missouri, and cross portions of the Ogallala Aquifer.
The underlying glacial pothole formations do not drain easily, are susceptible to fluid transfer between layers, are permeable to contamination from outside sources, and readily exchange surface water with the underlying aquifer.
The Ogallala aquifer has emerged as an important point in the debate. In June, two scientists from Nebraska called for a special study to determine how an oil spill would affect it, and Republican Sen. Mike Johanns of Nebraska has asked the State Department to consider an alternate, more easterly route that would avoid it. Twenty scientists from top research institutions recently signed a letter urging President Obama not to approve the pipeline because of environmental concerns.
The Ogallala matters because it is the most heavily used aquifer in the United States, supplying about 30 percent of the groundwater pumped for irrigation nationwide. The Ogallala aquifer (also known as the High Plains aquifer) covers 175,000 square miles, an area larger than the state of California, and spans eight states: Nebraska, South Dakota, Wyoming, Colorado, Kansas, Oklahoma, Texas and New Mexico.
A valve broke at a pumping station in southern North Dakota along the first leg of TransCanada's Keystone pipeline system. The breach released about 500 barrels of Canadian heavy crude inside the facility and set off a geyser of oil that reached above the treetops in a nearby field. The pipeline had begun transporting bitumen from Alberta's oil sands mines to refineries in Patoka, Illinois, only ten months earlier. (Stacy Feldman and Elizabeth McGowan, InsideClimate News)
Ogallala saturated thickness 1997
The following is an excerpt from “The Dilbit Disaster: Inside the Biggest Oil Spill You've Never Heard of,” a seven-month investigation by InsideClimate News
More than 1.1 million gallons of oil blackened two miles of Talmadge Creek and almost 36 miles of the Kalamazoo River, according to the EPA’s most recent Situation Report (pdf). The EPA’s estimate of the amount of oil that has been collected exceeds Enbridge’s estimate of 843,444 gallons by 15 percent. Enbridge spokeswoman Terri Larson told InsideClimate News that the company stands by that number as accurate.
Oil is still showing up two years later, as the cleanup continues. About 150 families have been permanently relocated and most of the tainted stretch of river between Marshall and Kalamazoo remained closed to the public until June 21.
The accident was triggered by a six-and-a-half-foot tear in Line 6B, a 30-inch carbon steel pipeline operated by Enbridge Energy Partners LP, a U.S. affiliate of Enbridge Inc., Canada's largest transporter of crude oil. With Enbridge's costs already totaling more than $765 million, it is the most expensive oil pipeline spill since the U.S. government began keeping records in 1968.
Needless to say, there can be no mistakes here, not with the lives, drinking water and food supply of millions of Americans at risk. The ecosystems mentioned here are only the largest and best known. From the boreal forests of Alberta, Canada, to the glacial tills of the high plateau, the wetlands of the Dakotas and the Missouri River watershed, and the treasures of Utah, Nebraska and Texas, this is a gift to be treasured and protected at all cost. The wildlife and natural resources of our Great Plains breadbasket are too precious to risk certain ecological damage for a few drops of oil that will enrich only a few and do nothing for American jobs or energy independence.

Even if that were not true, we still could not afford this debacle. The destruction in Alberta and in the American West, and the multiple dangers of pursuing these outdated and destructive projects, are the epitome of arrogance. For only a fraction of the fortune being reaped from our lands, and an investment of true American spirit, renewable energy could be ours in our lifetimes. As the space program was created and successfully implemented in the 1960s through sheer political will and popular support, so can we today create the energy technologies needed to eliminate these destructive and fatal practices.

It is the height of foolishness to even consider moving into the future with the old end game of dirty oil, toxic lands, polluted water and unbreathable air. Worse yet, to see all that is precious to us destroyed by avarice and the worship of Mammon would be a failure of monumental and irreversible proportions.
Christian Churches of God
The Origins of Christmas and Easter
(Edition 3.0 19980117-20071215-20081215-20100430)
Christians have been conditioned to accept that Christmas and Easter are essentially part of the Christian tradition. The fact is that neither is at all Christian and both have their roots in the Mystery cults, the Saturnalia, the worship of the Mother-goddess system and the worship of the Sun god. They are directly contradictory to the Laws of God and His system.
The Origins of Christmas and Easter
Modern so-called Christianity celebrates two major festivals: Christmas and Easter. One falls in December and the other in March-April. The Bible commands no religious festival in December. The March-April festival the Bible commands to be observed is called the Passover. It falls in March-April but is not called Easter and does not fall on the dates determined by the calculations for Easter.
More importantly, there are also other festivals commanded by the Bible that are not being kept. The Sabbath, which is the Fourth Commandment, is not kept but the day of the Sun is kept in its stead. How did this happen? How did it all originate? Is it biblical and is it Christian? The answers are all found in history and the answers are fascinating.
There was a festival celebrated in December in Rome. It is necessary to any understanding of what is happening at Christmas. That festival was termed the Saturnalia. It was the festival of Saturn to whom the inhabitants of Latium, the Latins, attributed agriculture and the arts necessary for civilised life (Smith’s Dictionary of Greek and Roman Antiquities, 2nd ed., London, 1851, p. 1009). It fell towards the end of December and was viewed by the population as a time of absolute relaxation and merriment. During its continuance, the law courts were closed. No public business could be transacted. The schools kept a holiday. To commence a war was impious and to punish a malefactor involved pollution (ibid.). Slaves were relieved of onerous toils and permitted to wear the pileus or badge of freedom. They were granted freedom of speech and were waited on at a special banquet by their masters whose clothes they wore (ibid.). All ranks devoted themselves to feasting and mirth with presents exchanged among friends.
Wax tapers were given by the more humble to their superiors. The crowds thronged the streets, and Smith says many of the customs had a remarkable resemblance to those of Christmas and the Italian carnival (ibid.).
Public gambling was condoned by the authorities, as card-playing later was, and was indulged in even by the most rigid in later times on Christmas Eve. The whole populace threw off the toga, wore the loose gown called the synthesis and walked about with the pileus on their heads. Smith's Dictionary says this practice is reminiscent of the dominoes, peaked caps and other disguises worn at later Christmas festivals by masques and mummers. The cerei, or wax tapers, were probably employed as the moccoli are on the last night of the carnival. Our tradition of Christmas lights probably stems from this practice.
Lastly, for amusement in private society, was the election of a mock king, which is immediately recognised in the ceremony of Twelfth Night (ibid.). We will come across this later.
Sir James George Frazer, in his classic study of magic and religion (The Golden Bough, McMillan, 1976), says this mock king was an allusion back to the idyllic days of the reign of Saturn, and the slaves being given temporary freedom at this time hearkened back to these days when all were free and things were just (ibid., ix, p. 308ff.). Roman soldiers stationed on the Danube in the reign of Maximian and Diocletian are recorded (by Franz Cumont) to have chosen a young and handsome man to resemble Saturn from among them by lot, thirty days before the festival. They dressed him in royal attire to resemble Saturn. He went about in public attended by a retinue of soldiers and indulged his passions no matter how base and shameful. At the end of thirty days, he then cut his own throat on the altar of the god he personated. In the year 303, the lot fell upon the Christian soldier Dasius, but he refused to play the part of the heathen god and to soil his last days by debauchery. He refused to give in to the intimidation of his commanding officer Bassus, and was accordingly beheaded by the soldier John at Durostorum at the fourth hour on Friday 20 November 303, being the twenty-fourth day of the Moon (Frazer, ibid.).
This historical account was confirmed after its publication by Franz Cumont by the discovery in the crypt of the cathedral at Ancona of the white marble sarcophagus in script characteristic of the age of Justinian with the Greek inscription:
Here lies the holy martyr Dasius, brought from Durostorum.
The sarcophagus had been brought there from the church of St Pellegrino in 1848 where it lay under the high altar, and was recorded as being there in 1650 (Frazer, p. 310).
Frazer says this sets a new light on the nature of the Lord of the Saturnalia, the ancient Lord of Misrule, who presided over the winter revels at Rome (ibid., p. 311). Here we see the extent of the traditions and the elements of human sacrifice, which extend into the festivals in both December and at the equinox. Dasius the Christian suffered martyrdom rather than participate in these revels.
As Saturnus was an ancient national god of Latium, the institution of the Saturnalia is lost in remote antiquity (ibid.).
There are three traditions associated with it.
1. It is ascribed to Janus, who, on the sudden disappearance of his benefactor from the abodes of men, erected an altar to him as a deity in the forum and ordained annual sacrifices.
2. According to Varro, it is attributed to the wanderings of the Pelasgi on their first settlement in Italy. Hercules, on his return from Spain, was then said to have abolished the worship and the practice of immolating human sacrifices; and
3. The third tradition attributes the Saturnalia to the followers of Hercules who set it up after his return to Greece.
In the last two we see a commonality. The practice of this agricultural festival thus has certain common elements with the spring festival of Easter, as we will see later. The element of human sacrifice common to all traditions can also be traced to the worship of Moloch as the Moon god Sin, and also of Ishtar (see the paper The Golden Calf (No. 222)). This sacrificial aspect also appeared in the worship of the god Attis (see below).
The erection of temples in historical times is recorded, from the reigns of Tatius and Tarquinius Superbus down to the consulship of A. Sempronius and M. Minucius (497 BCE), or that of T. Larcius the previous year. It appears that at varying stages the ceremonies were neglected or corrupted, and then revived and extended (ibid.).
The Saturnalia originally fell on 14 Kalend January. When the Julian calendar was introduced it was extended to 16 Kalend January, which caused confusion among the more ignorant, and Augustus enacted that three whole days (namely 17, 18 and 19 December) should be hallowed in all time coming (ibid.). Some unknown authority added a fourth day and Caligula added a fifth day, the Juvenalis. This fell into disuse and was later restored by the emperor Claudius.
Strictly speaking, one day only was consecrated to religious observance in the days of the Republic. However, the celebrations lasted over a much longer period. Historically, Livy speaks of the first day of the Saturnalia (Liv., xxx, 36). Cicero writes of the second and third days (ad Att., v, 20; xv, 32). From Novius (Attelanae) the term seven days of the Saturnalia was used and this phrase was also used by Memmius (Macrobius, i, 10) and Martial (xiv, 72; cf. Smith, ibid.). Martial also speaks of the five days enacted by Caligula and Claudius.
These five days have an ancient calendrical significance also.
Smith says that in reality three festivals were involved over this period.
1. The Saturnalia proper commenced on 17 December (16 Kalend January).
2. This was followed by the Opalia (14 Kalend January, or 19 December), which was anciently coincident with the Saturnalia. These two together lasted for five days. This festival was celebrated in honour of Opis, who was allegedly the wife of Saturn. Originally, it was celebrated on the same day, and thus the Mother goddess and lover theme is evident in the origins of this festival. We will meet this theme throughout. The followers of Opis paid their vows sitting, and touched the earth of which she was goddess (Smith, ibid., art. ‘Opalia’, p. 835).
3. The sixth and seventh days were occupied by the Sigillaria, named for the little earthenware figures that were displayed for sale during the period as toys to be given as presents to children.
Thus, under the Julian calendar, the period ran from 17 December until 23 December when the presents were given to the children.
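The Roman dates above use inclusive backward counting: "a.d. N Kalend January" means the Nth day before 1 January, counting 1 January itself. With December's 31 Julian days, the conversion is a one-line formula; this sketch assumes the post-reform Julian December, and the function name is illustrative.

```python
DAYS_IN_JULIAN_DECEMBER = 31

def kalends_jan_to_december_day(n):
    """Convert 'a.d. N Kal. Ian.' to a day of December.

    Romans counted inclusively, so 1 January is day 1 of the count,
    31 December is day 2, and so on backward through December.
    """
    return DAYS_IN_JULIAN_DECEMBER + 2 - n

print(kalends_jan_to_december_day(16))  # 17 -> Saturnalia proper, 17 December
print(kalends_jan_to_december_day(14))  # 19 -> Opalia, 19 December
print(kalends_jan_to_december_day(10))  # 23 -> close of the seven-day period
```

The "+ 2" reflects the inclusive count: both 1 January and the target day are counted, so day N falls N - 1 days, not N days, before the Kalends.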
We now proceed to examine further the theology behind these festivals. The commonality of the traditions of the festivals is too obvious to be ignored.
The Heavenly Virgin as Mother goddess
Frazer notes that:
… the worship of the Great Mother of the Gods and her lover or son was very popular under the Roman Empire (v, pp. 298ff.).
From the inscriptions we know that the two (as Mother and lover, or Mother and son) received divine honours not only in Italy but also in all the provinces, particularly in Africa, Spain, Portugal, France, Germany and Bulgaria (ibid.). Their worship survived the establishment of Christianity by Constantine.
Thus, the symbolism of the Heavenly Virgin and the infant child paraded on a yearly basis are not of Christian origin. They stem from the Mother-goddess religion, which is very ancient. We will see more of this later.
Frazer notes Symmachus as recording the festival of the Great Mother. In the days of Augustine her effeminate priests still paraded the streets and squares of Carthage and, like the mendicant friars of the Middle Ages, begged alms from the passers-by (ibid.; cf. S. Dill, Roman Society in the Last Century of the Western Empire, London, 1899, p. 16; and Augustine, City of God, vii, 26).
The Greeks, on the other hand, rejected the more barbarous rites in favour of those similar but gentler rites of the worship of Adonis (ibid.).
Frazer says that the same features which shocked and repelled the Greeks were what attracted the Romans and the barbarians of the west (ibid., pp. 298-299).
The ecstatic frenzies which were mistaken for divine inspiration, the mangling of the body and the theory of a new birth and the remission of sin through the shedding of blood, have all their origin in savagery (ibid.).
Frazer holds that their true character was often disguised under a decent veil of allegory and philosophical interpretation, which drew the more cultivated of them to things that might otherwise have filled them with horror and disgust. Modern Pentecostalism draws its inspiration from the ideas behind these religious festivals.
The religion of the Great Mother was only one of a multitude of similar oriental faiths that spread across the Roman Empire, imposing themselves on the Europeans. According to Frazer, this gradually undermined the whole fabric of ancient civilisation.
The entire Greek and Roman society was based on the subordination of the individual to the state, and one's whole life was dedicated to the perpetuation of the society. If one shrank from the supreme sacrifice, it was assumed one acted from base motives.
Oriental religion taught the reverse of this doctrine. It inculcated the communion of the “soul” with God and its eternal salvation as the only objects of existence, in comparison with which the prosperity and even the existence of the state were insignificant.
The inevitable consequence of this selfish and immoral doctrine was to withdraw the individual more and more from public service and to breed in the individual a contempt for the present life.
The misapplication of these Mystery doctrines or oriental religions and their application in Gnosticism, when placed on the biblical narrative of the City of God as a spiritual edifice, was to have disastrous consequences for the ordering of society. The effect was to loosen the ties of the family and the state, and to generally disintegrate the political body of the state. The society tended to relapse into its individual elements and thereby into barbarism. Civilisation is only possible through the active cooperation of the individual and the subordination of the interests of the individual to that of the common good (ibid., p. 301).
Frazer holds that this obsession lasted for a thousand years. He held that it changed only at the end of the Middle Ages, when the revival of Roman law, Aristotelian philosophy, and ancient art and literature restored saner and more manly views of the world. The fact of the matter is that if the true biblical model had been implemented, no such problem would have existed. The problem arose from the Oriental Mysteries combined with the Gnostic system, which is more prevalent today. Frazer held that the tide of this oriental invasion had turned at last and was ebbing still. He was wrong in this regard, although he also allows that bad government and a ruinous fiscal system are two major causes which strike down civilisations, as they did the Turkish Empire in his day.
We will look at the effects of the Great-Mother religion, and the Mithras system and its applications under Gnostic influence in Christianity to see that it is still there as strong as ever in more subtle forms. Yet, much of its traditional trappings are the same.
One of the gods who competed for the worship of the West was the Persian deity Mithra.
The immense popularity of this cult should not be underestimated. The monuments dedicated to this system are scattered all over the Roman Empire and right through Europe (a map of the extent of the monuments is found in David Ulansey, The Origins of the Mithraic Mysteries, Oxford, New York, 1989, p. 5).
This was a secret cult whose mysteries were never written down, and so little is known exactly of their ritual except what we can deduce from their shrines and places of worship. However, we do know that they had two forms of worship. The private and secret form was Mithraism. The public form, however, was Elagabalism and we know more of its system from this. Both were based on Sun worship.
Much of its religion was similar to the religion of the Mother of the Gods and also to what was understood to have been later Christianity (cf. Frazer, ibid., p. 302). The similarity struck the Christian doctors themselves, and it was explained to them as the work of the devil by counterfeiting a version of the true faith (ibid.). Tertullian explained how the fasts of Isis and Cybele were similar to the fasts of Christianity (De jejunio 16).
Justin Martyr explains how the death, resurrection and ascension of Dionysus, the virgin birth of Perseus, and Bellerophon mounted on Pegasus were parodies of the true Christian stories written by the demons in advance, even down to the story of Christ riding on an ass which was contained in the Psalms as prophecy (cf. Apol., i, 54).
The conflict between Mithraism and Christianity was so great that for a time the outcome hung in the balance. The fact of the matter is that the result was decided by the adoption of the Mithraic practices and giving them Christian names. The most important single relic of this pagan syncretism is that of Christmas, which Frazer says the Church seems to have borrowed directly from its heathen rival (p. 303).
The Roman army became devotees of Mithras and it is obvious from the records regarding Dasius that the Saturnalia was held in conjunction with the worship of Mithras. Thus, the Saturnalia simply preceded the solstice festival and became a part of it.
Christmas and the Heavenly Virgin
In the Julian calendar, 25 December was reckoned as the winter solstice (Frazer, ibid., p. 303; cf. Pliny, Natural History, xviii, p. 221). It was regarded as the nativity of the Sun as its days began to lengthen and its power increased from that turning point of the year.
Frazer holds that the ritual of the nativity as it was celebrated in Syria and Egypt was remarkable. The celebrants retired into certain inner shrines from which at midnight they issued a loud cry, “The Virgin has brought forth! The Light is waxing!” (ibid.; cf. Cosmas Hierosolymitanus, see fn. 3 to p. 303).
The Egyptians even represented the newborn Sun by an image of an infant which, on his birthday (the winter solstice), they brought forth and exhibited to his worshippers (ibid., cf. Macrobius, Saturnalia, i, 18, 10).
No doubt the Virgin who thus conceived and bore a son on the twenty-fifth of December was the great Oriental goddess whom the Semites called the Heavenly Virgin or simply the Heavenly Goddess; in Semitic lands she was a form of Astarte (ibid., noting Franz Cumont s.v. Caelestis in Pauly-Wissowa’s Real-Encyclopädie der classischen Altertumswissenschaft, v, 1, 1247, sqq).
This is the origin of the doctrine of the perpetual virginity of the mother of Jesus Christ. It has no basis in the Bible or in fact. Christ’s mother was not named Mary and the Bible is clear that she bore other children. We will return to this myth later.
The legend of the three kings
25 December was an ancient Sun-worshipping festival, and the three kings associated with it do not appear to relate to the wise men from the East in the biblical narrative, but to a perhaps older tradition relating to the so-called twelve days of Christmas. The twelfth-day sequence is associated with the three kings in France, Spain, Belgium, Germany and Austria. Their names are Caspar, Melchior and Balthasar. In Germany and Austria it is known as the Day of the Three Kings (Dreikönigstag) and in France as the Festival of the Kings (Fête des Rois). In some areas the kings go around represented by mummers who sing songs and collect from the householders. It is given a Christian basis, but there is no basis in the Bible for assuming there were three people (other than the three types of gifts) or that they were kings. They are recorded as Magi or wise men. This seems to have another basis (cf. Frazer, ix, p. 329). From the customs in Franche-Comté and also the Vosges Mountains, Melchior is supposed to have been a black king, and the face of the boy playing him is blackened (ibid., p. 330). These three are invoked for healing with rituals involving three nails placed in the earth. This smacks of the triune systems of the Celts in France long before the Christian system.
In Czech and German Bohemia, the rituals of fumigation and spices are found being used on the twelfth day. The initials C.M.B. (Caspar, Melchior and Balthasar), together with three crosses, are marked on doors after fumigation to guard against evil influences and infectious diseases. The kings were invoked under the words “pray for us now and at the hour of our death”.
The Lord of Misrule and the King of Beans
In this tradition also we see the Lord of Misrule emerge among the traditions. The full extent of his time was from All Hallows’ Eve (31 October, the eve of All Saints’ Day) to Candlemas (2 February). However, it was generally confined to the twelve days at Christmas, termed the twelve nights. The Lord of Misrule was elected from the Court of the Sovereign in England through every office of the land. This Lord of Misrule was also elected at Merton College, Oxford as King of the Beans (cf. Frazer, ix, p. 332).
The Festival of Fools
In France, the counterparts of the English Lords of Misrule masqueraded as mock clergy, bishops, archbishops, popes or abbots. This was known as the Festival of Fools and was held either on Christmas Day, St Stephen’s Day (26 December), New Year’s Day, or Twelfth Day depending on place.
At these times there were parodies of the most solemn rites of the church. Priests, wearing masks and sometimes dressed as women, danced in the choir and sang obscene chants; laymen disguised as monks mingled with the clergy; and the altar was turned into a tavern where the deacons and sub-deacons ate sausage and black pudding or played dice and cards under the nose of the celebrant. The censers were filled with bits of old shoes, filling the church with a foul stench.
In some areas of France, for example at Autun, an ass was led into the church where a parody of the Mass was said over it. A regular Latin liturgy was said over it and the celebrant priest imitated the braying of an ass (Frazer, pp. 334-335).
At Beauvais, on 14 January, a young woman with a child in her arms rode on the back of an ass allegedly in imitation of the flight into Egypt. She was led in triumph from the cathedral to the parish church of St Stephen, where she and the ass were placed on the left side of the altar. A long Mass was said, consisting of scraps borrowed indiscriminately from many church services throughout the year. The singers quenched their thirst in the intervals as did the congregation, and the ass was fed and watered. Afterwards, the ass was brought from the chancel into the nave where the entire congregation, clergy and laity danced round it braying like asses. After vespers, a large procession proceeded to a great theatre opposite the church where they watched indecent farces.
All of this is reminiscent of the rites in North Africa of the effeminate priests of the Mother-goddess system and the Saturnalia. Frazer says there is no direct evidence that one is derived from the other but the Saturnalia, with the licence that characterised it and the temporary reign of a mock king, makes it appear so (ix, p. 339). These traditions were kept up until the nineteenth century when Victorian England and Napoleonic France, following on from the Revolution, did away with them in some fashion. They were replaced, as we will see, with another form of the same errors. Much of the modern insanity derives from the USA and its commercialism.
The twelve days of Christmas, cakes, beans and money
The King of the Bean is also associated with the Festival of Fools in France and there is a more ancient significance to it. The Festival of Fools goes on to the Twelfth Day of Christmas (Twelfth Night is the night of 6 January). The eve, which is 5 January and thus the Epiphany of 6 January, marks the end of the two periods of the pre-Christmas festivities, which are associated with the Saturnalia and the Sun system and which commence from the Solstice on 25 December and continue until 5 January.
In some areas the king has a queen consort and both have an agricultural significance and seem to be related to the rites also of the Saturnalia.
The king and queen are elected by lot on Twelfth Night (i.e. Epiphany, 6 January) or on the eve of that festival on 5 January. The custom was common in France, Belgium, Germany and England, and is still kept in some parts of France. The Court acknowledged the practice and each family elected its own king. On the eve of the festival, a great cake was baked with a bean in it. It was divided into portions: one for each member of the family; one for God; one for the Heavenly Virgin; and, sometimes, one for the poor. The person getting the portion with the bean was proclaimed King of the Bean (Frazer, ix, p. 313). Sometimes a second bean was placed in the cake for the election of the queen. At Blankenheim, near Neuerburg in the Eifel, a black and a white bean were baked in the cake – the black for the king and the white for the queen. In Franche-Comté they used to put as many white haricot beans in a hat as there were people present. Two coloured beans were included and drawn at random by a child. Those receiving the coloured beans were king and queen.
In England, the practice was to put a bean in the hat for the king and a pea for the queen. However, in some places only the king was elected by lot, and he chose his queen himself. Sometimes a coin was substituted for the bean in the cake. This custom was followed in southern Germany as early as the first half of the sixteenth century. It is, however, considered by Frazer to be a variation on the earlier bean custom. It shows reasonably clearly that the placing of coins in Christmas puddings stems from this custom of an earlier time.
In France, the young child present was placed under the table. He was addressed as Phoebe or Tebe and answered in Latin, Domine. The pieces of the cake were distributed according to the child’s direction. The etymology has been attributed by some scholars to the oracle of Apollo. Frazer thinks it may be derived simply from the word for the bean (Lat. faba, Fr. fève).
Every time the king or queen drank, the company cried: “The king (or queen) drinks!”, and all did likewise. Anyone failing to do so had their face blackened with burnt cork, soot or the lees of wine. In some parts of the Ardennes, the practice was to fasten great horns of paper in the offender’s hair and put a huge pair of spectacles on his nose. These were worn until the end of the festival. This is probably the origin of the Dunce’s Cap.
This is still kept in northern France where a miniature porcelain figure is substituted for the bean and drawn by a child. If it is drawn by a boy he chooses his queen; if drawn by a girl she chooses her king.
These kings and queens placed white crosses on the rafters of houses to ban hobgoblins, witches and bugs. There was, however, a more serious significance to some of the office. In Lorraine, the height of the hemp crop was said to be determined from the height of the king and queen. If the king were taller, the male hemp would be higher than the female, and vice versa. In the Vosges Mountains on the border of Franche-Comté, the practice of dancing on the roof was observed to make the hemp grow tall.
In many areas the beans used in the cake were taken to be blessed by the clergy, and divination was employed on Twelfth Night to determine the month of the year in which the price of wheat would be dearest.
The practice of lighting bonfires is still carried out in some areas and, at the time Frazer wrote, it was still done in the Montagne du Doubs on the eve of Twelfth Night (ix, p. 316). This was seemingly to ensure the fertility of the crops. There seems to be a definite, if distant, relationship to the Yule festivals of the pagans.
While it burned the people danced around it singing: “Good year come back! Bread and wine come back!”
The youth of Pontarlier carry torches over the sowed lands shouting: “Couaille, couaille, blanconnie”, the meaning of which is lost in antiquity.
In the Bocage of Normandy on the same day, it is the fruit trees that are fired. These twinkling lights are everywhere as the peasants celebrate the Ceremony of the Moles and Field-mice (Taupes et Mulots). Villages compete in the blaze, and woods and hedges are scoured for materials. They scour the fields threatening the moles and field mice and thus they believe the crop will be larger that autumn.
The bonfires on the eve of Epiphany were also observed in the Ardennes. It is useful to look at the customs here in regard to festivals of the goddess Hecate in Rome and Europe generally and the fields and the crosses involved there (cf. the paper The Cross: Its Origin and Significance (No. 39)).
Similar fire customs are experienced in the UK in Gloucester and in Hertfordshire, with twelve fires at the end of twelve lands (Gloucester) designed to prevent smut in wheat. There is a thirteenth larger fire lit in both cases – the latter being on a hill (Frazer, ix, p. 318).
This custom of making twelve fires of straw and drinking toasts of cider or ale is called Wassailing and is ancient. In some areas oxen are also toasted in this strange ritual: a cake is placed on the horns of the lead ox and then thrown off by tickling the ox.
The explanation of the practice of lighting fires, and especially of the largest, is found by comparing the practice not only in the UK and France but also in Macedonia. The large fires are to burn the witches and malefactors that roam the fields at night. They are called by the Macedonians karkantzari or skatzanzari. They are overcome by binding with straw rope, and resume their human shape during the day. Over the twelve days of Christmas, they must be overcome by strenuous effort. Some places start on Christmas Eve; in others the practice continues, or is done, on Twelfth Night.
On Christmas Eve, some people burn the karkantzari by burning holm-oak faggots and throwing them out in the streets at early dawn. Here again, we have reference to the Yule festivals of the Druids. The later oak faggots were remnants of the earlier log burning.
In Ireland, they set up sheaves of oats. This was done in Roscommon where they held that “Twelfth Night, which is Old Christmas Day, is greater than Christmas Day itself” (Frazer, ix, p. 321).
They set up thirteen candles in the sheaf, twelve smaller and one greater in the centre and attribute these to the Apostles at the Last Supper; but these are at Christmas and not Passover. Thirteen candles of rushlight named after each member of the family (or relations to make up the number) are placed in cakes of cow dung and burned to determine the length of life of each person (ix, p. 322).
The origin of candles
The use of candles goes back to the ancient Aryan religion, which used them at the Yule ceremony to ward off the gods of thunder, storm and tempest (Frazer, x, p. 264 (n. 4); and also p. 265). They were lit and tied to the sacred oak (ibid., ii, 327).
In some areas (Ruthenia, and Europe generally) they were used by thieves and burglars to cause sleep (Frazer, i, pp. 148-149), and in this case they were made of human tallow (ibid., i, p. 236). Parts of the human anatomy were also used as candles or human bones were filled with tallow made from the fat of hanged men (ibid., p. 149). Sometimes, candles were made from the fingers of newborn or, preferably as they saw it, unborn children. As late as the seventeenth century in Europe, robbers used to murder pregnant women to extract such candles from their wombs (ibid.).
Candles were burnt to ward off witches. They entered Christianity through the Catholic or Orthodox Church (cf. Frazer, ibid., i, p. 13).
Among the Germans, the ancient Aryan practice continued of lighting new fire by means of a bonfire at Easter and sending the sticks to each home to start the fires to ward off the gods of thunder, storm and tempest. The practice was introduced to Catholicism as the Easter candle. This single giant candle was lit at Easter on Saturday night before the Easter Sunday and then all the candles of the church were lit from it. This continued for the year until the following Easter, when the single Easter candle was again lit.
The practice of lighting the candle appears to take place on the night before the day of the Sun, as part of the ancient Sun-worshipping system.
In the Temple, incense was burned. Candles were not burned other than as the Menorah, which was made up of oil lamps and not candles.
This practice of burning lights as candles or tapers was similar to that of the Saturnalia. We know from the Book of Baruch (6:19ff.) that the practice of lighting candles before idols overlaid with precious metals was Babylonian. The practice of lighting multiple candles probably entered Judaism through the Babylonian system. We will deal with it in more detail in the section on Easter.
The Menorah was seven-branched and ordered by God for the Temple. In Solomon's Temple, there were ten lampstands with seven oil lamps per stand, representing the Council of the Elohim, of which the Sanhedrin was a copy. The nine branches of the later menorah in Judaism are given mystical symbolism. There is no biblical authority for them.
The weather of the twelve days of Christmas was said to determine the weather of the forthcoming year.
It is based on what appears to be a form of ancient zodiacal division of dividing the twelve days into four quadrants of three days per quadrant. This was done in the British Isles and it extended through Germany and German Austria into Western Europe.
From the weather on each of the twelve days it was possible to divine the weather of each successive month of the year. It was held to be accurate and apply also to the Twelfth Day itself where the weather on each hour would determine the weather for the corresponding month. The days were thus a system of divination for the year ahead in its agricultural aspects.
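As a toy illustration only (this mapping is a sketch of my own, not drawn from any historical source), the day-to-month correspondence described above can be written out directly:

```python
# A toy sketch (an illustration of my own, not a historical algorithm)
# of the divination scheme described above: each of the twelve days
# from Christmas onward was taken to stand for one month of the
# coming year.

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def month_for_day(day_index):
    """day_index: 0 for the first of the twelve days, through 11."""
    return MONTHS[day_index]

# The weather observed on the third of the twelve days was taken
# to foretell the weather of March.
print(month_for_day(2))  # March
```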
In Swabia, the days were called the twelve lot days. More precise divination was achieved by making twelve circles, each divided into four quadrants, with each quadrant representing a quarter of the month. These were drawn on paper and hung over the door. As each of the twelve days passed from Christmas to Epiphany, the quadrant for each quarter of the day was marked according to the weather, and the weather for the corresponding quarter of the month was thus determined.
In Switzerland, Germany, and Austria, it was done somewhat differently. On Christmas, New Year’s Day or on another of the twelve days, one sliced an onion in two, peeled off twelve coats, and sprinkled a pinch of salt in each of them. From the moisture left in them the next morning, it was considered possible to determine the weather for the next twelve months of the year.
This was not confined to the Germanic tribes or the Teutons – it was found also in France and among the Celts of Brittany and in Scotland.
In the Bocage of Normandy, the temperature was divined for the year from the temperature of the twelve days. This was considered more accurate than the predictions of the Double Liégeois almanac. In Cornouaille, Brittany, the twelve days were determined from Christmas to the Epiphany – being the last six days of December and the first six of January. In other parts of Brittany and in Scotland the twelve days were determined from 1 January. They were known in Brittany as the gour-deziou or male days, said to mean properly the additional or supplementary days. This concept takes us back to another ancient concept of the calendar and the five excess days of the year.
From their almanac, the Scots determine the weather of the forthcoming year by that of the twelve days of Christmas. Thus the weather in January is determined by the weather of 31 December or 1 January (depending on place), and so on, as an infallible rule.
The Celts of Scotland, as elsewhere in France, are divided as to the beginning of the days: either at Christmas, on 1 January, or on 31 December. Frazer considers this an important indicator of the origin of the beliefs (ibid., ix, p. 24).
This concept is very ancient and is found among the Aryans of the Vedic age in India. This predates Christ by many centuries.
They, too, appear to have invested days in midwinter with a sacred character as a time when the three Ribhus or genii of the seasons rested from their labours in the home of the sun-god, and these twelve rest-days they called ‘an image or copy of the year’ (Frazer, ix, pp. 324-325).
Frazer follows A. Weber in this explanation of the common views of the East and West (cf. fn. 3 to ix, p. 325).
The system was thus an ancient system of the Aryans, who conquered India from the Steppes with the use of iron-age implements and harnessed horses about 1000 BCE.
Their relatives took the same festivals west into Europe. These movements are part of the dispersion of the ancient Mysteries of the Babylonian system that found its way into the nomadic Shamans. This religion was Animism.
Ancient calendar systems
The division of the twelve days came from the ancient Aryan calendar, which was divided according to the phases of the Moon and not that of the Sun. The various Aryan languages have the name for month as the name for moon.
The months alternate between twenty-nine and thirty days, giving fifty-nine days in every two months. Six such pairs (354 days) fall short of the actual solar year by almost twelve days (eleven and one quarter days).
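The arithmetic here can be checked directly. The following sketch (my own illustration; the figures are those given above) tallies the lunar year and its shortfall against the mean solar year:

```python
# Checking the lunar-calendar arithmetic given above: twelve months
# alternating between twenty-nine and thirty days total 354 days,
# about eleven and a quarter days short of a mean solar year.

lunar_pair = 29 + 30          # two alternating lunar months
lunar_year = lunar_pair * 6   # twelve lunar months = 354 days
solar_year = 365.25           # mean solar year in days

shortfall = solar_year - lunar_year
print(lunar_year)   # 354
print(shortfall)    # 11.25
```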
This appears to have been an intercalation to adjust the lunar to the solar year, which was a perversion of the true intercalation system adopted by the Hebrews, the Assyro-Babylonians and the Greco-Romans. It thus seems to have been a perversion of Sun-worship from the earliest days of the movements of the Middle Eastern tribes. The Celtic Hittites, being the first to move into Europe, took the system with them and its implementation corrupted subsequent colonisation from the Assyrian relocations and the movement of the Parthian and Gothic horde.
We now know much more about the calendar system in use in Europe and the midwinter solstice in use in Europe and the UK. The megalithic stone circles were designed to determine the solstice exactly on midwinter’s day.
The twelve days were distinct from the five days, and they appear to have been variously added to or combined in different areas.
It appears that the belief in five extra days of the year, making up the 365 days over and above the 360 days considered to be the normal year, was very ancient. It underlay a system of intercalary practice in which, from the Mayas of Yucatan to the pyramids of Egypt, people regarded those days as useless for any religious or civil purpose and did nothing on them. The texts of the pyramids expressly mention the five days over and above the year comprised of twelve months of thirty days (ibid., p. 340). The Aztecs and the American system, however, had eighteen months of twenty days and so did not follow any lunar system; because of their mathematical place in the divisions of the calendar, the five days were likewise considered useless, a time of no work and a general malaise of the society. This had no relationship to the Hebrew prophetic year of twelve thirty-day months, which is a symbolic idealisation of the true intercalary, nineteen-year cycle. This religious symbolism and structure is detailed in the Bible.
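As a simple check of the figures above (a sketch of my own, not from the source), both schemes cover 360 days in their regular months and leave the same five-day remainder:

```python
# Tallying the calendar schemes mentioned above: the Egyptian twelve
# months of thirty days and the Aztec eighteen months of twenty days
# both cover 360 days, leaving five "extra" days in a 365-day year.

egyptian_months = 12 * 30   # 360 days in the regular months
aztec_months = 18 * 20      # 360 days, with no lunar basis

for base in (egyptian_months, aztec_months):
    print(base, 365 - base)   # 360 5 in both cases
```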
The five-day sequence related to the calendar is in use in solar systems or Sun-worshipping systems. The twelve days were an adjustment of the lunar to the solar which one would expect to find in the more ancient Moon-Sun-Morning Star systems that were common at the time of the Exodus (see the paper The Golden Calf (No. 222)).
The Sun god
25 December was also associated with Mithras, as he was Sun god.
The Catholic liturgist, Mario Righetti (in addition to Duchesne and also Cullman), held that:
After the peace, the Church of Rome, to facilitate the acceptance of the faith by the pagan masses, found it convenient (sic) to institute the 25th of December as the feast of the temporal birth of Christ, to divert them from the pagan feast, celebrated on the same day in honour of the “Invincible Sun” Mithras, the conqueror of darkness (fn. 74, II, p. 67; quoted also in Bacchiocchi, From Sabbath to Sunday, Pontifical Gregorian University Press, Rome, 1977, p. 260).
Thus, Mithras was the god of the festival of the solstice on 25th December, which followed immediately on from the Saturnalia. With this deity, we see Sunday-worship emerge in Rome.
The dedication to Mithra was as Soli invicto Mithrae or the Invincible Sun – the Unconquered Sun as Frazer terms it (p. 304). It was also related to him as Sol Invictus Elagabal in the public form of the religion.
The term Father was a rank held by the priests of Mithra. The term is forbidden to Christians (Mat. 23:9). It entered Christianity with the Mystery cults.
What actually occurred was that the original calendars of the Roman system began the week on Saturday and were in use in the first years of the Augustan era (27 BCE to 14 CE), as we know following the discovery of the calendar of Nola (cf. A. Degrassi, fn. 26, p. 104; cf. Bacchiocchi, ibid., p. 244). This structure appears to be related to the system of Mithras (as we know from the Epicurean Celsus, ca. 140-180 CE), where the Sun occupied the highest place on the ladder of ascent through the seven gates of the Mithraic ladder from Saturn to the Sun. This is classic Shamanism and is practiced by animistic religion throughout the world. In Origen’s Contra Celsum, 6,21-22, we see that Celsus lists the planets in the reverse order, enabling the Sun to occupy the significant seventh position.
We later see this system emerge as the eight-day symbolism in the Roman system for the week beginning on Saturn’s day or Saturday and ending with the day of the Sun or Sunday, which was always a holiday. The planetary week was also not in the accepted order of the planets and people could not account for the difference (cf. Plutarch, Complete Works, III, p. 230; cf. Bacchiocchi, ibid., p. 246).
The differences can be seen also by comparison with the Ziggurat of the Babylonian system and the seven levels of ascent to the Moon god there (cf. the paper The Golden Calf (No. 222)).
The statement of Tertullian (Ad Nationes, 1, 13, ANF, III, p. 123) attempts to refute the charge of Sun-worship. Tertullian admits that, by then, Christians had commenced praying towards the East and made Sunday a day of festivity. He directly places the responsibility for Sunday-worship over the Sabbath on the Sun-worshipping cults, where he says they selected its day in preference to the previous day of the week (i.e. the Sabbath or Saturday) (cf. Bacchiocchi, pp. 248-249). However, by then, they were both worshipping on that day as well as the Christian Sabbath.
Prayer to the Sun in the East
Apparently, prayer to the East originated in prayer towards Jerusalem, which Irenaeus mentions as being the custom of the Ebionites (Adv. Haer., 1,26, ANF, I, p. 352). By the time of Clement of Alexandria and Origen, we see the orientation to be towards the source of light that dispels the darkness of the night, although Clement still mentions the ancient temples (Stromateis, 7,7,43, GCS, 3, 32; cf. Bacchiocchi, p. 255).
Bacchiocchi makes it clear that the association between the Christian Sunday and the pagan veneration of the day of the Sun is not explicit before the time of Eusebius (ca. 260-340 CE). Although previous writers associated him as true light and sun of justice, no deliberate attempt prior to Eusebius was made to justify Sunday observance by means of the symbology of the day of the Sun (ibid., p. 261).
The process thus entered Christianity by means of the earlier December festival, which was originally derived from the worship of Saturn and Opis in the Saturnalia, and its association with the Heavenly Virgin or Mother goddess and her infant child.
The Gospels say nothing as to the day of Christ’s birth, and the early Church did not celebrate it.
The custom of celebrating Christ’s birth began in Egypt, being derived from the Mother-goddess cult there, and the Christians there celebrated it on 6 January. By the fourth century it had become generally established in the East (Frazer, v, p. 304). The Western church never recognised 6 January as the true date and, in time, its decision was accepted by the Eastern church. At Antioch this change was not introduced until about 375 CE (Frazer, ibid.).
The origin of the practice is plainly recorded by the Syrian Christians, as we see from Frazer quoting also Credner and Momsen and Usener (v, pp. 304-305).
The reason why the fathers transferred the celebration of the sixth of January to the twenty-fifth of December was this. It was a custom of the heathen to celebrate on the same twenty-fifth of December the birthday of the Sun, at which they kindled lights in token of festivity. In these solemnities and festivities the Christians also took part. Accordingly when the doctors of the Church perceived that the Christians had a leaning to this festival, they took counsel and resolved that the true Nativity should be solemnized on that day and the festival of the Epiphany on the sixth of January. Accordingly, along with this custom, the practice has prevailed of kindling fires till the sixth.
Thus, the Saturnalia led up to the solstice when presents were given to children from 23 December, or now Christmas Eve on 24 December, in the Gregorian calendar. The rites of the solstice then took over from the original Saturnalia, but the period then became lengthened from three to seven days to which was added the twelve days.
When we count five days from 25 December we come to 31 December, from which some of the Celts and Germans begin the count. The addition of St Stephen’s Day (or Boxing Day) brings a five-day period counted from 27 December into line with 1 January.
The pagan origin of Christmas is also evident in Augustine, when he exhorts his brethren not to celebrate this solemn day like the heathen on account of the Sun but on account of him who made the Sun (Augustine, Serm., cxc, 1; in Migne, Patrologia Latina, xxxviii, 1007). Leo, called ‘the Great’, likewise rebuked the pestilent belief that Christmas was solemnised because of the birth of the new Sun, and not because of the nativity of Christ (Frazer, ibid.; cf. Leo the Great, Serm., xxii (al xxi) 6 and Migne, liv, 198).
However, by then it was a hopeless cause. The entire system was endemic to Christianity and the Mother-goddess cult was entrenched.
Thus it appears that the Christian Church chose to celebrate the birthday of its Founder on the twenty-fifth of December in order to transfer the devotion of the heathen from the Sun to him who was called the Sun of Righteousness (p. 305).
There was a theory put forward by one Mgr Duchesne that 25 December arose from conformity with the equinox on 25 March, this being the day on which Christ was killed and also on which his mother conceived. This digs an even deeper pit, because 25 March was indeed initially adopted in Africa and elsewhere as the date of the crucifixion. However, in the only year in which 14 Nisan could have fallen on 25 March, that date was a Sunday, which is destructive to the theory. Moreover, 25 March is associated with the festival of the god Attis, as Frazer notes in his footnote to page 305. We will examine this in the sections below.
The Goat and the Bear
On the twelve days we also see mummers playing the part of a goat and a bear.
In the Highlands of Scotland and on St Kilda, until at least the latter half of the eighteenth century, a cowherd would wrap himself in a skin on New Year’s Eve. The young people would meet and beat the hide with staves as a drum, proceeding from house to house, where the one covered with the hide would run three times round deiseil, i.e. in the direction in which the Sun revolves. He was pursued by the crowd crying in Gaelic:
… let us raise the noise louder and louder let us beat the hide (Frazer, viii, p. 323).
They go from house to house repeating verses. On entry, they call down blessings on the house and its cattle, stones and timber, its produce and health. A part of the hide was then burnt and applied to the noses of every person and domestic animal in order to protect the inhabitants against disease and misfortune for the coming year.
This last day of the year is called Hogmanay.
Each of the party, after the Rann Calluin or Christmas Rhyme had been repeated, entered in return and had refreshment. What was generally burnt in lieu of the strip of hide was a Casein-uchd, made of the breast strip of a sheep (or deer or goat) wrapped around the point of a shinty stick. The shinty stick was singed in the fire and put three times around the family and to the nose of all. No drink was taken until this ceremony had been completed. The purpose was to protect the household against witchcraft and disease.
On the Isle of Man, the feather of the wren was used (viii, p. 324).
The custom appears to be related to an older custom involving human sacrifice. Frazer notes that the Khonds slew a human victim as a divinity, took him from house to house, and allowed everyone to take a relic of his sacred person (cf. i, pp. 246ff.). The cowhide no doubt substituted for this victim. The communion substituted for the body and blood of the god.
While these customs may not have connection with agriculture, the similar customs of Plough Monday certainly do, and the processions we see in Europe of men clad as animals probably identify with the corn spirit. They may have association with the Gilyak procession of the bear, and the Indian procession of the snake (ibid.).
Often in these processions (as in the last days of the carnival in Bohemia) a man was swathed from head to foot in pease-straw and wrapped around in straw ropes (Frazer, ibid.). This harkens back to the wicker man in ancient Britain.
These festivals of agriculture were associated with both the midwinter solstice and the spring equinox – both heralding the return of growth, warmth and life to nature through the power of the Sun and summer.
The Bohemian man goes by the name of the Shrovetide or carnival bear (Fastnachtsbär).
After he has danced at every house with the girls and maids and the housewife herself, they all retire to the ale-house.
For at Shrovetide, but especially at Shrove Tuesday, every one must dance, if the flax, the vegetables and the corn are to thrive (Frazer, viii, p. 326).
The straw of the bear is put in the nests of the hens and geese. The bear represents the spirit of fertility. The purpose of the dancing is to make fertile both animal and vegetable in all aspects.
In parts of Bohemia, this person is not called a bear but an oats-goat.
In Prussian Lithuania on Twelfth day a man is wrapped in pease-straw to represent the bear and another in oats-straw to represent the goat.
In Marburg in Steiermark, men appear as both a wolf and a bear (Frazer, ibid.).
The man who gave the last stroke at threshing is called the wolf. He keeps the name wolf until Christmas, when he is wrapped in a goat’s skin and led from house to house as a pease-bear at the end of a rope. His dress as a goat marks him out and appears to associate the symbols of goat and bear and wolf in this ancient ritual of the corn-spirit.
In Scandinavia, the appearance of the corn-spirit as a goat is common (ibid.). In Sweden, a man with horns on his head was led about and personated the Yule-goat. In parts of Sweden they make a pretence of slaughtering the goat, which comes to life again (ibid., p. 327). The two men who slaughter him sing verses referring to the mantles of varying colours – red, blue, white and yellow – which they laid on him.
After supper on Christmas evening, the people dance the “angel dance” to ensure a good crop. Yule straw (either of wheat or rye) is made into the likeness of a goat and thrown among the dancers with the cry of, “Catch the Yule-goat!” In Dalarne it is called the Yule-ram.
In Denmark and Sweden, it is customary to bake cakes of fine meal at Christmas in the shape of goats, rams and boars (Frazer, ibid., p. 328). They are often made out of the last sheaf at harvest and kept until sowing-time, when they are partly mixed with the seed corn and partly eaten by the people and the plough-oxen in the hope of securing a good harvest. The commonality of the customs from the British Isles to Europe and Scandinavia and the East establishes beyond doubt the ancient practice as appeasement of the corn-spirit and the ancient gods. The appearance as a wether and a boar is also ancient and widespread.
The Straw-bear ceremony, performed as it had been for centuries on the day after Plough Monday, was witnessed at Whittlesey, Cambridgeshire, by Professor Moore Smith of Sheffield University in January 1909 (see letter of 13 January 1909; cf. Frazer, viii, p. 329).
Plough Monday is the first Monday of January after Twelfth day. It is beyond dispute that we are dealing with an ancient agricultural festival directed at appeasement of the ancient agricultural gods in the sequence of the midwinter festivals, which run from the Saturnalia to the solstice high day and then on to the twelve days of so-called Christmas to the plough-festival of Plough Monday and Shrove Tuesday.
It appears to have been anciently associated with human sacrifice – perhaps in each of the three aspects or perhaps as single festivals.
Plough Monday in England was normally associated with a team of human plough-bullocks, one of whom was disguised as an old crone called Bessy. They went about leaping and dancing in high fashion, presumably to make the corn grow as high as they leapt. This was similar to the practice of the Straw-bears or Yule-goats on the continent and elsewhere in the UK.
The same practices are found in Thrace and Bulgaria on the same day, i.e. the Monday of the last week of Carnival. One dancer (the Kuker) is a man clad in goatskin. Another dancer (the Kukerica), disguised in petticoats as the old woman or baba, has “her” face blackened.
Bears are represented by dogs that are wrapped in bearskins. A mock court is set up of a king and judge and other officials. The plays of the Kuker and Kukerica are wanton and lascivious.
Towards evening, two people are yoked to a plough and the Kuker ploughs a few furrows and sows some corn. He then takes off his disguise and is paid for his trouble.
The people believe that the person who plays the Kuker commits a deadly sin and the priests also make vain efforts to abolish the customs. The Kuker in Losengrad district has a cake with money in it, which is distributed to those present. If a farmer gets the coin, the crops will be good; if a herdsman gets it, the herds will be good. The Kuker also symbolically ploughs the ground and waves to and fro to imitate the waving corn. The man with the coin is bound and dragged by the feet over the ground to quicken the fertility of the ground. This drawing by lot is reminiscent also of the Saturnalia sacrifice we saw above.
In Bulgaria itself, the festival has the Old Woman or Mother as the leading personage, played by a man in woman’s clothing. The Kuker and Kukerica are subordinate to the “Old Woman”. They wear fantastic masks of human heads with animal horns or birds’ heads and skins with a girdle of lime bark. On their back is a hump made out of rags. This festival in Bulgaria, being the Monday of the last week of Carnival, is called Cheese Monday. It is nevertheless associated with the Ploughing festival.
The same rituals found in western Europe of going round the houses are present here, and the blessing conferred by the presence of the “Old Woman” on the fertility of the village is uppermost in the minds of all. Incursion by masked people from any other village is seen as a threat and a drawing away of the fertility of the village. Such incursions are resisted.
The similarity of the black-faced Old Woman to Demeter, and of her two aides to Pluto and Persephone, is probably behind the origins of the custom of the three kings, with the black Melchior representing Demeter.
The festival of Befana in Rome on the night before Epiphany is clearly related to this festival of Demeter, and the term Befana is obviously a corruption of Epiphany. She is clearly an old witch and the noise of this festival is associated with an ancient custom of clearing the area of evil influences (see also below). The same ceremonies involving Befana on the eve of Epiphany were or are observed in Tuscan Romagna and elsewhere in Italy (Frazer, ix, p. 167).
Frazer rightly sees in the Old Woman of the Bulgarian and Thracian system a reference to the Corn Mother-goddess Demeter, who in the likeness of an old woman brought blessing to the house of Celeus, king of Eleusis, and restored the lost fertility to the fallow Eleusinian fields. The Kuker and Kukerica, the male and female mummers, represent Pluto and Persephone. These rituals are extant from East to West and represent the oldest of the religious festivals (Frazer, viii, pp. 334-335). We are thus directly in the middle of the Eleusinian Mystery cults, linked with the same Mystery cults of ancient times – from the cult of Apollo in early Europe, to that of Dionysus, to the agricultural symbols in the worship of the Sun god. The Bull-slaying cults are thus also involved, and from the times at which the Greeks at Magnesia dedicated the bulls for sacrifice, at the beginning of the sowing, we see that we have a common idea of the festival. Zeus is the partner of Demeter, and the final product is the slaying of the bull to Zeus in the equivalent of the month of May.
Yule logs, the holly and ivy, and mistletoe
The summer and winter solstices were seen as the two great turning points of the year, and fires were lit on both. The midsummer fires were lit in the open and youths jumped the fires. This practice was found among the Celts in Ireland, Britain and Gaul, and also among the North Africans in Morocco and the Atlas Mountains, where it is much more ancient than the Islam they also profess. The pagans also anciently lit fires on May Day and on Halloween, the eve of 1 November (All Saints’ Day). The asymmetry of these festivals with the solstices should be noted.

The Festival of Walpurgis, on the last day of April preceding May Day, is the Festival of the Burning of the Witches. This type of festival is also associated with the twelve days between Christmas on 25 December and the Epiphany of 6 January. Fires of pine-resin are lit on these nights to keep the witches away, and the fires are generally larger on Twelfth Night. In Silesia, people burn fires of pine-resin between Christmas and New Year to drive witches away from the farmhouses. This was the “proper time for the expulsion of the forces of darkness”. On Christmas Eve and New Year’s Eve, shots are fired over the fields and people wrap straw around the fruit trees to prevent evil forces from doing them harm.
In Biggar, in Lanarkshire UK, New Year’s Eve is the traditional time for this fire, which has been lit since time immemorial.
In 1644, nine witches of flesh and blood were burnt on Leith Links in Scotland (Frazer, ix, p. 165).
Fires are lit in the autumn but are not significant. The festival of the Nativity of the Virgin on 8 September was traditionally accompanied by noise and uproar as associated with Befana at Rome, and traditionally involved assassinations. Prof. Housman noted that when he witnessed the festival at Capri in 1897, a few more than the usual eight or ten were murdered (Frazer, x, p. 221).
Fires are also traditionally lit on the midwinter solstice on 25 December. The difference between the midsummer and midwinter fires is that the midwinter fires are lit indoors and form part of the ritual of the invocation of the Sun god to his place of supremacy in the heavens. Thus, the midwinter fires developed a more cloistered or family type atmosphere.
It is perhaps of significance that in the Shetland Islands, the Yule or Christmas holidays began seven days before Christmas and ended at Antinmas, i.e. the twenty-fourth day after Christmas.
The Shetlanders name these holidays the Yules. Seven days before Christmas, the elves, called Trows by the Shetlanders, are let free from their homes in the earth and dwell above ground if it pleases them. This is the probable origin of the elf symbolism associated with Santa Claus. It seems to relate back to the concept of the misrule of the seven days of the Saturnalia leading up to 25 December.
The most important of the rituals in Yule was the saining, which had to be properly carried out to deal with the grey folk, as the elves were called.
The modern myths emanating from the USA regarding alien ‘greys’ are none other than the revamping of the elves at Yule.
On the last day of the holidays, the twenty-fourth day after Christmas, called up-helly-a, or Uphalliday in Shetland, the doors were all opened and a great deal of pantomimic chasing went on to rid the area of the mischievous elves. People piously read the Bible and displayed iron ostentatiously, “for it is well known that elves cannot abide the sight of iron”. The infants were carefully guarded and sained by learned wise women. No doubt, we have the sign of the evil eye involved here as an ancient custom (cf. also the paper The Cross: Its Origin and Significance (No. 39)).
When day dawned after the twenty-fourth night, the Trows or Grey folk had vanished and the Yules were ended.
The customs of banishing evil forces and witches on a night set aside for the purpose in the period of the winter solstice and festivals can thus be traced from Rome and Calabria in the south as far north as the Shetlands. It also runs from Ireland to the Steppes and down to North Africa.
We know that the Germans burnt the Yule log, which was an ancient custom even by the eleventh century. In 1184, the parish priest of Ahlen in Münsterland recorded bringing a tree to kindle the festal fire at the Lord’s nativity (Frazer, x, p. 247). The custom was found in Britain in ancient times and was common to the Teutons and apparently the Celts. John Brand is quoted by Frazer as saying that the Yule block is a counterpart of the midsummer fires, made within doors because of the cold weather at the winter solstice (ibid., n. 2). This was nothing other than the erroneous application to 25 December of the solstice, which was set aside for the worship of the Sun (Frazer, x, p. 246). The lighting of the tree fire was to assist the Sun to relight its ailing lamp, and the entire system of fires and candles at the nativity before the Heavenly Virgin is the ancient worship of the Mother-goddess and her infant child, the Sun. The lamps assist in the lighting of the heavenly fire of the Sun, and this is the basic idea behind flame and its use in Zoroastrianism.
The Yule log was also kept among European groups and placed on the fire to ward off thunder and the effects of storms. Thus, the relationship is clearly made between the ancient gods of the Teutons over thunder and lightning and weather, and the Yule log at the solstice.
Mistletoe was sacred in the religion of the Druids. The Druids, who came via Egypt as Magi, were picked up by the Milesians in Spain from among the Gadelians before the Scoto-Milesians went to Ireland. From there they spread into Britain and Europe (MacGeoghegan, The History of Ireland, Sadlier, NY, p. 42; cf. Frazer, ii, pp. 358,362; xi, pp. 76ff., 301).
Pliny (Natural History, xvi, pp. 249-251) derives the word Druid from the Greek word for oak, which is drus. It is, however, the same or similar in the Celtic, being daur. The Druids are thus priests of the oak. Their cult is thus ancient and associated with the oak groves. Other scholars prefer to derive the name from the root meaning knowledge or wisdom – hence, they were the wizards or magicians. This is also borne from the title Magi which they held (cf. Frazer, xi, pp. 76-77, n. 1 to p. 76).
The Druidic cycle of the calendar was of thirty years, and there appears to be a common relationship between their worship and that of the Boeotians who, like them, worshipped or conjured the oak; thus both may have a common Aryan connection. The Boeotian cycle, in the festival of the great Daedala, was one of sixty years and not thirty. This may have application to the Aryan practice observed among the Indians of the sixty-year cycle based on the sidereal cycle of Jupiter.
The mistletoe is cut with a golden sickle on the first or sixth day of the Moon (Frazer, xi, pp. 77-78). It is associated with fertility and was held to make barren animals and women bring forth. It was thought to have fallen from the sky and was called the all-healer (Frazer, xi, pp. 77-79,82). Two white bulls were sacrificed at its cutting on the sixth day for this purpose. The priest was dressed in a white robe. It was cut on the first day of the Moon by the Italians and on the sixth by the Druids; this difference is probably accounted for by the differing commencement of the lunar month in the two systems. Neither cut the mistletoe with an iron implement. It was not allowed to touch the earth and, hence, was caught in a white cloth.
The Italians believed that mistletoe growing on oak had similar properties, if we accept Pliny, and thus there was a commonality of belief to both systems.
We are thus again back to the fertility system of the Saturnalia and the healing of the Mysteries and Apollo, but in an ancient form common to the Aryans before 1000 BCE.
This system was so ancient that it was common even to the Ainu of Japan who also held it sacred. However, they use mistletoe cut from a willow because that tree is sacred to them. They agree with both the Druids (in its curative properties) and the Italians (regarding the fertility of women for childbirth) in their beliefs (Frazer, xi, p. 79).
This belief extends down to the natives of Mabuig Island in the Torres Strait (ibid.). The common belief is also found in Africa among the Walos of Senegambia (ibid.).
The veneration of mistletoe as an all-healer is found among Swiss peasants and among the Swedes (ibid., p. 82).
The Norse god Balder was said to have been slain by mistletoe, and Frazer gives an extensive account of this matter in his work.
Mistletoe was used as a remedy for epilepsy generally, and by high medical authorities in the UK and Holland as late as the eighteenth century (ibid., p. 83, noting Ray of UK in 1700, Boerhaave of Holland in 1720 and his pupil Van Swieten in 1745).
Mistletoe is held to be a protection against lightning and fire and, hence, associated with the Yule system also (Frazer, xi, p. 85).
It was most commonly used at the midsummer fires and at this time was associated with the death of the god Balder. This seems to have involved actual human sacrifice at this time in Denmark, Norway and Sweden (Frazer, xi, p. 87). The practice of throwing the victim chosen by lot into the Beltane fire and also the Green wolf of the midsummer fires are associated with this system of worship as tree spirits or gods of vegetation (ibid., p. 88).
The worship of mistletoe is associated directly with the cult of the worship of the oak, which was common to all the Aryans. The Celts in Asia Minor worshipped at the grove called Drynemetum, a pure Celtic name meaning Temple of the Oak. These are the groves, which also contained a phallus, that are spoken against in the Bible.
Among the Slavs, the oak was the sacred symbol of the great god Perun and the oak ranks first among the holy trees of the Germans. It was adored by them anciently and certain of these practices and attitudes survive to the present day (Frazer, ibid., p. 89).
The oak was also sacred to the Italians, and the image of Jupiter on the Capitol was originally nothing but a natural oak tree. At Dodona, Zeus was also worshipped as immanent in the oak. Frazer concludes that the Aryans, including Celts, Germans and Lithuanians, commonly held the oak sacred before their dispersion, and that their common homeland must have been plentifully supplied with oak. The mistletoe is merely its symbol, as a heaven-sent aspect of healing, protection and fertility.
The kindling of sacred fire, whether among the Celts, Germans or Slavs, is always by rubbing two oak sticks together, or by rubbing oak on a grey (not red) stone. The same types of practice are found from Germany to the Highlands of Scotland in kindling the need-fire (cf. Frazer, xi, p. 91).
Frazer says the perpetual fire of Vesta in Rome was fed with oak wood, and oak wood also burnt in the perpetual fire before the sacred oak at Romove in Lithuania. Blocks of oak are also burnt from the midwinter solstice through to the end of the year and replaced with a new log, and the ashes are mixed among the seed etc. for fertility.
The common link in all these stories is the burning of the fires and the cutting of the mistletoe. The ancient Aryans believed, as we can deduce from the myth of Balder, that the oak was the god and the mistletoe’s link with it ensured its longevity. The human sacrifice at the midsummer fires ensured the life of the crops. The use of mistletoe and the Yule log at the midwinter solstice also looked to the sacrifice of the god represented by the human who took his place, and the return of the Sun system. This is the underlying symbolism of the Christmas tradition (cf. Frazer, xi, p. 93).
While the mistletoe stood, neither the god nor his substitute could be injured. The cutting of the mistletoe was both the signal and the cause of his death.
Holly and ivy
Holly and ivy allegedly represent male and female. The ivy clings and twines – supposedly representing the female. The holly is prickly and erect – supposedly representing the male.
In Surrey, England, a holly tree is used to pass a child through a cleft to heal rupture, whereas it is usually an ash elsewhere (Frazer, xi, p. 169, n. 2).
The holly-oak was sacred to the Fratres Arvales or Brethren of the Tilled Fields. This was a Roman college of twelve priests who performed public religious rites for the purposes of agriculture. They wore wreaths of ears of corn. Their sacrifices were made in the grove of the goddess Dia some five miles down the Tiber from Rome. This grove contained laurels and holly-oaks. It was so hallowed that expiatory sacrifices were offered every time a tree or even a bough of a tree fell to the ground. This was obviously especially prone to occur with the advent of snow and storms at the winter solstice. Hence, we have the concept also of holly and the white Christmas. More elaborate sacrifices had to be made when one of the trees was struck by lightning. They were then dug up by the roots, split and burnt and others planted in their stead. At the Roman festival of the Parilia, which was for the welfare of flocks and herds, peasants prayed for forgiveness if they had entered a hallowed grove, sat under a sacred tree, or lopped a holy bough to feed sheep (cf. Frazer, ii, p. 123).
Pliny says the woods were formerly the temples of the deities and that even in his time the peasants dedicated a tall tree to a god with the ritual of olden times (Pliny, Natural History, xii, p. 3).
The ivy is the symbol of the Mystery cults. It was chewed by the Bacchanalian feast-goers. It is identified with the god Dionysus, or Bacchus.
Ivy was used by the Greeks as one of the two firesticks. The board of the pair was made out of a parasitic or creeping plant, which was usually ivy. The borer was usually laurel. Oak was also used as the borer.
The ancient Indians used a parasite (the climbing fig) as the borer, treating the parasite as the male element. The Greeks seem to have reversed this concept: the ivy is considered female and the laurel male. Yet in Greek the word for ivy is masculine, and ivy was anciently identified with the male god Dionysus, while the word for laurel is feminine and is identified with a nymph. Thus, we may conclude that the Greeks, like the Indians, considered the concepts similarly in very ancient times but modified them, perhaps through expedience (Frazer, ii, pp. 251-252).
Anciently, it was prohibited to touch or name ivy (Frazer, iii, pp. 13ff.). Ivy was also sacred to the god Attis and, hence, we come then to the pine tree, which was also sacred to that god (cf. Frazer, v, p. 278 and see the paper The Cross: Its Origin and Significance (No. 39)).
Ivy was also sacred to the god Osiris (Frazer, vi, p. 112) and was also used for dreams (ibid., x, p. 242). Thus, we see a commonality to the system of the Triune god and the Mystery cults generally, which ties in naturally with the solstice system and Sun worship. Thus, the holly and the ivy are also the symbols of the oak and other groves dedicated to the deities so condemned by the Bible.
The Christmas tree
The decorated pine tree stems directly from the Mystery cults and the worship of the god Attis. He is held to have been a man who became a tree and, hence, is the embodiment of the ancient tree-spirit we meet in ancient Indian or Indus mythology from as early as Harappa and Mohenjo Daro. He is clearly a fertility god of corn and wears a Phrygian cap like Mithras (from the statue in the Lateran; Frazer, v, p. 279).
The bringing in of the pine tree decked in violets and woollen bands is like bringing in the May-tree or Summer-tree in modern folk custom. The effigy that was attached to the tree was a duplicate representative of the god Attis. This was traditionally kept until the next year, when it was burnt (Firmicus Maternus, De errore profanarum religionum; cf. Frazer, v, p. 277 and n. 2). It is forbidden by God in Jeremiah 10:1-9.
The original intent of this custom was to maintain the spirit of vegetation intact throughout the coming year. The Phrygians worshipped the pine tree above all others and it is from this area that we derive the Mysteries and the Mithras system. It is probably sacred to the cults in that it is an evergreen lasting through the solstice period over a large area, when other trees are bare. Remember also that pine-resin was burnt at the solstice festivals. The origins are lost in the antiquity of the Assyro-Babylonian system.
The representation of the god Attis was changed first to the Sun-symbol as a monstrance on the top, and then to angels and other types of decorations. The decorations are easily identifiable as the Sun, Moon and stars of the Triune system of the Babylonians – Sin, Ishtar and Shamash – or of Isis, Osiris and Horus of the Egyptians (see the paper The Golden Calf (No. 222)).
Ivy was also sacred to Attis and his eunuch priests were tattooed with the symbol of the ivy leaf (Frazer, v, p. 278).
Pine nuts were used to produce a wine used in the orgiastic rites of Cybele, which were in effect counterparts of the Dionysian orgies; Strabo compared them (Strabo, x, 3. 12ff.).
At the festival of Thesmophoria, they were thrown along with pigs and other agents or emblems of fertility into the sacred vaults of Demeter for the purpose of increasing the fertility of the earth and of women (Frazer, v, p. 278). Thus, we are back again to the Demeter festivals and the aspects that have kept on and which are associated with Christmas in Europe generally, as we have already seen.
The term Epiphany means manifestation as the appearance of some divine or superhuman being. It was applied to Antiochus IV Epiphanes, king of Syria (175-164 BCE).
It was also known as the dies luminum (day of lights), as Three Kings’ Day or as Twelfth Day. All of these are dealt with above. The practices associated with it are all derived from the ancient sources we see in the text and have little to do with the Faith.
The name survives in the great festival of Befana at Rome (cf. Catholic Encyclopedia, art. ‘Epiphany’, Robert Appleton, NY, 1909, Vol. V, p. 504). The CE says:
It is difficult to say how closely the practice then observed of buying all sorts of earthenware images, combined with whistles and representing some type of Roman life, is to be connected with the rather similar custom in vogue during the December feast of the Saturnalia (ibid.).
It is hardly difficult to identify. The practices were the same and the term is applied to the manifestation of the Befana as the goddess, as we see above. The attempt to place the reference in Hippolytus on the Sacrament of Baptism is incorrect, as he uses the term theophaneia not epiphania (ibid.).
The first substantive reference is in Clement (Stromateis, I, xxi, p. 45). The CE quotes this text as follows and then goes on to say:
‘There are those, too, who over-curiously assign to the Birth of our Saviour not only its year but its day, which they say to be on 25 Pachon (20 May) in the twenty eighth year of Augustus. But the followers of Basilides celebrate the day of his Baptism too, spending the previous night in readings. And they say that it was the 15th of the month Tybi of the 15th year of Tiberius Caesar. And some say that it was observed the 11th of the same month.’ Now, 15 and 11 Tybi are 6 and 10 January.
Both the Roman Catholic Church and the Orthodox Church try to draw from this practice of the Gnostics under Basilides (teaching at Rome in the middle of the second century) support for the celebration of the nativity as well as the baptism of Christ, but there is no real evidence for this conjecture. The evidence of the festivals themselves indicates that the practice was the ancient fertility festival and the blessing of the produce. From this arose the practice of blessing the waters and the practice of throwing crucifixes into the sea to make the seas productive for fishermen. All are based in ancient paganism and were not evident in Christianity until the fourth century. This addition came well after Origen, writing in the third century, who makes no mention of the Epiphany in his list of the festivals. The first reference to it as a feast of the church is in 361 (cf. CE, p. 505).
From Saint Nicholas to Santa Claus
Santa Claus is a rather late invention and comes to us as a product of late American commercialism, derived chiefly from German and Dutch folklore. It has its origins in the entity referred to as ‘Saint Nicholas’.
The man usually known as Saint Nicholas is Nicholas of Myra in Lycia. He died on 6 December 345 or 352 (Catholic Encyclopedia, Vol. XI, p. 63). He is popular in both the Greek and the Latin church, but there is scarcely anything certain about him except that he was bishop of Myra in the fourth century (ibid., p. 64). He was born at Parara in Lycia of Asia Minor. In his youth, he made a pilgrimage to Egypt and Palestine. On his return he was made bishop of Myra and was imprisoned during the persecution of Diocletian. He was released on the accession of Constantine. The Catholics allege he was present at Nicaea, but his name does not appear on any of the records, by their own admission (ibid.).
In 1087, Italian merchants stole his body at Myra and took it to Bari. His cult in Italy dates from this point. It appears this may have been prompted by a cult that had developed concerning him in Europe. The numerous miracles attributed to him are the outgrowth of a long tradition but, as we will see, much of it has pagan origins that would have little to do with the original man.
His cult in the Greek church is old and is especially prominent in the Russian church, although Russia's conversion came long after his time (c. 1000 CE). The emperor Justinian I built a church in his honour at Constantinople, and his name appears on the liturgy ascribed to John Chrysostom (ibid.).
His cult in Europe started from the time of Otto II, whose wife Theophano was a Grecian. Bishop Reginald of Eichstadt (d. 991) wrote a metrical life entitled the Vita S. Nicholai. He is, or was, honoured as patron saint in Greece, Russia, the kingdom of Naples, Sicily, Lorraine, the Diocese of Liege, and many cities in Italy, Germany, Austria and Belgium, Campen in the Netherlands, Corfu in Greece, Fribourg in Switzerland and Moscow in Russia (ibid.). He was patron of mariners, merchants, bankers and children.
His relics are still preserved in the church of S. Nicola in Bari. An oily substance, known as Manna di S. Nicola, is said to exude from his relics. It is valued for medicinal purposes. His relationship with the festivals of 5/6 December is examined below.
One legend associated with him relates to the giving of three golden balls, each made from his wages for one year and rolled through the window of a needy family of good birth over a period of years. The first ball allegedly landed in a stocking (hence the Christmas stocking). This enabled the needy recipients to marry off their daughters. He was allegedly seen on the last occasion. This is no doubt the origin of the three golden balls of the pawnbrokers and the symbol of his patronage of merchants. These stories, as we will see, have a relationship with other myths.
The traditions associated with his generosity gave rise to the practice of Norman French nuns giving to the poor on Saint Nicolas’ day or its eve, and this came to be called boxing, from the alms box of the church; hence the tradition of Boxing Day on 26 December. In Germany, Christ Bundles were also given to the poor, and the annual parades took on the Heavenly Mother-goddess tokens of the Mysteries.
The practice of children saving all year for the annual pig at Christmas in Holland led to the introduction of the piggy bank.
The amalgam of the false Roman robes of the clergy worn on the Festival of Fools, the tales of Odin’s wild ride, and the beards of the Magi, together with the elves of the Yule festivals, saw a gradual evolution of the figure.
Nicholas of Myra was a saint in the Roman Catholic Church until 1969 when he suffered the fate of many other myths.
Sinterklaas – the precursor of Santa Claus
Sinterklaas, or Saint Nicolas, is a typical piece of Dutch folklore, celebrated in the Netherlands and partly in Belgium.
The celebration of Sinterklaas is always on the evening after sunset of 5 December in the Netherlands, and 6 December in Belgium.
In the celebration of the evening and night, the children are assembled around the chimney, singing songs to Sinterklaas:
“Heerlijk avondje is gekomen. Kom maar binnen met je knecht”.
This translates as: “The nice (or lordlike) evening has come. Come in with your servant”.
His servant, Black Peter, is black. He is always portrayed with thick lips and earrings, and clothed in comical clothes. This probably stems from the Demeter/Melchior nexus, and was later associated with good and evil as embodied in the legend of Woden and Nöwi.
Sinterklaas himself is portrayed as a bishop with a mitre and a book recording the good deeds and sins. He has the staff of a shepherd and rides a white horse over the rooftops. Black Peter listens at the chimneys to determine whether the children are singing the right songs and presenting the right offerings to the horse in the form of hay and carrots.
The presents for the children are put through the chimney.
Sinterklaas is a syncretic product of the old Germanic or Teutonic religion. The Germanic roots can be explained as follows:
The god Woden (also known as Odin), who is still remembered by the use of Wednesday, was the most important god of the old Germanic tribes (not the small group of people we understand as Germans today). Woden, who is a figure of history, was made into the personification of the multitude of earlier gods – the gods of wind and war, the god of the dead, the god of fertility, the god of wisdom and the Sun god. We will find him in mythological legends “riding through the air on his faithful white horse, clothed in a flowing robe”. Further, he is described as a figure with a long white beard, and with a big hat on his head. Because he was also held to be the god of wisdom, he had a book in his hand written in rune letters, and he carried a great spear.
In these stories, Woden was accompanied by the giant Nöwi, who had a black countenance because he was the father of the night. He was, according to legend, well versed in making rhymes and poems. He carried a bunch of twigs in his hand as a sign of fertility.
From these aspects – the white horse, the wide robe, the big hat, the book, the spear, and the black Nöwi with his bunch of twigs and his poems or poetic traditions – we have so many parallels with today’s Sinterklaas and Zwarte Piet (Black Peter) that it is beyond mere coincidence. We see here, also, the parallels with Demeter and the three wise kings, one of whom was the black Melchior.
If we now add to this the traditional customs, we will complete the picture.
After the harvest, the old Germanic tribes or Teutons always left a sheaf on the land for the white horse of Woden. During the Sinterklaas’ time the children offered hay in their shoes at the chimney (stockings at the chimney at Christmas) for his horse.
We see here the same traditions as found among the Celts of burning the twelve fires and the thirteenth major fire of the straw. We also see the black faces of the Mother-goddess system. We can deduce a much earlier origin than that attributed to Woden. This is part of the early cults of fertility related to Apollo as Sun god and master of the Mystery religions among the states of the Danube and into the Hyperborean Celts. He was drawn across the sky in a chariot and often this was pictured being drawn not just by horses but also by geese or swans. The similarity of these feasts was with the old ceremonies of the Saturnalia, which was traditionally prior to Christmas. In the Netherlands, we see a much earlier date than is normal now. It was some thirty days before the Epiphany. It was, however, not thirty days before the solstice as we saw in the Saturnalia examples above. We see the same tradition but removed so that the thirty days of the Lord of Misrule as the god Saturn and Apollo relate to the Epiphany rather than the end of the Saturnalia.
Today’s tradition in the Netherlands is to give letters of chocolate or almond pastry. The connection with the ancient runes seems very obvious. The German Wotan feast was a mixture of sacrifice and fertility festivals during and around the midwinter feasts. The lads and lassies of the Germanic tribes prayed in those early times for a partner. The presents from Sinterklaas were also in the form of lovers made from speculatius or other cakes. Also, presents were of animals in the form of sugar mice and pigs, to substitute for the real animal sacrifices.
Sinterklaas is also the patron of the city of Amsterdam and the seamen who sail from her ports.
The apparel of Sinterklaas is Roman Catholic. It was little wonder that, in the sixteenth century, the Reformation tried to stamp out these customs. It was not entirely successful in the Netherlands. After an absence of some centuries (or a period underground), Sinterklaas came to life again in Protestant Netherlands in the first half of the twentieth century. In England and Germany the figure disappeared or went underground, and many of the traditions were simply moved to 25 December and completed with the Christmas tree and Santa Claus. The ‘rebirth’ of Sinterklaas was accepted in Protestant Netherlands sooner than the Christmas tree was. Today, commercialism has to fight to get Santa Claus accepted in the Netherlands, as many are against this imposter of Sinterklaas, even though the rebirth of Sinterklaas in the Netherlands owed much to what was done in the USA.
Santa Claus in the USA
When migrants went to the United States, they brought with them the Yule traditions from Europe and particularly the three elements that went to make up the Santa Claus myth.
The Dutch contributed the Sinterklaas myth, which was adapted from its traditional place. The Pere Noel tradition of the red robes was also contributed from Europe. The Germans brought with them the Christ Bundle tradition and termed it Christkindl or Christ Child tradition. The name Kris Kringle developed from this term.
Washington Irving in the Knickerbocker Tales (ca. 1820) discusses the elf Santa Claus who presents the stocking, as did St Nicholas.
Clement Clarke Moore introduced many new elements in his poem A Visit from Saint Nicholas, which was renamed ’Twas the Night Before Christmas. He added elements such as the eight reindeer, including Donner (Donder) and Blitzen, representing thunder and lightning, the gods of the Yule festival.
Santa Claus was still an elf of the Yule tradition, however, until the American Civil War, when Thomas Nast of Harper’s Weekly was commissioned to do a series of Santa Claus cartoons. He continued this after the Civil War, and the publishing company McLoughlin Brothers experimented with the colour of Santa’s suit and decided on red.
The final change was made in 1931. The Scandinavian Haddon Sundblom was hired by Coca Cola to paint Santa Claus. On the death of his model, he fashioned Santa Claus on his own face. This continued for twenty-five years.
In 1949, the song Rudolph the Red-Nosed Reindeer was written. It was recorded by the cowboy singer, Gene Autry.
The Coca Cola model and colours and the American myths surrounding the figure are now the final product of at least 3,000 years of pagan idolatry wrapped in the crass commercialism that first emanated from the merchants of the Roman Saturnalia and which was perfected in the USA.
There is nothing Christian about so-called Christmas and, indeed, it is so steeped in false religious superstition that it is a direct breach of biblical Law. No Christian can observe it and remain a Christian.
The Encyclopedia of Religion and Ethics (ERE, v. p. 846) states quite clearly that:
“The English name ‘Easter’ is probably derived from Eostre, an Anglo-Saxon goddess, to whom special sacrifices were offered at the beginning of spring (Bede de Temp. Rat. xv., Op., ed. Giles London, 1843, vi. 179).”
It also says in relation to Easter Day that “This chief festival of the Christian Church was not at first distinguished by any special right from other Sundays.” (ibid.)
Eostre, Eastre, Eostur (the Teutonic goddess) is mentioned by Bede in De Temporum Ratione 15 together with the goddess Hreda (or Rheda or Href), and the months of March and April were named after these goddesses. The Spring Festival was the festival of Easter, beginning from the New Moon of the Equinox, and thus what we now term April was called Eosturmonath (ERE, ix, p. 253a, xii, p. 102a).
Bede (ibid.) says that the names of the months were calculated from the moon and were:
Jan: Giuli; Feb: Solmonath; Mar: Rhedmonath; Apr: Eostremonath; May: Thrimilei; Jun: Lida; Jul: Lida; Aug: Weodmonath; Sep: Halegmonath; Oct: Winterfylleth; Nov: Blotmonath; Dec: Giuli. Thus two names each appeared twice in the calendar.
Giuli was used twice because one month preceded the solstice and the other followed it, and the solstice was of paramount importance in the sun cults. Solmonath (ca. February) was the “month of cakes”, and cakes were offered to the gods. Sacrifices were offered to the goddesses in Rhedmonath (Rheda) and Eostremonath (Easter or Eostre). Thrimilei was derived from the fact that the cattle were milked three times a day in this month, due to the fertility of Britain and Germany in those days. Lida means “blandus sive navigabilis”. Weodmonath means “the month of tares”. Halegmonath means “mensis sacrorum”, the month of sacred or holy devotions. Blotmonath, or blood month, denoted the month of sacrifice of the livestock. The year began on 25 December, and the eve of that day was called Modraniht or “Night of the Mothers” (ibid., iii, p. 138b).
The Teutons intercalated in summer, and the intercalary month was called Thrilidi, as there were then three months of Lida (ibid., p. 139a). From some accounts, the month of Winterfylleth was so named because they reckoned winter as beginning on the full moon of this month (ibid.).
The month names in the Netherlands differed from those in Germany, as did those of the Danes and Swedes, but the fourth month of the Danes was termed “The Sheep Month” and the Swedes called the fourth month Varant, meaning spring work. The association with the spring sacrifices and harvests is common.
Enid Welsford, in the ERE, goes on to say that the word Eostre is connected with the Latin Aurora, the Greek ēōs, the Sanskrit Usas, and the Lithuanian Auzra, the personification of the dawn. The Lithuanian Auzrine or Morning Star is derived from Auzra. “The name Eostur is identical with the Latin, Greek, Sanskrit and Lithuanian names for the goddess of the dawn, or Morgenrothe, probably the same being who is referred to in the Lithuanian and Lettish folk-songs as the ‘daughter of the sun’.” The physical items were distinguished from the actual beings that ruled over them in the old Norse language (ERE, xii, p. 102a).
It is thus clear that the Teutonic festival was derived from the worship of the Morning Star, which became associated with the goddess Easter, who was the Mother of the Morning Star. This is the Mother-goddess cult associated with the sun and mystery cults right through the Middle East to India in the Sanskrit. These traditions entered the Norse, and “Snorri counts sol as one of the Asynjur or goddesses” (ERE, ibid.).
The name Friday is derived from Fri, the goddess translated as Venus. Thus the Morning Star Eostre is the goddess Venus, and the festival of Easter venerates the Friday and the Sunday as the days of the Morning Star and the Sun, which is also a symbol of the Mother goddess (cf. ERE, xii, p. 249b). The Earth mother, or Erce, was also mixed into this Christian/Heathen brew in this regard.
The name Ea as the root of this word is the name of the Babylonian God (ERE, ii, 296a, 309b, 310b; vi, 250b; ix, 249b; xi, 828b; xii, 42a, 708b,709a) associated with the descent of Ishtar or Eostre (ERE, ii, 315b). Ea is also associated with the ages of the world (ibid., i, 185a). There is a massive amount of information about the cult and worship (ERE Index, p. 173). The Easter Cakes associated with the Friday and also the other days of Lent are derived from the pagan practices of baking cakes to the goddess and other deities (ERE, iii, pp. 60b-61a).
Frazer notes, and correctly, that if it was the case concerning Christmas that the church had adopted and syncretised the entire pagan system, giving it Christian names, then there is every reason to suppose that the same sort of motives:
… may have led the ecclesiastical authorities to assimilate the Easter festival of the death and resurrection of their Lord to the death and resurrection of another Asiatic god which fell at the same season (v, p. 306).
Frazer goes on to state that:
Now the Easter rites still observed in Greece, Sicily and Southern Italy bear in some respects a striking resemblance to the rites of Adonis and I have suggested that the Church may have consciously adapted the new festival to its heathen predecessor for the sake of winning souls to Christ (ibid.).
Adonis is the Syrian counterpart for Adonai or Lord. Baal or Bel also means Lord.
Frazer considers that this adaptation probably occurred only in the Greek-speaking world rather than the Latin, as the worship of Adonis seems to have made little impression in the West and certainly never formed part of the official Roman religion. He says:
… the place which it might have taken in the affections of the vulgar was already occupied by the similar but more barbarous worship of Attis and the Great Mother (ibid.).
The death and resurrection of the god Attis was officially celebrated at Rome on 24 and 25 March, the latter being regarded as the spring equinox and, therefore, the most appropriate day for the revival of a god of vegetation who had been dead or sleeping throughout the winter. According to an ancient and widespread tradition, 25 March was celebrated as the death of Christ without regard to the state of the Moon. This tradition was followed in Phrygia, Cappadocia, Gaul and, seemingly, also in Rome itself (cf. Frazer, v, p. 306). Tertullian affirms that Christ was crucified on 25 March 29 CE (Adv. Jud., 8, Vol. ii, p. 719, and also by Hippolytus and Augustine; cf. Frazer, v, fn. 5 to p. 306).
This is an absolute historical and astronomical impossibility and, yet, the notion appears to have become deeply rooted early in the traditions (cf. Frazer, v, p. 307 and the paper Timing of the Crucifixion and the Resurrection (No. 159)).
It thus appears that this earliest of traditions had some connection with the cult of Attis. Similarly the pine was sacred to the god Attis, and it is no accident that all relics of the cross are composed of pine (cf. the paper The Cross: Its Origin and Significance (No. 39)).
It is the view of Frazer and also of Duchesne that the date of the death and resurrection of Christ was arbitrarily referred to the fictitious date of 25 March to harmonise with an older festival of the spring equinox. This appears to have equated with an older belief that it was on the very day that the world was created (Frazer, ibid., p. 307).
The resurrection of Attis, who combined in himself the characters of the divine Father and the divine Son, was officially celebrated at Rome on the same day. Thus, it is not only the syncretism of the resurrection doctrine with which we are concerned; we see also the origin of the doctrine of Modalism, in which the one god has different attributes or aspects as forms of the one being, yet in distinction, and from which idea the Trinity was formed.
There is also the more recent heresy of the “Jesus is the one true God” concept entering Protestant quasi-Gnostic theology.
This replacement phenomenon, where a heathen festival is replaced by one with Christian names, is seen in a number of pagan or heathen festivals. In line with the Mother goddess and Heavenly Virgin theology, the Festival of Diana was ousted by the Festival of the Assumption of the Virgin in August. Similar changes occurred elsewhere: the pagan Parilia in April was replaced by the feast of St George, and the midsummer water festival in June was replaced by the festival of St John the Baptist. Each has a connection with the typology it replaced. The feast of All Souls in November is the ancient heathen Feast of the Dead. The Nativity of Christ replaced that of the Sun. The Festival of Easter is simply the feast of the Phrygian god Attis at the vernal equinox. It should also be remembered that the Phrygians were the source of the Mithras system and the Mystery cults generally (see also the paper The Nicolaitans (No. 202)).
Mithras was introduced to Rome by pirates captured by Pompey, circa 63 BCE. The places which celebrated the death of Christ at the equinox were the very places that the worship of the god Attis originated or had taken deepest root, namely Phrygia, Gaul and apparently Rome itself. Frazer says it is difficult to regard the coincidence as accidental (v, p. 309).
Another characteristic that is coincidental to the resurrection is that the date is also ascribed to 27 March, two days later, and this is where the shortened period of the Friday crucifixion and Sunday resurrection occurs. Frazer notes that similar displacements of Christian to heathen celebrations occur in the Festivals of St George and the Assumption of the Virgin (v, p. 309).
It is perhaps the telling item in the syncretism when we see that the traditions of Lactantius and seemingly the Christian church in Gaul placed the death of Christ on the 23rd and the resurrection on the 25th, exactly in accordance with the festival of Attis. This is impossible for any year of the Hebrew calendar that Christ could have possibly been crucified and is directly related to the worship of Attis (cf. Frazer, ibid.).
By the fourth century, the worshippers of the god Attis were complaining bitterly that Christians had made a spurious imitation of their theology or the resurrection of Attis, and the Christians asserted that the resurrection of Attis was a diabolical counterfeit of the resurrection of Christ.
However, we know from history and linguistics that the original dates of the resurrection were based on the Passover, which was based on the lunar calendar and occurred on 14 and 15 Nisan and proceeded to the Wave-Sheaf offering on the Sunday. Thus, the Passover could fall on any two days in the week with a variable gap to the Sunday Wave Sheaf, which marked the ascension of Messiah and not his resurrection, which occurred the previous evening. Easter, on the other hand, was confined to a Friday crucifixion and Sunday resurrection in direct contradiction of Scripture. Originally, it was on fixed dates in the cult of Attis. The word Easter was even inserted in the English KJV translation of the Bible to replace the word for Passover to further disguise the issue.
Candles at the changes of the seasons and Easter
We saw above that candles entered the system of worship from the ancient Aryan religion. It stemmed from a common central ancestor, seemingly associated with the Assyro-Babylonian system prior to the entry of the Aryans into India circa 1000 BCE. This could have been as early as the earliest times of the Assyrians in the second millennium, or even during the third millennium BCE.
The ancient Aryan practice continued among the Germans of lighting new fire by means of a bonfire at Easter, and sending the sticks to each home to start the fires to ward off the gods of thunder, storm and tempest. The practice was still found all over Germany, according to Frazer, when he wrote. The difference between Protestant and Catholic communities was that among the Protestants the youths tended the fires, while among the Catholics the grown men tended them. The festivals were directly associated with the ancient fertility rites. The church was brought in later as a locus of the procession, around which they went according to the revolution of the Sun. The fires are lit on the Easter Mountains.
The practice was introduced to Catholicism as the Easter candle. This single giant candle was lit at Easter on Saturday night before Easter Sunday, and then all the candles of the church were lit from it. This continued for the year until the following Easter when the single Easter candle was again lit. The bonfires continued to be burnt in Catholic countries. The bonfires burnt on Easter eve often have a wooden figure called Judas burnt with them, and the ashes are often mixed with the ashes of the consecrated palm branches and mixed with the seeds at sowing. Even where this sacrificial effigy is omitted, the fires themselves are still called the burning of Judas (Frazer, x, p. 121). Frazer records that in Bavaria the newly kindled Easter candle was used to light the lanterns and the young men ran to the bonfire to light it. The first one there was rewarded by the housewives with red eggs the next day, i.e. Easter Sunday, at the door of the church. The burning of Judas was accompanied by great jubilation (ibid., x, p. 122).
On this same day in the Abruzzi, the holy water is collected from the church as protection against witches and their maladies. The wax from the candles is placed on the hat and is then a protection against thunder and lightning in storms. In Calabria, and elsewhere in Italy, the customs relating to new water are much the same. Similar beliefs are found among the Germans of Bohemia (see also the section Epiphany).
R. Chambers (The Book of Days, London and Edinburgh, 1886, I, p. 421) records that all the fires in Rome were lit afresh from the holy fire kindled in Rome in St Peter’s on Easter Saturday (cf. Frazer, x, p. 125).
The practice of lighting the candle appears to take place on the night before the day of the Sun as part of the ancient Sun-worshipping system. Candles form part of ancient magical rites and were common to the occult systems and among the animist systems stemming from the Assyro-Babylonians.
The practice of lighting candles is of mixed symbolism. The lights in the Temple were specific and limited for special purposes related to the seven lights as the seven spirits of God in the single Menorah, and the seventy lights of the Host in the Temple of Solomon. This was later interpreted by occultists as referring to the seven heavens, and the seven planets. The ascent through the seven levels of animistic Shamanism entered Judaism through Merkabah Mysticism.
The candle itself is held to be a symbol of individuated light and consequently of the life of an individual as opposed to the cosmic and universal life (see Cirlot, Dictionary of Symbols, Dorset, 1991, p. 38). This is a distinction among the occult and is not Christian.
The practice of lighting multiple candles before heathen altars and later in Christianity is based on the premises inherent in the godless and blasphemous doctrine of the ‘immortal soul’ and the attempts at isolating holiness to the individual through the action of the spiritual forces involved by the placation of the entity adored. The more entities, the more candles are required. These candles stand as symbols of the pantheistic thinking of the soul doctrine.
The practice in Judaism is based on a thinking that operates at a lower physical level, stemming from the Babylonian captivity and the Mysticism that entered Judaism from that phase.
In Kabbalistic Judaism, one enters the Gate of Kavanah (or concentration) through meditation based on light. The symbols are thus that one elevates the mind by meditation from one light to a higher one. Two of the lights are called Bahir (brilliant) and Zohar (radiant), alluding to the two most important Kabbalistic classics (Kaplan, Meditation and Kabbalah, Weiser, 1982, p. 118). These lights correspond to the Sefirot. These systems were understood by Rabbi Moshe de Leon (1238-1305) in his Shekel ha Kodesh of 1292.
This system of ascent is Shamanism to the seventh great light, Ain Sof. The lights are: Tov (Good), Nogah (Glow), Kavod (Glory), Bahir (Brilliance), Zohar (Radiance), Chaim (Life), and the infinite seventh, Ain Sof (the crown). Their Sefirot equivalents are Chesed (Love), Gevurah (Strength), Tiferet (Beauty), Netzach (Victory), Hod (Splendour) and Yesod (Foundation) (Kaplan, ibid., p. 119).
The ancient Zohar speaks of different colours with regard to fire and this may be derived from Mazdean systems. The colours of the seven levels to the worship of Sin as Moon god were identified with the Ziggurat at Babylon (see the paper The Golden Calf (No. 222)).
This entire system is straight Mysticism and the use of candles in their various forms is tied directly to magic and mystical practice except where lit in the Temple of God, in which case they are not candles but oil lamps, as the Menorah. Their use at Hanukkah and Purim is examined below.
Passover or Easter
The method of calculating the day of the Sun at the vernal equinox was similar to the calculation of the Wave-Sheaf offering of Leviticus 23, but it was not quite the same. That is why there is a slight difference between the Passover and the Easter system.
The Universal Oxford Dictionary gives the method for determining Easter Sunday or Easter day, which is the true Day of the Sun as Easter.
It is observed on the first Sunday after the calendar full moon (i.e. the 14th day of the calendar moon) which happens on or next after 21 March. Applied colloq. to the week commencing Easter Sunday (1964 print, p. 579).
This is the rule for determining the Easter or Ishtar festival, and not the rule for the biblical Passover.
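The dictionary's rule above is an algorithm, and can be sketched in code. The following is an illustrative sketch only, using the standard Gregorian computus (the so-called Anonymous Gregorian algorithm) for the Western Easter of the churches; the function name and the algorithm are not part of the source text.

```python
# Illustrative sketch: the Gregorian computus implements the rule quoted
# above -- Easter Sunday is the first Sunday after the ecclesiastical
# (calendar) full moon falling on or next after 21 March.

def easter_sunday(year: int) -> tuple[int, int]:
    """Return (month, day) of Western Easter Sunday for a Gregorian year."""
    a = year % 19                        # position in the 19-year lunar cycle
    b, c = divmod(year, 100)             # century and year-within-century
    d, e = divmod(b, 4)                  # century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # age of the calendar moon ("epact")
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7 # days from full moon to next Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1
```

Note that the resulting date always falls between 22 March and 25 April, consistent with the parameters discussed in this section; the calculation uses the ecclesiastical moon, not the astronomical one, which is one reason it can diverge from the lunar Passover reckoning.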
The arguments are clearly demonstrated in the history of the Quartodeciman dispute, which occurred from the reign of Anicetus to that of Victor (or Victorinus), bishops of Rome from the middle to the end of the second century (ca. 154-190).
Thus, from the Quartodeciman dispute we know that this false dating system emanated from Rome in the second century and was opposed by those in the Church who were taught by the Apostles, namely Polycarp, who opposed Anicetus, and his pupil Polycrates opposing Victor (or Victorinus). The later writings of Socrates Scholasticus (ca. 439 CE) introduce error into the history and are incorrect on a number of grounds, many of which are outlined by the compilers of the Nicene and Post-Nicene Fathers (cf. NPNF, 2nd series, Vol. 2, introduction to the text) (see also the paper The Quartodeciman Disputes (No. 277)).
Socrates records that the Quartodecimans kept the 14th day of the Moon, disregarding the Sabbath (NPNF ibid., Ch. XXII, p. 130). He records that Victor, bishop of Rome, excommunicated them and was censured for this by Irenaeus (ibid.). He tries to introduce, at this later stage, an appeal to Peter and Paul in support of the Roman practice of Easter, and to John for the Quartodecimans’ practice (NPNF op. cit., p. 131). He alleges that neither party can produce written testimony for its views. However, we know that the Quartodecimans did appeal to John, from the writings of Polycarp and Polycrates, who were taught directly by John. No appeal was made to Peter and Paul in support of Easter in any serious way. Moreover, it is absurd to suggest that the twelve Apostles would have been divided as to how to calculate the Passover.
Socrates is clear on one thing and that is that the Church and the Quartodecimans did not keep the dates for the Passover in accordance with the modern Jewish calculations (i.e. as at the time he wrote ca. 437, being after the introduction of the Hillel calendar in 358). He holds them to be wrong in almost everything (ibid., p. 131).
In this practice they averred, they conformed not to the modern Jews, who are mistaken in almost everything, but to the ancients and according to Josephus in what he has written in the third book of his Jewish Antiquities.
i.e. Antiquities of the Jews, III, 10 which is quoted here in full:
In the month of Xanthicus, which is called Nisan by us, and is the beginning of the year, on the fourteenth day of the moon, while the sun is in the sign of Aries (the Ram), for during this month we were freed from bondage under the Egyptians, he has also appointed that we should sacrifice each year the sacrifice which, as we went out of Egypt, they commanded us to offer, it being called the Passover.
The sign of Aries finished on 19-20 April and thus the Passover could not fall after this period. The 14th could not fall prior to the equinox, and thus we have the ancient parameters for the Passover. Here we see that the early Church did not follow the later Jewish traditions under Hillel. Most quotations of Socrates ignore this most important piece of evidence.
The Preparation Day of 14 Nisan was thus seen anciently as the commencement of the Passover and that date could fall on the equinox, but 15 Nisan, which was the first Holy Day and the night on which Passover was eaten, could not fall on the equinox. The ancient practice is the basis for the rule now, but after the dispersion the Jews observed only 15 Nisan and not both days as they did previously in accordance with Deuteronomy 16:5-7.
We also see from Socrates here that the Council of Nicaea did not fix the timing of Easter as the Audiani claimed (see NPNF, ibid., p. 131 and fn. 14 to p. 131). It was determined according to ancient tradition and, as we know, that tradition derived from the worship of the god Adonis and the god Attis in conjunction with Ishtar or Venus, and from the worship of the Sun system. It resolved the conflict in the heathen systems of Attis and Adonis. Nicaea simply adopted Easter as the official festival using existing pagan practice, but harmonised it. It did not fix or determine the festival. The Jews had established an entirely false calendar by 358, not long after Nicaea, as we see here from Socrates. This event is much closer to his time and, hence, more accurately noted. Thus, the Christian Passover was all but eliminated, whether by the paganism that established Easter or by the false calendar of rabbinical Judaism, which moved the Passover dates in Nisan in relation to the Moon. The Council of Nicaea’s determination of Easter Sunday as the Sunday following the full moon in effect made it virtually impossible (though not quite) for Easter Sunday to fall on the same Sunday as the Wave-Sheaf offering of the Sunday within the Passover, should that fall on 15 Nisan. Thus, Easter and the Passover can coincide correctly only on rare occasions. This was allegedly out of a desire to distance Christianity from the Jews, but in reality it is the determination of the system of a false god to dislocate the true festival and bring it into conformity with pantheistic worship.
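The "Sunday following the full moon" rule is arithmetically determinate. Purely as an illustration, the standard modern computus of the Western churches (the so-called Anonymous Gregorian algorithm, published long after Nicaea and not the method used by any ancient party to this dispute) can be sketched as follows:

```python
def easter_date(year):
    """Gregorian Easter Sunday as (month, day) - Anonymous Gregorian computus."""
    a = year % 19                         # position in the 19-year Metonic lunar cycle
    b, c = divmod(year, 100)              # century and year-within-century
    d, e = divmod(b, 4)                   # leap-century corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # epact: age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days forward to the following Sunday
    m = (a + 11 * h + 22 * l) // 451      # correction keeping the date within bounds
    month = (h + l - 7 * m + 114) // 31   # 3 = March, 4 = April
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

# Easter Sunday always falls between 22 March and 25 April inclusive.
print(easter_date(2024))  # (3, 31): 31 March 2024
```

Note that the "full moon" in this calculation is the ecclesiastical (tabular) moon derived from the epact, not the observed astronomical moon, which is a further reason the resulting Sunday diverges from 14/15 Nisan.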
The meaning of Easter
The English language itself is most telling here. The Passover was termed the Pasch in the early Church writings. The term Easter is from the ancient Anglo-Saxon form.
The Universal Oxford Dictionary gives the meaning of Easter as coming from the Old English éastre or the feminine plural éastron. It says:
Baeda derives the word from Eostre (Northumb. sp. of Éastre), a goddess whose festival was celebrated at the vernal equinox (ibid.).
The dictionary then proceeds to ignore this lead-in and goes on to associate the word with a Christian festival, after identifying its earliest use with the cult of the goddess.
The vernal or spring equinox is the time when day and night are of equal length (hence, equinox) and the days begin to lengthen beyond the nights, as growth begins to quicken. Thus, the symbolism is of fertility.
From this we associate such symbols as rabbits, eggs etc. The rabbit was a symbol of fertility in the ancient Babylonian system and it is found in the archaeological record. Rabbits were used in ancient homoeopathic magic from Africa to America (Frazer, i, pp. 154-155). They were also used in ceremonies to stop rain (i, p. 295).
Christianity was not alone in adopting the egg symbol into its ritual. Rabbinical Judaism also adopted the practice of including an egg on the Seder table at Passover, thus profaning the Passover meal on a yearly and ritual basis. Coupled with their adoption of the Hillel calendar, they virtually never keep the Passover themselves and, by virtue of the false calendar system they have adopted, prevent any who try to follow their system from doing so.
Ishtar or Astarte
Easter (fem. pl. Eastron) is actually the name of Ishtar, which is another name of Astarte as we see above. As Ashtaroth, which is the Hebrew plural form denoting various local manifestations of Astarte (Deut. 1:4; Greek Ashtoreth), she was the Canaanite fertility goddess Athtarath, pronounced seemingly Ashtarath or Ashtereth.
From this, the Greeks derived Astarte and the Hebrews in writing the heathen god’s name in the biblical text seemingly kept the consonants but replaced the vowels with the vowels for the word bosheth or shame. Ashtarath or Ishtar became Easter in the Anglo-Saxon prior to their arrival in Britain.
At Ras Shamra, in the form of Anat, she plays the leading role during the eclipse of the Sun god Baal as the vegetation deity (Interpreter’s Dictionary of the Bible, Vol. 1, p. 254). She is less conspicuous in Palestine as Ashtaroth than as Astarte, who assumes the role of Anat there. What we are seeing is the same role played by this goddess under different names, seemingly depicting some local or other aspect of significance. This is seemingly the same as the Artemis-Diana distinction. The seasonal rituals of the fertility cult of Baal and Astarte are noted in early Israel (Jdg. 2:13; 10:6; cf. Interp. Dict., ibid.). Samuel at Mizpah, at the election of Saul, ordered Israel to put away the Baalim and Ashtaroth, thus indicating they were associated and plural (1Sam. 7:4). Israel did not do so and confessed its apostasy after the defeat by the Philistines (1Sam. 12:10). From 1Samuel 31:10, we see her cult at Beth-shan, which was not occupied by Israel, being destroyed at the time of David. Hence, her cult was general to the area.

She is called Ashtaroth of the Horns (Ashteroth-karnaim). This city was a city of the Rephaim and within the territory of Og, king of Bashan (Deut. 1:4; 3:10; Josh. 12:4). Chedorlaomer raided the Rephaim there (Gen. 14:5). It was later settled by Machir (Josh. 13:12,31) and became an Israelite city of refuge (1Chr. 6:71; cf. Josh. 21:27). This is representative of the goddess Astarte depicted as the horned goddess and represented in the same way as Hathor, the cow goddess of Egypt. This is the representation of Ishtar with the Moon god Sin, whose upturned horns are identified in the crescent moon on the horizon, with Venus as the evening star (cf. the paper The Golden Calf (No. 222)). The system was thus ancient and was central to the Rephaim and the religious systems of Egypt and Asia Minor generally, but centred on the Assyro-Babylonian system.
The form of the word Ashteroth (a. soneka) is also a common noun meaning young of the flock or breeding stock, referring to productivity of sheep (cf. Deut. 7:13; 28:4,18,51). The ancient etymology of the terms suggests the connection with the breeding or fertility system and may even be why the sun sign of the month of the equinox was named as Aries or the Ram by the ancients.
Astarte, or Easter in her various forms, is the Mother goddess mentioned above and was associated with the son-lover as Lord, which is the meaning of Baal, Adonis etc. As the Heavenly Virgin or Mother-goddess figure, she was involved, as we see, in the symbolism of the golden calf that led Israel astray at Sinai under Moses (cf. ibid.). In this Trinity of the Star, the Sun and the Moon we see her as goddess of sensual love as the evening star (hence, also Venus) and goddess of war as the morning star. This war role was attributed to Aphrodite. This title is directly related to Satan from Isaiah 14 and Ezekiel 28. She is related to the Moon god Sin, from whom we derive our concept of the word sin, and is in association with the Sun as the third member of the Trinity. The festivals are tied to this symbolism.
The cult of Ashtoreth was patronised by Solomon (1Kgs. 11:5). Her cultic place established on the Mount of Corruption on the Mount of Olives across from Zion was abolished during Josiah’s reformation. In both cases, this cult is tied to the Phoenicians and, particularly, the Sidonians. Thus, the Bull system of Sin and the sacrifices of the Minotaur in Crete are also associated here through the early maritime system of the Sea Lords. Her worship is directly linked with the worship of Milcom, god of the Ammonites, and Chemosh of the Moabites. They appear to be associated with her in the form of Athtar, the astral Venus, of which Ashtoreth is the female form. She is the consort and ally of Baal in the conflict with the Sea-and-River in the Ras Shamra texts and, in the text from the nineteenth dynasty in Egypt, she was the bride claimed by the tyrant Sea. She was associated with Baal as the Giver of Life or Death in the saga of king Keret from the Ras Shamra texts. Here, the king invokes a curse in the name of Athtarath-the-name-of-Baal. Thus the name is associated with Baal and has both male and female aspects as consort and giver of fertility. At Ras Shamra, her place was usurped by Anath, sister of Baal but, from the biblical and Phoenician inscriptions, she was the most prominent deity anciently (Interp. Dict., ibid., art. ‘Ashtoreth’, pp. 255-256; cf. the paper The Golden Calf (No. 222)).
The Egyptians, under the Ptolemies at Edfu, depicted Ashtoreth as a lion-headed goddess. This is again an association with the lion-headed Aeon and the Mysteries. As Quodshu or holiness, holding a papyrus plant and a serpent, she stands on a lion between the Egyptian fertility god Min and Resheph, the Semitic god of destruction and death. Her hair is worn in the stylised fashion of the horns of the cow-goddess Hathor. Bronze figurines from Gezer depict a nude figure with horns, which are considered to be that of Ashteroth. Her cultic systems flourished at Beth-shan from the fifteenth to the thirteenth centuries BCE and, in the second century BCE, there was a cult centre at Delos to Astarte of Palestine (ibid., p. 256). The fertility symbols found are of the goddess with the horned headdress and the breasts pronounced, often holding a lotus flower and a serpent. Where the Mother goddess is depicted, it is Ashera, with a dove clutched to the breast. She is also associated with the Phoenician god of healing, Eshmun, from an undated inscription from Carthage. This role is endemic to the cult throughout and is found among the Celts and Druids, who were exposed to the Sea Lords very anciently. A name associated with her in the Assyrian form Ishtar is Ishtar-miti-uballit or Ishtar make the dead to live (ibid.). Thus, the resurrection theme is associated with her, as Easter, at the Easter festival.
The Queen of Heaven
The prophet Ezekiel condemns the women in Israel for weeping for Tammuz (Ezek. 8:14). This Syrian deity was mourned as the dying god in idolatrous Israel.
Tammuz was associated with the Queen of Heaven, who was also the Heavenly Virgin, as we have seen. Cakes were baked to her, and the prophet Jeremiah condemns this practice outright (Jer. 7:18; 44:19).
The Queen of Heaven was, as we see, an ancient Oriental goddess. She was also associated with the harvest, and the last sheaf of corn of the harvest was often dedicated to her and called the Queen (Frazer, ii, p. 146; vii, p. 153).
The Queen at Athens was married to the god Dionysus (ii, pp. 136ff.; vii, pp. 30ff.). It appears that the consummation of the divine union, as well as the espousals, was enacted at the ceremony. It is not known whether the part of the god was played by a man or an image. Attic law required that the Queen be a burgess and have known no man but her husband (Frazer, ii, p. 136). She was assisted by fourteen sacred women, one for each of the altars of Dionysus. This Dionysian ceremony of the Mystery cults was enacted on the 12th of Anthesterion (roughly February). The fourteen were sworn to purity and chastity by the Queen at the ancient shrine of Dionysus on the Marshes, which was opened on that day of the year only. Her marriage seemingly took place later and, according to Aristotle (Constitution of Athens, iii, p. 5), at the old residence of the king on the north-eastern side of the Acropolis, known as the Cattle stall. It was nevertheless part of this ancient fertility festival of the vines and fruit trees of which Dionysus (Bacchus to the Romans) was the god (cf. Frazer, ii, pp. 136-137 and n. 1).
The Queen became consort of the gods but remained the fertility goddess and Mother goddess. In this role, the Queen of the corn-ears was drawn in procession at the end of the harvest.
The Queen of Egypt was also the wife of Ammon (ii, pp. 131ff.; v, p. 72) and thus personified the goddess in her own person. This degenerated in later years, when the divine consort was a young and beautiful girl of good family who led the loosest of sexual lives until she reached puberty, and was then mourned and given in marriage (Strabo, xvii, I, 46, p. 816). The Greeks called these Pallades after their virgin goddess Pallas.
This prostitution appears anciently to have been associated with the worship of Ishtar and, indeed, most female devotees of Easter or Ishtar spent at least some time as young girls enrolled as temple prostitutes in the cult centres of Asia Minor. At Corinth, prostitution was general, and virtually everyone in the city was at one time or another involved with it.
The prophetess of Apollo also had this role of consort. So long as the god tarried at Patara, his winter oracle and home, his prophetess was shut up with him every night.
As Artemis, the many-breasted goddess of fertility at Ephesus, the goddess had consorts who were termed Essenes or King Bees and who seem to have been entirely celibate for a fixed period of time, being dedicated to the goddess. The records and inscriptions at Ephesus indicate, however, that some were married.
She had a grove of fruit trees around her temple (Frazer, i, p. 7). She was thus associated with Demeter, who was termed the fruit bearer (vii, p. 63). In this way she was also identified with Diana, who was patroness of fruit trees as was she herself (i, pp. 15ff.). This Mother goddess is identified by Frazer with the King of the Wood and his woodland goddess Diana at Nemi. This appears to make perfect sense and would explain why the crowd at Ephesus, in Acts, referred to the goddess as Diana of Ephesus. This aspect has been transferred to the cult of the Virgin and fruit trees are blessed on the day of the Assumption of the Virgin (Frazer, i, pp. 14ff.). The cult of the Virgin in Christianity is nothing but the cult of Ishtar, Astarte, Diana or Artemis in ancient paganism in new guise and sometimes in the same clothes.
The relationship with the Mysteries in Egypt carries on to the cult of Osiris, whose worshippers were forbidden to injure fruit trees (Frazer, vi, p. 111). Dionysus was also a god of fruit trees (vii, pp. 3ff.). We see an intertwined relationship here, which shows that these are not really different gods but different aspects of the same system of worship with variations on a theme.
These Essenes at Ephesus were expected to have no intercourse with mortal women, just as the wives of Bel and Ammon, from early times, were expected to have no intercourse with mortal men. There seems to be a logic in the celibate dedication to the Queen of Heaven as Mother goddess. That is why the priests dedicated to her were celibate or eunuchs. This practice entered Christianity from the pagan cults and Gnosticism in its adaptation of the Mystery cults (see the paper Vegetarianism and the Bible (No. 183)). The females in the Ishtar cult in Asia Minor were not celibate, but promiscuous. It is probable that Pliny called the Sons of Zadok at Qumran Essenes because some of their orders were celibate ascetics. They themselves used no such title, and the application of the name of the priests of a pagan god would have been offensive in the extreme.
As Queen of May, the goddess was representative of the spirit of vegetation (ii, pp. 79,84) both in France (ii, p. 87) and in England (ii, pp. 87ff.).
It seems to be a common view that the Mother was also goddess of the Corn, and the last of the harvest is often dedicated to her in symbolism and a special cake is made of this last of the harvest and dedicated to her. The symbolism runs throughout Europe in varying forms and has the same symbolism being identified with this Queen of the harvest (cf. Frazer, vii, pp. 149-151).
A sacrificial cake is baked of new barley or rice (Frazer, viii, p. 120). The barley harvest is at Easter or Passover. Among the Hindus, sacrifice was made at the beginning of the harvest, either at the new or full moon. The barley was reaped in spring and the rice in autumn. From the new grain a sacrificial cake was set forth on twelve potsherds sacred to the gods Indra and Agni. A pap of gruel or boiled grain was offered to the pantheon of deities, the Visve Devah, and a cake on one potsherd was presented to Heaven and Earth (ibid.). This is similar to the record of presenting the cakes to the Queen of Heaven referred to by Jeremiah and appears to have been anciently common to all the Aryans. The sacrifices in the Hindu system were of the first-fruits and the fee of the priests was the first-born of the cattle and, thus, we are seeing the ancient first-fruits system among the Aryans entering Hinduism. The harvest goddess is Gauri, wife of Siva. Rice cakes or pancakes are offered to a plant-formed effigy of Gauri. On the third day, it is thrown into a river or a tank. A handful of dirt or pebbles is brought home from the spot and thrown about the house and gardens and trees to ensure fertility. This is the same effect as the custom of sweeping churches in Italy on the third day of the Easter festival, and shows an ancient common tradition much older than Christianity. The cakes have become hot cross buns in Christianity.
The same practice is among the Chins of Upper Burma as an offering of first-fruits to the goddess Pok Klai.
This Mother-goddess figure entered Buddhism and the East as the goddess Kuan-yin, who became the Avalokitesvara of the Mahayana system.
She entered Christianity as the Heavenly Virgin called Mary. She was made the mother of Jesus Christ and blasphemously termed Mother of God.
The Black Madonna
We can see now that the Mother-goddess figure entered Christianity as the Virgin Mary. She is termed the Madonna. We can see that her aspect as goddess of the spirit of vegetation was emphasised in the application of a black face to the goddess in her role as Demeter or the spring goddess of fertility in her aspects of Artemis or Diana.
In Christianity, this aspect seems to be known as the Black Madonna.
There was no cult of the Virgin Mariam or Mary in the early centuries of the Church. The ERE in dealing with the cult of Mary says:
No mention of Mary’s name, nor reference to her, occurs in the notices of Holy Communion in the NT; nor in the liturgical thanksgiving in the 1st epistle of St. Clement of Rome; nor in the Didache; nor in Justin Martyr’s or Tertullian’s account of the Eucharistic services. The only place where an invocation of St. Mary could come in is at the Commemoration of Martyrs and the Commemoration of the Departed; and on this all that St Cyprian has to say is:
‘Ecclesiastical discipline teaches, as the faithful know, that at the point where the martyrs are named at the altar of God, there they are not prayed for but for others who are commemorated prayer is offered (Epp. i, [Opera, Oxford, 1682, p. 81])
There is no direct evidence that among ‘the martyrs’ the Virgin was so much as mentioned (ERE, Vol. 8, pp. 475-476).
Mariolatry was introduced some time later, from the Eastern rites. After the Church was adopted by the Roman Empire the heathen practice, or heresy, was adopted, and the practice is recorded by Epiphanius:
… as heresy (Her, lxxix) that ‘certain women in Thrace, Scythia, and Arabia’ were in the habit of adoring the virgin as a goddess and offering to her a certain kind of cake [kollurida tina] whence he calls them ‘Collyridians’. Their practice (cf. Jer. 44:19) and the notion underlying it were undoubtedly relics of heathenism always familiar with female deities.
These cakes were made to the Queen of Heaven at her festival, the festival of Ishtar or Easter or Astarte, since long before the Babylonian captivity.
Epiphanius was adamant that Mary (her name was actually Mariam and Maria was her sister) was not to be worshipped. In the Liturgy of St Mark (Alexandrian), Mary was originally included in the prayer that God would give rest to the holy dead (ERE, ibid., p. 478). Mary or Mariam was seen as being quite dead and among those awaiting the resurrection.
The Trinitarians, particularly the Cappadocians, elevated Mary in response to the arguments of the non-Trinitarians later called Arians (cf. ERE, ibid., p. 476). They elevated Christ to God and then elevated ‘Mary’ as Mother of God and, hence, the Mother goddess and mother of the gods. These ideas were purely heathen and did not originate until the end of the fourth century. W. M. Ramsay argues that:
… so early as the 5th. cent. the honour paid to the Virgin Mary at Ephesus was the recrudescence in a baptized form of the old pagan Anatolian worship of the Virgin Mother (Pauline and Other Studies, p. 126; cf. ERE, ibid., p. 477, n. 1).
The Virgin Mary was none other than Artemis or Diana of Ephesus that Paul so boldly opposed (Acts 19:24-35).
By the medieval period, up to the close of the Council of Trent in 1563, we see that Mary had been elevated in the liturgy, being mentioned by name as:
… the most holy, stainless, blessed, Our Lady, Mother of God and the sequence of thought, which still shows she is prayed for is interrupted by a salutation ‘Hail thou that art full of grace ... because thou did bring forth the saviour of the world’ (ERE, ibid., p. 478).
There is no doubt Mariam, or Mary, the mother of Christ, was originally thought of as dead and was prayed for and not to and this was eroded by the Mother-goddess cult whose place she took.
The Mother-goddess was given a black face as Demeter, goddess of fertility, in the December rites, and as the Black Madonna she was thus related to the fertility and Mystery cults. Her cult, in any form, is pagan and an affront to Christianity.
The Council of Trent tried to reduce the idolatry associated with Mary and make distinction in the concepts of worship accorded to God, Jesus, Mary and the saints.
The effects of the Council were later eroded by successive popes, down to the present day.
Hanukkah and Purim
A festival of the Jews that mirrors the influence of the Persians and the Greeks is that of Hanukkah. It has no biblical sanction, and work does not cease on it. It is a festival of the 25th of the ninth month, called Chislev or Kislev, which approximates December.
We know from Baruch 6:19ff. that the Babylonians lit candles before their idols and this was mentioned somewhat disparagingly in Baruch. The Greeks had also taken over this system, as we see from the references above. From the time of the Seleucid kingdom and its influence over Judah, the Hellenisation of Palestine was unavoidable.
Its political influence over Jerusalem was considered marginal, according to Hayyim Schauss in his work The Jewish Festivals: History and Observance, Chanukkoh (Schocken Books, p. 211). One only has to look at the fact that the grove of a Greek god stood at Bethlehem (see below) to see the naivety of this statement. He admits on page 212 that the Hellenisation process was of political and economic interest. The governing party in Jerusalem under Syrian rule was the Hellenistic aristocratic party. The conflicts within this system came to a head under Antiochus Epiphanes. The High Priest was Jason (altered from Joshua), a Hellenised Jew of the aristocratic pro-Syrian party. He erected a gymnasium at Jerusalem and introduced Greek games. Jews adopted Greek names and culture (cf. Schauss, p. 213).

When the Syrian-Egyptian war broke out, the conservative Jason was deposed by the more radically pro-Greco-Syrian Menelaus (Menachem). A rumour that Antiochus had been slain on the battlefield emboldened Jason to enter Jerusalem with 1,000 men and attack Menelaus. Antiochus entered Jerusalem and began to slaughter every advocate of the Egyptian party. He plundered the Temple and removed the treasure and all the gold and silver utensils. Menelaus was left in charge. A year later Antiochus again marched against Egypt, but was ordered to withdraw by the Roman senate and was forced to comply (cf. Schauss, p. 214).

Antiochus was then forced to consolidate the empire against Roman and Egyptian power. To do this, he demanded the worship of Greek gods. The Jews did not comply, and he was impelled to send an army into Palestine to force compliance. The Temple was turned into a Grecian temple. The death penalty was introduced for observance of the Jewish faith.
A new strictly nationalist party emerged under Judah Maccabee and his brothers of the Hasmonean family.
On 25 Kislev they rededicated the altar of the Temple and instituted a yearly eight-day festival commencing on that day. They forced the repeal of the anti-Jewish laws of the Syrians and began to erect an independent Jewish kingdom in Palestine. This kingdom lasted less than 100 years before being swallowed up by the Romans.
Schauss makes a telling statement on page 216. He says:
For centuries since the Babylonian captivity they were a small and weak community in the little land of Judah ... It was only through the revolt and victory of the Hasmoneans that the latent forces of the people were aroused, and the various trends in Jewish spiritual life attained distinct forms. Jews grew enormously in numbers and power during that period.
Hanukkah is allegedly to commemorate the victory of the Hasmoneans. What we see is a period of total religious syncretism with the support of a party of the Jewish people. The practice of lighting tapers or candles over an eight-day period commencing in early December often coincides with the Saturnalia or the festivals of Demeter and the Mother goddess in Egypt, as we see above. It is indicative of the adaptation of a foreign practice to commemorate the victory of a Jewish aristocratic party and appropriate to itself the legitimacy of the previous aristocracy in the eyes of the people. This practice has no biblical sanction. Haggai 2:10-19 speaks of 24 Kislev as the period of the Temple restoration. The wrong date is involved for the application of this prophecy (see also the paper The Oracles of God (No. 184)).
An indication that the same thinking is involved in these Jewish festivals is the note 305 by Schauss (on p. 310) to the text on Purim and the practice of eating beans there, where he says:
The primitive source of this custom must be sought for in the primitive character of Purim as a season festival. For, exactly like beating and masquerading, legumes were also, in the belief of the peoples, a charm against the spirits. For this same reason beans are also eaten at a wedding.
Note the beating and masquerading attendant on the eating of the bean. There is also the practice, now found only among Oriental Jews, of burning Haman at Purim.
In the same process, Judas is burnt among the Roman Catholics of Europe. The same aspects of beating and masquerading are common to all.
Schauss says in relation to Purim and the consumption of Kreplech and the Hamantaschen:
The word Kreplech obviously comes from the German and like many other forms of Purim observance was taken over from ‘Shrove Tuesday’ of the Christians and made a part of Purim. From Purim, it must be assumed the custom of eating Kreplech was carried over to the day before Yom Kippur and to Hashano Rabboh (ibid., p. 270).
He suggests the jesting explanation has been made that they are eaten on the days when beating is done – hence, the day before Yom Kippur when men flog themselves; Hoshano Rabboh when the willow branches are beaten; and Purim when Haman is beaten (p. 270).
The practice anciently was to burn lights at Hanukkah. Haman was burnt on the gallows at Purim. This is the origin of Christian objections to the practice, on the grounds that the figure burnt was identified with Christ. When the burning was done, ten candles were lit for the sons of Haman.
We see here the concept of the candle as the single soul of the individual, and of the burning of candles to create light. This practice can only be Assyro-Babylonian in origin and of pagan animist derivation. The candle-lighting has died out along with the burning, with which it was coupled. The candles were lit to placate the spirits of the ten demons.
Schauss shows that the theatrical aspects of the festivals began at Chanukkoh (or Hanukkah), but were predominant at Purim in the ghetto.
He says of the Purim masquerade:
It is ordinarily assumed that the Purim masquerade originated among the Jews of Italy, through the influence of the Christian Carnival, and that from Italy it spread to Jews of other lands. It is more logical to assume, however, that the masquerade belonged to Purim from the very start, together with the noise making. Both the noise-making and the masquerading were originally safeguards against evil spirits, against whom it was necessary to guard oneself at the change of the seasons. It would be truer to say that the Purim Mask and the Christian Carnival have the same heathen origin, with the season of the year and the approach of spring and both later took on new significance (p. 268).
He notes the custom among the Talmudic academics, until recently, of electing a Purim-rabbi (p. 269). This custom developed from the custom of electing the Purim-king, which was akin to the election of the King of the Bean or the King of Fools in Europe (see above).
These clearly and admittedly heathen practices associated with festivals not commanded to be observed indicate that we are dealing with the ancient primitive festivals of the fertility cults that entered Judaism from the same sources as they entered the Roman and Orthodox systems, namely from the Assyro-Babylonians, and then the Greeks and Egyptians. They lead up to the Passover in the same way as the other systems lead up to Easter.
The traditions of Judaism are as perverted as those of mainstream Christian sects. Indeed, they are of a common heathen origin; Babylon the Great rules the entire world.
The worship of Adonis at Easter
The remnants of the cult of the worship of Adonis are found to this day in Sicily and Calabria. In Sicily, gardens of Adonis are still sown in spring as well as in summer, from which Frazer infers that Sicily as well as Syria celebrated an old vernal festival of a dead and risen god. Frazer says:
At the approach of Easter, Sicilian women sow wheat, lentils and canary seed in plates, which they keep in the dark and water every two days. The plants soon shoot up; the stalks are tied together with red ribbons, and the plates containing them are placed on the sepulchres, which with the effigies of the dead Christ, are made up in Catholic and Greek churches on Good Friday, just as the gardens of Adonis were placed on the grave of the dead Adonis. The practice is not confined to Sicily but is observed in Calabria and perhaps in other places (Frazer, ibid., v, pp. 253-254).
The gardens are also still sown in Croatia and are often tied with the national colours.
Frazer draws attention to the widespread nature of this cult in Christian guise. The Greek church incorporated the festival in the procession of the dead Christ around Greek cities from house to house, bewailing his death.
Frazer is of the view that the church has skilfully grafted the festival of the dead god Adonis onto the Easter festival of so-called Christianity. The dead and risen Adonis became the dead and risen Christ. The depiction by Greek artists of the sorrowful goddess with the dying lover Adonis in her arms resembles, and seems to have been the model for, the Pietà of Christian art: the Virgin with the dead body of her son in her lap (ibid., pp. 256-257). The most celebrated example is the one by Michelangelo in St Peter’s.
Jerome tells us of the grove to Adonis located at Bethlehem. Where Jesus wept, the Syrian god and lover of Venus was bewailed (ibid., p. 257). Bethlehem means the House of Bread and thus the worship of Adonis, as god of the corn, came to be associated with Bethlehem rather than the bread of life that was Messiah.
This was itself probably deliberate to assimilate the belief in the Syrian god Adonis and his lover Ishtar or Astarte, the Venus of the Romans.
The first seat of Christianity outside of Palestine was Antioch, and it was occupied by the Apostle Peter, as bishop. It was here that the cult of Adonis was entrenched and the death and resurrection of the god was celebrated annually with great solemnity.
When the emperor Julian entered the city at the time of the celebration of the death and resurrection of the god Adonis, he was greeted with such salutations that he marvelled at them as they cried: "The Star of Salvation has dawned upon them in the East" (Ammianus Marcellinus, xxii, 9. 14; cf. Frazer, v, n. 2 to p. 258).
Rain-making at Easter
In order to ensure the growth of the crops, it was necessary to have rainfall by the equinox to get spring under way.
In order to do this, various rain-making ceremonies were held anciently by exposing the gods to various forms of hardship. In Italy on Palm Sunday, the Day of the Sun god at the Easter festival, the consecrated palm branches were hung on trees. The churches were swept and the dust was sprinkled on the gardens (see also above). Special consecrated candles were also lit to ensure rain. The statue of St Francis of Paola is credited with annually bringing the rain when he is carried every spring through the market gardens.
In the great drought of 1893, it is recorded that after some six months of drought the Italians could not induce the saints to bring rain by candles, bells, illuminations, fireworks and special masses and vespers. They banished the saints after they had scourged themselves with iron whips to no avail. At Palermo, they dumped the statue of St Joseph in a garden to see the state of things for himself, with the intention of leaving him there until rain fell. Other statues were turned to the wall like naughty children. Others were stripped of their regalia and banished from their parishes, being dunked in horse ponds, threatened and grossly insulted. At Caltanissetta, the statue of the Archangel Michael was stripped of his golden wings and robes, given pasteboard wings instead, and had a clout wrapped around him. The statue of St Angelo at Licata fared even worse, as it was stripped and left naked. The statue was reviled, put in irons and threatened with drowning or hanging. The angry people roared at him, shouting: "Rain or the rope!" (Frazer, i, p. 300).
This story, as farcical as it is, was carried out with deadly seriousness some 100 years ago in a civilised so-called Christian country with the knowledge and consent of the Catholic Church. The activities demonstrate the connection in the minds of the peasantry with the ancient agricultural system, and the so-called statues of the saints have simply replaced those of the ancient gods of the harvest, namely Adonis, Attis, Astarte, and Zeus as the god of rain etc.
These practices were based on the same ideas and concepts found in ancient China and elsewhere in the East. In 1710 on the island of Tsong-ming in Nanking province, the viceroy, after attempting in vain to placate the deity, banished it, shut up its temple and placed locks on the doors. Rain fell soon afterwards and the deity was restored. In April 1888, the Mandarins of Canton prayed to the god Lung-wong to stop the incessant downpour of rain. He did not heed them, so they put him in a lock-up for five days and the rain duly ceased. He was then restored to liberty (Frazer, i, pp. 298-299). The ideas are thus exactly the same and precede Christianity by millennia. However, they were absorbed into it and remained prevalent into this century.
In fact, the ideas still exist within the legends and minds of a superstitious peasantry, encouraged by ignorance and a manipulative priesthood.
The Morning Star
The cult of Adonis involved the divine mistress of Adonis whose ancient name was Astarte, who was identified with the planet Venus. Thus, the star was the symbol both of the god and his lover.
It is also biblically the symbol of Satan and hence the visions of the Virgin are related to the Morning Star and can only be of demonic significance. The Adversary poses as an angel of light.
Astarte, the divine mistress of Adonis, was identified with Venus by the Babylonians, whose astronomers carefully noted her transition from Morning to Evening Star, drawing omens from her appearance and disappearance (Frazer, v, p. 258). It is reasonable, then, to assume that the festival of Adonis was timed to commence with her appearance as the Morning or Evening Star. As the star that the people of Antioch saluted was seen in the East, and if it was indeed Venus, it can only have been the Morning Star. From this we can deduce that the term Easter relates also to the word for East and to this pagan goddess of the dawn.
Frazer holds that the festival of Astarte at the ancient temple at Aphaca in Syria was timed to start with the fall of a meteor from the heavens, which on a certain day was timed to fall from the top of Mt Lebanon to the river of Adonis (v, p. 259). This seems a little too convenient and it may be that the morning star he attributes to Antioch and elsewhere is this same meteor that represents the star of the goddess falling from Heaven into the arms of her lover (ibid.). The placing of the temple at Aphaca in relation to Mt Lebanon and the River Adonis would give, therefore, a precise location of the temple in relation to the rise of the morning star on the first day of the Sun following the vernal equinox of each year. Fairly accurate triangulation should locate the temple with a fair degree of accuracy on this hypothesis.
Frazer’s attempts to locate this star with Bethlehem and the wise men cannot possibly be correct.
The link, however, with the god Adonis and Astarte is absolute. The coupling of these festivals with Adonis, and also with Attis as the dead and risen god to whom the pine was sacred, is conclusive (Frazer, v, p. 306). The symbol of the dead man hanged on the tree, absorbed into it and then resurrected, is the basis behind the relics of the cross being all of pine. The Easter system, with its rekindling of new fires or need-fires, is entirely non-biblical and anti-Christian.
Christianity compromised with its rivals in order to accommodate a still dangerous enemy. In the words of Frazer, the shrewd clerics saw that:
If Christianity was to conquer the world it could only do so by relaxing the too rigid principles of its Founder, by widening a little the narrow gate that leads to salvation.
He makes the telling but incorrect argument that Christianity was like Buddhism, where both were essentially ethical reforms which could only be carried out by a small number of disciples who were forced to renounce their family and the state. For the faiths to be accepted, they must be substantially reformed to appeal to the prejudices and passions and superstitions of the vulgar. This happened in both Judaism and in Christianity.
In this way, the faith of Messiah was subverted by worldly secular priests, who accommodated the Faith to the religions of ancient Rome and the sun-worshipping Mystery cults. This perversion of the Faith started with the basic festivals, which replaced the festivals of the Bible with those of the sun-worshippers. They introduced Christmas and Easter and then Sunday worship, which replaced the Fourth Commandment regarding the Sabbath. They invented the myth of the perpetual virginity of a woman they called Mary, rather than Mariam, to disguise the fact that they had murdered her sons and their descendants, the brothers and nephews of the Messiah of the world, the Son of God who came to teach them the truth and save them from themselves (see the paper The Virgin Mariam and the Family of Jesus Christ (No. 232)). The Christmas symbolism involves this Virgin bringing forth an infant from a cave year after year, as the eternal Sun comes forth in its infancy at the solstice.
The symbolism conveyed by the true Feasts of God contained in the Bible is deliberately obscured so that no growth in the Faith and in the knowledge of the One True God is possible.
The ignorant teach their children lies in the misguided belief that somehow that will make them happy. The society reduces its people to idolaters on the basis of commercialism and greed, following practices steeped in paganism and false religion. Keeping Christmas and Easter is a direct involvement in the sun-worshipping and Mystery cults and is a direct breach of the First and Fourth Commandments among others.
Christ called them hypocrites and quoted God speaking through the prophet Isaiah (Isa. 29:13):
This people draweth nigh unto Me with their mouth and honoureth Me with their lips; but their heart is far from Me. But in vain do they worship Me teaching for doctrines the commandments of men (Mat. 15:8-9; Mk. 7:6-7).
God has given His Laws through His servants the prophets. Soon, the Messiah will return to enforce those Laws and that system.