What is Polymerase chain reaction (PCR)?
- Polymerase chain reaction (PCR) is a widely used enzymatic process that allows scientists to replicate a specific region of DNA, resulting in the production of many copies of a particular DNA sequence. PCR combines the principles of complementary nucleic acid hybridization and nucleic acid replication to amplify a single copy of a target DNA sequence, even if it is undetectable by standard hybridization methods. This amplification process can generate millions to billions of copies of the target DNA in a relatively short period of time, providing an abundant amount of DNA for further analysis.
- PCR was invented in 1983 by American biochemist Kary Mullis at Cetus Corporation, for which he was awarded the Nobel Prize in Chemistry in 1993, jointly with biochemist Michael Smith. Since its development, PCR has become a fundamental technique in genetic testing and research. It is used in a wide range of applications, including the analysis of ancient DNA samples, identification of infectious agents, gene cloning and manipulation, DNA sequencing, diagnosis of genetic disorders, forensic science, and detection of pathogens in infectious diseases.
- The PCR method typically relies on thermal cycling, which involves exposing the reaction to repeated cycles of heating and cooling. These temperature changes facilitate specific reactions, such as DNA melting and enzyme-driven DNA replication. Two key components of PCR are primers and a DNA polymerase. Primers are short single-stranded DNA fragments, known as oligonucleotides, that are complementary to the target DNA region. The DNA polymerase is the enzyme responsible for assembling new DNA strands using the primers and free nucleotides.
- The PCR process involves several steps. In the initial denaturation step, the double-stranded DNA template is heated to a high temperature, causing the two strands to separate. Then, the temperature is lowered, allowing the primers to bind to the complementary sequences of the DNA. Once the primers are bound, the DNA polymerase synthesizes a new DNA strand using the template DNA and free nucleotides. This process is repeated multiple times, with each cycle resulting in the amplification of the target DNA. The DNA generated in one cycle becomes the template for the next, leading to an exponential increase in the amount of DNA.
- To ensure the success of PCR, a heat-stable DNA polymerase is typically used, such as Taq polymerase, which was originally isolated from the bacterium Thermus aquaticus. Taq polymerase can withstand the high temperatures of the denaturation step without denaturing itself. Prior to the use of Taq polymerase, DNA polymerase had to be manually added after each cycle, which was a time-consuming and expensive process.
- PCR has revolutionized various fields of research and diagnostics. It has enabled scientists to amplify and study minute amounts of DNA, providing valuable insights into genetic information and facilitating advancements in fields such as medicine, forensics, and evolutionary biology.
Definition of Polymerase chain reaction (PCR)
Polymerase chain reaction (PCR) is a widely used enzymatic process that rapidly and exponentially amplifies a specific region of DNA, producing millions to billions of copies of a particular DNA sequence.
Principle of Polymerase chain reaction
The principle of polymerase chain reaction (PCR) involves the amplification of a specific segment of DNA through a series of temperature cycles. The process begins with denaturation, where the target DNA is heated to separate the two strands, resulting in single-stranded DNA. Next, specific primers, designed to bind to each target DNA strand, are added. These primers serve as starting points for DNA synthesis.
During the primer annealing step, the reaction mixture is cooled to a temperature that allows the primers to bind to their complementary sequences on the target DNA. Once the primers are bound, DNA polymerase, an enzyme that synthesizes new DNA strands, extends the primers by adding complementary nucleotides. This primer extension step occurs at a temperature optimal for DNA polymerase activity.
The cycling steps in PCR involve repeating the denaturation, primer annealing, and primer extension steps multiple times. Each cycle results in the duplication of the target DNA. After 25 to 30 cycles, the number of copies of the target DNA can reach at least 10⁷ due to the exponential amplification.
PCR typically utilizes a series of temperature changes, known as thermal cycling, to facilitate the different steps. The cycling steps usually consist of three temperature shifts: denaturation, primer annealing, and primer extension. Denaturation is typically performed at a high temperature (around 94-96°C) to separate the DNA strands. During primer annealing, the temperature is lowered (around 45-60°C) to allow the primers to bind to their target sequences. Finally, primer extension occurs at a temperature optimal for DNA polymerase activity, usually around 72°C.
PCR methods often include multiple cycles of these temperature changes, starting and ending with a hold step at higher and lower temperatures, respectively. The hold steps ensure the completion of product extension and allow for the analysis or storage of the final PCR product. While most PCR methods amplify DNA fragments up to around 10 kilo base pairs (kb), some techniques can amplify even larger fragments, reaching up to 40 kb.
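As a rough illustration, the three-step cycling scheme described above can be written out as a simple temperature program. The sketch below uses Python; the temperatures, hold times, and cycle count are example values drawn from the ranges in this section, not a validated protocol for any particular polymerase or amplicon.

```python
# Illustrative three-step PCR thermal cycling program.
# Temperatures (°C) and times (seconds) are example values from the ranges
# discussed above, not a validated protocol.

CYCLING_PROFILE = {
    "initial_denaturation": (95, 180),
    "denaturation": (95, 30),
    "annealing": (55, 30),      # usually ~3-5 °C below the primer Tm
    "extension": (72, 60),      # ~1 min per kb is a common rule of thumb for Taq
    "final_extension": (72, 300),
    "hold": (4, None),          # indefinite hold for short-term storage
    "cycles": 30,
}

def describe(profile: dict) -> None:
    """Print the cycling program step by step."""
    temp, secs = profile["initial_denaturation"]
    print(f"Initial denaturation: {temp} °C for {secs} s")
    for step in ("denaturation", "annealing", "extension"):
        temp, secs = profile[step]
        print(f"  {step}: {temp} °C for {secs} s  (repeated for {profile['cycles']} cycles)")
    temp, secs = profile["final_extension"]
    print(f"Final extension: {temp} °C for {secs} s")
    print(f"Hold: {profile['hold'][0]} °C")

if __name__ == "__main__":
    describe(CYCLING_PROFILE)
```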
Requirements for PCR
Requirements for PCR involve several components and conditions that are necessary for the successful amplification of DNA. Here are the key requirements for PCR:
- DNA template: The DNA template is the target sequence that will be amplified. It can be isolated from various sources, such as blood, tissue, forensic specimens, or microbial cells, and must contain the target sequence to be amplified.
- Primers: Primers are short, single-stranded DNA molecules that bind to the DNA template and provide a starting point for DNA synthesis. Two primers are used in PCR: the forward primer, which binds to one strand of the target DNA, and the reverse primer, which binds to the complementary strand. Primers need to be designed specifically for the target DNA sequence to ensure specificity and efficiency of amplification.
- DNA polymerase: DNA polymerase is an enzyme that synthesizes new DNA strands by adding nucleotides to the primer. The most commonly used DNA polymerase in PCR is Taq DNA polymerase, derived from the bacterium Thermus aquaticus. Taq polymerase is thermostable and can withstand the high temperatures used during denaturation. Other DNA polymerases with proofreading capabilities, such as Pfu or KOD polymerase, can also be used to minimize errors during amplification.
- Nucleotides: PCR requires all four deoxynucleoside triphosphates (dNTPs) – dATP, dGTP, dCTP, and dTTP, carrying the bases adenine (A), guanine (G), cytosine (C), and thymine (T) – to synthesize new DNA strands. These nucleotides are added to the reaction mixture at equal concentrations.
- Buffer solution: The PCR buffer provides the necessary conditions and chemicals for optimal activity and stability of the DNA polymerase. It typically contains Tris-HCl to maintain the pH, KCl to stabilize primer-template annealing, and MgCl2 as a cofactor for DNA polymerase activity. Other additives may be included depending on the specific PCR protocol.
- Monovalent and divalent cations: Potassium chloride (KCl) is commonly used as a monovalent cation in PCR reactions to optimize the conditions for DNA amplification. Divalent cations, such as magnesium (Mg2+), are essential as cofactors for DNA polymerase activity. Mg2+ ions are typically included in the PCR buffer, although manganese (Mn2+) can be used in certain cases, such as DNA mutagenesis.
- PCR tube: PCR is performed in small, thin-walled plastic tubes called PCR tubes. These tubes allow for efficient thermal conductivity and temperature equilibration during the thermal cycling process.
- Thermal cycler: A thermal cycler, also known as a thermocycler, is a device used to rapidly heat and cool the PCR reaction mixture during the temperature cycling steps. Modern thermocyclers use the Peltier effect to achieve precise temperature control. They are also equipped with heated lids to prevent condensation and evaporation of the reaction mixture.
These requirements, when combined and optimized, enable the efficient amplification of specific DNA sequences using PCR.
Preparation of PCR Reaction Mixture
Add the following ingredients to a PCR tube:
- Sterile Water
- 10X Assay Buffer
- 10 mM dNTP Mix
- Template DNA (100 ng/µl)
- Forward Primer (100 ng/µl)
- Reverse Primer (100 ng/µl)
- Taq DNA Polymerase (3 U/µl)
Note: Tap the tube for 1–2 seconds to mix the contents thoroughly. Then add 25 μl of mineral oil to the tube to prevent evaporation of the contents. Place the tube in the thermocycler block and set the program for DNA amplification.
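The list above gives stock concentrations but not volumes. The sketch below works out per-tube volumes for an assumed 50 µl reaction using the C1V1 = C2V2 dilution relation; the target final concentrations (1X buffer, 200 µM dNTPs, ~1.5 U of Taq, ~100 ng template) are common textbook values used here as assumptions, not part of the original protocol.

```python
# Sketch: per-tube volumes for an assumed 50 µl PCR from the stock
# concentrations listed above. Target final concentrations are assumptions
# (common textbook values), not part of the original protocol.

REACTION_VOLUME_UL = 50.0

def dilution_volume(stock: float, final: float, total: float = REACTION_VOLUME_UL) -> float:
    """Volume of stock (µl) needed so that stock * v = final * total (C1V1 = C2V2)."""
    return final * total / stock

volumes = {
    "10X assay buffer": dilution_volume(stock=10, final=1),          # 1X final
    "dNTP mix (10 mM)": dilution_volume(stock=10_000, final=200),    # 200 µM final
    "Taq DNA polymerase (3 U/µl)": 1.5 / 3.0,                        # ~1.5 U per reaction
    "template DNA (100 ng/µl)": 1.0,                                 # ~100 ng (assumed)
    "forward primer (100 ng/µl)": 1.0,                               # assumed volume
    "reverse primer (100 ng/µl)": 1.0,                               # assumed volume
}
volumes["sterile water"] = REACTION_VOLUME_UL - sum(volumes.values())

for component, v in volumes.items():
    print(f"{component:30s} {v:5.2f} µl")
```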
Steps of Polymerase Chain Reaction (PCR)
The polymerase chain reaction (PCR) is a powerful technique used to amplify specific DNA sequences. It involves a series of temperature changes known as thermal cycles. The steps involved in PCR are as follows:
- Initialization: This step is required for DNA polymerases that need heat activation. The reaction chamber is heated to a temperature of 94-96 °C (or 98 °C for extremely thermostable polymerases) and held for 1-10 minutes.
- Denaturation: The reaction chamber is heated to 94-98 °C for 20-30 seconds. This causes the double-stranded DNA template to denature, separating it into two single-stranded DNA molecules.
- Annealing: The temperature is lowered to 50-65 °C for 20-40 seconds. During this step, short DNA sequences called primers anneal to each of the single-stranded DNA templates. Primers are designed to bind to specific regions flanking the target DNA sequence.
- Extension/Elongation: The temperature is raised to the optimal activity temperature of the DNA polymerase used, typically around 72-80 °C. The DNA polymerase synthesizes a new DNA strand complementary to the template strand by adding nucleotides (dNTPs) in the 5′-to-3′ direction. This step extends the primers and synthesizes new DNA strands.
- Cycling: Steps 2 to 4 (denaturation, annealing, and extension) are repeated for 20-40 cycles. With each cycle, the target DNA is doubled, resulting in exponential amplification of the specific DNA sequence.
- Final elongation: After the last PCR cycle, a final elongation step is performed at a temperature of 70-74 °C for 5-15 minutes to ensure complete elongation of any remaining single-stranded DNA.
- Final hold: The reaction chamber is cooled to 4-15 °C for an indefinite time. This step allows for short-term storage of the PCR products.
To verify the success of the PCR, agarose gel electrophoresis can be used. The amplified DNA products are separated based on size using a DNA ladder as a reference.
PCR is a versatile technique used in various applications, such as gene expression analysis, genetic testing, forensics, and molecular biology research. Its ability to amplify specific DNA sequences quickly and efficiently has revolutionized many fields of study.
Description of Each Step of the Polymerase Chain Reaction (PCR)
- Initialization is an important step in the polymerase chain reaction (PCR) process, particularly for DNA polymerases that require heat activation through hot-start PCR. This step is performed at the beginning of the PCR reaction and involves heating the reaction chamber to a specific temperature.
- The purpose of the initialization step is to ensure that the DNA polymerase enzyme is fully activated and ready to perform its function of synthesizing new DNA strands. Not all DNA polymerases require this heat activation, but for those that do, initialization becomes necessary.
- During initialization, the reaction chamber is typically heated to a temperature range of 94-96 °C (201-205 °F). However, if extremely thermostable polymerases are used, the temperature can be elevated to 98 °C (208 °F). The specific temperature used depends on the requirements of the particular DNA polymerase being employed.
- The duration of the initialization step can vary and is generally held for 1-10 minutes. This time allows the DNA polymerase to undergo the necessary conformational changes, enabling it to reach its optimal state for DNA synthesis.
- In hot-start PCR the polymerase remains inactive until this elevated temperature is reached, so enzyme activity at lower temperatures that could lead to non-specific amplification or primer-dimer formation is minimized or eliminated. This hot-start approach helps improve the specificity and efficiency of the PCR reaction by preventing undesired DNA amplification before the actual cycling stages begin.
- Overall, the initialization step in PCR is crucial for ensuring the activation of DNA polymerases that require heat activation. By heating the reaction chamber to a specific temperature and holding it for a designated period, the DNA polymerase enzyme is prepared for subsequent steps, such as denaturation, annealing, and extension, leading to successful DNA amplification.
- Denaturation is a crucial step in the polymerase chain reaction (PCR) process. It is the first regular cycling event that occurs after the initialization step. Denaturation involves heating the reaction chamber to a temperature range of 94-98 °C (201-208 °F) for a duration of 20-30 seconds.
- The primary objective of the denaturation step is to induce the melting or separation of the double-stranded DNA template into two single-stranded DNA molecules. This process is achieved by breaking the hydrogen bonds that hold the complementary bases of the DNA strands together. As a result, the double-stranded DNA template is transformed into single-stranded DNA templates, which serve as the starting point for subsequent stages of the PCR reaction.
- The high temperature used during denaturation disrupts the hydrogen bonding between the two complementary DNA strands. The DNA molecule unwinds, and the two strands separate from each other. The heat provides the energy required to overcome the hydrogen bonding forces between the base pairs, thereby promoting the separation of the strands. This denaturation step is crucial because it allows access to the DNA template for the binding of primers in the following annealing step.
- By generating single-stranded DNA templates, denaturation enables the primers to anneal to their complementary sequences during the next stage of PCR. The denaturation step ensures that the DNA template is in a suitable conformation for efficient primer binding and subsequent DNA synthesis.
- The duration of the denaturation step is relatively short, typically ranging from 20 to 30 seconds. This short exposure to high temperature is sufficient to disrupt the hydrogen bonds and separate the DNA strands without causing significant damage to the DNA molecules.
- In summary, denaturation is a critical step in the PCR process. By subjecting the reaction chamber to a high temperature for a short period, the double-stranded DNA template is melted into single-stranded DNA templates, allowing for efficient primer binding and subsequent amplification of the target DNA sequence.
- Annealing is a crucial step in the polymerase chain reaction (PCR) process that follows denaturation. During annealing, the reaction temperature is lowered to a range of 50-65 °C (122-149 °F) for a duration of 20-40 seconds. This temperature range allows for the specific binding of primers to each of the single-stranded DNA templates generated during denaturation.
- In PCR, two different primers are typically included in the reaction mixture. Each primer is designed to complement a short sequence of nucleotides at the 3′ end of one of the single-stranded DNA templates containing the target region. The primers themselves are single-stranded sequences and are much shorter than the overall length of the target region.
- The annealing temperature is a critical parameter to determine as it directly influences the efficiency and specificity of primer binding. The temperature must be low enough to enable the hybridization or binding of the primers to their complementary sequences on the single-stranded DNA templates. However, it must also be high enough to ensure specific binding, where the primers bind only to their perfectly complementary sequences and nowhere else.
- If the annealing temperature is too low, the primers may bind imperfectly to non-specific regions of the DNA template, leading to nonspecific amplification and undesirable products. On the other hand, if the temperature is too high, the primers may fail to bind at all, resulting in no amplification.
- A typical annealing temperature is set around 3-5 °C below the melting temperature (Tm) of the primers used. The Tm is the temperature at which half of the primer-template duplexes have dissociated into single strands; below it, stable hybrids can form. During annealing, the primer sequence should closely match the template sequence to allow for stable and specific binding (a rough Tm calculation is sketched after this list).
- During the annealing step, the DNA polymerase enzyme binds to the primer-template hybrid formed by the annealing of primers to their complementary sequences on the single-stranded DNA templates. The polymerase, once bound, initiates the synthesis of a new DNA strand complementary to the DNA template. This DNA synthesis marks the beginning of DNA amplification in PCR.
- In summary, annealing is a critical step in PCR where the reaction temperature is lowered to enable the specific binding of primers to their target sequences on the single-stranded DNA templates. The proper determination of the annealing temperature ensures efficient and specific primer binding, facilitating subsequent DNA synthesis by the polymerase enzyme.
- The extension/elongation step is a crucial part of the polymerase chain reaction (PCR) process, following denaturation and annealing. The temperature for this step depends on the DNA polymerase being used. For example, the thermostable DNA polymerase Taq polymerase has an optimum activity temperature of approximately 75–80 °C (167–176 °F), although a temperature of 72 °C (162 °F) is commonly used with this enzyme.
- During the extension/elongation step, the DNA polymerase synthesizes a new DNA strand that is complementary to the DNA template strand. This synthesis occurs by adding free deoxyribonucleotides (dNTPs) from the reaction mixture. The dNTPs are complementary to the template sequence and are added in the 5′-to-3′ direction, which is the direction of DNA synthesis. The polymerase enzyme catalyzes the condensation reaction, joining the 5′-phosphate group of the dNTPs with the 3′-hydroxy group at the end of the nascent DNA strand. This process results in the elongation of the DNA strand.
- The time required for elongation depends on factors such as the specific DNA polymerase being used and the length of the DNA target region being amplified. As a general guideline, most DNA polymerases can polymerize approximately a thousand bases per minute at their optimal temperature.
- Under optimal conditions with no limitations due to limiting substrates or reagents, each extension/elongation step in PCR doubles the number of DNA target sequences. This is because the original template strands, as well as all newly generated strands, become template strands for the next round of elongation. This phenomenon leads to exponential (geometric) amplification of the specific DNA target region as PCR cycles progress.
- The processes of denaturation, annealing, and elongation together constitute a single cycle in PCR. However, multiple cycles are necessary to amplify the DNA target to millions of copies. The formula used to calculate the number of DNA copies formed after a given number of cycles is 2^n, where n represents the number of cycles. For example, a reaction set for 30 cycles would result in 2^30, or 1,073,741,824 copies of the original double-stranded DNA target region.
- In summary, the extension/elongation step of PCR involves the DNA polymerase synthesizing a new DNA strand complementary to the DNA template strand by adding dNTPs. Each elongation step doubles the number of DNA target sequences, leading to exponential amplification. Multiple cycles are performed to achieve the desired level of DNA amplification.
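To see how the "3-5 °C below Tm" rule of thumb mentioned in the annealing step might be applied, the sketch below estimates primer Tm with the simple Wallace rule (2 °C per A/T, 4 °C per G/C). This is only a crude approximation for short oligonucleotides, and the example primer sequences are made up; real primer design relies on nearest-neighbour thermodynamic models.

```python
# Rough primer Tm estimate using the Wallace rule (2 °C per A/T, 4 °C per G/C).
# A crude approximation for short oligos, shown only to illustrate choosing an
# annealing temperature a few degrees below the primer Tm.

def wallace_tm(primer: str) -> int:
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def suggested_annealing_temp(forward: str, reverse: str, offset: int = 5) -> int:
    """Anneal ~offset °C below the lower of the two primer Tm estimates."""
    return min(wallace_tm(forward), wallace_tm(reverse)) - offset

fwd = "AGCTGACCTGAAGCTTAGC"   # hypothetical example primers
rev = "TTGGCATCGAGCTAAGCCA"
print("Forward Tm estimate:", wallace_tm(fwd), "°C")
print("Reverse Tm estimate:", wallace_tm(rev), "°C")
print("Suggested annealing temperature:", suggested_annealing_temp(fwd, rev), "°C")
```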
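The elongation figures quoted above (roughly 1,000 bases polymerized per minute and 2^n copies after n cycles) are easy to check numerically. The short sketch below treats both as idealizations: real polymerases vary in speed and real reactions never double perfectly every cycle.

```python
# Idealized bookkeeping for the extension step and cycle-by-cycle doubling.
# Assumes ~1,000 bases/minute polymerization and 100% efficiency per cycle.

POLYMERIZATION_RATE_BASES_PER_MIN = 1_000

def extension_time_seconds(amplicon_bp: int) -> float:
    """Approximate extension time needed per cycle for a given amplicon length."""
    return amplicon_bp / POLYMERIZATION_RATE_BASES_PER_MIN * 60

def copies_after_cycles(start_copies: int, cycles: int) -> int:
    """Theoretical copy number after n cycles at 100% efficiency: start * 2**n."""
    return start_copies * 2 ** cycles

print(extension_time_seconds(1_500))   # ~90 s per cycle for a 1.5 kb amplicon
print(copies_after_cycles(1, 30))      # 1,073,741,824 copies from a single template
```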
5. Final elongation
- The final elongation step in the polymerase chain reaction (PCR) is an optional but often employed step that occurs after the last PCR cycle. Its purpose is to ensure that any remaining single-stranded DNA in the reaction mixture is fully elongated, leading to the completion of DNA synthesis.
- During the final elongation step, the reaction temperature is typically set to a range of 70–74 °C (158–165 °F). This temperature range is chosen because it supports the optimal activity of most polymerases used in PCR. By maintaining this temperature for a duration of 5–15 minutes, any remaining single-stranded DNA templates present in the reaction mixture can be effectively elongated.
- The rationale behind this step is to allow additional time for the DNA polymerase to complete the synthesis of any partially elongated DNA strands. By prolonging the incubation period, any incomplete extensions or gaps in the synthesized DNA strands can be filled, resulting in more complete and fully elongated double-stranded DNA molecules.
- While the final elongation step is not strictly required for successful PCR amplification, it can be beneficial in certain applications. It ensures that all available single-stranded DNA templates are thoroughly extended, reducing the chances of incomplete or truncated amplification products. Additionally, it may be particularly useful when the amplified DNA will be used for downstream applications that require high-fidelity DNA synthesis.
- In summary, the final elongation step in PCR is an optional step performed at a temperature of 70–74 °C (158–165 °F) for 5–15 minutes after the last PCR cycle. It allows for the completion of DNA synthesis by ensuring that any remaining single-stranded DNA templates are fully elongated. Although not always necessary, this step can contribute to improved amplification results and more reliable downstream applications.
6. Final hold
- The final step in the polymerase chain reaction (PCR) is the “final hold.” It involves cooling the reaction chamber to a temperature range of 4–15 °C (39–59 °F) and maintaining this temperature for an indefinite period of time. This step is typically performed after the completion of the PCR amplification cycles.
- The purpose of the final hold is to provide a suitable environment for short-term storage of the PCR products. By reducing the temperature to a cool range, it helps to stabilize and preserve the amplified DNA molecules until further analysis or processing can be carried out. The specific temperature within the range of 4–15 °C may vary depending on the specific requirements of the experiment or protocol.
- During the final hold, the reaction is essentially paused, and the PCR products are held at a low temperature. This allows for convenient handling and storage of the amplified DNA without degradation or loss of integrity. The duration of the final hold can vary depending on the immediate needs of the experiment or the time required for subsequent steps.
- The final hold temperature range of 4–15 °C is commonly achieved using a thermal cycler or a refrigerated incubation chamber. The duration of the hold can be several minutes to several hours or even overnight, depending on the experimental design and the stability requirements of the PCR products.
- By employing the final hold, researchers can temporarily store the PCR products before further analysis, such as gel electrophoresis, sequencing, or other downstream applications. This step helps to maintain the integrity of the amplified DNA and allows flexibility in the timing of subsequent experiments or procedures.
- In summary, the final hold in PCR involves cooling the reaction chamber to a temperature range of 4–15 °C for an indefinite period of time. It serves as a short-term storage step, providing a stable environment for the PCR products until they can be further analyzed or processed. This final hold temperature range allows for the preservation of the amplified DNA molecules, enabling convenient handling and flexibility in experimental workflows.
Stages of PCR
The polymerase chain reaction (PCR) can be divided into three stages based on the progress of the reaction: exponential amplification, leveling off, and plateau.
- Exponential amplification: In this stage, the PCR reaction undergoes exponential growth, leading to a rapid increase in the amount of the desired DNA product. Each cycle of PCR doubles the amount of DNA, assuming 100% reaction efficiency. For example, after 30 cycles, a single copy of DNA can be amplified to up to 1,000,000,000 (one billion) copies. This stage demonstrates the remarkable amplification power of PCR, allowing the replication of a specific DNA sequence in a controlled laboratory setting. Even minute quantities of the target DNA are sufficient for detection and analysis.
- Leveling off stage: As the PCR reaction progresses, the amplification rate starts to slow down. This stage is characterized by a decrease in the rate of product accumulation. Several factors contribute to this slowdown. The DNA polymerase enzyme may begin to lose its activity over time, reducing its efficiency in synthesizing new DNA strands. Additionally, the consumption of reagents, such as deoxynucleotide triphosphates (dNTPs) and primers, gradually depletes their availability, limiting their contribution to the reaction. These factors collectively lead to a reduced amplification rate compared to the earlier exponential phase.
- Plateau: In the plateau stage, no further increase in product accumulation is observed. This occurs when the reagents and enzyme required for the PCR reaction become completely exhausted. As the reaction progresses and continues through multiple cycles, the limited availability of dNTPs and primers prevents further DNA synthesis. Additionally, the DNA polymerase enzyme may become fully utilized or degraded, further hindering the production of new DNA strands. The reaction reaches a point where no significant amplification occurs, and the amount of product levels off.
These three stages of PCR demonstrate the dynamic nature of the reaction and highlight the importance of optimizing reaction conditions, including reagent concentrations and cycling parameters, to achieve maximum amplification efficiency. The exponential amplification stage showcases the powerful capability of PCR to rapidly generate millions or billions of copies of a specific DNA target. However, it is crucial to monitor the reaction progress and consider the limitations imposed by reagent availability and enzyme activity as the reaction progresses into the leveling off and plateau stages.
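One way to picture the exponential, leveling-off, and plateau stages is to let the per-cycle efficiency fall as reagents are consumed. The toy model below caps total product at an arbitrary reagent-limited ceiling; all numbers are illustrative and the model is not meant to reproduce any real reaction.

```python
# Toy model of the three PCR stages: each cycle adds product in proportion to
# the current amount, but a reagent-limited ceiling makes the per-cycle
# efficiency fall, so the curve flattens. All numbers are illustrative.

def simulate_pcr(cycles: int = 40, start: float = 1e3, ceiling: float = 1e12) -> list[float]:
    copies = start
    history = []
    for _ in range(cycles):
        efficiency = max(0.0, 1.0 - copies / ceiling)  # drops as reagents run out
        copies += copies * efficiency                  # ~doubling while efficiency is near 1
        history.append(copies)
    return history

trace = simulate_pcr()
for cycle in (10, 20, 30, 40):
    print(f"cycle {cycle:2d}: {trace[cycle - 1]:.3e} copies")
```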
Gel electrophoresis to visualize the results of PCR
After completion of the PCR, perform agarose gel electrophoresis. Compare the amplified product with the ladder and determine its size.
Magnitude of amplification
- PCR (Polymerase Chain Reaction) is a powerful technique that allows for the exponential amplification of target DNA templates through multiple cycling steps. The magnitude of amplification achieved in PCR can be calculated using a simple formula: “2^n,” where “n” represents the number of cycles performed [27, 28]. This formula indicates that the number of DNA copies produced doubles with each cycle.
- For example, let’s consider a PCR set with 36 cycles. Applying the formula “2^36,” we find that this PCR would result in approximately 68 billion copies of the target DNA template. This demonstrates the remarkable amplification potential of PCR.
- Under optimal conditions and with high efficiency, a PCR reaction performed in a 50 µl volume can generate substantial amounts of amplified DNA. Even with modest efficiency, after 35-40 cycles such a reaction can produce approximately 0.2 µg of a 150 bp DNA fragment from only 100 template molecules. The molecular weight of such a fragment is approximately 99,000 Da.
- These calculations illustrate how PCR can generate an enormous number of DNA copies from a minute starting amount. The exponential amplification achieved through successive cycles allows for the detection, analysis, and study of DNA targets that may be present in low abundance. PCR’s ability to generate such a high magnitude of amplification has made it an indispensable tool in various scientific and diagnostic applications.
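The yield figures above can be sanity-checked with a short calculation: copies = templates × 2^n, and mass = copies × molecular weight ÷ Avogadro's number. Using roughly 660 Da per base pair of double-stranded DNA (about 99,000 Da for a 150 bp fragment), the sketch below gives a sub-microgram yield for 100 templates after 35 ideal cycles; real yields are lower because efficiency is never 100%.

```python
# Back-of-the-envelope yield estimate: mass = copies * MW / N_A.
# Assumes ideal doubling every cycle, so treat the result as an upper bound.

AVOGADRO = 6.022e23     # molecules per mole
DA_PER_BP = 660.0       # approximate Da per base pair of double-stranded DNA

def product_mass_micrograms(templates: int, cycles: int, amplicon_bp: int) -> float:
    copies = templates * 2 ** cycles
    grams = copies * amplicon_bp * DA_PER_BP / AVOGADRO
    return grams * 1e6

# 100 template molecules, 35 cycles, 150 bp amplicon (MW ~ 99,000 Da)
print(f"{product_mass_micrograms(100, 35, 150):.2f} µg")   # ~0.56 µg at ideal efficiency
```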
Validating PCR is an essential step to ensure the success and accuracy of the amplification process. Several methods can be employed to validate PCR results. Let’s explore some of the common validation techniques:
- Ethidium Bromide Staining: Ethidium bromide (EtBr) is a commonly used dye for the staining of amplified DNA products. It intercalates between the base pairs of the double helix and can be visualized under UV light. EtBr has UV absorbance maxima at 300 and 360 nm, with an emission maximum at 590 nm. By staining the DNA bands, the presence and intensity of the amplified product can be determined. The detection limit for DNA bound to ethidium bromide is typically around 0.5-5.0 ng per band.
- Three Primer Combination Approach: In some cases, a three primer combination approach can be employed for a more cost-effective end-labeling of PCR products. This approach involves using a fluorescently labeled universal primer, modified locus-specific primers, and 5′ universal primer sequence tails. The use of fluorescent labels allows for the direct visualization and quantification of the PCR products.
- Agarose Gel Electrophoresis: Agarose gel electrophoresis is the most commonly employed method for validating PCR results. This technique utilizes an electric current to separate DNA molecules based on their size. Agarose gels are used for DNA fragments larger than 500 base pairs, while polyacrylamide gels are used for smaller fragments. After electrophoresis, the DNA bands are visualized by staining the gel with DNA-specific dyes like ethidium bromide. The presence of a DNA band of the expected size, confirmed using a DNA ladder as a size reference, indicates successful amplification of the target sequence. The absence of any DNA bands suggests the absence of the target DNA, while the presence of incorrect size DNA bands indicates the production of unintended products.
- Restriction Enzyme Digestion: If direct DNA sequencing is not accessible or cost-effective, restriction enzyme digestion can be used as an indirect method to assess the sequence of the PCR amplicon. By selecting appropriate restriction enzymes that recognize specific DNA sequences within the amplicon, the presence or absence of expected restriction fragments can provide information about the integrity and specificity of the amplified product.
Validating PCR results using these techniques helps ensure the reliability and accuracy of the amplification process. These methods enable researchers to confirm the presence of the target sequence, identify any unintended products, and assess the overall success of the PCR experiment.
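For the restriction digest approach, expected fragment sizes can be predicted in advance by scanning the amplicon sequence for the enzyme's recognition site. The sketch below does a plain string search for an EcoRI site (GAATTC) in a made-up sequence and, for simplicity, places the cut at the start of the site; real analyses would use a dedicated library such as Biopython and the enzyme's exact cut position.

```python
# Predict restriction fragment sizes by locating a recognition site with a
# plain string search. Sequence and enzyme are made up for illustration; the
# cut is placed at the start of the site for simplicity.

def fragment_sizes(sequence: str, site: str) -> list[int]:
    """Fragment lengths produced by cutting at every occurrence of `site`."""
    sequence = sequence.upper()
    cut_positions = []
    start = 0
    while (idx := sequence.find(site, start)) != -1:
        cut_positions.append(idx)
        start = idx + 1
    boundaries = [0] + cut_positions + [len(sequence)]
    return [boundaries[i + 1] - boundaries[i] for i in range(len(boundaries) - 1)]

amplicon = "ATGC" * 20 + "GAATTC" + "CGTA" * 25   # hypothetical 186 bp amplicon
print(fragment_sizes(amplicon, "GAATTC"))         # expected: [80, 106]
```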
Types of PCR
PCR (Polymerase Chain Reaction) is a versatile technique that has undergone various modifications and adaptations to cater to specific needs in different fields. Here are some of the common types of PCR techniques:
- Conventional PCR: This is the standard PCR method where a single primer pair is used to bind to the two separated target strands. It generates millions of copies of the target DNA sequences.
- Multiplex PCR: This specialized PCR technique utilizes several pairs of primers that anneal to different target sequences in a single sample. It is commonly employed for the detection of pathogenic microorganisms and can identify mutations, deletions, insertions, and rearrangements in pathogenic specimens.
- Nested PCR: Nested PCR is used to enhance the specificity of DNA amplification by reducing nonspecific amplification. It involves two sets of primer pairs used in successive PCR reactions. The first round of PCR generates a larger DNA product that includes the target sequence, and the second PCR amplifies only the specific target sequence, improving specificity.
- Real-time PCR/Quantitative PCR (qPCR): This technique allows for the quantification of DNA amplification in real time during the PCR reaction. It is commonly used to estimate the number of DNA targets present in a sample or to study and compare gene expression. Real-time PCR can utilize nonspecific fluorescent dyes or sequence-specific DNA oligonucleotide fluorescent probes for amplification measurement.
- Hot Start/Cold Finish PCR: This PCR technique reduces nonspecific amplification during the initial stages of PCR. It involves using hybrid polymerases that remain inactive at ambient temperature and are only activated at higher temperatures. Inhibition of the polymerase activity at lower temperatures can be achieved using an antibody or covalently bound inhibitors.
- Touchdown PCR (Step-down PCR): Touchdown PCR is designed to minimize nonspecific amplification by gradually decreasing the primer annealing temperature in successive cycles. It starts with initial cycles having a higher annealing temperature for increased specificity and gradually reduces the temperature for more efficient amplification.
- Assembly PCR or Polymerase Cycling Assembly (PCA): This technique is used for the synthesis of long DNA molecules from long oligonucleotides with short overlapping segments. It involves an initial PCR with primers that have an overlap, followed by a second PCR using the products of the first PCR as templates to generate the final full-length DNA structure.
- Colony PCR: Colony PCR is a high-throughput technique used to confirm the presence of DNA inserts in recombinant clones. It amplifies the inserted sequences using specific primers designed for the vector regions flanking the insertion site. This technique allows for screening bacterial colonies transformed with recombinant vectors without initially extracting genomic DNA.
- Methylation-Specific PCR (MSP): MSP is a variant of PCR used to identify promoter hyper-methylation at CpG islands. It involves treating the target DNA with sodium bisulfite to convert unmethylated cytosine bases into uracil. Two types of primers, specific to methylated and unmethylated cytosine, are used to amplify the modified DNA, providing quantitative information about methylation when used in quantitative PCR.
- Inverse PCR: Inverse PCR is used to detect the sequences surrounding a target DNA (flanking sequences). It involves a series of restriction enzyme digestions and self-ligation. Primers are then used to amplify sequences at either end of the target DNA segment, extending outward from the known DNA segment.
- Reverse Transcription PCR (RT-PCR): RT-PCR combines reverse transcription of RNA into complementary DNA (cDNA) using viral reverse transcriptase, followed by a conventional PCR using the resulting cDNA as a template. This technique is widely used in the detection of RNA viruses and the study of gene expression. A variant called differential-display reverse transcription-PCR or RNA arbitrarily primed PCR (RAP-PCR) allows for the comparison of gene expression under different conditions.
These different types of PCR techniques have expanded the applications of PCR in various fields, ranging from diagnostics and research to agriculture and environmental studies.
Variants of PCR
Variants of PCR (Polymerase Chain Reaction) have been developed to cater to various research, diagnostic, and industrial requirements. These variants offer specific advantages and are designed to address specific challenges encountered in traditional PCR techniques. Let’s explore some of the important variants of PCR:
- Extreme PCR: Extreme PCR involves increasing the concentration of primers and polymerase by 10-20 times. This amplification technique allows for rapid detection of virulent infectious and bioterrorism pathogens by achieving an amplification rate of 0.4-2.0 seconds per cycle.
- Photonic PCR: Photonic PCR utilizes fast heating and energy conversion to shorten the PCR time. This technique employs electronic resonance light-emitting diodes to achieve rapid energy conversion, resulting in target DNA amplification within 5 minutes. Photonic PCR offers greater convenience and speed in PCR detection.
- COLD-PCR: COLD-PCR stands for Co-amplification at Lower Denaturation temperature PCR. It is used to enrich mutant genes by reducing the reactive temperature of PCR. COLD-PCR takes advantage of the fact that DNA strands with base mismatches have lower denaturation temperatures compared to wild-type DNA. This variant is often employed in the detection of viral gene mutations, cancer-associated gene mutations, and other genetic variations.
- Nanoparticle-PCR: Nanoparticle-PCR involves the addition of gold nanoparticles to PCR reactions. Gold nanoparticles possess unique properties such as electrical, optical, thermal, and catalytic activities. By acting as additives in slowdown or touchdown PCR reaction systems, gold nanoparticles can significantly improve the amplification efficiency of high GC content templates. This variant is particularly useful for amplifying templates with high GC content.
- HPE-PCR: HPE-PCR, which stands for High G+C Content PCR, is designed for amplifying templates with long DNA chains and a high number of CTG repeats. It involves increasing the denaturation temperature of PCR to address the challenges posed by high G+C content templates .
- LATE-PCR: LATE-PCR (Linear-After-The-Exponential PCR) generates high concentrations of single-stranded DNA that can be analyzed at the endpoint using probes. This variant employs probes that hybridize over a wide temperature range, allowing for efficient and accurate analysis of amplified DNA.
- Digital PCR: Digital PCR (dPCR) is a technique that enables precise and sensitive quantification of nucleic acids. It utilizes two discrete optical channels for detection and focuses on the quantification of one or two targets within a single reaction. Digital PCR has gained popularity as a quantification strategy that combines absolute quantification with high sensitivity. It finds applications in healthcare and environmental analysis.
These variants of PCR demonstrate the versatility and adaptability of the PCR technique, allowing researchers and scientists to overcome specific challenges and achieve more accurate and efficient results in various fields of study.
Advantages of PCR
PCR offers several advantages that have made it a widely used and powerful technique in various fields. Here are some key advantages of PCR:
- Simplicity and speed: PCR is a relatively simple technique to understand and perform. It involves a few basic steps and can be carried out in a relatively short period of time, typically a few hours. This rapid turnaround time allows for quick diagnosis and identification of target sequences.
- High sensitivity: PCR is highly sensitive and has the potential to amplify even trace amounts of target DNA. It can produce millions to billions of copies of a specific DNA fragment, making it suitable for applications that require a high level of sensitivity, such as detecting rare genetic variants or low-abundance pathogens.
- Quantification capabilities: Real-time quantitative PCR (qPCR) is a variant of PCR that allows for the quantification of the synthesized product in real-time. This feature is particularly useful for analyzing alterations in gene expression levels, such as in tumors or microbial infections. qPCR enables researchers to accurately measure and compare gene expression levels across different samples.
- Versatility and flexibility: PCR is a versatile technique that can be used in various applications. It allows for the amplification of DNA from a wide range of sources, including genomic DNA, cDNA, and even degraded or old DNA samples. It can be adapted for different purposes, such as sequencing, cloning, mutation analysis, and genotyping.
- Specificity: PCR offers high specificity when designed properly. By using specific primers that target the desired DNA sequence, PCR can selectively amplify the target region while minimizing amplification of non-target DNA. This specificity ensures reliable and accurate results.
- Research tool: PCR is a powerful tool in research, enabling the sequencing of unknown disease etiologies and identification of new viral strains. It allows researchers to identify the sequence of previously unknown viruses related to known ones, providing a better understanding of diseases. PCR has greatly contributed to the advancement of genomics, molecular biology, and diagnostic research.
- Enhanced diagnosis and identification: PCR has significantly improved the diagnosis and identification of various diseases. It enables rapid and sensitive detection of pathogens, genetic disorders, and cancer-related mutations. PCR-based tests have become an integral part of clinical laboratories, providing faster and more accurate results for disease diagnosis.
Overall, PCR’s simplicity, speed, sensitivity, quantification capabilities, versatility, specificity, and its impact on research and diagnostics make it an indispensable tool in molecular biology, genetics, and clinical applications. Its continued development and refinement are expected to further enhance its applications and contribute to advancements in various scientific disciplines.
Limitations of PCR
While PCR has revolutionized biological sciences and enabled significant advancements, it also has certain limitations that need to be considered. Here are some limitations of PCR:
- Target sequence requirement: PCR requires prior knowledge of the target DNA sequence to design the primers for selective amplification. This means researchers need to know the specific sequence upstream of the target region on each single-stranded template. Without this information, PCR amplification cannot be performed.
- Potential for errors and mutations: Like all enzymes, DNA polymerases used in PCR are prone to errors during DNA synthesis, leading to mutations in the amplified fragments. These errors can introduce inaccuracies in the results and affect downstream analysis or interpretation.
- Contamination risks: PCR is highly sensitive, which is both an advantage and a limitation. Even a small amount of contaminating DNA, such as from previous PCR reactions or environmental sources, can be amplified and lead to false or ambiguous results. To minimize contamination risks, strict laboratory protocols should be followed, including separate rooms for reagent preparation, PCR setup, and analysis. Single-use aliquots and disposable pipettors should be used, and unidirectional workflow should be maintained.
- Inhibition by certain substances: PCR can be inhibited by the presence of certain chemicals such as ethanol, phenol, isopropanol, sodium dodecyl sulfate (SDS), high salt concentration, and chelators. These substances may be present in the sample or introduced during the experimental process, leading to failed or suboptimal amplification.
- Size limitations: There is an upper limit to the size of DNA that can be effectively amplified by PCR. Large DNA fragments may not be efficiently amplified or may require modified PCR protocols.
- Longer analysis time: While the PCR reaction itself is relatively quick, the analysis and detection of the PCR products often take longer. This can include gel electrophoresis, DNA sequencing, or other techniques to verify and analyze the amplified DNA. The additional time required for these steps should be considered in experimental planning.
- Inhibition by environmental samples: Environmental samples that contain humic acids, such as soil or water samples, can inhibit PCR amplification and result in inaccurate or failed results. Specialized purification methods or alternative PCR protocols may be required to overcome this limitation.
Awareness of these limitations is essential for researchers and practitioners working with PCR. By understanding these challenges, appropriate precautions can be taken to ensure the reliability and accuracy of PCR results in various applications.
Applications of Polymerase Chain Reaction (PCR)
The polymerase chain reaction (PCR) has revolutionized various fields of science and medicine due to its versatility and sensitivity. PCR and its advanced variants have made several applications possible, which were once considered impossible. Here are some key applications of PCR:
- Diagnosis of infections: PCR approaches are widely used in clinical laboratories to diagnose infections caused by bacteria, viruses, protozoa, and fungi. These techniques offer specific and sensitive detection and quantification of infectious agents.
- Diagnosis of genetic defects: PCR-based detection systems are used to accurately identify genetic disorders before the onset of disease and confirm their presence after the onset. PCR allows for the detection of inherited genetic changes and spontaneous genetic mutations.
- Diagnosis and prognosis of cancers: PCR-based approaches can identify cancer-related genes and analyze their expression patterns. This helps in determining genetic predisposition to certain types of cancer, confirming the cancer type, predicting prognosis, and guiding treatment decisions.
- Phylogenetics: PCR amplification of phylogenetic markers is routinely used in phylogenetic analysis to identify and classify organisms. It helps in understanding evolutionary relationships and biodiversity.
- Archeology: PCR techniques are employed to amplify and improve the quality and quantity of ancient DNA (aDNA) recovered from archaeological remains. This enables the analysis of aDNA for studying ancient populations, evolutionary history, and genetic diversity.
- Recombinant DNA technology: PCR is used in recombinant DNA technology to generate hybrid DNA molecules with precision. It is also employed to clone DNA into specific vectors for protein expression and production.
- Metagenomics: PCR is combined with metagenomics to identify rare genes and members of microbial communities. Gene-targeted metagenomics allows for the detection of rare species and rare genes present in complex microbial ecosystems.
- Site-directed mutagenesis: PCR-based approaches are commonly used to introduce mutations at specific locations in a gene. This technique helps in studying the role of specific amino acids in the structure and function of proteins.
- Personalized medicine: PCR technologies play a crucial role in pharmacogenomics and pharmacogenetics. Genetic markers tracked by PCR help determine individual responses to treatments, design tailored drugs, and prescribe effective drug doses.
- Forensic sciences: PCR is utilized in forensic sciences to amplify DNA samples obtained from crime scenes. Even poor-quality and low-quantity DNA samples can be reliably analyzed using PCR, aiding in criminal identification and investigations.
- DNA profiling: PCR-based methods are employed for DNA profiling, which exploits the polymorphic nature of DNA. These techniques help study ecological communities, phylogeny, population genetics, and provide valuable information in forensic investigations.
- Gene expression profiling: Reverse-transcriptase PCR and quantitative PCR (qPCR) are routinely used to analyze gene expression. They help in profiling gene expression patterns and validating transcriptome profiles obtained through techniques like microarray and RNA-seq.
- Identifying medicinal plants: PCR-based DNA barcoding is a rapid and accurate tool for identifying medicinal plant species. This approach is used in various fields, including medicine, ecology, and conservation biology, to identify endangered and new species.
- Detecting genetically modified organisms (GMOs): PCR techniques are employed to track the presence of genetically modified organisms in food and feed. This ensures their regulation and protects consumer rights by providing reliable and quick detection methods.
- Meat traceability: PCR methods are used to identify and quantify adulteration of meat in raw and processed food products. This helps ensure the accuracy and integrity of meat labeling and traceability systems.
The applications of PCR are extensive and continue to expand as new variants and techniques are developed. The technique’s sensitivity, specificity, and versatility have made it an indispensable tool in various scientific disciplines, clinical research, and diagnostic medicine.
Factors affecting Amplification
- Generally, the amplification is performed at 20, 50, or 100 µl volume in 0.2 or 0.5 ml microfuge tubes.
- Larger volumes do not allow adequate thermal equilibrium of the reaction mixture.
- During PCR, nanogram amounts of plasmid DNA or microgram amounts of genomic DNA are generally used as template.
- Higher amounts of template DNA can inhibit the reaction or result in non-specific amplification.
- Primers are synthetic oligonucleotides, typically 15 to 30 bases in length.
- PCR requires a forward and a reverse primer, and the melting temperatures (Tm) of the two primers should be as close to each other as possible.
- The 3′ ends of the primers should not have more than two bases complementary to each other, as such complementarity promotes primer-dimer formation.
- The G+C content of the primers should range from 40 to 60%.
- Low concentrations of primers result in poor yield of the specific product and high concentrations of primers may result in non-specific amplification. The optimal concentration of primers is between 0.1-1 µM.
- The final concentration of each dNTP (dATP, dGTP, dCTP & dTTP) in a standard amplification reaction is 200 µM.
- It is important to keep the dNTP concentrations above the estimated Km of each dNTP (10 to 15 µM) for best base incorporation.
Taq DNA polymerase buffer
- The 10X assay buffer consists of 100 mM Tris-HCl (pH 9.0), 500 mM potassium chloride (KCl), 15 mM MgCl2, and 0.1% w/v gelatin.
- Mg2+ is an important cofactor required for the activity of Taq DNA polymerase. Too little magnesium results in no amplification, while excess magnesium may lead to the production of unwanted products.
Taq DNA Polymerase
- Taq DNA polymerase is a 94 kDa thermostable DNA polymerase with an optimal activity temperature of 72°C.
- It lacks 3′ to 5′ (proofreading) exonuclease activity but possesses 5′ to 3′ exonuclease activity and 5′ to 3′ polymerase activity.
- For most amplification reactions, 1.5 to 2 units of enzyme are recommended, as higher enzyme concentrations lead to non-specific amplification.
What is PCR?
PCR is a laboratory technique used to amplify a specific DNA sequence, creating millions or billions of copies of the target DNA. It is based on the principles of DNA replication and involves a cyclic process of denaturation, annealing, and extension using DNA polymerase.
What is the purpose of PCR?
The primary purpose of PCR is to amplify a specific DNA sequence of interest. It is widely used in research, diagnostics, forensic analysis, genetic testing, and various other applications where the detection and analysis of DNA are necessary.
How does PCR work?
PCR involves repeated cycles of temperature changes to facilitate DNA denaturation, primer annealing, and DNA synthesis. The DNA sample is subjected to cycles of heating to separate the double-stranded DNA, cooling to allow primers to bind to the target sequence, and DNA synthesis by a heat-stable DNA polymerase.
What are the key components needed for PCR?
The essential components of a PCR reaction include a DNA template containing the target sequence, DNA primers that flank the target sequence, DNA polymerase (such as Taq polymerase), nucleotides (dNTPs), buffer solution, and magnesium ions (Mg2+).
What are the different types of PCR?
Some common types of PCR include conventional PCR, real-time/quantitative PCR (qPCR), multiplex PCR, nested PCR, and reverse transcription PCR (RT-PCR). Each type has specific variations and applications.
What is the role of primers in PCR?
Primers are short DNA sequences that bind to complementary regions flanking the target DNA sequence. They serve as starting points for DNA synthesis by the DNA polymerase. Primers are critical for target specificity and amplification.
What is the significance of PCR in genetic testing?
PCR is widely used in genetic testing to identify genetic mutations, detect infectious agents like viruses and bacteria, diagnose genetic diseases, determine gene expression levels, and perform DNA profiling for forensic analysis and paternity testing.
How can PCR be used in research?
PCR plays a crucial role in various research applications, including cloning, DNA sequencing, gene expression analysis, DNA fingerprinting, mutation detection, genotyping, and studying microbial diversity and evolution.
What are the limitations of PCR?
PCR has some limitations, such as the possibility of amplifying nonspecific DNA sequences if the primers are not specific enough, the potential for contamination leading to false results, the requirement for prior knowledge of the target sequence, and difficulties in amplifying highly GC-rich or repetitive DNA regions.
What is the role of PCR in the COVID-19 pandemic?
PCR has been instrumental in the diagnosis of COVID-19 by detecting the presence of SARS-CoV-2 viral RNA in patient samples. It has been widely used for mass testing, tracking the spread of the virus, and monitoring the effectiveness of vaccination campaigns.
- Liu, H. Y., Hopping, G. C., Vaidyanathan, U., Ronquillo, Y. C., Hoopes, P. C., & Moshirfar, M. (2019). Polymerase Chain Reaction and Its Application in the Diagnosis of Infectious Keratitis. Medical Hypothesis, Discovery & Innovation Ophthalmology Journal, 8(3), 152–155.
The Planck constant, or Planck's constant, is the quantum of electromagnetic action that relates a photon's energy to its frequency. The Planck constant multiplied by a photon's frequency is equal to the photon's energy. The Planck constant is a fundamental physical constant, denoted h, and is of central importance in quantum mechanics. In metrology it is used to define the kilogram in SI units.
At the end of the 19th century, accurate measurements of the spectrum of black body radiation existed, but predictions of the frequency distribution of the radiation by then-existing theories diverged significantly at higher frequencies. In 1900, Max Planck derived empirically a formula for the observed spectrum. He assumed that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, ε, that was proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor. In 1905, the value was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle. It was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
Since energy and mass are equivalent, the Planck constant also relates mass to frequency.
Origin of the constant
Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law.
In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths.
Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths.
Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, h, which subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance of a body for frequency ν at absolute temperature T is given by

$$B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu/k_\mathrm{B}T} - 1},$$

where kB is the Boltzmann constant and c is the speed of light.
The spectral radiance of a body, Bν, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In this case, it is given by

$$B_\lambda(\lambda, T) = \frac{2 h c^2}{\lambda^5} \cdot \frac{1}{e^{hc/\lambda k_\mathrm{B}T} - 1},$$
showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths.
Planck's law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI units of Bν are W·sr−1·m−2·Hz−1, while those of Bλ are W·sr−1·m−3.
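Both forms of Planck's law above can be evaluated numerically. The following Python sketch uses CODATA constant values; the 5800 K temperature is an assumption, chosen to be roughly solar:

```python
import math

# Sketch of the two forms of Planck's law above (CODATA constants; the
# 5800 K temperature is an assumed, roughly solar, value).
h = 6.62607015e-34    # Planck constant, J*s
c = 299_792_458       # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def B_nu(nu, T):
    """Spectral radiance per unit frequency, W*sr^-1*m^-2*Hz^-1."""
    return (2 * h * nu**3 / c**2) / (math.exp(h * nu / (kB * T)) - 1)

def B_lambda(lam, T):
    """Spectral radiance per unit wavelength, W*sr^-1*m^-3."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * kB * T)) - 1)

print(B_nu(5.4e14, 5800))      # per-frequency form at 540 THz
print(B_lambda(555e-9, 5800))  # per-wavelength form at 555 nm
```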
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was
to interpret UN [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts. Let us call each such part the energy element ε. (Planck, On the Law of Distribution of Energy in the Normal Spectrum)
With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words, but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":

$$E = h\nu.$$
Planck was able to calculate the value of h from experimental data on black-body radiation: his result, 6.55×10−34 J⋅s, is within 1.2% of the currently accepted value. He also made the first determination of the Boltzmann constant kB from the same data and theory.
Development and application
The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real.
Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.
Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation:

$$E = hf.$$
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant h.
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies En,

$$E_n = -\frac{h c R_\infty}{n^2},$$

where c is the speed of light in vacuum, R∞ is an experimentally determined constant (the Rydberg constant) and n is a positive integer (n = 1, 2, 3, …). Once the electron reached the lowest energy level (n = 1), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants.
Bohr also introduced the quantity ħ = h/2π, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values

$$J^2 = j(j+1)\hbar^2, \qquad J_z = m\hbar,$$

with j = 0, 1/2, 1, 3/2, … and m = −j, −j+1, …, j.
The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum, Δp, obey

$$\Delta x \, \Delta p \ge \frac{\hbar}{2},$$
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise.
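As a worked illustration (with assumed numbers, not from the original text), the bound Δx·Δp ≥ ħ/2 gives the minimum momentum spread of a particle confined to an atomic-scale region:

```python
import math

# Worked illustration: minimum momentum spread for a particle confined
# to an atomic-scale region, from Delta_x * Delta_p >= hbar / 2.
hbar = 6.62607015e-34 / (2 * math.pi)   # reduced Planck constant, J*s

delta_x = 1e-10                     # assumed position uncertainty, m (~1 angstrom)
delta_p_min = hbar / (2 * delta_x)  # smallest compatible momentum spread
print(f"{delta_p_min:.3e} kg*m/s")  # ~5.3e-25
```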
In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones of the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂:

$$[\hat{x}_i, \hat{p}_j] = i\hbar\,\delta_{ij},$$

where δij is the Kronecker delta.
This photon energy, E = hf, is extremely small in terms of ordinarily perceived everyday objects. The de Broglie wavelength λ of a particle is given by

$$\lambda = \frac{h}{p},$$

where p is the particle's linear momentum.
In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant. It is equal to the Planck constant divided by 2π, and is denoted ħ (pronounced "h-bar"):

$$\hbar = \frac{h}{2\pi}.$$
The energy of a photon with angular frequency ω = 2πf is given by

$$E = \hbar\omega,$$

while its linear momentum relates to

$$p = \hbar k,$$
where k is an angular wavenumber. In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics.
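A short Python sketch of these relations, using an assumed, non-relativistic electron speed; it computes the de Broglie wavelength λ = h/p and checks that p = ħk:

```python
import math

# Sketch of the relations above: hbar = h/(2*pi), p = hbar*k, and the
# de Broglie wavelength lambda = h/p. The electron speed is an assumed,
# non-relativistic illustrative value.
h = 6.62607015e-34           # Planck constant, J*s
hbar = h / (2 * math.pi)     # reduced Planck constant, ~1.0546e-34 J*s

m_e = 9.1093837015e-31       # electron mass, kg
v = 1.0e6                    # assumed electron speed, m/s

p = m_e * v                  # linear momentum
lam = h / p                  # de Broglie wavelength, ~7.3e-10 m
k = 2 * math.pi / lam        # angular wavenumber

print(f"lambda = {lam:.3e} m")
print("p == hbar*k:", math.isclose(p, hbar * k))
```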
Problems can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI. In the language of quantity calculus, the expression for the "value" of the Planck constant, or of a frequency, is the product of a "numerical value" and a "unit of measurement". When we use the symbol f (or ν) for the value of a frequency it implies the units cycles per second or hertz, but when we use the symbol ω for its value it implies the units radians per second; the numerical values of these two ways of expressing the value of a frequency have a ratio of 2π, but their values are equal. Omitting the units of angular measure "cycle" and "radian" can lead to an error of a factor of 2π. A similar state of affairs occurs for the Planck constant. We use the symbol h when we express the value of the Planck constant in J⋅s/cycle, and we use the symbol ħ when we express its value in J⋅s/rad. Since both represent the value of the Planck constant, but in different units, their "values" are equal while, as discussed below, their "numerical values" have a ratio of 2π. In this article the word "value" means "numerical value", and the equations involving the Planck constant and/or frequency actually involve their numerical values using the appropriate implied units. The distinction between "value" and "numerical value" as it applies to frequency and the Planck constant is explained in more detail in this pdf file Link.
These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors: Pμ = (E/c, p) = ħKμ = ħ(ω/c, k).
Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some integer multiple of a very small quantity, the "quantum of action", now called the reduced Planck constant or the natural unit of action. This is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist, rather, the particle is represented by a wavefunction spread out in space and in time. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of classical particle motion.
In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden.
The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J⋅s or N⋅m⋅s or kg⋅m2⋅s−1). Implicit in the dimensions of the Planck constant is the fact that the SI unit of frequency, the hertz, represents one complete cycle, 360 degrees or 2π radians, per second. An angular frequency in radians per second is often more natural in mathematics and physics, and many formulas use the reduced Planck constant ħ = h/2π (pronounced "h-bar").
In atomic units, ħ = 1, so that h = 2π.
Understanding the 'fixing' of the value of h
Since 2019, the numerical value of the Planck constant has been fixed, with a finite number of significant figures. The present definition of the kilogram states that "The kilogram [...] is defined by taking the fixed numerical value of h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of speed of light c and duration of hyperfine transition of the ground state of an unperturbed cesium-133 atom ΔνCs." This implies that mass metrology is now aimed at finding the value of one kilogram; it is the kilogram that adjusts. Every experiment that aims to measure the kilogram (such as the Kibble balance and the X-ray crystal density method) essentially refines the value of the kilogram.
As an illustration of this, suppose the decision to make h exact had been taken in 2010, when its measured value was 6.62606957×10−34 J⋅s, and that the present definition of the kilogram had been enforced then. The value of one kilogram would subsequently have been refined to 6.62607015/6.62606957 ≈ 1.0000001 times the mass of the International Prototype of the Kilogram (IPK), neglecting the metre and second units' share, for the sake of simplicity.
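The arithmetic of this hypothetical scenario can be checked directly; the two values below are the CODATA 2010 measurement and the 2019 exact value:

```python
# Checking the arithmetic of the illustration above: ratio of the 2019
# exact value of h to the 2010 CODATA measured value.
h_2019 = 6.62607015e-34   # exact value fixed by the SI redefinition, J*s
h_2010 = 6.62606957e-34   # CODATA 2010 measured value, J*s

ratio = h_2019 / h_2010
print(f"{ratio:.10f}")    # ~1.0000000875, a shift of roughly 9 parts in 10^8
```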
Significance of the value
The Planck constant is related to the quantization of light and matter. It can be seen as a subatomic-scale constant. In a unit system adapted to subatomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. Atomic unit systems are based (in part) on the Planck constant. The physical meaning of the Planck constant could suggest some basic features of our physical world, including the properties of the vacuum, which are characterized by constants such as the vacuum permittivity ε0 and the vacuum permeability μ0; in this interpretation, the Planck constant can be identified with a combination of these vacuum properties.
The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small. One can regard the Planck constant as relevant only to the microscopic scale, not to the macroscopic scale of everyday experience.
Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye to be green) has a frequency of 540 THz (540×1012 Hz). Each photon has an energy E = hf = 3.58×10−19 J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA = 6.02214076×1023 mol−1, with the result of 216 kJ/mol, about the food energy in three apples.
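The worked numbers in the preceding paragraph can be reproduced with a few lines of Python (a sketch using the exact SI constants):

```python
# Sketch reproducing the worked numbers above for 555 nm green light.
h = 6.62607015e-34      # Planck constant, J*s
c = 299_792_458         # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol (exact)

lam = 555e-9            # wavelength, m
f = c / lam             # ~5.4e14 Hz (540 THz)
E_photon = h * f        # ~3.58e-19 J per photon
E_mole = E_photon * N_A # ~216 kJ per mole of photons

print(f"f = {f:.3e} Hz, E_photon = {E_photon:.3e} J, E_mole = {E_mole/1e3:.0f} kJ/mol")
```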
In principle, the Planck constant can be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods.
Since the value of the Planck constant is fixed now, it is no longer determined or calculated in laboratories. Some of the methods described below, which were once used to determine the Planck constant, are now used to determine the mass of the kilogram. All of the methods below, except the X-ray crystal density method, rely on the theoretical basis of the Josephson effect and the quantum Hall effect.
The Josephson constant KJ relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of the Josephson effect suggests very strongly that KJ = 2e/h.
The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts. The measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force, in a Kibble balance. Assuming the validity of the theoretical treatment of the Josephson effect, KJ is related to the Planck constant by

$$h = \frac{8\alpha}{\mu_0 c K_J^2},$$

where α is the fine-structure constant, μ0 is the magnetic constant and c is the speed of light.
A Kibble balance (formerly known as a watt balance) is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ²RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e², the measurement of KJ²RK is a direct determination of the Planck constant, since KJ²RK = (2e/h)²·(h/e²) = 4/h, i.e. h = 4/(KJ²RK).
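Combining the two theoretical relations, KJ = 2e/h and RK = h/e², gives KJ²RK = 4/h. A Python sketch, using the conventional 1990 values purely for illustration:

```python
# Sketch: from K_J = 2e/h and R_K = h/e^2 it follows that
# K_J^2 * R_K = 4/h, so h = 4 / (K_J^2 * R_K). The inputs are the
# conventional 1990 values, used here purely for illustration.
K_J90 = 483_597.9e9    # Josephson constant K_J-90, Hz/V
R_K90 = 25_812.807     # von Klitzing constant R_K-90, ohm

h = 4 / (K_J90**2 * R_K90)
print(f"h ≈ {h:.6e} J*s")   # ~6.6262e-34, close to the defined value
```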
The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1⁄2 for protons) and the reduced Planck constant:

$$\gamma'_\mathrm{p} = \frac{\mu'_\mathrm{p}}{I\hbar} = \frac{2\mu'_\mathrm{p}}{\hbar}.$$
The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge. Hence

$$\mu'_\mathrm{p} = \frac{\mu'_\mathrm{p}}{\mu_\mathrm{e}} \cdot \frac{g_\mathrm{e}}{2}\,\mu_\mathrm{B}.$$
A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the measured gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γ′p-90(hi) is of interest in determining the Planck constant.
Substitution gives an expression for the Planck constant in terms of Γ′p-90(hi).
The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant.
X-ray crystal density
The X-ray crystal density method is primarily a method for determining the Avogadro constant NA, but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used; the Planck constant then follows from the measured value of NA.
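A minimal sketch of this arithmetic follows; the lattice spacing, density and molar mass below are approximate illustrative values rather than metrological data, and the final step uses the precisely known molar Planck constant NA·h:

```python
import math

# Sketch of the X-ray crystal density arithmetic. Silicon's cubic unit
# cell contains 8 atoms and has edge a = sqrt(8)*d220. The input numbers
# below are approximate, illustrative values, not metrological data.
d220 = 192.0156e-12    # {220} lattice-plane spacing of Si, m
rho = 2.329e3          # density of Si, kg/m^3
M = 28.0855e-3         # molar mass of natural Si, kg/mol

a = math.sqrt(8) * d220      # cell edge, ~543.1 pm
N_A = 8 * (M / rho) / a**3   # Avogadro constant, ~6.02e23 /mol

# h then follows from the precisely known molar Planck constant N_A*h.
N_A_h = 3.990312712e-10      # molar Planck constant, J*s/mol
h = N_A_h / N_A
print(f"N_A ≈ {N_A:.4e} /mol, h ≈ {h:.4e} J*s")
```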
A measurement of the Planck constant at the Large Hadron Collider laboratory was carried out in 2011. The study, called PCC, used the giant particle accelerator to better understand the relationships between the Planck constant and the measurement of distances in space.
- "Resolutions of the 26th CGPM" (PDF). BIPM. 2018-11-16. Retrieved 2018-11-20.
- International Bureau of Weights and Measures (2019-05-20), SI Brochure: The International System of Units (SI) (PDF) (9th ed.), ISBN 978-92-822-2272-0, p. 131
- "2018 CODATA Value: Planck constant". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
- "Resolutions of the 26th CGPM" (PDF). BIPM. 2018-11-16. Archived from the original (PDF) on 2018-11-19. Retrieved 2018-11-20.
- Planck, Max (1901), "Ueber das Gesetz der Energieverteilung im Normalspectrum" (PDF), Ann. Phys., 309 (3): 553–63, Bibcode:1901AnP...309..553P, doi:10.1002/andp.19013090310. English translation: "On the Law of Distribution of Energy in the Normal Spectrum".
- Bitter, Francis; Medicus, Heinrich A. (1973). Fields and particles. New York: Elsevier. pp. 137–144.
- Planck, M. (1914). The Theory of Heat Radiation. Masius, M. (transl.) (2nd ed.). P. Blakiston's Son. pp. 6, 168. OL 7154661M.
- Chandrasekhar, S. (1960). Radiative Transfer (Revised reprint ed.). Dover. p. 8. ISBN 978-0-486-60590-6.
- Rybicki, G. B.; Lightman, A. P. (1979). Radiative Processes in Astrophysics. Wiley. p. 22. ISBN 978-0-471-82759-7.
- Shao, Gaofeng; et al. (2019). "Improved oxidation resistance of high emissivity coatings on fibrous ceramic for reusable space systems". Corrosion Science. 146: 233–246. arXiv:1902.03943. doi:10.1016/j.corsci.2018.11.006.
- Kragh, Helge (1 December 2000), Max Planck: the reluctant revolutionary, PhysicsWorld.com
- Kragh, Helge (1999), Quantum Generations: A History of Physics in the Twentieth Century, Princeton University Press, p. 62, ISBN 978-0-691-09552-3
- Planck, Max (2 June 1920), The Genesis and Present State of Development of the Quantum Theory (Nobel Lecture)
- Previous Solvay Conferences on Physics, International Solvay Institutes, archived from the original on 16 December 2008, retrieved 12 December 2008
- See, e.g., Arrhenius, Svante (10 December 1922), Presentation speech of the 1921 Nobel Prize for Physics
- Lenard, P. (1902), "Ueber die lichtelektrische Wirkung", Ann. Phys., 313 (5): 149–98, Bibcode:1902AnP...313..149L, doi:10.1002/andp.19023130510
- Einstein, Albert (1905), "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (PDF), Ann. Phys., 17 (6): 132–48, Bibcode:1905AnP...322..132E, doi:10.1002/andp.19053220607
- Millikan, R. A. (1916), "A Direct Photoelectric Determination of Planck's h", Phys. Rev., 7 (3): 355–88, Bibcode:1916PhRv....7..355M, doi:10.1103/PhysRev.7.355
- Isaacson, Walter (2007-04-10), Einstein: His Life and Universe, ISBN 978-1-4165-3932-2, pp. 309–314.
- "The Nobel Prize in Physics 1921". Nobelprize.org. Retrieved 2014-04-23.
- Smith, Richard (1962), "Two Photon Photoelectric Effect", Physical Review, 128 (5): 2225, Bibcode:1962PhRv..128.2225S, doi:10.1103/PhysRev.128.2225. Smith, Richard (1963), "Two-Photon Photoelectric Effect", Physical Review, 130 (6): 2599, Bibcode:1963PhRv..130.2599S, doi:10.1103/PhysRev.130.2599.
- Bohr, Niels (1913), "On the Constitution of Atoms and Molecules", Phil. Mag., 6th Series, 26 (153): 1–25, Bibcode:1913PMag...26..476B, doi:10.1080/14786441308634993
- Mohr, J. C.; Phillips, W. D. (2015). "Dimensionless Units in the SI". Metrologia. 52 (1): 40–47. arXiv:1409.2794. Bibcode:2015Metro..52...40M. doi:10.1088/0026-1394/52/1/40.
- Mills, I. M. (2016). "On the units radian and cycle for the quantity plane angle". Metrologia. 53 (3): 991–997. Bibcode:2016Metro..53..991M. doi:10.1088/0026-1394/53/3/991.
- "SI units need reform to avoid confusion". Editorial. Nature. 548 (7666): 135. 7 August 2011. doi:10.1038/548135b. PMID 28796224.
- P. R. Bunker; I. M. Mills; Per Jensen (2019). "The Planck constant and its units". J Quant Spectrosc Radiat Transfer. 237: 106594. doi:10.1016/j.jqsrt.2019.106594.
- P. R. Bunker; Per Jensen (2020). "The Planck constant of action A". J Quant Spectrosc Radiat Transfer. 243: 106835. doi:10.1016/j.jqsrt.2020.106835.
- Maxwell J.C. (1873) A Treatise on Electricity and Magnetism, Oxford University Press
- Giuseppe Morandi; F. Napoli; E. Ercolessi (2001), Statistical mechanics: an intermediate course, p. 84, ISBN 978-981-02-4477-4
- Einstein, Albert (2003), "Physics and Reality" (PDF), Daedalus, 132 (4): 24, doi:10.1162/001152603771338742, archived from the original (PDF) on 2012-04-15,
The question is first: How can one assign a discrete succession of energy value Hσ to a system specified in the sense of classical mechanics (the energy function is a given function of the coordinates qr and the corresponding momenta pr)? The Planck constant h relates the frequency Hσ/h to the energy values Hσ. It is therefore sufficient to give to the system a succession of discrete frequency values.
- SI Brochure: The International System of Units (SI) (PDF) (9th ed.). BIPM.
- Chang, Donald C. (2017). "Physical interpretation of Planck's constant based on the Maxwell theory". Chin. Phys. B. 26 (4): 040301. doi:10.1088/1674-1056/26/4/040301.
- Materese, Robin (2018-05-14). "Kilogram: The Kibble Balance". NIST. Retrieved 2018-11-13.
- Quantum of Action and Quantum of Spin – Numericana
- Moriarty, Philip; Eaves, Laurence; Merrifield, Michael (2009). "h Planck's Constant". Sixty Symbols. Brady Haran for the University of Nottingham.
- A pdf file explaining the relation between h and ħ, their units, and the history of their introduction Link
Global warming and climate change are terms for the observed century-scale rise in the average temperature of the Earth's climate system and its related effects. Multiple lines of scientific evidence show that the climate system is warming. Although the increase of near-surface atmospheric temperature is the measure of global warming often reported in the popular press, most of the additional energy stored in the climate system since 1970 has gone into ocean warming. The remainder has melted ice and warmed the continents and atmosphere.[a] Many of the observed changes since the 1950s are unprecedented over tens to thousands of years.
Scientific understanding of global warming is increasing. The Intergovernmental Panel on Climate Change (IPCC) reported in 2014 that scientists were more than 95% certain that global warming is mostly being caused by human (anthropogenic) activities, mainly increasing concentrations of greenhouse gases such as carbon dioxide (CO2). Human-made carbon dioxide continues to increase above levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels is not absorbed by vegetation and the oceans and remains in the atmosphere. Climate model projections summarized in the report indicated that during the 21st century the global surface temperature is likely to rise a further 0.3 to 1.7 °C (0.5 to 3.1 °F) for their lowest emissions scenario using stringent mitigation and 2.6 to 4.8 °C (4.7 to 8.6 °F) for their highest. These findings have been recognized by the national science academies of the major industrialized nations[b] and are not disputed by any scientific body of national or international standing.
Future climate change and associated impacts will differ from region to region around the globe. Anticipated effects include warming global temperature, rising sea levels, changing precipitation, and expansion of deserts in the subtropics. Warming is expected to be greater over land than over the oceans and greatest in the Arctic, with the continuing retreat of glaciers, permafrost and sea ice. Other likely changes include more frequent extreme weather events including heat waves, droughts, heavy rainfall with floods and heavy snowfall; ocean acidification; and species extinctions due to shifting temperature regimes. Effects significant to humans include the threat to food security from decreasing crop yields and the abandonment of populated areas due to rising sea levels. Because the climate system has a large "inertia" and CO2 will stay in the atmosphere for a long time, many of these effects will not only exist for decades or centuries, but will persist for tens of thousands of years.
Possible societal responses to global warming include mitigation by emissions reduction, adaptation to its effects, building systems resilient to its effects, and possible future climate engineering. Most countries are parties to the United Nations Framework Convention on Climate Change (UNFCCC), whose ultimate objective is to prevent dangerous anthropogenic climate change. Parties to the UNFCCC have adopted a range of policies designed to reduce greenhouse gas emissions and to assist in adaptation to global warming, and have agreed that deep cuts in emissions are required and that, as a first target, future global warming should be limited to below 2.0 °C (3.6 °F) relative to the pre-industrial level,[c] while the Paris Agreement of 2015 stated that the parties will also "pursue efforts to" limit the temperature increase to 1.5 °C (2.7 °F).
Public reactions to global warming and general fears of its effects are also steadily on the rise, with a global 2015 Pew Research Center report showing a median of 54% who consider it "a very serious problem". There are, however, significant regional differences. Notably, Americans and Chinese, whose economies are responsible for the greatest annual CO2 emissions, are among the least concerned.
- 1 Observed temperature changes
- 2 Initial causes of temperature changes (external forcings)
- 3 Feedback
- 4 Climate models
- 5 Observed and expected environmental effects
- 6 Observed and expected effects on social systems
- 7 Possible responses to global warming
- 8 Discourse about global warming
- 9 Etymology
- 10 See also
- 11 Notes
- 12 Citations
- 13 References
- 14 Further reading
- 15 External links
Observed temperature changes
The global average (land and ocean) surface temperature shows a warming of 0.85 [0.65 to 1.06] °C in the period 1880 to 2012, based on multiple independently produced datasets. Earth's average surface temperature rose by 0.74±0.18 °C over the period 1906–2005. The rate of warming almost doubled for the last half of that period (0.13±0.03 °C per decade, versus 0.07±0.02 °C per decade).
The average temperature of the lower troposphere has increased between 0.13 and 0.22 °C (0.23 and 0.40 °F) per decade since 1979, according to satellite temperature measurements. Climate proxies show the temperature to have been relatively stable over the one or two thousand years before 1850, with regionally varying fluctuations such as the Medieval Warm Period and the Little Ice Age.
The warming that is evident in the instrumental temperature record is consistent with a wide range of observations, as documented by many independent scientific groups. Examples include sea level rise, widespread melting of snow and land ice, increased heat content of the oceans, increased humidity, and the earlier timing of spring events, e.g., the flowering of plants. The probability that these changes could have occurred by chance is virtually zero.
Temperature changes vary over the globe. Since 1979, land temperatures have increased about twice as fast as ocean temperatures (0.25 °C per decade against 0.13 °C per decade). Ocean temperatures increase more slowly than land temperatures because of the larger effective heat capacity of the oceans and because the ocean loses more heat by evaporation. Since the beginning of industrialisation the temperature difference between the hemispheres has increased due to melting of sea ice and snow in the North. Average arctic temperatures have been increasing at almost twice the rate of the rest of the world in the past 100 years; however arctic temperatures are also highly variable. Although more greenhouse gases are emitted in the Northern than Southern Hemisphere this does not contribute to the difference in warming because the major greenhouse gases persist long enough to mix between hemispheres.
The thermal inertia of the oceans and slow responses of other indirect effects mean that climate can take centuries or longer to adjust to changes in forcing. Climate commitment studies indicate that even if greenhouse gases were stabilized at year 2000 levels, a further warming of about 0.5 °C (0.9 °F) would still occur.
Global temperature is subject to short-term fluctuations that overlay long-term trends and can temporarily mask them. The relative stability in surface temperature from 2002 to 2009, which has been dubbed the global warming hiatus by the media and some scientists, is consistent with such an episode. Updates in 2015 to account for differing methods of measuring ocean surface temperatures show a positive trend over the recent decade.
15 of the top 16 warmest years have occurred since 2000. While record-breaking years can attract considerable public interest, individual years are less significant than the overall trend, and some climatologists have criticized the attention that the popular press gives to "warmest year" statistics; for example, Gavin Schmidt stated that "the long-term trends or the expected sequence of records are far more important than whether any single year is a record or not."
2015 was not only the warmest year on record, it broke the record by the largest margin by which the record has ever been broken. 2015 was the 39th consecutive year with above-average temperatures. Ocean oscillations like the El Niño Southern Oscillation (ENSO) can affect global average temperatures; for example, 1998 temperatures were significantly enhanced by strong El Niño conditions. 1998 remained the warmest year until 2005 and 2010, and the temperatures of both of those years were enhanced by El Niño periods. The large margin by which 2015 is the warmest year is also attributed to another strong El Niño. However, 2014 was ENSO neutral. According to NOAA and NASA, 2015 had the warmest respective months on record for 10 out of the 12 months. The average temperature around the globe was 1.62 ˚F (0.90 ˚C) above the twentieth century average. December 2015 was also the first month ever to reach a temperature 2 degrees Fahrenheit above normal for the planet.
Initial causes of temperature changes (external forcings)
The climate system can warm or cool in response to changes in external forcings. These are "external" to the climate system but not necessarily external to Earth. Examples of external forcings include changes in atmospheric composition (e.g., increased concentrations of greenhouse gases), solar luminosity, volcanic eruptions, and variations in Earth's orbit around the Sun.
The greenhouse effect is the process by which absorption and emission of infrared radiation by gases in a planet's atmosphere warm its lower atmosphere and surface. It was proposed by Joseph Fourier in 1824, discovered in 1860 by John Tyndall, was first investigated quantitatively by Svante Arrhenius in 1896, and was developed in the 1930s through 1960s by Guy Stewart Callendar.
On Earth, naturally occurring amounts of greenhouse gases have a mean warming effect of about 33 °C (59 °F).[d] Without the Earth's atmosphere, the Earth's average temperature would be well below the freezing temperature of water. The major greenhouse gases are water vapor, which causes about 36–70% of the greenhouse effect; carbon dioxide (CO2), which causes 9–26%; methane (CH4), which causes 4–9%; and ozone (O3), which causes 3–7%. Clouds also affect the radiation balance through cloud forcings similar to greenhouse gases.
Human activity since the Industrial Revolution has increased the amount of greenhouse gases in the atmosphere, leading to increased radiative forcing from CO2, methane, tropospheric ozone, CFCs and nitrous oxide. According to work published in 2007, the concentrations of CO2 and methane have increased by 36% and 148% respectively since 1750. These levels are much higher than at any time during the last 800,000 years, the period for which reliable data has been extracted from ice cores. Less direct geological evidence indicates that CO2 values higher than this were last seen about 20 million years ago.
Fossil fuel burning has produced about three-quarters of the increase in CO2 from human activity over the past 20 years. The rest of this increase is caused mostly by changes in land-use, particularly deforestation. Another significant non-fuel source of anthropogenic CO2 emissions is the calcination of limestone for clinker production, a chemical process which releases CO2. Estimates of global CO2 emissions in 2011 from fossil fuel combustion, including cement production and gas flaring, were 34.8 billion tonnes (9.5 ± 0.5 PgC), an increase of 54% above emissions in 1990. Coal burning was responsible for 43% of the total emissions, oil 34%, gas 18%, cement 4.9% and gas flaring 0.7%.
In May 2013, it was reported that readings for CO2 taken at the world's primary benchmark site in Mauna Loa surpassed 400 ppm. According to professor Brian Hoskins, this is likely the first time CO2 levels have been this high for about 4.5 million years. Monthly global CO2 concentrations exceeded 400 ppm in March 2015, probably for the first time in several million years. On 12 November 2015, NASA scientists reported that human-made carbon dioxide continues to increase above levels not seen in hundreds of thousands of years: currently, about half of the carbon dioxide released from the burning of fossil fuels is not absorbed by vegetation and the oceans and remains in the atmosphere.
Over the last three decades of the twentieth century, gross domestic product per capita and population growth were the main drivers of increases in greenhouse gas emissions. CO2 emissions are continuing to rise due to the burning of fossil fuels and land-use change. Emissions can be attributed to different regions. Attributions of emissions due to land-use change are subject to considerable uncertainty.
Emissions scenarios, estimates of changes in future emission levels of greenhouse gases, have been projected that depend upon uncertain economic, sociological, technological, and natural developments. In most scenarios, emissions continue to rise over the century, while in a few, emissions are reduced. Fossil fuel reserves are abundant, and will not limit carbon emissions in the 21st century. Emission scenarios, combined with modelling of the carbon cycle, have been used to produce estimates of how atmospheric concentrations of greenhouse gases might change in the future. Using the six IPCC SRES "marker" scenarios, models suggest that by the year 2100, the atmospheric concentration of CO2 could range between 541 and 970 ppm. This is 90–250% above the concentration in the year 1750.
The popular media and the public often confuse global warming with ozone depletion, i.e., the destruction of stratospheric ozone (e.g., the ozone layer) by chlorofluorocarbons. Although there are a few areas of linkage, the relationship between the two is not strong. Reduced stratospheric ozone has had a slight cooling influence on surface temperatures, while increased tropospheric ozone has had a somewhat larger warming effect.
Aerosols and soot
Global dimming, a gradual reduction in the amount of global direct irradiance at the Earth's surface, was observed from 1961 until at least 1990. Solid and liquid particles known as aerosols, produced by volcanoes and human-made pollutants, are thought to be the main cause of this dimming. They exert a cooling effect by increasing the reflection of incoming sunlight. The effects of the products of fossil fuel combustion – CO2 and aerosols – have partially offset one another in recent decades, so that net warming has been due to the increase in non-CO2 greenhouse gases such as methane. Radiative forcing due to aerosols is temporally limited due to the processes that remove aerosols from the atmosphere. Removal by clouds and precipitation gives tropospheric aerosols an atmospheric lifetime of only about a week, while stratospheric aerosols can remain for a few years. Carbon dioxide has a lifetime of a century or more, and as such, changes in aerosols will only delay climate changes due to carbon dioxide. Black carbon is second only to carbon dioxide for its contribution to global warming.
In addition to their direct effect by scattering and absorbing solar radiation, aerosols have indirect effects on the Earth's radiation budget. Sulfate aerosols act as cloud condensation nuclei and thus lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets, a phenomenon known as the Twomey effect. This effect also causes droplets to be of more uniform size, which reduces growth of raindrops and makes the cloud more reflective to incoming sunlight, known as the Albrecht effect. Indirect effects are most noticeable in marine stratiform clouds, and have very little radiative effect on convective clouds. Indirect effects of aerosols represent the largest uncertainty in radiative forcing.
Soot may either cool or warm Earth's climate system, depending on whether it is airborne or deposited. Atmospheric soot directly absorbs solar radiation, which heats the atmosphere and cools the surface. In isolated areas with high soot production, such as rural India, as much as 50% of surface warming due to greenhouse gases may be masked by atmospheric brown clouds. When deposited, especially on glaciers or on ice in arctic regions, the lower surface albedo can also directly heat the surface. The influences of atmospheric particles, including black carbon, are most pronounced in the tropics and sub-tropics, particularly in Asia, while the effects of greenhouse gases are dominant in the extratropics and southern hemisphere.
Since 1978, solar irradiance has been measured by satellites. These measurements indicate that the Sun's radiative output has not increased since 1978, so the warming during the past 30 years cannot be attributed to an increase in solar energy reaching the Earth.
Climate models have been used to examine the role of the Sun in recent climate change. Models are unable to reproduce the rapid warming observed in recent decades when they only take into account variations in solar output and volcanic activity. Models are, however, able to simulate the observed 20th century changes in temperature when they include all of the most important external forcings, including human influences and natural forcings.
Another line of evidence against solar variations having caused recent climate change comes from looking at how temperatures at different levels in the Earth's atmosphere have changed. Models and observations show that greenhouse warming results in warming of the lower atmosphere (the troposphere) but cooling of the upper atmosphere (the stratosphere). Depletion of the ozone layer by chemical refrigerants has also resulted in a strong cooling effect in the stratosphere. If solar variations were responsible for observed warming, warming of both the troposphere and stratosphere would be expected.
Variations in Earth's orbit
The tilt of the Earth’s axis and the shape of its orbit around the Sun vary slowly over tens of thousands of years and are a natural source of climate change, by changing the seasonal and latitudinal distribution of solar insolation.
During the last few thousand years, this phenomenon contributed to a slow cooling trend at high latitudes of the Northern Hemisphere during summer, a trend that was reversed by greenhouse-gas-induced warming during the 20th century.
Variations in orbital cycles may initiate a new glacial period in the future, though the timing of this depends on greenhouse gas concentrations as well as the orbital forcing. A new glacial period is not expected within the next 50,000 years if atmospheric CO2 concentration remains above 300 ppm.
The climate system includes a range of feedbacks, which alter the response of the system to changes in external forcings. Positive feedbacks increase the response of the climate system to an initial forcing, while negative feedbacks reduce it.
There are a range of feedbacks in the climate system, including water vapor, changes in ice-albedo (snow and ice cover affect how much the Earth's surface absorbs or reflects incoming sunlight), clouds, and changes in the Earth's carbon cycle (e.g., the release of carbon from soil). The main negative feedback is the energy the Earth's surface radiates into space as infrared radiation. According to the Stefan-Boltzmann law, if the absolute temperature (as measured in kelvin) doubles,[e] radiated energy increases by a factor of 16 (2 to the 4th power).
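A small Python sketch (illustrative, using a rough global mean surface temperature) of this negative feedback: black-body flux scales as T⁴, so doubling the absolute temperature multiplies the radiated energy by 16:

```python
# Sketch of the Stefan-Boltzmann negative feedback: radiated flux scales
# as T^4, so doubling the absolute temperature multiplies it by 2**4 = 16.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W*m^-2*K^-4

def radiated_flux(T):
    """Black-body flux at absolute temperature T (kelvin), W/m^2."""
    return sigma * T**4

T = 288.0   # approximate global mean surface temperature, K
print(radiated_flux(T))                          # ~390 W/m^2
print(radiated_flux(2 * T) / radiated_flux(T))   # exactly 16.0
```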
Feedbacks are an important factor in determining the sensitivity of the climate system to increased atmospheric greenhouse gas concentrations. Other factors being equal, a higher climate sensitivity means that more warming will occur for a given increase in greenhouse gas forcing. Uncertainty over the effect of feedbacks is a major reason why different climate models project different magnitudes of warming for a given forcing scenario. More research is needed to understand the role of clouds and carbon cycle feedbacks in climate projections.
The IPCC projections previously mentioned span the "likely" range (greater than 66% probability, based on expert judgement) for the selected emissions scenarios. However, the IPCC's projections do not reflect the full range of uncertainty. The lower end of the "likely" range appears to be better constrained than the upper end.
A climate model is a representation of the physical, chemical and biological processes that affect the climate system. Such models are based on scientific disciplines such as fluid dynamics and thermodynamics as well as physical processes such as radiative transfer. The models may be used to predict a range of variables such as local air movement, temperature, clouds, and other atmospheric properties; ocean temperature, salt content, and circulation; ice cover on land and sea; the transfer of heat and moisture from soil and vegetation to the atmosphere; and chemical and biological processes, among others.
Although researchers attempt to include as many processes as possible, simplifications of the actual climate system are inevitable because of the constraints of available computer power and limitations in knowledge of the climate system. Results from models can also vary due to different greenhouse gas inputs and the model's climate sensitivity. For example, the uncertainty in IPCC's 2007 projections is caused by (1) the use of multiple models with differing sensitivity to greenhouse gas concentrations, (2) the use of differing estimates of humanity's future greenhouse gas emissions, and (3) any additional emissions from climate feedbacks that were not included in the models IPCC used to prepare its report, e.g., greenhouse gas releases from permafrost.
The models do not assume the climate will warm due to increasing levels of greenhouse gases. Instead the models predict how greenhouse gases will interact with radiative transfer and other physical processes. Warming or cooling is thus a result, not an assumption, of the models.
Clouds and their effects are especially difficult to predict. Improving the models' representation of clouds is therefore an important topic in current research. Another prominent research topic is expanding and improving representations of the carbon cycle.
Models are also used to help investigate the causes of recent climate change by comparing the observed changes to those that the models project from various natural and human causes. Although these models do not unambiguously attribute the warming that occurred from approximately 1910 to 1945 to either natural variation or human effects, they do indicate that the warming since 1970 is dominated by anthropogenic greenhouse gas emissions.
The physical realism of models is tested by examining their ability to simulate contemporary or past climates. Climate models produce a good match to observations of global temperature changes over the last century, but do not simulate all aspects of climate. Not all effects of global warming are accurately predicted by the climate models used by the IPCC. Observed Arctic shrinkage has been faster than that predicted. Precipitation increased proportionally to atmospheric humidity, and hence significantly faster than global climate models predict. Since 1990, sea level has also risen considerably faster than models predicted it would.
Observed and expected environmental effects
Anthropogenic forcing has likely contributed to some of the observed changes, including sea level rise, changes in climate extremes (such as the number of warm and cold days), declines in Arctic sea ice extent, glacier retreat, and greening of the Sahara.
During the 21st century, glaciers and snow cover are projected to continue their widespread retreat. Projections of declines in Arctic sea ice vary. Recent projections suggest that Arctic summers could be ice-free (defined as ice extent less than 1 million square km) as early as 2025-2030.
"Detection" is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change. Detection does not imply attribution of the detected change to a particular cause. "Attribution" of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence. Detection and attribution may also be applied to observed changes in physical, ecological and social systems.
Changes in regional climate are expected to include greater warming over land, with most warming at high northern latitudes, and least warming over the Southern Ocean and parts of the North Atlantic Ocean.
Future changes in precipitation are expected to follow existing trends, with reduced precipitation over subtropical land areas, and increased precipitation at subpolar latitudes and some equatorial regions. Projections suggest a probable increase in the frequency and severity of some extreme weather events, such as heat waves.
A 2015 study published in Nature Climate Change, states:
"About 18% of the moderate daily precipitation extremes over land are attributable to the observed temperature increase since pre-industrial times, which in turn primarily results from human influence. For 2 °C of warming the fraction of precipitation extremes attributable to human influence rises to about 40%. Likewise, today about 75% of the moderate daily hot extremes over land are attributable to warming. It is the most rare and extreme events for which the largest fraction is anthropogenic, and that contribution increases nonlinearly with further warming."
Data analysis of extreme events from 1960 to 2010 suggests that droughts and heat waves appear simultaneously with increased frequency. Extremely wet or dry events within the monsoon period have increased since 1980.
Sea level rise
The sea level rise since 1993 has been estimated to have been on average between 2.6 mm and 2.9 mm per year ± 0.4 mm. Additionally, sea level rise has accelerated from 1995 to 2015. Over the 21st century, the IPCC projects, for a high emissions scenario, that global mean sea level could rise by 52–98 cm. The IPCC's projections are conservative, and may underestimate future sea level rise. Other estimates suggest that for the same period, global mean sea level could rise by 0.2 to 2.0 m (0.7–6.6 ft), relative to mean sea level in 1992.
Widespread coastal flooding would be expected if several degrees of warming is sustained for millennia. For example, sustained global warming of more than 2 °C (relative to pre-industrial levels) could lead to eventual sea level rise of around 1 to 4 m due to thermal expansion of sea water and the melting of glaciers and small ice caps. Melting of the Greenland ice sheet could contribute an additional 4 to 7.5 m over many thousands of years. It has been estimated that we are already committed to a sea-level rise of approximately 2.3 meters for each degree of temperature rise within the next 2,000 years.
Warming beyond the 2 °C target would potentially lead to rates of sea-level rise dominated by ice loss from Antarctica. Continued CO2 emissions from fossil sources could cause additional tens of meters of sea level rise over the next millennia, eventually eliminating the entire Antarctic ice sheet and causing about 58 meters of sea level rise.
In terrestrial ecosystems, the earlier timing of spring events, as well as poleward and upward shifts in plant and animal ranges, have been linked with high confidence to recent warming. Future climate change is expected to affect particular ecosystems, including tundra, mangroves, and coral reefs. It is expected that most ecosystems will be affected by higher atmospheric CO2 levels, combined with higher global temperatures. Overall, it is expected that climate change will result in the extinction of many species and reduced diversity of ecosystems.
Increases in atmospheric CO2 concentrations have led to an increase in ocean acidity. Dissolved CO2 increases ocean acidity, measured by lower pH values. Between 1750 and 2000, surface-ocean pH has decreased by ≈0.1, from ≈8.2 to ≈8.1. Surface-ocean pH has probably not been below ≈8.1 during the past 2 million years. Projections suggest that surface-ocean pH could decrease by an additional 0.3–0.4 units by 2100. Future ocean acidification could threaten coral reefs, fisheries, protected species, and other natural resources of value to society.
Ocean deoxygenation is projected to increase hypoxia by 10%, and to triple the volume of suboxic waters (waters with oxygen concentrations 98% lower than mean surface concentrations), for each 1 °C of upper-ocean warming.
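Taken at face value, and assuming for illustration that the per-degree factors quoted above compound with each additional degree of warming (an assumption made here, not stated in the source), the scaling looks like this:

```python
# Illustrative scaling of ocean deoxygenation with upper-ocean warming, assuming
# the per-1-°C factors quoted above compound multiplicatively (an assumption
# made for illustration; the source states the factors per 1 °C only).
hypoxia_factor_per_degC = 1.10   # +10% hypoxic waters per 1 °C of warming
suboxic_factor_per_degC = 3.0    # tripling of suboxic waters per 1 °C of warming

for warming in (1, 2, 3):
    hypoxia = hypoxia_factor_per_degC ** warming
    suboxic = suboxic_factor_per_degC ** warming
    print(f"{warming} °C: hypoxic waters x{hypoxia:.2f}, suboxic waters x{suboxic:.0f}")
```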
On the timescale of centuries to millennia, the magnitude of global warming will be determined primarily by anthropogenic CO2 emissions. This is due to carbon dioxide's very long lifetime in the atmosphere.
Stabilizing the global average temperature would require large reductions in CO2 emissions, as well as reductions in emissions of other greenhouse gases such as methane and nitrous oxide. Emissions of CO2 would need to be reduced by more than 80% relative to their peak level. Even if this were achieved, global average temperatures would remain close to their highest level for many centuries.
Long-term effects also include a response from the Earth's crust to ice melting and deglaciation, in a process called post-glacial rebound, in which land masses rise once they are no longer depressed by the weight of ice. This could lead to landslides and increased seismic and volcanic activity. Tsunamis could be generated by submarine landslides caused by warmer ocean water thawing ocean-floor permafrost or releasing gas hydrates. Some world regions, such as the French Alps, already show signs of an increase in landslide frequency.
Large-scale and abrupt impacts
Climate change could result in global, large-scale changes in natural and social systems. Examples include the possibility that the Atlantic Meridional Overturning Circulation could slow down or shut down (a shutdown would change weather in Europe and North America considerably), ocean acidification caused by increased atmospheric concentrations of carbon dioxide, and the long-term melting of ice sheets, which contributes to sea level rise.
Some large-scale changes could occur abruptly, i.e., over a short time period, and might also be irreversible. Examples of abrupt climate change are the rapid release of methane and carbon dioxide from permafrost, which would lead to amplified global warming, or the shutdown of thermohaline circulation. Scientific understanding of abrupt climate change is generally poor. The probability of abrupt change for some climate related feedbacks may be low. Factors that may increase the probability of abrupt climate change include higher magnitudes of global warming, warming that occurs more rapidly, and warming that is sustained over longer time periods.
The effects of climate change on human systems, mostly due to warming or shifts in precipitation patterns, or both, have been detected worldwide. Production of wheat and maize globally has been impacted by climate change. While crop production has increased in some mid-latitude regions such as the UK and Northeast China, economic losses due to extreme weather events have increased globally. There has been a shift from cold- to heat-related mortality in some regions as a result of warming. Livelihoods of indigenous peoples of the Arctic have been altered by climate change, and there is emerging evidence of climate change impacts on livelihoods of indigenous peoples in other regions. Regional impacts of climate change are now observable at more locations than before, on all continents and across ocean regions.
The future social impacts of climate change will be uneven. Many risks are expected to increase with higher magnitudes of global warming. All regions are at risk of experiencing negative impacts, but low-latitude, less developed areas face the greatest risk. A study from 2015 concluded that economic growth (gross domestic product) of poorer countries is much more impaired by projected future climate warming than previously thought.
Examples of impacts include:
- Food: Crop production will probably be negatively affected in low latitude countries, while effects at northern latitudes may be positive or negative. Global warming of around 4.6 °C relative to pre-industrial levels could pose a large risk to global and regional food security.
- Health: Generally impacts will be more negative than positive. Impacts include: the effects of extreme weather, leading to injury and loss of life; and indirect effects, such as undernutrition brought on by crop failures.
In small islands and mega deltas, inundation as a result of sea level rise is expected to threaten vital infrastructure and human settlements. This could lead to issues of homelessness in countries with low-lying areas such as Bangladesh, as well as statelessness for populations in countries such as the Maldives and Tuvalu.
Estimates based on the IPCC A1B emission scenario put the impact damages associated with the additional CO2 and CH4 released from thawing permafrost at about US$43 trillion.
Continued permafrost degradation will likely result in unstable infrastructure in Arctic regions, including Alaska, before 2100, affecting roads, pipelines, and buildings, as well as water distribution, and causing slope failures.
Possible responses to global warming
Mitigation of climate change consists of actions to reduce greenhouse gas emissions or to enhance the capacity of carbon sinks to absorb GHGs from the atmosphere. There is a large potential for future reductions in emissions by a combination of activities, including: energy conservation and increased energy efficiency; the use of low-carbon energy technologies, such as renewable energy, nuclear energy, and carbon capture and storage; and enhancing carbon sinks through, for example, reforestation and preventing deforestation. A 2015 report by Citibank concluded that transitioning to a low carbon economy would yield a positive return on investments.
Near- and long-term trends in the global energy system are inconsistent with limiting global warming to below 1.5 or 2 °C, relative to pre-industrial levels. Pledges made as part of the Cancún agreements are broadly consistent with having a likely chance (66 to 100% probability) of limiting global warming (in the 21st century) to below 3 °C, relative to pre-industrial levels.
To limit warming to below 2 °C, more stringent emission reductions in the near term would allow for less rapid reductions after 2030. Many integrated models are unable to meet the 2 °C target if pessimistic assumptions are made about the availability of mitigation technologies.
Other policy responses include adaptation to climate change. Adaptation to climate change may be planned, either in reaction to or anticipation of climate change, or spontaneous, i.e., without government intervention. Planned adaptation is already occurring on a limited basis. The barriers, limits, and costs of future adaptation are not fully understood.
A concept related to adaptation is adaptive capacity, which is the ability of a system (human, natural or managed) to adjust to climate change (including climate variability and extremes) to moderate potential damages, to take advantage of opportunities, or to cope with consequences. Unmitigated climate change (i.e., future climate change without efforts to limit greenhouse gas emissions) would, in the long term, be likely to exceed the capacity of natural, managed and human systems to adapt.
Environmental organizations and public figures have emphasized changes in the climate and the risks they entail, while promoting adaptation to changes in infrastructural needs and emissions reductions.
Climate engineering (sometimes called geoengineering or climate intervention) is the deliberate modification of the climate. It has been investigated as a possible response to global warming, e.g. by NASA and the Royal Society. Techniques under research fall generally into the categories solar radiation management and carbon dioxide removal, although various other schemes have been suggested. A study from 2014 investigated the most common climate engineering methods and concluded they are either ineffective or have potentially severe side effects and cannot be stopped without causing rapid climate change.
Discourse about global warming
Most countries in the world are parties to the United Nations Framework Convention on Climate Change (UNFCCC). The ultimate objective of the Convention is to prevent dangerous human interference with the climate system. As stated in the Convention, this requires that GHG concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can proceed in a sustainable fashion. The Framework Convention was agreed in 1992, but since then, global emissions have risen.
During negotiations, the G77 (a lobbying group in the United Nations representing 133 developing nations) pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions. This was justified on the basis that the developed world's emissions had contributed most to the accumulation of GHGs in the atmosphere, that per-capita emissions (i.e., emissions per head of population) were still relatively low in developing countries, and that the emissions of developing countries would grow to meet their development needs.
This mandate was sustained in the Kyoto Protocol to the Framework Convention, which entered into legal effect in 2005. In ratifying the Kyoto Protocol, most developed countries accepted legally binding commitments to limit their emissions. These first-round commitments expired in 2012. United States President George W. Bush rejected the treaty on the basis that "it exempts 80% of the world, including major population centers such as China and India, from compliance, and would cause serious harm to the US economy."
At the 15th UNFCCC Conference of the Parties, held in 2009 at Copenhagen, several UNFCCC Parties produced the Copenhagen Accord. Parties associated with the Accord (140 countries, as of November 2010) aim to limit the future increase in global mean temperature to below 2 °C. The 16th Conference of the Parties (COP16) was held at Cancún in 2010. It produced an agreement, not a binding treaty, that the Parties should take urgent action to reduce greenhouse gas emissions to meet a goal of limiting global warming to below 2 °C above pre-industrial temperatures. It also recognized the need to consider strengthening the goal to a global average rise of 1.5 °C.
Most scientists agree that humans are contributing to observed climate change. At least nine surveys of scientists and meta-studies of academic papers concerning global warming have been carried out since 2004. Although up to 18% of scientists surveyed might disagree with the consensus view, among scientists publishing in the field of climate, 97 to 100% agreed with the consensus that most of the current warming is anthropogenic (caused by humans). National science academies have called on world leaders for policies to cut global emissions.
In the scientific literature, there is a strong consensus that global surface temperatures have increased in recent decades and that the trend is caused mainly by human-induced emissions of greenhouse gases. No scientific body of national or international standing disagrees with this view.
Discussion by the public and in popular media
The global warming controversy refers to a variety of disputes, substantially more pronounced in the popular media than in the scientific literature, regarding the nature, causes, and consequences of global warming. The disputed issues include the causes of increased global average air temperature, especially since the mid-20th century, whether this warming trend is unprecedented or within normal climatic variations, whether humankind has contributed significantly to it, and whether the increase is completely or partially an artifact of poor measurements. Additional disputes concern estimates of climate sensitivity, predictions of additional warming, and what the consequences of global warming will be.
From 1990 to 1997, right-wing conservative think tanks in the United States mobilized to challenge the legitimacy of global warming as a social problem. They challenged the scientific evidence, argued that global warming will have benefits, and asserted that proposed solutions would do more harm than good. Some people dispute aspects of climate change science. Organizations such as the libertarian Competitive Enterprise Institute, conservative commentators, and some companies such as ExxonMobil have challenged IPCC climate change scenarios, funded scientists who disagree with the scientific consensus, and provided their own projections of the economic cost of stricter controls. On the other hand, some fossil fuel companies have scaled back their efforts in recent years, or even called for policies to reduce global warming. Global oil companies have begun to acknowledge climate change exists and is caused by human activities and the burning of fossil fuels.
Surveys of public opinion
The world public, or at least people in economically advanced regions, became broadly aware of the global warming problem in the late 1980s. Polling groups began to track opinions on the subject, at first mainly in the United States. The longest consistent polling, by Gallup in the US, found relatively small deviations of 10% or so from 1998 to 2015 in opinion on the seriousness of global warming, but with increasing polarization between those concerned and those unconcerned.
The first major worldwide poll, conducted by Gallup in 2008-2009 in 127 countries, found that some 62% of people worldwide said they knew about global warming. In the advanced countries of North America, Europe and Japan, 90% or more knew about it (97% in the U.S., 99% in Japan); in less developed countries, especially in Africa, fewer than a quarter knew about it, although many had noticed local weather changes. Among those who knew about global warming, there was a wide variation between nations in belief that the warming was a result of human activities.
By 2010, with 111 countries surveyed, Gallup determined that there was a substantial decrease since 2007–08 in the number of Americans and Europeans who viewed global warming as a serious threat. In the US, just a little over half the population (53%) now viewed it as a serious concern for either themselves or their families; this was 10 points below the 2008 poll (63%). Latin America had the biggest rise in concern: 73% said global warming is a serious threat to their families. This global poll also found that people are more likely to attribute global warming to human activities than to natural causes, except in the US where nearly half (47%) of the population attributed global warming to natural causes.
A March–May 2013 survey by Pew Research Center for the People & the Press polled 39 countries about global threats. According to 54% of those questioned, global warming topped the list of perceived global threats. In a January 2013 survey, Pew found that 69% of Americans say there is solid evidence that the Earth's average temperature has gotten warmer over the past few decades, up six points since November 2011 and 12 points since 2009.
A 2010 survey of 14 industrialized countries found that skepticism about the danger of global warming was highest in Australia, Norway, New Zealand and the United States, in that order, correlating positively with per capita emissions of carbon dioxide.
In the 1950s, research suggested increasing temperatures, and a 1952 newspaper reported "climate change". This phrase next appeared in a November 1957 report in The Hammond Times which described Roger Revelle's research into the effects of increasing human-caused CO2 emissions on the greenhouse effect, "a large scale global warming, with radical climate changes may result". Both phrases were only used occasionally until 1975, when Wallace Smith Broecker published a scientific paper on the topic; "Climatic Change: Are We on the Brink of a Pronounced Global Warming?" The phrase began to come into common use, and in 1976 Mikhail Budyko's statement that "a global warming up has started" was widely reported. Other studies, such as a 1971 MIT report, referred to the human impact as "inadvertent climate modification", but an influential 1979 National Academy of Sciences study headed by Jule Charney followed Broecker in using global warming for rising surface temperatures, while describing the wider effects of increased CO2 as climate change.
In 1986 and November 1987, NASA climate scientist James Hansen gave testimony to Congress on global warming. There were increasing heatwaves and drought problems in the summer of 1988, and when Hansen testified in the Senate on 23 June he sparked worldwide interest. He said: "global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and the observed warming." Public attention increased over the summer, and global warming became the dominant popular term, commonly used both by the press and in public discourse.
In a 2008 NASA article on usage, Erik M. Conway defined Global warming as "the increase in Earth’s average surface temperature due to rising levels of greenhouse gases", while Climate change was "a long-term change in the Earth’s climate, or of a region on Earth." As effects such as changing patterns of rainfall and rising sea levels would probably have more impact than temperatures alone, he considered global climate change a more scientifically accurate term, and like the Intergovernmental Panel on Climate Change, the NASA website would emphasise this wider context.
- Climate change and agriculture
- Effects of global warming on oceans
- Environmental impact of the coal industry
- Geologic temperature record
- Global cooling
- Glossary of climate change
- Greenhouse gas emissions accounting
- History of climate change science
- Index of climate change articles
- Scientific opinion on climate change
- Scientific journals use "global warming" to describe an increasing global average temperature just at Earth's surface, and most of these authorities further limit "global warming" to such increases caused by human activities or increasing greenhouse gases.
- The 2001 joint statement was signed by the national academies of science of Australia, Belgium, Brazil, Canada, the Caribbean, the People's Republic of China, France, Germany, India, Indonesia, Ireland, Italy, Malaysia, New Zealand, Sweden, and the UK. The 2005 statement added Japan, Russia, and the U.S. The 2007 statement added Mexico and South Africa. The Network of African Science Academies, and the Polish Academy of Sciences have issued separate statements. Professional scientific societies include American Astronomical Society, American Chemical Society, American Geophysical Union, American Institute of Physics, American Meteorological Society, American Physical Society, American Quaternary Association, Australian Meteorological and Oceanographic Society, Canadian Foundation for Climate and Atmospheric Sciences, Canadian Meteorological and Oceanographic Society, European Academy of Sciences and Arts, European Geosciences Union, European Science Foundation, Geological Society of America, Geological Society of Australia, Geological Society of London-Stratigraphy Commission, InterAcademy Council, International Union of Geodesy and Geophysics, International Union for Quaternary Research, National Association of Geoscience Teachers, National Research Council (US), Royal Meteorological Society, and World Meteorological Organization.
- Earth has already experienced almost 1/2 of the 2.0 °C (3.6 °F) described in the Cancún Agreement. In the last 100 years, Earth's average surface temperature increased by about 0.8 °C (1.4 °F) with about two thirds of the increase occurring over just the last three decades.
- The greenhouse effect produces an average worldwide temperature increase of about 33 °C (59 °F) compared to black-body predictions without the greenhouse effect, not an average surface temperature of 33 °C (91 °F). The average worldwide surface temperature is about 14 °C (57 °F); a back-of-envelope version of the underlying black-body calculation is sketched after these notes.
- A rise in temperature from 10 °C to 20 °C is not a doubling of absolute temperature; a rise from (273 + 10) K = 283 K to (273 + 20) K = 293 K is an increase of (293 − 283)/283 = 3.5 %.
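- The "black-body prediction" referred to above follows from the Stefan–Boltzmann law. A minimal sketch of that standard textbook calculation is shown below; the solar constant and planetary albedo are commonly used values assumed here, not figures taken from this article.

```python
# Back-of-envelope effective (black-body) temperature of Earth without a
# greenhouse effect, via the Stefan-Boltzmann law. Solar constant and albedo
# are standard assumed values, not taken from this article.
sigma = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
solar_constant = 1361.0  # W m^-2 (assumed)
albedo = 0.3             # planetary albedo (assumed)

absorbed = solar_constant * (1 - albedo) / 4   # averaged over the whole sphere
t_effective = (absorbed / sigma) ** 0.25       # ~255 K, i.e. about -19 °C
t_surface = 14 + 273.15                        # observed mean surface temperature

print(f"Effective temperature without a greenhouse effect: {t_effective - 273.15:.0f} °C")
print(f"Implied greenhouse warming: ~{t_surface - t_effective:.0f} °C")
```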
- 16 January 2015: NASA GISS: NASA, NOAA Find 2014 Warmest Year in Modern Record, in: Research News. NASA Goddard Institute for Space Studies, New York, NY, USA. Retrieved 20 February 2015
- Gillis, Justin (28 November 2015). "Short Answers to Hard Questions About Climate Change". The New York Times. Retrieved 29 November 2015.
- Hartmann, D. L.; Klein Tank, A. M. G.; Rusticucci, M. (2013). "2: Observations: Atmosphere and Surface" (PDF). IPCC WGI AR5 (Report). p. 198.
Evidence for a warming world comes from multiple independent climate indicators, from high up in the atmosphere to the depths of the oceans. They include changes in surface, atmospheric and oceanic temperatures; glaciers; snow cover; sea ice; sea level and atmospheric water vapour. Scientists from all over the world have independently verified this evidence many times.
- "Myth vs Facts....". EPA (US). 2013.The U.S. Global Change Research Program, the National Academy of Sciences, and the Intergovernmental Panel on Climate Change (IPCC) have each independently concluded that warming of the climate system in recent decades is 'unequivocal'. This conclusion is not drawn from any one source of data but is based on multiple lines of evidence, including three worldwide temperature datasets showing nearly identical warming trends as well as numerous other independent indicators of global warming (e.g., rising sea levels, shrinking Arctic sea ice).
- Borenstein, Seth (29 November 2015). "Earth is a wilder, warmer place since last climate deal made". Retrieved 29 November 2015.
- Rhein, M.; Rintoul, S.R. (2013). "3: Observations: Ocean" (PDF). IPCC WGI AR5 (Report). p. 257.
Ocean warming dominates the global energy change inventory. Warming of the ocean accounts for about 93% of the increase in the Earth's energy inventory between 1971 and 2010 (high confidence), with warming of the upper (0 to 700 m) ocean accounting for about 64% of the total. Melting ice (including Arctic sea ice, ice sheets and glaciers) and warming of the continents and atmosphere account for the remainder of the change in energy.
- IPCC, Climate Change 2013: The Physical Science Basis - Summary for Policymakers, Observed Changes in the Climate System, p. 2, in IPCC AR5 WG1 2013. "Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia."
- "CLIMATE CHANGE 2014: Synthesis Report. Summary for Policymakers" (PDF). IPCC. Retrieved 1 November 2015.
The following terms have been used to indicate the assessed likelihood of an outcome or a result: virtually certain 99–100% probability, very likely 90–100%, likely 66–100%, about as likely as not 33–66%, unlikely 0–33%, very unlikely 0–10%, exceptionally unlikely 0–1%. Additional terms (extremely likely: 95–100%, more likely than not >50–100%, more unlikely than likely 0–<50% and extremely unlikely 0–5%) may also be used when appropriate.
- "CLIMATE CHANGE 2014: Synthesis Report. Summary for Policymakers" (PDF). IPCC. Retrieved 7 March 2015.
The evidence for human influence on the climate system has grown since the Fourth Assessment Report (AR4). It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together
- America's Climate Choices: Panel on Advancing the Science of Climate Change; National Research Council (2010). Advancing the Science of Climate Change. Washington, D.C.: The National Academies Press. ISBN 0-309-14588-0.
(p1) ... there is a strong, credible body of evidence, based on multiple lines of research, documenting that climate is changing and that these changes are in large part caused by human activities. While much remains to be learned, the core phenomenon, scientific questions, and hypotheses have been examined thoroughly and have stood firm in the face of serious scientific debate and careful evaluation of alternative explanations. * * * (p21-22) Some scientific conclusions or theories have been so thoroughly examined and tested, and supported by so many independent observations and results, that their likelihood of subsequently being found to be wrong is vanishingly small. Such conclusions and theories are then regarded as settled facts. This is the case for the conclusions that the Earth system is warming and that much of this warming is very likely due to human activities.
- Buis, Alan; Ramsayer, Kate; Rasmussen, Carol (12 November 2015). "A Breathing Planet, Off Balance". NASA. Retrieved 13 November 2015.
- Staff (12 November 2015). "Audio (66:01) - NASA News Conference - Carbon & Climate Telecon". NASA. Retrieved 12 November 2015.
- St. Fleur, Nicholas (10 November 2015). "Atmospheric Greenhouse Gas Levels Hit Record, Report Says". The New York Times. Retrieved 11 November 2015.
- Ritter, Karl (9 November 2015). "UK: In 1st, global temps average could be 1 degree C higher". AP News. Retrieved 11 November 2015.
- Stocker et al., Technical Summary, in IPCC AR5 WG1 2013.
- "Joint Science Academies' Statement" (PDF). Retrieved 6 January 2014.
- Kirby, Alex (17 May 2001). "Science academies back Kyoto". BBC News. Retrieved 27 July 2011.
- DiMento, Joseph F. C.; Doughman, Pamela M. (2007). Climate Change: What It Means for Us, Our Children, and Our Grandchildren. The MIT Press. p. 68. ISBN 978-0-262-54193-0.
- Parry, M.L.; et al., "Technical summary", Box TS.6. The main projected impacts for regions, in IPCC AR4 WG2 2007, pp. 59–63
- Solomon et al., Technical Summary, Section TS.5.3: Regional-Scale Projections, in IPCC AR4 WG1 2007.
- Lu, Jian; Vecchi, Gabriel A.; Reichler, Thomas (2007). "Expansion of the Hadley cell under global warming" (PDF). Geophysical Research Letters 34 (6): L06805. Bibcode:2007GeoRL..3406805L. doi:10.1029/2006GL028443.
- On snowfall:
- Christopher Joyce (15 February 2010). "Get This: Warming Planet Can Mean More Snow". NPR.
- "Global warming means more snowstorms: scientists". 1 March 2011.
- "Does record snowfall disprove global warming?". 9 July 2010. Retrieved 14 December 2014.
- Battisti, David; Naylor, Rosamund L. (2009). "Historical warnings of future food insecurity with unprecedented seasonal heat". Science 323 (5911): 240–4. doi:10.1126/science.1164363. PMID 19131626. Retrieved 13 April 2012.
- US NRC 2012, p. 26
- Peter U. Clark et al.: Consequences of twenty-first-century policy for multi-millennial climate and sea-level change. Nature Climate Change 6, 2016, 360-369, doi:10.1038/NCLIMATE2923
- United Nations Framework Convention on Climate Change (UNFCCC) (2011). "Status of Ratification of the Convention". UNFCCC Secretariat: Bonn, Germany: UNFCCC. Most countries in the world are Parties to the United Nations Framework Convention on Climate Change (UNFCCC), which has adopted the 2 °C target. As of 25 November 2011, there are 195 parties (194 states and 1 regional economic integration organization (the European Union)) to the UNFCCC.
- "Article 2". The United Nations Framework Convention on Climate Change.
The ultimate objective of this Convention and any related legal instruments that the Conference of the Parties may adopt is to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time-frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner. (Excerpt from the founding international treaty, which took force on 21 March 1994.)
- United Nations Framework Convention on Climate Change (UNFCCC) (2005). "Sixth compilation and synthesis of initial national communications from Parties not included in Annex I to the Convention. Note by the secretariat. Executive summary" (PDF). Geneva (Switzerland): United Nations Office at Geneva.
- Gupta, S. et al. 13.2 Climate change and other related policies, in IPCC AR4 WG3 2007.
- Ch 4: Climate change and the energy outlook., in IEA 2009, pp. 173–184 (pp.175-186 of PDF)
- United Nations Framework Convention on Climate Change (UNFCCC) (2011). "Compilation and synthesis of fifth national communications. Executive summary. Note by the secretariat" (PDF). Geneva (Switzerland): United Nations Office at Geneva.
- Adger, et al., Chapter 17: Assessment of adaptation practices, options, constraints and capacity, Executive summary, in IPCC AR4 WG2 2007.
- 6. Generating the funding needed for mitigation and adaptation (PDF), in "World Development Report 2010: Development and Climate Change". Washington, D.C., USA: The International Bank for Reconstruction and Development / The World Bank. 2010: 262–263.
- United Nations Framework Convention on Climate Change (UNFCCC) (2011). "Conference of the Parties – Sixteenth Session: Decision 1/CP.16: The Cancun Agreements: Outcome of the work of the Ad Hoc Working Group on Long-term Cooperative Action under the Convention (English): Paragraph 4" (PDF). UNFCCC Secretariat: Bonn, Germany: UNFCCC: 3. "(...) deep cuts in global greenhouse gas emissions are required according to science, and as documented in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, with a view to reducing global greenhouse gas emissions so as to hold the increase in global average temperature below 2 °C above preindustrial levels"
- America's Climate Choices. Washington, D.C.: The National Academies Press. 2011. p. 15. ISBN 978-0-309-14585-5.
The average temperature of the Earth's surface increased by about 1.4 °F (0.8 °C) over the past 100 years, with about 1.0 °F (0.6 °C) of this warming occurring over just the past three decades.
- Sutter, John D.; Berlinger, Joshua (12 December 2015). "Final draft of climate deal formally accepted in Paris". CNN. Cable News Network, Turner Broadcasting System, Inc. Retrieved 12 December 2015.
- Stokes, Bruce; Wike, Richard; Carle, Jill (5 November 2015). "Global Concern about Climate Change, Broad Support for Limiting Emissions: U.S., China Less Worried; Partisan Divides in Key Countries". Pew Research Center. Retrieved 18 June 2016.
- Brown, Dwayne; Cabbage, Michael; McCarthy, Leslie; Norton, Karen (20 January 2016). "NASA, NOAA Analyses Reveal Record-Shattering Global Warm Temperatures in 2015". NASA. Retrieved 21 January 2016.
- Rhein, M., et al. (June 7, 2013): Box 3.1, in: Chapter 3: Observations: Ocean (final draft accepted by IPCC Working Group I), pp.11-12 (pp.14-15 of PDF chapter), in: IPCC AR5 WG1 2013
- IPCC (November 11, 2013): D.3 Detection and Attribution of Climate Change, in: Summary for Policymakers (finalized version), p.15, in: IPCC AR5 WG1 2013
- "Climate Change 2013: The Physical Science Basis, IPCC Fifth Assessment Report (WGI AR5)" (PDF). WGI AR5. IPCC AR5. 2013. p. 5.
- "Climate Change 2007: Working Group I: The Physical Science Basis". IPCC AR4. 2007.
- Jansen et al., Ch. 6, Palaeoclimate, Section: What Do Reconstructions Based on Palaeoclimatic Proxies Show?, pp. 466–478, in IPCC AR4 WG1 2007.
- Kennedy, J.J.; et al. (2010). "How do we know the world has warmed? in: 2. Global Climate, in: State of the Climate in 2009". Bull. Amer. Meteor. Soc. 91 (7): 26.
- Kennedy, C. (10 July 2012). "ClimateWatch Magazine >> State of the Climate: 2011 Global Sea Level". NOAA Climate Services Portal.
- "Summary for Policymakers". Direct Observations of Recent Climate Change., in IPCC AR4 WG1 2007
- "Summary for Policymakers". B. Current knowledge about observed impacts of climate change on the natural and human environment., in IPCC AR4 WG2 2007
- Rosenzweig, C.; et al. "Ch 1: Assessment of Observed Changes and Responses in Natural and Managed Systems". Sec: Changes in phenology., in IPCC AR4 WG2 2007, p. 99
- Trenberth et al., Chap 3, Observations: Atmospheric Surface and Climate Change, Executive Summary, p. 237, in IPCC AR4 WG1 2007.
- Rowan T. Sutton; Buwen Dong; Jonathan M. Gregory (2007). "Land/sea warming ratio in response to climate change: IPCC AR4 model results and comparison with observations". Geophysical Research Letters 34 (2): L02701. Bibcode:2007GeoRL..3402701S. doi:10.1029/2006GL028164. Retrieved 19 September 2007.
- Feulner, Georg; Rahmstorf, Stefan; Levermann, Anders; Volkwardt, Silvia (March 2013). "On the Origin of the Surface Air Temperature Difference Between the Hemispheres in Earth's Present-Day Climate". Journal of Climate 26: 130325101629005. doi:10.1175/JCLI-D-12-00636.1. Retrieved 25 April 2013.
- TS.3.1.2 Spatial Distribution of Changes in Temperature, Circulation and Related Variables - AR4 WGI Technical Summary
- Ehhalt et al., Chapter 4: Atmospheric Chemistry and Greenhouse Gases, Section: Carbon monoxide (CO) and hydrogen (H2), p. 256, in IPCC TAR WG1 2001.
- Meehl, Gerald A.; Washington, Warren M.; Collins, William D.; Arblaster, Julie M.; Hu, Aixue; Buja, Lawrence E.; Strand, Warren G.; Teng, Haiyan (18 March 2005). "How Much More Global Warming and Sea Level Rise" (PDF). Science 307 (5716): 1769–1772. Bibcode:2005Sci...307.1769M. doi:10.1126/science.1106663. PMID 15774757. Retrieved 11 February 2007.
- England, Matthew (February 2014). "Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus". Nature Climate Change 4: 222–227. Bibcode:2014NatCC...4..222E. doi:10.1038/nclimate2106.
- Knight, J.; Kenney, J.J.; Folland, C.; Harris, G.; Jones, G.S.; Palmer, M.; Parker, D.; Scaife, A.; Stott, P. (August 2009). "Do Global Temperature Trends Over the Last Decade Falsify Climate Predictions? [in "State of the Climate in 2008"]" (PDF). Bull. Amer. Meteor. Soc. 90 (8): S75–S79. Retrieved 13 August 2011.
- Global temperature slowdown – not an end to climate change. UK Met Office. Retrieved 20 March 2011.
- Gavin Schmidt (4 June 2015). "NOAA temperature record updates and the ‘hiatus’".
- NOAA (4 June 2015). "Science publishes new NOAA analysis: Data show no recent slowdown in global warming".
- "2015 is warmest year on record, NOAA and NASA say". CNN. 20 January 2016.
- Schmidt, Gavin (22 January 2015). "Thoughts on 2014 and ongoing temperature trends". RealClimate. Retrieved 4 September 2015.
- "Climate change: 2015 'shattered' global temperature record by wide margin". BBC. 20 January 2016.
- Miller, Brandon (20 January 2016). "2015 is warmest year on record, NOAA and NASA say". CNN. Retrieved 27 March 2016.
- Group (28 November 2004). "Forcings (filed under: Glossary)". RealClimate.
- Pew Center on Global Climate Change / Center for Climate and Energy Solutions (September 2006). "Science Brief 1: The Causes of Global Climate Change" (PDF). Arlington, Virginia, USA: Center for Climate and Energy Solutions., p.2
- US NRC 2012, p. 9
- Hegerl et al., Chapter 9: Understanding and Attributing Climate Change, Section: The Influence of Other Anthropogenic and Natural Forcings, in IPCC AR4 WG1 2007, pp. 690–691. "Recent estimates indicate a relatively small combined effect of natural forcings on the global mean temperature evolution of the second half of the 20th century, with a small net cooling from the combined effects of solar and volcanic forcings." p. 690
- Tyndall, John (1861). "On the Absorption and Radiation of Heat by Gases and Vapours, and on the Physical Connection of Radiation, Absorption, and Conduction" (PDF). Philosophical Magazine. 4 22: 169–94, 273–85. Retrieved 8 May 2013.
- Weart, Spencer (2008). "The Carbon Dioxide Greenhouse Effect". The Discovery of Global Warming. American Institute of Physics. Retrieved 21 April 2009.
- The Callendar Effect: the life and work of Guy Stewart Callendar (1898–1964) Amer Meteor Soc., Boston. ISBN 978-1-878220-76-9
- Le Treut; et al. "Chapter 1: Historical Overview of Climate Change Science". FAQ 1.1., p. 97, in IPCC AR4 WG1 2007: "To emit 240 W m–2, a surface would have to have a temperature of around −19 °C. This is much colder than the conditions that actually exist at the Earth's surface (the global mean surface temperature is about 14 °C). Instead, the necessary −19 °C is found at an altitude about 5 km above the surface."
- Blue, Jessica. "What is the Natural Greenhouse Effect?". National Geographic. Retrieved 1 Jan 2015.
- Kiehl, J.T.; Trenberth, K.E. (1997). "Earth's Annual Global Mean Energy Budget" (PDF). Bulletin of the American Meteorological Society 78 (2): 197–208. Bibcode:1997BAMS...78..197K. doi:10.1175/1520-0477(1997)078<0197:EAGMEB>2.0.CO;2. ISSN 1520-0477. Archived from the original (PDF) on 24 June 2008. Retrieved 21 April 2009.
- Schmidt, Gavin (6 April 2005). "Water vapour: feedback or forcing?". RealClimate. Retrieved 21 April 2009.
- Russell, Randy (16 May 2007). "The Greenhouse Effect & Greenhouse Gases". University Corporation for Atmospheric Research Windows to the Universe. Retrieved 27 December 2009.
- EPA (2007). "Recent Climate Change: Atmosphere Changes". Climate Change Science Program. United States Environmental Protection Agency. Retrieved 21 April 2009.
- Spahni, Renato; Jérôme Chappellaz; Thomas F. Stocker; Laetitia Loulergue; Gregor Hausammann; Kenji Kawamura; Jacqueline Flückiger; Jakob Schwander; Dominique Raynaud; Valérie Masson-Delmotte; Jean Jouzel (November 2005). "Atmospheric Methane and Nitrous Oxide of the Late Pleistocene from Antarctic Ice Cores". Science 310 (5752): 1317–1321. Bibcode:2005Sci...310.1317S. doi:10.1126/science.1120132. PMID 16311333.
- Siegenthaler, Urs; et al. (November 2005). "Stable Carbon Cycle–Climate Relationship During the Late Pleistocene" (PDF). Science 310 (5752): 1313–1317. Bibcode:2005Sci...310.1313S. doi:10.1126/science.1120130. PMID 16311332. Retrieved 25 August 2010.
- Petit, J. R.; et al. (3 June 1999). "Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica" (PDF). Nature 399 (6735): 429–436. Bibcode:1999Natur.399..429P. doi:10.1038/20859. Retrieved 27 December 2009.
- Lüthi, D.; Le Floch, M.; Bereiter, B.; Blunier, T.; Barnola, J. M.; Siegenthaler, U.; Raynaud, D.; Jouzel, J.; Fischer, H.; Kawamura, K.; Stocker, T. F. (2008). "High-resolution carbon dioxide concentration record 650,000–800,000 years before present". Nature 453 (7193): 379–382. Bibcode:2008Natur.453..379L. doi:10.1038/nature06949. PMID 18480821.
- Pearson, PN; Palmer, MR (2000). "Atmospheric carbon dioxide concentrations over the past 60 million years". Nature 406 (6797): 695–699. doi:10.1038/35021000. PMID 10963587.
- IPCC, Summary for Policymakers, Concentrations of atmospheric greenhouse gases ..., p. 7, in IPCC TAR WG1 2001.
- IPCC (2007) AR4. Climate Change 2007: Working Group III: Mitigation of Climate Change, section 7.4.5. https://www.ipcc.ch/publications_and_data/ar4/wg3/en/ch7s7-4-5.html
- Le Quéré, C.; Andres, R.J.; Boden, T.; Conway, T.; Houghton, R.A.; House, J.I.; Marland, G.; Peters, G.P.; van der Werf, G.; Ahlström, A.; Andrew, R.M.; Bopp, L.; Canadell, J.G.; Ciais, P.; Doney, S.C.; Enright, C.; Friedlingstein, P.; Huntingford, C.; Jain, A.K.; Jourdain, C.; Kato, E.; Keeling, R.F.; Klein Goldewijk, K.; Levis, S.; Levy, P.; Lomas, M.; Poulter, B.; Raupach, M.R.; Schwinger, J.; Sitch, S.; Stocker, B.D.; Viovy, N.; Zaehle, S.; Zeng, N. (2 December 2012). "The global carbon budget 1959–2011". Earth System Science Data Discussions 5 (2): 1107–1157. Bibcode:2012ESSDD...5.1107L. doi:10.5194/essdd-5-1107-2012.
- "Carbon dioxide passes symbolic mark". BBC. 10 May 2013. Retrieved 27 May 2013.
- Pilita Clark (10 May 2013). "CO2 at highest level for millions of years". Financial Times. Retrieved 27 May 2013. (registration required)
- "Climate scientists discuss future of their field". 7 July 2015.
- Rogner, H.-H., et al., Chap. 1, Introduction, Section: Intensities, in IPCC AR4 WG3 2007.
- NRC (2008). "Understanding and Responding to Climate Change" (PDF). Board on Atmospheric Sciences and Climate, US National Academy of Sciences. p. 2. Retrieved 9 November 2010.
- World Bank (2010). World Development Report 2010: Development and Climate Change. The International Bank for Reconstruction and Development / The World Bank, 1818 H Street NW, Washington, D.C. 20433. doi:10.1596/978-0-8213-7987-5. ISBN 978-0-8213-7987-5. Archived from the original on 5 March 2010. Retrieved 6 April 2010.
- Banuri et al., Chapter 3: Equity and Social Considerations, Section 3.3.3: Patterns of greenhouse gas emissions, and Box 3.1, pp. 92–93 in IPCC SAR WG3 1996.
- Liverman, D.M. (2008). "Conventions of climate change: constructions of danger and the dispossession of the atmosphere" (PDF). Journal of Historical Geography 35 (2): 279–296. doi:10.1016/j.jhg.2008.08.008. Retrieved 10 May 2011.
- Fisher et al., Chapter 3: Issues related to mitigation in the long-term context, Section 3.1: Emissions scenarios: Issues related to mitigation in the long term context in IPCC AR4 WG3 2007.
- Morita, Chapter 2: Greenhouse Gas Emission Mitigation Scenarios and Implications, Section: Emissions and Other Results of the SRES Scenarios, in IPCC TAR WG3 2001.
- Rogner et al., Ch. 1: Introduction, Figure 1.7, in IPCC AR4 WG3 2007.
- IPCC, Summary for Policymakers, Introduction, paragraph 6, in IPCC TAR WG3 2001.
- Prentice et al., Chapter 3: The Carbon Cycle and Atmospheric Carbon Dioxide, Executive Summary, in IPCC TAR WG1 2001.
- Newell, P.J., 2000: Climate for change: non-state actors and the global politics of greenhouse. Cambridge University Press, ISBN 0-521-63250-1.
- Talk of the Nation. "Americans Fail the Climate Quiz". NPR. Retrieved 27 December 2011.
- Shindell, Drew; Faluvegi, Greg; Lacis, Andrew; Hansen, James; Ruedy, Reto; Aguilar, Elliot (2006). "Role of tropospheric ozone increases in 20th-century climate change". Journal of Geophysical Research 111 (D8): D08302. Bibcode:2006JGRD..11108302S. doi:10.1029/2005JD006348.
- Solomon, S; D. Qin; M. Manning; Z. Chen; M. Marquis; K.B. Averyt; M. Tignor; H.L. Miller, eds. (2007). "Surface Radiation". Climate Change 2007: Working Group I: The Physical Science Basis. ISBN 978-0-521-88009-1.
- Hansen, J; Sato, M; Ruedy, R; Lacis, A; Oinas, V (2000). "Global warming in the twenty-first century: an alternative scenario". Proc. Natl. Acad. Sci. U.S.A. 97 (18): 9875–80. Bibcode:2000PNAS...97.9875H. doi:10.1073/pnas.170278997. PMC 27611. PMID 10944197.
- Ramanathan, V.; Carmichael, G. (2008). "Global and regional climate changes due to black carbon". Nature Geoscience 1 (4): 221–227. Bibcode:2008NatGe...1..221R. doi:10.1038/ngeo156.
- V. Ramanathan and G. Carmichael, supra note 1, at 221 (". . . emissions of black carbon are the second strongest contribution to current global warming, after carbon dioxide emissions.") Numerous scientists also calculate that black carbon may be second only to CO2 in its contribution to climate change, including Tami C. Bond & Haolin Sun, Can Reducing Black Carbon Emissions Counteract Global Warming, ENVIRON. SCI. TECHN. (2005), at 5921 ("BC is the second or third largest individual warming agent, following carbon dioxide and methane."); and J. Hansen, A Brighter Future, 53 CLIMATE CHANGE 435 (2002), available at http://pubs.giss.nasa.gov/docs/2002/2002_Hansen_1.pdf (calculating the climate forcing of BC at 1.0±0.5 W/m2).
- Twomey, S. (1977). "Influence of pollution on shortwave albedo of clouds". J. Atmos. Sci. 34 (7): 1149–1152. Bibcode:1977JAtS...34.1149T. doi:10.1175/1520-0469(1977)034<1149:TIOPOT>2.0.CO;2. ISSN 1520-0469.
- Albrecht, B. (1989). "Aerosols, cloud microphysics, and fractional cloudiness". Science 245 (4923): 1227–1239. Bibcode:1989Sci...245.1227A. doi:10.1126/science.245.4923.1227. PMID 17747885.
- IPCC, "Aerosols, their Direct and Indirect Effects", pp. 291–292 in IPCC TAR WG1 2001.
- Ramanathan, V.; Chung, C.; Kim, D.; Bettge, T.; Buja, L.; Kiehl, J. T.; Washington, W. M.; Fu, Q.; Sikka, D. R.; Wild, M. (2005). "Atmospheric brown clouds: Impacts on South Asian climate and hydrological cycle" (Full free text). Proceedings of the National Academy of Sciences 102 (15): 5326–5333. Bibcode:2005PNAS..102.5326R. doi:10.1073/pnas.0500656102. PMC 552786. PMID 15749818.
- Ramanathan, V.; et al. (2008). "Report Summary" (PDF). Atmospheric Brown Clouds: Regional Assessment Report with Focus on Asia. United Nations Environment Programme.
- Ramanathan, V.; et al. (2008). "Part III: Global and Future Implications" (PDF). Atmospheric Brown Clouds: Regional Assessment Report with Focus on Asia. United Nations Environment Programme.
- IPCC, Summary for Policymakers, Human and Natural Drivers of Climate Change, Figure SPM.2, in IPCC AR4 WG1 2007.
- US Environmental Protection Agency (2009). "3.2.2 Solar Irradiance". Volume 3: Attribution of Observed Climate Change. Endangerment and Cause or Contribute Findings for Greenhouse Gases under Section 202(a) of the Clean Air Act. EPA's Response to Public Comments. US Environmental Protection Agency. Archived from the original on June 16, 2011. Retrieved June 23, 2011.
- US NRC 2008, p. 6
- Hegerl, et al., Chapter 9: Understanding and Attributing Climate Change, Frequently Asked Question 9.2: Can the Warming of the 20th century be Explained by Natural Variability?, in IPCC AR4 WG1 2007.
- Simmon, R.; D. Herring (November 2009). "Notes for slide number 7, titled "Satellite evidence also suggests greenhouse gas warming," in presentation, "Human contributions to global climate change"". Presentation library on the U.S. National Oceanic and Atmospheric Administration's Climate Services website. Archived from the original on 3 July 2011. Retrieved 23 June 2011.
- Hegerl et al., Chapter 9: Understanding and Attributing Climate Change, Frequently Asked Question 9.2: Can the Warming of the 20th century be Explained by Natural Variability?, in IPCC AR4 WG1 2007.
- Randel, William J.; Shine, Keith P.; Austin, John; et al. (2009). "An update of observed stratospheric temperature trends". Journal of Geophysical Research 114 (D2): D02107. Bibcode:2009JGRD..11402107R. doi:10.1029/2008JD010421.
- USGCRP 2009, p. 20
- R.S. Bradley; K.R. Briffa; J. Cole; M.K. Hughes; T.J. Osborn (2003). "The climate of the last millennium". In K.D. Alverson; R.S. Bradley; T.F. Pederson. Paleoclimate, global change and the future. Springer. pp. 105–141. ISBN 3-540-42402-4.
- Kaufman, D. S.; Schneider, D. P.; McKay, N. P.; Ammann, C. M.; Bradley, R. S.; Briffa, K. R.; Miller, G. H.; Otto-Bliesner, B. L.; Overpeck, J. T.; Vinther, B. M.; Abbott, M.; Axford, M.; Bird, Y.; Birks, B.; Bjune, H. J. B.; Briner, A. E.; Cook, J.; Chipman, T.; Francus, M.; Gajewski, P.; Geirsdottir, K.; Hu, A.; Kutchko, F. S.; Lamoureux, B.; Loso, S.; MacDonald, M.; Peros, G.; Porinchu, M.; Schiff, D.; Seppa, C.; Seppa, H.; Arctic Lakes 2k Project Members (2009). "Recent Warming Reverses Long-Term Arctic Cooling". Science 325 (5945): 1236–1239. Bibcode:2009Sci...325.1236K. doi:10.1126/science.1173983. PMID 19729653.
- "Arctic Warming Overtakes 2,000 Years of Natural Cooling". UCAR. 3 September 2009. Retrieved 8 June 2011.
- Bello, David (4 September 2009). "Global Warming Reverses Long-Term Arctic Cooling". Scientific American. Retrieved 8 June 2011.
- Mann, M. E.; Zhang, Z.; Hughes, M. K.; Bradley, R. S.; Miller, S. K.; Rutherford, S.; Ni, F. (2008). "Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia". Proceedings of the National Academy of Sciences 105 (36): 13252–7. Bibcode:2008PNAS..10513252M. doi:10.1073/pnas.0805721105. PMC 2527990. PMID 18765811.
- Berger, A. (2002). "CLIMATE: An Exceptionally Long Interglacial Ahead?". Science 297 (5585): 1287–8. doi:10.1126/science.1076120. PMID 12193773.
- Masson-Delmotte V.M.; et al. (2013). "Information from paleoclimate archives". In Stocker T.F. et al. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. pp. 383–464. ISBN 978-1-107-66182-0.
- Jackson, R.; A. Jenkins (17 November 2012). "Vital signs of the planet: global climate change and global warming: uncertainties". Earth Science Communications Team at NASA's Jet Propulsion Laboratory / California Institute of Technology.
- Riebeek, H. (16 June 2011). "The Carbon Cycle: Feature Articles: Effects of Changing the Carbon Cycle". Earth Observatory, part of the EOS Project Science Office located at NASA Goddard Space Flight Center.
- US National Research Council (2003). "Ch. 1 Introduction". Understanding Climate Change Feedbacks. Washington, D.C., USA: National Academies Press., p.19
- Lindsey, R. (14 January 2009). "Earth's Energy Budget (p.4), in: Climate and Earth's Energy Budget: Feature Articles". Earth Observatory, part of the EOS Project Science Office, located at NASA Goddard Space Flight Center.
- US National Research Council (2006). "Ch. 1 Introduction to Technical Chapters". Surface Temperature Reconstructions for the Last 2,000 Years. Washington, D.C., USA: National Academies Press., pp.26-27
- AMS Council (20 August 2012). "2012 American Meteorological Society (AMS) Information Statement on Climate Change". Boston, Massachusetts, USA: AMS.
- Meehl, G.A.; et al. "Ch 10: Global Climate Projections". Sec 10.5.4.6 Synthesis of Projected Global Temperature at Year 2100., in IPCC AR4 WG1 2007
- NOAA (January 2007). "Patterns of greenhouse warming" (PDF). GFDL Climate Modeling Research Highlights (Princeton, New Jersey, USA: The National Oceanic and Atmospheric Administration (NOAA) Geophysical Fluid Dynamics Laboratory (GFDL)) 1 (6)., revision 2 February 2007, 8:50.08 AM.
- NOAA Geophysical Fluid Dynamics Laboratory (GFDL) (9 October 2012). "NOAA GFDL Climate Research Highlights Image Gallery: Patterns of Greenhouse Warming". NOAA GFDL.
- IPCC, Glossary A-D: "Climate Model", in IPCC AR4 SYR 2007.
- Karl, TR; et al., eds. (2009). "Global Climate Change". Global Climate Change Impacts in the United States. Cambridge University Press. ISBN 978-0-521-14407-0.
- Schaefer, Kevin; Zhang, Tingjun; Bruhwiler, Lori; Barrett, Andrew P. (2011). "Amount and timing of permafrost carbon release in response to climate warming". Tellus Series B 63 (2): 165–180. Bibcode:2011TellB..63..165S. doi:10.1111/j.1600-0889.2011.00527.x.
- Hansen, James (2000). "Climatic Change: Understanding Global Warming". In Robert Lanza. One World: The Health & Survival of the Human Species in the 21st century. Health Press (New Mexico). pp. 173–190. ISBN 0-929173-33-3. Retrieved 18 August 2007.
- Stocker et al., Chapter 7: Physical Climate Processes and Feedbacks, Section 7.2.2: Cloud Processes and Feedbacks, in IPCC TAR WG1 2001.
- Torn, Margaret; Harte, John (2006). "Missing feedbacks, asymmetric uncertainties, and the underestimation of future warming" (PDF). Geophysical Research Letters 33 (10): L10703. Bibcode:2006GeoRL..3310703T. doi:10.1029/2005GL025540. Retrieved 4 March 2007.
- Harte, John; Saleska, Scott; Shih, Tiffany (2006). "Shifts in plant dominance control carbon-cycle responses to experimental warming and widespread drought". Environmental Research Letters 1 (1): 014001. Bibcode:2006ERL.....1a4001H. doi:10.1088/1748-9326/1/1/014001. Retrieved 2 May 2007.
- Scheffer, Marten; Brovkin, Victor; Cox, Peter (2006). "Positive feedback between global warming and atmospheric CO2 concentration inferred from past climate change" (PDF). Geophysical Research Letters 33 (10): L10702. Bibcode:2006GeoRL..3310702S. doi:10.1029/2005gl025044. Retrieved 4 May 2007.
- Randall et al., Chapter 8, Climate Models and Their Evaluation, Sec. FAQ 8.1 in IPCC AR4 WG1 2007.
- IPCC, Technical Summary, p. 54, in IPCC TAR WG1 2001.
- Stroeve, J.; et al. (2007). "Arctic sea ice decline: Faster than forecast". Geophysical Research Letters 34 (9): L09501. Bibcode:2007GeoRL..3409501S. doi:10.1029/2007GL029703.
- Wentz, F.J.; et al. (2007). "How Much More Rain Will Global Warming Bring?". Science 317 (5835): 233–5. Bibcode:2007Sci...317..233W. doi:10.1126/science.1140746. PMID 17540863.
- Liepert, Beate G.; Previdi, Michael (2009). "Do Models and Observations Disagree on the Rainfall Response to Global Warming?". Journal of Climate 22 (11): 3156–3166. Bibcode:2009JCli...22.3156L. doi:10.1175/2008JCLI2472.1.
Three times the first of three consecutive odd integers is 3 more than twice the third. What is the third integer?
The number system defines the numbers from negative infinity to positive infinity. It is easily represented on a number line, on which integers, whole numbers, and natural numbers can all be placed. The number line contains positive numbers, negative numbers, and zero.
What is an Equation?
An equation is a mathematical statement that connects two algebraic expressions of equal value with an ‘=’ sign.
For example: In equation 3x+2 = 5, 3x+ 2 is the left-hand side expression and 5 is the right-hand side expression connected with the ‘=’ sign.
There are mainly 3 types of equations:
- Linear Equation
- Quadratic Equation
- Polynomial Equation
Here, we will study the Linear equations.
Linear equations in one variable are equations that can be written as ax + b = 0, where a and b are real numbers (with a ≠ 0) and x is a variable, and there is only one solution. 3x+2=5, for example, is a linear equation with only one variable, and its only solution is x = 1. A linear equation in two variables, on the other hand, has infinitely many solutions.
A one-variable linear equation is one with a maximum of one variable of order one. The formula is ax + b = 0, using x as the variable.
There is just one solution to this equation. Here are a few examples:
- 4x = 8
- 5x + 10 = -20
- 1 + 6x = 11
Linear equations in one variable are written in standard form as:
ax + b = 0
- The numbers ‘a’ and ‘b’ are real.
- The coefficient ‘a’ is not equal to zero (‘b’ may be zero).
Solving Linear Equations in One Variable
The steps for solving an equation with only one variable are as follows:
Step 1: If there are any fractions, clear them by multiplying both sides by the LCM of the denominators.
Step 2: Simplify both sides of the equation.
Step 3: Isolate the variable on one side of the equation.
Step 4: Check your answer by substituting it back into the original equation.
Problem Statement: Three times the first of three consecutive odd integers is 3 more than twice the third. What is the third integer?
Let the three consecutive odd integers be num-2, num, num+2, where num is an odd integer.
According to the problem statement, three times the first of the three consecutive odd integers is 3 more than twice the third, i.e.
3*(num-2) = 2*(num+2) + 3
To get the numbers, we solve this linear equation using the steps above:
3*(num-2) = 2*(num+2) + 3
3*num – 6 = 2*num + 4 + 3
3*num – 6 = 2*num + 7
3*num -2*num = 7 + 6
num = 13
So, the value of num is 13 i.e. the second integer.
First integer is num – 2 i.e. 13 – 2 = 11.
Third integer is num + 2 i.e. 13 + 2 = 15.
So, 11, 13, and 15 are the three consecutive odd integers.
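For readers who like to verify such answers by machine, here is a minimal Python sketch (ours, not part of the original solution) that brute-forces the same condition:

```python
# Find three consecutive odd integers num-2, num, num+2 such that
# three times the first is 3 more than twice the third.
for num in range(3, 100, 2):  # odd candidates for the middle integer
    first, third = num - 2, num + 2
    if 3 * first == 2 * third + 3:
        print(first, num, third)  # -> 11 13 15
        break
```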
Problem 1: Two times the first number is equal to three times the second number and the sum of both numbers is 5. Find the numbers.
Solution: Let the two numbers be num1 and num2.
According to the problem statement,
Two times the first number is equal to three times the second number i.e.
2*num1 = 3*num2 (eq -1)
Also, Sum of both numbers is 5 i.e.
num1 + num2 = 5 (eq -2)
To get the numbers, we have to solve these equations i.e.
Now, solving the equation using the above steps:
Taking eq-1:
2*num1 = 3*num2
num1 = (3*num2) / 2
Taking eq-2 i.e.
num1 + num2 = 5
Now put the result of 1st equation i.e. num1 = (3*num2)/2 in 2nd equation i.e.
(3*num2)/2 + num2 = 5
(3*num2 + 2*num2 ) / 2 = 5
3*num2 +2*num2 = 5 * 2
5*num2 = 10
i.e. num2 = 10/5 i.e. 2
So, the value of num2 is 2 and using this the value of num1 is 5-num2 = 5-2 = 3.
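The same substitution can be checked in a few lines of Python (a quick sketch using the variable names from the solution above):

```python
# eq-1: 2*num1 = 3*num2; eq-2: num1 + num2 = 5.
# Substituting num1 = (3*num2)/2 into eq-2 gives (5/2)*num2 = 5.
num2 = 5 / (5 / 2)   # -> 2.0
num1 = 3 * num2 / 2  # -> 3.0
assert 2 * num1 == 3 * num2 and num1 + num2 == 5
print(num1, num2)    # -> 3.0 2.0
```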
Problem 2: The sum of four consecutive numbers is 18, find the numbers.
Let the four consecutive numbers be x, x+1, x+2, x+3 respectively.
So, according to the problem statement:
x + x+1 + x+2 + x+3 = 18
Using this equation we can get the value of x i.e. the first number
4x + 6 = 18
4x = 18-6
4x = 12
x = 12/4
x = 3
So, the numbers should be 3, 4, 5, and 6. |
Basic Numeracy Test
The numeracy test is the most basic digital aptitude test and is designed to assess the test-taker's ability to manipulate basic mathematical concepts without the help of a calculator.
The various numerical cognition tests generally comprise the following arithmetic concepts:
|Percentages||Number series (sequences)|
|Basic calculation operations||Proportions and ratios|
|Conversion of units||Equations|
A number is a percentage if it can be expressed as a fraction with a denominator of 100 or as a decimal. For example, a percentage of 30 percent can be expressed as any of the following:
- 30 %
- 30/100
- 0.30
There are a number of need-to-know percentages:
- 100% corresponds to 1 or the entirety of the quantity in question.
- 50% corresponds to 0.5 or half.
- 25% corresponds to 0.25 or a quarter.
In order to calculate the percentage of a certain quantity, you must simply multiply it by the percentage, expressed as a fraction or as a decimal. For example, 30% of 80 euros is calculated as: 0.30 x 80 = 24 euros.
A typical percentage question example:
48% of the 725 students are girls. The number of girls is:
The correct answer is answer A.
Taking 48% of a quantity is equivalent to multiplying the amount by 0.48, so:
0.48 x 725 = 348
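The same computation is a one-liner in any programming language; here is a small Python sketch (the helper name percentage_of is ours):

```python
def percentage_of(percent, amount):
    """Return percent % of amount."""
    return percent / 100 * amount

print(percentage_of(48, 725))  # -> 348.0 (the number of girls)
print(percentage_of(30, 80))   # -> 24.0 (30% of 80 euros)
```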
Basic calculation operations
In order to add two (or more) numbers, the units digits and the tens digits are added successively, taking into account a possible carry. The hundreds digits are then added, also taking into account any carry from the tens digits. The addition is continued in the same manner, depending on the number of digits in each number.
In order to add decimal numbers, you must first add the digits to the right of the decimal point and then the digits to the left of it.
To multiply a number by 10; 100; 1,000; etc., append to the number as many zeros as the multiplier has, or shift the decimal point to the right by as many digits as there are zeros in the multiplier.
25 x 10 = 250
75 x 1000 = 75000
12.25 x 10 = 122.5
To multiply a number by 5; 50; etc., divide the number by 2 and multiply the result by 10; 100; etc.
64 x 5 = (64 / 2) x 10 = 32 x 10 = 320
Therefore: to divide a number by 5; 50; etc., divide the number by 10; 100; etc., and multiply the result by 2.
64 / 5 = (64 / 10) x 2 = 6.4 x 2 = 12.8
To multiply a number by 0.05; 0.005; etc., divide the number by 2 and then divide the result by 10; 100; etc.
64 x 0.05 = (64 / 2) / 10 = 32 / 10 = 3.2
Therefore: to divide a number by 0.05; 0.005; etc., multiply the number by 2, then multiply the result by 10; 100; etc.
64 / 0,05 = (64 x 2) x 10 = 128 x 10 = 1280
To multiply a number by 25, divide it by 4 and multiply the result by 100.
32 x 25 = (32 / 4) x 100 = 800
To multiply a number by 2.5; divide it by 4 and multiply the result by 10.
32 x 2.5 = (32 / 4) x 10 = 80
Therefore: to divide a number by 25, multiply it by 4 and then divide the result by 100.
32 / 25 = (32 x 4) / 100 = 1.28
To multiply a number by 101, 1001, etc., multiply the number by 100, 1,000, etc., and add the number to the result.
25 x 101 = 2500 + 25 = 2525
In order to multiply a number by 9; 99; etc., you must simply multiply the number by 10; 100; etc. (depending on the number of digits), and subtract the original number from the result.
25 x 99 = 2500 - 25 = 2475
To multiply a number by 0.20, divide the number by 5; in line with the rule for dividing by 5 above, this means dividing the number by 10 and then multiplying the result by 2.
25 x 0.20 = 2.5 x 2 = 5
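These shortcuts are easy to verify mechanically. The following Python sketch (ours, not part of the test material) asserts several of the worked examples above:

```python
# Each shortcut should agree with the direct calculation.
assert 64 * 5 == (64 / 2) * 10 == 320
assert 64 / 5 == (64 / 10) * 2 == 12.8
assert 32 * 25 == (32 / 4) * 100 == 800
assert 25 * 101 == 25 * 100 + 25 == 2525
assert 25 * 99 == 25 * 100 - 25 == 2475
print("all shortcuts check out")
```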
|A number is divisible by 5 if the number of units is 0 or 5.||750/5 = 150|
|A number is divisible by 3 if the sum of the digits is divisible by 3.||534 → 5 + 3 + 4 = 12|
|A number is divisible by 4 if the number formed by the last 2 digits is divisible by 4.||1612, 54760,...|
|A number is divisible by 6 if it is divisible by 2 and 3.||72/6 = 12|
72/2 = 36
72/3 = 24
|A number is divisible by 9 if the sum of the digits is divisible by 9.||84321 → 8 + 4 + 3 + 2 + 1 = 18|
|A number is divisible by 10 if the number's last digit is a zero.||532985920|
|A number is divisible by 11 if the sum of its digits in even positions subtracted from the sum of its digits in odd positions is zero or a multiple of 11.||13574:|
3 + 7 = 10
(1 + 5 + 4) = 10; 10 - 10 = 0
So 13574 is divisible by 11 (13574 = 11 x 1234).
|A number is divisible by 20 if it ends in 00-20-40-60-80.||380, 40348260,...|
|A number is divisible by 25 if it ends in 00-25-50-75.||525, 89504350,...|
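Rules like the divisibility-by-11 test translate directly into code. Here is a short Python sketch (ours) implementing the alternating-digit-sum rule:

```python
def divisible_by_11(n):
    """Test divisibility by 11 via alternating digit sums."""
    digits = [int(d) for d in str(n)]
    odd_positions = sum(digits[0::2])   # 1st, 3rd, 5th, ... digit
    even_positions = sum(digits[1::2])  # 2nd, 4th, ... digit
    return (odd_positions - even_positions) % 11 == 0

print(divisible_by_11(13574))  # -> True (13574 = 11 x 1234)
print(divisible_by_11(13575))  # -> False
```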
Order of Operations
In mathematics, the order of operations is a collection of rules that reflect conventions that establish the order in which one must operate when seeking the value of a chain of operations.
The order is as follows:
- Parentheses (brackets) (P)
- Exponents (E)
- Multiplication and division (MD)
- Addition and subtraction (AS)
An easy way to remember this order is by way of the mnemonic PEMDAS.
28 – 3 (12 ÷ 4) – 4²
= 28 – 3 × 3 – 4²
= 28 – 3 × 3 – 16
= 28 – 9 – 16
= 28 – 25
= 3
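Python follows the same precedence convention, so the expression can be typed in directly as a check (a one-line sketch):

```python
# Parentheses first, then the exponent, then multiplication, then subtraction.
print(28 - 3 * (12 / 4) - 4 ** 2)  # -> 3.0
```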
Components of a fraction
A fraction a/b is composed of a numerator (a) and a denominator (b).
It is important to remember that there are several ways to represent the same fraction. For instance, fractions 1/2 and 2/4 are completely equivalent. How do we switch from one fraction representation to another, while maintaining equivalence?
A fraction remains equivalent if the numerator and denominator are multiplied or divided by the same number.
Simplification of fractions
A fraction is written in simplified form if the numerator and denominator have no common factors. In other words, when a number is portrayed as a simplified fraction, it is impossible to find a number that is a divisor of both the numerator and the denominator.
The fraction 120/200 is not written in simplified form, since there are numbers that both 120 and 200 are divisible by. The greatest common divisor (factor) of 120 and 200 is 40, hence: 120/200 = (120 ÷ 40)/(200 ÷ 40) = 3/5.
Since the numerator and denominator are divided by the same number (40), the fraction 3/5 is equivalent to 120/200. In turn, 3/5 is the simplified form of 120/200, since no common division factor exists between 3 and 5.
Simplification can be achieved in several steps in cases where the greatest common factor between the numerator and the denominator is not easily recognized.
Adding and subtracting fractions
The rule of addition and subtraction of fractions is only applicable if both fractions have the same denominator, which is not generally the case in numerical exams. The fractions must first be converted into equivalent fractions with a common denominator.
For example, fractions with denominators 3 and 5 cannot be added together until they have been rewritten with a common denominator. The smallest common multiple of 3 and 5 is 15; 15 will therefore be the common denominator.
Multiplication of two fractions
Unlike in the case of addition and subtraction, denominators do not need to be common in order to multiply fractions: multiply the numerators together and the denominators together.
Division of two fractions
To divide by a fraction, multiply by its reciprocal. This rule allows you to transform a division exercise into multiplication form, thus allowing for an easier solution.
Working with fractions does not change the priority of operations.
An integer can always be written as a fraction if an operation is to be performed between it and a fraction.
Avoid working with mixed numbers; transform them into simple fractions instead.
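Python's standard fractions module applies all of these rules automatically, which makes it a convenient way to check work by hand. A short sketch (the example fractions are ours):

```python
from fractions import Fraction

print(Fraction(120, 200))               # -> 3/5 (automatic simplification)
print(Fraction(1, 3) + Fraction(1, 5))  # -> 8/15 (common denominator 15)
print(Fraction(1, 2) * Fraction(2, 4))  # -> 1/4 (multiply across)
print(Fraction(1, 2) / Fraction(3, 4))  # -> 2/3 (multiply by the reciprocal)
```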
Conversion of units
Below are several tables representing the different units (length, mass, time, etc.).
Conversions by power of ten (multiples of one unit):
|gigametre||Gm||10⁹ m||1 000 000 000 m|
|megametre||Mm||10⁶ m||1 000 000 m|
|kilometre||km||10³ m||1 000 m|
|hectometre||hm||10² m||100 m|
|decametre||dam||10¹ m||10 m|
|decimetre||dm||10⁻¹ m||0.1 m|
|centimetre||cm||10⁻² m||0.01 m|
|millimetre||mm||10⁻³ m||0.001 m|
|micrometre||µm||10⁻⁶ m||0.000 001 m|
|nanometre||nm||10⁻⁹ m||0.000 000 001 m|
|1 ton||t||10⁶ g||1 000 000 g|
|1 kilogram||kg||10³ g||1 000 g|
|1 gram||g||1 g||1 g|
|1 milligram||mg||10⁻³ g||0.001 g|
|1 microgram||µg||10⁻⁶ g||0.000 001 g|
|1 nanogram||ng||10⁻⁹ g||0.000 000 001 g|
Time measurement units
|1 millennium||1000 years|
|1 century||100 years|
|1 decade||10 years|
|1 lustrum||5 years|
|1 year||365 days|
|1 week||7 days|
|1 day||24 hours|
|1 hour||60 minutes or 3600 seconds|
|1 minute||60 seconds|
Conversion of surface and volume units
Surface units change by a factor of 100 at each step; for example, 1 m² equals:
|1 000 000 mm²||10 000 cm²||100 dm²||1 m²||0.01 dam²||0.0001 hm²||0.000001 km²|
Volume units change by a factor of 1 000 at each step; for example:
|1 mm³||10⁻⁹ m³|
Laws of powers
a⁰ = 1
a¹ = a
a⁻¹ = 1/a
aⁿ = a × a × ... × a (n factors)
a⁻ⁿ = 1/aⁿ
To save time during a numeracy test, we recommend you memorize the following chart:
|0² = 0||1² = 1||2² = 4||3² = 9||4² = 16|
|5² = 25||6² = 36||7² = 49||8² = 64||9² = 81|
|10² = 100||11² = 121||12² = 144||13² = 169||14² = 196|
|15² = 225||16² = 256|
Number series questions are frequently used in numeracy tests. They make it possible to evaluate the candidate's ability to understand numerical logic as well as to evaluate his or her potential to complete mental calculations.
The number series may combine several basic computational operations (addition, subtraction, etc.), or correspond to certain logic using number properties (even, odd, prime, ...).
The following are examples of series that may appear in numeracy tests:
|General Series||Even number series||2, 4, 6, 8, 10,12,...|
|Series of odd numbers||1, 3, 5, 7, 9,11,13,...|
|Prime numbers||2, 3, 5, 7, 11 ,13,17,...|
|Multiples of 3 (or any other number)||3, 6, 9, 12, 15,18,...|
Arithmetic series: a series of numbers where we move from one term to the next by always adding the same number (the common difference)
|Common difference 3 (i.e. +3)||2, 5, 8, 11, 14, 17,...|
|Common difference -3 (i.e. -3)||17, 14, 11, 8, 5, 2,...|
Geometric series: a series of numbers where we move from one term to the next by always multiplying by the same number (the common ratio)
|Common ratio 4 (i.e. x4)||2, 8, 32, 128, 512,...|
|Common ratio 1/4 (i.e. ÷4)||512, 128, 32, 8, 2,...|
|Series with operations between numbers||Each term is the sum of the two previous terms||1, 2, 3, 5, 8, 13, 21,...|
|Interleaved series||Two series in one||2, 3, 4, 6, 6, 9, 8, 12, 10,...|
|Several operations||Alternating two operations (+3, x2)||1, 4, 8, 11, 22, 25, 50,...|
|Alternating three operations (+2, x3, -1)||1, 3, 9, 8, 10, 30, 29,...|
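Series built by alternating operations are easy to generate programmatically. The following Python sketch (the function name and operations are ours) reproduces the (+3, x2) series from the table:

```python
def alternating_series(start, ops, length):
    """Build a series by cycling through a list of operations."""
    term, series = start, [start]
    for i in range(length - 1):
        term = ops[i % len(ops)](term)
        series.append(term)
    return series

print(alternating_series(1, [lambda x: x + 3, lambda x: x * 2], 7))
# -> [1, 4, 8, 11, 22, 25, 50]
```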
Proportions & Ratios
Proportionality: In mathematics, two sets of numbers are said to be proportional when one can move from one set to the other by multiplying or dividing by the same non-zero constant.
Rule of three or proportionality rule: This rule allows an unknown fourth value to be calculated from 3 given numbers: if a corresponds to b, then c corresponds to x = (b x c) / a.
A simple example:
|Tomato weight (kg)||1||2||3||4||5|
The proportionality coefficient is 2.
Example of a test question:
|Carrier from "Kuehne + Nagel"|
Number of liters per 100 km
|Number of km per day|
|Container 20''||Container 40''|
How many more litres of fuel does truck B consume with the 40'' container than with the 20'' container?
The correct answer is answer C.
In order to solve the question, it is first necessary to create an equation using the rule of three, in order to determine the number of litres consumed by truck B with the 20" container in one day:
Truck B therefore consumes 20.4 litres of fuel per day with the 20" container.
The same calculation must then be carried out for the 40" container:
Truck B therefore consumes 25.2 litres of fuel per day with the 40" container.
→ 25.2 litres - 20.4 litres = 4.8 litres
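The rule of three itself fits in one line of code. Below is a minimal Python sketch; since the original consumption table is not reproduced above, the 12 litres/100 km and 170 km/day figures are illustrative values of ours that happen to give the 20.4-litre result:

```python
def rule_of_three(a, b, c):
    """If a corresponds to b, then c corresponds to (b * c) / a."""
    return b * c / a

# Illustrative: a truck using 12 litres per 100 km, driving 170 km per day.
print(rule_of_three(100, 12, 170))  # -> 20.4 litres per day
```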
A ratio is a quantitative relationship between two numbers that describes how many times one value may contain another. Ratios are used intuitively in everyday life, such as in cooking recipes, or to calculate the size of computer and smartphone monitors.
Conversion of the best known decimal numbers into ratios:
- 0.1 = 1:10 or 1 for 10
- 0.2 = 1:5 or 1 for 5
- 0.25 = 1:4 or 1 for 4
- 0.33 = 1:3 or 1 for 3
- 0.5 = 1:2 or 1 for 2
Example of a simple ratio exercise that may appear in basic numeracy tests:
In an engine, an oil-to-fuel ratio of 3:100 is required. If 200 ml of gasoline is poured into the engine, how much oil should be added?
- 8 ml
- 5 ml
- 6 ml
- 15 ml
The correct answer is answer C.
In this type of exercise, you may use intuition in order to answer the question, due to the fact that it is enough to double the values in order to find the right answer. In more complicated exercises, it will be necessary to perform a certain calculation to find the correct value.
More complex example:
What is the mortality rate ratio between the 1950s and 1990s?
[Table: crude birth rate, crude mortality rate, and natural growth rate (%) by decade; the crude mortality rate is 24% for the 1950s and 18% for the 1990s.]
The correct answer is answer B.
Answering the above question necessitates calculation in order to find the right answer. First, divide the two requested mortality rates, i.e. 24/18, which is equal to 1.3333. In order to transform this number into a ratio (ideally, this conversion should be memorized, as it is often used), multiply it by a number that turns it into an integer. For this example: 1.3333 x 3 ≈ 4, so the ratio is 4:3, answer B.
A mathematical or quantitative problem is defined as a question that can be solved using the elements given in the statement. The questions generally consist of a set of information presented in various forms (text, table, drawing, etc.), and in order to be solved, require the use of mathematical concepts and/or tools.
In a numeracy test, the problem presented can usually be solved intuitively or with basic mathematical concepts as presented above.
Two trucks were driven over a distance of 1,680 kilometres (km). The first truck travelled an average of 14 km per litre of fuel during the trip and the second truck travelled an average of 12 km per litre. How many more litres of gasoline did the second truck consume than the first?
- It is not possible to say this with the information provided.
The correct answer is answer B.
In order to solve the problem, it is necessary to calculate the total number of litres used for each truck, and then calculate the difference between them.
Truck A: 1680/14 = 120 litres
Truck B: 1680/12 = 140 litres
140 - 120 = 20, answer B.
A book is available at the local bookstore in digital version (eBook) for 5.90 Euros and paperback for 13.40 Euros. The book can also be obtained from Amazon with a 15% discount, plus 2.50 Euros for shipping costs. What is the price difference between the digital version and the paperback version ordered on Amazon?
- 8.14 Euros
- 9.83 Euros
- 6.99 Euros
- 7.45 Euros
- 7.99 Euros
The correct answer is answer E.
In order to calculate the price difference between the two book formats, it is first necessary to calculate the book price, whilst taking the discount and shipping costs into account.
13.40 x 15/100 = 2.01
13.40 - 2.01 + 2.50 = 13.89
13.89 - 5.90 = 7.99
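The arithmetic can be mirrored in a few lines of Python (a sketch; the variable names are ours):

```python
ebook = 5.90
paperback = 13.40
discount = paperback * 15 / 100                 # 15% of 13.40 -> 2.01
amazon_paperback = paperback - discount + 2.50  # discounted price plus shipping -> 13.89
print(round(amazon_paperback - ebook, 2))       # -> 7.99
```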
What is an equation?
An equation is composed of known numbers and one or more unknown values (numbers whose value is not yet known), together with the information needed in order to calculate their value.
Example of a simple equation:
3x = x + 72
3x - x = 72
x (3 - 1) = 72
2x = 72
x = 72/2
x = 36
Transforming the equation into a verbal question may simplify understanding of the question. When transformed, the above equation reads: I am an unknown number. I am equal to one-third of my number plus 24. What am I?
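The rearrangement steps can also be checked numerically; a minimal Python sketch:

```python
# 3x = x + 72  ->  (3 - 1)x = 72  ->  x = 72 / 2
x = 72 / (3 - 1)
assert 3 * x == x + 72
print(x)  # -> 36.0
```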
Remarkable identities: Out of all the basic formulas that exist, there are three we highly recommend you learn, as they greatly facilitate certain calculations.
The first is: (a + b)² = a² + 2ab + b²
The second is: (a – b)² = a² - 2ab + b²
The third is: a² – b² = (a + b)(a - b)
These formulas can be applied in both directions.
Example: (2 + 3)² = 4 + 12 + 9 = 25
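The identities are easy to spot-check numerically; here is a small Python sketch (the test values are ours):

```python
# Verify the three remarkable identities for a couple of value pairs.
for a, b in [(2, 3), (7, 5)]:
    assert (a + b) ** 2 == a**2 + 2*a*b + b**2
    assert (a - b) ** 2 == a**2 - 2*a*b + b**2
    assert a**2 - b**2 == (a + b) * (a - b)
print("identities hold")
```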
Here are some summary tables concerning roots:
For which jobs is the numeracy test used?
The numeracy test is used for non-managerial positions requiring effective mental calculation skills in any situation, including jobs where the candidate is required to use his or her knowledge on a daily basis and under pressure. Examples of occupations requiring these skills:
- Sales-related professions
- Bus and train drivers
- Technical assistants (help desk support)
- Customer Services
- Nurses, midwives and paramedics
- Soldiers and other military personnel
- Steward and flight attendant
- Prison supervisor
- And many more.
The main publishers of numeracy tests
|SHL||SHL Verify Checking & Calculation Tests|
|PSI Online / SHL||Math Problem Solving|
|cut-e||scales eql - Numeracy|
|Criteria||Criteria Basic Skills Test (CBST)|
|Wonderlic||Wonderlic Basic Skills Test (WBST)|
|Thomas International||Test d'Intelligence globale (TIG)|
|Saville Consulting||Administrative Numerical Comprehension|
|Saville Consulting||Customer Numerical Comprehension|
|Saville Consulting||Commercial Numerical Comprehension|
|Saville Consulting||Operational Numerical Comprehension|
|Criterion||B2C Numerical Ability Test|
|Criterion||CWS Numerical Ability Test|
|Selective Hiring||Basic Math Skills|
|HRdirect||SkillSeries Math Test|
|TAFE SA||TABE Numeracy Test| |
How do you change those funny numbers and letters to something you or your computer can understand? Converting hexadecimal to binary is very easy, which is why hexadecimal has been adopted in some programming languages. Converting to decimal is a little more involved, but once you've got it it's easy to repeat for any number.
Understanding Hexadecimal Basics
1. Know how to use hexadecimal. Our ordinary decimal counting system is base ten, using ten different symbols to display numbers. Hexadecimal is a base sixteen number system, meaning it uses sixteen characters to display numbers.
- Counting from zero upward:
| Hexadecimal | Decimal | Hexadecimal | Decimal |
|---|---|---|---|
| 0 | 0 | F | 15 |
| 1 | 1 | 10 | 16 |
| 2 | 2 | 11 | 17 |
| 3 | 3 | 12 | 18 |
| 4 | 4 | 13 | 19 |
| 5 | 5 | 14 | 20 |
| 6 | 6 | 15 | 21 |
| 7 | 7 | 16 | 22 |
| 8 | 8 | 17 | 23 |
| 9 | 9 | 18 | 24 |
| A | 10 | 19 | 25 |
| B | 11 | 1A | 26 |
| C | 12 | 1B | 27 |
| D | 13 | 1C | 28 |
| E | 14 | 1D | 29 |
2. Use subscripts to show which system you're using. Whenever it might be unclear which system you're using, write the base as a subscript after the number. For example, 17₁₀ means "17 in base ten" (an ordinary decimal number), and 17₁₀ = 11₁₆, or "11 in base sixteen" (hexadecimal). You can skip this if your number has an alphabetic character in it, such as B or E. No one will mistake that for a decimal number.
Converting Hexadecimal to Binary
1. Convert each hexadecimal digit to four binary digits. Hexadecimal was adopted in the first place because it's so easy to convert between the two. Essentially, hexadecimal is used as a way to display binary information in a shorter string. This chart is all you need to convert from one to the other:
| Hexadecimal | Binary | Hexadecimal | Binary |
|---|---|---|---|
| 0 | 0000 | 8 | 1000 |
| 1 | 0001 | 9 | 1001 |
| 2 | 0010 | A | 1010 |
| 3 | 0011 | B | 1011 |
| 4 | 0100 | C | 1100 |
| 5 | 0101 | D | 1101 |
| 6 | 0110 | E | 1110 |
| 7 | 0111 | F | 1111 |
2. Try it yourself. It really is as simple as changing each digit into the four equivalent binary digits. Here are a few hex numbers for you to convert; the answers are shown to the right of the equals sign:
- A23 = 1010 0010 0011
- BEE = 1011 1110 1110
- 70C558 = 0111 0000 1100 0101 0101 1000
3. Understand why this works. In the "base two" binary system, n binary digits can be used to represent 2ⁿ different numbers. For example, with four binary digits, you can represent 2⁴ = 16 different numbers. Since hexadecimal is a base sixteen system, a one digit number can be used to represent 16¹ = 16 different numbers. This makes conversion between the two systems extremely easy.
- You can also think of this as the counting systems "flipping over" to another digit at the same time. Hexadecimal counts "...D, E, F, 10" at the same time binary counts "1101, 1110, 1111, 10000".
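In code, the digit-by-digit expansion looks like this. A small Python sketch; the helper name is ours:

```python
# Expand each hex digit to its 4-bit binary equivalent, as in the chart.
def hex_to_binary(hex_string):
    return ' '.join(format(int(digit, 16), '04b') for digit in hex_string)

print(hex_to_binary('A23'))     # 1010 0010 0011
print(hex_to_binary('BEE'))     # 1011 1110 1110
print(hex_to_binary('70C558'))  # 0111 0000 1100 0101 0101 1000
```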
Converting Hexadecimal to Decimal
1. Review how base ten works. You use decimal notation every day without having to stop and think about the meaning, but when you first learned it, your parent or teacher might have explained it to you in more detail. A quick review of how ordinary numbers are written will help you convert the number:
- Each digit in a decimal number is in a certain "place." Moving from right to left, there's the "ones place," "tens place," "hundreds place," and so on. The digit 3 just means 3 if it's in the ones place, but it represents 30 when located in the tens place, and 300 in the hundreds place.
- To put it mathematically, the "places" represent 10⁰, 10¹, 10², and so on. This is why this system is called "base ten," or "decimal" after the Latin word for "tenth."
2. Write a decimal number as an addition problem. This will probably seem obvious, but it's the same process we'll use to convert a hexadecimal number, so it's a good starting point. Let's rewrite the number 480,137₁₀. (Remember, the subscript 10 tells us the number is written in base ten.):
- Starting with the rightmost digit, 7 = 7 x 10⁰, or 7 x 1
- Moving left, 3 = 3 x 10¹, or 3 x 10
- Repeating for all digits, we get 480,137 = 4x100,000 + 8x10,000 + 0x1,000 + 1x100 + 3x10 + 7x1.
3. Write the place values next to a hexadecimal number. Since hexadecimal is base sixteen, the "place values" correspond to the powers of sixteen. To convert to decimal, multiply each place value by the corresponding power of sixteen. Start this process by writing the powers of sixteen next to the digits of a hexadecimal number. We'll do this for the hexadecimal number C921₁₆. Start on the right with 16⁰, and increase the exponent each time you move left to the next digit:
- 1₁₆ = 1 x 16⁰ = 1 x 1 (All numbers are in decimal except where noted.)
- 2₁₆ = 2 x 16¹ = 2 x 16
- 9₁₆ = 9 x 16² = 9 x 256
- C = C x 16³ = C x 4096
4. Convert alphabetic characters to decimal. Numerical digits are the same in decimal or hexadecimal, so you don't need to change them (for instance, 7₁₆ = 7₁₀). For alphabetic characters, refer to this list to change them to the decimal equivalent:
- A = 10
- B = 11
- C = 12 (We'll use this on our example from above.)
- D = 13
- E = 14
- F = 15
5. Perform the calculation. Now that everything is written in decimal, perform each multiplication problem and add the results together. A calculator will be handy for most hexadecimal numbers. Continuing our example from earlier, here's C921 rewritten as a decimal formula and solved:
- C921₁₆ = (in decimal) (1 x 1) + (2 x 16) + (9 x 256) + (12 x 4096)
- = 1 + 32 + 2,304 + 49,152
- = 51,489₁₀. The decimal version will usually have more digits than the hexadecimal version, since hexadecimal can store more information per digit.
6. Practice the conversion. Here are a few numbers to convert from hexadecimal into decimal. Once you've worked out the answer, check it against the value to the right of the equals sign:
- 3AB₁₆ = 939₁₀
- A1A1₁₆ = 41,377₁₀
- 5000₁₆ = 20,480₁₀
- 500D₁₆ = 20,493₁₀
- 18A2F₁₆ = 100,911₁₀
- Long hexadecimal numbers may require a calculator to convert to decimal. You can also skip the work and have an online converter do it for you, although it's a good idea to understand how the process works.
- You can adapt the "hexadecimal to decimal" conversion to convert any other base x numbering system to decimal. Just replace the powers of sixteen with powers of x instead. Try learning the base-60 Babylonian counting system!
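The place-value method translates directly into a short loop. In this sketch (the helper name is ours), the result is compared against Python's built-in int(s, 16), which performs the same conversion:

```python
# Place-value method: shift the running total one hex place left,
# then add the next digit.
def hex_to_decimal(hex_string):
    value = 0
    for digit in hex_string:
        value = value * 16 + int(digit, 16)
    return value

for number in ['3AB', 'A1A1', '5000', '500D', '18A2F']:
    assert hex_to_decimal(number) == int(number, 16)
    print(number, '=', hex_to_decimal(number))  # 939, 41377, 20480, 20493, 100911
```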
An expression consisting of constants, variables and exponents, combined using addition, subtraction and multiplication, is known as a polynomial expression.
For example, 5x² + 9x + 12 is a polynomial expression; its highest exponent is 2.
An expression which contains a square root (or other root) is known as a radical expression.
For example: √6, ∛(x + 2), ⁸√16 and 6 + ∛12 are all radical expressions.
The symbol '√' is used to represent the square root, or nth root, of an expression.
By extension, an equation in which a variable appears inside a radicand is known as a radical equation.
Here we will be exploring polynomials and radical expressions.
Suppose we have a radical expression in which the value of 'x' is 3 and the value of 'y' is 9, and we have to evaluate it.
To evaluate such an expression we follow some steps:
Step 1: First substitute the values of 'x' and 'y' into the given radical expression.
Step 2: Then find the positive square root.
Step 3: Then simplify the expression.
Now we will see how to evaluate a polynomial expression.
Suppose we have 3x² - 5x + 3 at x = -4.
To evaluate this expression we follow some steps:
Step 1: First we have the polynomial expression:
=> 3x² - 5x + 3;
Step 2: On putting the value of x = -4 we get:
=> 3(-4)² - 5(-4) + 3;
Step 3: Then we simplify the expression:
=> 3 x 16 + 20 + 3;
=> 48 + 23;
=> 71.
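The same evaluation in a few lines of Python (the function name is ours):

```python
# Evaluate 3x**2 - 5x + 3 at x = -4, mirroring the steps above.
def p(x):
    return 3 * x ** 2 - 5 * x + 3

print(p(-4))  # 71
```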
A quadratic equation can be defined as a polynomial equation with highest degree 2, and can also be called a second degree equation. Consider the function f(X) = JX² + KX + L; since the highest degree of the variable X is two, this is a quadratic equation. J, K and L are the quadratic coefficients.
A quadratic equation is an equation whose highest degree is two. Degree 2 indicates that the highest power in the equation is two.
Let’s discuss process of solving Quadratic Equation by factoring. Let’s consider the following quadratic equation to understand how to solve quadratic equations by factoring?
A y2 + B y + C = 0,
Since maximum degree of varia...Read More
A standard quadratic equation is written in the form ax² + bx + c = 0, but a quadratic inequality is written in one of four forms:
1.) ax² + bx + c > 0, example: x² + 4x > 5,
2.) ax² + bx + c < 0, example: 8x² < 29,
3.) ax² + bx + c ≥ 0, example: 6 ≥ x² - x,
4.) ax² + bx + c ≤ 0, example: 4y² + 1 ≤ 8y.
There are some steps for solving such inequalities.
A quadratic equation is written as ax² + bx + c = 0; from this equation we can derive the quadratic formula and the discriminant. The quadratic formula is given as x = (-b ± √(b² - 4ac)) / (2a). It gives us two solutions, x₁ = (-b + √(b² - 4ac)) / (2a) and x₂ = (-b - √(b² - 4ac)) / (2a). Similarly, we can find the discriminant: D = b² - 4ac.
An equation with a highest degree of 2 is called a quadratic equation. We can say that an equation in which the highest power is a square is known as a quadratic equation. If the highest power is more than 2 then it is not a quadratic equation. It can be written as ax² + bx + c = 0, and the quadratic formula is given by:
⇨ x = (-b ± √(b² - 4ac)) / (2a)
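A small Python sketch of the formula; cmath.sqrt is used so a negative discriminant yields complex roots rather than an error (the function name is ours):

```python
# Roots of ax**2 + bx + c = 0 via the quadratic formula.
import cmath

def solve_quadratic(a, b, c):
    d = b ** 2 - 4 * a * c   # the discriminant
    root = cmath.sqrt(d)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -5, 6))  # ((3+0j), (2+0j)) -> x = 3 and x = 2
```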
A quadratic equation is a trinomial, which means the equation has three terms. The standard form of a quadratic equation is ax² + bx + c = 0. Here a, b and c are constants. Let us discuss the process of solving quadratic equations by completing the square.
In mathematics, completing the square means finding the last term of a perfect square trinomial.
When you think of copper, the penny in your pocket may come to mind; but NASA engineers are trying to save taxpayers millions of pennies by 3-D printing the first full-scale, copper rocket engine part.
“Building the first full-scale, copper rocket part with additive manufacturing is a milestone for aerospace 3-D printing,” said Steve Jurczyk, associate administrator for the Space Technology Mission Directorate at NASA Headquarters in Washington. “Additive manufacturing is one of many technologies we are embracing to help us continue our journey to Mars and even sustain explorers living on the Red Planet.”
Numerous complex parts made of many different materials are assembled to make engines that provide the thrust that powers rockets. Additive manufacturing has the potential to reduce the time and cost of making rocket parts like the copper liner found in rocket combustion chambers where super-cold propellants are mixed and heated to the extreme temperatures needed to send rockets to space.
“On the inside of the paper-edge-thin copper liner wall, temperatures soar to over 5,000 degrees Fahrenheit, and we have to keep it from melting by recirculating gases cooled to less than 100 degrees above absolute zero on the other side of the wall,” said Chris Singer, director of the Engineering Directorate at NASA’s Marshall Space Flight Center in Huntsville, Alabama, where the copper rocket engine liner was manufactured. “To circulate the gas, the combustion chamber liner has more than 200 intricate channels built between the inner and outer liner wall. Making these tiny passages with complex internal geometries challenged our additive manufacturing team.”
A selective laser melting machine in Marshall’s Materials and Processing Laboratory fused 8,255 layers of copper powder to make the chamber in 10 days and 18 hours. Before making the liner, materials engineers built several other test parts, characterized the material and created a process for additive manufacturing with copper.
“Copper is extremely good at conducting heat,” explained Zach Jones, the materials engineer who led the manufacturing at Marshall. “That’s why copper is an ideal material for lining an engine combustion chamber and for other parts as well, but this property makes the additive manufacturing of copper challenging because the laser has difficulty continuously melting the copper powder.”
Only a handful of copper rocket parts have been made with additive manufacturing, so NASA is breaking new technological ground by 3-D printing a rocket component that must withstand both extreme hot and cold temperatures and has complex cooling channels built on the outside of an inner wall that is as thin as a pencil mark. The part is built with GRCo-84, a copper alloy created by materials scientists at NASA’s Glenn Research Center in Cleveland, Ohio, where extensive materials characterization helped validate the 3-D printing processing parameters and ensure build quality. Glenn will develop an extensive database of mechanical properties that will be used to guide future 3-D printed rocket engine designs. To increase U.S. industrial competitiveness, data will be made available to American manufacturers in NASA’s Materials and Processing Information System (MAPTIS), managed by Marshall.
“Our goal is to build rocket engine parts up to 10 times faster and reduce cost by more than 50 percent,” said Chris Protz, the Marshall propulsion engineer leading the project. “We are not trying to just make and test one part. We are developing a repeatable process that industry can adopt to manufacture engine parts with advanced designs. The ultimate goal is to make building rocket engines more affordable for everyone.”
Manufacturing the copper liner is only the first step of the Low Cost Upper Stage-Class Propulsion Project funded by NASA’s Game Changing Development Program in the Space Technology Mission Directorate. NASA’s Game Changing Program funds the development of technologies that will revolutionize future space endeavors, including NASA’s journey to Mars. The next step in this project is for Marshall engineers to ship the copper liner to NASA’s Langley Research Center in Hampton, Virginia, where an electron beam freeform fabrication facility will direct deposit a nickel super-alloy structural jacket onto the outside of the copper liner. Later this summer, the engine component will be hot-fire tested at Marshall to determine how the engine performs under extreme temperatures and pressures simulating the conditions inside the engine as it burns propellant during a rocket flight.
A torn (perforated) eardrum is not usually serious and often heals on its own without any complications. However, complications such as hearing loss and infection in the middle ear sometimes occur. A small procedure to repair a perforated eardrum is an option if it does not heal by itself, especially if you have hearing loss.
What is the eardrum and how do we hear?
The eardrum (also called the tympanic membrane) is a thin skin-like structure in the ear. It lies between the outer (external) ear and the middle ear.
The ear is divided into three parts - the outer, middle and inner ear. Sound waves come into the outer ear and hit the eardrum, causing the eardrum to vibrate.
Behind the eardrum are three tiny bones (ossicles). The vibrations pass from the eardrum to these middle ear bones. The bones then transmit the vibrations to the cochlea in the inner ear. The cochlea converts the vibrations to sound signals which are sent down a nerve to the brain, which we 'hear'.
The middle ear behind the eardrum is normally filled with air. The middle ear is connected to the back of the nose by the Eustachian tube. This allows air in and out of the middle ear.
What is a perforated eardrum and what problems can it cause?
A perforated eardrum is a hole or tear that has developed in the eardrum. It can affect hearing. The extent of hearing loss can vary greatly. For example, tiny perforations may only cause minimal loss of hearing. Larger perforations may affect hearing more severely. Also, if the tiny bones (ossicles) are damaged in addition to the eardrum then the hearing loss would be much greater than, say, a small perforation which is not close to the ossicles.
With a perforation, you are at greater risk of developing an ear infection. This is because the eardrum normally acts as a barrier to bacteria and other germs that may get into the middle ear.
What can cause a perforated eardrum?
- Infections of the middle ear, which can damage the eardrum. In this situation you often have a discharge from the ear as pus runs out from the middle ear.
- Direct injury to the ear - for example, a punch to the ear.
- A sudden loud noise - for example, from a nearby explosion. The shock waves and sudden sound waves can tear (perforate) the eardrum. This is often the most severe type of perforation and can lead to severe hearing loss and ringing in the ears (tinnitus).
- Barotrauma. This occurs when you suddenly have a change in air pressure and there is a sharp difference in the pressure of air outside the ear and in the middle ear. For example, when descending in an aircraft. Pain in the ear due to a tense eardrum is common during height (altitude) changes when flying. However, a perforated eardrum only happens rarely in extreme cases. See separate leaflet called Barotrauma of the Ear for more details.
- Poking objects into the ear. This can sometimes damage the eardrum.
- Grommets. These are tiny tubes that are placed through the eardrum. They are used to treat glue ear, as they allow any mucus that is trapped in the middle ear to drain out from the ear. When a grommet falls out, there is a tiny gap left in the eardrum. This heals quickly in most cases.
How is a perforated eardrum diagnosed?
A doctor can usually diagnose a torn (perforated) eardrum simply by looking into the ear with a special torch called an otoscope. However, sometimes it is difficult to see the eardrum if there is a lot of inflammation, wax or infection present in the ear.
What is the treatment for a perforated eardrum?
No treatment is needed in most cases
A torn (perforated) eardrum will usually heal by itself within 6-8 weeks. It is a skin-like structure and, like skin that is cut, it will usually heal. In some cases, a doctor may prescribe antibiotic medicines if there is an infection or risk of infection developing in the middle ear whilst the eardrum is healing.
It is best to avoid water getting into the ear whilst it is healing. For example, your doctor may advise that you put some cotton wool or similar material into your outer ear whilst showering or washing your hair. It is best not to swim until the eardrum has healed.
Occasionally, a perforated eardrum gets infected and needs antibiotics. Some ear drops can occasionally damage the nerve supply to the ear. Your doctor will select a type that does not have this risk, or may give you medication by mouth.
Surgical treatment is sometimes considered
A small operation is an option to treat a perforated drum that does not heal by itself. There are various techniques which may be used to repair the eardrum, depending on how severe the damage is. This operation may be called a myringoplasty or a tympanoplasty. These operations are usually successful in fixing the perforation and improving hearing.
However, not all people with an unhealed perforation need treatment. Many people have a small permanent perforation with no symptoms or significant hearing loss. Treatment is mainly considered if there is hearing loss, as this may improve if the perforation is fixed. Also, swimmers may prefer to have a perforation repaired, as getting water in the middle ear can increase the risk of having an ear infection.
If you have a perforation that has not healed by itself, a doctor who is an ear specialist will advise on whether treatment is necessary.
In 1784, John Michell proposed that in the vicinity of compact massive objects, gravity can be strong enough that even light cannot escape. At that time, the Newtonian theory of gravitation and the so-called corpuscular theory of light were dominant. In these theories, if the escape velocity of an object exceeds the speed of light, then light originating inside or from it can escape temporarily but will return. In 1958, David Finkelstein used General Relativity to introduce a stricter definition of a local black hole event horizon as a boundary beyond which events of any kind cannot affect an outside observer. This led to information and firewall paradoxes, which encouraged the re-examination of the concept of local event horizons and the notion of black holes. Several theories were subsequently developed, some with, and some without, event horizons. Stephen Hawking, who was one of the leading developers of theories to describe black holes, suggested that an apparent horizon should be used instead of an event horizon, saying "gravitational collapse produces apparent horizons but no event horizons". He eventually concluded that "the absence of event horizons means that there are no black holes – in the sense of regimes from which light can't escape to infinity."
Any object that approaches the horizon from the observer's side appears to slow down and never quite crosses the horizon. Due to gravitational redshift, its image reddens over time as the object moves away from the observer.
In an expanding universe the speed of expansion reaches and even exceeds the speed of light, which prevents signals from travelling to some regions. A cosmic event horizon is a real event horizon because it affects all kinds of signals, including gravitational waves which travel at the speed of light.
More specific types of horizon include the related but distinct absolute and apparent horizons found around a black hole. Other distinct types include the Cauchy and Killing horizons; the photon spheres and ergospheres of the Kerr solution; particle and cosmological horizons relevant to cosmology; and isolated and dynamical horizons important in current black hole research.
Cosmic event horizon
In cosmology, the event horizon of the observable universe is the largest comoving distance from which light emitted now can ever reach the observer in the future. This differs from the concept of the particle horizon, which represents the largest comoving distance from which light emitted in the past could reach the observer at a given time. For events that occur beyond that distance, light has not had enough time to reach our location, even if it was emitted at the time the universe began. The evolution of the particle horizon with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, parts of the universe will never be observable, no matter how long the observer waits for the light from those regions to arrive. The boundary beyond which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.
The criterion for determining whether a particle horizon for the universe exists is as follows. Define a comoving distance $d_p$ as

$$d_p = \int_0^{t_0} \frac{c}{a(t)}\, dt$$

In this equation, $a$ is the scale factor, $c$ is the speed of light, and $t_0$ is the age of the Universe. If $d_p \to \infty$ (i.e., points arbitrarily far away can be observed), then no event horizon exists. If $d_p \neq \infty$, a horizon is present.
Examples of cosmological models without an event horizon are universes dominated by matter or by radiation. An example of a cosmological model with an event horizon is a universe dominated by the cosmological constant (a de Sitter universe).
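To illustrate the criterion numerically (this sketch is ours, not part of the original text, and assumes SymPy): for a de Sitter universe with a(t) = e^(Ht) the integral converges, so an event horizon exists, while for a matter-dominated universe with a(t) ∝ t^(2/3) it diverges:

```python
# Event-horizon criterion for two scale factors, using SymPy.
from sympy import Rational, exp, integrate, oo, symbols

t, t0, H, c = symbols('t t0 H c', positive=True)

# de Sitter: a(t) = exp(H*t) -> integral converges -> event horizon exists
print(integrate(c * exp(-H * t), (t, t0, oo)))  # c*exp(-H*t0)/H

# matter-dominated: a(t) = t**(2/3) -> integral diverges -> no event horizon
print(integrate(c / t ** Rational(2, 3), (t, t0, oo)))  # oo
```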
A calculation of the speeds of the cosmological event and particle horizons was given in a paper on the FLRW cosmological model, approximating the Universe as composed of non-interacting constituents, each one being a perfect fluid.
Apparent horizon of an accelerated particle
If a particle is moving at a constant velocity in a non-expanding universe free of gravitational fields, any event that occurs in that Universe will eventually be observable by the particle, because the forward light cones from these events intersect the particle's world line. On the other hand, if the particle is accelerating, in some situations light cones from some events never intersect the particle's world line. Under these conditions, an apparent horizon is present in the particle's (accelerating) reference frame, representing a boundary beyond which events are unobservable.
For example, this occurs with a uniformly accelerated particle. A spacetime diagram of this situation is shown in the figure to the right. As the particle accelerates, it approaches, but never reaches, the speed of light with respect to its original reference frame. On the spacetime diagram, its path is a hyperbola, which asymptotically approaches a 45-degree line (the path of a light ray). An event whose light cone's edge is this asymptote or is farther away than this asymptote can never be observed by the accelerating particle. In the particle's reference frame, there is a boundary behind it from which no signals can escape (an apparent horizon). The distance to this boundary is given by $c^2/\alpha$, where $\alpha$ is the constant proper acceleration of the particle.
While approximations of this type of situation can occur in the real world (in particle accelerators, for example), a true event horizon is never present, as this requires the particle to be accelerated indefinitely (requiring arbitrarily large amounts of energy and an arbitrarily large apparatus).
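For a feel for the numbers, here is our back-of-the-envelope check: at a comfortable 1 g of proper acceleration, the apparent horizon trails the particle by just under a light-year.

```python
# Distance to the horizon behind a uniformly accelerating observer,
# d = c**2 / alpha, evaluated at 1 g.
c = 299_792_458.0     # speed of light, m/s
alpha = 9.81          # proper acceleration, m/s**2
d = c ** 2 / alpha
print(d)              # ~9.16e15 m
print(d / 9.4607e15)  # ~0.97 light-years
```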
Interacting with a cosmic horizon
In the case of a horizon perceived by a uniformly accelerating observer in empty space, the horizon seems to remain a fixed distance from the observer no matter how its surroundings move. Varying the observer's acceleration may cause the horizon to appear to move over time, or may prevent an event horizon from existing, depending on the acceleration function chosen. The observer never touches the horizon and never passes a location where it appeared to be.
In the case of a horizon perceived by an occupant of a de Sitter universe, the horizon always appears to be a fixed distance away for a non-accelerating observer. It is never contacted, even by an accelerating observer.
Event horizon of a black hole
Far away from the black hole, a particle can move in any direction. It is only restricted by the speed of light.
Closer to the black hole spacetime starts to deform. In some convenient coordinate systems, there are more paths going towards the black hole than paths moving away.[Note 1]
Inside the event horizon all future time paths bring the particle closer to the center of the black hole. It is no longer possible for the particle to escape, no matter the direction the particle is traveling.
One of the best-known examples of an event horizon derives from general relativity's description of a black hole, a celestial object so dense that no nearby matter or radiation can escape its gravitational field. Often, this is described as the boundary within which the black hole's escape velocity is greater than the speed of light. However, a more detailed description is that within this horizon, all lightlike paths (paths that light could take) and hence all paths in the forward light cones of particles within the horizon, are warped so as to fall farther into the hole. Once a particle is inside the horizon, moving into the hole is as inevitable as moving forward in time - no matter what direction the particle is traveling, and can actually be thought of as equivalent to doing so, depending on the spacetime coordinate system used.
The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body that fits inside this radius (although a rotating black hole operates slightly differently). The Schwarzschild radius of an object is proportional to its mass. Theoretically, any amount of matter will become a black hole if compressed into a space that fits within its corresponding Schwarzschild radius. For the mass of the Sun this radius is approximately 3 kilometers and for the Earth it is about 9 millimeters. In practice, however, neither the Earth nor the Sun have the necessary mass and therefore the necessary gravitational force, to overcome electron and neutron degeneracy pressure. The minimal mass required for a star to be able to collapse beyond these pressures is the Tolman–Oppenheimer–Volkoff limit, which is approximately three solar masses.
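The quoted figures follow directly from the Schwarzschild radius formula r_s = 2GM/c². A quick check in Python (masses are standard reference values):

```python
# Schwarzschild radii for the Sun and the Earth.
G = 6.674e-11    # gravitational constant, m**3 kg**-1 s**-2
c = 299_792_458.0  # speed of light, m/s

for name, mass_kg in [('Sun', 1.989e30), ('Earth', 5.972e24)]:
    r_s = 2 * G * mass_kg / c ** 2
    print(name, r_s, 'm')  # Sun ~2953 m (~3 km), Earth ~0.0089 m (~9 mm)
```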
According to the fundamental gravitational collapse models, an event horizon forms before the singularity of a black hole. If all the stars in the Milky Way were to gradually aggregate towards the galactic center while keeping their proportionate distances from each other, they would all fall within their joint Schwarzschild radius long before they are forced to collide. Up to the collapse in the far future, observers in a galaxy surrounded by an event horizon would proceed with their lives normally.
Black hole event horizons are widely misunderstood. Common, although erroneous, is the notion that black holes "vacuum up" material in their neighborhood, where in fact they are no more capable of seeking out material to consume than any other gravitational attractor. As with any mass in the universe, matter must come within its gravitational scope for the possibility to exist of capture or consolidation with any other mass. Equally common is the idea that matter can be observed falling into a black hole. This is not possible. Astronomers can detect only accretion disks around black holes, where material moves with such speed that friction creates high-energy radiation which can be detected (similarly, some matter from these accretion disks is forced out along the axis of spin of the black hole, creating visible jets when these streams interact with matter such as interstellar gas or when they happen to be aimed directly at Earth). Furthermore, a distant observer will never actually see something reach the horizon. Instead, while approaching the hole, the object will seem to go ever more slowly, while any light it emits will be further and further redshifted.
The black hole event horizon is teleological in nature, meaning that we need to know the entire future space-time of the universe to determine the current location of the horizon, which is essentially impossible. Because of the purely theoretical nature of the event horizon boundary, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculatory boundary in a finite amount of proper time.
Interacting with black hole horizons
A misconception concerning event horizons, especially black hole event horizons, is that they represent an immutable surface that destroys objects that approach them. In practice, all event horizons appear to be some distance away from any observer, and objects sent towards an event horizon never appear to cross it from the sending observer's point of view (as the horizon-crossing event's light cone never intersects the observer's world line). Attempting to make an object near the horizon remain stationary with respect to an observer requires applying a force whose magnitude increases unboundedly (becoming infinite) the closer it gets.
In the case of the horizon around a black hole, observers stationary with respect to a distant object will all agree on where the horizon is. While this seems to allow an observer lowered towards the hole on a rope (or rod) to contact the horizon, in practice this cannot be done. The proper distance to the horizon is finite, so the length of rope needed would be finite as well, but if the rope were lowered slowly (so that each point on the rope was approximately at rest in Schwarzschild coordinates), the proper acceleration (G-force) experienced by points on the rope closer and closer to the horizon would approach infinity, so the rope would be torn apart. If the rope is lowered quickly (perhaps even in freefall), then indeed the observer at the bottom of the rope can touch and even cross the event horizon. But once this happens it is impossible to pull the bottom of rope back out of the event horizon, since if the rope is pulled taut, the forces along the rope increase without bound as they approach the event horizon and at some point the rope must break. Furthermore, the break must occur not at the event horizon, but at a point where the second observer can observe it.
Assuming that the possible apparent horizon is far inside the event horizon, or there is none, observers crossing a black hole event horizon would not actually see or feel anything special happen at that moment. In terms of visual appearance, observers who fall into the hole perceive the eventual apparent horizon as a black impermeable area enclosing the singularity. Other objects that had entered the horizon area along the same radial path but at an earlier time would appear below the observer as long as they had not yet entered the apparent horizon, and they could exchange messages. Increasing tidal forces are also locally noticeable effects, as a function of the mass of the black hole. In realistic stellar black holes, spaghettification occurs early: tidal forces tear materials apart well before the event horizon. However, in supermassive black holes, which are found in centers of galaxies, spaghettification occurs inside the event horizon. A human astronaut would survive the fall through an event horizon only in a black hole with a mass of approximately 10,000 solar masses or greater.
Beyond general relativity
A cosmic event horizon is commonly accepted as a real event horizon, whereas the description of a local black hole event horizon given by general relativity is found to be incomplete and controversial. When the conditions under which local event horizons occur are modeled using a more comprehensive picture of the way the Universe works, that includes both relativity and quantum mechanics, local event horizons are expected to have properties that are different from those predicted using general relativity alone.
At present, it is expected by the Hawking radiation mechanism that the primary impact of quantum effects is for event horizons to possess a temperature and so emit radiation. For black holes, this manifests as Hawking radiation, and the larger question of how the black hole possesses a temperature is part of the topic of black hole thermodynamics. For accelerating particles, this manifests as the Unruh effect, which causes space around the particle to appear to be filled with matter and radiation.
According to the controversial black hole firewall hypothesis, matter falling into a black hole would be burned to a crisp by a high energy "firewall" at the event horizon.
An alternative is provided by the complementarity principle, according to which, in the chart of the far observer, infalling matter is thermalized at the horizon and reemitted as Hawking radiation, while in the chart of an infalling observer matter continues undisturbed through the inner region and is destroyed at the singularity. This hypothesis does not violate the no-cloning theorem as there is a single copy of the information according to any given observer. Black hole complementarity is actually suggested by the scaling laws of strings approaching the event horizon, suggesting that in the Schwarzschild chart they stretch to cover the horizon and thermalize into a Planck length-thick membrane.
A complete description of local event horizons generated by gravity is expected to, at minimum, require a theory of quantum gravity. One such candidate theory is M-theory. Another such candidate theory is loop quantum gravity.
- The set of possible paths, or more accurately the future light cone containing all possible world lines (in this diagram represented by the yellow/blue grid), is tilted in this way in Eddington–Finkelstein coordinates (the diagram is a "cartoon" version of an Eddington–Finkelstein coordinate diagram), but in other coordinates the light cones are not tilted in this way, for example in Schwarzschild coordinates they simply narrow without tilting as one approaches the event horizon, and in Kruskal–Szekeres coordinates the light cones don't change shape or orientation at all.
Data Types, Variables and Arithmetic Operators
Let us write a simple equation in math to calculate the mean of a set of numbers:
a = 15, b = 35, c = 55
mean = (a+b+c) / 3 = 35
To do this simple calculation, you may be using mental math or a calculator. But if you are writing a Python program to do so, then you first have to understand how to declare the variables a, b, c and mean, understand the data types (integers, real numbers, text, etc.) that can be assigned to your variables and, finally, understand the various arithmetic operators that you can use. In this lesson we will learn all of these simple concepts.
Python's common primitive data types
| Data type | Name | Example | Allowed Values |
|---|---|---|---|
| int | Integer | 35 | whole numbers |
| float | Floating point | 35.0 | real numbers |
| str | String | "joe" | text enclosed in quotes |
| bool | Boolean | True | True or False |
Python assigns the data type based on the literal value assigned to the variable and there is no need to assign data type like in C or Java.
Let us now get our hands dirty by keying in the program shown below in the Code cell, to compute the mean in Python:
a = 15
b = 35
c = 55
mean = (a+b+c) / 3
print(mean)
print(type(mean))
Key in the above statements one at a time in the Code input cell in firstConcept.ipynb file opened in the previous lesson. Although for your convenience a Copy button is given, if you are new to programming it is recommended that you key in the values, one statement at a time, instead of using the Copy button. To run this program, ensure that the cursor is inside the Code input cell and then press control+enter for Mac or ctrl+enter for Windows and notice the output.
Notice the class 'float' printed below 35.0. Since the computed answer is a real number, Python has automatically assigned the float data type to the computed answer.
Note: While the first 4 lines of code are similar to algebraic expressions, the two new keywords you may notice are print and type.
These are called functions. In very simple terms, a function can be considered as a black box, which takes zero or more inputs called arguments, and spits out zero or more outputs. In this case, the print function takes in the argument mean, and spits out that value as the output on the screen. The type function takes in the argument mean and spits out the data type of the argument that is passed in. This output from the type function is again sent as an argument to the print function.
Function arguments are passed within a pair of parentheses ().
Rules for variable name declaration
Must start with a letter or underscore
Must contain only letters, digits or underscores
Must not use any of the reserved keywords that are used by Python
Keywords in Python:
False None True and as assert async await break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield
Recommendation for Variable Names
Give meaningful names for variables instead of using a, b, x, y, etc., unless it is a variable declared in a loop or a mathematical equation like in the example shown.
Start with lowercase letter and use underscore to separate words.
Although camel case notation for variable names is in vogue for Object Oriented Programming (OOP), in Data Analytics we rarely create an object, so we will use underscore notation in this book.
- Example of camel case: studentName="joe", Example of underscore: student_name="joe"
Points to note
- Variable names are case sensitive: mean != Mean
- A variable should first be defined before it is used. For example, a code cell that prints a variable d before d has been assigned throws an exception:
NameError: name 'd' is not defined
Single quotes ('), double quotes (") and triple quotes (''' or """) are allowed to enclose a String literal value.
Literal values for float, int, bool should not be enclosed with any type of quote.
A variable which is assigned one type first can get reassigned with another type later. Key in the below statements in the code input cell, run the code and watch the output.
weight = 100
weight = "150 pounds"
print(weight)
You will notice that the program runs without any error and the output is 150 pounds.
Few more tips on trouble shooting
- Programming context is maintained between the code cells. Variables which are declared in one cell are available to code cells which are executed after the cell containing the variable declaration is executed. The order of the cells in the notebook does not matter as long as a cell is executed after the variable declaration code is executed. However, it is a good practice to write the code cells in the order in which they should be executed.
- Sometimes you may lose track of all the variables active in your context and you may be seeing results which you did not anticipate. In such cases it is a good idea to restart your Kernel and start your executions with a clean slate. To start a clean run of all the code cells use Notebook --> Restart Kernel, Notebook --> Run All Cells
- Shutdown the Kernel for the notebook, close the notebook file and reopen if there are persistent issues which are not resolved by following the above procedure.
You have already used the addition (+) operator in the example above. The other arithmetic operators in Python are listed below:
Assume x has a value of 2 before the execution of any one of the statements,
| Operator name | Notation | Short Notation | Result |
|---|---|---|---|
| Addition | x = x + 1 | x += 1 | 3 |
| Subtraction | x = x - 1 | x -= 1 | 1 |
| Multiplication | x = x * 2 | x *= 2 | 4 |
| Division | x = x / 2 | x /= 2 | 1.0 |
| Integer Division | x = x // 2 | x //= 2 | 1 |
| Modulo | x = x % 2 | x %= 2 | 0 |
| Exponent | x = x ** 2 | x **= 2 | 4 |
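If you want to verify the table, key the following into a code cell; x is reset to 2 before each operation, just as the table assumes:

```python
x = 2; x += 1;  print(x)   # 3
x = 2; x -= 1;  print(x)   # 1
x = 2; x *= 2;  print(x)   # 4
x = 2; x /= 2;  print(x)   # 1.0
x = 2; x //= 2; print(x)   # 1
x = 2; x %= 2;  print(x)   # 0
x = 2; x **= 2; print(x)   # 4
```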
Points to note
- The Short Notation is used wherever possible instead of the full Notation statements. Both achieve the same result, but one is shorter in representation.
- All the operations are very similar to standard algebraic results.
- Modulo operator returns the integer remainder after division
- Division result is always a floating point number
- Recommended style guide for Python is PEP 8 - https://www.python.org/dev/peps/pep-0008/
The order of operations is very similar to the algebraic rules - PEMDAS. It stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction.
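A few print statements in a code cell make the ordering visible (these examples are ours):

```python
print(2 + 3 * 4)           # 14: multiplication before addition
print((2 + 3) * 4)         # 20: parentheses first
print(2 ** 3 * 2)          # 16: exponent before multiplication
print(-3 ** 2, (-3) ** 2)  # -9 9: ** binds tighter than unary minus
```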
What to Know About Hearing Loss
Hearing is one of the five senses. It is a complex process which requires both the detection of sound and the ability to attach meaning to that sound. The ability to hear is critical to communication and navigating the world around you.
The human ear is fully developed at birth, capable of responding to sounds ranging from very soft to very loud. Infants respond to sound even before birth.
So, How Do We Hear?
The auditory system can be divided into two main sections: the peripheral auditory system and the central auditory system. The peripheral auditory system consists of the outer ear, the middle ear and the cochlea (inner ear). The outer ear includes the external ear (pinna), the ear canal and the eardrum, also known as the tympanic membrane. Sound waves travel through the ear canal, striking the eardrum and causing it to move or vibrate.
The middle ear is the space behind the eardrum that contains the three smallest bones in the human body. This chain of tiny bones is connected to the eardrum at one end and to an opening to the inner ear at the other end. Vibrations from the eardrum cause the bones to move, sending the sound waves to the inner ear.
The inner ear is a fluid-filled structure called the cochlea which contains hair cells. Vibrations passed into the inner ear cause fluid movement, which then stimulates these tiny hair cells. The hair cells generate an electrical signal, which is passed along the auditory nerve into a region of the brain called the auditory cortex. The auditory cortex assigns meaning to the sound.
Hearing Loss Causes and Types
Approximately 95 percent of hearing loss in the adult population is sensorineural in nature. In this type of hearing loss, the problem is due to damage to or degeneration of the inner ear (sensory) or auditory nerve (neural). The most common causes of sensorineural hearing loss are noise exposure, age, and hereditary predisposition. Other causes include drugs toxic to the auditory system, viral illness, disturbance of inner ear fluids, and invasion of the inner ear by excessive temporal bone growth.
Approximately five percent of hearing loss in the adult population is conductive in nature. In this type of hearing loss, the problem is due to mechanical or structural damage to the outer and/or middle ear, resulting in reduced sound transmission to the inner ear. Common causes are impacted wax, perforated eardrum, middle ear infection, otosclerosis (stiffening of the middle ear bones), cholesteatoma, and congenital anomalies. With a conductive hearing loss, it is possible that medical intervention may result in partial or complete restoration of hearing. In these cases, an appropriate medical referral is warranted.
Mixed hearing loss is a combination of sensorineural and conductive hearing loss.
The Impact of Hearing Loss
The inability to respond appropriately to everyday sounds and communicate effectively with others can not only cause embarrassment, but may have serious negative consequences. A person with a mild to moderate hearing loss may be at risk without knowing it. Research has confirmed that hearing loss can have adverse effects on your ability to function effectively, as well as a negative impact on several aspects of daily life. Family relationships, enjoyment of social activities, and performance in work settings may all be negatively affected by hearing loss. Hearing loss can also be dangerous if one fails to hear warning signals or understand the doctor’s instructions regarding proper use of medications.
A Hearing Industries Association and National Council on Aging study clearly demonstrated that individuals with unaided hearing loss reported significantly greater feelings of depression, paranoia, anger, and frustration than hearing aid users.
Other recent studies have linked hearing loss to serious health issues, including heart disease, early onset of dementia, and even diabetes. Hearing loss can greatly affect the quality of life for adults and children. Unmanaged hearing loss can have an impact on employment/earning potential, education, and general well-being. The good news is that individuals who use hearing aids report significantly higher levels of involvement in social activities, fewer worries, and more positive social and family experiences.
Rehabilitation of Hearing Loss
If your hearing evaluation reveals hearing loss, your audiologist will work with you to develop a plan to improve your hearing and communication. This consultation will begin with a discussion about your hearing loss, your lifestyle, your listening needs and your budget. This will help determine the technology that is most appropriate for you. It is important for you to understand your hearing loss and what you can expect from hearing aids if they are recommended for you. We are also happy to help you navigate the myriad of marketing, direct mail, and health insurance benefit information related to hearing aids that many consumers report being inundated and confused by.
With a better understanding of you and your hearing loss, your audiologist may also recommend assistive listening devices (ALDs) or hearing assistive technology (HATs) such as captioned or specialized telephones, TV devices, FM systems, remote microphones, and/or audio-loops. These can help you hear better in situations where hearing aids alone may not completely resolve your issue(s) (large groups, lectures, church). Hearing assistive technology can be used alone or with hearing aids.
A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm.
The machine operates on an infinite memory tape divided into discrete cells, each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right, or halts the computation. The choice of which replacement symbol to write and which direction to move is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read.
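A minimal sketch of this operation in Python (the machine below is our illustrative example, not one from the literature): the transition table maps (state, scanned symbol) to (symbol to write, head move, next state), and the sample machine flips every bit of its input before halting.

```python
# A tiny Turing machine interpreter. The tape is stored sparsely in a
# dict; unvisited cells read as the blank symbol '_'.
def run(table, tape, state='q0', halt='qH'):
    cells = dict(enumerate(tape))
    head = 0
    while state != halt:
        symbol = cells.get(head, '_')
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += {'L': -1, 'R': 1, 'N': 0}[move]
    return ''.join(cells[i] for i in sorted(cells))

# Example machine: invert every bit, halt at the first blank.
flip = {
    ('q0', '0'): ('1', 'R', 'q0'),
    ('q0', '1'): ('0', 'R', 'q0'),
    ('q0', '_'): ('_', 'N', 'qH'),
}
print(run(flip, '1011'))  # 0100_ (the trailing blank was scanned once)
```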
The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review. With this model, Turing was able to answer two questions in the negative:
- Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
- Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').
Turing machines proved the existence of fundamental limitations on the power of mechanical computation. While they can express arbitrary computations, their minimalist design makes them unsuitable for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.
Turing completeness is the ability for a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.
A Turing machine is a general example of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine (automaton) capable of enumerating some arbitrary subset of valid strings of an alphabet; these strings are part of a recursively enumerable set. A Turing machine has a tape of infinite length on which it can perform read and write operations.
Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.
The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus.
A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church–Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory.
In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of:
...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.
The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").
In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.
Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.
In addition, the Turing machine can also have a reject state to make rejection more explicit. In that case there are three possibilities: accepting, rejecting, and running forever. Another possibility is to regard the final values on the tape as the output. However, if the only output is the final state the machine ends up in (or never halting), the machine can still effectively output a longer string by taking in an integer that tells it which bit of the string to output.
In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like."
The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, p. 126-127 and Davis (2000) p. 152):
Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with new state qm listed immediately after the scanned symbol Sj:
For the remainder of this article "definition 1" (the Turing/Davis convention) will be used.
In the following table, Turing's original model allowed only the first three lines that he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples:
Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples.
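As a concrete illustration (a minimal sketch, not drawn from the sources cited above), such a table can be written as a dictionary of 5-tuples in the Turing/Davis convention and driven by a few lines of simulation code. The state names, the particular rule table (one common formulation of the 3-state, 2-symbol busy beaver discussed below) and the helper names are assumptions chosen purely for demonstration:

```python
# Minimal Turing machine simulator (illustrative sketch only).
# Rules follow the 5-tuple convention: (current state, scanned symbol) ->
# (symbol to print, head move, next state). The table is one common
# formulation of the 3-state, 2-symbol busy beaver.
from collections import defaultdict

RULES = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"),
    ("C", 1): (1, "N", "HALT"),   # "N" = no move; the machine halts
}

def run(rules, start_state="A", max_steps=100):
    tape = defaultdict(int)       # unbounded tape of 0s ("blanks")
    head, state, steps = 0, start_state, 0
    while state != "HALT" and steps < max_steps:
        symbol = tape[head]
        print_symbol, move, state = rules[(state, symbol)]
        tape[head] = print_symbol
        head += {"R": 1, "L": -1, "N": 0}[move]
        steps += 1
    return tape, head, state, steps

if __name__ == "__main__":
    tape, head, state, steps = run(RULES)
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    print(f"halted in state {state} after {steps} steps; tape: {cells}")
```

Run as written, the sketch halts in the HALT state with six 1s on the tape; published busy-beaver tables may differ in minor details such as the direction of the final halting move.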
Less frequently, the use of 4-tuples is encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); see also the Post–Turing machine.
The word "state" used in context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed—i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration", and the machine's (or person's) "state of progress" through the computation—the current state of the total system. What Turing called "the state formula" includes both the current instruction and all the symbols on the tape:
Thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) consisting of the symbols on the tape followed by Δ (which is supposed not to appear elsewhere) and then by the note of instructions. This expression is called the "state formula".
Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration"—the instruction's label—beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol.
A variant of this is seen in Kleene (1952) where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374-375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149), that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right.
Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below): 1A1
This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B.
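As a minimal sketch (not taken from any of the works cited here), the composite just described can be rendered by a small helper that prints the non-blank symbols to the left of the head, then the state label, then the scanned symbol and the non-blank symbols to the right; the function name and tape representation are assumptions made for illustration:

```python
# Sketch: print an "instantaneous description" with the state label placed
# to the left of the scanned symbol, as in the convention described above.
def instantaneous_description(tape, head, state, blank="0"):
    if not tape:
        return state + blank
    lo, hi = min(tape), max(tape)
    left = "".join(str(tape.get(i, 0)) for i in range(lo, head))
    right = "".join(str(tape.get(i, 0)) for i in range(head, hi + 1))
    return left + state + (right if right else blank)

# Tape ...0110..., head on the right-most 1, state A  ->  "1A1"
print(instantaneous_description({0: 1, 1: 1}, head=1, state="A"))
# Tape with a single 1, head on the blank to its left, state B  ->  "B01"
print(instantaneous_description({0: 1}, head=-1, state="B"))
```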
"State" in the context of Turing machines should be clarified as to which is being described: the current instruction, or the list of symbols on the tape together with the current instruction, or the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol.
Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion.
Figure: the above table expressed as a "state transition" diagram.
Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing.
Whether a drawing represents an improvement on its table must be decided by the reader for the particular context.
The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters".
The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936) The Undecidable, pp. 139–140).
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)
A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right.
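A minimal sketch of the two-stack construction (illustrative only; the class and method names are invented for this example) makes the idea concrete: one stack holds the cells to the left of the head, the other holds the scanned cell and everything to its right, so moving the head is simply a pop from one stack and a push onto the other, with blanks supplied on demand:

```python
# Sketch: a Turing-machine tape represented by two stacks.
# `left` holds the cells to the left of the head (nearest cell on top);
# `right` holds the scanned cell on top, then the cells to its right.
BLANK = 0

class TwoStackTape:
    def __init__(self, symbols=()):
        self.left = []
        self.right = list(reversed(symbols)) or [BLANK]

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(BLANK)   # extend the tape with a blank

    def move_left(self):
        self.right.append(self.left.pop() if self.left else BLANK)

tape = TwoStackTape([1, 0, 0])
tape.write(1)
tape.move_right()
print(tape.read())  # 0 -- the second cell of the input
```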
At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model.
Common equivalent models are the multi-tape Turing machine, multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM) as opposed to the deterministic Turing machine (DTM) for which the action table has at most one entry for each combination of symbol and state.
A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and is thus not capable of simulating a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al. (2009) have shown that among the general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing-equivalent, as all instantiations of ANSI C (different instantiations are possible as the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory. This is because the size of memory reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle. Such languages are Turing complete only in principle: memory allocation in a programming language is allowed to fail, which means the programming language can be Turing complete when failed memory allocations are ignored, but the compiled programs executable on a real computer cannot be.
Early in his paper (1936) Turing makes a distinction between an "automatic machine"—its "motion ... completely determined by the configuration" and a "choice machine":
...whose motion is only partially determined by the configuration ... When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems.
Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2^n + i1·2^(n−1) + i2·2^(n−2) + ... + in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138)
This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration.
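The footnote's technique can be sketched as follows (an illustration only; attempt_with_choices is a hypothetical stand-in for whatever work the choice machine would do once its ambiguous choices are fixed). Choice sequences are enumerated in a fixed order, so a single deterministic procedure works through "proof 1, proof 2, proof 3, ..." without ever needing an external operator:

```python
# Sketch: deterministic enumeration of all finite 0/1 choice sequences,
# mimicking a nondeterministic machine by trying every sequence in turn.
from itertools import count, product

def enumerate_choice_sequences():
    """Yield every finite 0/1 choice sequence: (), (0,), (1,), (0,0), ..."""
    for n in count(0):
        for bits in product((0, 1), repeat=n):
            yield bits

def deterministic_search(attempt_with_choices, limit=1000):
    """Try choice sequences in order; return the first one that succeeds."""
    for k, bits in enumerate(enumerate_choice_sequences()):
        if k >= limit:                  # a real search would run unboundedly
            return None
        if attempt_with_choices(bits):
            return bits

# Toy stand-in: "succeed" when the choices spell out 1, 0, 1.
print(deterministic_search(lambda bits: bits == (1, 0, 1)))  # (1, 0, 1)
```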
An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an unspecified entity "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168).
It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with the tape on the beginning of which is written the string of quintuples separated by semicolons of some computing machine M, then U will compute the same sequence as M.
This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer.
Turing's paper ... contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it.
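As a rough illustration of the quoted idea (the textual format below is invented for demonstration and is not Turing's actual "standard description" encoding), a machine M's table can be serialised as a string of quintuples separated by semicolons, written onto U's tape, and parsed back before simulation:

```python
# Sketch: serialise a transition table as a semicolon-separated string of
# quintuples, then parse it back so a simulator could run the described
# machine from its description alone.
def encode(rules):
    return ";".join(f"{q},{s},{p},{m},{q2}" for (q, s), (p, m, q2) in rules.items())

def decode(description):
    rules = {}
    for quintuple in description.split(";"):
        q, s, p, m, q2 = quintuple.split(",")
        rules[(q, int(s))] = (int(p), m, q2)
    return rules

rules = {("A", 0): (1, "R", "B"), ("B", 0): (1, "L", "HALT")}
description = encode(rules)
print(description)                 # A,0,1,R,B;B,0,1,L,HALT
assert decode(description) == rules
```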
In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9)
It is often believed that Turing machines, unlike simpler automata, are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations.
There are a number of ways to explain why Turing machines are useful models of real computers; most simply, anything a real computer can compute, a Turing machine can also compute.
A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook-Reckhow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.
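A minimal sketch of the distinguishing feature, indirect addressing, may make the contrast clearer (illustrative only; the instruction names are invented): the register an instruction touches is determined by the contents of another register rather than being fixed in the program text, something a plain Turing machine's fixed transition table cannot express directly:

```python
# Sketch: indirect addressing in a RASP-style register machine.
# An instruction may use the *contents* of one register as the *address*
# of another, so the register touched is not fixed in the program text.
from collections import defaultdict

registers = defaultdict(int)

def load_const(r, value):          # direct: register number fixed in the program
    registers[r] = value

def load_indirect(dst, ptr):       # indirect: register named by registers[ptr]
    registers[dst] = registers[registers[ptr]]

load_const(5, 42)     # put 42 in register 5
load_const(1, 5)      # register 1 holds the *address* 5
load_indirect(0, 1)   # register 0 := contents of the register named by register 1
print(registers[0])   # 42
```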
Another limitation of Turing machines is that they do not model concurrency well. For example, there is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing machine starting on a blank tape. (See article on unbounded nondeterminism.) By contrast, there are always-halting concurrent systems with no inputs that can compute an integer of unbounded size. (A process can be created with local storage that is initialized with a count of 0 that concurrently sends itself both a stop and a go message. When it receives a go message, it increments its count by 1 and sends itself a go message. When it receives a stop message, it stops with an unbounded number in its local storage.)
In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice.
Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred.
Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis":
That the whole of the development and operations of analysis are now capable of being executed by machinery.
Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. p. 52–53):
Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines" including those of Percy Ludgate (1909), Leonardo Torres y Quevedo (1914), Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), Howard Aiken (1937). However:
… the emphasis is on programming a fixed iterable sequence of arithmetical operations. The fundamental importance of conditional iteration and conditional transfer for a general theory of calculating machines is not recognized…
The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900
With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows:
10. Determination of the solvability of a Diophantine equation. Given a Diophantine equation with any number of unknown quantities and with rational integral coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
The Entscheidungsproblem [decision problem for first-order logic] is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability ... The Entscheidungsproblem must be considered the main problem of mathematical logic.
A quite definite generally applicable prescription is required which will allow one to decide in a finite number of steps the truth or falsity of a given purely logical assertion ...
Behmann remarks that ... the general problem is equivalent to the problem of deciding which mathematical propositions are true.
If one were able to solve the Entscheidungsproblem then one would have a "procedure for solving many (or even all) mathematical problems".
By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s.
The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions.
But what Church had done was something rather different, and in a certain sense weaker. ... the Turing construction was more direct, and provided an argument from first principles, closing the gap in Church's demonstration.
And Post had only proposed a definition of calculability and criticized Church's "definition", but had proved nothing.
In the spring of 1935, Turing, as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ...". In his 1955 obituary of Turing, Newman writes:
To the question 'what is a "mechanical" process?' Turing returned the characteristic answer 'Something that can be done by a machine' and he embarked on the highly congenial task of analysing the general notion of a computing machine.
I suppose, but do not know, that Turing, right from the start of his work, had as his goal a proof of the undecidability of the Entscheidungsproblem. He told me that the 'main idea' of the paper came to him when he was lying in Grantchester meadows in the summer of 1935. The 'main idea' might have either been his analysis of computation or his realization that there was a universal machine, and so a diagonal argument to prove unsolvability.
While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function":
It was stated above that 'a function is effectively calculable if its values can be found by some purely mechanical process'. We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability with effective calculability. It is not difficult, though somewhat laborious, to prove that these three definitions [the 3rd is the λ-calculus] are equivalent.
When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine): "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937):
[that] the Hilbert Entscheidungsproblem can have no solution ... I propose, therefore to show that there can be no general process for determining whether a given formula U of the functional calculus K is provable, i.e. that there can be no machine which, supplied with any one U of these formulae, will eventually say whether U is provable.
Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable".
In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). While Turing might have been just initially curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges p. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models—but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.
Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:
Depending on the objects one likes to manipulate in the computations (numbers like nonnegative integers or alphanumeric strings), two models have obtained a dominant position in machine-based complexity theory:
the off-line multitape Turing machine..., which represents the standard model for string-oriented computation, and the random access machine (RAM) as introduced by Cook and Reckhow ..., which models the idealized Von Neumann-style computer.
Only in the related area of analysis of algorithms is this role taken over by the RAM model.
The first written records of the region come from Arab traders in the 9th and 10th centuries. In medieval times, the region was dominated by the Trans-Saharan trade and was ruled by the Mali Empire. In the 16th century, the region came to be ruled by the Songhai Empire. The first Europeans to visit the Gambia River were the Portuguese in the 15th century, who attempted to settle on the river banks, but no settlement of significant size was established. Descendants of the Portuguese settlers remained until the 18th century. In the late 16th century, English merchants attempted to begin a trade with the Gambia, reporting that it was "a river of secret trade and riches concealed by the Portuguese."
In the early 17th century, the French attempted to settle the Gambia but failed. Further English expeditions from 1618 to 1621, including under Richard Jobson, were attempted but resulted in huge losses. Merchants of the Commonwealth of England sent expeditions to the Gambia in 1651, but their ships were captured by Prince Rupert the following year. In 1651, the Couronian colonization of the Gambia had also begun, with forts and outposts being erected on several islands. The Courlanders remained dominant until 1659 when their possessions were handed over to the Dutch West India Company. In 1660, the Courlanders resumed possession, but the next year were expelled by the newly formed Royal Adventurers in Africa Company.
In 1667, the rights of the Royal Adventurers to the Gambia were sublet to the Gambia Adventurers but later reverted to the new Royal African Company. 1677 saw the beginning of a century-and-a-half-long struggle between the English and French for supremacy over the Gambia and Senegal. The English possessions were captured several times by the French, but in the Treaty of Utrecht in 1713, the British rights to the region were recognized by the French. In the mid-18th century, the Royal African Company began having serious financial problems and in 1750, Parliament divested the company of its rights in the region. In 1766, the Crown gained possession of the territory, and it formed part of the Senegambia colony. In 1783, Senegambia ceased existing as a British colony.
Following the cessation of Senegambia, the colony was in effect abandoned. The only Europeans were traders who lived in a few settlements on the river banks, such as Pisania. Following the end of the Napoleonic Wars, Alexander Grant was sent to re-establish a presence in the Gambia. He established Bathurst, and the British possessions continued to grow in size through a series of treaties. It was administered from Sierra Leone until 1843 when it was given its own Governor, but in 1866 merged again with Sierra Leone. The cession of the Gambia to France was proposed in the late 19th century but was met with considerable protest both in the Gambia and in England. In 1888, the colony regained its own government structure, and in 1894 the Gambia Colony and Protectorate was properly established along the lines it would continue to hold until independence.
In 1901, legislative and executive councils were established for the Gambia, as well as the Gambia Company of the RWAFF. Gambian soldiers fought in World War I, and in the 1920s Edward Francis Small led the push for emancipation, founding the Bathurst Trade Union and the Rate Payers' Association. During World War II, the Gambia Company was raised to a regiment, and notably fought in the Burma Campaign in the latter years of the war. Franklin D. Roosevelt's visit to the Gambia in 1943 was the first visit by a sitting US President to the African continent. Following the war, the pace of reform increased, with an economic focus on the production of the groundnut (peanut) and a failed programme, the Gambia Poultry Scheme, run by the Colonial Development Corporation. The push towards self-government increased its pace, and the House of Representatives was established in 1960. Pierre Sarr N'Jie served as Chief Minister from 1961 to 1962, though following the 1962 election Dawda Jawara became Prime Minister, beginning the People's Progressive Party's dominance of Gambian politics for the next thirty years. Full internal self-government was achieved in 1963, and following extensive negotiations, the Gambia became independent in 1965.
The Gambia gained independence as a constitutional monarchy that remained part of the Commonwealth, but in 1970 became a presidential republic. Jawara was elected the first President and remained in this position until 1994. A coup, led by Kukoi Sanyang, was attempted in 1981 but failed after Senegalese intervention. From 1981 to 1989, the Gambia entered into the Senegambia Confederation, which collapsed. In 1994, Jawara was overthrown in a coup d'état led by Yahya Jammeh, who ruled as a military dictator for two years through the AFPRC. He was elected President in 1996 and continued in this role until 2017. During this time, Jammeh's party, the APRC, dominated Gambian politics. The Gambia left the Commonwealth of Nations in 2013 and suffered an unsuccessful coup attempt in 2014. In the 2016 election, Adama Barrow was elected President, backed by a coalition of opposition parties. Jammeh's refusal to step down led to a constitutional crisis and the intervention of ECOWAS forces.
- 1 Early history
- 2 15th and 16th centuries
- 3 17th century
- 4 18th century
- 5 19th century
- 6 20th century
- 7 See also
- 8 References
- 9 External links
Mali and Songhai empires
The first verifiable written accounts of the region come from records of Arab traders in the 9th and 10th centuries AD. In medieval times the area was dominated by the trans-Saharan trade. The Mali Empire, most renowned for the Mandinka ruler Mansa Kankan Musa, brought worldwide recognition to the region due to its enormous wealth, scholarship, and civility. From the early 13th century, the Kouroukan Fouga, Mali's constitution, was the law of the land. The North African scholar and traveler Ibn Battuta visited the area in 1352 and said about its inhabitants:
The negroes possess some admirable qualities. They are seldom unjust and have a greater abhorrence of injustice than any other people. There is complete security in their country. Neither traveler nor inhabitant in it has anything to fear from robbers or men of violence.
15th and 16th centuries
The European discovery of the Gambia began in the 15th century, with the push toward exploration by the Portuguese Prince Henry the Navigator. In 1446, Portuguese captain Nuno Tristao made contact with the inhabitants of Cape Vert, and made a treaty of commerce and friendship with them. Every year following, ships were sent from Portugal to trade with them. From them, information reached Henry the Navigator regarding the Gambia, and according to their reports, the banks of the river yielded large quantities of gold. In 1455, Henry induced a Venetian called Luiz de Cadamosto to take a single ship on an expedition in search of the river. Later in the same year, he sent a Genoese trader called Antoniotto Usodimare with two ships on the same quest. The two joined forces near Cape Verde and, by keeping close to the coast, easily found the mouth of the Gambia River.
They arrived at the River Gambia in 1455 and proceeded a short way upstream. They repeated the voyage the next year, proceeding further upstream and making contact with some of the native chiefs. When they were near the river's mouth, they cast anchor at an island where one of their sailors, who had previously died of a fever, was buried. As his name was Andrew, they named the island St Andrew's Island.
This expedition was followed by Portuguese attempts to establish a settlement on the river banks. No settlement ever reached a significant size, and many of the settlers intermarried with the natives while maintaining Portuguese dress and customs and professing to be Christians. Communities of Portuguese descent continued to exist in the Gambia until the 18th century, with Portuguese churches existing at San Domingo, Geregia and Tankular in 1730. The furthest Portuguese settlement up the river was at Setuku near Fattatenda. By the end of the 16th century, the Songhai Empire, under constant assault by Morocco, collapsed. The name Gambia comes from the Portuguese word for trade, cambio.
After the Portuguese throne was seized by Philip II in 1580, a number of Portuguese sought refuge in England. One of these refugees, Francisco Ferreira, piloted two English ships to the Gambia in 1587 and returned with a profitable cargo of hides and ivory. In 1588, António, Prior of Crato, who had a claim to the throne of Portugal, sold to London and Devon merchants the exclusive right to trade between the Rivers Senegal and Gambia. This grant was confirmed to the grantees for a period of ten years by letters patent of Queen Elizabeth I. The merchants sent several ships to the coast, but, owing to Portuguese hostility, did not venture further south than Joal: 30 miles to the north of the river mouth. They reported that the Gambia was "a river of secret trade and riches concealed by the Portuguese."
In 1612, an attempt by the French to settle in the Gambia ended disastrously due to sickness spreading among the settlers. Letters patent conferring the right of exclusive trade with the River Gambia were subsequently granted again in 1598, 1618 and 1632 to other English adventurers, but no attempts were made by the English to explore until 1618. An expedition that year was commanded by George Thompson and its objective was to open up trade with Timbuktu. Leaving his ships at Gassan, Thompson proceeded with a small party in boats as far as the River Neriko. During his absence, the crew of his ship were massacred by the Portuguese. However, some of his party managed on their return to make their way overland to Cape Verde and then to England. Thompson remained in the Gambia with seven companions but was killed by one of them in a sudden dispute.
In the meantime, a relief expedition had departed from England under the command of Richard Jobson, who seized some Portuguese shipping as a reprisal for the massacre at Gassan. Jobson also made his way up to Neriko and subsequently gave a very positive account of the commercial opportunities of the River Gambia. During his expedition, Jobson refused slaves offered by an African merchant, Buckor Sano. He said that "we were a people who did not deal in such commodities, neither did we buy or sell one another, or any that had our own shapes." His protests were noted as "exceptional" by Hugh Thomas. However, both his and Thompson's expeditions had resulted in significant losses, and a subsequent voyage that he made in 1624 proved a complete failure. After a loss of £5,000, the patentees made no further attempts to exploit the resources of the Gambia but confined their attention to the Gold Coast.
In 1651, the Commonwealth of England granted a patent to certain London merchants who in that and the following year sent two expeditions to the River Gambia and established a trading post at Bintang. Members of the expedition proceeded as far as the Barakunda Falls in search of gold, but the climate took its toll. In 1652, Prince Rupert of the Rhine entered the Gambia with three Royalist ships and captured the patentees' vessels. After this heavy loss, they abandoned any further enterprise in the Gambia.
Courlander Gambia and English reclamation
During this period, Jacob Kettler, the Duke of Courland, had in 1651 obtained from several native chiefs the cession of St Andrew's Island and land at Banyon Point (also known as Half-Die), Juffure and Gassan. Settlers, merchants and missionaries were sent out from Courland and forts were erected on St Andrew's Island and at Banyon Point. This was part of a period in Courlander history known as Couronian colonization, which also saw them colonise Tobago. The Courlanders believed that the possession of these territories would give them control over the river and enable them to levy tolls on all those who used the waterway. They erected a fort built out of local sandstone, appointed a Lutheran pastor, and positioned the cannons on the island so as to command both of the channels to the north and the south. The plan was to sell slaves to the colony in Tobago, but this did not prosper. In 1658, Kettler was made a prisoner by the Swedes during a war between Sweden and Poland. As a consequence, funds were no longer available to maintain the garrisons and settlements in the Gambia and in 1659, the Duke of Courland's agent at Amsterdam entered into an agreement with the Dutch West India Company whereby the Duke's possessions in the Gambia were handed over to the company.
In 1660, the fort on St Andrew's Island was captured and plundered by a French privateer in Swedish service. The Dutch thereafter abandoned the fort and the Courlanders resumed possession. After the Restoration of the English monarchy in 1660, English interest in the Gambia was revived due to the reported existence of a gold mine in the upper reaches of the river. A new patent was granted to a number of people who were styled as the Royal Adventurers in Africa Company. The most prominent among them were James, Duke of York, and Prince Rupert. At the end of the year, the Adventurers dispatched an expedition to the Gambia under the command of Robert Holmes, who had been with Prince Rupert in the Gambia in 1652.
Holmes arrived at the river mouth at the beginning of 1661. He proceeded to occupy Dog Island, which he renamed Charles Island, and to establish a temporary fort there. On 18 March 1661, he sailed up to St Andrew's Island and called on the Courlander officer-in-charge to surrender, threatening to bombard the fort if his request was ignored. There were only seven Europeans in the garrison, and the Courlanders had no alternative but to submit. On the following day, Holmes took possession of the fort, which was renamed James Fort after the Duke of York. An attempt was made in 1662 by the Dutch West India Company to gain possession of the fort. Firstly, they attempted to incite the natives of Barra against the English, secondly they offered bribes to certain English officers, and, lastly, they attempted to bombard the fort. None of these efforts were successful and the English remained in control.
Meanwhile, the Duke of Courland had lodged a protest against the seizure of his possessions in a time of peace. On 17 November 1664, after negotiations over the future of the territories, he relinquished in favour of Charles II all claims to his African possessions and in return was granted the island of Tobago and the right for himself to personally trade in the River Gambia. In 1667, the Royal Adventurers sublet their rights between Capes Blanco and Palmas to another body of adventurers, who became known as the Gambia Adventurers. They were to exploit the Rivers Gambia, Sierra Leone, and Sherbro. This group of adventurers enjoyed these rights for only a year, when, on the expiration of their lease, the rights reverted to the Royal African Company, which had purchased the rights and property of the Royal Adventurers six years earlier.
In 1677, the French wrested the island of Gorée from the Dutch. This began a century-and-a-half-long period of struggle between England and France for political and commercial supremacy in the regions of Senegal and the Gambia. By 1681, the French had acquired a small enclave at Albreda opposite James Island. Except for short periods, during which trouble with the natives of Barra or hostilities with England compelled them to temporarily abandon the place, they retained a foothold there until 1857.
African Company turmoil and prosperity
In the wars with France following the Glorious Revolution, James Fort was captured on four occasions by the French, in 1695, 1702, 1704 and 1708. However, no attempt was made by France to occupy the fort permanently. At the Treaty of Utrecht in 1713, the French recognised the right of the English to James Island and their settlements on the River Gambia. One of the results of these wars was an outbreak of piracy along the West African coast. The English trade in the Gambia suffered heavily from the efforts of the pirates. In 1719, one pirate, Howel Davis, captured James Fort. In 1721, part of the garrison of the fort mutinied under the leadership of Captain John Massey, seizing one of the company's ships and turning pirate. Finally, in 1725, James Fort was extensively damaged by an accidental explosion of gunpowder.
Following these incidents, the Royal African Company enjoyed 20 years of comparative prosperity. Factories were established as far up the river as Fattatenda and at other places and a fairly considerable trade was carried out with the interior of Africa. Nevertheless, despite an annual subsidy from the British government for the maintenance of their forts, the Royal African Company became involved in serious financial difficulties. In 1749, James Island was found to be "in a most miserable condition". In the following year, it was reported that the garrison at James Fort had been reduced through sickness from around 30 men to between five and eight, and that, with all the officers being dead, a common soldier had succeeded to the command.
By 1750, the position had become critical and an Act of Parliament was passed divesting the Royal African Company of its charter and vesting its forts and settlements in a new company, controlled by a committee of merchants. The Act prohibited the new company from trading in its corporate capacity but allowed it an annual subsidy for the upkeep of the forts. It was hoped that this would prevent the monopolistic tendencies of rule by a joint stock company and at the same time save the government the expense entailed by the creation of a colonial civil service.
In 1766, the fort and settlements were taken from this new company by another Act of Parliament and given to the Crown. For the next 18 years, the Gambia formed part of the Senegambia colony. The government headquarters were at St Louis at the mouth of the Senegal River and a Lieutenant Governor was appointed to take charge of James Fort and the settlements in the Gambia. In 1779, the French captured James Fort for the fifth and final time. On this occasion, they so successfully demolished the fortifications that at the close of the war it was found impossible to rebuild them. Apart from a brief period following the Napoleonic Wars, when the island was temporarily occupied by a handful of soldiers as an outpost, James Island ceased to play any part in the history of the Gambia.
In 1780, the French privateer Senegal captured four vessels which had been part of the British garrison at Goree sent to the Bintang Creek under the command of Major Houghton to obtain building material. The Senegal, in turn, was captured by HMS Zephyr after an engagement off Barra Point. In 1783, St Louis and Goree were handed back to France and Senegambia ceased existing as a British colony.
It was once again handed back to the Royal African Company. However, the company made no attempt to administer the Gambia. In 1785, Lemain Island was acquired by the British government with a view to establishing a convict settlement, but nothing came of the plan. For the next thirty years, British influence in the Gambia was confined to the operations of a small number of traders. Settlements were established by these traders along the river banks. Among these settlements, the most important was probably Pisania. This settlement, which was already in operation by 1779, was occupied by Dr John Laidley and a family by the name of Aynsley. Subsequently, Laidley and the Aynsleys rendered invaluable assistance to Major Daniel Houghton in 1790, Mungo Park in 1795 and 1805, and Major William Grey in 1818, in the course of their journeys into the interior of Africa.
Early 19th century
At the beginning of the 19th century, Montgomery identified that most settlements on the Gambia River were British. However, to the north, there were several native kingdoms, including Barra, Boor Salum, Yani and Woolli. At the time, Barra had a population of 200,000 and its capital was Barra Inding, although the main trading place was Jillifrey. Boor Salum had a population of 300,000 and the smaller kingdoms of Yani and Woolli were to the north of it. The Mandinka people were the inhabitants of all four kingdoms, which all conducted a considerable trade with the interior of Africa. Montgomery said that no considerable kingdom existed south of the Gambia.
In 1807, the African slave trade was abolished by an Act of Parliament. At that time, the British were in control of Goree. With the help of the Royal Navy, the Goree garrison made efforts to suppress the slave traders operating in the River Gambia, who were primarily Spanish and American. On more than one occasion, the slavers offered a stubborn resistance and the Royal African Corps suffered several casualties.
Following the Treaty of Paris in 1814, which ended the war with the French, the British forces and officials evacuated the island of Gorée. Captain Alexander Grant was sent with a detachment of Royal African Corps soldiers to explore the possibility of rebuilding Fort James on James Island but decided that more space would be provided by St Mary's Island. Grant made a treaty with the King of Kombo on 23 April 1816 that ceded the island to the UK. He also founded the town of Bathurst on St Mary's Island. In 1821, the Royal African Company was dissolved by Act of Parliament and the Gambia was placed under the jurisdiction of the Governor of Sierra Leone. It continued to be administered from Sierra Leone until 1843 when it became a separate colony. In 1866, however, the Gambia and Sierra Leone were once again united under the same administration.
The British government continued to extend its territorial acquisitions beyond St Mary's Island by concluding treaties with a number of native chiefs. Lemain Island, 160 miles up the river, was ceded to the United Kingdom in 1823 by King Collie and renamed as MacCarthy Island. Georgetown was established on the island as a military barracks and settlement for liberated slaves. In 1826, the Ceded Mile, a one-mile strip on the north bank of the River Gambia, was ceded by the King of Barra. Fattatenda and the surrounding district were ceded in 1829. In 1840 and 1853, considerable areas of the mainland adjoining St Mary's Island were obtained from the King of Kombo for the settlement of discharged soldiers of the West India Regiments and liberated Africans. Cessions of other lands further upstream were obtained at various dates, including Albreda, the French enclave, which was obtained in 1857.
Consolidation as a colony
In the 1850s, under the leadership of Louis Faidherbe, the French colony of Senegal began a vigorous expansion until it virtually engulfed the Gambia. The colony assumed an importance to the French as a possible trade route, and the proposed cession of the Gambia in exchange for some other part of West Africa was first mooted in 1861. It was seriously discussed again during 1865 and 1866. In 1870 and 1876, negotiations were entered into between the French and British governments over the proposed cession of the Gambia in exchange for other territories in West Africa.
However, the proposal aroused such opposition in Parliament and among various mercantile bodies in England, as well as among the native inhabitants of the Gambia, that the British government was unable to press ahead with the scheme. This "remarkably powerful" Gambia lobby was revived whenever the topic of the proposed cession was brought up, and successfully ensured that the British did not cede any land. In 1888, the Gambia was once again separated from Sierra Leone and from that date until its independence operated as a separate colony. In 1889, an agreement was reached between the French and British governments for the delimitation of the borders of the Gambia, Senegal, and Casamance.
During this time, despite a number of small wars with the natives, the Gambian government was able to conclude a series of treaties with the chiefs living along the banks of the river. Some of these included the cession of small tracts of territory, but most conferred British protection. The last and most important of these was concluded in 1901 with Musa Molloh, the paramount chief of Fuladu. In 1894, an Ordinance was passed for the better administration of these districts which had not been ceded but merely placed under the protection of the British government. It was decided that it was not feasible to administer these places from the seat of government in Bathurst, so in 1895 and the following years, ordinances were passed to bring them under the control of the Protectorate. Finally, a Protectorate Ordinance passed in 1902 brought the whole of the Gambia apart from St Mary's Island under the Protectorate system.
The Gambia received its own executive and legislative councils in 1901 and gradually progressed toward self-government. Also in 1901, the Gambia Company, the first colonial military unit of the Gambia, was founded. It was formed as part of the Sierra Leone Battalion of the new West African Frontier Force (later the Royal West African Frontier Force). A 1906 ordinance abolished slavery.
World War I and interwar years
During World War I, the Gambia Company served alongside other British troops in the Kamerun campaign, under the command of Captain V. B. Thurston of the Dorsetshire Regiment, and a number of its soldiers received gallantry medals for their conduct.
In 1920, the National Congress of British West Africa was formed, an organisation working towards African emancipation, with Edward Francis Small as the sole delegate. He returned and founded the Gambia Section of the Congress, the principal aim of which was to achieve elected representation in the government of the Gambia. It also frequently petitioned against unpopular government policies. It had some success, with Small founding the first Gambian trade union, the Bathurst Trade Union, in 1929. However, it failed to prevent its opponent, Ousman Jeng, from being appointed to the Legislative Council in 1922 and again in 1927.
In 1932, Small founded the Rate Payers' Association (RPA) to oppose the unpopular policies of Richmond Palmer, the Governor, and of the conservative elements of Gambian politics, led by Forster and his nephew W. D. Carrol. By the end of 1934, the RPA was winning all the seats on the Bathurst Urban District Council and its successor, the Bathurst Advisory Town Council; however, it had no representation in the Legislative Council.
World War II
During World War II, the Gambia Company became the Gambia Regiment, with a strength of two battalions from 1941. It fought in the Burma campaign and served for some time under the command of Antony Read, later the Quartermaster-General to the Forces. The Gambia itself was also important to the war effort. It was home to RAF Bathurst, a flying boat base, and RAF Yundum, an RAF station. HMS Melampus, a shore base, was also based at Bathurst for some of the war, and in 1942, a light cruiser named HMS Gambia was launched, which maintained ties to the colony until it was decommissioned in 1960. Bathurst was also the nearest English-speaking port to Dakar, where, before the Battle of Dakar, the Vichy French battleship Richelieu had been ordered to sail.
The Gambia was also home to 55 British General Hospital from 1941 to 1942, 40 British General Hospital from 1942 to 1943, and 55 British General Hospital again from 1945 to 1946. During World War II, the Gambia also formed an Auxiliary Police, who, among other things, helped to enforce the blackout in Bathurst. Many air raid shelters were built across the Gambia too. In 1943, Franklin D. Roosevelt, the President of the United States, stopped overnight in Bathurst en route to and from the Casablanca Conference. This marked the first visit to the African continent by a sitting US President. The visit hardened his views against British colonial rule. Appalled as he was by the poverty and disease present there, he wrote to Churchill describing the territory as a "hell-hole".
After the Second World War, the pace of reform increased. The economy of the Gambia, like that of other African countries at the time, was very heavily orientated towards agriculture. Reliance on the groundnut became so strong that it made up almost the entirety of exports, making the economy vulnerable. Groundnuts were the only commodity subject to export duties; these export duties resulted in the illegal smuggling of the product to French Senegal. Attempts were made to increase production of other goods for export: the Gambian Poultry Scheme pioneered by the Colonial Development Corporation aimed to produce twenty million eggs and one million lb of dressed poultry a year. The conditions in the Gambia proved unfavourable and fowl typhoid killed much of the chicken stock, drawing criticism to the Corporation.
The River Gambia was the principal route of navigation and transport inland, with a port at Bathurst. The road network was mainly concentrated around Bathurst, with the remaining areas largely connected by dirt roads. The only airport was at Yundum, built in World War II. Post-war, it was used for passenger flights. Both British South American Airways and the British Overseas Airways Corporation had services, the former moving its service to Dakar, which had a concrete runway (as opposed to pierced steel planking). The airport was rebuilt in 1963 and the building is still in use today.
In anticipation of independence, efforts were made to create internal self-government. The 1960 Constitution created a partly elected House of Representatives, with 19 elected members and 8 chosen by the chiefs. This constitution proved flawed in the 1960 elections when the two major parties tied with 8 seats each. With the support of the unelected chiefs, Pierre Sarr N'Jie of the United Party was appointed Chief Minister. Dawda Jawara of the People's Progressive Party resigned as Minister of Education, triggering a Constitutional Conference arranged by the Secretary of State for the Colonies.
The Constitutional Conference paved the way for a new constitution that granted a greater degree of self-government and a House of Representatives with more elected members. Elections were held in 1962, with Jawara's People's Progressive Party securing a majority of the elected seats. Under the new constitutional arrangements, Jawara was appointed Prime Minister: a position he held until it was abolished in 1970. Full internal self-government was granted the following year.
At the Marlborough House constitutional conference in June 1964, it was agreed between the British and Gambian delegations that the Gambia would become an independent country on 18 February 1965. It was agreed that Elizabeth II would remain head of state, and that a Governor-General would exercise executive powers on her behalf. On 18 February, Prince Edward, Duke of Kent, on behalf of the Queen, formally granted the country independence with Prime Minister Jawara representing the Gambia. It became the 21st independent member of the Commonwealth, with a constitution described as a "sophisticated version of the Westminster export models."
Shortly thereafter, the government held a referendum proposing that an elected president replace the Queen of the Gambia as head of state. The referendum failed to obtain the two-thirds majority required to amend the constitution, but the results received widespread attention abroad as testimony to the Gambia's observance of secret balloting, honest elections, and civil rights and liberties.
On 24 April 1970, the Gambia became a republic within the Commonwealth, following a second referendum, with Prime Minister Sir Dawda Kairaba Jawara as head of state.
The relative stability of the Jawara era was first shattered by a coup attempt in 1981. The coup was led by Kukoi Samba Sanyang, who, on two occasions, had unsuccessfully sought election to Parliament. After a week of violence which left several hundred people dead, Jawara, in London when the attack began, appealed to Senegal for help. Senegalese troops defeated the rebel force.
In the aftermath of the attempted coup, Senegal and the Gambia signed the 1982 Treaty of Confederation. The Senegambia Confederation came into existence; it aimed eventually to combine the armed forces of the two states and to unify their economies and currencies. The Gambia withdrew from the confederation in 1989.
Until a military coup in July 1994, the Gambia was led by President Jawara, who was re-elected five times.
In July 1994, Yahya Jammeh led a coup d'état that deposed the Jawara government. Between 1994 and 1996, Jammeh ruled as head of the Armed Forces Provisional Ruling Council (AFPRC) and banned opposition political activity. The AFPRC announced a transition plan for a return to democratic civilian rule, establishing the Provisional Independent Electoral Commission (PIEC) in 1996 to conduct national elections. After a constitutional referendum in August, presidential and parliamentary elections were held. Jammeh was sworn into office as president on 6 November 1996. On 17 April 1997 the PIEC transformed into the Independent Electoral Commission (IEC).
Jammeh won both the 2001 and 2006 elections. He was re-elected as president in 2011. The People's Republic of China cut ties with the Gambia in 1995 after the latter established diplomatic links with the Republic of China (Taiwan). The Gambia held a non-permanent seat on the United Nations Security Council from 1998 to 1999.
On 2 October 2013, the Gambian interior minister announced that the Gambia would leave the Commonwealth of Nations with immediate effect, stating that it would "never again be part of a neo-colonial organization".
Fall of Jammeh and return to democracy
The presidential election of 2016 saw the surprise victory of the opposition candidate Adama Barrow, who defeated Jammeh with 43.3% of the votes. However, Jammeh refused to recognise the result of the election and refused to leave office, instead proclaiming a state of emergency. Barrow fled to Senegal, where he was sworn in as the new president at the Gambian embassy in Dakar on 19 January 2017.
On the same day, ECOWAS launched a military intervention in the Gambia to remove Jammeh from power by force (Operation Restore Democracy); the move was authorized by the United Nations Security Council in Resolution 2337. On 21 January 2017, Jammeh announced that he was stepping down as president and left the country for exile in Equatorial Guinea. On 27 January 2017, Barrow returned to the Gambia and officially took office.
The Gambia officially rejoined the Commonwealth on 8 February 2018.
- List of heads of government of the Gambia
- List of heads of state of the Gambia
- Politics of the Gambia
- Military history of the Gambia
- Harden, Donald (1971) [First published 1962]. The Phoenicians. Harmondsworth: Penguin Books.
- Ibn Battuta: Travels in Asia and Africa 1325–1354, pp. 323–335.
- Gray, p. 5
- Annual Report on the Social and Economic Progress of the People of The Gambia (PDF). Bathurst: HM Stationery Office. 1938. pp. 1–10.
- Atlas Obscura
- Thomas, p. 175
- Thomas, p. 337
- Montgomery, p. 222
- Reeve, pp. 91-94
- Roberts-Wray, Kenneth (1966). Commonwealth and Colonial Law. London: Stevens. pp. 782–785.
- Chamberlain, M.E. (1974). The Scramble for Africa. London: Longman. p. 49.
- "In the run-up to the duration of World War II, The Gambia direct". The Point. 5 January 2010. Retrieved 1 April 2017.
- "Locations of British General Hospitals during WW2". Scarlet Finders. Retrieved 1 April 2017.
- Meredith, Martin (2005). The State of Africa: A History of Fifty Years of Independence. London: Free Press. p. 9.
- "Hansard HC Deb 25 March 1959, vol 602, cols 1405–1458".
- "Hansard HC Deb 13 March 1951, vol 485, cols 1317–1375".
- "Yundum". Britannica Online encyclopedia. Retrieved 10 August 2012.
- "Hansard HC Deb 29 January 1947, vol 432, cols 202".
- "History of the Independence Movement". Gambia Information Site. 10 August 2012.
- Darboe, p. 132
- "UK regrets The Gambia's withdrawal from Commonwealth". BBC News. 3 October 2013. Retrieved 4 October 2013.
- Rice, Andrew (21 July 2015). "The reckless plot to overthrow Africa's most absurd dictator". The Guardian. Retrieved 21 July 2015.
- "The Gambia rejoins the Commonwealth". thecommonwealth.org. Retrieved 5 August 2018.
- Burton, Richard Francis (1863). Wanderings in West Africa. London: Tinsley Brothers.
- Darboe, Ousainou (1979). Gambia's Long Journey to Republicanism: A Study in the Development of the Constitution and Government of the Gambia. University of Ottawa.
- Gray, J. M. (1940). A History of the Gambia. Cambridge: Cambridge University Press, Reprint 2015.
- Herbertson, A. J. and Howarth, O. J. R. (1914). The Oxford Survey of the British Empire. Oxford: Clarendon Press.
- Hertslet, Edward (1894). The Map of Africa by Treaty, Vol. 1. London: Her Majesty's Stationery Office.
- Hughes, A. and Perfect, D. (2008). Historical Dictionary of the Gambia. Lanham, Maryland: Scarecrow Press.
- Montgomery, R. Martin (1837). History of the British Possessions in the Indian and Atlantic Oceans. London: Whittaker and Co.
- Morel, Edmund D. (1902). Affairs of West Africa. London: William Heinemann.
- Reclus, Élisée (1893). The Earth And Its Inhabitants: Africa, Vol. 3. New York: D. Appleton and Co.
- Reeve, Henry Fenwick (1912). The Gambia: Its History, Ancient, Medieval and Modern, Together With Its Geographical, Geological, and Ethnographical Conditions, and a Description of the Birds, Beasts, and Fishes Found Therein. London: John Murray.
- Thomas, Hugh (1997). The Slave Trade: The History of the Atlantic Slave Trade, 1440-1870. London: Picador.
- British Africa. (1899). London: Kegan Paul, Trench, Trubner, and Co.
- A History of Africa: 1918–1967. (1968). Moscow: USSR Academy of Sciences.
Essentially, chemical reactions fall into five basic types, and two methods are commonly employed for balancing their equations: balancing by inspection and balancing algebraically. The algebraic method is used when the chemical equation is hard to balance by inspection. Balancing matters in practice because chemicals react in fixed mole ratios, and according to the Law of Conservation of Matter an equation must always be balanced; even so, many students find balancing difficult.
If an equation resists inspection after a few minutes, switch to the proportion (algebraic) technique. Unbalanced equations cannot be used in calculations about chemical reactions: the balanced equation is what determines how much reactant you would need to make a particular amount of product, and unbalanced equations are simply not correct equations. Because equations can be used to describe many important natural phenomena, being able to manipulate them gives you a powerful tool for understanding the world around you. In building equations there is quite a lot you can work out as you go along, but you must have somewhere to start from. Balancing equations is the subject of the next chapter.
During setup, put a checkmark next to the chemistry formatter and you are done; finding the formatters used to format text is the easy part. The Math add-in generates attractive 3D graphs powered by DirectX, so you may be prompted to install the latest version of DirectX at the end of the installation. OneNote includes similar tools, though they are slightly less full-featured; OneNote works especially well for mathematics because it uses a more free-form style of editing. In the notation, square brackets may be present. For redox reactions, you start by writing down what you know for each of the half-reactions.
Now all you have to do is balance the charges; it is essential that the Law of Conservation of Mass is not violated. To compose linear equations from word problems, the student must first decide which quantities to assign variables to, and then decide what operations must be performed to solve the problem, watching for words like "less than" and "times" that signal which operation is required. Students will also identify the reactants, products, subscripts, and coefficients.
Know what is being asked, what is given in the problem, and what you need to find in order to answer the question. Word problems involving quadratics are those in which the greatest exponent on x is 2; they frequently describe fireworks, a diver, or something being shot from a cannon. In multi-step word problems, one or more intermediate problems must be solved in order to obtain the information required to answer the question being asked.
Such problems contain an x-squared term. The vocabulary words are scattered throughout the instructional worksheets in this unit. If you want to make Word more useful for educational and research work, take a look at the Chemistry Add-in for Word as well; make sure you have exited Word and OneNote before you start the setup.
The number of atoms of each element in the reactants has to equal the number in the products. (Note that the electrons are not included in the added-up version.) Once all of the atoms are balanced, all you have to do is balance the charges. Start by counting how many atoms of each type appear on either side of the equation, and repeat until only the hydrogens and oxygens remain to be balanced. In acid-base work, if you have only acid you must do a pure Ka problem, and if you have only base (such as when a titration is complete) you must do a Kb problem. The same approach can be used for gaseous substances.
There are a few further points to consider. You probably learned several rules for manipulating equations in a previous algebra course; the number written in front of a formula is known as the coefficient. Worksheet generators let you set the number of problems and the complexity of the chemical equations. Species that appear in the same number on every side of the final equation cancel out. Once the quantities of N and O atoms on each side of the equation are equal, the equation is balanced; balancing is essential because there must be an equal number of atoms on both sides of the equation.
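To make the algebraic method concrete, here is a minimal sketch in Python using the sympy library (the reaction CH4 + O2 → CO2 + H2O and all variable names are illustrative assumptions, not taken from the worksheets): each element contributes one row of a matrix, and an integer basis for the null space of that matrix gives the balanced coefficients.

```python
from sympy import Matrix, lcm

# Balance CH4 + O2 -> CO2 + H2O by solving A x = 0 for integer x.
# Rows are elements (C, H, O); columns are species, with products negated.
A = Matrix([
    [1, 0, -1,  0],   # carbon:   1 in CH4, 1 in CO2
    [4, 0,  0, -2],   # hydrogen: 4 in CH4, 2 in H2O
    [0, 2, -2, -1],   # oxygen:   2 in O2, 2 in CO2, 1 in H2O
])

null = A.nullspace()[0]                 # one-dimensional null space
scale = lcm([term.q for term in null])  # clear denominators to get integers
coeffs = [term * scale for term in null]
print(coeffs)  # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O
```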
Gamma-ray bursts (GRBs), which are bright flashes of the most energetic gamma-ray radiation lasting a few milliseconds to several seconds, have been discovered by satellites orbiting the Earth. These devastating blasts take place in galaxies billions of light years away from Earth.
When two neutron stars collide, a short-duration GRB, a subtype of GRB, is born. These incredibly dense stars compress the mass of the Sun into a sphere roughly half the size of London, and they produce gravitational waves in the final moments of their lives, just before triggering a GRB.
Up until now, the majority of space scientists have concurred that the "engine" driving such powerful but brief bursts must always originate from a newly formed black hole, a region of space-time where gravity is so intense that even light cannot escape from it.
This scientific consensus is being questioned by new research conducted by an international team of astrophysicists under the direction of Dr. Nuria Jordana-Mitjans at the University of Bath.
The results of the study suggest that some short-duration GRBs are not caused by black holes but rather by the birth of supramassive neutron stars, also known as neutron star remnants.
Such findings are important as they confirm that newborn neutron stars can power some short-duration GRBs and the bright emissions across the electromagnetic spectrum that have been detected accompanying them. This discovery may offer a new way to locate neutron star mergers, and thus gravitational-wave emitters, when we're searching the skies for signals.
Dr Nuria Jordana-Mitjans, Research Associate, Department of Physics, University of Bath
The basic picture of a short-duration GRB is well understood: it begins when two neutron stars, which have been spiralling ever closer together and accelerating, finally collide. A jetted explosion at the crash site emits the gamma-ray radiation that creates the GRB, which is then followed by a more persistent afterglow.
A day later, the radioactive material ejected in all directions during the explosion produces what is known as a kilonova.
However, there has long been controversy over exactly what is left over when two neutron stars collide. This is known as the “product” of the crash, and it is this product that provides a GRB with extraordinary energy. The results of the Bath-led study could well have brought this debate closer to an end for scientists.
Two theories are being debated by space scientists. According to the first, the neutron stars briefly merge to form an extremely massive neutron star that almost instantly collapses into a black hole. The second holds that the merger of the two neutron stars would produce a less massive neutron star with a longer lifespan.
The age-old conundrum that has plagued astrophysics for decades is whether the origin of short-duration GRBs lies in the birth of a long-lived neutron star or a black hole.
Most astrophysicists up to this point have favored the black hole theory, concurring that a GRB can only be created if the massive neutron star collapses almost instantly.
Astrophysicists study the electromagnetic signals of the resulting GRBs to gain knowledge of neutron star collisions. One would anticipate that the signal coming from a black hole would be different from the signal from a neutron star remnant.
Dr. Jordana-Mitjans and her colleagues concluded that a neutron star remnant, rather than a black hole, must have generated the GRB 180618A based on the electromagnetic signal from the burst.
“For the first time, our observations highlight multiple signals from a surviving neutron star that lived for at least one day after the death of the original neutron star binary,” stated Dr. Jordana-Mitjans.
We were excited to catch the very early optical light from this short gamma-ray burst – something that is still largely impossible to do without using a robotic telescope. But when we analyzed our exquisite data, we were surprised to find we couldn’t explain it with the standard fast-collapse black hole model of GRBs.
Carole Mundell, Study Co-Author and Professor, Extragalactic Astronomy, University of Bath
Mundell added, “Our discovery opens new hope for upcoming sky surveys with telescopes such as the Rubin Observatory LSST with which we may find signals from hundreds of thousands of such long-lived neutron stars, before they collapse to become black holes.”
The optical light from the afterglow that followed GRB 180618A vanished after only 35 minutes, which initially baffled the researchers. Further investigation revealed that some continuous energy source was pushing the material responsible for such a brief emission, causing it to expand at close to the speed of light.
More surprisingly, this emission bore the signature of a millisecond magnetar, a young, rapidly spinning neutron star that is highly magnetized. The team discovered that the magnetar that followed GRB 180618A was reheating the crash debris as it slowed down.
The optical emission from GRB 180618A, powered by a magnetar, was 1,000 times brighter than predicted by a conventional kilonova.
A Hiroko and Jim Sherwin Postgraduate Studentship is providing funding for Nuria Jordana-Mitjans.
Jordana-Mitjans, N., et al. (2022) A Short Gamma-Ray Burst from a Protomagnetar Remnant. The Astrophysical Journal. doi:10.3847/1538-4357/ac972b
Democracy is a system of government where the citizens exercise power by voting. In a direct democracy, the citizens as a whole form a governing body and vote directly on each issue. In a representative democracy the citizens elect representatives from among themselves; these representatives meet to form a governing body, such as a legislature. In a constitutional democracy the powers of the majority are exercised within the framework of a representative democracy, but the constitution limits the majority and protects the minority through the enjoyment by all of certain individual rights, e.g. freedom of speech, or freedom of association. "Rule of the majority" is sometimes referred to as democracy. Democracy can also be seen as a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes; the uncertainty of outcomes is inherent in democracy, which makes all forces struggle repeatedly for the realization of their interests. In this account, democracy is the devolution of power from a group of people to a set of rules.
Western democracy, as distinct from that which existed in pre-modern societies, is considered to have originated in city-states such as Classical Athens and the Roman Republic, where various schemes and degrees of enfranchisement of the free male population were observed before the form disappeared in the West at the beginning of late antiquity. The English word dates back to the 16th century, from the older Middle French and Middle Latin equivalents. According to American political scientist Larry Diamond, democracy consists of four key elements: a political system for choosing and replacing the government through free and fair elections; the active participation of the people, as citizens, in politics and civic life; protection of the human rights of all citizens; and a rule of law in which the laws and procedures apply equally to all citizens. Todd Landman draws our attention to the fact that democracy and human rights are two different concepts and that "there must be greater specificity in the conceptualisation and operationalization of democracy and human rights". The term appeared in the 5th century BC to denote the political systems then existing in Greek city-states, notably Athens, to mean "rule of the people", in contrast to aristocracy, meaning "rule of an elite".
While theoretically these definitions are in opposition, in practice the distinction has been blurred historically. The political system of Classical Athens, for example, granted democratic citizenship to free men and excluded slaves and women from political participation. In all democratic governments throughout ancient and modern history, democratic citizenship consisted of an elite class, until full enfranchisement was won for all adult citizens in most modern democracies through the suffrage movements of the 19th and 20th centuries. Democracy contrasts with forms of government where power is either held by an individual, as in an absolute monarchy, or where power is held by a small number of individuals, as in an oligarchy; these oppositions, inherited from Greek philosophy, are now ambiguous because contemporary governments have mixed democratic and monarchic elements. Karl Popper defined democracy in contrast to dictatorship or tyranny, thus focusing on opportunities for the people to control their leaders and to oust them without the need for a revolution.
No consensus exists on how to define democracy, but legal equality, political freedom and rule of law have been identified as important characteristics. These principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes. For example, in a representative democracy, every vote has equal weight, no unreasonable restrictions can apply to anyone seeking to become a representative, the freedom of its eligible citizens is secured by legitimised rights and liberties which are protected by a constitution. Other uses of "democracy" include that of direct democracy. One theory holds that democracy requires three fundamental principles: upward control, political equality, social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality; the term "democracy" is sometimes used as shorthand for liberal democracy, a variant of representative democracy that may include elements such as political pluralism.
Roger Scruton argues that democracy alone cannot provide personal and political freedom unless the institutions of civil society are present. In some countries, notably in the United Kingdom which originated the Westminster system, the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence. In the United States, separation of powers is cited as a central attribute. In India, parliamentary sovereignty is subject to the Constitution of India which includes judicial review. Though the term "democracy" is used in the context of a political state, the principles are applicable to private organisations. Majority rule is listed as a characteristic of democracy. Hence, democracy allows for political minorities to be oppressed by the "tyranny of the majority" in the absence of legal protections of individual or group rights. An essential part of an "ideal" representative democracy is competitive elections that are substantively and procedurally "fair," i.e. just and equitable.
Nicaragua, officially the Republic of Nicaragua, is the largest country in the Central American isthmus, bordered by Honduras to the northwest, the Caribbean to the east, Costa Rica to the south, and the Pacific Ocean to the southwest. Managua is the country's capital and largest city and is the third-largest city in Central America, behind Tegucigalpa and Guatemala City. The multi-ethnic population of six million includes people of indigenous, European and Asian heritage. The main language is Spanish. Indigenous tribes on the Mosquito Coast speak English. Inhabited by various indigenous cultures since ancient times, the region was conquered by the Spanish Empire in the 16th century. Nicaragua gained independence from Spain in 1821. The Mosquito Coast followed a different historical path, with the English colonizing it in the 17th century and it later coming under British rule, as well as some minor Spanish interludes in the 19th century. It became an autonomous territory of Nicaragua in 1860 and the northernmost part of it was transferred to Honduras in 1960.
Since its independence, Nicaragua has undergone periods of political unrest, dictatorship and fiscal crisis, leading to the Nicaraguan Revolution of the 1960s and 1970s and the Contra War of the 1980s. The mixture of cultural traditions has generated substantial diversity in folklore, cuisine and literature, the latter owing to the literary contributions of Nicaraguan poets and writers such as Rubén Darío. Known as the "land of lakes and volcanoes", Nicaragua is home to the second-largest rainforest of the Americas; the country has set a goal of 90% renewable energy by the year 2020. The biological diversity, warm tropical climate and active volcanoes make Nicaragua a popular tourist destination. There are two prevailing theories on the origin of the country's name. The first is that the name was coined by Spanish colonists based on the name Nicarao, the chieftain or cacique of a powerful indigenous tribe encountered by the Spanish conquistador Gil González Dávila during his entry into southwestern Nicaragua in 1522. This theory holds that the name Nicaragua was formed from Nicarao and agua, to reference the fact that there are two large lakes and several other bodies of water within the country.
However, as of 2002, it was determined that the cacique's real name was Macuilmiquiztli, which meant "Five Deaths" in the Nahuatl language, rather than Nicarao. The second theory is that the country's name comes from any of the following Nahuatl words: nic-anahuac, which meant "Anahuac reached this far", or "the Nahuas came this far", or "those who come from Anahuac came this far". Paleo-Americans first inhabited what is now known as Nicaragua as far back as 12,000 BCE. In pre-Columbian times, Nicaragua's indigenous people were part of the Intermediate Area, between the Mesoamerican and Andean cultural regions, within the influence of the Isthmo-Colombian area. Nicaragua's central region and its Caribbean coast were inhabited by Macro-Chibchan language ethnic groups, they had coalesced in Central America and migrated to present-day northern Colombia and nearby areas. They lived a life based on hunting and gathering, as well as fishing, performing slash-and-burn agriculture. At the end of the 15th century, western Nicaragua was inhabited by several different indigenous peoples related by culture to the Mesoamerican civilizations of the Aztec and Maya, by language to the Mesoamerican Linguistic Area.
The Chorotegas were Mangue language ethnic groups who had arrived in Nicaragua from what is now the Mexican state of Chiapas sometime around 800 CE. The Pipil-Nicarao people were a branch of Nahuas who spoke the Nahuat dialect, and like the Chorotegas, they too had come from Chiapas to Nicaragua, in 1200 CE. Prior to that, the Pipil-Nicaraos had been associated with the Toltec civilization. Both the Chorotegas and the Pipil-Nicaraos were from Mexico's Cholula valley and had migrated southward. Additionally, there were trade-related colonies in Nicaragua, set up by the Aztecs starting in the 14th century. In 1502, on his fourth voyage, Christopher Columbus became the first European known to have reached what is now Nicaragua as he sailed southeast toward the Isthmus of Panama. Columbus explored the Mosquito Coast on the Atlantic side of Nicaragua but did not encounter any indigenous people. Twenty years later the Spaniards returned to Nicaragua, this time to its southwestern part. The first attempt to conquer Nicaragua was by the conquistador Gil González Dávila, who had arrived in Panama in January 1520.
In 1522, González Dávila ventured into the area that became known as the Rivas Department of Nicaragua. It was there that he encountered an indigenous Nahua tribe led by a chieftain named Macuilmiquiztli, whose name has sometimes been erroneously given as "Nicarao" or "Nicaragua". At the time, the tribe's capital city was called Quauhcapolca. González Dávila had brought along two indigenous interpreters who had been taught the Spanish language, and thus he was able to hold a discourse with Macuilmiquiztli. After exploring and gathering gold in the fertile western valleys, González Dávila and his men were attacked and driven off by the Chorotega, led by the chieftain Diriangen. The Spanish attempted to convert the tribes to Christianity. The first Spanish permanent settlements were founded in 1524; that year, the conquistador
José Manuel Zelaya Rosales is a Honduran politician who was President of Honduras from 27 January 2006 until 28 June 2009. He is the eldest son of a wealthy businessman and inherited his father's nickname "Mel". Before entering politics he was involved in his family's timber businesses. Elected as a liberal, Zelaya shifted to the political left during his presidency, forging an alliance with ALBA. On 28 June 2009, during the 2009 Honduran constitutional crisis, he was seized by the military and sent to Costa Rica in a coup d'état. On 21 September 2009 he returned to Honduras clandestinely and resurfaced in the Brazilian embassy in Tegucigalpa. In 2010 he left Honduras for an exile that lasted more than a year; he now represents Honduras as a deputy of the Central American Parliament. Since January 1976 Zelaya has been married to Xiomara Castro de Zelaya, who was a presidential candidate in the 2013 general election but lost to Juan Orlando Hernández. The surname 'Zelaya' is a Basque word meaning 'field'.
Zelaya was born the eldest of four children in Olancho. Two of his brothers are still alive. Zelaya's mother, Ortensia Rosales de Zelaya, has been described as his best campaigner. His family first lived in Copán, then moved east to Catacamas, Olancho. He attended Niño Jesús de Praga y Luis Landa elementary school and the Instituto Salesiano San Miguel. He began his university studies in civil engineering but left in 1976, with 11 courses completed, to work in agriculture and the forestry sector. He was forced to take over the family business by the arrest of his father, José Manuel Zelaya Ordoñez, who was implicated in the murders known as the "Slaughter of the Horcones". These murders involved Mayor José Enrique Chinchilla, Sub-Lieutenant Benjamín Plata, José Manuel Zelaya Ordoñez and Carlos Bhar, who were taken to the Central Prison. Zelaya has engaged in business activities including timber and cattle, handed down to him by his late father. He is now a landowner in Olancho. In 1987, Zelaya became manager of the Honduran Council of Private Enterprise (COHEP), as well as of the National Association of Wood Processing Enterprises.
The COHEP occupies an important role in Honduran politics, as the Constitution provides that the organization elects one of the seven members of the Nominating Board that proposes nominees to the Supreme Court of Honduras. Zelaya's father received a 20-year prison sentence for his role in the 1975 Los Horcones massacre, which took place on the family ranch, Los Horcones; as a result of an amnesty, he served less than two years. Zelaya joined the Liberal Party of Honduras (Partido Liberal de Honduras) in 1970 and became active a decade later. He was a deputy in the National Congress for three consecutive terms between 1985 and 1998. He held many positions within the PLH and was Minister for Investment, in charge of the Honduran Social Investment Fund (FHIS), in a previous PLH government. Under Zelaya's administration the FHIS lost $40 million; Zelaya escaped prosecution. In the 2005 presidential primaries, his faction was called Movimiento Esperanza Liberal. He received 52% of the 289,300 Liberal votes, vs. 17% for Jaime Rosenthal Oliva and 12% for Gabriela Núñez, the candidate of the Nueva Mayoría faction.
During Zelaya's time in office Honduras became a member of ALBA, an international cooperation organization based on the idea of social and economic integration between the countries of Latin America and the Caribbean. This marked his turn to left-of-centre politics, the first such case of a right-to-left policy switch, as he had been elected on a conservative platform. Political opponents, especially business elites, opposed his foreign policy, including his alliance with Hugo Chávez in Venezuela and his friendship with Cuba's Raúl Castro. In spite of a number of economic problems, there were a number of significant achievements under Zelaya's presidency. Under his government, free education for all children was introduced, subsidies to small farmers were provided, bank interest rates were reduced, the minimum wage was increased by 80%, school meals were guaranteed for more than 1.6 million children from poor families, domestic employees were integrated into the social security system, poverty was reduced by 10% during two years of government, and direct state help was provided for 200,000 families in extreme poverty, with free electricity supplied to those Hondurans most in need.
On 22 July 2008, Zelaya sought to formally incorporate Honduras into ALBA. Zelaya said that the main media outlets in Honduras, owned by wealthy conservatives, were biased against him and did not cover what his government was doing: "No one publishes anything about me.... What prevails here is censorship of my government by the mass media." On 24 May 2007, Zelaya ordered ten two-hour cadenas (obligatory simultaneous broadcasts) on all television and radio stations, "to counteract the misinformation of the news media". The move, while legal, was fiercely criticized by the country's main journalists' union, and Zelaya was dubbed "authoritarian" by his opposition. The broadcasts were scaled back to a one-hour program on the government's plans to expand telephone service, a half-hour on new electrical power plants and a half-hour about government revenues. In 2007, an unknown gunman murdered a journalist who had criticized Zelaya.
The Inter-American Press Association and the United Nations criticized threats against journalist
Red is the color at the end of the visible spectrum of light, next to orange and opposite violet. It has a dominant wavelength of 625–740 nanometres. It is a primary color in the RGB color model and the CMYK color model, and is the complementary color of cyan. Reds range from the brilliant yellow-tinged scarlet and vermillion to bluish-red crimson, and vary in shade from the pale red pink to the dark red burgundy. The red sky at sunset results from Rayleigh scattering, while the red color of the Grand Canyon and other geological features is caused by hematite or red ochre, both forms of iron oxide. Iron oxide also gives the red color to the planet Mars. The red color of blood comes from the protein hemoglobin, while ripe strawberries, red apples and reddish autumn leaves are colored by anthocyanins. Red pigment made from ochre was one of the first colors used in prehistoric art. The Ancient Egyptians and Mayans colored their faces red in ceremonies. It was an important color in China, where it was used to colour early pottery and the gates and walls of palaces.
In the Renaissance, the brilliant red costumes for the nobility and wealthy were dyed with kermes and cochineal. The 19th century brought the introduction of the first synthetic red dyes, which replaced the traditional dyes. Red became the color of revolution. Since red is the color of blood, it has been associated with sacrifice and courage. Modern surveys in Europe and the United States show red is the color most associated with heat, passion, anger and joy. In China and many other Asian countries it is the color of symbolizing happiness and good fortune. See below for shades of pink The human eye sees red when it looks at light with a wavelength between 625 and 740 nanometers, it is a primary color in the RGB color model and the light just past this range is called infrared, or below red, cannot be seen by human eyes, although it can be sensed as heat. In the language of optics, red is the color evoked by light that stimulates neither the S or the M cone cells of the retina, combined with a fading stimulation of the L cone cells.
Primates can distinguish the full range of the colors of the spectrum visible to humans, but many kinds of mammals, such as dogs and cattle, have dichromacy, which means they can see blues and yellows, but cannot distinguish red and green. Bulls, for instance, cannot see the red color of the cape of a bullfighter, but they are agitated by its movement. One theory for why primates developed sensitivity to red is that it allowed ripe fruit to be distinguished from unripe fruit and inedible vegetation; this may have driven further adaptations by species taking advantage of this new ability, such as the emergence of red faces. Red light is used to help adapt night vision in low-light or night time, as the rod cells in the human eye are not sensitive to red. Red illumination was used as a safelight while working in a darkroom as it does not expose most photographic paper and some films. Today modern darkrooms use an amber safelight. On the color wheel long used by painters, and in traditional color theory, red is one of the three primary colors, along with blue and yellow.
Painters in the Renaissance mixed red and blue to make violet: Cennino Cennini, in his 15th-century manual on painting, wrote, "If you want to make a lovely violet colour, take fine lac, ultramarine blue with a binder"; he noted that it could also be made by mixing blue indigo and red hematite. In modern color theory, known as the RGB color model, red, green and blue are additive primary colors. Red, green and blue light combined together makes white light, and these three colors, combined in different mixtures, can produce nearly any other color; this is the principle used to make all of the colors on your computer screen and your television. For example, magenta on a computer screen is made by a formula similar to that used by Cennino Cennini in the Renaissance to make violet, but using additive colors and light instead of pigment: it is created by combining red and blue light at equal intensity on a black screen. Violet is made on a computer screen in a similar way, but with a greater amount of blue light and less red light.
So that the maximum number of colors can be reproduced on your computer screen, each color has been given a code number, or sRGB value, which tells your computer the intensity of the red, green and blue components of that color. The intensity of each component is measured on a scale of zero to 255, which means the complete list includes 16,777,216 distinct colors and shades. The sRGB number of pure red, for example, is 255, 00, 00, which means the red component is at its maximum intensity and there is no green or blue. The sRGB number for crimson is 220, 20, 60, which means that the red is less intense and therefore darker, and there is a little green and rather more blue, which leans it toward pink. As a ray of white sunlight travels through the atmosphere to the eye, some of the colors are scattered out of the beam by air molecules and airborne particles due to Rayleigh scattering, changing the final color of the beam that is seen. Colors with a shorter wavelength, such as blue and green, scatter more and are removed from the light that reaches the eye.
At sunrise and sunset, when the
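To illustrate the sRGB code numbers described above, here is a minimal Python sketch (the helper name to_hex is hypothetical, introduced only for this example):

```python
def to_hex(r, g, b):
    """Pack 0-255 sRGB components into a #RRGGBB hex string."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(to_hex(255, 0, 0))     # pure red -> #FF0000
print(to_hex(220, 20, 60))   # crimson  -> #DC143C
print(to_hex(255, 0, 255))   # magenta: red and blue light at equal intensity
```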
National Coalition Party (El Salvador)
The National Coalition Party is a nationalist political party in El Salvador. Until 2011 it was known as the National Conciliation Party. It was the most powerful political party in the country during the 1960s and 1970s and was associated with the Salvadoran military. Julio Adalberto Rivera Carballo, a candidate of the National Conciliation Party, was elected president in 1962, and the next three presidents were from the party. After the 1979 coup the party continued to exist. Today, it is considered minor compared with the two major organizations, ARENA and the FMLN. At the legislative elections held on 16 March 2003, the party won 13.0% of the popular vote and 16 out of 84 seats in the Legislative Assembly. Its candidate in the presidential election of 21 March 2004, José Rafael Machuca Zelaya, won 2.7%. In the 12 March 2006 legislative election, the party won 11.0% of the popular vote and 10 out of 84 seats, a major decline in representation, but it remained the third largest political party in El Salvador.
At the 18 January 2009 legislative elections the party won 11 seats. With no party holding a majority, it can be seen as holding the balance of power; however, it usually sides with the conservative ARENA party. While the party was technically to be disbanded after the 2004 election, in which its candidate did not gather the necessary 3% of the vote, it was allowed to hold on to its registration by decree. The party was de facto re-established, registering with the Supreme Electoral Tribunal as the National Coalition in September 2011. After one year, it added the word 'Partido' to its full name, which allowed it to again use the traditional acronym PCN. Since 2015, the party has held 20 out of 262 mayorship offices. Presidents from the party include Julio Adalberto Rivera, Fidel Sánchez Hernández, Arturo Armando Molina and Carlos Humberto Romero.
A political spectrum is a system of classifying different political positions upon one or more geometric axes that represent independent political dimensions. Most long-standing spectra include a left wing and a right wing, terms which originally referred to seating arrangements in the French parliament after the Revolution. On a left–right spectrum, communism and socialism are regarded internationally as being on the left. Liberalism can mean different things in different contexts: sometimes on the left, sometimes on the right. Those with an intermediate outlook are sometimes classified as centrists; neoliberals, too, are sometimes called centrists. Politics that rejects the conventional left–right spectrum is known as syncretic politics, though the label tends to mischaracterize positions that have a logical location on a two-axis spectrum because they seem randomly brought together on a one-axis left–right spectrum. Political scientists have noted that a single left–right axis is insufficient for describing the existing variation in political beliefs, and include other axes.
Though the descriptive words at polar opposites may vary, in popular biaxial spectra the axes are split between socio-cultural issues and economic issues, each scaling from some form of individualism to some form of communitarianism. The terms right and left refer to political affiliations originating early in the French Revolutionary era of 1789–1799 and referred to the seating arrangements in the various legislative bodies of France. As seen from the Speaker's seat at the front of the Assembly, the aristocracy sat on the right and the commoners sat on the left, hence the terms right-wing politics and left-wing politics. The defining point on the ideological spectrum was the Ancien Régime. "The Right" thus implied support for aristocratic or royal interests and the church, while "The Left" implied support for republicanism and civil liberties. Because the political franchise at the start of the revolution was narrow, the original "Left" represented the interests of the bourgeoisie, the rising capitalist class.
Support for laissez-faire commerce and free markets was expressed by politicians sitting on the left because these represented policies favorable to capitalists rather than to the aristocracy, but outside parliamentary politics these views are often characterized as being on the Right. The reason for this apparent contradiction lies in the fact that those "to the left" of the parliamentary left, outside official parliamentary structures, represented much of the working class, poor peasantry and the unemployed. Their political interests in the French Revolution lay with opposition to the aristocracy, and so they found themselves allied with the early capitalists. However, this did not mean that their economic interests lay with the laissez-faire policies of those representing them politically. As capitalist economies developed, the aristocracy became less relevant and was replaced by capitalist representatives. The size of the working class increased as capitalism expanded, and it began to find expression through trade unionist, socialist and communist politics rather than being confined to the capitalist policies expressed by the original "left".
This evolution has pulled parliamentary politicians away from laissez-faire economic policies, although this has happened to different degrees in different countries, especially those with a history of issues with more authoritarian-left countries, such as the Soviet Union or China under Mao Zedong. Thus the word "Left" in American political parlance may refer to "liberalism" and be identified with the Democratic Party, whereas in a country such as France these positions would be regarded as more right-wing, or centrist overall, and "left" is more likely to refer to "socialist" or "social-democratic" positions rather than "liberal" ones. For a century, social scientists have considered the problem of how best to describe political variation. In 1950, Leonard W. Ferguson analyzed political values using ten scales measuring attitudes toward: birth control, capital punishment, communism, law, theism, treatment of criminals and war. Submitting the results to factor analysis, he was able to identify three factors, which he named religionism, humanitarianism and nationalism.
He defined religionism as belief in God and negative attitudes toward birth control. This system was derived empirically: rather than devising a political model on purely theoretical grounds and testing it, Ferguson's research was exploratory. As a result of this method, care must be taken in the interpretation of Ferguson's three factors, as factor analysis will output an abstract factor whether an objectively real factor exists or not. Although replication of the nationalism factor was inconsistent, the finding of religionism and humanitarianism had a number of replications by Ferguson and others. Shortly afterward, Hans Eysenck began researching political attitudes in Great Britain. He believed that there was something similar about the National Socialists on the one hand and the communists on the other, despite their opposite positions on the left–right axis. As Hans Eysenck described in his 1956 book Sense and
Blue is one of the three primary colours of pigments in painting and traditional colour theory, as well as in the RGB colour model. It lies between violet and green on the spectrum of visible light; the eye perceives blue when observing light with a dominant wavelength between 450 and 495 nanometres. Most blues contain a slight mixture of other colours. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering, and distant objects appear more blue because of another optical effect called aerial perspective. Blue has been an important colour in decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America.
In the 19th century, synthetic blue dyes and pigments replaced the traditional mineral pigments and vegetable dyes. Dark blue became a common colour for military uniforms and, in the late 20th century, for business suits. Because blue has been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union. Surveys in the US and Europe show that blue is the colour most associated with harmony, confidence, infinity, the imagination and sometimes with sadness. In US and European public opinion polls it is the most popular colour, chosen by half of both men and women as their favourite colour; the same surveys showed that blue was the colour most associated with the masculine, just ahead of black, and was the colour most associated with intelligence, knowledge and concentration. Blue is the colour of light between violet and green on the visible spectrum. Hues of blue include indigo and ultramarine, closer to violet, and cyan, closer to green. Blue varies in shade or tint. Darker shades of blue include ultramarine, cobalt blue, navy blue, and Prussian blue.
Blue pigments were made from minerals such as lapis lazuli and azurite, and blue dyes were made from plants. Today most blue dyes are made by a chemical process. The modern English word blue comes from Middle English bleu or blewe, from the Old French bleu, a word of Germanic origin, related to the Old High German word blao. In heraldry, the word azure is used for blue. In Russian and some other languages, there is no single word for blue, but rather different words for light blue and dark blue (see Colour term). Several languages, including Japanese, Thai and Lakota Sioux, use the same word to describe blue and green. For example, in Vietnamese the colour of both tree leaves and the sky is xanh. In Japanese, the word for blue is used for colours that English speakers would refer to as green, such as the colour of a traffic signal meaning "go". Linguistic research indicates that colour names developed individually in natural languages, beginning with black and white, then adding red, and only much later – as the last main category of colour accepted in a language – adding blue, once blue pigments could be manufactured reliably in the culture using that language.
Human eyes perceive blue when observing light which has a dominant wavelength of 450–495 nanometres. Blues with a higher frequency and thus a shorter wavelength look more violet, while those with a lower frequency and a longer wavelength appear more green. Pure blue, in the middle, has a wavelength of 470 nanometres. Isaac Newton included blue as one of the seven colours in his first description of the visible spectrum. He chose seven colours because that was the number of notes in the musical scale, which he believed was related to the optical spectrum. He included indigo, the hue between blue and violet, as one of the separate colours, though today it is usually considered a hue of blue. In painting and traditional colour theory, blue is one of the three primary colours of pigments, which can be mixed to form a wide gamut of colours. Red and blue mixed together form violet; blue and yellow together form green. Mixing all three primary colours together produces a dark grey. From the Renaissance onwards, painters used this system to create their colours.
The RYB model was used for colour printing by Jacob Christoph Le Blon as early as 1725. Printers later discovered that more accurate colours could be created by using combinations of magenta, cyan, yellow and black ink, put onto separate inked plates and overlaid one at a time onto paper; this method could produce all the colours in the spectrum with reasonable accuracy. In the 19th century the Scottish physicist James Clerk Maxwell found a new way of explaining colours, by the wa
September 3, 2008
Scientist Locates Origin Of Cosmic Dust
The origin of the microscopic meteorites that make up cosmic dust has been revealed for the first time in new research out Sept. 1, 2008.
The research, published in the journal Geology, shows that some of the cosmic dust falling to Earth comes from an ancient asteroid belt between Mars and Jupiter. This research improves our knowledge of the solar system, and could provide a new and inexpensive method for understanding space. Cosmic dust particles, originally from asteroids and comets, are minute pieces of pulverized rock. They measure up to a tenth of a millimeter in size and shroud the solar system in a thin cloud. Studying them is important because their mineral content records the conditions under which asteroids and comets were formed over four and a half billion years ago and provides an insight into the earliest history of our solar system.
The study's author, Dr Mathew Genge, from Imperial College London's Department of Earth Science and Engineering, has trekked across the globe collecting cosmic dust. He says:
"There are hundreds of billions of extraterrestrial dust particles falling though our skies. This abundant resource is important since these tiny pieces of rock allow us to study distant objects in our solar system without the multi-billion dollar price tag of expensive missions."
The origin of the cosmic dust that lands on Earth has always been unclear. Scientists previously thought that analyzing the chemical and mineral content of individual dust particles was the key to tracing their origin. But this study suggests that a comparison of multiple particles gives better results.
To pinpoint the cosmic dust's origin, Dr Genge analyzed more than 600 particles, painstakingly cataloguing their chemical and mineral content and reassembling them like a complex jigsaw. Dr Genge comments:
"I've been studying these particles for quite a while and had all the pieces of the puzzle, but had been trying to figure out the particles one by one. It was only when I took a step back and looked at the minerals and properties of hundreds of particles that it was obvious where they came from. It was like turning over the envelope and finding the return address on the back."
Dr Genge found that the cosmic dust comes from a family of ancient space rocks called Koronis asteroids, which includes 243 Ida, widely photographed by the NASA Galileo probe. The rocks are located in an asteroid belt between Mars and Jupiter and were formed around two billion years ago when a much larger asteroid broke into pieces. Further analysis shows that the dust originates from a smaller grouping of 20 space rocks within the Koronis family called Karin asteroids. It comes from an ancient chondrite rock, common in Karin asteroids, which was formed in space at the birth of the solar system.
Chondrite meteorites often fall to Earth and Dr Genge was able to match the mineralogy and chemistry of the dust particles with chondrite meteorite samples previously collected. He backed up the cosmic dust's origin with infrared astronomical satellite data which showed Karin asteroids grinding and smashing against one another to create cosmic dust.
Dr Genge says his research holds exciting possibilities for a deeper understanding of our early solar system. He concedes that analyzing space dust will never entirely replace space missions, but adds that we may not have to visit so many different places. He concludes:
"This research is the first time we have successfully demonstrated a way to locate the home of these important little particles. The answer to so many important questions, such as why we are here and are we alone in the universe, may well lie inside a cosmic dust particle. Since they are everywhere, even inside our homes, we don't necessarily have to blast off the Earth to find those answers. Perhaps they are already next to you, right here and right now."
Image Caption: This is a scanning electron microscope image of an interplanetary dust particle that has roughly chondritic elemental composition and is highly porous (chondritic porous: "CP"). CP types are usually aggregates of large numbers of sub-micrometer grains, clustered in a random open order. The authors of this figure are Don Brownlee, University of Washington, Seattle, and Elmar Jessberger, Institut für Planetologie, Münster, Germany. (Wikipedia)
Common units of torque include pound-force-feet, lbf·inch and ozf·in; in SI base units it is expressed in kg·m²·s⁻².
In physics and mechanics, torque is the rotational equivalent of linear force. It is also referred to as the moment, moment of force, rotational force or turning effect, depending on the field of study. The concept originated with the studies by Archimedes of the usage of levers. Just as a linear force is a push or a pull, a torque can be thought of as a twist to an object around a specific axis. Another definition of torque is the product of the magnitude of the force and the perpendicular distance of the line of action of the force from the axis of rotation. The symbol for torque is typically τ, the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M.
In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the position vector (distance vector) and the force vector. The magnitude of torque of a rigid body depends on three quantities: the force applied, the lever arm vector connecting the point about which the torque is being measured to the point of force application, and the angle between the force and lever arm vectors. In symbols: τ = r × F, with magnitude τ = rF sin θ, where r is the lever arm vector, F is the applied force, and θ is the angle between them.
James Thomson, the brother of Lord Kelvin, introduced the term torque into English scientific literature in 1884. However, torque is referred to using different vocabulary depending on geographical location and field of study. This article follows the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. These terms are interchangeable in US physics and UK physics terminology, unlike in US mechanical engineering, where the term torque is used for the closely related "resultant moment of a couple".
In US mechanical engineering, torque is defined mathematically as the rate of change of angular momentum of an object (in physics it is called "net torque"). The definition of torque states that one or both of the angular velocity or the moment of inertia of an object are changing. Moment is the general term used for the tendency of one or more applied forces to rotate an object about an axis, but not necessarily to change the angular momentum of the object (the concept which is called torque in physics). For example, a rotational force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, results in a moment called a torque. By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the angular momentum of the beam is not changing, this bending moment is not called a torque. Similarly with any force couple on an object that has no change to its angular momentum, such moment is also not called a torque.
A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque.
More generally, the torque on a point particle (which has the position r in some reference frame) can be defined as the cross product:
τ = r × F,
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
τ = rF sin θ,
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
τ = rF⊥,
where F⊥ is the component of the force perpendicular to the particle's position vector (equivalently, τ = r⊥F, where r⊥ is the moment arm).
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. Conversely, the torque vector defines the plane in which the position and force vectors lie. The resulting torque vector direction is determined by the right-hand rule.
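As a concrete illustration of the cross-product relationship just described, here is a minimal Python sketch (the use of NumPy is an assumption of convenience); it reuses the figures from the lever example above, a 3 N force applied 2 m from the fulcrum:

```python
import numpy as np

# Position of the point where the force is applied, relative to the fulcrum (metres)
r = np.array([2.0, 0.0, 0.0])
# Applied force (newtons), acting along the y-axis
F = np.array([0.0, 3.0, 0.0])

# Torque is the cross product of position and force: tau = r x F
tau = np.cross(r, F)
print(tau)                    # [0. 0. 6.]  -> 6 N·m about the z-axis
print(np.linalg.norm(tau))    # 6.0, matching r * F * sin(90°) = 2 * 3 * 1

# The torque vector is perpendicular to both r and F, as stated in the text
assert np.isclose(np.dot(tau, r), 0.0)
assert np.isclose(np.dot(tau, F), 0.0)
```

The printed vector points along the z-axis, which is exactly what the right-hand rule predicts for a force applied in the x-y plane.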
The net torque on a body determines the rate of change of the body's angular momentum:
τ_net = dL/dt,
where L is the angular momentum vector and t is time.
For the motion of a point particle, the angular momentum can be written as L = Iω, where I is the moment of inertia and ω the angular velocity; the net torque then becomes
τ = Iα + 2rp||ω,
where α is the angular acceleration of the particle, and p|| is the radial component of its linear momentum. This equation is the rotational analogue of Newton's Second Law for point particles, and is valid for any type of trajectory. Note that although force and acceleration are always parallel and directly proportional, the torque τ need not be parallel or directly proportional to the angular acceleration α. This arises from the fact that although mass is always conserved, the moment of inertia in general is not.
The definition of angular momentum for a single point particle is:
L = r × p,
where p is the particle's linear momentum and r is the position vector from the origin. The time-derivative of this is:
dL/dt = r × dp/dt + dr/dt × p.
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definition of force, F = dp/dt (whether or not mass is constant), and the definition of velocity, v = dr/dt, we obtain
dL/dt = r × F + v × p.
The cross product of momentum with its associated velocity is zero because velocity and momentum are parallel, so the second term vanishes.
By definition, torque τ = r × F. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time.
If multiple forces are applied, Newton's second law instead reads F_net = dp/dt, and it follows that
dL/dt = r × F_net = τ_net.
This is a general proof for point particles.
The proof can be generalized to a system of point particles by applying the above proof to each of the point particles and then summing over all the point particles. Similarly, the proof can be generalized to a continuous mass by applying the above proof to each point within the mass, and then integrating over the entire mass.
Torque has the dimension of force times distance, symbolically L²MT⁻². Although those fundamental dimensions are the same as that for energy or work, official SI literature suggests using the unit newton metre (N·m) and never the joule. The unit newton metre is properly denoted N·m.
The traditional Imperial and U.S. customary units for torque are the pound foot (lbf-ft), or for small values the pound inch (lbf-in). Confusingly, in US practice torque is most commonly referred to as the foot-pound (denoted as either lb-ft or ft-lb) and the inch-pound (denoted as in-lb). Practitioners depend on context and the hyphen in the abbreviation to know that these refer to torque and not to energy or moment of mass (as the symbolism ft-lb would properly imply).
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
torque = (moment arm) × force.
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
torque = (distance to centre) × force.
For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N exactly 0.5 m from the twist point of a wrench of any length), the torque will be 5 N⋅m - assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench.
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations, ΣH = 0 and ΣV = 0, and the torque requirement a third equation, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, three equations are used.
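To make the three equilibrium equations concrete, the following Python sketch solves a hypothetical statically determinate problem (all numbers are invented for illustration): a massless beam on two end supports carrying a single vertical load.

```python
import numpy as np

# Hypothetical 2D statics problem: a massless beam of length 6 m rests on
# supports at both ends, with a 100 N downward load 2 m from the left support.
# There are no horizontal forces, so the sum(H) = 0 equation is satisfied trivially.
L, a, W = 6.0, 2.0, 100.0

# Unknowns: vertical reactions R_left and R_right.
#   sum of vertical forces = 0:            R_left + R_right = W
#   sum of torques about left support = 0: L * R_right      = a * W
A = np.array([[1.0, 1.0],
              [0.0, L]])
b = np.array([W, a * W])
R_left, R_right = np.linalg.solve(A, b)

print(round(R_left, 2), round(R_right, 2))   # 66.67 33.33 (newtons)
```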
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference. If the net force F is not zero, and τ1 is the torque measured about point 1, then the torque measured about point 2 is
τ2 = τ1 + (r1 - r2) × F,
where r1 and r2 are the position vectors of the two reference points.
Torque forms part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000-6,000 rpm for a small car). One can measure the varying torque output over that range with a dynamometer, and show it as a torque curve.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam-engines and electric motors can start heavy loads from zero rpm without a clutch.
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, the work W can be expressed as
W = ∫ τ dθ, integrated from θ1 to θ2,
where τ is torque and θ1 and θ2 are (respectively) the initial and final angular positions of the body.
The work done by a variable force acting over a finite linear displacement is given by integrating the force with respect to an elemental linear displacement ds:
W = ∫ F · ds.
However, the infinitesimal linear displacement ds is related to a corresponding angular displacement dθ and the radius vector r as
ds = dθ × r.
Substitution in the above expression for work gives
W = ∫ F · (dθ × r).
The expression F · (dθ × r) is a scalar triple product. An alternate expression for the same scalar triple product is
(r × F) · dθ.
But as per the definition of torque,
τ = r × F.
Corresponding substitution in the expression of work gives
W = ∫ τ · dθ.
Since the parameter of integration has been changed from linear displacement to angular displacement, the limits of the integration also change correspondingly, giving
W = ∫ τ · dθ, integrated from θ1 to θ2.
If the torque and the angular displacement are in the same direction, then the scalar product reduces to a product of magnitudes; i.e., τ · dθ = τ dθ, giving
W = ∫ τ dθ, integrated from θ1 to θ2.
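As a quick numerical check of W = ∫ τ dθ, the short Python sketch below integrates a made-up torque profile over an angular displacement; the profile and numbers are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Numerically evaluate W = ∫ τ dθ for a made-up torque profile:
# τ falls linearly from 12 N·m to 4 N·m as θ sweeps from 0 to π radians.
theta = np.linspace(0.0, np.pi, 1001)
tau = 12.0 - (8.0 / np.pi) * theta

# Trapezoidal rule for the integral of τ with respect to θ
W = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(theta))

print(W)   # ≈ 25.13 J, i.e. the average torque (8 N·m) times π rad
```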
The power delivered by a torque is P = τ·ω, where ω is the angular speed. Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output (τ = P/ω). Note that the power injected by the torque depends only on the instantaneous angular speed - not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed - not on the resulting acceleration, if any).
In practice, this relationship can be observed in bicycles: bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a circular chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. a multi-speed bicycle), all of which are attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning pedals, thereby cranking the front sprocket (commonly referred to as the chainring). The input power provided by the cyclist is equal to the product of cadence (i.e. the number of pedal revolutions per minute) and the torque on the spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, rpm) input pair is converted to a (torque, rpm) output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, the angular speed of the road wheels is decreased while the torque is increased, the product of which (i.e. power) does not change. A numerical sketch of this conversion follows below.
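As mentioned at the end of the paragraph above, the drivetrain conversion can be sketched numerically. The following Python snippet assumes a lossless chain drive and uses invented gear sizes, torque and cadence; it shows the (torque, rpm) pair changing while the power stays the same.

```python
import math

def drivetrain_output(crank_torque_nm, cadence_rpm, chainring_teeth, cog_teeth):
    """Convert the (torque, rpm) pair at the cranks into the pair at the rear wheel,
    assuming a lossless chain drive so that power is conserved."""
    ratio = chainring_teeth / cog_teeth        # wheel revolutions per crank revolution
    return crank_torque_nm / ratio, cadence_rpm * ratio

def power_watts(torque_nm, rpm):
    return torque_nm * rpm * 2.0 * math.pi / 60.0   # P = τ·ω, with ω in rad/s

# Illustrative numbers (not from the text): 40 N·m at 90 rpm, 50T chainring, 25T cog
t_out, n_out = drivetrain_output(40.0, 90.0, 50, 25)
print(t_out, n_out)                                        # 20.0 N·m at 180.0 rpm
print(power_watts(40.0, 90.0), power_watts(t_out, n_out))  # both ≈ 377 W
```

Switching to a larger rear cog makes the ratio smaller, so wheel rpm drops and wheel torque rises, while the printed power values remain equal, just as the paragraph describes.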
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. This means that the dimensional equivalence of the newton metre and the joule may be applied in the former, but not in the latter case. This problem is addressed in orientational analysis which treats radians as a base unit rather than a dimensionless unit.
A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed:
P = τ · 2π · ν.
Dividing by 60 seconds per minute gives us the following:
P = (τ · 2π · ν) / 60,
where rotational speed ν is in revolutions per minute (rpm).
Some people (e.g., American automotive engineers) use horsepower (mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to:
P (hp) = (τ · 2π · ν) / 33,000,
with torque in foot-pounds and rotational speed in rpm. The constant 33,000 (in foot-pounds per minute) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
The use of other units (e.g., BTU per hour for power) would require a different custom conversion factor.
For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time.
By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power:
power = (force × linear distance) / time = ((torque ÷ radius) × (radius × angular speed × time)) / time = torque × angular speed.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
power = torque × 2π × rotational speed.
If torque is in newton metres and rotational speed in revolutions per second, the above equation gives power in newton metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft·lbf/min per horsepower:
power (hp) = torque (lbf·ft) × 2π × rotational speed (rpm) / 33,000 ≈ torque (lbf·ft) × rpm / 5,252.
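Putting the SI and horsepower forms side by side, here is a small Python sketch (the engine figures are invented for illustration); it also reproduces the familiar fact that, with torque in lbf·ft, the torque and horsepower values coincide at about 5,252 rpm.

```python
import math

def power_watts(torque_nm, rpm):
    """P = τ·ω, converting rotational speed from rpm to rad/s."""
    return torque_nm * rpm * 2.0 * math.pi / 60.0

def power_hp(torque_lbf_ft, rpm):
    """Mechanical horsepower, using the 33,000 ft·lbf/min definition:
    hp = τ · 2π · rpm / 33,000  ≈  τ · rpm / 5,252."""
    return torque_lbf_ft * rpm * 2.0 * math.pi / 33000.0

print(power_watts(250.0, 3000.0))   # ≈ 78,540 W for a hypothetical 250 N·m engine at 3,000 rpm
print(power_hp(300.0, 5252.0))      # ≈ 300 hp: torque and horsepower curves cross near 5,252 rpm
```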
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
(r × F1) + (r × F2) + ... = r × (F1 + F2 + ...).
From this it follows that if a pivoted beam of zero mass is balanced with two opposed forces then the torques about the pivot cancel: r1 × F1 + r2 × F2 = 0, or, in magnitudes, F1·d1 = F2·d2, where d1 and d2 are the distances of the forces from the pivot.
Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced. |
Understanding linear equations (equations with an x term and a constant) can be daunting, but this program simplifies understanding. Both from an educational standpoint, where the student needs to understand coefficients and intercepts, and from an engineering standpoint, where the user needs to understand the relationship between slope and distance, this program solves equations in a user-friendly environment. Both a help file and a glossary of terms have been included to simplify understanding. The equation of a line takes the form "Y = mX + b", where the slope = m and the constant = b. Intercepts exist where the curve crosses or intersects the X and Y axes. In the conventional sense one sets the slope and the Y intercept "b", solving the "Y" value for any value of X and showing the equation. If the user prefers to solve the equation using just the X and Y intercepts, the "Solve For:" button allows the problem to be set up using the intercepts and solving for "slope". Using the adjustor controls or retro keyboard, the equation is displayed automatically. A graph shows the actual curve, and the user can manually change the scale to achieve the desired view. If the curve does not intersect the X axis, the slope m = 0 and a constant line is displayed. This is all very confusing at first, but this program helps to simplify by using adjuster inputs to see how changing the values affects the result, along with a graph of the actual curve or line. The equation changes as user inputs change, along with the graphical output. Help and glossary screens are also provided to further help in understanding linear equations. When you are done, simply email the results to your home computer for further analysis. This program is a must for students and engineers. It makes the complex understandable. Simply the best linear equation program.
Features:
- Updates as you edit
- Number pad digital entry
- Solve for intercepts or slope
- Calculates Y = mX + b
- Graphs the actual curve to visually relate the equation
- Help screen, glossary and email
Example: We start with the equation Y = 10X - 11. Select the "Intercept" button and the linear expression is set up, allowing you to adjust each of the two coefficients "m" and "b" to create the linear equation. Note that there are adjustor buttons to increase or decrease each coefficient. You can as easily use the built-in keyboard by touching the input field. So we have Y = 10.0X - 11. Look at the line calculated from this equation and displayed below in the graph portion of the screen. The program calculates the X and Y intercepts (where the line crosses each axis) and displays them on the right. Note that the graph changes as you adjust the coefficients. If you need to zoom into an area, the scaling can be set to manual and the adjustors or keyboard allow changing the graph limits. Try changing the "Solve for" button to Slope. Then use the adjuster buttons to change the X and Y intercepts. Note how the screen displays the graph of the equation and that it intersects the X and Y axes at two different places. If the graph does not intersect the X axis, then m = 0 and the line runs parallel to the X axis. See the Glossary screen for any terms that are unclear. As with all complicated programs, this program should be used for reference and learning, and the user assumes responsibility for verification, use and application. Look for other Ray Tools apps, including quadratic, exponential and logarithmic equations.
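The relationships the app manipulates are straightforward to express directly. The Python sketch below is not the app's code, merely an illustration of computing the intercepts from the slope and constant, and the slope from the two intercepts, using the Y = 10X - 11 example from the description.

```python
def intercepts(m, b):
    """For Y = m*X + b, return (x_intercept, y_intercept).
    The x-intercept does not exist when m == 0 (a horizontal line)."""
    x_int = None if m == 0 else -b / m
    return x_int, b

def slope_from_intercepts(x_int, y_int):
    """Recover (m, b) for a line from its two intercepts (x_int must be non-zero)."""
    return -y_int / x_int, y_int

print(intercepts(10.0, -11.0))            # (1.1, -11.0), the Y = 10X - 11 example
print(slope_from_intercepts(1.1, -11.0))  # (10.0, -11.0)
```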
This app was developed by Raymond Seymour; after working at General Electric for 37 years, he is now making engineering apps for iOS. As an inventor he has over 60 granted U.S. Patents.
Using data from NASA’s Chandra X-ray Observatory, a newly published study reveals evidence that heat channeled from turbulent motions in galaxy clusters is enough to offset radiative cooling and prevent the formation of stars.
The same phenomenon that causes a bumpy airplane ride, turbulence, may be the solution to a long-standing mystery about stars’ birth, or the absence of it, according to a new study using data from NASA’s Chandra X-ray Observatory.
Galaxy clusters are the largest objects in the universe, held together by gravity. These behemoths contain hundreds or thousands of individual galaxies that are immersed in gas with temperatures of millions of degrees.
This hot gas, which is the heftiest component of the galaxy clusters aside from unseen dark matter, glows brightly in X-ray light detected by Chandra. Over time, the gas in the centers of these clusters should cool enough that stars form at prodigious rates. However, this is not what astronomers have observed in many galaxy clusters.
“We knew that somehow the gas in clusters is being heated to prevent it cooling and forming stars. The question was exactly how,” said Irina Zhuravleva of Stanford University in Palo Alto, California, who led the study that appears in the latest online issue of the journal Nature. “We think we may have found evidence that the heat is channeled from turbulent motions, which we identify from signatures recorded in X-ray images.”
Prior studies show supermassive black holes, centered in large galaxies in the middle of galaxy clusters, pump vast quantities of energy around them in powerful jets of energetic particles that create cavities in the hot gas. Chandra, and other X-ray telescopes, have detected these giant cavities before.
The latest research by Zhuravleva and her colleagues provides new insight into how energy can be transferred from these cavities to the surrounding gas. The interaction of the cavities with the gas may be generating turbulence, or chaotic motion, which then disperses to keep the gas hot for billions of years.
“Any gas motions from the turbulence will eventually decay, releasing their energy to the gas,” said co-author Eugene Churazov of the Max Planck Institute for Astrophysics in Munich, Germany. “But the gas won’t cool if turbulence is strong enough and generated often enough.”
The evidence for turbulence comes from Chandra data on two enormous galaxy clusters named Perseus and Virgo. By analyzing extended observation data of each cluster, the team was able to measure fluctuations in the density of the gas. This information allowed them to estimate the amount of turbulence in the gas.
“Our work gives us an estimate of how much turbulence is generated in these clusters,” said Alexander Schekochihin of the University of Oxford in the United Kingdom. “From what we’ve determined so far, there’s enough turbulence to balance the cooling of the gas.”
These results support the “feedback” model involving supermassive black holes in the centers of galaxy clusters. Gas cools and falls toward the black hole at an accelerating rate, causing the black hole to increase the output of its jets, which produce cavities and drive the turbulence in the gas. This turbulence eventually dissipates and heats the gas.
While a merger between two galaxy clusters may also produce turbulence, the researchers think that outbursts from supermassive black holes are the main source of this cosmic commotion in the dense centers of many clusters.
Publication: I. Zhuravleva, et al., “Turbulent heating in galaxy clusters brightest in X-rays,” Nature (2014); doi:10.1038/nature13830
PDF Copy of the Study: Turbulent Heating in Galaxy Clusters Brightest in X-rays
NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.
CBSE Board Chemistry Syllabus for Class 11
CBSE Board Syllabus for Class 11 Chemistry
Unit 1: Some Basic Concepts of Chemistry
General Introduction: Importance and scope of chemistry.
Historical approach to particulate nature of matter, laws of chemical combination. Dalton's atomic theory: concept of elements, atoms and molecules.
Atomic and molecular masses, mole concept and molar mass; percentage composition, empirical and molecular formula; chemical reactions, stoichiometry and calculations based on stoichiometry.
Unit II: Structure of Atom
Discovery of electron, proton and neutron; atomic number, isotopes and isobars. Thomson's model and its limitations, Rutherford's model and its limitations. Bohr's model and its limitations, concept of shells and subshells, dual nature of matter and light, de Broglie's relationship, Heisenberg uncertainty principle, concept of orbitals, quantum numbers, shapes of s, p, and d orbitals, rules for filling electrons in orbitals - Aufbau principle, Pauli exclusion principle and Hund's rule, electronic configuration of atoms, stability of half filled and completely filled orbitals.
Unit III: Classification of Elements and Periodicity in Properties
Significance of classification, brief history of the development of periodic table, modern periodic law and the present form of periodic table, periodic trends in properties of elements - atomic radii, ionic radii, ionization enthalpy, electron gain enthalpy, electronegativity, valence.
Unit IV: Chemical Bonding and Molecular Structure
Valence electrons, ionic bond, covalent bond: bond parameters, Lewis structure, polar character of covalent bond, covalent character of ionic bond, valence bond theory, resonance, geometry of covalent molecules, VSEPR theory, concept of hybridization involving s, p and d orbitals and shapes of some simple molecules, molecular orbital theory of homonuclear diatomic molecules (qualitative idea only), hydrogen bond.
Unit V: States of Matter: Gases and Liquids
Three states of matter, intermolecular interactions, types of bonding, melting and boiling points. Role of gas laws in elucidating the concept of the molecule, Boyle's law, Charles' law, Gay Lussac's law, Avogadro's law, ideal behaviour, empirical derivation of gas equation, Avogadro's number, ideal gas equation. Deviation from ideal behaviour, liquefaction of gases, critical temperature.
Liquid State - Vapour pressure, viscosity and surface tension (qualitative idea only, no mathematical derivations).
Unit VI: Thermodynamics
Concepts of system, types of systems, surroundings. Work, heat, energy, extensive and intensive properties, state functions.
First law of thermodynamics - internal energy and enthalpy, heat capacity and specific heat, measurement of ΔU and ΔH, Hess's law of constant heat summation, enthalpy of: bond dissociation, combustion, formation, atomization, sublimation, phase transformation, ionization, and solution.
Introduction of entropy as a state function, free energy change for spontaneous and nonspontaneous processes, criteria for equilibrium.
Unit VII: Equilibrium
Equilibrium in physical and chemical processes, dynamic nature of equilibrium, law of mass action, equilibrium constant, factors affecting equilibrium - Le Chatelier's principle; ionic equilibrium - ionization of acids and bases, strong and weak electrolytes, degree of ionization, concept of pH. Hydrolysis of salts (elementary idea). Buffer solutions, solubility product, common ion effect (with illustrative examples).
Unit VIII: Redox Reactions
Concept of oxidation and reduction, redox reactions, oxidation number, balancing redox reactions, applications of redox reactions.
Unit IX : Hydrogen
Position of hydrogen in periodic table, occurrence, isotopes, preparation, properties and uses of hydrogen; hydrides - ionic, covalent and interstitial; physical and chemical properties of water, heavy water; hydrogen peroxide-preparation, properties and structure; hydrogen as a fuel.
Unit X: s-Block Elements (Alkali and Alkaline earth metals)
Group 1 and Group 2 elements:
General introduction, electronic configuration, occurrence, anomalous properties of the first element of each group, diagonal relationship, trends in the variation of properties (such as ionization enthalpy, atomic and ionic radii), trends in chemical reactivity with oxygen, water, hydrogen and halogens; uses.
Preparation and properties of some important compounds: sodium carbonate, sodium chloride, sodium hydroxide and sodium hydrogen carbonate, biological importance of sodium and potassium. CaO and CaCO3, and industrial use of lime and limestone, biological importance of Mg and Ca.
Unit XI: Some p-Block Elements
General Introduction to p-Block Elements
Group 13 elements:
General introduction, electronic configuration, occurrence, variation of properties, oxidation states, trends in chemical reactivity, anomalous properties of first element of the group; Boron - physical and chemical properties, some important compounds: borax, boric acid, boron hydrides. Aluminium: uses, reactions with acids and alkalis.
Group 14 elements:
General introduction, electronic configuration, occurrence, variation of properties, oxidation states, trends in chemical reactivity, anomalous behaviour of first element, Carbon - catenation, allotropic forms, physical and chemical properties; uses of some important compounds: oxides.
Important compounds of silicon and a few uses: silicon tetrachloride, silicones, silicates and zeolites.
Unit XII: Organic Chemistry - Some Basic Principles and Techniques
General introduction, methods of qualitative and quantitative analysis, classification and IUPAC nomenclature of organic compounds
Electronic displacements in a covalent bond: inductive effect, electromeric effect, resonance and hyper conjugation.
Homolytic and heterolytic fission of a covalent bond: free radicals, carbocations, carbanions; electrophiles and nucleophiles, types of organic reactions
Unit XIII: Hydrocarbons
Classification of hydrocarbons
Alkanes - Nomenclature, isomerism, conformations (ethane only), physical properties, chemical reactions including free radical mechanism of halogenation, combustion and pyrolysis.
Alkenes - Nomenclature, structure of double bond (ethene) geometrical isomerism, physical properties, methods of preparation; chemical reactions: addition of hydrogen, halogen, water, hydrogen halides (Markovnikov's addition and peroxide effect), ozonolysis, oxidation, mechanism of electrophilic addition.
Alkynes - Nomenclature, structure of triple bond (ethyne), physical properties. Methods of preparation, chemical reactions: acidic character of alkynes, addition reaction of - hydrogen, halogens, hydrogen halides and water.
Aromatic hydrocarbons: Introduction, IUPAC nomenclature; benzene: resonance, aromaticity; chemical properties: mechanism of electrophilic substitution - nitration, sulphonation, halogenation, Friedel Craft's alkylation and acylation; directive influence of functional group in mono-substituted benzene; carcinogenicity and toxicity.
Unit XIV: Environmental Chemistry
Environmental pollution - air, water and soil pollution, chemical reactions in atmosphere, smog, major atmospheric pollutants; acid rain, ozone and its reactions, effects of depletion of ozone layer, greenhouse effect and global warming - pollution due to industrial wastes; green chemistry as an alternative tool for reducing pollution, strategy for control of environmental pollution.
A. Basic Laboratory Techniques
1. Cutting glass tube and glass rod
2. Bending a glass tube
3. Drawing out a glass jet
4. Boring a cork
B. Characterization and purification of chemical substances
1. Determination of melting point of an organic compound
2. Determination of boiling point of an organic compound
3. Crystallization of impure sample of any one of the following: alum, copper sulphate, benzoic acid.
C. Experiments related to pH change
(a) Any one of the following experiments:
Determination of pH of some solutions obtained from fruit juices, varied concentrations of acids, bases and salts using pH paper or universal indicator.
Comparing the pH of solutions of strong and weak acid of same concentration.
Study the pH change in the titration of a strong base using universal indicator.
b) Study of pH change by common-ion effect in case of weak acids and weak bases.
D. Chemical equilibrium
One of the following experiments:
(a) Study the shift in equilibrium between ferric ions and thiocyanate ions by increasing/ decreasing the concentration of either ions.
(b) Study the shift in equilibrium between [Co(H2O)6]2+ and chloride ions by changing the concentration of either of the ions.
E. Quantitative estimation
Using a chemical balance.
Preparation of standard solution of oxalic acid.
Determination of strength of a given solution of sodium hydroxide by titrating it against standard solution of oxalic acid.
Preparation of standard solution of sodium carbonate.
Determination of strength of a given solution of hydrochloric acid by titrating it against standard sodium carbonate solution.
F. Qualitative analysis
G. Detection of nitrogen, sulphur, chlorine, bromine and iodine in an organic compound.
When LIGO’s twin detectors first picked up faint wobbles in their respective, identical mirrors, the signal didn’t just provide first direct detection of gravitational waves — it also confirmed the existence of stellar binary black holes, which gave rise to the signal in the first place.
Stellar binary black holes are formed when two black holes, created out of the remnants of massive stars, begin to orbit each other. Eventually, the black holes merge in a spectacular collision that, according to Einstein’s theory of general relativity, should release a huge amount of energy in the form of gravitational waves.
Now, an international team led by MIT astrophysicist Carl Rodriguez suggests that black holes may partner up and merge multiple times, producing black holes more massive than those that form from single stars. These “second-generation mergers” should come from globular clusters — small regions of space, usually at the edges of a galaxy, that are packed with hundreds of thousands to millions of stars.
“We think these clusters formed with hundreds to thousands of black holes that rapidly sank down in the center,” says Carl Rodriguez, a Pappalardo fellow in MIT’s Department of Physics and the Kavli Institute for Astrophysics and Space Research. “These kinds of clusters are essentially factories for black hole binaries, where you’ve got so many black holes hanging out in a small region of space that two black holes could merge and produce a more massive black hole. Then that new black hole can find another companion and merge again.”
If LIGO detects a binary with a black hole component whose mass is greater than around 50 solar masses, then according to the group’s results, there’s a good chance that object arose not from individual stars, but from a dense stellar cluster.
“If we wait long enough, then eventually LIGO will see something that could only have come from these star clusters, because it would be bigger than anything you could get from a single star,” Rodriguez says.
He and his colleagues report their results in a paper appearing in Physical Review Letters.
For the past several years, Rodriguez has investigated the behavior of black holes within globular clusters and whether their interactions differ from black holes occupying less populated regions in space.
Globular clusters can be found in most galaxies, and their number scales with a galaxy’s size. Huge, elliptical galaxies, for instance, host tens of thousands of these stellar conglomerations, while our own Milky Way holds about 200, with the closest cluster residing about 7,000 light years from Earth.
In their new paper, Rodriguez and his colleagues report using a supercomputer called Quest, at Northwestern University, to simulate the complex, dynamical interactions within 24 stellar clusters, ranging in size from 200,000 to 2 million stars, and covering a range of different densities and metallic compositions. The simulations model the evolution of individual stars within these clusters over 12 billion years, following their interactions with other stars and, ultimately, the formation and evolution of the black holes. The simulations also model the trajectories of black holes once they form.
“The neat thing is, because black holes are the most massive objects in these clusters, they sink to the center, where you get a high enough density of black holes to form binaries,” Rodriguez says. “Binary black holes are basically like giant targets hanging out in the cluster, and as you throw other black holes or stars at them, they undergo these crazy chaotic encounters.”
It’s all relative
When running their simulations, the researchers added a key ingredient that was missing in previous efforts to simulate globular clusters.
“What people had done in the past was to treat this as a purely Newtonian problem,” Rodriguez says. “Newton’s theory of gravity works in 99.9 percent of all cases. The few cases in which it doesn’t work might be when you have two black holes whizzing by each other very closely, which normally doesn’t happen in most galaxies.”
Newton’s theory of gravity assumes that, if the black holes were unbound to begin with, neither one would affect the other, and they would simply pass each other by, unchanged. This line of reasoning stems from the fact that Newton failed to recognize the existence of gravitational waves — which Einstein much later predicted would arise from massive orbiting objects, such as two black holes in close proximity.
“In Einstein’s theory of general relativity, where I can emit gravitational waves, then when one black hole passes near another, it can actually emit a tiny pulse of gravitational waves,” Rodriguez explains. “This can subtract enough energy from the system that the two black holes actually become bound, and then they will rapidly merge.”
The team decided to add Einstein’s relativistic effects into their simulations of globular clusters. After running the simulations, they observed black holes merging with each other to create new black holes, inside the stellar clusters themselves. Without relativistic effects, Newtonian gravity predicts that most binary black holes would be kicked out of the cluster by other black holes before they could merge. But by taking relativistic effects into account, Rodriguez and his colleagues found that nearly half of the binary black holes merged inside their stellar clusters, creating a new generation of black holes more massive than those formed from the stars. What happens to those new black holes inside the cluster is a matter of spin.
“If the two black holes are spinning when they merge, the black hole they create will emit gravitational waves in a single preferred direction, like a rocket, creating a new black hole that can shoot out as fast as 5,000 kilometers per second — so, insanely fast,” Rodriguez says. “It only takes a kick of maybe a few tens to a hundred kilometers per second to escape one of these clusters.”
Because of this effect, scientists have largely figured that the product of any black hole merger would get kicked out of the cluster, since it was assumed that most black holes are rapidly spinning.
This assumption, however, seems to contradict the measurements from LIGO, which has so far only detected binary black holes with low spins. To test the implications of this, Rodriguez dialed down the spins of the black holes in his simulations and found that in this scenario, nearly 20 percent of binary black holes from clusters had at least one black hole that was formed in a previous merger. Because they were formed from other black holes, some of these second-generation black holes can be in the range of 50 to 130 solar masses. Scientists believe black holes of this mass cannot form from a single star.
Rodriguez says that if gravitational-wave telescopes such as LIGO detect an object with a mass within this range, there is a good chance that it came not from a single collapsing star, but from a dense stellar cluster.
“My co-authors and I have a bet against a couple people studying binary star formation that within the first 100 LIGO detections, LIGO will detect something within this upper mass gap,” Rodriguez says. “I get a nice bottle of wine if that happens to be true.”
This research was supported in part by the MIT Pappalardo Fellowship in Physics, NASA, the National Science Foundation, the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) at Northwestern University, the Institute of Space Sciences (ICE, CSIC) and Institut d’Estudis Espacials de Catalunya (IEEC), and the Tata Institute of Fundamental Research in Mumbai, India.
Source : MIT |
As the two-year anniversary of the Deepwater Horizon oil spill in the Gulf of Mexico approaches, a team of scientists led by Dr. Peter Roopnarine of the California Academy of Sciences has detected evidence that pollutants from the oil have entered the ecosystem's food chain. For the past two years, the team has been studying oysters (Crassostrea virginica) collected both before and after the Deepwater Horizon oil reached the coasts of Louisiana, Alabama, and Florida. These animals can incorporate heavy metals and other contaminants from crude oil into their shells and tissue, allowing Roopnarine and his colleagues to measure the impact of the spill on an important food source for both humans and a wide variety of marine predators. The team's preliminary results demonstrate that oysters collected post-spill contain higher concentrations of heavy metals in their shells, gills, and muscle tissue than those collected before the spill. In much the same way that mercury becomes concentrated in large, predatory fish, these harmful compounds may get passed on to the many organisms that feed on the Gulf's oysters.
"While there is still much to be done as we work to evaluate the impact of the Deepwater Horizon spill on the Gulf's marine food web, our preliminary results suggest that heavy metals from the spill have impacted one of the region's most iconic primary consumers and may affect the food chain as a whole," says Roopnarine, Curator of Geology at the California Academy of Sciences.
The research team collected oysters from the coasts of Louisiana, Alabama, and Florida on three separate occasions after the Deepwater Horizon oil had reached land: August 2010, December 2010, and May 2011. For controls, they also examined specimens collected from the same localities in May 2010, prior to the landfall of oil; historic specimens collected from the Gulf in 1947 and 1970; and a geographically distant specimen collected from North Carolina in August 2010.
Oysters continually build their shells, and if contaminants are present in their environment, they can incorporate those compounds into their shells. Roopnarine first discovered that he could study the growth rings in mollusk shells to evaluate the damage caused by oil spills and other pollutants five years ago, when he started surveying the shellfish of San Francisco Bay. His work in California revealed that mollusks from more polluted areas, like the waters around Candlestick Park, had incorporated several heavy metals that are common in crude oil into their shells.
To determine whether or not the Gulf Coast oysters were incorporating heavy metals from the Deepwater Horizon spill into their shells in the same manner, Roopnarine and his colleagues used a method called "laser ablation ICP-MS," or inductively coupled plasma mass spectrometry. First, a laser vaporizes a small bit of shell at different intervals along the shell's growth rings. Then the vaporized sample is superheated in plasma, which causes the various elements in the sample to radiate light at specific, known frequencies. This light allows scientists to identify and quantify which chemical elements are present in a particular growth ring. Roopnarine and his colleagues measured higher concentrations of three heavy metals common in crude oil—vanadium, cobalt, and chromium—in the post-spill specimens they examined compared to the controls, and this difference was found to be statistically significant.
In a second analysis, the scientists used ICP-MS to analyze gill and muscle tissue in both pre-spill and post-spill specimens. They found higher concentrations of vanadium, cobalt, and lead in the post-spill specimens, again with statistical significance.
In a final analysis, the team examined oyster gill tissue under the microscope and found evidence of "metaplasia," or transformation of tissues in response to a disturbance, in 89 percent of the post-spill specimens. Cells that were normally columnar (standing up straight) had become stratified (flattened)—a known sign of physical or chemical stress in oysters. Stratified cells have much less surface area available for filter feeding and gas exchange, which are the primary functions of oyster gills. Oysters suffering from this type of metaplasia will likely have trouble reproducing, which will lead to lower population sizes and less available food for oyster predators.
The team presented their data at a poster session at the American Geophysical Union meeting in December 2011, and is preparing their preliminary findings for publication. However, their work is just beginning. In addition to increasing the number of pre- and post-spill oyster specimens in their analysis, the team also plans to repeat their analyses using another bivalve species, the marsh mussel (Geukensia demissa). Roopnarine is also planning to create a mathematical model linking the oyster and mussel to other commercially important species, such as mackerel and crabs, to demonstrate the potential impact of the oil spill on the Gulf food web. Scientists don't currently know how these types of trace metals move through the food web, how long they persist, or how they impact the health of higher-level consumers, including humans—but the construction of a data-driven computer model will provide the framework for tackling these important questions.
Roopnarine and his colleagues have faced a number of challenges during the course of their study. Unfortunately, pure crude oil samples from Deepwater Horizon have remained inaccessible, making it impossible for the team to compare the heavy metal ratios they have documented in the oysters to the ratios found in the Deepwater Horizon oil. Additionally, the chemical compositions of artificial dispersants and freshwater that were intentionally spread in the Gulf to alleviate the spill are also unknown—additional variables that could affect the team's research. The team is hopeful that they will eventually be able to analyze these samples, thus shedding more light on their results. |
The focal length is the basic description of a photographic lens, usually represented in millimeters. It is not the actual length of the lens, but a calculation of the optical distance from the point where light rays converge to form a sharp image to the digital sensor of the digital camera. The focal length of a lens is determined when the lens is focused at infinity (i.e. when the incoming light rays are effectively parallel to the lens axis).
Focal length is fairly easy to understand with a lens that has a single element, but most camera lenses are made up of many separate individual elements. These compound lenses have an effective optical distance from the image plane, located somewhere among all the elements and groups, and the further that point is from the image plane, the longer the focal length. So when you focus on something closer than infinity, the elements are moved further away from the sensor and the lens physically extends.
What does the focal length tell us?
What we need to know, as photographers, is what focal length means for our images. When we talk about lenses, the focal length is not simply the lens's physical length; this linear measurement corresponds to an angular field of view.
The value of the focal length tells us two things:
1-The angle of view (how much of the scene will be captured).
2-The magnification (how large individual elements will be)
The longer the focal length, the narrower the angle of view and the higher the magnification.
The shorter the focal length, the wider the angle of view and the lower the magnification.
Angle of view
Lens manufacturers often publish the term “angle of view” or “maximum angle of view” in lens specifications, because they define what the lens is capable of seeing in degrees.
The angle of view of a photograph or camera is a measure of the proportion of a scene included in the image. Simply said: how many degrees of view are included in an image. A typical fixed-lens camera might have an angle of view of 50°, while a fisheye lens can have an angle of view greater than 180°.
There are three measures of the angle of view of a lens: the horizontal angle of view (HAOV), the vertical angle of view (VAOV), and the diagonal angle of view (DAOV).
For example, the Canon 50 mm lens has a maximum angle of view of 47°, while the Canon 100 mm lens has a maximum angle of view of only 24° when used on a full-frame camera.
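The quoted figures follow from standard lens geometry. The formula below is not stated in the text, so treat it as an assumption: for a lens focused at infinity, AOV = 2·arctan(d / (2f)), where d is the sensor dimension of interest. A quick Python check against the quoted Canon numbers, taking 43.3 mm as the full-frame diagonal:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_dimension_mm):
    """Angle of view for a lens focused at infinity: AOV = 2·arctan(d / (2·f))."""
    return math.degrees(2.0 * math.atan(sensor_dimension_mm / (2.0 * focal_length_mm)))

FULL_FRAME_DIAGONAL = 43.3   # mm, diagonal of a 36 mm x 24 mm full-frame sensor

print(round(angle_of_view_deg(50.0, FULL_FRAME_DIAGONAL)))    # 47 degrees, as quoted for the 50 mm lens
print(round(angle_of_view_deg(100.0, FULL_FRAME_DIAGONAL)))   # 24 degrees, as quoted for the 100 mm lens
```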
The digital sensor of a camera is often much smaller than 35mm film (full frame) because of high production costs. This reduction of the sensor size results in cutting off the image corners, a process that photographers call "cropping". The interesting thing is that the image is actually not cut by the sensor or the camera – parts of the image are simply ignored. You may check my post about full frame vs crop sensor HERE.
To have the same field of view as a 135mm lens mounted on a full-frame camera, you would need roughly a 100mm lens on a cropped-sensor camera (1.3× crop factor). For example, if you were standing in one spot and could fit a building in your frame using a 135mm lens on a full-frame/35mm camera, then to fit that same building on a cropped-sensor camera, you would need a wider lens with a focal length of about 100mm.
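The crop-factor arithmetic behind that example can be written in a couple of lines of Python; the 1.3× factor is the one mentioned in the text, and the function name is just a hypothetical helper:

```python
def crop_equivalent_focal_length(full_frame_focal_mm, crop_factor):
    """Focal length needed on a cropped-sensor body to match the field of view
    that full_frame_focal_mm gives on a full-frame body."""
    return full_frame_focal_mm / crop_factor

# The text's example: matching a 135 mm full-frame field of view on a 1.3x body
print(crop_equivalent_focal_length(135.0, 1.3))   # ≈ 103.8 mm, roughly the 100 mm lens mentioned
```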
The following table shows the different values of the angle of view for some Canon lenses with different sensor sizes.
Although you will rarely use these figures as a photographer, they give you a sense of what increasing the focal length does to the angle of view for different crop factors.
Focal length and perspective
Some photographers say that focal length determines the perspective of a photo, but in fact perspective changes only with your location relative to the subject. If one tries to fill the frame with the same subject using a wide-angle lens and then a telephoto lens, perspective does indeed appear to change, because one is forced to move closer or farther from the subject. That is why the wide-angle lens seems to exaggerate or stretch perspective, whereas the telephoto lens seems to compress or flatten perspective.
Focal length and camera Shake
When you handhold a camera, no matter how steady your hands, the camera will be moving when you depress the shutter release. This movement causes blur in an image at varying degrees; sometimes not noticeable and other times, it is a real problem.
Unfortunately, when you use longer focal length lenses, the telephoto lenses, this shaking or movement is amplified by the fact that the field of view of the lens is smaller than that of wide-angle or normal lenses. Therefore, it is more difficult to get a sharp image at telephoto focal lengths, especially extreme focal lengths.
To minimize this shake, you can stabilize the camera on a tripod or other support; you may check my recommended tripod HERE. Another setting that helps to reduce camera shake is a faster shutter speed. The faster the shutter speed, the less movement will be captured.
The general rule for maintaining sufficient shutter speed for a given focal length, to avoid the appearance of image shake, is to simply use a shutter speed quicker than 1/focal length.
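Here is the 1/focal-length rule written out as a tiny Python helper. The optional crop-factor adjustment is a common refinement but an assumption beyond the simple rule stated above:

```python
def slowest_handheld_shutter(focal_length_mm, crop_factor=1.0):
    """Rule-of-thumb slowest shutter speed (in seconds) for hand-holding:
    1 / focal length, using the full-frame-equivalent focal length."""
    return 1.0 / (focal_length_mm * crop_factor)

print(slowest_handheld_shutter(50))         # 0.02     -> about 1/50 s
print(slowest_handheld_shutter(200, 1.6))   # 0.003125 -> about 1/320 s on a 1.6x body
```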
Focal length and lens types
Camera lenses can be classified into two main types, prime, and zoom lenses.
Prime lenses are those that have fixed focal lengths. My reviews of some Canon lenses are presented HERE. The main advantages of prime or fixed focal length lenses are their size and weight as well as their maximum aperture or f/stop. Prime lenses also tend to have a larger maximum aperture (f/1.4 to f/2.8). This is an advantage when shooting in low light conditions as it will increase the possibility of hand holding the camera and freezing the subject without shake or blur caused by the longer exposures.
Zoom lenses are those that have variable focal lengths. This is accomplished by physically changing the length of the lens. Their focal length is usually expressed as a range (e.g. 18-55mm, or 70-300mm). The main advantage of this type of lens is the creative flexibility given to the photographer, since he can use zoom as a powerful compositional tool without the need to move around. My reviews of some zoom lenses are presented HERE.
Focal length and the appropriate use of the lenses
- Ultra-Wide Angle 14-24mm
Ultra-wide angle lenses are the popular choice for landscapes, architectural, interiors, large group photos and when working in confined situations.
- Wide Angle 24-35mm
They are used widely by photojournalists for documenting situations as they are wide enough to include a lot of the context whilst still looking realistic.
- Standard 35mm-70mm
It’s in this range (at about 45-50mm) that the lens will reproduce what our eyes see. It is suitable for shooting on the street or with friends in a closed setting such as at the dinner table or the pub.
- Mild Telephoto 70-105mm
This is a good range for portrait lenses as the natural perspective of the lens will separate the face from the background without completely isolating the face. It is also suitable for Close-up photography or macro photography.
- Telephoto 105-300mm
Lenses in this range are often used for distant scenes such as sports, wildlife, and birds photography.
By being aware of the effects that your focal length will have on the resulting photos, you can set yourself up for shooting great photos. Nothing will teach you photography except practicing, so keep on practicing.
I hope you enjoyed reading the post and found it useful. If you have any comment or question, or need more clarification, please write it down below, and I will gladly answer your questions.
Anyone can learn math whether they're in higher math at school or just looking to brush up on the basics. After discussing ways to be a good math student, this article will teach you the basic progression of math courses and will give you the basic elements that you'll need to learn in each course. Then, the article will go through the basics of learning arithmetic, which will help both kids in elementary school and anyone else who needs to brush up on the fundamentals.
Part One of Six:
Keys to Being a Good Math Student
1. Show up for class. When you miss class, you have to learn the concepts either from a classmate or from your textbook. You'll never get as good of an overview from your friends or from the text as you will from your teacher.
- Come to class on time. In fact, come a little early and open your notebook to the right place, open your textbook and take out your calculator so that you're ready to start when your teacher is ready to start.
- Only skip class if you are sick. When you do miss class, talk to a classmate to find out what the teacher talked about and what homework was assigned.
2. Work along with your teacher. If your teacher works problems at the front of your class, then work along with the teacher in your notebook.
- Make sure that your notes are clear and easy to read. Don't just write down the problems. Also write down anything that the teacher says that increases your understanding of the concepts.
- Work any sample problems that your teacher posts for you to do. When the teacher walks around the classroom as you work, answer questions.
- Participate while the teacher is working a problem. Don't wait for your teacher to call on you. Volunteer to answer when you know the answer, and raise your hand to ask questions when you're unsure of what's being taught.
3. Do your homework the same day as it's assigned. When you do the homework the same day, the concepts are fresh on your mind. Sometimes, finishing your homework the same day isn't possible. Just make sure that your homework is complete before you go to class.
4. Make an effort outside of class if you need help. Go to your teacher during his or her free period or during office hours.
- If you have a Math Center at your school, then find out the hours that it's open and go get some help.
- Join a study group. Good study groups usually contain 4 or 5 people at a good mix of ability levels. If you're a "C" student in math, then join a group that has 2 or 3 "A" or "B" students so that you can raise your level. Avoid joining a group full of students whose grades are lower than yours.
Part Two of Six:
Learning Math in School
1. Start with arithmetic. In most schools, students work on arithmetic during the elementary grades. Arithmetic includes the fundamentals of addition, subtraction, multiplication and division.
- Work on drills. Doing a lot of arithmetic problems again and again is the best way to get the fundamentals down pat. Look for software that will give you lots of different math problems to work on. Also, look for timed drills to increase your speed.
- Repetition is the basis of math. The concept has to be not only learned, but put to work for you to remember it!
- You can also find arithmetic drills online, and you can download arithmetic apps onto your mobile device.
2. Progress to pre-algebra. This course will provide the building blocks that you'll need to solve algebra problems later on.
- Learn about fractions and decimals. You'll learn to add, subtract, multiply and divide both fractions and decimals. Regarding fractions, you'll learn how to reduce fractions and interpret mixed numbers. Regarding decimals, you'll understand place value, and you'll be able to use decimals in word problems.
- Study ratios, proportions and percentages. These concepts will help you to learn about making comparisons.
- Work with squares and square roots. When you've mastered this topic, you'll have the perfect squares of many numbers memorized. You'll also be able to work with equations containing square roots.
- Introduce yourself to basic geometry. You'll learn all of the shapes as well as 3D concepts. You'll also learn concepts like area, perimeter, volume and surface area, as well as information about parallel and perpendicular lines and angles.
- Understand some basic statistics. In pre-algebra, your introduction to statistics mostly includes visuals like graphs, scatter plots, stem-and-leaf plots and histograms.
- Learn algebra basics. These will include concepts like solving simple equations containing variables, learning about properties like the distributive property, graphing simple equations and solving inequalities.
3. Advance to Algebra I. In your first year of algebra, you will learn about the basic symbols involved in algebra. You'll also learn to:
- Solve linear equations and inequalities that contain 1-2 variables. You'll learn how to solve these problems not only on paper, but sometimes on a calculator as well.
- Tackle word problems. You'll be surprised how many everyday problems that you'll face in your future involve the ability to solve algebraic word problems. For example, you'll use algebra to figure out the interest rate that you earn on your bank account or on your investments. You can also use algebra to figure out how long you'll have to travel based on the speed of your car.
- Work with exponents. When you start solving equations with polynomials (expressions containing both numbers and variables), you'll have to understand how to use exponents. This may also include working with scientific notation. Once you have exponents down, you can learn to add, subtract, multiply and divide polynomial expressions.
- Understand functions and graphs. In algebra, you'll really get into graphic equations. You'll learn how to calculate the slope of a line, how to put equations into point-slope form, and how to calculate the x- and y-intercepts of a line using slope-intercept form.
- Figure out systems of equations. Sometimes, you're given 2 separate equations with both x and y variables, and you have to solve for x or y for both equations. Fortunately, you'll learn many tricks for solving these systems, including graphing, substitution and addition (see the short sketch after this list).
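To make this concrete, here is a minimal Python sketch of the elimination idea for a system of two linear equations; the function name, the sample system, and the use of Cramer's rule are my own illustration rather than anything prescribed above.

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 (Cramer's rule, equivalent to elimination)."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("No unique solution: the lines are parallel or identical")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# x + y = 10 and x - y = 2  ->  x = 6.0, y = 4.0
print(solve_2x2(1, 1, 10, 1, -1, 2))
```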
4. Get into geometry. In geometry, you'll learn about the properties of lines, segments, angles and shapes.
- You'll memorize a number of theorems and corollaries that will help you to understand the rules of geometry.
- You'll learn how to calculate the area of a circle, how to use the Pythagorean theorem and how to figure out relationships between angles and sides of special triangles.
- You'll see a lot of geometry on future standardized tests like the SAT, the ACT and the GRE.
5. Take on Algebra II. Algebra II builds on the concepts that you learned in Algebra I but adds more advanced topics involving non-linear functions and matrices.
6. Tackle trigonometry. You know the words of trig: sine, cosine, tangent, etc. Trigonometry will teach you many practical ways to calculate angles and lengths of lines, and these skills will be invaluable for people who go into construction, architecture, engineering or surveying.
7. Count on some calculus. Calculus may sound intimidating, but it's an amazing tool chest for understanding both the behavior of numbers and the world around you.
- Calculus will teach you about functions and about limits. You'll see the behavior of a number of useful functions, including e^x and logarithmic functions.
- You'll also learn how to calculate and work with derivatives. A first derivative gives you the slope of the line tangent to a curve at a point; it tells you the rate at which something is changing in a non-linear situation. A second derivative tells you whether that rate of change is increasing or decreasing over a certain interval, which lets you determine the concavity of the function.
- Integrals will teach you how to calculate the area beneath a curve as well as volume.
- High school calculus usually ends with sequences and series. Although students won't see many applications for series, they are important to people who go on to study differential equations.
- Calculus is still only the beginning for some. If you are considering a career with a high involvement of math and science, like an engineer, try going a bit farther!
Part Three of Six:
Math Fundamentals--Ace Some Addition
1. Start with "+1" facts. Adding 1 to a number takes you to the next highest number on the number line. For example, 2 + 1 = 3.
2. Understand zeroes. Any number added to zero equals the same number because "zero" is the same as "nothing."
3. Learn doubles. Doubles are problems that involve adding two of the same number. For example, 3 + 3 = 6 is an example of an equation involving doubles.
4. Use mapping to learn about other addition solutions. In the example below, you learn through mapping what happens when you add 3 to 5, 2 and 1. Try the "add 2" problems on your own.
5. Go beyond 10. Learn to add 3 numbers together to get a number larger than 10.
6. Add larger numbers. Learn about regrouping 1s into the 10s place, 10s into the 100s place, etc. (a short code sketch follows these steps).
- Add the numbers in the right column first. 8 + 4 = 12, which means you have one 10 and two 1s. Write down the 2 under the 1s column.
- Write the 1 over the 10s column.
- Add the 10s column together.
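As a small illustration of the regrouping rule in these steps (the function name and example numbers are my own, not from the article), here is a Python sketch that adds column by column, carrying into the next place:

```python
def add_by_columns(a, b):
    """Add two non-negative integers digit by digit, carrying into the next place."""
    digits_a = [int(d) for d in str(a)][::-1]  # 1s place first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry      # e.g. 8 + 4 = 12 in the 1s column
        result.append(total % 10)    # write the 2 under the 1s column
        carry = total // 10          # carry the 1 over to the 10s column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(add_by_columns(38, 24))  # 62: 8 + 4 = 12, write 2 and carry 1; 3 + 2 + 1 = 6
```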
Part Four of Six:
Math Fundamentals--Strategies for Subtraction
1. Start with "backwards 1." Subtracting 1 from a number takes you backwards 1 number. For example, 4 - 1 = 3.
2. Learn doubles subtraction. For instance, you add the doubles 5 + 5 to get 10. Just write the equation backward to get 10 - 5 = 5.
- If 5 + 5 = 10, then 10 - 5 = 5.
- If 2 + 2 = 4, then 4 - 2 = 2.
3. Memorize fact families. For example:
- 3 + 1 = 4
- 1 + 3 = 4
- 4 - 1 = 3
- 4 - 3 = 1
4. Find the missing numbers. For example, ___ + 1 = 6 (the answer is 5). This also sets the foundation for algebra and beyond.
5. Memorize subtraction facts up to 20.
6. Practice subtracting 1-digit numbers from 2-digit numbers without borrowing. Subtract the numbers in the 1s column and bring down the number in the 10s column.
7. Practice place value to prepare for subtracting with borrowing.
- 32 = 3 10s and 2 1s.
- 64 = 6 10s and 4 1s.
- 96 = __ 10s and __ 1s.
8. Subtract with borrowing (a short code sketch follows these steps).
- You want to subtract 42 - 37. You start by trying to subtract 2 - 7 in the 1s column. However, that doesn't work!
- Borrow 10 from the 10s column and put it into the 1s column. Instead of 4 10s, you now have 3 10s. Instead of 2 1s, you now have 12 1s.
- Subtract your 1s column first: 12 - 7 = 5. Then, check the 10s column. Since 3 - 3 = 0, you don't have to write 0. Your answer is 5.
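The same borrowing procedure can be written as a small Python sketch; the helper name is my own, and the 42 - 37 check simply mirrors the worked example above.

```python
def subtract_with_borrowing(a, b):
    """Subtract b from a (assumes a >= b >= 0), borrowing from the next column when needed."""
    top = [int(d) for d in str(a)][::-1]       # 1s place first
    bottom = [int(d) for d in str(b)][::-1]
    result = []
    for i in range(len(top)):
        t = top[i]
        u = bottom[i] if i < len(bottom) else 0
        if t < u:                # e.g. 2 - 7 doesn't work...
            top[i + 1] -= 1      # ...so borrow 10 from the next column
            t += 10              # 2 ones become 12 ones
        result.append(t - u)     # 12 - 7 = 5
    return int("".join(str(d) for d in reversed(result)).lstrip("0") or "0")

print(subtract_with_borrowing(42, 37))  # 5
```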
Part Five of Six:
Math Fundamentals--Master Multiplication
1. Start with 1s and 0s. Any number times 1 is equal to itself. Any number times zero equals zero.
2. Memorize the multiplication table.
3. Practice single-digit multiplication problems.
4. Multiply 2-digit numbers times 1-digit numbers.
- Multiply the bottom right number by the top right number.
- Multiply the bottom right number by the top left number.
5. Multiply two 2-digit numbers.
- Multiply the bottom right number by the top right and then the top left numbers.
- Shift the second row one digit to the left.
- Multiply the bottom left number by the top right and then the top left numbers.
- Add the columns together.
6. Multiply and regroup the columns (a short code sketch follows these steps).
- You want to multiply 34 x 6. You start by multiplying the 1s column (4 x 6), but you can't have 24 1s in the 1s column.
- Keep 4 1s in the 1s column. Move the 2 10s over to the 10s column.
- Multiply 6 x 3, which equals 18. Add the 2 that you carried over, which will equal 20.
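Here is the same regrouping written as a short Python sketch (the function name is mine, and the 34 x 6 check just repeats the example above):

```python
def multiply_by_one_digit(number, digit):
    """Multiply a multi-digit number by a 1-digit number, carrying as you go."""
    digits = [int(d) for d in str(number)][::-1]   # 1s place first
    result, carry = [], 0
    for d in digits:
        product = d * digit + carry   # e.g. 4 x 6 = 24 in the 1s column
        result.append(product % 10)   # keep 4 ones in the 1s column
        carry = product // 10         # move the 2 tens over to the 10s column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(multiply_by_one_digit(34, 6))  # 204: 4 x 6 = 24, keep 4 carry 2; 6 x 3 = 18, plus 2 is 20
```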
Part Six of Six:
Math Fundamentals--Discover Division
1. Think of division as the opposite of multiplication. If 4 x 4 = 16, then 16 / 4 = 4.
2. Write out your division problem.
- Divide the number to the left of the division symbol, or the divisor, into the first number under the division symbol. Since 6 / 2 = 3, you'll write 3 on top of the division symbol.
- Multiply the number on top of the division symbol by the divisor. Bring the product down under the first number under the division symbol. Since 3 x 2 = 6, then you'll bring a 6 down.
- Subtract the 2 numbers that you've written. 6 - 6 = 0. You can leave the 0 blank also, since you don't usually start a new number with 0.
- Bring the second number that is under the division symbol down.
- Divide the number that you brought down by the divisor. In this case, 8 / 2 = 4. Write 4 on top of the division symbol.
- Multiply the top right number by the divisor and bring the number down. 4 x 2 = 8.
- Subtract the numbers. The final subtraction equals zero, which means that you have finished the problem. 68 / 2 = 34.
3. Account for remainders. Some divisors won't divide evenly into other numbers. When you've finished your final subtraction, and you have no more numbers to bring down, then the final number is your remainder. (A short code sketch of the whole procedure follows.)
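For readers who like to see the procedure as code, here is a minimal Python sketch of long division with a remainder; the function name and the sample problems are my own illustration.

```python
def long_division(dividend, divisor):
    """Divide digit by digit, bringing one digit down at a time, as in long division."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)       # bring the next digit down
        quotient_digits.append(remainder // divisor)  # how many times the divisor fits
        remainder = remainder % divisor               # what is left after subtracting
    quotient = int("".join(str(d) for d in quotient_digits))
    return quotient, remainder

print(long_division(68, 2))   # (34, 0)
print(long_division(17, 5))   # (3, 2) -- quotient 3 with a remainder of 2
```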
I don't understand perimeter and area. It is getting very difficult for me to cope with it! I've even asked my tutor and still don't understand it. Can you help me?
- The perimeter of any straight-sided figure is the sum of all its sides. For example, the perimeter of a rectangle is equal to its width plus its length plus its width plus its length. The area of a straight-sided figure is expressed by a formula for that particular kind of figure (triangle, rectangle, etc.). For example, the area of a rectangle is equal to its length multiplied by its width.
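A tiny Python sketch of those two formulas, with names and numbers chosen only for illustration:

```python
def rectangle_perimeter(length, width):
    # Perimeter: add up every side as you walk around the figure.
    return length + width + length + width

def rectangle_area(length, width):
    # Area: how many unit squares fit inside the figure.
    return length * width

print(rectangle_perimeter(5, 3))   # 16
print(rectangle_area(5, 3))        # 15
```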
How do I divide two uneven numbers?
- It depends on the numbers. Sometimes you will get a decimal as your answer; for example, 5 divided by 3 is 1.67. But, if they are part of a fact family, they go evenly into each other. For example, 21 divided by 7 would be 3.
I can't understand logarithms, what can I do?
- There are many Web-based tutorials on logarithms, including wikiHow's Understand Logarithms. If none of those help you, find a friend or acquaintance who does understand the subject, and ask for some help. Some people are better at explaining math than others, so you may have to ask for help from more than one acquaintance.
Which textbook will you recommend for an 11th grade student?
How do I add and subtract dissimilar fractions?
- Mathematics is not a passive activity. You cannot learn mathematics by reading a textbook. Use online tools or worksheets from your teacher to practice problems until you understand the concepts.
- Concepts are the part of math that cannot be forsaken. Sometimes it is better to know the concepts and get the answer wrong than to not know the concepts and get it right.
- Practice topic by topic. Master a topic at a time, so that you can find out your strengths and weaknesses. Once you've got all the topics covered, start doing practice papers. The more practice, the better!
Things You'll Need
- Writing utensil (pencil or pen)
- Geometry set |
Point Slope Functions #2
In this slope-intercept learning exercise, students find the y-intercept and graph 6 linear functions. Each function includes a chart and place to show work as students compute the y-intercept and slope of the line.
Point-Slope Application Problems
Create a linear equation for a problem when the intercept information is not given. The two-day lesson introduces the class to the point-slope form, which can be used for problems when the initial conditions are not provided. Pupils...
8th - 10th Math CCSS: Designed
Writing Equations of Lines Using The Point-Slope Form
In this writing equations worksheet, students find the equation of a line containing specified points. They determine the slope of a line from given points. This two-page worksheet contains 20 problems. Examples and explanations are...
8th - 10th Math
Modeling with Quadratic Functions (part 2)
How many points are needed to define a unique parabola? Individuals work with data to answer this question. Ultimately, they determine the quadratic model when given three points. The concept is applied to data from a dropped object,...
9th - 10th Math CCSS: Designed
How Do You Use Point-Slope Form to Write an Equation from a Table?
Given a table of x and y values, is it possible to write a linear equation from this information? There will be several steps involved (see the short sketch below). First, use the slope formula with two points from the table to find the slope. Then use that slope and one of the ordered...
7 mins 7th - 9th Math
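Putting the steps just described into code, here is a minimal Python sketch that finds the slope from two table points and writes the point-slope form; the function names and the sample points are assumptions for illustration only.

```python
def slope(p1, p2):
    """Slope between two points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def point_slope_equation(m, point):
    """Return the point-slope form y - y1 = m(x - x1) as a string."""
    x1, y1 = point
    return f"y - {y1} = {m}(x - {x1})"

pts = [(1, 3), (3, 7)]                   # two rows from a hypothetical table
m = slope(pts[0], pts[1])                # (7 - 3) / (3 - 1) = 2.0
print(point_slope_equation(m, pts[0]))   # y - 3 = 2.0(x - 1)
```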
How Do You Write an Equation of a Line in Point-Slope Form If You Have the Slope and One Point?
Given the coordinates for a point on a line and given the slope of that line, write an equation in point-slope form. Really, all that needs to be done is to use the formula for point-slope form and plug in the values.
2 mins 8th - 11th Math |
LAN (Local Area Network) refers to a group of computers interconnected into a network so that they are able to communicate, exchange information and share resources (e.g. printers, application programs, database etc). In other words, the same computer resources can be used by multiple users in the network, regardless of the physical location of the resources.
Each computer in a LAN can effectively send and receive any information addressed to it. This information is sent in the form of data 'packets'. The standards followed to regularize the transmission of packets are called LAN standards. There are many LAN standards, such as Ethernet, Token Ring, and FDDI. LAN standards usually differ in their media access technology and physical transmission medium. Some popular technologies and standards are covered in this article.
Media Access Control methods
There are different types of Media Access Control methods in a LAN; the prominent ones are described below:
Ethernet - Ethernet is a 10 Mbps LAN that uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol to control access to the network. When an endstation (network device) transmits data, every endstation on the LAN receives it. Each endstation checks the data packet to see whether the destination address matches its own address. If the addresses match, the endstation accepts and processes the packet. If they do not match, it disregards the packet. If two endstations transmit data simultaneously, a collision occurs and the result is a composite, garbled message. All endstations on the network, including the transmitting endstations, detect the collision and ignore the message. Each endstation that wants to transmit then waits a random amount of time before attempting to transmit again (a rough sketch of this backoff logic follows). This method is used for traditional Ethernet LANs.
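The "wait a random amount of time and try again" rule is commonly implemented as truncated binary exponential backoff; the Python sketch below is only illustrative, and the 51.2-microsecond slot time and the retry cap are assumptions rather than values given in this article.

```python
import random

def csma_cd_backoff(collisions, slot_time_us=51.2, max_exponent=10):
    """Pick a random wait (in microseconds) before retransmitting after a collision.

    Truncated binary exponential backoff: after the n-th collision, wait a random
    whole number of slot times between 0 and 2**min(n, max_exponent) - 1.
    """
    slots = random.randint(0, 2 ** min(collisions, max_exponent) - 1)
    return slots * slot_time_us

# On average, the wait roughly doubles after each successive collision.
for n in range(1, 5):
    print(f"collision {n}: wait {csma_cd_backoff(n):.1f} microseconds")
```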
Token Ring - This is a 4 Mbps or 16 Mbps token-passing method, operating in a ring topology. Devices on a Token Ring network get access to the media through token passing. Token and data pass to each station on the ring. The devices pass the token around the ring until a computer that wants to transmit data takes the token and replaces it with a frame. Each device passes the frame to the next device, until the frame reaches its destination. As the frame passes to the intended recipient, the recipient sets certain bits in the frame to indicate that it received the frame. The original sender of the frame then strips the frame data off the ring and issues a new token.
Fast Ethernet - This is an extension of the 10 Mbps Ethernet standard and supports speeds up to 100 Mbps. The access method used is CSMA/CD. For physical connections, a star wiring topology is used. Fast Ethernet is becoming very popular because upgrading from a 10 Mbps Ethernet LAN to a Fast Ethernet LAN is quite easy.
FDDI (Fiber Distributed Data Interface) - FDDI provides data speeds of 100 Mbps, which is faster than Token Ring and Ethernet LANs. FDDI comprises two independent, counter-rotating rings: a primary ring and a secondary ring. Data flows in opposite directions on the rings. The counter-rotating ring architecture prevents data loss in the event of a link failure, a node failure, or the failure of both the primary and secondary links between any two nodes. This technology is usually implemented for a backbone network.
In the late 1960s and the early 1970s researchers developed a form of computer communication known as Local Area Networks (LANs). These are different from long-distance communications because they rely on sharing the network. Each LAN consists of a single shared medium, usually a cable, to which many computers are attached. The computers co-ordinate and take turns using the medium to send packets.
Unfortunately, this mechanism does not scale. Co-ordination requires communication, and the time to communicate depends on distance - large geographic separation between computers introduces longer delays. Therefore, shared networks with long delays are inefficient. In addition, providing high bandwidth communication channels over long distances is very expensive.
There are a number of different LAN technologies. Each technology is classified into a category according to its topology, or general shape. The first of these is a star topology, as illustrated in Figure 3.
Figure 3: Star topology
The hub accepts data from a sender and delivers it to the receiver. In practice, a star network seldom has a symmetric shape; the hub often resides in a separate location from the computers attached to it.
A network using a ring topology arranges the computers in a circle - the first computer is cabled to the second. Another cable connects the second computer to the third, and so on, until a cable connects the final computer back to the first. This is illustrated in Figure 4.
Figure 4: Ring topology
Once again, the ring, like the star topology, refers to logical connections, not physical orientation.
A network that uses a bus topology consists of a number of computers all connected to a single, long cable. Any computer attached to the bus can send a signal down the cable, and all computers receive the signal. This is illustrated in Figure 5.
Figure 5: Bus topology
Basic LAN components
There are essentially five basic components of a LAN:
Network Devices: such as workstations, printers, and file servers, which are normally accessed by all other computers
Network Communication Devices: devices such as hubs, routers, and switches, used for network operations
Network Interface Cards (NICs): one for each network device that needs to access the network
Cable: the physical transmission medium
Network Operating System: the software applications required to control the use of the network, following the LAN standards
|
Converting hexadecimal, binary and decimal numbers.
Common Core Math Vocabulary
Are you as smart as a fourth grader? Test your basic Common Core Math vocabulary and find out!
Addition for Kindergarten - Let's Match!
Find the answer!
An equation of a line and a point is given. Your job is to find a parallel line to the equation that goes through the given point.
Multiplying Integers Memory Match!
The problems here aren't too basic and are best for mid-higher levels of Multiplying Integers.
3rd Grade Addition, Subtraction, and Place Value
Addition, Subtraction, Properties of Addition, Rounding to the tens and hundreds place, and Estimation.
Memories in Circles
Match the different terms related to circles to their descriptions.
Tools to Measure With
Match the tool with what it measures
Multiplication and Division with large numbers.
Answer questions correctly to make music and get new band members.
Steines Squid Hunter - Algebra
Students will practice solving basic equations with this game. |
Jasmine has taught college Mathematics and Meteorology and has a master's degree in applied mathematics and atmospheric sciences.
In this lesson, we explore the idea and definition of a tangent line both visually and algebraically. After learning how to calculate a tangent line to a curve, you will find a short quiz to test your knowledge.
What Is a Tangent Line?
Let's say we're on a rollercoaster…in space! We're held to the track by the wheels of the cart, but if the cart were to suddenly disconnect from the track, we would soar off the track in a straight line because of the lack of gravity. Of course, it would be nice to be rescued from space, so what line would the rescuers follow in order to find us? As it turns out, they would find our cart along a very special line called the tangent line, like the one you are looking at now:
A tangent line is a straight line that just barely touches a curve at one point. The idea is that the tangent line and the curve are both going in the same direction at the point of contact. If we have a very wavy curve, the tangent line and the curve don't really seem to have much in common because the tangent line is perfectly straight. However, as we zoom in closer and closer to the point where the tangent line touches the curve, we can see that they have more in common than we thought, and they do look quite similar!
Now that we have a conceptual idea of what a tangent line is, we need to understand how to define one mathematically. There are two important elements to finding an equation that defines a tangent line: its slope and its point of contact with a curve. A line's slope is its steepness: how quickly it changes vertically relative to how far it travels horizontally.
To find the slope of a tangent line, we actually look first to an equation's secant line, or a line that connects two points on a curve. To find the equation of a line, we need the slope of that line. With a tangent line, that can be tricky, but with a secant line, because we have two points, it's no problem!
The slope of this secant line, which passes through the points (a, f(a)) and (a + h, f(a + h)), is shown in the formula below. You might recognize this formula from precalculus; it's called the difference quotient:
slope of secant line = [f(a + h) - f(a)] / h
So, how does this help us with the tangent line? Well, imagine that we took that second point (a + h , f(a + h)) and brought it closer to our first point. The closer it gets to the first point, the more the secant line starts to resemble the tangent line! We bring it closer and closer and closer… which is the mathematical idea of a limit. As h approaches zero, this turns our secant line into our tangent line, and now we have a formula for the slope of our tangent line! It is the limit of the difference quotient as h approaches zero.
Assuming you are familiar with the basics of calculus, you will recognize this as the definition of the derivative of our function f(x) at x = a, denoted in prime notation as f '(a). The derivative of a function is the instantaneous rate of change of the function and the slope of the line tangent to the curve.
Equation of the Tangent Line
Now that we have the slope of the tangent line, all we would need is a point on the tangent line to complete the equation of our line. That's easy, because we know that our tangent line went through the point (a , f(a)). Let's now build the equation of our line using point-slope form of a line:
y - y1 = m(x - x1), where (x1, y1) is a known point on the line, and m is the slope of the line
Substituting our point (a, f(a)) and slope f '(a) gives the tangent line equation y = f(a) + f '(a)(x - a), which is valid for almost all points on a curve y = f(x), with the following exceptions:
In the special case where a tangent line is vertical, its slope would be undefined and we wouldn't be able to use the equation from before. In this case, we would use the equation of a vertical line that goes through the point (a , f(a)), which would simply be the equation x = a.
If the function is discontinuous where x = a (as in any holes, breaks, or jumps in the graph), the function doesn't have a tangent line at that point. Finally:
If the function has a sharp corner or edge at x = a, the function does not have a line tangent to it at that point. Tangent lines only exist where the function's curve is smooth.
Let's find the equation of the line tangent to the curve of the function f(x) = x^2 when x = 1. We're already given the x-value of the point (x = 1), but to determine the corresponding y-value, let's plug in x = 1: f(1) = (1)^2 = 1. So, we know the point is (1,1).
Next let's find the slope of the line, which would be the derivative at x = 1:
f'(x) = 2x and f'(1) = 2
So the equation of our line becomes:
y = f(1) + f'(1)(x - 1), which simplifies to
y = 1 + 2(x - 1), which simplifies further to
y = 2x - 1
The graph of y = x^2 and y = 2x - 1 confirms visually that we have calculated the tangent line correctly, and we're done!
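As a quick numerical check of this example (assuming Python as the tool; the helper names are mine), the difference-quotient slopes approach 2 as h shrinks, and the resulting tangent line is y = 2x - 1:

```python
def f(x):
    return x ** 2

def secant_slope(f, a, h):
    # Difference quotient: slope of the secant through (a, f(a)) and (a + h, f(a + h)).
    return (f(a + h) - f(a)) / h

a = 1
for h in [1, 0.1, 0.01, 0.001]:
    print(h, secant_slope(f, a, h))   # slopes approach 2, the slope of the tangent line

# Tangent line at x = 1: y = f(a) + f'(a)(x - a) = 1 + 2(x - 1) = 2x - 1
tangent = lambda x: 1 + 2 * (x - 1)
print(tangent(3))                     # 5, so the point (3, 5) lies on y = 2x - 1
```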
Let's take a couple of moments to review what a tangent line is and what its equation is. A tangent line is a straight line that just barely touches a curve at one point. The idea is that the tangent line and the curve are both going in the exact same direction at the point of contact. The slope, or the steepness, of the tangent line is determined by the function's instantaneous rate of change at that point. The slope of the line is found by creating a derivative function based on a secant line's approach to the tangent line. A secant line is a line that connects two points on a curve.
For smooth, continuous curves with non-vertical slopes, we can calculate the tangent line using the formula:
y = f(a) + f '(a)(x - a)
If the curve has a vertical tangent line, the equation reduces to x = a, and if the curve has a break or a sharp corner, then the curve has no tangent line at that point.
|
File Name: area and perimeter of rectangle and square worksheets .zip
The worksheets are very varied. Each worksheet is randomly generated and thus unique.
Welcome to our Area and Perimeter of Rectangle worksheets and support page. Here you will find a range of free printable worksheets which will help your child to learn to work out the areas and perimeters of a range of rectangles and rectilinear shapes.
Equip future architects, aeronauts, coast guards, graphic designers with this meticulously designed assemblage of printable area worksheets to figure out the area of irregular figures, area of 2D shapes like squares, rectangles, triangles, parallelograms, trapezoids, quadrilaterals, rhombus, circles, polygons, kites, mixed and compound shapes using appropriate area formulas. Sample our free worksheets that are exclusively drafted for grade 2 through grade 8 children. The children in the 2nd grade and 3rd grade enhance practice with this interesting collection of pdf worksheets on finding the area by counting unit squares. Included here area exercises to count the squares in the irregular figures and rectangular shapes. Give learning a head start with these finding the area of a square worksheets. Figure out the area of squares using the formula, determine the side lengths, find the length of the diagonals and calculate the perimeter using the area as well.
Help primary children know their shapes inside and out with these teaching resources for area and perimeter Boris Johnson gives Timotay Playscapes playground project the thumbs on school visit. Year 3 reading comprehension — 12 of the best worksheets and resources for LKS2 literacy. These area worksheets provide extra challenge for Year 4 children by getting them to find the area of rectilinear shapes by counting squares. There are a variety of area problems spread across three sections, enabling you to use the whole sheet during a lesson or to select specific problems for different teaching sessions, and separate answer sheets are included for all sections.
Instruct children to add up the lengths along the boundary of each figure to find the perimeter of the composite figures that are made up of two or more simple shapes. Free printable worksheets for the area and perimeter of rectangles and squares for grades , including word problems, missing side problems, and more. Because area is an amount of space, it has to be measured in squares. Draw a rhombus with a perimeter of 68 and one diagonal of … Welcome to our Perimeter worksheets page.
Recall the topic and practice the math worksheet on area and perimeter of rectangles. Students can practice the questions on area of rectangles and perimeter of rectangles. Find the area and perimeter of the following rectangles whose dimensions are given. The perimeter of a rectangle is cm. If the length of the rectangle is 70 cm, find its breadth and area. If the breadth of the rectangle is 8 cm, find its length and perimeter.
The length of a rectangle is 5 m greater than the width. MCQs to test the knowledge acquired have also been included. Word Problems Worksheet. Fill in the Blanks Worksheet. Find the area of the shaded region.
Area and Perimeter. Perimeter means the distance around a figure or curve. Perimeter of a Square: a square is a closed figure that has 4 sides of equal length and 4 equal angles of 90 degrees.
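For word problems of the form "given the perimeter and the length, find the breadth and area" (like the one quoted above, whose perimeter value is missing), here is a minimal Python sketch with assumed numbers:

```python
def breadth_from_perimeter(perimeter, length):
    # P = 2 * (length + breadth), so breadth = P / 2 - length
    return perimeter / 2 - length

perimeter, length = 200, 70                           # assumed values for illustration
breadth = breadth_from_perimeter(perimeter, length)
print(breadth, length * breadth)                      # breadth 30.0, area 2100.0
```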
|
Astronomers at UCLA announced on September 11, 2019 that they had discovered the supermassive black hole at the center of our Milky Way galaxy having…
Albert Einstein's general theory of relativity has once again been confirmed. The new data were obtained from a study of light from hundreds of thousands of distant galaxies. General relativity predicts that the wavelength of this light will be shifted by a small amount due to the galaxies' mass, in an effect called gravitational redshift. The effect is very difficult to measure, because it is the smallest of the three types of redshift, with redshift also being caused by the movement of the galaxies and the expansion of the universe as a whole. To disentangle the three sources of redshift, the researchers relied on the vast number of galaxies in the Sloan Digital Sky Survey sample, which allowed them to perform a statistical analysis.
The amount of redshift they found that appeared to be caused by gravity agreed exactly with the predictions of general relativity. According to Radoslaw Wojtak, an astrophysicist at the University of Copenhagen, they have independent measurements of the cluster masses, so they can calculate the expected gravitational redshift based on general relativity; it agrees exactly with the measurements of this effect. However, the findings still don't disprove an alternative theory of gravity invented to undo the need for dark energy, which is thought to be causing the accelerated expansion of the universe.
New Space Crew to Launch From Winter Wonderland
Disappeared Comet Dive Bombs
Astronomers Discovered Coolest Class of Stars
February 21 Partial Solar Eclipse
Astronomers Observe A Black Hole Firing A High Energy Bullet
Scientists Launched Rocket to Study Northern Lights
Comet Garradd Sails Slowly Past Globular Star Cluster M92
NASA's New James Webb Space Telescope |
A stellar core is the extremely hot, dense region at the center of a star. For an ordinary main sequence star, the core region is the volume where the temperature and pressure conditions allow for energy production through thermonuclear fusion of hydrogen into helium. This energy in turn counterbalances the mass of the star pressing inward; a process that self-maintains the conditions in thermal and hydrostatic equilibrium. The minimum temperature required for stellar hydrogen fusion exceeds 10^7 K (10 MK), while the density at the core of the Sun is over 100 g cm^-3. The core is surrounded by the stellar envelope, which transports energy from the core to the stellar atmosphere where it is radiated away into space.
Main sequence stars are distinguished by the primary energy generating mechanism in their central region, which joins four hydrogen nuclei to form a single helium atom through thermonuclear fusion. The Sun is an example of this class of star. Once stars with the mass of the Sun form, the core region reaches thermal equilibrium after about 100 million (10^8) years and becomes radiative. This means the generated energy is transported out of the core via radiation and conduction rather than through mass transport in the form of convection. Above this spherical radiation zone lies a small convection zone just below the outer atmosphere.
With decreasing stellar mass, the outer convection shell expands downward to take up an increasing proportion of the envelope, until at a mass of around 0.35 M☉ or below the entire star is convective, including the core region. These very low mass stars (VLMS) occupy the late range of the M-type main-sequence stars, or red dwarfs. The VLMS form the primary stellar component of the Milky Way at over 70% of the total population. The low mass end of the VLMS range reaches about 0.075 M☉, below which ordinary (non-deuterium) hydrogen fusion does not take place and the object is designated a brown dwarf. The temperature of the core region for a VLMS decreases with decreasing mass, while the density increases. For a star with 0.1 M☉, the core temperature is about 5 MK while the density is around 500 g cm^-3. Even at the low end of the temperature range, the hydrogen and helium in the core region is fully ionized.
Below about 1.2 M☉, energy production in the stellar core is predominantly through the proton–proton chain reaction, a process requiring only hydrogen. For stars above this mass, the energy generation comes increasingly from the CNO cycle, a hydrogen fusion process that uses intermediary atoms of carbon, nitrogen, and oxygen. In the Sun, only 1.5% of the net energy comes from the CNO cycle. For stars at 1.5 M☉ where the core temperature reaches 18 MK, half the energy production comes from the CNO cycle and half from the pp chain. The CNO process is more temperature sensitive than the pp chain, with most of the energy production occurring near the very center of the star. This results in a stronger thermal gradient, which satisfies the criteria for convective instability. Hence, the core region is convective for stars above about 1.2 M☉.
For all masses of stars, as the hydrogen in the core becomes consumed, the particle density drops and the temperature increases so as to maintain pressure equilibrium. This results in an increasing rate of energy production, which in turn causes the luminosity of the star to increase.
Once the supply of hydrogen at the core of a non-fully-convective star is depleted, it will leave the main sequence and evolve along the red giant branch of the Hertzsprung–Russell diagram. Low mass stars with up to about 1.2 M☉ will gradually move along the subgiant branch, having an inert helium core surrounded by a hydrogen-rich shell that is generating energy through the pp chain. This process will steadily increase the mass of the helium core, causing the hydrogen fusing shell to increase in temperature until it can generate energy through the CNO cycle. Due to the temperature sensitivity of the CNO process, this hydrogen fusing shell will be thinner than before. The increasing mass and density of the helium core will cause the star to increase in size and luminosity as it evolves up the red giant branch.
In the more massive main-sequence stars with core convection, the helium produced by fusion becomes mixed throughout the convective zone. Once the core hydrogen is consumed, it is thus effectively exhausted across the entire convection region. At this point the helium core starts to contract and hydrogen fusion begins along a shell around the perimeter. As the star ages, the core continues to contract and heat up until a triple alpha process can be maintained at the center, fusing helium into carbon. However, most of the energy generated at this stage continues to come from the hydrogen fusing shell. For stars above 10 M☉, helium fusion at the core begins immediately as the main sequence comes to an end. Two hydrogen fusing shells are formed around the helium core: a thin CNO cycle inner shell and an outer pp chain shell.
|
[SatNews] But the locations and identity of the natural "sinks" absorbing this carbon dioxide currently are not well understood.
A NASA spacecraft designed to make precise measurements of carbon dioxide in Earth's atmosphere is at Vandenberg Air Force Base, California, to begin final preparations for launch.
The Orbiting Carbon Observatory-2 arrived Wednesday at its launch site on California's central coast after traveling from Orbital Sciences Corp.'s Satellite Manufacturing Facility in Gilbert, Arizona. The spacecraft now will undergo final tests and then be integrated on top of a United Launch Alliance Delta II rocket in preparation for a planned July 1 launch.
The observatory is NASA's first satellite mission dedicated to studying carbon dioxide, a critical component of Earth's carbon cycle that is the leading human-produced greenhouse gas driving changes in Earth's climate. It replaces a nearly identical spacecraft lost due to a rocket launch mishap in February 2009.
OCO-2 will provide a new tool for understanding both the sources of carbon dioxide emissions and the natural processes that remove carbon dioxide from the atmosphere, and how they are changing over time. Since the start of the Industrial Revolution more than 200 years ago, the burning of fossil fuels, as well as other human activities, have led to an unprecedented buildup in this greenhouse gas, which is now at its highest level in at least 800,000 years. Human activities have increased the level of carbon dioxide by more than 25 percent in just the past half century.
Greenhouse gases, such as carbon dioxide, trap the sun's heat within Earth's atmosphere, warming it and keeping it at habitable temperatures. However, scientists have concluded that increases in carbon dioxide resulting from human activities have thrown Earth's natural carbon cycle off balance, increasing global temperatures and changing the planet's climate.
While scientists understand carbon dioxide emissions resulting from burning fossil fuels and can estimate their quantity quite accurately, their understanding of carbon dioxide from other human-produced and natural sources is relatively less quantified. Atmospheric measurements collected at ground stations indicate less than half of the carbon dioxide humans emit into the atmosphere stays there. The rest is believed to be absorbed by the ocean and plants on land.
But the locations and identity of the natural "sinks" absorbing this carbon dioxide currently are not well understood. OCO-2 will help solve this critical scientific puzzle. Quantifying how the natural processes are helping remove carbon from the atmosphere will help scientists construct better models to predict how much carbon dioxide these sinks will be able to absorb in the future.
The mission's innovative technologies will enable space-based measurements of atmospheric carbon dioxide with the sensitivity, resolution and coverage needed to characterize the sources of carbon dioxide emissions and the natural sinks that moderate their buildup, at regional scales, everywhere on Earth. The mission's data will help scientists reduce uncertainties in forecasts of how much carbon dioxide is in the atmosphere and improve the accuracy of global climate change predictions.
In addition to measuring carbon dioxide, OCO-2 will monitor the "glow" of the chlorophyll contained within plants, a phenomenon known as solar-induced chlorophyll fluorescence, opening up potential new applications for studying vegetation on land. NASA researchers, in collaboration with Japanese and other international colleagues, have discovered that data from Japan's GOSAT (Greenhouse gases observing SATellite, also known as Ibuki in Japan), along with other satellites, including OCO-2, can help monitor this "signature" of photosynthesis on a global scale.
The observatory will fly in a 438-mile (705-kilometer) altitude, near-polar orbit in formation with the five other satellites that are part of the Afternoon, or "A-Train" Constellation. This international constellation of Earth-observing satellites circles Earth once every 98 minutes in a sun-synchronous orbit that crosses the equator near 1:30 p.m. local time and repeats the same ground track every 16 days. OCO-2 will be inserted at the head of the A-Train. Once in this orbit, OCO-2 is designed to operate for at least two years. This coordinated flight formation will enable researchers to correlate OCO-2 data with data from other NASA and partner spacecraft.
OCO-2 is a NASA Earth System Science Pathfinder Program mission managed by NASA's Jet Propulsion Laboratory in Pasadena, California, for NASA's Science Mission Directorate in Washington. Orbital built the spacecraft and provides mission operations under JPL's leadership. The science instrument was built by JPL, based on the instrument design co-developed for the original OCO mission by Hamilton Sundstrand in Pomona, California. NASA's Launch Services Program at NASA's Kennedy Space Center in Florida is responsible for launch management. JPL is managed for NASA by the California Institute of Technology in Pasadena.
For more information about the Orbiting Carbon Observatory-2, visit.
NASA monitors Earth's vital signs from land, air and space with a fleet of satellites and ambitious airborne and ground-based observation campaigns. NASA develops new ways to observe and study Earth's interconnected natural systems with long-term data records and computer analysis tools to better see how our planet is changing. The agency shares this unique knowledge with the global community and works with institutions in the United States and around the world that contribute to understanding and protecting our home planet. OCO-2 is the second of five NASA Earth science missions launched into space this year, the most new Earth-observing mission launches in the same year in more than a decade.
For more information about NASA's Earth science activities in 2014, visit. |
The special theory of relativity is an extension of classical mechanics that describes the motion and dynamics of objects moving at significant fractions of the speed of light.
In Einstein's original 1905 formulation, the postulates of Special Relativity are that
The Principle of Relativity is that the laws of physics are the same in every inertial reference frame.
The Speed of Light is the same in every reference frame.
In Special Relativity, the Galilean transformations are replaced with the Lorentz transformations, which form the Lorentz group.
An alternative formulation of Special Relativity is that of Minkowski, which unifies space and time into spacetime. From the invariant squared infinitesimal spacetime interval, the Lorentz transformations may be derived.
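As a small illustration of these statements, here is a hedged Python sketch of a Lorentz boost along one axis, checking that the squared spacetime interval is unchanged; the choice of units with c = 1 and the sample numbers are my own.

```python
import math

def lorentz_boost(t, x, v, c=1.0):
    """Transform (t, x) into a frame moving at speed v along x (requires |v| < c)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval_squared(t, x, c=1.0):
    # The invariant: s^2 = (c*t)^2 - x^2 is the same in every inertial frame.
    return (c * t) ** 2 - x ** 2

t, x, v = 2.0, 1.0, 0.6
tp, xp = lorentz_boost(t, x, v)
print(interval_squared(t, x), interval_squared(tp, xp))  # both 3.0 (up to rounding)
```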
- Galileo and Einstein is a free ebook used as a text for a history of science course. Chapters 23 through 30 discuss special relativity in a very pedagogical manner. Chapters 21 and 22 discuss the speed of light and the Michelson-Morley experiment, and help put special relativity into its historical context.
- A.P. French, Special Relativity is a short book treating just special relativity. It includes historical background.
- Kleppner and Kolenkow, An Introduction to Mechanics discusses relativity in chapters 11 through 14. It begins by deriving the Lorentz transformations from mechanical considerations. It also introduces relativistic momentum, four-vectors, and invariances in relativity.
- Marion and Thornton, Classical Dynamics of Particles and Systems also introduces relativity from mechanical considerations, in chapter 14. This text also discusses four-vectors, and introduces the Lagrangian formalism of special relativity.
- E. M. Purcell, Electricity and Magnetism is an introductory book in electromagnetism. Chapter 5 uses special relativity to derive the existence of magnetic fields and the form of the Lorentz force. Appendix A gives a review of special relativity.
- J. D. Jackson, Classical Electrodynamics is a graduate-level book in electrodynamics. Chapter 11 gives a thorough discussion of special relativity, including methods from group theory. Chapter 12 discusses dynamics and how the Lagrangian and Hamiltonian formalisms interact with special relativity. |
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites. It was initially developed for use by the United States military and became fully operational in 1995. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s.
When the Soviet Union launched the first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University’s Applied Physics Laboratory (APL) decided to monitor its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required. Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem — pinpointing the user’s location, given that of the satellite. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine’s location.)
All satellites broadcast at the same frequencies, encoding signals using unique code division multiple access (CDMA) so receivers can distinguish individual satellites from each other.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. In practice the receiver position (in three-dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
GPS satellites continuously transmit data about their current time and position. The GPS concept is based on time and the known position of specialized GPS satellites. The satellites carry very stable atomic clocks that are synchronized with one another and with the ground clocks. On January 11, 2010, an update of ground control systems caused a software incompatibility with 8,000 to 10,000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, Calif.
By December 1993, GPS achieved initial operational capability (IOC), indicating a full constellation (24 satellites) was available and providing the Standard Positioning Service (SPS). In 1972, the USAF Central Inertial Guidance Test Facility (Holloman AFB) conducted developmental flight tests of four prototype GPS receivers in a Y configuration over White Sands Missile Range, using ground-based pseudo-satellites. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems.
As of early 2015, high-quality, FAA grade, Standard Positioning Service (SPS) GPS receivers provide horizontal accuracy of better than 3.5 meters, although many factors such as receiver quality and atmospheric issues can affect this accuracy. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely.
There were wide needs for accurate navigation in the military and civilian sectors, but almost none of them was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. However, closer to Earth (within about a 1000 km radius) the amount of material in space is significantly higher than further out, and therefore orbits decay on time scales noticeable to us. As someone else mentioned, geosynchronous orbits are not particularly special, except that the angular speed of the satellite happens to match the angular speed of the Earth, so the satellite appears to sit still above a particular spot on Earth. The time difference tells the GPS receiver how far away the satellite is. With distance measurements from a few more satellites, the receiver can determine the user's position.
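To illustrate how distances become a position, here is a simplified two-dimensional Python sketch; real GPS solves the problem in three dimensions plus a receiver clock-bias term using at least four satellites, so this is only a toy version with made-up coordinates.

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Locate a receiver in the plane from its distances to three known points.

    Subtracting the circle equations pairwise leaves two linear equations in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# The receiver is really at (3, 4); "measure" its distance to three known stations.
stations = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(s, (3, 4)) for s in stations]
print(trilaterate_2d(stations[0], ranges[0],
                     stations[1], ranges[1],
                     stations[2], ranges[2]))   # approximately (3.0, 4.0)
```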
DOD is required by law to “maintain a Standard Positioning Service (SPS) (as defined in the Federal Radionavigation Plan and the Standard Positioning Service Signal Specification) that will be available on a continuous, worldwide basis,” and, “develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses.” These strict requirements and current augmentation systems should actually make DOD use of the system transparent to the civil user. Each GPS satellite transmits an accurate position and time signal. The baseline satellite constellation consists of 24 satellites positioned in six earth-centered orbital planes, with four operational satellites and a spare satellite slot in each orbital plane.
But if the receiver can access more satellite signals, it will calculate a more accurate position (Tinambunan). The atomic clocks in the satellites are extremely accurate, but the clocks in the GPS receivers are not, which creates a timing error. Although a given satellite receiver is typically designed to use only one of the global systems, there’s no reason why it can’t use signals from two or more at once.
L1 carries the civilian SPS code signal (also known as the C/A code or Coarse Acquisition code), which is relatively short and broadcast about 1000 times a second, and what’s known as the navigation data message, which includes the date and time, satellite orbit details, and other essential data. Comparing the two frequencies allows military grade GPS receivers to calculate precise corrections for radio delays and distortions caused by transmission through the atmosphere, and that still gives military GPS an edge over civilian systems. According to the official website : “The accuracy of the GPS signal in space is actually the same for both the civilian GPS service (SPS) and the military GPS service (PPS).” In practice, while SPS signals are broadcast using only one frequency, PPS uses two.
For that reason, they developed two different “flavors” of GPS: a highly accurate military-grade, known as Precise Positioning Service (PPS), and a somewhat degraded civilian version called Standard Positioning Service (SPS). Radio signals beaming down to us from space satellites aren’t traveling through empty space but through Earth’s atmosphere, including the ionosphere (the upper region of Earth’s atmosphere, containing charged particles, which help radio waves to travel) and the troposphere (the turbulent, uncharged region of the atmosphere, where weather happens, which extends about 50km or 30 miles above Earth’s surface). The best-known satnav system, the Navstar Global Positioning System (GPS), uses about 24 active satellites (including backups).
In addition to heightened security, the United States military also has access to much more accurate positioning by using the Precise Positioning System (PPS). GPS satellites fly in medium Earth orbit (MEO) at an altitude of approximately 20,200 km (12,550 miles).
The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radionavigation system owned by the United States government and operated by the United States Air Force. It is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. Obstacles such as mountains and buildings block the relatively weak GPS signals.
The GPS does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.
The GPS project was launched by the U.S. Department of Defense in 1973 for use by the United States military and became fully operational in 1995. Civilian use was allowed beginning in the 1980s. Advances in technology and new demands on the existing system have since led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and the Next Generation Operational Control System (OCX).
Announcements from Vice President Al Gore and the White House in 1998 initiated these changes, and in 2000 the U.S. Congress authorized the modernization effort, GPS III. During the 1990s, GPS quality was degraded by the United States government in a program called “Selective Availability”; this was discontinued in May 2000 by a law signed by President Bill Clinton. New GPS receiver devices using the L5 frequency, set to begin release in 2018, are expected to have much higher accuracy, pinpointing a device to within 30 centimeters, or just under one foot.
The Curve in the Curvahedra
These are Curvahedra pieces: they can hook together to make all sorts of geometric objects. For example, take three pieces and make a triangle (or something triangle-like with wiggly edges).
Taking a close look, each piece has five arms, equally spaced around it, so the angle between two neighbouring arms must be 360/5 = 72 degrees.
The interior angles of this triangle are all the same, so they add up to 3*72 = 216 degrees. Yet from geometry we know that the interior angles of a triangle always add up to 180 degrees. What has gone wrong?
Here is a 60 degree triangle (note these pieces have six arms, so the angle between neighbours is 360/6 = 60 degrees). Can you see the difference?
Unlike the first triangle, this one lies flat on the table, whereas the first curves away from it. The difference becomes clearer if we complete all the pieces around a corner of each.
Going further, the five triangles come together to form a ball, while the six triangles would keep on spreading out into a flat sheet that we will never be able to complete.
What we are seeing here is the curvature of the surface we are making. The triangle with 72 degree angles can be said to have an excess of 36 degrees, and the greater the excess, the more the surface curves. Look at this triangle with 90 degree angles (for a total of 270 degrees, an excess of 90 degrees); the curvature is very clear:
Completing this creates a smaller ball.
This new ball has eight faces, each with a 90 degree excess. Adding all these together gives a total excess of 720 degrees. The first model has 20 faces, each with a 36 degree excess, again totalling 720. Let's think about the model with just three triangles around a corner:
The total angle is 120*3 = 360, so the excess is 180 degrees. If the pattern holds, we should need 4 of these triangles to make a ball, and indeed we do:
In fact, if you take anything that is shaped like a sphere and add up the angle excess over every face, you will always get 720 degrees. For a more complex example, take this model:
This has eight triangles and eighteen squares, and all the angles are 90 degrees. For the squares this is normal: the interior angles of a quadrilateral should total 360 degrees, and 4*90 is 360, so there is no angle excess. That leaves the eight 90 degree triangles, once again giving a total excess of 720 degrees. Notice too that in the model the square faces are flatter, with the curvature concentrated at the triangles. This gives the model the shape of a cube with rounded edges, rather than a sphere.
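As a quick sanity check of the pattern just described, here is a small illustrative sketch (the face lists simply encode the models mentioned above) that sums the angle excess over every face; each total comes out to 720 degrees.

```python
# Sum the angle excess of every face of a closed, sphere-like model; the total
# is always 720 degrees, matching the claim in the text (Descartes' theorem).

def face_excess(sides, corner_angle):
    """Excess of a face with equal corner angles over a flat polygon, in degrees."""
    flat_total = (sides - 2) * 180          # interior angle sum of a flat polygon
    return sides * corner_angle - flat_total

models = {
    "ball of 20 triangles, 72-degree corners": [(3, 72)] * 20,
    "ball of 8 triangles, 90-degree corners": [(3, 90)] * 8,
    "ball of 4 triangles, 120-degree corners": [(3, 120)] * 4,
    "8 triangles and 18 squares, all 90 degrees": [(3, 90)] * 8 + [(4, 90)] * 18,
}

for name, faces in models.items():
    total = sum(face_excess(sides, angle) for sides, angle in faces)
    print(f"{name}: total excess = {total} degrees")   # 720 every time
```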
This result is called Descartes' Theorem, and it is a special case of the Gauss-Bonnet theorem; both are closely related to the Euler characteristic. These theorems stand at the heart of topology and differential geometry.
A natural follow-up is to ask what happens with a shape that has too little angle (an angle defect). For example, the sum of the angles of a quadrilateral should be 360 degrees. What happens if we take a square (four equal sides and angles) with 72 degree angles? The sum is now 72*4 = 288, which is less than 360. This creates a saddle:
The saddle is said to have negative curvature, and connecting up more and more squares like this does not create a ball that closes in on itself. Instead it gives a wavy surface that grows faster and faster, modelling a hyperbolic plane; all these images are the same object!
Final note: the curvature discussed here is actually called Gaussian curvature, and it is a property of the surface itself, not of the way the surface sits in space. For example, consider this cone: it is covered with equilateral triangles with 60 degree angles, so although it looks curved, the geometry is flat; the triangles have no angle excess. In other words, if you investigated distances just on the surface of the model, they would locally be the same as those on the flat plane of triangles shown above. You can only detect the change if you loop back on yourself around the cone. The only exception is the tip of the cone; here you can see a piece is left hanging.
The same thing happens when you bend a piece of paper: you change how the sheet lies in three dimensions, but not what happens on the sheet. You can even use this to work out the best way to hold pizza. The Gaussian curvature discussed above, on the other hand, does change what happens on the surface; the angles of triangles can be measured without ever leaving it. In fact, this may have been part of Gauss's motivation: he wanted to work out whether the Earth was a perfect sphere, but did not have access to space. In other words, he had to take his measurements just on the surface of the Earth.
These ideas took on even greater importance with the work of Einstein. General relativity assumes that the three-dimensional space (or the four-dimensional spacetime) we live in is itself curved; in fact that curvature is related to gravity and explains how gravity acts at a distance. This huge idea fundamentally changed our understanding of the universe, yet we can start to appreciate it with a simple toy, which you can get for yourself here. Another way to explore the geometry is with crochet, following Daina Taimina's beautiful book.
Einstein's general theory of relativity predicts that accelerating mass distributions produce gravitational radiation, analogous to electromagnetic radiation from accelerating charges. These gravitational waves (GWs) have not been directly detected to date, but are expected to open a new window to the Universe once the detectors, kilometre-scale laser interferometers measuring the distance between quasi-free-falling mirrors, have achieved adequate sensitivity. Recent advances in quantum metrology may now contribute to provide the required sensitivity boost. The so-called squeezed light is able to quantum entangle the high-power laser fields in the interferometer arms, and could have a key role in the realization of GW astronomy.
When Galileo Galilei pointed his telescope towards the sky 400 years ago, he discovered events that had never been seen before. In subsequent centuries, a variety of telescopes were invented, covering a large part of the electromagnetic spectrum. These telescopes enabled observations that now form the basis of our understanding of the origin and the evolution of the Universe. Einstein's general theory of relativity, quite often simply 'general relativity'1, predicts the existence of a completely different kind of radiation, the so-called gravitational waves (GWs). As electromagnetic radiation is generated by acceleration of charges, so are GWs produced by accelerating mass distributions, such as supernova explosions or binary neutron stars that spiral into each other. GWs may also be emitted by objects that are electromagnetically dark, black holes, for example. Instruments that can directly observe GWs may well be able to 'light up' the dark side of our Universe. The analysis of the waves' spectrum and their time evolution will provide information about the nature of astrophysical and cosmological events that produced the waves. So far, GWs have not been directly observed.
Suitable telescopes for GW astronomy are kilometre-scale laser interferometers that measure the distance between quasi-free-falling mirrors. This measurement can be used to infer changes of spacetime curvature. Current GW detectors are already able to measure extremely small changes of distance with strain sensitivity down to the order of 10−22. However, quantum physics imposes a fundamental limit on measurement sensitivity, in particular, in terms of photon-counting noise. In the past, the GW signal with respect to the photon-counting noise could only be increased by increasing the light power. Unfortunately, increasing light power will eventually produce measurable quantum radiation pressure noise. In addition it also increases the thermal load inside the detector and is problematic with respect to the concept of overall low noise. Squeezed light avoids these problems by increasing the measurement sensitivity without increasing the light power. The application of squeezed light is a quantum technology. Injected into an interferometer, it entangles the high-power laser fields in the interferometer arms. The photons detected at the interferometer output port are then no longer independent from each other resulting in a reduced, that is, squeezed, photon-counting noise. As the squeezed light technology does not build on an increase in light power, it keeps the thermal load constant and can conveniently be used in conjunction with other future technologies. In particular, it can be combined with cryogenic cooling of interferometer mirrors for reducing mirror surface Brownian motion. Future GW observatories might actually require squeezed laser light in order to make GW astronomy a reality. Recent progress in the generation of squeezed laser light has brought us to the point where quantum metrology will actually find its first application.
In this review, we survey the possible astrophysical sources of GWs and the sensitivity issues related to their detection. We briefly examine the detection efforts performed by classical means and show how they have reached their sensitivity limits. We then introduce the concepts of quantum metrology and squeezed light and address how their deployment in next-generation GWs instruments should finally enable direct GW detection.
GWs are ripples in spacetime, that is, dynamic changes in space curvature that propagate at the speed of light. According to general relativity, they are transverse and quadrupolar in nature, have two polarization states and are extremely weak. GWs of detectable amplitude cannot be generated on Earth, but a variety of known astrophysical and cosmological sources are predicted to emit gravitational radiation that should reach the Earth with a strength within reach2,3.
Although GWs have not yet been directly observed, their existence is beyond doubt. A binary system of compact objects, such as neutron stars (Fig. 1) or black holes, emits GWs at twice their orbital frequency. The energy carried away by the GWs leads to a precisely predictable decay in the orbital period of the binary. This mechanism was indeed verified to exquisite precision, with observations of the binary pulsar system PSR1913+16 (ref. 4). The discovery is regarded as unequivocal, albeit indirect, proof of the existence of GWs that led to the 1993 Nobel Prize in Physics.
GWs from complex astrophysical sources carry a plethora of information that will have a major impact on gravitational physics, astrophysics and cosmology. GW signals are typically assigned to one of four broad and often overlapping classes2,3, based on expected waveforms and hence optimal search techniques: binary inspirals and mergers, burst sources, periodic sources, and stochastic sources. In the following, we briefly review the physics and astrophysics that can be extracted from the observation of GWs emitted by these sources.
Binary inspirals and mergers
The final stages of life of neutron star binaries will provide the richest signals, as shown in Figure 1. As the binary loses energy, the orbital period decreases and the GW frequency enters the human audio band. After another ≈100 cycles, the stars merge in a catastrophic explosion, providing a GW burst signal at a few hundred Hertz up to a kiloHertz. The merger is expected to produce a black hole surrounded by a torus, which will release a giant burst of gamma rays. Simultaneous observation of GWs and gamma rays would confirm that the merger of neutron stars is the engine of many of the observed short, hard gamma ray bursts5. Recent advances in numerical relativity now make it possible to predict the waveforms generated around the merger6. Comparison with observed waveforms will provide accurate tests of general relativity in the hitherto untested strong-field regime. The imprint of tidal distortions on the GW waveform from a binary system with at least one neutron star will constrain the equation of state of the nuclear matter making up the star. Independent of the nature of the binary, the final state of the merger will be a perturbed black hole, whose oscillation modes will decay in time, producing more gravitational radiation. Such observations would offer a striking confirmation of the existence of black holes.
The famous 'no-hair' theorem says that black holes are completely characterized by their mass and angular momentum7. Measuring the GWs emitted by black hole binary systems where the mass ratio of the components is large, the 'no-hair' theorem can be tested. Direct observation of the gravitational waveforms from inspiralling black holes and neutron stars can also provide the luminosity distance to the source without any complex calibrations2. If, in addition, the redshift can be measured (via the identification of electromagnetic counterparts), the Hubble parameter8, the dark energy and dark matter content of the Universe, and the dark energy equation of state can be determined.
Burst sources refer to short-lived GW transients, the main known candidates being core-collapse supernovae and collapses to black holes9,10. Observation of GWs will open a way to extract information about the dynamics occurring in the core of the supernova, and should complement and enhance the understanding gained from electromagnetic observations.
Spinning compact objects will generate periodic GW signals depending on the degree of non-axisymmetric deformations11 (departure from rotational symmetry is a necessary ingredient for generation of quadrupolar moments). Detection of GWs from such sources will confirm models of the underlying physics, which might allow the growth of a 'mountain' on a neutron star. The lack of observation of GWs from the Crab Pulsar at the sensitivity of current ground-based detectors has already constrained its deviation from rotational symmetry5. The distribution of neutron stars in the Galaxy could be mapped out using GW observations. Spinning neutron stars currently invisible on Earth could be detected via their GW emission12.
Stochastic sources have both astrophysical and cosmological origins13,14. The 'holy grail' is the Big Bang itself. In principle, we should be able to observe a relic background of GWs from the very early Universe, some time between 10−18 and 10−9 s after the Big Bang, when light did not even exist. The electromagnetic analogue of this radiation is the cosmic microwave background, which gives information about conditions in the Universe 385,000 years after the Big Bang15,16. Gravitational radiation is the only way to observe the conditions in a much earlier epoch. Absence of a detectable stochastic background signal in current GW detectors has constrained certain models of the early Universe on the basis of cosmic superstring population17.
Of course the most tantalizing sources are those that we do not yet know exist. The opening of every major new electromagnetic window to the Universe has revealed major surprises that have revolutionized our understanding of the Universe. Observing the Universe with an entirely new messenger will very likely continue this tradition.
Frequencies of GWs
GW astronomy targets phenomena that involve astronomically large masses in acceleration. This, in turn, leads to the expectation that GW emission frequencies will be low, typically below a few tens of kiloHertz. A black hole binary system, for example, has to have an orbital period of just 0.02 s in order to produce GWs at f=100 Hz (Fig. 2). Supernova explosions are expected to have a broad spectral emission, with components that may reach kiloHertz frequencies. However, the strongest detectable GWs are expected at lower frequencies, all the way down to the millihertz or even the nanohertz regime.
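The 100 Hz figure above follows from the fact, noted earlier, that a binary emits predominantly at twice its orbital frequency; the short sketch below is just that arithmetic.

```python
# Dominant GW emission of a binary is at twice the orbital frequency.
orbital_period = 0.02               # orbital period in seconds, from the text
gw_frequency = 2.0 / orbital_period
print(gw_frequency, "Hz")           # 100.0 Hz
```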
Strength of GWs
GWs that reach the Earth are extremely weak. For example, the merger of two neutron stars at the other end of our galaxy (D≈50,000 light years away) would produce a GW strain amplitude of about h≈10−19 (ref. 2). The same source at a distance of about 60 million light years, where the Virgo cluster, which comprises up to 2,000 galaxies, is located, would result in a strain amplitude of only h≈10−22. With the sophisticated technology now available, such tiny strains of spacetime can be detected, and it is very probable that there will be numerous direct detections in the coming decade.
Detection using a laser interferometer
GWs stretch and compress the spacetime transverse to their direction of propagation. If the wave were incident on a ring of free test masses in space, in each half cycle of the wave the ring would distort into an ellipse, as shown in Figure 3. If the test masses were mirrors, one could reflect laser light off them and observe this GW-induced stretching and compressing of spacetime by measuring the light travel time. This is, in fact, the principle that interferometric GW detectors are based on. An overview of the history of detectors is given in Box 1.
The enormous difficulty of GW detection arises because GWs are expected to be extremely weak when they finally reach the earth. The amount by which a distance L would shrink or stretch due to a GW is proportional to the wave's amplitude h, that is, ΔL=hL. Recalling that we expect strain amplitudes of 10−22, we are faced with the prospect of measuring changes in separation of 10−19 m even for a 1-km interferometer.
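Plugging the numbers from this paragraph into ΔL = hL makes the scale concrete; the snippet below is just that arithmetic, with the 1 km arm length taken from the text.

```python
# Arm-length change produced by a given strain: delta_L = h * L.
h = 1e-22              # expected strain amplitude
arm_length = 1_000.0   # interferometer arm length in metres (1 km)
delta_L = h * arm_length
print(delta_L, "m")    # 1e-19 m
```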
The intrepid GW detector designer thus faces two categorical challenges. First, how to keep the test masses so still that they respond only to a passing GW rather than to local perturbations? This isolation problem is addressed by techniques of vibration isolation and material engineering, and has to be optimized for the targeted frequency spectrum. Second, how to measure relative displacements with sufficient precision? This measurement problem is tackled by adopting advanced techniques in optical interferometry, control theory and quantum metrology. Let us tackle the question of the mechanical design for an earth-based test mass of spacetime first, followed by a discussion of metrology which launches us into the optical design of the instrument.
Mechanical and optical designs
The mirrors of interferometric GW detectors are designed to be quasi-free falling in the directions of propagation of the laser beams, thereby acting as test masses that probe spacetime. This is achieved by suspending the mirrors as sophisticated pendulums in vacuum chambers, as shown in Figure 4. Above the pendulum's resonant frequency, typically around 1 Hz, the suspension isolates the mirror from vibrations of the ground and the structures on which it is mounted, making it 'quasi-free'. The targeted detection band of earth-based detectors is therefore restricted to the audio band (frequencies from about 10 Hz up to about 10 kHz). At lower frequencies, disturbances from the environment are too high; at higher frequencies, no strong GW signals are expected (see previous section).
The mirrors and their suspensions are built from materials having exquisitely high mechanical quality factors. This helps to concentrate the thermal energy that causes displacements of the mirror surface into well-defined vibrational frequency modes. At these particular frequencies, no GWs can be detected. The vibrational modes are therefore designed to lie outside the detection band for the most part. Ultimately, cryogenic cooling of the mirrors may have to be used to further reduce thermally excited mirror displacement noise, such as that originating from Brownian motion. The first cryogenic interferometric GW detector prototype facilities have recently been realized18.
A Michelson interferometer—similar to the one used in the Michelson–Morley experiment, which famously established that the speed of light was directionally invariant19—is ideally suited to measure the relative light travel time in two orthogonal directions (Figs 3 and 5). In a Michelson interferometer, laser light is incident on a beam splitter that reflects half the light and transmits the other half. Each light beam travels some distance before it is reflected by a mirror back towards the beam splitter where the two beams interfere. The interference provides an output beam, the power of which carries information about the path difference, and GW signals are detected as variations in the light power.
It is at this point that quantum physics enters the concept of GW detection. First of all, the light's energy can only be absorbed in discrete quanta (photons), resulting in photon-counting noise or shot-noise. The GW signal-to-shot-noise ratio can in fact be improved by detecting more photons. Shot-noise is proportional to the square root of the number of photons detected, while the mirror displacement signal is directly proportional to the laser power. Consequently, GW detectors use high-power laser systems and optical resonators to maximize their shot-noise-limited sensitivity (for further details, refer to Box 2).
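The scaling described here can be illustrated with a toy calculation (the photon numbers below are placeholders, not detector parameters): the signal grows linearly with the number of detected photons while shot-noise grows only as its square root, so the signal-to-shot-noise ratio improves as the square root of the photon number.

```python
# Toy illustration of shot-noise-limited SNR scaling with photon number.
import math

def relative_snr(n_photons):
    signal = n_photons                  # signal proportional to photon number
    shot_noise = math.sqrt(n_photons)   # Poissonian fluctuation ~ sqrt(N)
    return signal / shot_noise

for n in (1e18, 4e18, 1e20):
    print(f"N = {n:.0e} photons -> relative SNR = {relative_snr(n):.2e}")
# Quadrupling the photon number only doubles the SNR, which is why raising the
# laser power becomes an increasingly expensive route to better sensitivity.
```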
Fundamentally, there is a second way in which the quantum noise of light disturbs a GW detector. The shot-noise inside the interferometer produces a fluctuating radiation pressure force on the test-mass mirrors. The mirrors are randomly displaced by the light, an effect that cannot be distinguished from a GW signal. This is called quantum radiation pressure noise20. To reduce this effect, modern GW detectors use test masses of up to 10 kg. As a consequence, radiation pressure noise has not been experimentally observed to date. This situation, however, may change with increasing laser power and is envisioned in the next generation of GW detectors.
The design of second-generation GW detectors is more or less completed. These so-called advanced detectors will replace the existing interferometers, aiming for a 10 times increased sensitivity21,22,23. New laser systems will provide up to 200 W of single-mode optical power24 to reduce quantum shot-noise, yielding a light power of almost a megawatt in the interferometer arm resonators. Larger, 40 kg test-mass mirrors will replace the existing ones to keep the radiation pressure noise low and to allow for larger beam radii to reduce the noise effect of mirror Brownian motion. Cryogenic cooling of test-mass mirrors is another advanced technology that is planned to be implemented in a Japanese detector18,25. At very cold temperatures, Brownian motion and other forms of thermally excited mirror surface motions (thermal noise) can be significantly reduced.
Theoretical modelling of GW sources and estimations of GW event rates2 suggest that real GW astronomy, with daily detections at high signal-to-noise ratios, requires another 10-fold increase in sensitivity for ground-based observatories at frequencies down to a few Hertz. At even lower frequencies, noise on earth is too high and space-based observatories, such as LISA26, are required, targeting a frequency spectrum from 10−4 Hz to about 1 Hz. Above 1 Hz, the Einstein Telescope27,28 is an on-going European design study project for a third-generation ground-based GW detector. An important issue will be the further reduction of the shot-noise (quantum measurement noise), the radiation pressure noise acting on the mirrors (quantum back-action noise) and the thermal noise. The required reduction of these noise sources poses serious technical challenges. For example, increasing the light power in the interferometer arms will lead to additional absorption and heating of the mirrors. Higher light power will also increase radiation pressure noise. The only classical approach to mitigating this noise is, therefore, to use even more massive mirrors. An increased mirror thickness will again lead to increased absorption and heating, making cryogenic cooling of the mirrors impractical. A quantum metrological approach is able to break this vicious circle. In the next section, we will see that squeezed laser light is able to achieve a quantum noise reduction without increasing the light power in a GW detector.
'Metrology' is the science of measurement. At first glance, quantum physics imposes a fundamental limit on metrology and thus imposes a corresponding limit on the sensitivity of GW detectors. A fundamental problem in optical interferometry is the stochastic distribution of photons arriving at the photodiodes. These statistical fluctuations obscure the tiny power variations caused by GW signals. Fortunately, quantum physics also provides a solution to this problem via the concept of quantum entanglement.
'Quantum metrology' uses quantum entanglement to improve the measurement precision beyond the limit set by measurement-counting noise. The first such proposal was made by Caves in 198129, when he suggested the use of squeezed states of light as an (additional) input for laser interferometric GW detectors. The initial proposal of Caves was motivated by the limited laser power available at the time. Indeed, squeezed states allow for improvement in the sensitivity of a quantum noise-limited interferometer without increasing the circulating laser power.
Squeezed states30,31,32,33 belong to the class of so-called nonclassical states of light. Generally, nonclassical states are those that cannot be described by a classical (positive valued) probability distribution using the coherent states as a basis (the P-representation)34. Let us first consider the coherent states. If light in a coherent state is absorbed by a photodiode, mutually independent photon 'clicks' (in terms of photoelectrons) are recorded, a process that is described by Poissonian counting statistics. Because of quantum mechanics, every individual 'click' is not predictable, but rather the result of a truly random process. If the number of photons per time interval is large (n≫1), its s.d. is given by √n, as shown in Figure 6a (i). This uncertainty gives rise to shot-noise. For a squeezed light beam, the detection events of photons are not time independent but instead contain quantum correlations. Nevertheless, the photon statistics still cannot be predicted by some external clock. They instead show autocorrelations that give rise to a reduced s.d., as shown in Figure 6a (ii). The correlations might be described in the following way: whenever the quantum statistics might drive the actual photon number above the average value n, a similar number of excess photons destructively interferes with the main body of photons, providing a (partial) compensation for the fluctuation. These quantum correlations squeeze the interferometer's shot-noise below its natural value. Another complementary way of describing the properties of squeezed states is based on the phase space quasi-probability distribution using the amplitude and phase quadratures of a light wave (the Wigner function)31,34.
A squeezed state that contains only quantum-correlated photons with no coherent amplitude is called a squeezed vacuum state34. If such a state is overlapped with a coherent laser beam on a semi-transparent beam splitter, two beam-splitter outputs are generated, which are quantum correlated. As a consequence, the overall (bipartite) quantum state cannot be written in terms of products of the two beam-splitter output states. Such a quantum state is called non-separable or entangled. This is exactly what happens if a squeezed state is injected into the signal output port of a laser interferometer for GW detection (Fig. 6b). The two high-power light fields in the interferometer arms get entangled and the light's quantum fluctuations in the two arms are correlated with each other. Although the fluctuations are not predictable from the outside, they provide an improved signal-to-noise ratio in the interferometer. Recall that an interferometer measures the optical path length change in one interferometer arm with respect to the other arm. If the quantum noise in the two arms is correlated, it will cancel out. This entanglement interpretation was not discussed in the initial proposal by Caves. Nevertheless, it shows that the application of squeezed states in interferometers is a real application of quantum metrology by its very own definition. The entanglement produced by splitting a squeezed state at a semi-transparent beam splitter was tomographically characterized and quantified in ref. 35. Figure 6c shows a simulated signal from a photodiode, without (i) and with (ii) squeezing. The tiny modulation in the interferometer's output light due to the (simulated) passing GW is visible only with the improved signal-to-noise ratio. Figure 6d shows the analogue in frequency space, that is, after a Fourier transform of the photocurrent was applied.
The above paragraph shows that squeezed states can be conveniently combined with the extremely high photon numbers of coherent light to improve a laser interferometer, as proposed in ref. 29 and shown in Figure 6b. In fact, the stronger the squeezing factor31,34, the greater the path entanglement and the signal-to-noise improvement. Very strong path entanglement is present in interferometers using so-called NOON-states instead of squeezed states. NOON states are another class of nonclassical states34,36,37,38. Unfortunately, the strong entanglement of a NOON state is extremely fragile, in particular if n is large. Very recently, a NOON-state with n=5 photons was demonstrated38. However, GW detectors use coherent high-power laser light with n≈1023 photons per second. An improvement by the use of NOON states is, therefore, far out of reach.
The standard quantum limit (SQL)
Shortly after Caves proposed squeezed states of light for laser interferometers in 1981, the first experimental demonstration of squeezed light39 and proof of principle demonstrations of quantum metrology were achieved40,41. In parallel, it was theoretically discovered that squeezed states offer even more advances in metrology than 'just' reducing the quantum shot-noise. From the early days of quantum physics, when fundamental aspects of the measurement process were discussed, it was clear that, in general, a measurement disturbs the system that is to be measured42. The measurement of quantity A (say a position of a mirror) increases the uncertainty of the non-commuting quantity B (say the mirror's momentum). Both observables are linked by a Heisenberg Uncertainty relation. For repeated measurements of A, the increased uncertainty in B disturbs the measurement of A at later times. This is referred to as quantum back-action noise. Here, the back-action arises from the fluctuating radiation pressure due to the reflected light20. It is significant if the mirror's mass is low and a large photon number is reflected. In the 1970s, ideas were developed that showed how, in principle, back-action noise for continuous measurements can be avoided. Such schemes were called quantum non-demolition (QND) measurements43,44. However, for laser interferometric GW detectors using quasi-free-falling mirrors, it remained unclear whether QND schemes exist. In refs 20, 29 it was concluded that back-action noise of a free-mass position measurement can in principle not be avoided and, together with photon-counting noise, defines a SQL. In refs 45, 46, it was argued, however, that measurements below the SQL of a free mass are indeed possible. The discussion remained controversial47 until Jaekel and Reynaud48 were able to convincingly show that cleverly arranged squeezed states in a GW detector can simultaneously reduce the shot-noise and the radiation pressure noise, by almost arbitrary amounts (as long as most of the photons belong to the light's coherent displacement). For a summary of QND techniques for free-mass position measurements, we refer to ref. 49.
So far no experiment has achieved a position measurement with sensitivity even at, let alone beyond, its SQL. Eventually, this will be achieved, possibly first in future GW detectors. Advanced detectors are in fact designed to have a sensitivity at or very close to their SQLs. Once the SQL is reached, a new level of quantum metrology is achieved, because the position-momentum uncertainty of the mirror becomes correlated with the quadrature uncertainty of the reflected optical field. In this way, entanglement between the mechanical and the optical system can be observed50. This is all the more remarkable from the perspective of GW detectors, as we are talking about mirrors with masses of 40 kg, planned for the upcoming improvement to LIGO—the Advanced LIGO22. Eventually, even two such mirrors might be projected via entanglement swapping51 into an entangled state52. Obviously, quantum metrology opens the possibility for further studies of the peculiarities of quantum physics at a macroscopic scale.
Squeezed light for GW astronomy
Laser interferometers for GW astronomy are facing extreme sensitivity requirements that can only be achieved if all available tools, inclusive of quantum metrology, are combined in an elaborate measurement device. More recently, squeezed light was also suggested as a resource for quantum information processing53,54,55,56. Since then, squeezed light has been central to various proof-of-principle demonstrations, such as quantum teleportation57,58, and the production of optical 'Schrödinger cat' states for quantum computing and fundamental research on quantum physics59,60.
Squeezed light must be generated in a nonlinear interaction. Squeezed light was first produced in 1985 by Slusher et al.39 using four-wave mixing in Na atoms in an optical cavity. Shortly after, squeezed light was also generated by four-wave mixing in an optical fibre61 and by parametric down-conversion in an optical cavity containing a second-order nonlinear material62. In these first experiments, squeezing from a few percent up to 2 to 3 dB was routinely observed (for an overview of earlier experiments and squeezed light generation in the continuous-wave as well as the pulsed regime, refer to ref. 63).
GW detectors are operated with high-power, quasi-monochromatic continuous-wave laser light, with an almost Fourier-limited spatial distribution of a Gaussian TEM00 mode. For a nonclassical sensitivity improvement, squeezed light in exactly the same spatio-temporal mode must be generated and mode matched into the output port of the interferometer29, providing interference with the high-power coherent laser beam at the interferometer's central beam splitter. High-power lasers for GW astronomy are based on optically pumped solid-state crystals in resonators24, suggestive of a similar configuration for a 'squeezed light resonator'. Figure 7a shows a schematic setup for generation of squeezed light that is built upon one of the very first squeezing experiments62, a setup that has been used in many experiments thereafter57,58,64,65. The setup uses a solid-state laser similar to those used as master lasers in high-power systems. After spatial-mode filtering, second harmonic generation in an optical cavity containing a second-order nonlinear crystal is applied to produce laser light at twice the optical frequency. The second harmonic light is then mode matched into the squeezing resonator to pump a degenerate optical parametric amplifier.
Figure 7b–d shows photographs of the nonlinear crystal, the optical arrangement and the housing of a squeezing resonator. The crystal is temperature stabilized at its phase-matching temperature. At this temperature, the first-order dielectric polarization of the birefringent crystal material with respect to the pump is optimally overlapped with the second-order dielectric polarization of the resonator mode at the fundamental laser frequency. This ensures a high energy transfer from the pump field to the fundamental Gaussian TEM00 resonator mode, that is, efficient parametric down-conversion.
Initially, the resonator mode is not excited by photons around the fundamental frequency, that is, it is in its ground state, characterized by vacuum fluctuations due to the zero point energy34. Note that the process is typically operated below oscillation threshold in order to reduce the phase noise coupling from the pump66. This setup produces a squeezed vacuum state34. The down-converted photon pairs leaving the squeezing resonator exhibit quantum correlations which give rise to a squeezed photon-counting noise when overlapped with a bright coherent local oscillator beam. The squeezed field is detected by interfering it with a coherent local oscillator beam, either in a balanced homodyne detector, see Figure 7a, or when injected into a GW detector and detected with a local oscillator from the GW detector along with an interferometric phase signal, see Figure 6b. The closer the squeezing resonator is operated to its oscillation threshold, and the lower the optical loss on down-converted photon pairs, the greater the squeeze factor is. For instance, the observation of a squeezing factor of 2 is only possible if the overall optical loss is <50%63. A 90% nonclassical noise reduction, that is, a squeezing factor of 10, or 10 dB, already limits the allowed optical loss to <10%.
Although squeezed light was demonstrated in the 1980s shortly after the first applications were proposed39,61,62, several important challenges pertaining to the application of squeezed states to GW detectors remained unsolved until recently.
First, squeezing had always been demonstrated at Megahertz frequencies, where technical noise sources of the laser light are not present. At these frequencies, the laser operates at or near the shot-noise limit. In the 10 Hz to 10 kHz band where terrestrial GW detectors operate, technical noise masked and overwhelmed the observation of squeezing. For example, the laser relaxation oscillation as well as acoustic disturbances and thermal fluctuations can be many orders of magnitude larger than shot-noise. Until recently, it was not certain that a laser field could even be squeezed and matched to the slow oscillation period of GWs. Second, it was previously not known whether squeezed light was fully compatible with other extremely sophisticated technologies employed in GW detectors, such as signal recycling. Third, the technology to reliably produce stable and strong squeezing with large squeeze factors was lacking. Long-term observation of strong squeezing was a technical challenge until recently.
These challenges have all been overcome in the past decade. All the open questions have now been satisfactorily addressed. This development is very timely as many known advanced classical interferometric techniques have almost been exhausted. Many remaining classical improvements are becoming increasingly difficult and expensive to implement.
Generation of squeezing in the audio band
A major breakthrough in achieving squeezing in the audio band was the insight that the dominant noise at audio frequencies that degrades squeezed light generation couples via the coherent laser field that was used to control the length of the squeezed light laser resonator, whereas noise coupling via the second harmonic pump field is insignificant67,68. This led to the first demonstration of audio band squeezing at frequencies down to 200 Hz69 (see Figure 8a). There, the length of the squeezing resonator was stabilized without a bright control beam by using the phase sensitivity of the squeezing itself—a technique known as quantum noise locking70. Subsequently, a coherent beam control scheme was invented71 for simultaneous control of both the squeezing resonator length and the squeezing angle34. Shortly thereafter another noise source was identified and mitigated, which allowed for squeezing of more than 6 dB throughout the audio band down to 1 Hz72. This noise source arose because of tiny numbers of photons that were scattered from the main laser beam and were rescattered into the audio band squeezing mode after having experienced a frequency shift due to vibrations and thermal expansions of potential scattering surfaces, an effect known as parasitic interferences. As bright laser beams cannot be completely avoided, the recipe for the generation of audio band squeezing turned out to be fourfold: avoid scattering by using ultraclean super-polished optics, avoid rescattering by carefully blocking all residual faint beams caused by imperfect anti-reflecting surfaces, reduce the vibrationally and thermally excited motion of all mechanical parts that could potentially act as a re-scattering surface and avoid pointing fluctuations73.
Compatibility of squeezing with other interferometer techniques
Current detectors achieve their exquisite sensitivity to GWs because of their kilometre-scale arm lengths, the enormous light powers circulating in the enhancement resonators (arm, power- and signal-recycling cavities) and sophisticated pendulum suspensions that isolate the test-mass mirrors from the environment (Figure 3). When these techniques were developed, squeezing was not envisioned to become an integrated part of such a system. Building on existing theoretical work74,75, a series of experimental demonstrations of squeezed state injection into GW detectors were carried out. These included compatibility with power recycling, signal recycling76,77 and with the dynamical system of suspended, quasi-free mirrors78,79.
Generation of strong squeezing
Squeezing has significant impact in quantum metrology if large squeezing factors can be produced. Squeezing of 3 dB improves the signal-to-noise ratio by a factor of √2, equivalent to doubling the power of the coherent laser input. Squeezing of 10 dB corresponds to a 10-fold power increase. Remarkably, the experimentally demonstrated squeezing factors have virtually exploded in recent years80,81,82, culminating in values as large as 12.7 dB83. All the squeezing factors above 10 dB were observed with monolithic resonators and at MHz frequencies. However, reduced optical loss in non-monolithic resonators and a careful elimination of parasitic interferences should in principle enable such factors also in the GW band. An 8 to 10 dB improvement based on strong squeezing seems realistic for future GW detectors in their shot-noise-limited band83.
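The power equivalence stated here is just the decibel relation combined with the fact that the shot-noise-limited signal-to-noise ratio scales as the square root of laser power; a brief illustrative sketch:

```python
# Equivalent laser-power increase for a given squeezing factor in dB.
def equivalent_power_factor(squeezing_db):
    return 10 ** (squeezing_db / 10)

for s_db in (3, 10, 12.7):
    print(f"{s_db} dB squeezing ~ {equivalent_power_factor(s_db):.1f}x laser power")
# 3 dB ~ 2.0x power (SNR x sqrt(2)), 10 dB ~ 10.0x, 12.7 dB ~ 18.6x.
```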
The first squeezed light laser for GW detection
On the basis of the previous achievements reviewed here, very recently, the first squeezed light laser for the continuous operation in GW detectors was designed and completed84,85. Up to 9 dB of squeezing over the entire GW detection band has been demonstrated (Figure 8b). This laser produces squeezed vacuum states and is fully controlled via co-propagating frequency-shifted bright control beams. This 9 dB squeezing factor is limited by technical effects: the squeezing resonator has to have an adjustable air gap to allow for an easy way to apply length control. The anti-reflection coated surface in the resonator introduces additional loss and reduces the escape efficiency. Moreover, a Faraday isolator has to be used in the squeezed beam path in order to eliminate parasitic interferences. This rotator produces a single-pass photon loss of about 2%. This squeezed light source is designated for continuous operation in the GEO600 GW detector. A squeezed light source based on a design that should have less sensitivity to retro-scattered light86 is being prepared for deployment on one of the most sensitive detectors, the 4 km LIGO detector in Hanford, Washington.
The final test of the squeezed light technology for GW astronomy can be carried out only in a (large scale) GW detector. During operation, such a detector takes data 24 h a day, 7 days a week, and future experiments will test appropriate electro-optical auto-alignment systems that continuously provide a high interference contrast between the extremely dim squeezed laser mode and the high-power laser mode at the interferometer's central beam splitter. We are convinced that these experiments will be successful thereby establishing quantum metrology as a key technology for all next generations of GW detectors.
As squeezed light builds on quantum correlations between photons, loss of photons reduces the squeezing effect. Future research therefore has to reduce photon loss in GW detectors down to a few percent in order to make use of the full potential of squeezed laser light. State-of-the-art optical technologies are already able to provide such low loss. With optical loss sufficiently reduced, further enhancing the nonclassical noise suppression of squeezed light lasers becomes worthwhile again, preparing the ground for an even higher level of quantum noise reduction.
When targeting signal frequencies at which quantum shot-noise is dominating, squeezing will certainly be combined with further increased light powers. When targeting frequencies at which thermal noise and technical noise sources dominate, such as photon scattering, the squeezed light technology will be embedded in a comprehensive low-noise concept providing a new and versatile starting point. This will enable the combination of low shot-noise, QND techniques and the cryogenic operation of mirror test masses, thereby helping to make GW astronomy a reality.
How to cite this article: Schnabel, R. et al. Quantum metrology for gravitational wave astronomy. Nat. Commun. 1:121 doi: 10.1038/ncomms1122 (2010).
Einstein, A. Die Grundlage der allgemeinen Relativitätstheorie. Ann. Phys. 49, 769–822 (1916).
Sathyaprakash, B. S. & Schutz, B. F. Physics, astrophysics and cosmology with gravitational waves. Living Rev. Relativity 12, 2 (2009). A comprehensive review on state of the art research on gravitational wave sources, detection and analysis.
GWIC—The Gravitational Wave International Committee (2010) http://gwic.ligo.org/roadmap/.
Weisberg, J. M. & Taylor, J. H. in Binary Radio Pulsars, ASP Conf. Series 328 (eds Rasio, F. A. & Stairs, I. H.) 25–31 (Ast. Soc. Pac., 2005).
Abbott, B. et al. Beating the spin-down limit on gravitational wave emission from the Crab Pulsar. Astrophys. J. 683, L45–L49 (2008); erratum: Astrophys. J. 706, L203–L204 (2009).
Baiotti, L., Giacomazzo, B. & Rezzolla, L. Accurate evolutions of inspiralling neutron-star binaries: prompt and delayed collapse to a black hole. Phys. Rev., D 78, 084033 (2008).
Chandrasekhar, S. The Mathematical Theory of Black Holes (Oxford University Press, 1998).
Schutz, B. F. Determining the Hubble constant from gravitational wave observations. Nature 323, 310–311 (1986).
Dimmelmeier, H., Font, J. A. & Müller, E. Gravitational waves from relativistic rotational core collapse. Astrophys. J. 560, L163–L166 (2001).
Baiotti, L. & Rezzolla, L. Challenging the paradigm of singularity excision in gravitational collapse. Phys. Rev. Lett. 97, 141101 (2006).
Ostriker, J. P. & Gunn, J. E. On the nature of pulsars. I. Theory. Astrophys. J. 157, 1395–1417 (1969).
Abbott, B. P. et al. All-sky LIGO search for periodic gravitational waves in the Early Fifth-Science-Run Data. Phys. Rev. Lett. 102, 111102 (2009).
Peebles, P. J. E. Principles of Physical Cosmology (Princeton University Press, 1993).
Maggiore, M. Gravitational wave experiments and early universe cosmology. Phys. Rep. 331, 283–367 (2000).
Bennett, C. L. et al. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: preliminary maps and basic results. Astrophys. J. Suppl. Ser. 148, 1–27 (2003).
Spergel, D. N. et al. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations: determination of cosmological parameters. Astrophys. J. Suppl. Ser. 148, 175–194 (2003).
The LIGO Scientific Collaboration & The Virgo Collaboration. An upper limit on the stochastic gravitational-wave background of cosmological origin. Nature 460, 990–994 (2009). This work constrained the energy density of the stochastic GW background thereby ruling out certain models of early Universe evolution, as well as certain cosmic (super) string models.
Arai, K. et al. Status of Japanese gravitational wave detectors. Class. Quantum Grav. 26, 204020 (2009).
Michelson, A. A. & Morley, E. W. On the relative motion of the earth and the luminiferous ether. Am. J. Sci. 34, 333–345 (1887). This paper reports one of the most famous experiments in physics. The speed of light was found to be independent of the relative motion of the Earth suggesting the absence of an 'ether'. The principle that the speed of light does not depend on the speed of the observer formed the basis of Einstein's special theory of relativity.
Caves, C. M. Quantum-mechanical radiation-pressure fluctuations in an interferometer. Phys. Rev. Lett. 45, 75–79 (1980).
Harry, G. M. & The LIGO Scientific Collaboration Advanced LIGO: the next generation of gravitational wave detectors. Class. Quantum Grav. 27, 084006 (2010).
Weinstein, A. Advanced LIGO optical configuration and prototyping effort. Class. Quantum Grav. 19, 1575–1584 (2002).
Advanced LIGO Team. Advanced LIGO reference design Technical Report LIGO-M060056, LIGO Project (2009).
Frede, M., Wilhelm, R., Kracht, D. & Fallnich, C. Nd:YAG ring laser with 213 W linearly polarized fundamental mode output power. Opt. Express 13, 7516–7519 (2005).
Kuroda, K. & The LCGT Collaboration The status of LCGT. Class. Quantum Grav. 23, S215–S221 (2006).
LISA—Laser Interferometer Space Antenna (2010) http://lisa.nasa.gov/.
Punturo, M. et al. The third generation of gravitational wave observatories and their science reach. Class. Quantum Grav. 27, 084007 (2010).
Einstein Telescope (2010) http://www.et-gw.eu.
Caves, C. M. Quantum-mechanical noise in an interferometer. Phys. Rev. D 23, 1693–1708 (1981). This was the first proposal to use squeezed states to improve the sensitivity of laser interferometers. It was realized that the squeezed light has to enter the interferometer's normally unused port to replace relevant vacuum fluctuations.
Yuen, H. P. Two-photon coherent states of the radiation field. Phys. Rev. A 13, 2226–2243 (1976).
Walls, D. F. Squeezed states of light. Nature 306, 141–146 (1983).
Breitenbach, G., Schiller, S. & Mlynek, J. Measurement of the quantum states of squeezed light. Nature 387, 471–475 (1997).
Dodonov, V. V. 'Nonclassical' states in quantum optics: a 'squeezed' review of the first 75 years. J. Opt. B Quantum Semiclassical Opt. 4, R1–R33 (2002).
Gerry, C. C. & Knight, P. L. Introductory Quantum Optics (Cambridge University Press, 2004).
DiGuglielmo, J., Hage, B., Franzen, A., Fiurášek, J. & Schnabel, R. Experimental characterization of Gaussian quantum communication channels. Phys. Rev. A 76, 012323 (2007).
Holland, M. J. & Burnett, K. Interferometric detection of optical phase shifts at the Heisenberg limit. Phys. Rev. Lett. 71, 1355–1358 (1993).
Walther, P., Pan, J- W., Aspelmeyer, M., Ursin, R., Gasparoni, S. & Zeilinger, A. De Broglie wavelength of a non-local four-photon state. Nature 429, 158–161 (2004).
Afek, I., Ambar, O. & Silberberg, Y. High-NOON states by mixing quantum and classical light. Science 328, 879–881 (2010).
Slusher, R. E., Hollberg, L. W., Yurke, B., Mertz, J. C. & Valley, J. F. Observation of squeezed states generated by four-wave mixing in an optical cavity. Phys. Rev. Lett. 55, 2409–2412 (1985).
Xiao, M., Wu, L.- A. & Kimble, H. J. Precision measurement beyond the shot-noise limit. Phys. Rev. Lett. 59, 278–281 (1987).
Grangier, P., Slusher, R. E., Yurke, B. & LaPorta, A. Squeezed-light-enhanced polarization interferometer. Phys. Rev. Lett. 59, 2153–2156 (1987).
Braginsky, V. B., Khalili, F. Y. & Thorne, K. S. Quantum Measurement (Cambridge University Press, 1995).
|
Earth Observation Satellites for Navigation Timing and Ranging (ENTR) are a vital tool for a range of applications, from military and defense to civilian navigation and communication. These satellites are designed to provide precise and accurate timing and positioning information, which is essential for a variety of activities, including air traffic control, maritime navigation, and GPS systems.
To fully understand the capabilities and applications of ENTR satellites, it is important to be familiar with the terminology used in this field. This glossary of terms provides a comprehensive overview of the key concepts and definitions related to ENTR satellites.
Firstly, let’s start with the basics. ENTR satellites are designed to provide precise timing and positioning information using signals transmitted from space. These signals are received by ground-based receivers, which use the information to determine the location and time of the receiver.
One of the key terms related to ENTR satellites is Global Navigation Satellite System (GNSS). This refers to a network of satellites that provide positioning and timing information to users around the world. The most well-known GNSS is the Global Positioning System (GPS), which is operated by the United States government.
Another important term is satellite clock. This refers to the clock on board the satellite, which is used to generate the timing signals that are transmitted to ground-based receivers. The accuracy of the satellite clock is critical to the accuracy of the timing information provided by the satellite.
In addition to providing timing and positioning information, ENTR satellites can also be used for ranging. Ranging refers to the measurement of the distance between the satellite and the ground-based receiver. This information can be used to determine the location of the receiver with even greater accuracy.
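As a rough sketch of the ranging idea (not the interface of any actual ENTR receiver; the function name, timestamps, and clock-bias handling are illustrative assumptions), the following Python snippet converts a signal travel time into a distance:

```python
# Illustrative sketch: range from one-way signal travel time, with an
# optional known receiver clock bias. All numbers below are made up.
C = 299_792_458.0  # speed of light in m/s

def pseudorange_m(t_transmit_s: float, t_receive_s: float, clock_bias_s: float = 0.0) -> float:
    """Distance estimate in metres from transmit/receive timestamps."""
    return C * (t_receive_s - t_transmit_s - clock_bias_s)

# A hypothetical 74.2 ms travel time corresponds to roughly a 22,000 km slant range.
print(f"{pseudorange_m(0.0, 0.0742) / 1000:.0f} km")
```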
One of the key challenges in designing ENTR satellites is dealing with the effects of the Earth’s atmosphere on the signals transmitted from space. This is where the term ionosphere comes in. The ionosphere is a layer of the Earth’s atmosphere that contains charged particles, which can affect the transmission of signals from space. ENTR satellites are designed to compensate for these effects to ensure accurate timing and positioning information.
Another important term related to ENTR satellites is orbital altitude. This refers to the height of the satellite above the Earth’s surface. The orbital altitude of ENTR satellites is carefully chosen to ensure optimal coverage and accuracy.
Finally, it is important to be familiar with the concept of signal-to-noise ratio (SNR). This refers to the ratio of the strength of the signal received by the ground-based receiver to the background noise. A higher SNR indicates a stronger signal and better accuracy.
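A minimal sketch of the SNR arithmetic, assuming made-up power values rather than real ENTR link budgets:

```python
import math

# Minimal sketch: signal-to-noise ratio expressed in decibels.
def snr_db(signal_power_w: float, noise_power_w: float) -> float:
    return 10.0 * math.log10(signal_power_w / noise_power_w)

print(f"{snr_db(2e-13, 1e-14):.1f} dB")  # 13.0 dB: the higher, the cleaner the signal
```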
In conclusion, Earth Observation Satellites for Navigation Timing and Ranging are a critical tool for a range of applications, from military and defense to civilian navigation and communication. By understanding the terms defined above, we can better appreciate the capabilities and limitations of these important satellites. |
meteorite
- Recovery of meteorites
- Types of meteorites
- Association of meteorites with asteroids
- The ages of meteorites and their components
- Cosmic-ray exposure ages of meteorites
- Meteorites and the formation of the early solar system
meteorite, any fairly small natural object from interplanetary space—i.e., a meteoroid—that survives its passage through Earth’s atmosphere and lands on the surface. In modern usage the term is broadly applied to similar objects that land on the surface of other comparatively large bodies. For instance, meteorite fragments have been found in samples returned from the Moon, and the robotic rover Opportunity has identified at least one meteorite on the surface of Mars. The largest meteorite that has been identified on Earth was found in 1920 in Namibia and was named the Hoba meteorite. It measures 2.7 metres (9 feet) across, is estimated to weigh nearly 60 tons, and is made of an alloy of iron and nickel. The smallest meteorites, called micrometeorites, range in size from a few hundred micrometres (μm) to as small as about 10 μm and come from the population of tiny particles that fill interplanetary space (see interplanetary dust particle).
Laboratory, astronomical, and theoretical studies show that most discrete meteorites found on Earth are fragments of asteroids that orbit in the inner portion of the main asteroid belt, between about 2.1 and 3.3 astronomical units (AU) from the Sun. (One astronomical unit is the average distance from Earth to the Sun—about 150 million km [93 million miles].) It is in this region that strong gravitational perturbations by the planets, especially Jupiter, can put meteoroids into Earth-crossing orbits. Not all meteoroids need to have formed in this region, however, as there are a number of processes that can cause their orbits to migrate over long time periods. Fewer than 1 percent of meteorites are thought to come from the Moon or Mars. On the other hand, there is good reason to believe that a significant fraction of the micrometeorites found drifting down through Earth’s upper atmosphere come from comets. Although evidence from studies of meteors suggests that a small fraction of the cometary material that enters Earth’s atmosphere in discrete chunks possesses sufficient strength to survive to reach the surface, it is not generally believed that any of this material exists in meteorite collections. For further discussion of the sources of meteorites and the processes by which they are brought to Earth, see meteor and meteoroid: Reservoirs of meteoroids in space and Directing meteoroids to Earth.
The principal driving force behind meteorite studies is the fact that small bodies such as asteroids and comets are most likely to preserve evidence of events that took place in the early solar system. There are at least two reasons to expect that this is the case. First, when the solar system began to form, it was composed of gas and fine-grained dust. The assembly of planet-sized bodies from this dust almost certainly involved the coming together of smaller objects to make successively larger ones, beginning with dust balls and ending, in the inner solar system, with the rocky, or terrestrial, planets—Mercury, Venus, Earth, and Mars. In the outer solar system the formation of Jupiter, Saturn, and the other giant planets is thought to have involved more than simple aggregation, but their moons—and comets—probably did form by this basic mechanism. Available evidence indicates that asteroids and comets are leftovers of the intermediate stages of the aggregation mechanism. They are therefore representative of bodies that formed quite early in the history of the solar system. (See also solar system: Origin of the solar system; planetesimal.) Second, in the early solar system various processes were in operation that heated up solid bodies. The primary ones were decay of short-lived radioactive isotopes within the bodies and collisions between the bodies as they grew. As a result, the interiors of larger bodies experienced substantial melting, with consequent physical and chemical changes to their constituents. Smaller bodies, on the other hand, generally radiated away this heat quite efficiently, which allowed their interiors to remain relatively cool. Consequently, they should preserve to some degree the dust and other material from which they formed. Indeed, certain meteorites do appear to preserve very ancient material, some of which predates the solar system.
|
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
For a unimodal distribution, negative skew commonly indicates that the tail is on the left side of the distribution, and positive skew indicates that the tail is on the right. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution, but can also be true for an asymmetric distribution where one tail is long and thin, and the other is short but fat.
Within a graph of a distribution, the values on one side typically taper differently from the values on the other side. These tapering sides are called tails, and they provide a visual means to determine which of the two kinds of skewness a distribution has:
- negative skew: the left tail is longer; the mass of the distribution is concentrated on the right.
- positive skew: the right tail is longer; the mass of the distribution is concentrated on the left.
Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, which is probably a negative outlier, e.g. (40, 49, 50, 51). The mean of the sequence then becomes 47.5, and the median is 49.5. Based on the formula for nonparametric skew, $(\mu - \nu)/\sigma$, the skew is negative. Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5 and the median is 50.5.
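As a quick check of the arithmetic above, here is a small sketch using only the Python standard library (the function name is illustrative, not from any statistics package):

```python
import statistics

def nonparametric_skew(values):
    """(mean - median) / standard deviation; the sign indicates the direction of skew."""
    return (statistics.mean(values) - statistics.median(values)) / statistics.stdev(values)

print(nonparametric_skew([40, 49, 50, 51]))  # negative: mean 47.5 is below the median 49.5
print(nonparametric_skew([49, 50, 51, 60]))  # positive: mean 52.5 is above the median 50.5
```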
As mentioned earlier, zero skewness for a unimodal distribution does not necessarily imply that the distribution is symmetric. However, a symmetric unimodal or multimodal distribution always has zero skewness.
The skewness is not directly related to the relationship between the mean and median: a distribution with negative skew can have its mean greater than or less than the median, and likewise for positive skew.
In the older notion of nonparametric skew, defined as $(\mu - \nu)/\sigma$, where $\mu$ is the mean, $\nu$ is the median, and $\sigma$ is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not always have the same sign: while they agree for some families of distributions, they differ in some of the cases, and conflating them is misleading.
If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4,... Note, however, that the converse is not true in general, i.e. zero skewness (defined below) does not imply that the mean is equal to the median.
A 2005 journal article points out:
Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median.
For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed.
The skewness of a random variable X is the third standardized moment $\tilde{\mu}_3$, defined as:
$$\tilde{\mu}_3 = \mathrm{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] = \frac{\mu_3}{\sigma^3} = \frac{\mathrm{E}\!\left[(X-\mu)^3\right]}{\left(\mathrm{E}\!\left[(X-\mu)^2\right]\right)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}}$$
where $\mu$ is the mean, $\sigma$ is the standard deviation, E is the expectation operator, $\mu_3$ is the third central moment, and $\kappa_t$ are the t-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant $\kappa_3$ to the 1.5th power of the second cumulant $\kappa_2$. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant. The skewness is also sometimes denoted Skew[X].
If $\sigma$ is finite, $\mu$ is finite too, and skewness can be expressed in terms of the non-central moment $\mathrm{E}[X^3]$ by expanding the previous formula:
$$\tilde{\mu}_3 = \frac{\mathrm{E}[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}$$
Skewness can be infinite, as for distributions whose third cumulant is infinite, or it can be undefined, as for distributions whose third central moment does not exist.
Many commonly used distributions have finite skewness.
For a sample of n values, two natural estimators of the population skewness are
$$b_1 = \frac{m_3}{s^3} \qquad \text{and} \qquad g_1 = \frac{m_3}{m_2^{3/2}} = \frac{\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^3}{\left[\tfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right]^{3/2}},$$
where $\bar{x}$ is the sample mean, s is the sample standard deviation, $m_2$ is the (biased) sample second central moment, and $m_3$ is the sample third central moment. $g_1$ is a method of moments estimator.
Another common definition of the sample skewness is
$$G_1 = \frac{k_3}{k_2^{3/2}} = \frac{\sqrt{n(n-1)}}{n-2}\, g_1,$$
where $k_3$ is the unique symmetric unbiased estimator of the third cumulant and $k_2 = s^2$ is the symmetric unbiased estimator of the second cumulant (i.e. the sample variance). This adjusted Fisher–Pearson standardized moment coefficient is the version found in Excel and several statistical packages including Minitab, SAS and SPSS.
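A minimal sketch of the three estimators above using only the standard library (the function name and example data are made up for illustration):

```python
import math

def sample_skewness(x):
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n      # biased sample second central moment
    m3 = sum((v - mean) ** 3 for v in x) / n      # sample third central moment
    s = math.sqrt(n / (n - 1) * m2)               # sample standard deviation
    g1 = m3 / m2 ** 1.5                           # method-of-moments estimator
    b1 = m3 / s ** 3
    G1 = g1 * math.sqrt(n * (n - 1)) / (n - 2)    # adjusted Fisher-Pearson coefficient
    return b1, g1, G1

print(sample_skewness([2, 3, 5, 8, 13, 21, 34]))  # right-skewed data: all three are positive
```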
Under the assumption that the underlying random variable is normally distributed, it can be shown that all three ratios $b_1$, $g_1$ and $G_1$ are unbiased and consistent estimators of the population skewness $\gamma_1 = 0$, with $\sqrt{n}\,g_1 \xrightarrow{d} N(0, 6)$, i.e., their distributions converge to a normal distribution with mean 0 and variance 6 (Fisher, 1930). The variance of the sample skewness is thus approximately $6/n$ for sufficiently large samples. More precisely, in a random sample of size n from a normal distribution,
$$\operatorname{var}(g_1) = \frac{6(n-2)}{(n+1)(n+3)}.$$
In normal samples, $b_1$ has the smaller variance of the three estimators, with
$$\operatorname{var}(b_1) < \operatorname{var}(g_1) < \operatorname{var}(G_1).$$
For non-normal distributions, $b_1$, $g_1$ and $G_1$ are generally biased estimators of the population skewness $\gamma_1$; their expected values can even have the opposite sign from the true skewness. For instance, a mixed distribution consisting of very thin Gaussians centred at −99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness of about −9.77, but in a sample of 3 the expected sample skewness is about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.
Skewness is a descriptive statistic that can be used in conjunction with the histogram and the normal quantile plot to characterize the data or distribution.
Skewness indicates the direction and relative magnitude of a distribution's deviation from the normal distribution.
With pronounced skewness, standard statistical inference procedures such as a confidence interval for a mean will be not only incorrect, in the sense that the true coverage level will differ from the nominal (e.g., 95%) level, but they will also result in unequal error probabilities on each side.
Skewness can be used to obtain approximate probabilities and quantiles of distributions (such as value at risk in finance) via the Cornish-Fisher expansion.
Many models assume normal distribution; i.e., data are symmetric about the mean. The normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric. So, an understanding of the skewness of the dataset indicates whether deviations from the mean are going to be positive or negative.
D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.
Other measures of skewness have been used, including simpler calculations suggested by Karl Pearson (not to be confused with Pearson's moment coefficient of skewness, see above). These other measures are:
The Pearson mode skewness, or first skewness coefficient, is defined as
$$\frac{\text{mean} - \text{mode}}{\text{standard deviation}}.$$
The Pearson median skewness, or second skewness coefficient, is defined as
$$\frac{3\,(\text{mean} - \text{median})}{\text{standard deviation}},$$
which is a simple multiple of the nonparametric skew.
Bowley's measure of skewness (from 1901), also called Yule's coefficient (from 1912), is defined as:
$$\frac{\frac{Q(3/4) + Q(1/4)}{2} - Q(1/2)}{\frac{Q(3/4) - Q(1/4)}{2}},$$
where Q is the quantile function (i.e., the inverse of the cumulative distribution function). The numerator is the difference between the average of the upper and lower quartiles (a measure of location) and the median (another measure of location), while the denominator is the semi-interquartile range $\bigl(Q(3/4) - Q(1/4)\bigr)/2$, which for symmetric distributions is the MAD measure of dispersion.
Other names for this measure are Galton's measure of skewness, the Yule–Kendall index and the quartile skewness.
Similarly, Kelly's measure of skewness is defined as
$$\frac{Q(9/10) + Q(1/10) - 2\,Q(1/2)}{Q(9/10) - Q(1/10)}.$$
A more general formulation of a skewness function was described by Groeneveld and Meeden (1984):
$$\gamma(u) = \frac{Q(u) + Q(1-u) - 2\,Q(1/2)}{Q(u) - Q(1-u)}.$$
The function γ(u) satisfies −1 ≤ γ(u) ≤ 1 and is well defined without requiring the existence of any moments of the distribution. Bowley's measure of skewness is γ(u) evaluated at u = 3/4 while Kelly's measure of skewness is γ(u) evaluated at u = 9/10. This definition leads to a corresponding overall measure of skewness defined as the supremum of this over the range 1/2 ≤ u < 1. Another measure can be obtained by integrating the numerator and denominator of this expression.
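A small sketch of Bowley's and Kelly's measures computed from sample quantiles; it uses the standard library's inclusive quantile method in place of the quantile function Q, and the example data are made up:

```python
import statistics

def bowley_skewness(values):
    q1, q2, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def kelly_skewness(values):
    d = statistics.quantiles(values, n=10, method="inclusive")
    return (d[8] + d[0] - 2 * d[4]) / (d[8] - d[0])

data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 9]  # a mildly right-skewed sample
print(bowley_skewness(data), kelly_skewness(data))
```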
Quantile-based skewness measures are at first glance easy to interpret, but they often show significantly larger sample variations than moment-based methods. This means that often samples from a symmetric distribution (like the uniform distribution) have a large quantile-based skewness, just by chance.
Groeneveld and Meeden have suggested, as an alternative measure of skewness,
$$\operatorname{skew}(X) = \frac{\mu - \nu}{\mathrm{E}\,|X - \nu|},$$
where $\mu$ is the mean, $\nu$ is the median, |...| is the absolute value, and E() is the expectation operator. This is closely related in form to Pearson's second skewness coefficient.
Use of L-moments in place of moments provides a measure of skewness known as the L-skewness.
A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew. If X is a random variable taking values in the d-dimensional Euclidean space, X has finite expectation, X' is an independent identically distributed copy of X, and $\|\cdot\|$ denotes the norm in the Euclidean space, then a simple measure of asymmetry with respect to location parameter θ is
$$\operatorname{dSkew}(X) := 1 - \frac{\mathrm{E}\,\|X - X'\|}{\mathrm{E}\,\|X + X' - 2\theta\|} \quad \text{if } \Pr(X = \theta) \ne 1,$$
and dSkew(X) := 0 for X = θ (with probability 1). Distance skewness is always between 0 and 1, equals 0 if and only if X is diagonally symmetric with respect to θ (X and 2θ−X have the same probability distribution) and equals 1 if and only if X is a constant c (c ≠ θ) with probability one. Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness.
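A naive plug-in sketch of dSkew for scalar data and a chosen location parameter theta, averaging the kernel over all ordered pairs of sample points; this is an illustrative estimator written for this article, not the original authors' implementation:

```python
def distance_skewness(xs, theta):
    pairs = [(a, b) for a in xs for b in xs]
    num = sum(abs(a - b) for a, b in pairs)
    den = sum(abs(a + b - 2 * theta) for a, b in pairs)
    return 1.0 - num / den if den else 0.0

print(distance_skewness([-2, -1, 0, 1, 2], 0))  # 0.0: data diagonally symmetric about 0
print(distance_skewness([0, 0, 1, 1, 10], 1))   # > 0: asymmetric about its median
```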
The medcouple is a scale-invariant robust measure of skewness, with a breakdown point of 25%. It is the median of the values of the kernel function
$$h(x_i, x_j) = \frac{(x_j - x_m) - (x_m - x_i)}{x_j - x_i},$$
taken over all couples $(x_i, x_j)$ such that $x_i \le x_m \le x_j$, where $x_m$ is the median of the sample $\{x_1, x_2, \ldots, x_n\}$. It can be seen as the median of all possible quantile skewness measures.
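A naive O(n²) sketch of the medcouple kernel for illustration; couples tied exactly at the median, which have a special kernel definition, are simply skipped here:

```python
import statistics

def medcouple(xs):
    xm = statistics.median(xs)
    lower = [x for x in xs if x <= xm]
    upper = [x for x in xs if x >= xm]
    h = [((xj - xm) - (xm - xi)) / (xj - xi)
         for xi in lower for xj in upper if xj != xi]
    return statistics.median(h)

print(medcouple([1, 2, 2, 3, 3, 3, 4, 4, 5, 9]))  # positive for right-skewed data
```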
|
The Stamp Act of 1765
On March 22, 1765, Great Britain's Parliament gathered and passed the Stamp Act of 1765, which was to take effect in the thirteen colonies on November 1, 1765. The Stamp Act taxed Americans directly on all materials that were used for legal purposes or commercial use; a stamp distributor would collect the tax and, in exchange, a stamp was given. The colonists had no representation in Parliament and, once they heard of the act, started protesting to repeal it. After months of colonists vehemently protesting and Great Britain's economy slowing from non-importation policies in America, Parliament finally repealed the act on March 18, 1766, making the colonists happy, but it also passed the Declaratory Act on the same day as a compromise, which stated that Parliament had the same right to lay taxes on America as it did in Great Britain.
The Stamp Act was passed in British Parliament on February 17, 1765 and received Royal Assent on March 22, 1765. The Stamp Act was proposed by Prime Minister George Grenville and was passed without debate, taking effect in November of that year. Prior to the Stamp Act there was a war between Great Britain and France. Though Great Britain won the war, it came at the cost of a deep debt. British Parliament recognized that the colonies were lightly taxed and felt that they should pay more; thus came the Stamp Act, which required all colonial citizens to pay a stamp duty or tax on all official papers from official
When the British passed the Stamp Act, the colonists reacted in different ways. The Stamp Act, passed in 1765, put taxes on all printed goods in the colonies, specifically newspapers, legal documents, dice, and playing cards. The British enforced this law by having merchants put a stamp on all printed goods to show that the colonist had paid the tax.
Financial stability of the colonial people was often thought to be put at stake with the introduction of new taxes and regulations, which caused much frustration. Before Parliament had laid out any questionable taxes (i.e., the Stamp Act), the citizens appeared perfectly content with Parliament's power (Doc C). The Stamp Act required that every document used by the colonists be stamped and taxed. One can see why this would anger people (as paper was the “big thing” before modern technology). Chaos ensued; the colonists were not fond of tax collectors whatsoever. The institution of things similar to the Stamp Act was a major factor (of the economic kind) that led to the revolution because it benefited almost no one in the colonies, and would
One of the things that happened soon after the Stamp Act was passed was colonial resistance. Colonists did not want to be taxed for a war they didn't even fight in or have a say in. The war was France and Britain fighting over who got control over North America. All the colonists were doing was living there, and the war did not involve them. Also, violators of the Stamp Act could be tried and convicted without juries in the vice-admiralty courts.
This act required taxed stamps to be placed on printed materials. These stamps had to be purchased using the British sterling coin, which was not prevalent in the colonies. Colonists saw the pitfalls of this act and began to seek equal liberty with the British Parliament. Not yet seeking independence, the colonists wanted British leaders to rethink how government worked. Opposition continued to rise as these ideals were rejected by royal rule.
The Stamp Act of 1765 was basically a tax that was enforced on every piece of paper sold by British agents. This tax was to pay for the British soldiers who were stationed and living amongst the colonists. The British government claimed the soldiers were there for protection; however, they were really there to enforce the Proclamation Line and see to it that no one took any more Indian land.
“Colonial taxes are very unreasonable. I absolutely disapprove of this act the government made. It is very unfair to the poor people who can barely afford anything!” he yells loudly as he sits down on the wooden chair. “Maybe it will get better…” The wife comforts her husband as she sits on the chair.
The way the colonists reacted to the Stamp Act was that they boycotted British goods. King George III reacted by repealing the Stamp Act and putting the Declaratory Act in place that same day. The Declaratory Act was a law that stated that Parliament had the right to tax the colonies.
The Stamp Act was enacted on March 22, 1765. The Stamp Act was a tax that people had to pay for every piece of printed paper they used. The Stamp Act was enacted because of the French and Indian War. After the war the British were deep in debt, so they had to find a way to pay it back. They also used the money that they collected to help pay for the costs of defending and protecting the American frontier near the Appalachian Mountains.
The Stamp Act
The Stamp Act was a tax placed on the American colonies by the British in 1765. It said they had to pay a tax on all sorts of printed materials, such as newspapers, magazines and legal documents. It was called the Stamp Act because the colonies were supposed to buy paper from Britain. The items bought had to have an official stamp on them that showed the tax had been paid.
No Representation
The colonists
Americans were heading into Indian land without permission, when usually they had peacefully bought land from the Indians. The Indians had had enough of the colonists. In 1765, Benjamin Franklin wrote a letter about the Stamp Act and said repealing the act would be “the wisest course for you and I to take” (Doc G). Franklin appealed to the British House of Commons on the issue. Colonists rioted, and tax collectors were tarred and feathered if ever seen out in public.
At the dawn of the 1770s, American colonial resentment of the British Parliament in London had been steadily increasing for some time. Retaliating in 1766, Parliament repealed most of the taxes but issued the Declaratory Act as a reinforcement of Parliament's supremacy. In this fascinating exchange, we see that Parliament identified and responded to the colonists' main claim: that Parliament had no right to directly tax colonists who had no representation in Parliament itself. By asserting parliamentary supremacy while simultaneously repealing the Stamp Act and scaling back the Sugar Act, Parliament essentially established the hill it would die on, that being its legitimacy. With the stage set for colonial conflict in the 1770s, all but one
By creating a list of violators of the nonimportation agreements, Adams encouraged their punishment and thereby united the colonies in their effort. It was one of the first protests of taxation without representation in the colonies, and it showed the colonists that rebellion was possible with a strong |
UHF television broadcasting
UHF television broadcasting is the use of ultra high frequency (UHF) radio for over-the-air transmission of television signals. UHF frequencies are used for both analog and digital television broadcasts. UHF channels are typically given higher channel numbers, like the US arrangement with VHF channels 2 to 13, and UHF channels numbered 14 to 83.
UHF broadcasting became possible due to the introduction of new high-frequency vacuum tubes developed by Philips immediately prior to the outbreak of World War II. These were used in experimental television receivers in the UK in the 1930s, and became widely used during the war as radar receivers. Surplus tubes flooded the market in the post-war era. At the same time, the development of color television was taking its first steps, initially based on incompatible transmission systems. The US FCC set aside a block of the then-unused and now-practical UHF frequencies for color television use. The introduction of the backward-compatible NTSC standard led to these channels being released for any television use in 1952.
Early receivers were generally less efficient at UHF band reception, and the signals are also subject to more environmental interference. Additionally, UHF signals benefit less from diffraction, which can otherwise improve reception at long range and around obstructions. UHF generally had less clear signals and, for some markets, became the home of smaller broadcasters who were not willing to bid on the more coveted VHF allocations. These issues are greatly reduced with digital television, and today most over-the-air broadcasts take place on UHF, while VHF channels are being retired. To avoid the appearance of disappearing channels, digital broadcast systems have a virtual channel concept, allowing stations to keep their original VHF channel number while actually broadcasting on a UHF frequency.
Over time a number of former television channels in the upper UHF band have been re-designated for other uses. Channel 37 was never used in the US and some other countries in order to prevent interference with radio astronomy. In 1983, the US FCC removed channels 70 through 83 and reassigned them to Land Mobile Radio System. In 2009, with the move to digital television complete in the US, channels 52 through 69 were reallocated as the 700 MHz band for cellular telephone service. In 2011, Channel 51 was removed to prevent interference with the 700 MHz band. The US UHF channel map now includes channels 14 through 36 and 38 through 50.
UHF vs VHF
The most common type of antennas rely on the concept of resonance. Conductors, normally metal wires or rods, are cut to a length so that the desired radio signal will create a standing wave of electrical current within them. This means that antennas have a natural size, normally 1⁄2 of a wavelength long, which maximizes performance. Antennas designed to receive a given signal will almost always have similar dimensions.
Because the antenna size is based on the wavelength, UHF broadcasting can be received with much smaller antennas than VHF while still having the same gain. For instance, Channel 2 in the North American television frequencies is at 54 MHz, which corresponds to a wavelength of 5.5 m, and thus requires a dipole antenna about 2.75 m across. In comparison, the lowest channel in the UHF map, Channel 14, is at 470 MHz, a wavelength of 64 cm, or a dipole length of only 32 cm. A powerful VHF antenna using the log-periodic design might be as long as 3 m, while a UHF Yagi antenna with similar gain is often found placed in front of it, occupying perhaps 1 m. Modern UHF-only antennas often use the bedspread array and are less than a meter on a side.
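As a quick check of the antenna-size arithmetic above, a resonant half-wave dipole is half the free-space wavelength of the channel frequency (a simplified sketch; real antennas are trimmed slightly shorter):

```python
C = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_m(frequency_hz: float) -> float:
    return C / frequency_hz / 2

print(f"Channel 2 (54 MHz):   {half_wave_dipole_m(54e6):.2f} m")   # about 2.8 m
print(f"Channel 14 (470 MHz): {half_wave_dipole_m(470e6):.2f} m")  # about 0.32 m
```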
Another effect due to the shorter wavelength is that UHF signals can pass through smaller openings than VHF. These openings are created by any metal in the area, including lines of nails or screws in the roof and walls, electrical wiring, and the frames of doors and windows. A metal-framed window will present almost no barrier to a UHF signal, while a VHF signal may be attenuated or strongly diffracted. For strong signals, UHF antennas mounted beside the television are relatively useful, and medium-distance signals, 25–50 kilometres (16–31 mi), can often be picked up by attic mounted antennas.
On the downside, higher frequencies are less susceptible to diffraction. This means that the signals will not bend around obstructions as readily as a VHF signal. This is a particular problem for receivers located in depressions and valleys. Normally the upper edge of the landform acts as a knife-edge and causes the signal to diffract downwards. VHF signals will be seen by antennas in the valley, whereas UHF bends about 1⁄10 as much, and far less signal will be received. The same effect also makes UHF signals more difficult to receive around obstructions. VHF will quickly diffract around trees and poles, and the received energy immediately downstream will be about 40% of the original signal. In comparison, UHF blockage by the same obstruction will result in on the order of 10% being received.
Another difference is the nature of the electrical and radio noise encountered on the two frequency bands. UHF bands are subject to constant levels of low-level noise that appear as "snow" on an analog screen. VHF more commonly sees impulse noise that produces a sharp "blip" of noise, but leaves the signal clear at other times. This normally comes from local electrical sources, and can be mitigated by turning them off. This means that at a given received power, a UHF analog signal will appear worse than VHF, often significantly.
For these reasons, in order to allow UHF channels to provide the same ground coverage as VHF, ideally about 60 miles (97 km), the FCC allowed UHF broadcasters to operate at much higher power levels. For analog signals in the United States, VHF signals on channels 2 to 6, the low-VHF range, were limited to 100 kW, high-VHF on channels 7 to 13 to 316 kW, and UHF to 5 MW, well over 10 times the power of the low-VHF transmitter power limit. This greatly increased the cost of transmitting in these frequencies, both in electrical cost as well as the upfront cost of the equipment needed to reach those power levels.
The introduction of digital television (DTV) changed the relative outcome of these effects. DTV systems use a system known as forward error correction (FEC) which adds additional information to the signal to allow it to correct errors. This works well if the error rate is well known, in which case a fixed amount of extra information is added to the signal to correct for these errors. This works well with constant low-level interference found on UHF, which FEC can effectively eliminate. In comparison, VHF noise is largely unpredictable, consisting of periods of little noise followed by periods of almost complete signal loss. Forward error correction cannot easily address this situation. For this reason, DTV broadcasting was initially going to take place entirely on UHF.
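Real DTV standards use far stronger coding (for example, Reed-Solomon combined with convolutional or LDPC codes), but a toy repetition code is enough to show why steady low-level noise is correctable while a long burst of errors is not; the sketch below is purely illustrative and not any broadcast standard's scheme:

```python
def encode(bits):
    # Repeat every bit three times.
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each group of three received bits.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = encode(message)
received[4] ^= 1                      # one flipped bit (steady low-level noise)
print(decode(received) == message)    # True: the error is corrected
received[0:6] = [0] * 6               # a burst wiping out two whole symbols
print(decode(received) == message)    # False: correction fails
```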
In the US, the FCC initially wanted to move all stations to UHF, auctioning off the VHF frequencies for cell phone use. This required a large number of stations to move out of their current VHF channel assignments. Moving from one UHF channel to another is a fairly simple exercise and generally costs little to accomplish. Moving from VHF to UHF is a much more expensive proposition, generally requiring all new equipment, and a dramatic increase in power in order to maintain the same service area. DTV offsets the latter to a great degree, with the current FCC power limitations at 1 MW for UHF, 1⁄5 the former limits.
Nevertheless, moving from a 100 kW low-VHF analog signal to a 1 MW UHF signal is still a considerable change, which some broadcasters estimated could cost up to $4 million per station (although most estimates were much lower, on the order of $400,000). For this reason, channels in the high-VHF region were kept for television use. The power of these channels was also reduced, to 160 kW, about one-third of the earlier limit. Channels making the transition generally acquired a second channel allocation in the upper UHF region to test their new equipment, and then moved into the low-UHF or high-VHF once the conversion period was over. This adds some complexity to the system as a whole, as the antennas needed to receive VHF and UHF are very different.
In Australia, UHF was first anticipated in the mid-1970s with TV channels 27–69. The first UHF TV broadcasts in Australia were operated by Special Broadcasting Service (SBS) on channel 28 in Sydney and Melbourne starting in 1980, and translator stations for the Australian Broadcasting Corporation (ABC). The UHF band is now used extensively as ABC, SBS, commercial and public-access television services have expanded, particularly through regional areas.
The first Canadian television network was publicly owned Radio-Canada, the Canadian Broadcasting Corporation. Its stations, as well as that of the first private networks (CTV and TVA, created in 1961), are primarily VHF. More recent third-network operators initially signing-on in the 1970s or 1980s were often relegated to UHF, or (if they were to attempt to deploy on VHF) to reduced power or stations in outlying areas. Canada's VHF spectrum was already crowded with both domestic broadcasts and numerous American TV stations along the border.
The use of UHF to provide programming that otherwise would not be available, such as province-wide educational services (BC's Knowledge channel, TVOntario - the first UHF originating station in Canada - and Télé-Québec), French-language programming outside Québec, and ethnic/multilingual television services, has therefore become common. Third networks such as Quatre-Saisons or Global often rely heavily on UHF stations as repeaters or as a local presence in large cities where the VHF spectrum is largely already full. The original digital terrestrial television stations were all UHF broadcasts, although some digital broadcasts returned to VHF channels after the digital transition was completed in August 2011.
Digital Audio Broadcasting, deployed on a very limited scale in Canada in 2005 and largely abandoned, uses UHF frequencies in the L band from 1452 to 1492 MHz. There are currently no VHF Band III digital radio stations in Canada as, unlike in much of Europe, these frequencies are among the most popular for use by television stations.
In the Republic of Ireland, UHF was introduced in 1978 to augment the existing RTÉ One VHF 625-line transmissions and to provide extra frequencies for the new RTÉ Two channel. The first UHF transmitter site was Cairn Hill in Co. Longford, followed by Three Rock Mountain in South Co. Dublin. These sites were followed by Clermont Carn in Co. Louth and Holywell Hill in Co. Donegal in 1981. Since the analogue television switch-off on October 24, 2012, all digital terrestrial TV is on UHF only, although VHF allocations exist. The UHF band has been used in parts of Ireland for television deflector systems bringing British television signals to towns and rural areas that cannot receive those signals directly; however, since the introduction of free-to-air satellite transmission of UK TV channels, these deflectors have largely ceased operation.
In Japan, an Independent UHF Station (ja:全国独立UHF放送協議会, Zenkoku Dokuritsu Yū-eichi-efu Hōsō Kyōgi-kai, literally National Independent UHF Broadcasting Forum) is one of a loosely knit group of free commercial terrestrial television stations that is not a member of the major national networks keyed in Tokyo and Osaka.
Japan's original broadcasters were VHF. Although some experimental broadcasts were made as early as 1939, NHK (founded in 1926 as a radio network modeled on the BBC) began regular VHF television broadcasting in 1953. Its two terrestrial television services (NHK General TV and NHK Educational TV) appear on VHF 1 and 3, respectively, in the Tokyo region. Privately owned Japanese VHF TV stations were most often built by large national newspapers with Tokyo stations exerting a large degree of control over national programming.
The number of VHF broadcasters varied depending on the prefecture. For example, in the Kanto region, there were seven VHF channels available. Outside of Tokyo, Osaka, Nagoya, and Fukuoka, most prefectures had four privately-owned television stations, with three of them broadcasting on UHF. Almost all prefectures had at least one privately-owned VHF television station (except for Saga).
The independent stations broadcast in analogue UHF, unlike the major networks, which historically broadcast primarily in analogue VHF. The loose coalition of UHF independents is operated mostly by local governments or metropolitan newspapers with less outside control. Compared with major network stations, Japan's UHF independents have more restrictive programming acquisition budgets and lower average ratings; they are also more likely to broadcast single-episode or short-series UHF anime (many of which serve to promote DVDs or other product tie-ins) and brokered programming such as religion and infomercials.
Japanese terrestrial television was converted entirely to digital UHF starting in December 2003, with all analogue television signals (both VHF and UHF) being terminated between 2010 and 2012. The analogue translators in northeastern Ishikawa Prefecture were shut down as part of a technical trial on 24 July 2010; analogue signals in the rest of that prefecture and 43 other prefectures were terminated on 24 July 2011. The analogue transmitters in the prefectures of Iwate, Miyagi, and Fukushima were switched off on 31 March 2012.
UHF broadcasting was used outside Kuala Lumpur and the Klang Valley by private TV station TV3 in the late 1980s, with the government stations only transmitting in VHF (Bands 1 and 3) and the 450 MHz range being occupied by the ATUR cellular phone service operated by Telekom Malaysia. The ATUR service ceased operation in the late 1990s, freeing up the frequency for other uses. UHF was not commonly used in the Klang Valley until 1994 (despite TV3's signal also being available over UHF Channel 29, as TV3 transmitted over VHF Channel 12 in the Klang Valley). 1994 saw the introduction of the channel MetroVision (which ceased transmission in 1999, got bought over by TV3's parent company – System Televisyen Malaysia Berhad – and relaunched as 8TV in 2004). This was followed by Ntv7 in 1998 (also acquired by TV3's parent company in 2005) and recently Channel 9 (which started in 2003, ceased transmission in 2005, was also acquired by TV3's parent company shortly after, and came back as TV9 in early 2006). At current count, there are 6 distinct UHF signals receivable by an analog TV set in the Klang Valley: Channel 27 (8TV), Channel 29 (TV3 UHF transmission), Channel 37 (NTV7), Channel 42 (TV9), Channel 55 (TV Alhijrah) and Channel 39 (WBC). Channel 35 is usually allocated for VCRs, decoder units (i.e. the ASTRO and MiTV set top boxes) and other devices that have an RF signal generator (i.e. game consoles).
Refer to Australasian television frequencies for more information.
UHF broadcasting was introduced in the Philippines in the early 1960s, when FEN Philippines began broadcasts on channel 17 in Pampanga and Zambales (serving the Subic and Clark bases), on channel 43 in Bulacan, and on channel 50 in Metro Manila until 1991 (most of its programs and newscasts came from a satellite feed directly from the U.S. military bases in Japan), at which time Mount Pinatubo erupted and the station was abandoned. Commercial UHF stations began in May 1992, when DWCP-TV on channel 21 became the first local UHF TV station in Metro Manila, operated by the Southern Broadcasting Network as SBN-21 (later Talk TV) with free programming. The second channel, DWKC-TV (on channel 31) of the Radio Mindanao Network, was launched on October 31 of the same year as CTV-31 from 1992 to 2000 (then E! from 2000–03 and BEAM in 2011). The third channel, DZRJ-TV (channel 29), was launched in 1993 for the Rajah Broadcasting Network, Inc., which specializes in niche programming (mostly infomercials, foreign shows and cartoons). Two more channels, DWDB-TV (channel 27) of GMA Network, Inc. (Citynet Television from 1995–99 and EMC from 1999–2001) and DWAC-TV (channel 23) of ABS-CBN (as Studio 23), followed between August 27, 1995 and October 12, 1996 as the fourth and fifth UHF stations, and the sixth and last, DWDZ-TV (channel 47) of the Associated Broadcasting Company, launched in 1999 but went silent in 2003. UHF channels in Metro Manila were used as an alternative to cable television, offering free programming for households in the target markets, and became popular in the 1990s. Similarly, pay services were introduced in late 1992, when DWBC-TV on channel 68 began initial transmissions as a paid UHF station offering foreign programs not shown on local TV, commencing regular service in January 1993, but it was closed down as a result of intense competition from the rival Sky Cable. From 2001 to the present, more channels have been established, and regional stations in the provinces specialize in news, public service and free programming.
With the introduction of digital TV, all UHF channel frequencies will be reallocated and can serve broadcast companies such as ABS-CBN, GMA Network and TV5, among others, as the National Telecommunications Commission plans to migrate all VHF channels to digital UHF channels before December 31, 2015, though this was delayed until 2020 or 2023. Digital terrestrial television services are currently in development by the major broadcasting companies pending the passage of the Implementing Rules and Regulations (IRR) into law.
South Africa only received analog TV service in the 1970s. There were four TV channels: TV1 (now SABC1), TV2 (now SABC2), TV3 (now SABC3), and, later, Etv.
In the UK, UHF television began in 1964 following a plan by the General Post Office to allocate sets of frequencies for 625-line television to regions across the country, so as to accommodate four national networks with regional variations (the VHF allocations allowed for only two such networks using 405 lines). The UK UHF channels would range from 21 to 68 (later extended to 69) and regional allocations were in general grouped close together to allow for the use of aerials designed to receive a specific sub-band with greater efficiency than wider-band aerials could. Aerial manufacturers would therefore divide the band into overlapping groups: A (channels 21–34), B (39–53), C/D (48–68) and E (39–68). The first service to use UHF was BBC2 in 1964, followed by BBC1 and ITV (already broadcast on VHF) in 1969 and Channel 4/S4C in 1982. PAL colour was introduced on UHF only in 1967 (for BBC2) and 1969 (for BBC1 & ITV).
As a consequence of achieving maximum national coverage, signals from one region would typically overlap with those of another, which was accommodated by allocating a different set of channels in each adjacent area, often resulting in greater choice for viewers when a network in one region aired different programmes to the neighbouring region.
Initial uptake of UHF television was very slow: differing propagation characteristics between VHF and UHF meant new additional transmitters needed to be built, often at different locations from the then-established VHF sites, and in general with a larger number of relay stations to fill the greater number of gaps in coverage that came with the new band. This led to poor picture quality in bad coverage areas, and it took many years before the service achieved full national coverage. In addition, the only exclusively UHF service, BBC2, would run for only a few hours a day and carry alternative programming for minority audiences, in contrast to the more populist schedules of BBC1 and ITV. However, the 1970s saw a large increase in UHF TV viewing while VHF took a significant decline: the appeal of colour, which was never introduced on VHF (despite preliminary plans to do so in the late 1950s and early 1960s), and the fall in television prices saw most households using a UHF set by the end of that decade. With the second and last VHF television service having launched in 1955, VHF TV was finally decommissioned for good in 1985 with no plans for it to return to use.
The launch of Channel 5 in 1997 added a fifth national television network to UHF, requiring deviation from the original frequency allocation plan of the early 1960s and the allocation of UHF frequencies previously not used for television (such as UK Channels 35 and 37, previously reserved for RF modulators in devices such as domestic videocassette recorders, requiring an expensive VCR re-tuning programme funded by the new network). A lack of capacity within the band to accommodate a fifth service with the complex over-lapping led to the fifth and final network having a significantly reduced national coverage compared to the other networks, with reduced picture quality in many areas and the use of wide-band aerials often required.
The launch of digital terrestrial television in 1998 saw the continued use of UHF for television, with six multiplexes allocated for the service, all within the UHF band. However, analogue transmissions have been planned to cease completely by 2012, after which time it is uncertain whether the vacated capacity will be used for additional digital television services or put into alternative use, such as mobile telecommunications or internet services.
On December 29, 1949, KC2XAK of Bridgeport, Connecticut, became the first UHF television station to operate on a regular daily schedule. The first commercially licensed UHF television station was WWLP in Springfield, Massachusetts; however, the first commercially licensed UHF TV station on the air was KPTV, Channel 27, in Portland, Oregon, on September 18, 1952. This TV station used much of the equipment, including the transmitter, from KC2XAK.
American television broadcasting began experimentally in the 1930s, with regular commercial broadcasting in cities such as New York and Chicago in 1941. Bandwidth was originally allocated (by the Federal Communications Commission – the FCC) solely in the VHF (Very High Frequency) band. All VHF TV channels except channels 1 through 13 had been removed from the FCC allocation list during World War II and those frequencies re-allocated for military use, leaving thirteen channels as of May 1945. While efforts at TV broadcasting on any channel were drastically curtailed for the duration of WWII, due largely to lack of available receivers, the post-war era brought rapid expansion in the nascent broadcast television industry.
After VHF Channel 1 was re-allocated to land-mobile radio systems in 1948 due to radio-interference problems, one dozen TV channels remained (the VHF band covered channels 2 to 13 after this change). That amount was found to be insufficient during the latter 1940s and 1950s. For example, the following cities were never allocated any VHF-TV stations at all, due to technical reasons found by the FCC: Huntsville, Alabama; Fort Wayne, Indiana; South Bend, Indiana, Lexington, Kentucky; Springfield, Massachusetts; Youngstown, Ohio; Scranton/Wilkes-Barre, Pennsylvania; and Yakima, Washington. In addition, more cities were able to receive only one VHF broadcast station. Also, the entire state of New Jersey would receive only one VHF broadcast station of its own (which was to ultimately become WNET 13 Newark), leaving much of the state to be served from New York City or Philadelphia. Delaware also had only one VHF station. There were problems with an insufficient number of TV channels being available to cover all of the United States.
With 106 VHF stations broadcasting by the end of the 1940s in the U.S., interference arose due to overcrowding in densely populated areas such as the eastern mid-Atlantic states. In 1949, the Federal Communications Commission stopped accepting applications for new stations (a freeze that lasted until 1952) in order to address questions such as the allocation of additional channel frequencies, and also the selection of a color television standard.
Allocating more of the VHF band (30 to 300 MHz) by moving existing radio communication users off seemed to be impossible. FM radio broadcasting had already suffered a huge setback after a forced move from a 42–50 MHz allocation to an 88–108 MHz allocation in 1946. This had rendered all existing FM transmitters and receivers obsolete. Aeronautical radio is located above 108 MHz, and military aeronautical radio uses 225–400 MHz and was not easily moved. Public safety, commercial land-mobile, and amateur radio services also had allocations elsewhere in the VHF band. It was impractical and uneconomic to require these well-established users to move to other frequencies, such as the 300 MHz – 3 GHz UHF band.
The U.S. Army and Navy did not need to keep their wartime UHF spectrum allocation, simply because they had never used most of it. That allocation had been made in 1942 to support the war effort: no one then knew how much bandwidth the Army and the Navy might need for radar and radio communications, so the federal government allocated a huge amount of radio spectrum to the uniformed services, with adjustments to come later.
By 1950, expansion of TV channels into the UHF band had become inevitable, owing in large part to the development and improvement of radar (there are significant advantages to using shorter wavelengths, and hence higher frequencies, for radar). However, much UHF TV technology remained unproven at that time, and the question of which owners should retain the then more-valuable VHF TV channels remained hotly contested between competing interests.
To incumbent corporations such as the Radio Corporation of America and its National Broadcasting Company subsidiary, UHF TV and FM radio represented disruptive technologies: competition to their existing, long-established manufacturing and broadcast interests in VHF TV and AM radio. In the fall of 1944, the Columbia Broadcasting System pressed for a high-definition black-and-white system on the UHF band employing 750–1,000 scanning lines. It offered the possibility of higher-definition monochrome and color broadcasting, both of which were then precluded from the VHF band by their bandwidth demands; more significantly, it offered enough conventional 6 MHz channels to support the FCC's goal of a "truly nationwide and competitive service". CBS was not trying to maximize broadcast (or network) competition through freer market entry: its 16 MHz channels would have allowed only 27 UHF channels, versus the 82 channels possible under the standard 6 MHz bandwidth. CBS Vice President Adrian Murphy told the FCC, "I would say that it would be better to have two networks in color," instead of the four or more networks possible with narrower channels in UHF. To newer entrants into TV broadcasting, such as the DuMont Laboratories company and its fourth-ranked DuMont Television Network, the need for additional TV channels in major markets was urgent. For proponents of educational TV broadcasting, the difficulty of competing with commercial broadcasters for the increasingly scarce VHF channels was becoming a key problem.
Any attempt to pursue the objective of broadcast localism on the VHF-TV channels threatened in many regions to push the third-network TV companies such as the American Broadcasting Company onto stations in outlying communities, if they could be accommodated on VHF channels at all.
A key question in the FCC's allocation of TV channels was hence that of intermixture, licensing both VHF and UHF stations in a single city. To allocate four to as many as seven VHF channels to each of the largest cities would mean forcing the smaller, intervening cities completely onto the UHF channels, while an allocation scheme that sought to assign one or two VHF channels in each smaller city would force VHF and UHF stations to compete in most markets. (New York City, Washington-Baltimore, Los Angeles, and San Francisco received seven VHF stations apiece, and Chicago was allocated five, with the other two channels going to Milwaukee, Wisconsin and Rockford, Illinois.)
Hopes that UHF-TV would allow dozens of television stations in every media market were thwarted not only by poor image-frequency rejection in superheterodyne receivers using the standard 45.75 MHz intermediate frequency, but also by the very poor adjacent-channel rejection and selectivity of early tuner designs. UHF-TV stations in the same immediate area were therefore usually assigned channels at least six apart by the FCC, to compensate for inadequate receiver performance. Technical problems with the design of vacuum tubes for operation at high UHF frequencies were only beginning to be addressed in 1954. These shortcomings led to the "UHF taboos", which in effect limited each metropolitan area to only moderately more UHF stations than VHF ones, despite the much larger number of channels.
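As a rough illustration of where the image-frequency problem comes from, the back-of-the-envelope arithmetic below uses the standard 45.75 MHz picture IF and 6 MHz channel width mentioned above (a sketch only, not an FCC allocation rule): with the local oscillator above the desired signal, a simple superheterodyne's image response lands roughly fifteen channels higher, which is one reason nearby UHF channels could not all be used freely in the same area.

```python
IF_MHZ = 45.75            # standard picture intermediate frequency
CHANNEL_WIDTH_MHZ = 6.0   # North American TV channel width

# With the local oscillator above the desired signal, the image
# response lies 2 * IF above the tuned frequency.
image_offset_mhz = 2 * IF_MHZ
channels_away = image_offset_mhz / CHANNEL_WIDTH_MHZ
print(f"image response: {image_offset_mhz} MHz away, ~{channels_away:.1f} channels up")
# -> image response: 91.5 MHz away, ~15.2 channels up
```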
When the Freeze ended in 1952, the television industry grew from the 108 pre-Freeze stations to more than 530 by 1960. Many of the new stations were on the UHF band, even though UHF signals had nowhere near the coverage of their VHF competitors. The FCC tried to address this by allowing UHF stations to operate at higher power, but VHF retained the advantage, and advertisers, having caught on, did most of their business with VHF stations. In all, the FCC's intermixture effort failed. While the more-established broadcasters operated profitably on VHF channels as affiliates of the largest TV networks (at the time, NBC and CBS), most of the original UHF local stations of the 1950s soon went bankrupt, limited by the range of their signals, the lack of UHF tuners in most TV sets, and the paucity of advertisers willing to spend money on them. UHF stations fell quickly behind their VHF counterparts: UHF stations collectively lost $10,500,000 in 1953, more stations left the air than signed on, and sixty percent of industry losses from 1953 to 1956 were incurred by UHF stations. Network affiliations were difficult to obtain in many locations, and UHF stations with major-network affiliation often lost it to any viable new VHF station that entered the same market. Of the 82 new UHF-TV stations broadcasting in the United States as of June 1954, only 24 remained a year later. The fraction of new TV receivers factory-equipped with all-channel tuners dropped from 35% in early 1953 to 9% by 1958, a decline only partially offset by field upgrades or the availability of external UHF converters.
The majority of the 165 UHF stations that began telecasting between 1952 and 1959 did not survive. Under the All-Channel Receiver Act, passed in 1962, FCC regulations required all new TV sets sold in the U.S. after 1964 to have built-in UHF tuners capable of receiving channels 14–83. Even so, by 1971 only about 170 full-service UHF stations were in operation.
Independent and educational stations
In the United States, the UHF stations gained a reputation for local ownership, nonprofessional operations, small audiences and weaker signal propagation.
While UHF-TV has been available to American TV broadcasters since 1952, affiliates of the four major American TV networks (NBC, CBS, ABC, and DuMont) continued to transmit primarily on VHF wherever they were available. With the availability of the twelve VHF television channels limited by FCC spacing rules to avoid co-channel and adjacent channel interference between TV stations in the same or nearby cities, all available VHF-TV allocations were already in use in most large TV markets by the mid-1950s.
Two TV stations on the same channel needed to be 160 or more miles apart, and two TV stations on adjacent channels needed to be 60 or more miles apart. Exceptions to this rule occurred with VHF channels 4 and 5, and VHF channels 6 and 7, because additional "guard bands" between these two pairs are allocated to other uses. Thus, the channel pair 4 and 5 was found in New York City, Washington, D.C., St. Louis, Los Angeles, San Francisco, and many other places, including along the Canada–US border with channel 4 in Buffalo and channel 5 in Toronto. Likewise, the channel pair 6 and 7 was found in Denver and several other places.
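The spacing rule just described can be written as a small check. The sketch below encodes only the rule as stated in the text (160 miles co-channel, 60 miles adjacent channel, with the 4/5 and 6/7 guard-band exceptions); it is not the full FCC allocation procedure.

```python
GUARD_BAND_PAIRS = {(4, 5), (6, 7)}   # adjacent pairs separated by guard bands

def spacing_ok(chan_a, chan_b, distance_miles):
    """Check the co-channel / adjacent-channel spacing rule described above."""
    if chan_a == chan_b:                          # same channel
        return distance_miles >= 160
    if abs(chan_a - chan_b) == 1:                 # adjacent channels
        pair = tuple(sorted((chan_a, chan_b)))
        if pair in GUARD_BAND_PAIRS:              # e.g. 4 in Buffalo, 5 in Toronto
            return True
        return distance_miles >= 60
    return True                                   # non-adjacent: not covered by this rule

print(spacing_ok(4, 5, 0))     # True: the guard band allows close spacing
print(spacing_ok(7, 7, 100))   # False: co-channel stations need 160+ miles
```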
UHF stations in major population centers of the United States were usually either educational stations or independent TV stations. Others affiliated for a time with less-affluent networks that did not last long, such as the fourth-ranked DuMont Network, which operated from 1946 to 1956 before failing. The movie UHF (starring "Weird Al" Yankovic and Michael Richards) parodied the independent UHF station phenomenon; a fictional UHF station was also parodied in the 1980 film Pray TV.
Some significantly populated cities had few or no VHF stations. Their UHF stations lacked major network affiliations but nonetheless became sound businesses. Some of these stations were located in or near state capital cities or served nearby major rural regions, such as Montgomery, Alabama; Frankfort, Kentucky; Dover, Delaware; Lincoln, Nebraska; Topeka, Kansas; Jefferson City, Missouri; Lansing, Michigan; Harrisburg, Pennsylvania; Madison, Wisconsin; and Springfield, Illinois. In the United States, television stations in or near state capitals are important because they closely cover the operations of state government and spread that information to residents across the state.
TV antenna manufacturers often rated their top-of-the-line "deep-fringe" antenna models with phrases like "100 miles VHF/60 miles UHF" if the antenna included UHF reception at all. (In the practice of electrical engineering, the frequency range in which an antenna is to be used is an important factor in its design.)
TV set manufacturers often treated UHF tuners as extra-cost options until they became required. Various FCC attempts to protect UHF stations met with mixed results.
- Limits on the number of owned-and-operated stations controlled by one corporation were raised from five stations to seven, provided that two of them were UHF stations. Both NBC-TV (WBUF 17 Buffalo, WNBC 30 Hartford) and CBS-TV (WHCT 18 Hartford, WXIX 19 Milwaukee) acquired pairs of UHF stations as an experiment in the mid-1950s, only to abandon the stations in 1958–9. (NBC has since reacquired channel 30 in Hartford, now WVIT.) Their commercial network programming soon returned to VHF channel affiliates. WBUF's allocation on channel 17 was donated to the public-TV broadcaster WNED-TV, which now broadcasts as a Public Broadcasting Service station.
- The UHF television impact policy (1960–1988) allowed applications for new VHF TV stations to be opposed in cases where licensure could lead to the economic failure of an existing UHF TV broadcaster.
- The secondary affiliation rule (1971–1995) prohibited a network entering a market with two existing VHF TV network affiliates and one UHF independent TV station from placing its programs on a secondary basis on one or both VHF stations without offering them to the UHF station.
- Limits on UHF effective radiated power, originally very restrictive, were relaxed. A UHF TV station could ultimately be licensed for up to five megawatts of effective radiated power, whereas VHF TV stations were limited to 100 kilowatts (channels 2–6) or 316 kilowatts (channels 7–13), depending on their channel.
- More recent limits on station ownership are based on the combined percentage of the American population (originally 35% maximum, now increased to 45%) reached by one group of stations under common ownership. A UHF discount, by which only half of the audience of a UHF station would be counted against these limits, would ultimately allow groups such as PAX to reach the majority of the American audience using owned-and-operated UHF stations.
The situation began to improve in the 1960s and 1970s, but progress was slow and difficult.
The original SIN (Spanish International Network) was established in 1962 as the predecessor of the modern Univision network. It was built primarily by UHF stations, such as KWEX-TV, Channel 41 in San Antonio and KMEX-TV, Channel 34 in Los Angeles.
Fourth networks, satellite and cable television
In 1970, Ted Turner acquired a struggling independent station on Channel 17 in Atlanta, Georgia, and purchased reruns of popular television shows, the Atlanta Braves baseball team, and the Atlanta Hawks basketball team.
This station, renamed WTBS, was uplinked in 1976 to satellite alongside new premium channels such as HBO, gaining access to distant cable television markets and becoming the first of various superstations to obtain national coverage. In 1986 Turner purchased the entire MGM film library. Turner Broadcasting System's access to movie rights proved commercially valuable as home video cassette rental became ubiquitous in the 1980s.
In 1986, Metromedia, the station group descended from DuMont's owned-and-operated stations, was acquired by News Corporation and used as the foundation to relaunch a fourth commercial network, Fox, which obtained affiliations with many former big-city independent stations.
Fox initially combined former independents and UHF stations, but it had the large programming budgets that the original DuMont had lacked. Ultimately, it was able in some markets to draw existing VHF affiliates away from the established Big Three networks, outbidding CBS for National Football Conference rights in 1994 and attracting many of that network's affiliates. Various smaller networks were created with the intent of following in its footsteps, often by affiliating with a disparate collection of formerly independent UHF stations that otherwise would have carried no network programming.
By 1994, New World Communications was moving its established stations from CBS to Fox affiliations in multiple markets, including WJBK-TV 2 in Detroit. In many cases, this pushed CBS onto UHF; "U-62", the new home of CBS in Detroit, became the CBS owned-and-operated station WWJ-TV in 1995 and gained access to audiences thousands of miles away through satellite and cable television.
The concentration of media ownership, the proliferation of cable and satellite television, and the digital television transition all helped equalize the quality of VHF and UHF broadcasts. The distinction between UHF and VHF characteristics declined in importance with the emergence of additional broadcast television networks (Fox, The CW, MyNetworkTV, Univision, Telemundo, and ION) and the decline of direct over-the-air reception. The number of major large-city independent stations also declined as many joined or formed new networks.
The majority of digital TV stations currently broadcast in the UHF band, both because VHF was already largely filled with analog TV when the digital facilities were built and because of severe problems with impulse noise on the low-VHF digital channels. While virtual channel numbering schemes routinely display channel numbers like "2.1" or "6.1" for North American terrestrial HDTV broadcasts, these are more often than not actually UHF signals. Many equipment vendors therefore use "HDTV antenna" or similar branding as all but synonymous with "UHF antenna".
Terrestrial digital television relies on forward error correction, in which a channel is assumed to have a random bit error rate and additional data bits are sent so that errors can be corrected at the receiver. This error correction works well in the UHF band, where the interference consists largely of white noise, but it has proven largely inadequate on the low-VHF channels, where bursts of impulse noise disrupt the entire channel for short periods. A short impulse-noise burst might be a minor annoyance to analog TV viewers, and is usually recoverable because of the fixed timing and repetitive nature of analog video synchronization; the same interference can be severe enough to prevent reliable reception of the more fragile, highly compressed ATSC digital signal. Power limits are also lower on low-VHF, whereas a digital UHF station may be licensed to transmit up to a megawatt of effective radiated power. Very few stations returned to VHF channels 2–6 after the transition was completed in 2009; those that did were mainly concentrated in the Desert Southwest and Mountain West, where there are few geographical obstructions and adjoining co-channel stations. At least three quarters of all full-power digital broadcasts continued to use UHF transmitters, with most of the rest on the high-VHF channels. In some American markets, such as Syracuse, New York, no full-service VHF stations remained.
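The contrast between scattered random errors and impulse-noise bursts can be seen with even a toy error-correcting code. The sketch below uses a simple 3-repetition code with majority-vote decoding purely for illustration; it is not the coding actually used by ATSC, which is far more elaborate, but it shows why a code built around isolated random errors can fail when a burst corrupts many consecutive bits.

```python
import random

def encode(bits, r=3):
    # Repeat every bit r times (a toy forward-error-correction code).
    return [b for b in bits for _ in range(r)]

def decode(coded, r=3):
    # Majority vote over each group of r repeated bits.
    return [1 if sum(coded[i:i + r]) > r / 2 else 0
            for i in range(0, len(coded), r)]

random.seed(1)
message = [random.randint(0, 1) for _ in range(32)]
coded = encode(message)

# Case 1: scattered, isolated bit errors (white-noise-like channel).
noisy = coded[:]
for g in random.sample(range(len(message)), 10):
    noisy[3 * g] ^= 1                 # at most one error per 3-bit group
print(decode(noisy) == message)       # True: each isolated error is outvoted

# Case 2: an impulse-noise burst flipping a contiguous run of bits.
burst = coded[:]
for i in range(30, 45):               # 15 consecutive corrupted bits
    burst[i] ^= 1
print(decode(burst) == message)       # False: whole code groups are wiped out
```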
The one remaining limitation of UHF is its greatly reduced range in the presence of terrain obstacles. This continues to adversely affect digital UHF TV reception. This limitation could potentially be overcome by the use of a distributed transmission system. Multiple digital UHF transmitters in carefully selected locations can be synchronized as a single-frequency network to produce a tailored coverage area pattern rivaling that of a single full-power VHF transmitter.
Because UHF was inferior to VHF for analog television broadcasting, the FCC counts only half of a UHF station's audience toward its national market-share cap of 39%, a policy known as the UHF discount. The rule was briefly removed in September 2016, with the FCC arguing that it had become obsolete, since almost all digital television channels are now on the UHF band, and that broadcasters were abusing it as a loophole to increase their market share. However, in April 2017, under new FCC chairman Ajit Pai, the discount was reinstated.
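A quick sketch of the arithmetic behind the discount follows; the reach figures are hypothetical, chosen only to show how a group made up largely of UHF stations can stay under the 39% cap once half of its UHF audience is excluded from the count.

```python
def national_reach(uhf_pct, vhf_pct, uhf_discount=True):
    """Combined national reach of a hypothetical station group, given the total
    percentage of U.S. TV households reached by its UHF and its VHF stations.
    With the UHF discount, only half of the UHF audience counts toward the cap."""
    uhf_weight = 0.5 if uhf_discount else 1.0
    return uhf_weight * uhf_pct + vhf_pct

CAP = 39.0                 # national audience-reach cap (percent)
uhf, vhf = 62.0, 4.0       # hypothetical, mostly-UHF station group

with_discount = national_reach(uhf, vhf)                         # 35.0 -> under the cap
without_discount = national_reach(uhf, vhf, uhf_discount=False)  # 66.0 -> far over it
print(with_discount <= CAP, without_discount <= CAP)             # True False
```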
One notable exception to historical patterns favoring VHF broadcasters has existed in television markets that could not qualify for their own VHF stations because they were sandwiched between the outer fringes of VHF stations in two or more larger markets. Such cities received only UHF licenses.
With all stations (including network affiliates) on UHF, all-channel receivers and antennas became commonplace locally, and UHF stations signing on as early as 1953 were often able to obtain the programming and audiences needed to remain viable into the modern era.
These communities, known as UHF islands, included cities like Youngstown, Ohio; Tri-Cities, Washington; Springfield, Massachusetts; Elmira, New York; South Bend, Indiana; Fort Wayne, Indiana; Peoria, Illinois; Huntsville, Alabama; Salisbury, Maryland; Lexington, Kentucky; and Scranton, Pennsylvania. Other smaller cities such as Madison, Wisconsin; Fresno, California; Fort Myers, Florida; Mankato, Minnesota; Watertown, New York; Erie, Pennsylvania; Columbia, South Carolina; and Harrisburg, Pennsylvania only received one VHF license, meaning that any additional programming would need to be provided either by UHF, by distant stations, or by low-power broadcasting.
The most common cause of these UHF islands was the assignment of more than three VHF stations to the surrounding large markets; another cause was the digital switchover.
Broadcast translators and low-power television
Very small UHF TV transmitters continue to operate with no programming or commercial identity of their own, instead retransmitting the signals of existing full-power stations to small areas poorly covered by the originating station's main signal. Such transmitters are called "translators" rather than "stations". The smallest, owned by local municipal-level groups or by the originating TV stations, are identified sequentially: W or K, followed by the channel number, followed by two sequentially issued letters, yielding "translator callsigns" in a generic format running from K14AA through W69ZZ. Translators and repeaters also exist on VHF channels, but infrequently and with stringently limited power.
The translator band, UHF TV channels 70–83, consisted mostly of these small repeaters; it was removed from television use in 1983 with the tiny repeaters moved primarily to lower UHF channels. The 806–890 MHz band segment is now used primarily by mobile phones. Many of these transmitters, if still in operation, were moved again in 2011 as UHF channels 52–69 were lost primarily to mobile telephony during the DTV transition.
As improvements to originating stations lessened the need for these translators, the small transmitter facilities and their allocated frequencies were often repurposed for low-power broadcasting; instead of repeating a distant signal, the tiny transmitter would originate programming for a small local area.
- "A Guide to UHF Television Reception". University of Indiana.
- "Choosing a mounting site". HDTVPrimer.
- "Why Is There No Channel 37?". History of UHF Television.
- Jessell, Harry (26 June 2009). "VHF: Now Everything You Know Is Wrong". TVNewsCheck.
- "DTV Post-Transition Allotment Plan" (PDF). Industry Canada. December 2008.
- "Digital Audio Broadcasting".
- Boddy, William (1993). Fifties Television: The Industry and Its Critics. University of Illinois Press. ISBN 978-0-252-06299-5.
- Slotten, Hugh R. (27 September 2000). Radio and Television Regulation: Broadcast Technology in the United States, 1920–1960. JHU Press. ISBN 978-0-8018-6450-6.
- "Missed Opportunities: FCC Commissioner Frieda Hennock and the UHF Debacle", Susan L. Brinson, Journal of Broadcasting & Electronic Media, Spring 2000
- "VALVES AT UHF: A REVIEW OF RECENT DEVELOPMENTS", S. Simpson, Practical Television magazine, March 1954.
- "The Superheterodyne Concept and Reception". Charles W. Rhodes, TV Technology, July 20, 2005.
- Sterling, C. H., & Kittross, J. M. (1990). Stay Tuned: A Concise History of American Broadcasting (2nd ed.). Belmont, CA: Wadsworth.
- "Tulsa TV history thesis, Chapter 3 (KCEB)".
- Stay Tuned: A History of American Broadcasting; pp. 387–388; Christopher H. Sterling, John M. Kittross; Erlbaum, 2002; ISBN 978-0-8058-2624-1
- "Index of /articles".
- U-62 program schedule, July 1989
- Buffalo Broadcasters: History – UHF
- Alexander, Alison (2004). Media Economics: Theory and Practice. Lawrence Erlbaum. ISBN 978-0-8058-4580-8.
- "FCC order revoking secondary affiliation rule". FCC. 1995.
- Rothenberger, Cecilia (April 30, 2004). "THE UHF DISCOUNT: SHORTCHANGING THE PUBLIC INTEREST" (PDF).
- "DuMont Television Network - Historical Web Site".
- Flint, Joe (September 26, 2013). "FCC proposes eliminating UHF discount from TV ownership rules". Los Angeles Times. Retrieved 19 January 2014.
- "FCC Proposes Elimination Of UHF Discount". TVNewsCheck. September 26, 2013.
- "Regulators Tighten TV-Station Ownership Curb by Cutting Discount". Bloomberg. Retrieved 22 April 2017.
- "FCC Takes Lid Off National Station Ownership". TVNewsCheck. Retrieved 20 April 2017.
- "FCC Eases Media Ownership Restrictions in Vote to Restore UHF Discount". Variety. Retrieved 22 April 2017.
- WSJV 28 South Bend, Indiana history indicates station founded 1954, still extant as no VHF channels available due to proximity to Chicago. |
CHAPTER 05.03: NEWTON DIVIDED DIFFERENCE METHOD: Newton's Divided Difference Polynomial: Linear Interpolation: Example
In this segment, we're going to take an example for Newton's divided difference polynomial method, and we're going to take the example for linear interpolation. So how do we do linear interpolation using Newton's divided difference polynomial method? Let's suppose somebody is giving you velocity as a function of time. Time is given in seconds and velocity in meters per second for this rocket; the upward velocity is given at 0, 10, 15, 20, 22.5, and 30 seconds: it is 0 at time 0, 362.78 at 15, 517.35 at 20, 602.97 at 22.5, and 901.67 at 30. So these are the numbers which are given to us, and we are asked to do linear interpolation to find the value of the velocity at 16 seconds. We want to use linear interpolation with Newton's divided difference polynomial method: we are given six data points, and we want to find the value of the velocity at 16 seconds. As we discussed when deriving Newton's divided difference polynomial, rather than posing the problem as "given two points, do the linear interpolation," we say "choose two data points," because in most cases you won't have just two data points but many, so you have to come up with a scheme to choose them. For linear interpolation it's pretty simple: the point at which you want the intermediate value has to lie between the two chosen data points, and they should be the closest ones to it. Since 16 is between 15 and 20, and those are also the two closest t values we have, we choose 15 and 20. We're not choosing 10 and 20, for example, because 15 is closer to 16 than 10 is. So you have to find the two data values which immediately bracket the point where you want the velocity, here 16. I also want to emphasize that although the data points given here are in ascending order of time, going from 0 to 30, Newton's divided difference polynomial method, the direct method, and Lagrangian interpolation do not require the data points to be in any kind of ascending or descending order. It's extremely important to realize that; a lot of people think otherwise. You do have to put them in ascending order if you are doing something called spline interpolation. So let's go ahead and see how we're going to apply Newton's divided difference polynomial method to this particular problem. We know that the velocity v1(t), where the subscript 1 stands for the first-order polynomial, has the form b0 plus b1 times (t minus t0), where b0 is the value of the function at the first data point and b1 is the slope between the two data points t0 and t1. In our case, t0 is 15, and the value of the velocity at t0, which is one of the data points given to us, is 362.78 meters per second. The value of the velocity at t1 = 20 is given to us as 517.35 meters per second. Based on this information, we should be able to find b0 and b1.
So b0 is nothing but the velocity at t0, which is 362.78, and b1 is the velocity at t1 minus the velocity at t0, divided by t1 minus t0. That is, v(t1) is 517.35, the velocity at t0 is 362.78, and we divide by t1 minus t0, which is 20 minus 15. Based on this we find b1 to be equal to 30.914. Now that we have the value of b1 and the value of b0, we can write down the first-order polynomial, which, as we said, is of the form b0 plus b1 times (t minus t0): 362.78 plus 30.914 times (t minus 15). This is the first-order polynomial, and it is valid between 15 and 20. I want to emphasize another point here: whenever you write down interpolants found by any of these methods, you should give the domain in which the interpolant is valid. Since we used 15 and 20 as the two data points, the interpolant we just found is valid between times 15 and 20. What we are interested in, however, is the approximate value of the velocity at 16, which we obtain by simply substituting 16 in here: 362.78 plus 30.914 times (16 minus 15), and this turns out to be equal to 393.69 meters per second. That's our approximation. One thing I do want to mention is that this first-order polynomial is the same one you would have obtained by using the direct method. If I expand this polynomial, simply multiplying through, I get v1(t) = -100.93 plus 30.914 t, and this is the same interpolant I would have obtained with the direct method. There's no difference, because a polynomial of a given order through the same points has to be unique; just the form is different. The reason we write it in this particular form is that calculating the values of b0 and b1 does not require us to solve simultaneous linear equations. And that's the end of this example.
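The computation in this example is short enough to check in a few lines of code. The sketch below (Python, used here only for illustration) reproduces the transcript's numbers: b1 and the interpolated velocity at t = 16 seconds.

```python
def newton_linear(t0, v0, t1, v1, t):
    """First-order Newton divided-difference interpolant:
    v1(t) = b0 + b1 * (t - t0), with b0 = v(t0) and b1 the divided difference."""
    b0 = v0
    b1 = (v1 - v0) / (t1 - t0)
    return b0 + b1 * (t - t0)

# The two data points that bracket t = 16 s in the example.
t0, v0 = 15.0, 362.78
t1, v1 = 20.0, 517.35

b1 = (v1 - v0) / (t1 - t0)
print(round(b1, 3))                                   # 30.914
print(round(newton_linear(t0, v0, t1, v1, 16.0), 2))  # 393.69 m/s
```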
This ScienceLives article was provided to LiveScience in partnership with the National Science Foundation.
The sexy zip of a male hummingbird as it dives is made not by its voice, but with its special tail feathers during its courtship dance. Christopher Clark, now a postdoctoral researcher at the Peabody Museum of Natural History at Yale University, discovered these unique tail sounds.
After getting his undergraduate degree at Washington State University in 2001, he attended graduate school at the University of Texas in Austin and then the University of California, Berkeley. His graduate work focused on the roles of sexual selection and flight performance in shaping hummingbird tail morphology.
The most acclaimed part of his dissertation was his 2008 paper "The Anna's hummingbird chirps with its tail" which made headlines and launched the current phase of his scientific career. In that paper, he demonstrated that the Anna's Hummingbird (Calypte anna) makes loud sounds with its tail-feathers during its courtship display, rather than vocally, as was previously believed. Watch this video to learn more.
After completing his Ph.D. in 2009, Clark and his present advisor at the Peabody Museum, Richard Prum, were awarded a National Science Foundation grant to delve into the physics of the sounds that feathers make. In the past two years, Clark has traveled extensively in Latin America to record the courtship displays of sheartails, woodstars, and other poorly studied hummingbird species — nearly all of which produce distinctive sounds with tail-feathers. In the lab, Clark uses a wind tunnel to get feathers to reproduce the sounds the birds make in flight. The wind tunnel allows him to study how feathers produce sounds over a range of air speeds.
Name: Christopher J. Clark
Age: 32
Institution: Peabody Museum of Natural History, Yale University
Field of Study: Hummingbird courtship displays and acoustics of animal flight
What inspired you to choose this field of study?
At age 22, I was sitting at a diner in Idaho, watching calliope Hummingbirds visit a feeder that was inches from my face. I thought to myself, "hummingbirds are really cool! And they'd be easy to catch, which would make them easy to study!" If you decide to study a particular animal, then you have to make sure that your research question fits the animal well. Hummingbirds have unparalleled flight abilities and they're not afraid to show off, so studying their flight was the best fit.
The current project arose when I figured out that hummingbirds were making loud sounds with their tail-feathers during courtship displays. This isn't the "humming" sound that they're famous for, but rather, these were sounds that many people thought were vocal. It turns out that by putting the feathers in a wind tunnel, these non-vocal sounds were really easy to reproduce and study. So that's how I find myself studying the sounds birds make when they fly.
What is the best piece of advice you ever received? "Good judgment comes from experience. And most of that comes from bad judgment."
One lesson here is that you have to let yourself make mistakes in order to learn. If you don't try an experiment because you're afraid it won't work, then you're not going to do your best science. Make mistakes, and learn from them. I spent my entire first field season failing to get Anna's hummingbirds to perform displays. Once I finally figured out a series of tricks (patience and a willingness to sit on the ground for hours being the most important), it was so exciting to finally get some data! It's important to recognize when you've used bad judgment — and to fix it.
What was your first scientific experiment as a child? I didn't experiment — I observed. When I was in cub scouts (age 11), I made a birdhouse that we hung outside the window by my bed. A pair of black-capped chickadees nested in it for the next couple of years. They would land on the wire about two feet from the window, before and after going to the box. They couldn't see through the window, but I could see and hear them really well (and hear the babies) when I lay in bed, so I watched them for hours. I figured out which one was female — she would shiver her wings and the male would feed her. Later when they had babies, they would bring green caterpillars from the Douglas firs nearby. I counted the feedings for a few hours; they fed their babies about every four minutes, for the whole day.
What is your favorite thing about being a researcher? I have two favorite things. One is when I do an experiment that has clear results, and not at all what I had expected. Male Anna's hummingbirds make this loud CHIRP when they perform a courtship dive to a female. It sounds like a vocalization, like a bird sitting there going, "chirp, chirp," except he's diving at high speed when he does it. So I wanted to test whether the tail makes this sound, by finding a male who could make the sound, then catching him and removing his outer tail feathers and getting him to dive again. I fully expected the null result, i.e. that the bird would still make the loud CHIRP when missing two tiny feathers. I had told people that the sound was vocal, and I was testing just to make sure. He rose up, up, up, then dove... and whiff, he didn't make the sound! 10 times in a row he failed! I was astonished. I spent the rest of the day with my head in a cloud, thinking about what this meant. I was already thinking about all of the other species I had to study — but of course, it also meant that I had to repeat the experiment a few more times, in order to convince other scientists that my result was real.
My other favorite thing is going to look for a poorly known hummingbird in a remote place, and seeing its courtship display for the first time. A famous ornithologist, James Van Remsen, once offhandedly mentioned, "Only a fool would study the woodstars," because of how hard they are to find. I have to do my homework for these trips, and it takes some mighty sleuthing; I read ornithological books for hints (such as what time of year they might breed), and I talk to other ornithologists and birders for clues. Sometimes finding the displays works through sheer luck. I've been really successful at it — I have some truly amazing displays that I've seen, and I will eventually get them up on YouTube. It turns out feathers make a fantastic array of sounds!
What is the most important characteristic a researcher must demonstrate in order to be an effective researcher? Stubbornness, hands down. Science is really hard sometimes, and you have to stick with it. Grad school is hard. Designing a good experiment is hard. Getting collecting permits is hard. Getting funding is hard. Finding your animal is hard. Catching your animal is hard. Running an experiment is hard. Repeating your experiment for what feels like the thousandth time is hard. Dealing with stubborn collaborators is hard. Analyzing your data is hard. Having a paper rejected is hard. There are so many things that can trip up your research, and stubbornness will help you past those obstacles better than any other attribute. It's a myth that only "smart" people can do research. Actually, anyone can do research, but you have to be persistent to succeed.
What are the societal benefits of your research?
Ben Franklin was at a demonstration of a new invention, the hot-air balloon. Someone nearby asked: "It's nice, but what's it good for?"
Ben's legendary reply: "What good is a newborn baby?"
Of course, today, hot-air balloons have grown up, and have many uses that were unanticipated in 1783. Almost all current scientific research is the same: we don't know exactly what it's good for. For my research I can come up with some plausible answers: maybe we'll invent a new type of useful noise-maker that flutters like a feather (feather whistles, anyone?). But honestly, these are pretty uncertain. I don't really know what good this baby is. I think the biggest immediate benefit of my research is to increase the public's awe at how fantastic the natural world is, and how much basic, everyday stuff is still unknown! You have to get people to appreciate nature before you can convince them that it should be preserved.
Who has had the most influence on your thinking as a researcher? This is tough, there are so many! A beloved fraction of what I do is 'natural history,' which is the observation of how organisms live their lives in their natural environments. Anytime I'm outside I make observations, even if it's tangential to my main research purpose. Alexander Skutch was a terrific natural historian, and so much of what we know about tropical birds stems from what he observed. Skutch became an ornithological legend through purely observational study, and in his writing he was especially good at making clear the difference between his observations (i.e., data) and his interpretation of his observations (which is theory). I think modern biology sometimes over-emphasizes the role of hypotheses and theory — I'd never put in an NSF proposal that I wanted to study hummingbird natural history, because it's not fundable. But it's important to remember that science begins with careful observation; the hypotheses come later. So when I'm writing about hummingbirds, I try to emulate Skutch and make the difference between my data, and my interpretation, clear.
What about your field or being a researcher do you think would surprise people the most? People are often surprised to learn that these hummingbird courtship displays have not already been studied. It was first proposed in 1897 that hummingbirds make these sounds with their tail-feathers. It hung there like a low-hanging fruit on a tree, until I picked it in 2008! Even really common species, like the Ruby-throated Hummingbird, mostly have a poorly known natural history. So it's actually easy to make new discoveries about them, using nothing more than a notebook, binoculars, a camera and patience. While I love going to remote places like the Atacama Desert or Big Bend National Park to discover the birds there, I did the original experiments on the Anna's hummingbird by riding my bicycle to an old landfill near my house. Anyone who puts their mind to it can discover new things about the animals and plants living right in their back yard. That astonishes people!
If you could only rescue one thing from your burning office or lab, what would it be? My box of hummingbird feathers. I have obtained them over the past eight years of research at a couple dozen field sites in several countries. The collecting permits are hard to get, as are the CITES export permits. The birds are hard to catch. For insurance purposes I would probably claim a value well over $100,000, which mostly reflects the time it would take to re-acquire a similar set of feathers. The truth is, they're literally priceless, since it's illegal to buy or sell them.
What music do you play most often in your lab or car? I love the band Cake. My favorite song of theirs is "Federal Funding."
Editor's Note: The researchers depicted in ScienceLives articles have been supported by the National Science Foundation, the federal agency charged with funding basic research and education across all fields of science and engineering. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. See the ScienceLives archive.
Foraging is searching for wild food resources. It affects an animal's fitness because it plays an important role in an animal's ability to survive and reproduce. Foraging theory is a branch of behavioral ecology that studies the foraging behavior of animals in response to the environment where the animal lives.
Behavioral ecologists use economic models to understand foraging; many of these models are a type of optimality model. Foraging theory is thus discussed in terms of optimizing the payoff from a foraging decision. In many of these models the payoff is the amount of energy an animal receives per unit time, more specifically, the highest ratio of energetic gain to cost while foraging. Foraging theory predicts that the decisions that maximize energy per unit time, and thus deliver the highest payoff, will be selected for and will persist. Key terms used to describe foraging behavior include: resources, the elements necessary for survival and reproduction that are in limited supply; predator, any organism that consumes others; prey, an organism that is eaten in part or in whole by another; and patch, a concentration of resources.
Behavioral ecologists first tackled this topic in the 1960s and 1970s. Their goal was to quantify and formalize a set of models to test their null hypothesis that animals forage randomly. Important contributions to foraging theory have been made by:
- Eric Charnov, who developed the marginal value theorem to predict the behavior of foragers using patches;
- Sir John Krebs, with work on the optimal diet model in relation to tits and chickadees;
- John Goss-Custard, who first tested the optimal diet model against behavior in the field, using redshank, and then proceeded to an extensive study of foraging in the common pied oystercatcher.
Factors influencing foraging behavior
Several factors affect an animal's ability to forage and acquire profitable resources.
Learning is defined as an adaptive change or modification of a behavior based on a previous experience. Since an animal's environment is constantly changing, the ability to adjust foraging behavior is essential for maximization of fitness. Studies in social insects have shown that there is a significant correlation between learning and foraging performance.
In nonhuman primates, young individuals learn foraging behavior from their peers and elders by watching other group members forage and by copying their behavior. Observing and learning from other members of the group ensure that the younger members of the group learn what is safe to eat and become proficient foragers.
One measure of learning is 'foraging innovation': an animal consuming new food, or using a new foraging technique, in response to its dynamic living environment. Foraging innovation is considered learning because it involves behavioral plasticity on the animal's part: the animal recognizes the need to come up with a new foraging strategy and introduces something it has never used before to maximize its fitness (survival). Forebrain size has been associated with learning behavior; animals with larger brains are expected to learn better. A higher ability to innovate has been linked to larger forebrain sizes in North American and British Isle birds by Lefebvre et al. (1997): bird orders containing individuals with larger forebrains displayed more foraging innovation. Examples of innovations recorded in birds include following tractors and eating the frogs and insects they kill, and using swaying trees to catch prey.
Another measure of learning is spatio-temporal learning (also called time-place learning), which refers to an individual's ability to associate the time of an event with the place of that event. This type of learning has been documented in the foraging behavior of the stingless bee species Trigona fulviventris. Studies showed that T. fulviventris individuals learned the locations and times of feeding events, and arrived at those locations up to thirty minutes before the feeding event in anticipation of the food reward.
Foraging behavior can also be influenced by genetics. The genes associated with foraging behavior have been widely studied in honeybees with reference to the onset of foraging behavior, the division of tasks between foragers and workers, and bias in foraging for either pollen or nectar. Honey bee foraging activity occurs both inside and outside the hive, for either pollen or nectar; similar behavior is seen in many social wasps, such as the species Apoica flavissima. Studies using quantitative trait loci (QTL) mapping have associated loci with these functions: Pln-1 and Pln-4 with the onset of foraging age, Pln-1 and Pln-2 with the size of the pollen loads collected by workers, and Pln-2 and Pln-3 with the sugar concentration of the nectar collected.
Presence of predators
The presence of predators while a (prey) animal is foraging affects its behavior. In general, foragers balance the risk of predation against their needs, thus deviating from the foraging behavior that would be expected in the absence of predators. An example of this balanced risk can be observed in the foraging behavior of A. longimana.
Similarly, parasitism can affect the way in which animals forage. Parasitism can affect foraging at several levels. Animals might simply avoid food items that increase their risk of being parasitized, as when the prey items are intermediate hosts of parasites. Animals might also avoid areas that would expose them to a high risk of parasitism. Finally, animals might effectively self-medicate, either prophylactically or therapeutically.
Types of foraging
Foraging can be categorized into two main types. The first is solitary foraging, when animals forage by themselves. The second is group foraging, which covers both cases in which animals forage together because it is beneficial for them to do so (called an aggregation economy) and cases in which they forage together even though it is detrimental for them to do so (called a dispersion economy).
Solitary foraging includes the variety of foraging in which animals find, capture and consume their prey alone. Individuals can exploit patches manually or use tools to exploit their prey; for example, bolas spiders attack their prey by luring them with a scent identical to the sex pheromones of female moths. Animals may choose to forage on their own when resources are abundant, which can occur when the habitat is rich or when the number of conspecifics foraging nearby is small. In these cases there may be no need for group foraging. In addition, foraging alone can mean less interaction with other foragers, which reduces competition and dominance interactions, and it makes a solitary forager less conspicuous to predators. Solitary foraging strategies characterize many of the phocids (the true seals), such as elephant and harbor seals. An example of an exclusively solitary forager is the South American harvester ant Pogonomyrmex vermiculatus.
Tool use in solitary foraging
Some examples of tool use include dolphins using sponges to feed on fish that bury themselves in the sediment, New Caledonian crows that use sticks to get larvae out of trees, and chimpanzees that similarly use sticks to capture and consume termites.
Solitary foraging and optimal foraging theory
The theory scientists use to understand solitary foraging is called optimal foraging theory. Optimal foraging theory (OFT) was first proposed in 1966, in two papers published independently, by Robert MacArthur and Eric Pianka, and by J. Merritt Emlen. This theory argues that because of the key importance of successful foraging to an individual's survival, it should be possible to predict foraging behavior by using decision theory to determine the behavior that an "optimal forager" would exhibit. Such a forager has perfect knowledge of what to do to maximize usable food intake. While the behavior of real animals inevitably departs from that of the optimal forager, optimal foraging theory has proved very useful in developing hypotheses for describing real foraging behavior. Departures from optimality often help to identify constraints either in the animal's behavioral or cognitive repertoire, or in the environment, that had not previously been suspected. With those constraints identified, foraging behavior often does approach the optimal pattern even if it is not identical to it. In other words, we know from optimal foraging theory that animals are not foraging randomly even if their behavior doesn't perfectly match what is predicted by OFT.
Versions of OFT
There are many versions of optimal foraging theory that are relevant to different foraging situations. According to Stephens et al. (2007), these models generally have the following components:
- Currency: an objective function, what we want to maximize, in this case energy over time as a currency of fitness
- Decision: set of choices under the organism's control, or the decisions that the organism exhibits
- Constraints: "an organism's choices are constrained by genetics, physiology, neurology, morphology and the laws of chemistry and physics"
Some of these versions include:
The optimal diet model analyzes the behavior of a forager that encounters different types of prey and must choose which to attack. Also known as the prey model or the attack model, it considers a predator that encounters prey items and decides whether to spend time handling and eating each one. It predicts that foragers should ignore low-profitability prey items when more profitable items are present and abundant. The objective of the model is to identify the choice that maximizes fitness. How profitable a prey item is depends on ecological variables such as the time required to find, capture, and consume it, in addition to the energy it provides. An individual is likely to settle for a trade-off between maximizing its intake rate while eating and minimizing the search interval between prey.
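As a sketch of how the zero-one prey-choice rule falls out of this model, the code below ranks prey types by profitability (energy per handling time) and adds types to the diet only while the next type's profitability exceeds the long-term rate already achievable. The encounter rates, energy values and handling times are hypothetical, and the model's usual assumptions (sequential random encounters, complete information) are taken for granted.

```python
def optimal_diet(prey):
    """prey: list of (encounter_rate, energy, handling_time) tuples.
    Returns the prey types in the rate-maximizing diet and the resulting rate."""
    # Rank prey types by profitability e / h, best first.
    ranked = sorted(prey, key=lambda p: p[1] / p[2], reverse=True)
    diet, num, den = [], 0.0, 1.0
    for lam, e, h in ranked:
        rate_without = num / den
        if e / h > rate_without:       # include a type only if its profitability
            diet.append((lam, e, h))   # beats the rate achievable without it
            num += lam * e
            den += lam * h
        else:
            break                      # all remaining types are even less profitable
    return diet, num / den

# Hypothetical prey types: (encounters per minute, kJ per item, minutes of handling)
prey_types = [(0.2, 10.0, 1.0), (0.5, 4.0, 1.0), (1.0, 1.0, 0.5)]
diet, rate = optimal_diet(prey_types)
print(len(diet), "types included; long-term rate =", round(rate, 2), "kJ/min")
# -> 2 types included: the least profitable type is ignored even though it is common
```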
Patch selection theory describes the behavior of a forager whose prey is concentrated in small areas, known as patches, with significant travel time between them. The model seeks to determine how long an individual will spend at one patch before deciding to move to the next. To understand whether an animal should stay at a patch or move to a new one, think of a bear in a patch of berry bushes: the longer the bear stays, the fewer berries remain for it to eat. The bear must decide how long to stay, and thus when to leave that patch and move to a new one. Movement depends on the travel time between patches and the energy gained at one patch versus another. This is based on the marginal value theorem.
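A small numerical sketch of the marginal value theorem logic follows: with a diminishing-returns gain curve (hypothetical here), the residence time that maximizes gain divided by travel-plus-residence time gets longer as travel time between patches grows, which is exactly the bear's trade-off above and the chipmunk pattern described in the next paragraph.

```python
import math

def gain(t, g_max=100.0, r=0.5):
    """Cumulative energy gained after t minutes in a patch (diminishing returns)."""
    return g_max * (1.0 - math.exp(-r * t))

def best_residence_time(travel_time, step=0.01, t_max=30.0):
    """Grid-search the residence time maximizing long-term rate: gain / (travel + stay)."""
    best_t, best_rate = 0.0, 0.0
    for i in range(1, int(t_max / step) + 1):
        t = i * step
        rate = gain(t) / (travel_time + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t, best_rate

# Longer travel between patches -> the forager should stay longer in each patch.
for travel in (1.0, 5.0, 15.0):
    t_star, rate = best_residence_time(travel)
    print(f"travel {travel:>4} min -> stay {t_star:.2f} min, rate {rate:.2f}")
```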
Central place foraging theory is a version of the patch model. This model describes the behavior of a forager that must return to a particular place to consume food, or perhaps to hoard food or feed it to a mate or offspring. Chipmunks are a good example of this model. As travel time between the patch and their hiding place increased, the chipmunks stayed longer at the patch.
In recent decades, optimal foraging theory has often been applied to the foraging behavior of human hunter-gatherers. Although this is controversial, coming under some of the same kinds of attack as the application of sociobiological theory to human behavior, it does represent a convergence of ideas from human ecology and economic anthropology that has proved fruitful and interesting.
Group foraging is when animals find, capture and consume prey in the presence of other individuals; that is, foraging success depends not only on an animal's own behavior but also on the behavior of others. Group foraging can emerge in two types of situations. The first, an aggregation economy, occurs when foraging in a group brings greater rewards. The second, a dispersion economy, occurs when animals forage together even though it is not in each individual's best interest. Think of a cardinal at a bird feeder: we might see a group of birds foraging there, but it is not in the cardinal's interest for the other birds to be present, since how much food the cardinal gets depends not only on what it takes from the feeder but also on how much the other birds take.
In red harvester ants, the foraging process is divided between three different types of workers: nest patrollers, trail patrollers, and foragers. These workers can utilize many different methods of communicating while foraging in a group, such as guiding flights, scent paths, and "jostling runs", as seen in the eusocial bee Melipona scutellaris.
Chimpanzees in the Taï Forest in Côte d'Ivoire also forage for meat when they can, which they achieve through group foraging. A positive correlation has been observed between the success of a hunt and the size of the foraging group. The chimps have also been observed to apply rules to their foraging: successful hunters are allowed first access to the kills, which gives individuals a benefit to becoming involved in the hunt.
Cost and benefits of group foraging
As already mentioned, group foraging brings both costs and benefits to the members of the group. The benefits include being able to capture larger prey, being able to create aggregations of prey, being able to capture prey that are difficult or dangerous and, most importantly, a reduced threat of predation. The chief cost is competition for available resources with other group members. This competition can take the form of scramble competition, whereby each individual strives to get a portion of the shared resource, or interference competition, whereby the presence of competitors prevents a forager from accessing resources. Group foraging can thus reduce an animal's foraging payoff.
Group foraging may be influenced by the size of a group. In some species, such as lions and wild dogs, foraging success increases with group size up to an optimal size and then declines once that size is exceeded. Many factors affect group size in different species. For example, lionesses do not make foraging decisions in a vacuum; they balance obtaining food, defending their territory and protecting their young. In fact, lion foraging behavior does not maximize energy gain: lions do not behave optimally with respect to foraging alone, because they must also defend territory and protect young, so they hunt in small groups to reduce the risk of being caught alone. Another factor that may influence group size is the cost of hunting; to understand the behavior of wild dogs and their average group size, the distance the dogs run must be taken into account.
Theorizing on hominid foraging during the Aurignacian, Blades et al. (2001) defined a forager as performing the activity at optimal efficiency when the individual has weighed the costs of searching for and pursuing prey against its choice of prey. In selecting an area to work within, the individual would also have had to decide the correct time to move to another location, based on the perceived yield remaining and the potential yields of the other areas available.
Group foraging and the ideal free distribution
The theory scientists use to understand group foraging is called the ideal free distribution. This is the null model for thinking about what would draw animals into groups to forage and how they would behave in the process. The model predicts that animals will make an instantaneous decision about where to forage based on the quality (prey availability) of the patches available at that time, choosing the most profitable patch, the one that maximizes their energy intake. This quality depends on the starting quality of the patch and on the number of predators already there consuming the prey.
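A minimal sketch of the ideal free distribution's logic: each forager joins whichever patch currently offers the best per-capita intake, so at equilibrium intake rates are roughly equalized and occupancy ends up roughly proportional to patch quality. The patch qualities and forager count below are hypothetical.

```python
def ideal_free_distribution(patch_quality, n_foragers):
    """Sequentially assign foragers to the patch offering the best per-capita
    intake, assuming each patch's quality is shared equally among its occupants."""
    occupants = [0] * len(patch_quality)
    for _ in range(n_foragers):
        # Per-capita intake each patch would offer to one more forager.
        prospective = [q / (n + 1) for q, n in zip(patch_quality, occupants)]
        best = prospective.index(max(prospective))
        occupants[best] += 1
    return occupants

quality = [30.0, 20.0, 10.0]                 # prey delivered per unit time, per patch
print(ideal_free_distribution(quality, 12))  # -> [6, 4, 2], proportional to quality
```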
- Danchin, E.; Giraldeau, L. & Cezilly, F. (2008). Behavioural Ecology. New York: Oxford University Press. ISBN 978-0-19-920629-2.
- Hughes, Roger N, ed. (1989), Behavioural Mechanisms of Food Selection, London & New York: Springer-Verlag, p. v, ISBN 978-0-387-51762-9
- Raine, N.E.; Chittka, L. (2008). "The correlation of learning speed and natural foraging success in bumble-bees'". Proceedings of the Royal Society B: Biological Sciences. 275 (1636): 803–08. doi:10.1098/rspb.2007.1652. PMC 2596909. PMID 18198141.
- Rapaport, L.G.; Brown, G.R. (2008). "Social influences on foraging behavior in young nonhuman primates:learning what, where and how to eat". Evolutionary Anthropology: Issues, News, and Reviews. 17 (4): 189–201. doi:10.1002/evan.20180. S2CID 86010867.
- Dugatkin, Lee Ann (2004). Principles of Animal Behavior.
- Lefebvre, Louis; Patrick Whittle; Evan Lascaris; Adam Finkelstein (1997). "Feeding innovations and forebrain size in birds". Animal Behaviour. 53 (3): 549–60. doi:10.1006/anbe.1996.0330. S2CID 53146859.
- Murphy, Christina M.; Breed, Michael D. (2008-04-01). "Time-Place Learning in a Neotropical Stingless Bee, Trigona fulviventris Guérin (Hymenoptera: Apidae)". Journal of the Kansas Entomological Society. 81 (1): 73–76. doi:10.2317/JKES-704.23.1. ISSN 0022-8567. S2CID 86256384.
- Hunt, G.J.; et al. (2007). "Behavioral genomics of honeybee foraging and nest defense". Naturwissenschaften. 94 (4): 247–67. doi:10.1007/s00114-006-0183-1. PMC 1829419. PMID 17171388.
- Roch, S.; von Ammon, L.; Geist, J.; Brinker, A. (2018). "Foraging habits of invasive three-spined sticklebacks ( Gasterosteus aculeatus ) – impacts on fisheries yield in Upper Lake Constance". Fisheries Research. 204: 172–80. doi:10.1016/j.fishres.2018.02.014.
- Cruz-Rivera, Edwin; Hay, Mark E. (2000-01-01). "Can quantity replace quality? food choice, compensatory feeding, and fitness of marine mesograzers". Ecology. 81 (1): 201–19. doi:10.1890/0012-9658(2000)081[0201:CQRQFC]2.0.CO;2.
- "Foraging Strategies | Encyclopedia.com". www.encyclopedia.com. Retrieved 2021-09-26.
- Riedman, Marianne (1990). The pinnipeds: seals, sea lions, and walruses. Berkeley: University of California Press. ISBN 978-0-520-06497-3.
- le Roux, Aliza; Michael I. Cherry; Lorenz Gygax (5 May 2009). "Vigilance behaviour and fitness consequences: comparing a solitary foraging and an obligate group-foraging mammal". Behavioral Ecology and Sociobiology. 63 (8): 1097–1107. doi:10.1007/s00265-009-0762-1. S2CID 21961356.
- Torres-Contreras, Hugo; Ruby Olivares-Donoso; Hermann M. Niemeyer (2007). "Solitary Foraging in the Ancestral South American Ant, Pogonomyrmex vermiculatus. Is it Due to Constraints in the Production or Perception of Trail Pheromones?". Journal of Chemical Ecology. 33 (2): 435–40. doi:10.1007/s10886-006-9240-7. PMID 17187299. S2CID 23930353.
- Patterson, E.M.; Mann, J. (2011). "The Ecological Conditions That Favor Tool Use and Innovation in Wild Bottlenose Dolphins (Tursiops sp.)". PLOS ONE. 6 (7): e22243. doi:10.1371/journal.pone.0022243. PMC 3140497. PMID 21799801.
- Rutz, C.; et al. (2010). "The ecological significance of tool use in New Caledonian Crows". Science. 329 (5998): 1523–26. doi:10.1126/science.1192053. PMID 20847272. S2CID 8888382.
- Goodall, Jane (1964). "Tool-using and aimed throwing in a community of free-living chimpanzees". Nature. 201 (4926): 1264–66. doi:10.1038/2011264a0. PMID 14151401. S2CID 7967438.
- MacArthur RH, Pianka ER (1966), "On the optimal use of a patchy environment.", American Naturalist, 100 (916): 603–09, doi:10.1086/282454, JSTOR 2459298, S2CID 86675558
- Emlen, J. M. (1966), "The role of time and energy in food preference", The American Naturalist, 100 (916): 611–17, doi:10.1086/282455, JSTOR 2459299, S2CID 85723900
- Stephens, D.W.; Brown, J.S. & Ydenberg, R.C. (2007). Foraging: Behavior and Ecology. Chicago: University of Chicago Press.[page needed][ISBN missing]
- Hrncir, Michael; Jarau, Stefan; Zucchi, Ronaldo; Barth, Friedrich G. (2000). "Recruitment behavior in stingless bees, Melipona scutellaris and M. quadrifasciata . II. Possible mechanisms of communication" (PDF). Apidologie. 31 (1): 93–113. doi:10.1051/apido:2000109.
- Boesch, C (1994). "Cooperative hunting in wild Chimpanzees". Animal Behaviour. 48 (3): 653–67. doi:10.1006/anbe.1994.1285. S2CID 53177700.
- Gomes, C.M.; Boesch, C. (2009). "Wild chimpanzees exchange meat for sex on a long-term basis". PLOS ONE. 4 (4): e5116. doi:10.1371/journal.pone.0005116. PMC 2663035. PMID 19352509.
- Gomes, C.M.; Boesch, C. (2011). "Reciprocity and trades in wild west African chimpanzees". Behavioral Ecology and Sociobiology. 65 (11): 2183–96. doi:10.1007/s00265-011-1227-x. S2CID 37432514.
- Packer, C.; Scheel, D.; Pusey, A.E. (1990). "Why lions form groups: food is not enough". American Naturalist. 136: 1–19. doi:10.1086/285079. S2CID 85145653.
- Benoit-Bird, Kelly; Whitlow W. L. Au (January 2009). "Cooperative prey herding by the pelagic dolphin, Stenella longirostris" (PDF). The Journal of the Acoustical Society of America. 125 (1): 125–37. doi:10.1121/1.2967480. PMID 19173400. Archived from the original (PDF) on 2012-04-25. Retrieved 2011-11-29.
- Creel, S; Creel N M (1995). "Communal hunting and pack size in African wild dogs, Lycaon pictus". Animal Behaviour. 50 (5): 1325–39. doi:10.1016/0003-3472(95)80048-4. S2CID 53180378.
- Blades, B.S. (2001). Aurignacian Lithic Economy: Ecological Perspectives from Southwestern France. Springer. ISBN 0306463342. Retrieved 2012-07-08.
- The Association of Foragers: An international association for teachers of foraging skills.
- Forager's Buddy GPS Foraging
- South West Outdoor Travelers- Wild Edibles, Medicinals, Foraging, Primitive Skills & More
- Institute for the Study of Edible Wild Plants and Other Foragables
- The Big Green Idea Wild Foraging Factsheet
- Caress, Badiday (2000). "The emergence and stability of cooperative fishing on Ifaluk Atoll". In L. Cronk, N. Chagnon, and B. Irons (eds.), Human Behavior and Adaptation: An Anthropological Perspective, pp. 437–472.
In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points.
The rigid transformations include rotations, translations, reflections, or their combination. Sometimes reflections are excluded from the definition of a rigid transformation by imposing that the transformation also preserve the handedness of figures in the Euclidean space (a reflection would not preserve handedness; for instance, it would transform a left hand into a right hand). To avoid ambiguity, this smaller class of transformations is known as rigid motions or proper rigid transformations (informally, also known as roto-translations). In general, any proper rigid transformation can be decomposed as a rotation followed by a translation, while any rigid transformation can be decomposed as an improper rotation followed by a translation (or as a sequence of reflections).
Any object will keep the same shape and size after a proper rigid transformation.
All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a group called the Euclidean group, denoted E(n) for n-dimensional Euclidean spaces. The set of proper rigid transformations is called special Euclidean group, denoted SE(n).
In kinematics, proper rigid transformations in a 3-dimensional Euclidean space, denoted SE(3), are used to represent the linear and angular displacement of rigid bodies. According to Chasles' theorem, every rigid transformation can be expressed as a screw displacement.
A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form
- T(v) = R v + t
where RT = R−1 (i.e., R is an orthogonal transformation), and t is a vector giving the translation of the origin.
A proper rigid transformation has, in addition,
- det(R) = 1
which means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1.
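To make the formal definition concrete, here is a minimal Python/NumPy sketch (the rotation angle, translation vector, and test points are illustrative values, not taken from the text). It applies T(v) = R v + t, checks the orthogonality condition on R, checks det(R) = 1 for a proper rigid transformation, and verifies that the distance between two points is preserved.

```python
import numpy as np

# Example 2D rotation by 30 degrees plus a translation (illustrative values).
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])

def rigid_transform(v, R, t):
    """Apply T(v) = R v + t."""
    return R @ v + t

# R is orthogonal: R^T R equals the identity matrix.
assert np.allclose(R.T @ R, np.eye(2))

# The transformation is proper (a rotation, not a reflection): det(R) = 1.
assert np.isclose(np.linalg.det(R), 1.0)

# Distances between points are preserved by the transformation.
x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
d_before = np.linalg.norm(x - y)
d_after = np.linalg.norm(rigid_transform(x, R, t) - rigid_transform(y, R, t))
assert np.isclose(d_before, d_after)
print(d_before, d_after)  # both 5.0
```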
A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for Rn is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points X and Y as the sum of the squares of the distances along the coordinate axes, that is
- d(X, Y)² = (X1 − Y1)² + (X2 − Y2)² + … + (Xn − Yn)² = (X − Y)·(X − Y),
where X = (X1, X2, …, Xn) and Y = (Y1, Y2, …, Yn), and the dot denotes the scalar product.
Using this distance formula, a rigid transformation g: Rn → Rn has the property
- d(g(X), g(Y)) = d(X, Y).
Translations and linear transformations
A translation of a vector space adds a vector d to every vector in the space, which means it is the transformation
- g(v): v→v+d.
It is easy to show that this is a rigid transformation by showing that the distance between translated vectors equals the distance between the original vectors:
- d(v + d, w + d) = |(v + d) − (w + d)| = |v − w| = d(v, w).
A linear transformation of a vector space, L: Rn → Rn, preserves linear combinations,
- L(av + bw) = aL(v) + bL(w).
A linear transformation L can be represented by a matrix, which means
- L: v→[L]v,
where [L] is an n×n matrix.
A linear transformation is a rigid transformation if it satisfies the condition
- d([L]v, [L]w) = d(v, w), that is, ([L]v − [L]w)·([L]v − [L]w) = (v − w)·(v − w).
Now using the fact that the scalar product of two vectors v·w can be written as the matrix operation vTw, where the T denotes the matrix transpose, we have
- ([L]v − [L]w)T([L]v − [L]w) = (v − w)T[L]T[L](v − w) = (v − w)T(v − w).
Thus, the linear transformation L is rigid if its matrix satisfies the condition
- [L]T[L] = [I],
where [I] is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors.
Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted O(n).
Compute the determinant of the condition for an orthogonal matrix to obtain
- det([L]T[L]) = det([L]T) det([L]) = det([L])² = det([I]) = 1,
which shows that the matrix [L] can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in Rn×n separated by the set of singular matrices.
The set of rotation matrices is called the special orthogonal group, and denoted SO(n). It is an example of a Lie group because it has the structure of a manifold.
By the end of this lecture, the students are expected to learn:
1. What is optimization?
2. The different optimization tools used to optimize single- and multi-variable problems.
Defining Optimum Design
In principle, an optimum design means the best or the most suitable of all the feasible conceptual designs. Optimization is the process of maximizing a desired quantity or minimizing an undesired one. For example, optimization is often used by mechanical engineers to achieve either a minimum manufacturing cost or a maximum component life. Aerospace engineers may wish to minimize the overall weight of an aircraft. Production engineers would like to design optimum schedules for various machining operations to minimize the idle time of machines and the overall job completion time, and so on.
Tools for Design Optimization
No single optimization method is available for solving all optimization problems in a uniquely efficient manner. Several optimization methods have been developed to date for solving different types of optimization problems. Optimization methods are generally classified into groups such as (a) single-variable optimization, (b) multi-variable optimization, (c) constrained optimization, (d) specialized optimization, and (e) non-traditional optimization. We concentrate on only single- and multi-variable optimization methods in this lecture; interested readers can consult appropriate references to understand the more advanced optimization techniques.
Single Variable Optimization Methods
These methods deal with the optimization of a single variable.
A typical objective-function curve f(x), where x is the design variable, depicts the various types of extremes that can occur. It can be observed from such a curve that both points A and C are mathematical minima. Point A, the larger of the two minima, is called a local minimum, while point C, the smaller, is the global minimum.
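As a minimal illustration of the local-versus-global distinction described above, the Python sketch below (the polynomial objective and the search bounds are arbitrary example values, not the curve from the lecture) minimizes a single-variable function over two different intervals; a purely local, bounded search returns whichever minimum lies in its interval, and the global minimum is the better of the candidates found.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# An objective with two minima (illustrative example, not the lecture's curve).
def f(x):
    return 0.1 * x**4 - x**2 + 0.3 * x

# A bounded local search converges to whichever minimum lies in its interval.
left = minimize_scalar(f, bounds=(-4.0, 0.0), method="bounded")
right = minimize_scalar(f, bounds=(0.0, 4.0), method="bounded")

print(f"minimum near x = {left.x:.3f}, f = {left.fun:.3f}")
print(f"minimum near x = {right.x:.3f}, f = {right.fun:.3f}")

# The global minimum is simply the better of the candidates found.
best = min([left, right], key=lambda r: r.fun)
print(f"global minimum near x = {best.x:.3f}")
```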
Rice is not native to the Americas but was introduced to Latin America and the Caribbean by European colonizers at an early date, with Spanish colonizers introducing Asian rice to Mexico in the 1520s at Veracruz and the Portuguese and their African slaves introducing it at about the same time to Colonial Brazil. Recent scholarship suggests that enslaved Africans played an active role in the establishment of rice in the New World and that African rice was an important crop from an early period. Varieties of rice and bean dishes that were a staple dish among the peoples of West Africa remained a staple among their descendants subjected to slavery in the Spanish New World colonies, Brazil and elsewhere in the Americas. The Native Americans of what is now the Eastern United States may have practiced extensive agriculture with forms of wild rice. The ancient people of Peru built water-moving and preserving technologies like the aqueducts of Cumbe Mayo (c. 1500 BCE) or the Nazca's underground aqueducts called Puquios (date uncertain), or the terraced gardens of the Huari. Aqueducts were also utilized by the Moche. Another technique used to adapt the steep land of the Andes Mountains for farming was terracing. The Chavin, the Moche and the Incas built terraces, or flattened areas of land, into the sides of hills. The terraces reduced soil erosion that would normally be high on a steep hill. These terraces are still used in Peru. The Incans also irrigated their fields with a system of reservoirs and cisterns to collect water, which was then distributed by canals and ditches. However, by the mid-19th century, only 3% of Peru's land was still farmable. It lagged far behind many other South American countries in agriculture. Much of the pre-history of Peru has been wrapped up in where the farmable land was located. The most populated coastal regions of Peru are the two parallel mountain ranges and the series of 20 to 30 rivers running through the coastal desert. In dry periods only the mountains are wet enough for agriculture and the desert coast is empty, while in wet periods many cultures have thrived along the rivers of the coast. The well-known Inca were a mountain-based culture that expanded when the climate became wetter, often sending conquered peoples down from the mountains into unfarmed but farmable lowlands. In contrast, the Moche were a lowland culture that died out after a strong El Niño, which caused abnormally high rainfall and floods and was followed by a long drought.
A study has shown that the crops of squash, peanuts, and cotton were domesticated in Peru around 10,000, 8,500, and 6,000 years ago, respectively. They were grown by ancient peoples in the valley. No earlier instances of the farming of these crops are known. Peru is both afflicted and blessed by a peculiar climate due to the Humboldt Current. Before overfishing killed its fishery, Peru had the most productive fishery in the world due to the cold Humboldt Current. The current brings nutrients from a large portion of the Pacific floor to Peru's doorstep. On land, it results in a cold mist that covers coastal Peru to the extent that the desert plants have adapted to obtain water from the air instead of from the infrequent rainfall. The soil on the wet side of the mountains is thin, and the rivers on the dry side are few. This means all the water must be brought from the Atlantic side of the mountain ranges that split Peru. There were many obstacles to improving Peru's agricultural production. Since the conquest of the Inca, Peru has always been rich in natural resources such as tin, silver, gold, guano and rubber. These resources share the attribute that, at least in Peru, they were found, not grown. The train tracks laid in Peru did not connect its peoples; they connected the sources of these valuable resources to the sea. So there are few ways to bring agricultural products to market. The road system is still primitive in Peru: there is no connection to Brazil, and only a little over a quarter of the 15th-century Inca road system has been rebuilt as modern highway. Another obstacle is the size of Peru's informal economy. This prevents Peru from practically applying an income tax, which means much of its revenue comes from a 13% tax on gross agricultural sales. This means Peruvian farmers must produce that much more product per dollar just to break even with farmers in countries that tax farmers on net profit. They have no chance at all of competing with agricultural products from countries that subsidize farmers, such as Japan, the United States and Europe. Today Peru grows agricultural commodities such as asparagus, potatoes, maize, rice, and coffee. Peru provides half of the world's supply of quinoa. Peruvian agriculture uses synthetic fertilizers rather than the still-abundant guano due to infrastructure issues. The maize is not exportable due to large subsidies in Europe and the United States to its high-cost producers, but coffee is exportable. In recent years Peru has become the world's primary source of high-quality organic coffee.
Peru does not have a quality control program such as Kenya's but its government has worked to educate farmers on how to improve quality. Despite the glut of coffee producers in the market today, coffee production in Peru is still promising. It naturally has the high altitudes and partial shade desired by Coffea arabica, and it has much more of such land available than competitors such as Jamaica and Hawaii. Rice is the staple food of over half the world's population. It is the predominant dietary energy source for 17 countries in Asia and the Pacific, 9 countries in North and South America and 8 countries in Africa. Rice provides 20% of the world’s dietary energy supply, while wheat supplies 19% and maize (corn) 5%. A detailed analysis of nutrient content of rice suggests that the nutrition value of rice varies based on a number of factors. It depends on the strain of rice, that is between white, brown, black, red and purple varieties of rice – each prevalent in different parts of the world. It also depends on nutrient quality of the soil rice is grown in, whether and how the rice is polished or processed, the manner it is enriched, and how it is prepared before consumption.
An illustrative comparison between white and brown rice of protein quality, mineral and vitamin quality, carbohydrate and fat quality suggests that neither is a complete nutrition source. Between the two, there is a significant difference in fiber content and minor differences in other nutrients. Brilliantly colored rice strains, such as purple rice, derive their color from anthocyanins and tocols. Scientific studies suggest that these color pigments have antioxidant properties that may be useful to human health. In purple rice bran, hydrophilic antioxidants are in greater quantity and have higher free radical scavenging activity than lipophilic antioxidants. Anthocyanins and tocols in purple rice are largely located in the inner portion of purple rice bran. Comparative nutrition studies on red, black and white varieties of rice suggest that pigments in red and black rice varieties may offer nutritional benefits. Red or black rice consumption was found to reduce or retard the progression of atherosclerotic plaque development, induced by dietary cholesterol, in mammals. White rice consumption offered no similar benefits, and the study attributes this to the antioxidants present in red and black varieties of rice but absent in white rice.
The health benefits of rice include its ability to provide fast and instant energy, regulate and improve bowel movements, stabilize blood sugar levels, and slow down the aging process, while also providing an essential source of vitamin B1 to the human body. Other benefits include its ability to boost skin health, increase the metabolism, aid in digestion, reduce high blood pressure, help weight loss efforts, improve the immune system and provide protection against dysentery, cancer, and heart disease. Rice is a fundamental food in many cultural cuisines around the world, and it is an important cereal crop that feeds more than half of the world’s population. The various benefits of rice can be found in more than forty thousand varieties of this cereal that is available throughout the world. The two main categories are whole grain rice and white rice. Whole grain rice is not processed very much, so it is high in nutritional value, whereas white rice is processed so that the bran or outer covering is removed, leaving it with less nutritional value. People choose different styles of rice for particular flavors, depending on their culinary needs, the availability, and the potential for healthy benefits. Rice can also be defined by the length of each grain. Indian or Chinese cuisines specialize in long grained rice, whereas western countries prefer short or medium length grains.
Since rice is abundant in carbohydrates, it acts as fuel for the body and aids in the normal functioning of the brain. Carbohydrates are essential to be metabolized by the body and turned into functional, usable energy.
The vitamins, minerals, and various organic components increase the functioning and metabolic activity of all your organ systems, which further increases energy levels.
-Cholesterol Free: Eating rice is extremely beneficial for your health, simply because it does not contain harmful fats, cholesterol or sodium. It forms an integral part of a balanced diet. Any food that can provide nutrients without having any negative impacts on health is a bonus! Low levels of fat, cholesterol, and sodium will also help reduce obesity and the health conditions associated with being overweight.
Rice is one of the most widely used and eaten foods in the world because it can keep people healthy and alive, even in very small quantities:
-Blood Pressure Management: Rice is low in sodium, so it is considered one of the best foods for those suffering from high blood pressure and hypertension. Sodium can cause veins and arteries to constrict, increasing the stress and strain on the cardiovascular system as the blood pressure increases. This is also associated with heart conditions like atherosclerosis, heart attacks, and strokes, so avoiding excess sodium is always a good idea.
-Cancer Prevention: Whole grain rice like brown rice is rich in insoluble fiber that can protect against many types of cancer. Many scientists and researchers believe that such insoluble fibers are vital for protecting the body against the development and metastasis of cancerous cells. Fiber, specifically is beneficial in defending against colorectal and intestinal cancer. However, besides fiber, rice also has natural antioxidants like vitamin C, vitamin-A, phenolic and flavonoid compounds, which also act as or stimulate antioxidants to scour the body for free radicals. Free radicals are byproducts of cellular metabolism that can do serious damage to your organ systems and cause the mutation of healthy cells into cancerous ones. Boosting your antioxidant levels is a great idea, and eating more rice is a wonderful way to do that.
-Skin care: Medical experts say that powdered rice can be applied topically to cure certain skin ailments. On the Indian subcontinent, rice water is readily prescribed by ayurvedic practitioners as an effective ointment to cool off inflamed skin surfaces. The phenolic compounds that are found in rice, particularly in brown or wild rice, have anti-inflammatory properties, so they are also good for soothing irritation and redness. Whether consumed or topically applied, substances derived from rice tend to relieve a number of skin conditions. The antioxidant capacity also helps delay the appearance of wrinkles and other premature signs of aging that can affect the skin.
-Alzheimer’s Disease: Brown rice is said to contain high levels of nutrients that stimulate the growth and activity of neurotransmitters, subsequently helping to prevent Alzheimer’s disease to a considerable extent. Various species of wild rice have been shown to stimulate neuroprotective enzymes in the brain, which inhibit the effects of free radicals and other dangerous toxins that can cause dementia and Alzheimer’s disease.
-Diuretic and Digestive Qualities: The husk part of rice is considered to be an effective medicine to treat dysentery, and the husks of a three-month-old rice plant are said to have diuretic properties. Chinese people believe that rice considerably increases appetite, cures stomach ailments and reduces all digestive problems. As a diuretic, rice husk can help you lose excess water weight, eliminate toxins from the body like uric acid, and even lose weight, since approximately 4% of urine is actually made up of body fat! The high fiber content also increases bowel movement regularity and protects against various types of cancer, as well as reducing the chances of cardiovascular diseases.
-Rich in Vitamins: Rice is an excellent source of vitamins and minerals like niacin, vitamin D, calcium, fiber, iron, thiamine and riboflavin. These vitamins provide the foundation for body metabolism, immune system health, and general functioning of the organ systems, since vitamins are commonly consumed in the most essential activities in the body.
-Cardiovascular Health: Rice bran oil is known to have antioxidant properties that promote cardiovascular strength by reducing cholesterol levels in the body. We have already spoken about the cardiovascular benefits of fiber, and low levels of fat and sodium. Wild rice and brown rice varieties are far better than white rice in this category, since the husk of the grain is where much of the nutrients are; the husk is removed in white rice preparation.
-Resistant starch: Rice abounds in resistant starch, which reaches the bowels in an undigested form. This type of starch stimulates the growth of useful bacteria that help with normal bowel movements. Also, this insoluble starch is very useful in reducing the effects of conditions like Irritable Bowel Syndrome (IBS) and diarrhea.
Can a Computer Generate Truly Random Numbers?
Computers can’t generate truly random numbers in the purest sense with software alone. However, computers can generate truly random numbers with the help of natural random events.
How a Computer Can Produce a Truly Random Number
Although one day this may change, computer software alone currently can only generate pseudo-random numbers based on programmed algorithms. Even though these algorithms can produce numbers that are as good as random for almost all purposes, they aren’t truly random.
To generate a truly random number, a computer has to use hardware to observe a natural random event (like atmospheric noise) and then create a random number from that data.
Some will argue this process isn’t “truly random,” but it is close enough to consider the idea that a computer can produce a truly random number a fact. It is just important that we understand the nuance: the computer is essentially borrowing the randomness from nature and then using a deterministic algorithm to create further randomization (which some would consider “not truly random”).
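In practice, operating systems expose exactly this kind of harvested entropy to programs. The following minimal Python sketch shows one way a program can draw on the entropy pool the OS builds from hardware and environmental events (interrupt timings, device noise, and on many systems a hardware random number generator); whether that counts as “truly random” is exactly the debate in this article.

```python
import os
import secrets

# Ask the operating system for bytes from its entropy pool, which is seeded
# by hardware and environmental events rather than by a fixed algorithm alone.
raw = os.urandom(16)          # 16 unpredictable bytes
print(raw.hex())

# The secrets module wraps the same OS-backed source for convenient draws.
roll = secrets.randbelow(6) + 1   # an unpredictable die roll, 1..6
token = secrets.token_hex(8)      # e.g. for session tokens
print(roll, token)
```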
TIP: This argument assumes that anything in the universe is actually random, and specifically that what a computer observes to create a Random Number Generator (RNG) is random. If the natural random event isn’t actually random, then the random number produced from it via an RNG isn’t truly random either.
A video discussing “what is random?”
FACT: Some argue that neither computers or humans can be random, and that both are deterministic machines. In both cases, the argument is that humans and computers lack the ability to be truly random due to their hard wiring and programming (a human, like a computer, can only draw from what it knows, and thus it is to an extent predictable). One can argue that nature isn’t deterministic at a quantum level however. And, this is why we measure random events like atomic decay to get random “seed” numbers.
Random Numbers Simplified
In its simplest form:
- Random means unpredictable, or not determinable (non-deterministic).
- Non-random means predictable, or determinable (deterministic).
Everything that computers or humans do is predictable to some extent, so neither can produce truly random numbers (they produce pseudo or “fake” random numbers instead). To get truly random numbers, we have to look to nature and hope nature doesn’t have a bias (the logic above assumes nature can be random).
Although we aren’t sure that anything in the universe is truly random, scientists agree that when it comes to fluctuations of the smallest particles (like the movement of quarks, radioactive decay, or atomic noise) randomness occurs. We can measure the randomness of tiny natural events and get a random “seed” number. But the second we apply a computer algorithm to the seed, or get too involved as humans, we all but remove the true randomness from the result.
Randomness is important in everything from the lottery to security software, but in a practical sense a pseudo random number, with a truly random seed, typically gets the job done (sometimes even better than a truly random number). From here the discussion on determinism and randomness gets more philosophical and technical.
Determinism and “Deterministic” Machines
Computers and humans are “deterministic“, meaning that their outputs are predetermined by some set of existing values and thus predictable to some extent.
If a Random Number Generator is deterministic, it implies that a generated sequence of numbers can be reproduced at a later date if the starting point in the sequence is known.
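As a minimal illustration of that point, the sketch below uses Python’s standard library PRNG (the Mersenne Twister): supplying the same seed reproduces exactly the same “random” sequence, which is what makes the generator deterministic.

```python
import random

def draw_sequence(seed, n=5):
    # A deterministic PRNG: the seed fixes the entire future sequence.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

first = draw_sequence(seed=42)
second = draw_sequence(seed=42)

print(first)
print(second)
assert first == second  # identical: the generator is deterministic
```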
Computers use code to run programs, and that means the outcome is programmed to some extent and determined by a set of starting values (the algorithm of the code). Humans, in a more philosophical sense, are the same way as our behavior is usually predictable. We can refer to anything that is deterministic as a deterministic machine.
FUN FACT: According to “the infinite monkey theorem”, given enough time, a monkey punching random letters on a keyboard would eventually type all of Shakespeare’s plays. Monkeys (like humans and computers) are “deterministic” and thus wouldn’t technically type truly random letters, but for the purpose of argument, one might say they could. If they were truly random, any given monkey might type Edward Albee’s plays instead.
Although a deterministic machine like a computer can’t produce a truly random number, non-deterministic machines can.
A non-deterministic machine can measure something that is thought to be truly random, like radioactive decay, and produce a random value based on that natural process. We don’t know if anything in the universe is, in fact, random, or to what extent the measuring process affects randomness. We think that machines (not computer software) can produce truly random numbers by measuring random natural occurrences on the atomic or quantum level (the smaller, the better).
A video on “True Random Number Generators“.
Hybrid Machines Aren’t Random Either
The problem with true random number generation comes when we consider the uses to which we plan to put the random numbers. Most of the time we want results to be unique or to have some other property (we don’t want results to repeat, or we want numbers in a certain range), but imposing such a rule set means the output is no longer purely random. To get a very random, but still useful number, we have to use a deterministic pseudo random number generator that uses the non-deterministic machine’s value as a starting point or “seed”.
“True Random Number Generators” that “harvest” randomness from the environment for a random starting value still aren’t truly random in the purest sense as they still rely on computers and humans to process the starting value.
If a human or computer builds the measurement tool and observes the measurement, it calls into question the role of the observer or the entity that measures the randomness of an event. Even if a human or computer isn’t involved in the process of getting a number, some argue that the random physical phenomenon providing the “seed” may not be truly random. For instance, atmospheric noise may not be truly random. We have a heck of a time proving randomness as fact; it’s easier to prove non-randomness.
FACT: Minuscule things like quarks and photons change based on our observation, at least as far as we can tell with our tools of measurement. This makes it even harder to be sure that we have removed human determinism from the randomness of natural events.
Philosophical arguments aside, for practical purposes, the argument isn’t as much about the randomness of the physical phenomenon as it is about the processing of that phenomenon. As soon as a human or a computer is involved in the process, the number necessarily loses its randomness.
FACT: Even when number generation is based on physical phenomena expected to be random, such as atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena, arguments can be made against it being truly “random”. Is anything truly random? Does observation affect results?
Pseudo Random Numbers Versus Random Numbers
In simple terms, random numbers can’t be predicted, and pseudo-random numbers can be predicted by “backwards engineering”. (Backwards engineering = predicting a result by figuring out the algorithm used to create the result.)
- Pseudo-random numbers are numbers that are essentially random but are generated using a computer algorithm. Since a human always programs the algorithm, the output is never truly random and can, in principle, be backwards engineered.
- Random numbers are numbers that are truly random and cannot be backwards engineered.
FACT: RNG stands for Random Number Generator, and PRNG stands for Pseudo-Random Number Generator. In practice, since nearly all RNGs implemented in software are PRNGs, RNG is often used as an acronym for random number generators in general.
A video describing the difference between Random and Pseudo-Random.
“Truly” Random Numbers and Computers
Some say a computer can never give a truly random result in the purest sense. Others (like Random.org) argue that TRNGs (true random number generators, also called hardware random number generators or HRNGs) have been alive and well since at least 1998.
Basing Random Number Generators on Natural Unpredictable Processes
The key to what some claim as “true” computer-based random number generation is using physical phenomena like atmospheric noise, which is itself expected to be random, as a starting point and then compensating for possible biases in the measurement process.
Even though this method comes close to producing a truly random number (close enough to base lotteries and slot machines on it), to a purist, human and computer involvement in the process means the number isn’t truly random.
FACT: Random.org uses atmospheric noise for its RNG.
There is No Pure Computer Based True Random Number Generator
Today’s “true RNGs” come extremely close to producing truly random numbers, but the experts don’t agree that the results are truly random in the purest sense of the word. Also, they are only loosely computer based as the random values are obtained with hardware rather than software.
“Neither computers or humans can be random, we are both deterministic machines.”
I would argue with you about that, but I know you are not responsible for holding that belief.
“Today’s “true RNGs” come extremely close to producing truly random numbers, but the experts don’t agree that the results are truly random in the purest sense of the word. Also, they are only loosely computer based as the random values are obtained with hardware rather than software.”
how is this even concluding it’s a myth?
computers don’t exist without hardware. it doesn’t matter if the practical randomness isn’t (yet) being generated by software (yes, ai will be able to do it), the computer is still generating a true random and unpredictable number for all practical purposes (such as cryptography).
as for being “truly random in the purest sense” is asking the wrong question. yes if the universe is completely deterministic, then everything in it is never purely random. but we have no reason to believe that’s the case, and if history is of any value we’ll probably never find such a reason. in our eyes, true randomness has always existed: it’s simply what we can’t predict, using whatever tools we have.
it doesn’t get any more pure than that.
So in my opinion, and where the article was going, was more that “if for a computer to be random it has to watch a mouse, then the computer isn’t being random, the mouse is…. ish, because the mouse is also fundamentally deterministic… so there is that…” But I see that you and Google disagree with me, so this is now a fact.
Congratulations skynet. 😉
Half joking. But really, sort of. Eh, I framed it as a fact and then explained the argument. We all know there are two schools of thought on this.
But we agree, current RNG algorithms are indeed pseudo random and not truly random. But yeah, also agree computers need hardware to exist and the hardware is the key to obtaining a random value.
Algebra 1 Quiz Game Show Solving Systems of Linear Equations in a PowerPoint Presentation
This Quiz Show game, Jeopardy Style, is a great way to review a chapter. There are 25 questions and a scoreboard so you don’t have to write the score on a side board. Each page has the point value!
This Quiz Show game covers all of the following:
Solving Systems of Linear Equations by Graphing
8.EE.8, 8.EE.8a, 8.EE.8b, 8.EE.8c, A.CED.3, A.REI.6
Solving Systems of Linear Equations by Substitution
8.EE.8b, 8.EE.8c, A.CED.3, A.REI.6
Solving Systems of Linear Equations by Elimination
8.EE.8b, 8.EE.8c, A.CED.3, A.REI.5, A.REI.6
Solving Special Systems of Linear Equations
8.EE.8, 8.EE.8a, 8.EE.8b, 8.EE.8c, A.REI.6
Systems of Linear Inequalities
This lesson applies to the Common Core Standard:
Grade 8 » Expressions & Equations 8.EE.8, 8.EE.8a, 8.EE.8b, 8.EE.8c
Analyze and solve linear equations and pairs of simultaneous linear equations.
8. Analyze and solve pairs of simultaneous linear equations.
a. Understand that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations simultaneously.
b. Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. For example, 3x + 2y = 5 and 3x + 2y = 6 have no solution because 3x + 2y cannot simultaneously be 5 and 6.
c. Solve real-world and mathematical problems leading to two linear equations in two variables. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair.
High School: Algebra » Creating Equations A.CED.3
Create equations that describe numbers or relationships.
3. Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or nonviable options in a modeling context. For example, represent inequalities describing nutritional and cost constraints on combinations of different foods.
Algebra » Reasoning with Equations & Inequalities A.REI.5, A.REI.6, A.REI.12
Solve systems of equations.
5. Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions.
6. Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables.
Represent and solve equations and inequalities graphically.
12. Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.
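As a minimal sketch of the kind of systems these standards describe (the coefficients are illustrative, not taken from the quiz, and NumPy is assumed), the following Python snippet solves a 2×2 linear system and shows how the inconsistent case from 8.EE.8b (3x + 2y = 5 and 3x + 2y = 6) can be detected numerically.

```python
import numpy as np

def solve_2x2(a, b):
    """Solve a @ [x, y] = b, returning None when the rows are parallel."""
    det = np.linalg.det(a)
    if np.isclose(det, 0.0):
        # Parallel rows: either no solution or infinitely many.
        return None
    return np.linalg.solve(a, b)

# A system with one solution: x + y = 5 and x - y = 1  ->  (3, 2)
print(solve_2x2(np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([5.0, 1.0])))

# The inconsistent example from the standard: 3x + 2y = 5 and 3x + 2y = 6
print(solve_2x2(np.array([[3.0, 2.0], [3.0, 2.0]]), np.array([5.0, 6.0])))  # None
```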
The presentation has 53 slides. Use as many or as few of the problems to help your students learn each concept. For more PowerPoint lessons & materials visit Preston PowerPoints
Please note that the PowerPoint is not
If you need an alternative version because your country uses different measurements, units, slight wording adjustment for language differences, or a slide reordering just ask.
Are you looking for the Algebra 1 Solving Systems of Linear Equations Bundle? Click here!
Are you looking for the Algebra 1 Jeopardy Bundle? Coming Soon! Click here!
This resource is for one teacher only.
You may not upload this resource to the internet in any form. Additional teachers must purchase their own license. If you are a coach, principal or district interested in purchasing several licenses, please contact me for a district-wide quote at firstname.lastname@example.org. This product may not be uploaded to the internet in any form, including classroom/personal websites or network drives.
Galaxies are made of stars, dust and dark matter, all held together by gravity. They come in a variety of shapes, sizes and ages, and many have black holes at their centers.
Galaxies contain varying numbers of planets, star systems, star clusters and types of interstellar clouds. Between them is a sparse interstellar medium of gas, dust and cosmic rays. The black holes at the centers of most galaxies are considered to be the primary driver of the active galactic nuclei found at the core, and their surroundings produce enormous amounts of energy that astronomers can see over great distances. Material surrounding the black hole is accelerated outwards by its jets. Other galaxies contain objects like quasars, the most energetic bodies in the universe, at their cores.
Galaxies are categorized according to their apparent shape, referred to as their visual morphology. A common form is the elliptical galaxy, which has an ellipse-shaped light profile. Spiral galaxies are disk-shaped with dusty, curving arms, and those with irregular shapes are known as irregular galaxies and typically originate from disruption by the gravitational pull of neighboring galaxies. Interactions between neighboring galaxies, which can result in a merger, sometimes induce significantly increased incidents of star formation, leading to starburst galaxies.
Strictly, black holes don't actually lead anywhere, as they are not holes in the common sense of the term. According to the Harvard Smithsonian Center for Astrophysics, black holes are regions of the universe in which matter has become so dense that nothing, not even light, can escape its gravitational pull. Within this volume, the original matter has become so compact that it can fairly be said to have disappeared.
Interstellar bubbles are made when stellar winds caused by massive stars or supernovae push the interstellar gas around them outwards in a bubble shape. Stars clustered close enough together form giant bubbles when their bubbles merge. These giant bubbles are known as superbubbles.
Stars do die. The nuclear fusion reaction in stars stops and the star shrinks into a white dwarf due to gravity. The white dwarf further shrinks by releasing energy and becomes a black dwarf, when no energy is released either by fusion or by shrinking.
Star type M, the red stars, are the coolest stars with an average surface temperature of under 3,500 K. This type of star has a 0.3 solar mass (M☉) and an average radius of 0.4 solar radius (R☉). Examples of red stars are Betelgeuse and Antares.
- Describe the American values that are reflected in the US Constitution.
- Know what federalism means, along with separation of powers.
- Explain the process of amending the Constitution and why judicial review is particularly significant.
The Constitution as Reflecting American Values
In the US, the one document to which all public officials and military personnel pledge their unswerving allegiance is the Constitution. If you serve, you are asked to “support and defend” the Constitution “against all enemies, foreign and domestic.” The oath usually includes a statement that you swear that this oath is taken freely, honestly, and without “any purpose of evasion.” This loyalty oath may be related to a time—fifty years ago—when “un-American” activities were under investigation in Congress and the press; the fear of communism (as antithetical to American values and principles) was paramount. As you look at the Constitution and how it affects the legal environment of business, please consider what basic values it may impart to us and what makes it uniquely American and worth defending “against all enemies, foreign and domestic.”
In Article I, the Constitution places the legislature first and prescribes the ways in which representatives are elected to public office. Article I balances influence in the federal legislature between large states and small states by creating a Senate in which the smaller states (by population) as well as the larger states have two votes. In Article II, the Constitution sets forth the powers and responsibilities of the executive branch—the presidency—and makes it clear that the president should be the commander in chief of the armed forces. Article II also gives states rather than individuals (through the Electoral College) a clear role in the election process. Article III creates the federal judiciary, and the Bill of Rights, adopted in 1791, makes clear that individual rights must be preserved against activities of the federal government. In general, the idea of rights is particularly strong.
The Constitution itself speaks of rights in fairly general terms, and the judicial interpretation of various rights has been in flux. The “right” of a person to own another person was notably affirmed by the Supreme Court in the Dred Scott decision in 1857. In Scott v. Sanford (the Dred Scott decision), the court states that Scott should remain a slave, that as a slave he is not a citizen of the United States and thus not eligible to bring suit in a federal court, and that as a slave he is personal property and thus has never been free. The “right” of a child to freely contract for long, tedious hours of work was upheld by the court in Hammer v. Dagenhart in 1918. Both decisions were later repudiated, just as the decision that a woman has a “right” to an abortion in the first trimester of pregnancy could later be repudiated if Roe v. Wade is overturned by the Supreme Court. Roe v. Wade, 410 US 113 (1973).
General Structure of the Constitution
Look at the Constitution. Notice that there are seven articles, starting with Article I (legislative powers), Article II (executive branch), and Article III (judiciary). Notice that there is no separate article for administrative agencies. The Constitution also declares that it is “the supreme Law of the Land” (Article VI). Following Article VII are the ten amendments adopted in 1791 that are referred to as the Bill of Rights. Notice also that in 1868, a new amendment, the Fourteenth, was adopted, requiring states to provide “due process” and “equal protection of the laws” to citizens of the United States.
The partnership created in the Constitution between the states and the federal government is called federalism. The Constitution is a document created by the states in which certain powers are delegated to the national government, and other powers are reserved to the states. This is made explicit in the Tenth Amendment.
Separation of Powers and Judicial Review
Because the Founding Fathers wanted to ensure that no single branch of the government, especially the executive branch, would be ascendant over the others, they created various checks and balances to ensure that each of the three principal branches had ways to limit or modify the power of the others. This is known as the separation of powers. Thus the president retains veto power, but the House of Representatives is entrusted with the power to initiate spending bills.
Power sharing was evident in the basic design of Congress, the federal legislative branch. The basic power imbalance was between the large states (with greater population) and the smaller ones (such as Delaware). The smaller ones feared a loss of sovereignty if they could be outvoted by the larger ones, so the federal legislature was constructed to guarantee two Senate seats for every state, no matter how small. The Senate was also given great responsibility in ratifying treaties and judicial nominations. The net effect of this today is that senators from a very small number of states can block treaties and other important legislation. The power of small states is also magnified by the Senate’s cloture rule, which currently requires sixty out of one hundred senators to vote to bring a bill to the floor for an up-or-down vote.
Because the Constitution often speaks in general terms (with broad phrases such as “due process” and “equal protection”), reasonable people have disagreed as to how those terms apply in specific cases. The United States is unique among industrialized democracies in having a Supreme Court that reserves for itself that exclusive power to interpret what the Constitution means. The famous case of Marbury v. Madison began that tradition in 1803, when the Supreme Court had marginal importance in the new republic. The decision in Bush v. Gore, decided in December of 2000, illustrates the power of the court to shape our destiny as a nation. In that case, the court overturned a ruling by the Florida Supreme Court regarding the way to proceed on a recount of the Florida vote for the presidency. The court’s ruling was purportedly based on the “equal protection of the laws” provision in the Fourteenth Amendment.
From Marbury to the present day, the Supreme Court has articulated the view that the US Constitution sets the framework for all other US laws, whether statutory or judicially created. Thus any statute (or portion thereof) or legal ruling (judicial or administrative) in conflict with the Constitution is not enforceable. And as the Bush v. Gore decision indicates, the states are not entirely free to do what they might choose; their own sovereignty is limited by their union with the other states in a federal sovereign.
If the Supreme Court makes a “bad decision” as to what the Constitution means, it is not easily overturned. Either the court must change its mind (which it seldom does) or two-thirds of Congress and three-fourths of the states must make an amendment (Article V).
Because the Supreme Court has this power of judicial review, there have been many arguments about how it should be exercised and what kind of “philosophy” a Supreme Court justice should have. President Richard Nixon often said that a Supreme Court justice should “strictly construe” the Constitution and not add to its language. Finding law in the Constitution was “judicial activism” rather than “judicial restraint.” The general philosophy behind the call for “strict constructionist” justices is that legislatures make laws in accord with the wishes of the majority, and so unelected judges should not make law according to their own views and values. Nixon had in mind the 1960s Warren court, which “found” rights in the Constitution that were not specifically mentioned—the right of privacy, for example. In later years, critics of the Rehnquist court would charge that it “found” rights that were not specifically mentioned, such as the right of states to be free from federal antidiscrimination laws. See, for example, Kimel v. Florida Board of Regents, or the Citizens United v. Federal Election Commission case (Section 4.6.5), which held that corporations are “persons” with “free speech rights” that include spending unlimited amounts of money in campaign donations and political advocacy. Kimel v. Florida Board of Regents, 528 US 62 (2000).
Because Roe v. Wade has been so controversial, this chapter includes a seminal case on “the right of privacy,” Griswold v. Connecticut, Section 4.6.1. Was the court correct in recognizing a “right of privacy” in Griswold? This may not seem like a “business case,” but consider: the manufacture and distribution of birth control devices is a highly profitable (and legal) business in every US state. Moreover, Griswold illustrates another important and much-debated concept in US constitutional law: substantive due process (see Section 4.5.3 "Fifth Amendment"). The problem of judicial review and its proper scope is brought into sharp focus in the abortion controversy. Abortion became a lucrative service business after Roe v. Wade was decided in 1973. That has gradually changed, with state laws that have limited rather than overruled Roe v. Wade and with persistent antiabortion protests, killings of abortion doctors, and efforts to publicize the human nature of the fetuses being aborted. The key here is to understand that there is no explicit mention in the Constitution of any right of privacy. As Justice Harry Blackmun argued in his majority opinion in Roe v. Wade,
The Constitution does not explicitly mention any right of privacy. In a line of decisions, however, the Court has recognized that a right of personal privacy or a guarantee of certain areas or zones of privacy, does exist under the Constitution.…[T]hey also make it clear that the right has some extension to activities relating to marriage…procreation…contraception…family relationships…and child rearing and education.…The right of privacy…is broad enough to encompass a woman’s decision whether or not to terminate her pregnancy.
In short, justices interpreting the Constitution wield quiet yet enormous power through judicial review. In deciding that the right of privacy applied to a woman’s decision to abort in the first trimester, the Supreme Court did not act on the basis of a popular mandate or clear and unequivocal language in the Constitution, and it made illegal any state or federal legislative or executive action contrary to its interpretation. Only a constitutional amendment or the court’s repudiation of Roe v. Wade as a precedent could change that interpretation.
The Constitution gives voice to the idea that people have basic rights and that a civilian president is also the commander in chief of the armed forces. It gives instructions as to how the various branches of government must share power and also tries to balance power between the states and the federal government. It does not expressly allow for judicial review, but the Supreme Court’s ability to declare what laws are (or are not) constitutional has given the judicial branch a kind of power not seen in other industrialized democracies.
- Suppose the Supreme Court declares that Congress and the president cannot authorize the indefinite detention of terrorist suspects without a trial of some sort, whether military or civilian. Suppose also that the people of the United States favor such indefinite detention and that Congress wants to pass a law rebuking the court’s decision. What kind of law would have to be passed, by what institutions, and by what voting percentages?
- When does a prior decision of the Supreme Court deserve overturning? Name one decision of the Supreme Court that you think is no longer “good law.” Does the court have to wait one hundred years to overturn its prior case precedents?
4.2 The Commerce Clause
- Name the specific clause through which Congress has the power to regulate commerce. What, specifically, does this clause say?
- Explain how early decisions of the Supreme Court interpreted the scope of the commerce clause and how that impacted the legislative proposals and programs of Franklin Delano Roosevelt during the Great Depression.
- Describe both the wider use of the commerce clause from World War II through the 1990s and the limitations the Supreme Court imposed in Lopez and other cases.
First, turn to Article I, Section 8. The commerce clause gives Congress the exclusive power to make laws relating to foreign trade and commerce and to commerce among the various states. Most of the federally created legal environment springs from this one clause: if Congress is not authorized in the Constitution to make certain laws, then it acts unconstitutionally and its actions may be ruled unconstitutional by the Supreme Court. Lately, the Supreme Court has not been shy about ruling acts of Congress unconstitutional.
Here are the first five parts of Article I, Section 8, which sets forth the powers of the federal legislature. The commerce clause is Clause 3. It is short, but most federal legislation affecting business depends on this very clause:
[Clause 1] The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;
[Clause 2] To borrow Money on the credit of the United States;
[Clause 3] To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes;
[Clause 4] To establish a uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;
[Clause 5] To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures;
Early Commerce Clause Cases
For many years, the Supreme Court was very strict in applying the commerce clause: Congress could only use it to legislate aspects of the movement of goods from one state to another. Anything else was deemed local rather than national. For example, in Hammer v. Dagenhart, decided in 1918, a 1916 federal statute had barred transportation in interstate commerce of goods produced in mines or factories employing children under fourteen or employing children fourteen and above for more than eight hours a day. A complaint was filed in the US District Court for the Western District of North Carolina by a father in his own behalf and on behalf of his two minor sons, one under the age of fourteen years and the other between fourteen and sixteen years, who were employees in a cotton mill in Charlotte, North Carolina. The father’s lawsuit asked the court to enjoin (block) the enforcement of the act of Congress intended to prevent interstate commerce in the products of child labor.
The Supreme Court saw the issue as whether Congress had the power under the commerce clause to control interstate shipment of goods made by children under the age of fourteen. The court found that Congress did not. The court cited several cases that had considered what interstate commerce could be constitutionally regulated by Congress. In Hipolite Egg Co. v. United States, the Supreme Court had sustained the power of Congress to pass the Pure Food and Drug Act, which prohibited the introduction into the states, by means of interstate commerce, of impure foods and drugs. Hipolite Egg Co. v. United States, 220 US 45 (1911). In Hoke v. United States, the Supreme Court had sustained the constitutionality of the so-called White Slave Traffic Act of 1910, whereby the transportation of a woman in interstate commerce for the purpose of prostitution was forbidden. In that case, the court said that Congress had the power to protect the channels of interstate commerce: “If the facility of interstate transportation can be taken away from the demoralization of lotteries, the debasement of obscene literature, the contagion of diseased cattle or persons, the impurity of food and drugs, the like facility can be taken away from the systematic enticement to, and the enslavement in prostitution and debauchery of women, and, more insistently, of girls.” Hoke v. United States, 227 US 308 (1913).
In each of those instances, the Supreme Court said, “[T]he use of interstate transportation was necessary to the accomplishment of harmful results.” In other words, although the power over interstate transportation was to regulate, that could only be accomplished by prohibiting the use of the facilities of interstate commerce to effect the evil intended. But in Hammer v. Dagenhart, that essential element was lacking. The law passed by Congress aimed to standardize among all the states the ages at which children could be employed in mining and manufacturing, even though the goods themselves were harmless. Once the labor is done and the articles have left the factory, the “labor of their production is over, and the mere fact that they were intended for interstate commerce transportation does not make their production subject to federal control under the commerce power.”
In short, the early use of the commerce clause was limited to the movement of physical goods between states. Just because something might enter the channels of interstate commerce later on does not make it a fit subject for national regulation. The production of articles intended for interstate commerce is a matter of local regulation. The court therefore upheld the result from the district and circuit court of appeals; the application of the federal law was enjoined. Goods produced by children under the age of fourteen could be shipped anywhere in the United States without violating the federal law.
From the New Deal to the New Frontier and the Great Society: 1930s–1970
During the global depression of the 1930s, the US economy saw jobless rates of a third of all workers, and President Roosevelt’s New Deal program required more active federal legislation. Included in the New Deal program was the recognition of a “right” to form labor unions without undue interference from employers. Congress created the National Labor Relations Board (NLRB) in 1935 to investigate and to enjoin employer practices that violated this right.
In NLRB v. Jones & Laughlin Steel Corporation, a union dispute with management at a large steel-producing facility near Pittsburgh, Pennsylvania, became a court case. In this case, the NLRB had charged the Jones & Laughlin Steel Corporation with discriminating against employees who were union members. The company’s position was that the law authorizing the NLRB was unconstitutional, exceeding Congress’s powers. The court held that the act was narrowly construed so as to regulate industrial activities that had the potential to restrict interstate commerce. The earlier decisions under the commerce clause to the effect that labor relations had only an indirect effect on commerce were effectively reversed. Since the ability of employees to engage in collective bargaining (one activity protected by the act) is “an essential condition of industrial peace,” the national government was justified in penalizing corporations engaging in interstate commerce that “refuse to confer and negotiate” with their workers. This was, however, a close decision, and the switch of one justice made this ruling possible. Without this switch, the New Deal agenda would have been effectively derailed.
The Substantial Effects Doctrine: World War II to the 1990s
Subsequent to NLRB v. Jones & Laughlin Steel Corporation, Congress and the courts generally accepted that even modest impacts on interstate commerce were “reachable” by federal legislation. For example, the case of Wickard v. Filburn, from 1942, represents a fairly long reach for Congress in regulating what appear to be very local economic decisions (Section 4.6.2).
Wickard established that “substantial effects” in interstate commerce could be very local indeed! But commerce clause challenges to federal legislation continued. In the 1960s, the Civil Rights Act of 1964 was challenged on the ground that Congress lacked the power under the commerce clause to regulate what was otherwise fairly local conduct. For example, Title II of the act prohibited racial discrimination in public accommodations (such as hotels, motels, and restaurants), leading to the famous case of Katzenbach v. McClung (1964).
Ollie McClung’s barbeque place in Birmingham, Alabama, allowed “colored” people to buy takeout at the back of the restaurant but not to sit down with “white” folks inside. The US attorney sought a court order to require Ollie to serve all races and colors, but Ollie resisted on commerce clause grounds: the federal government had no business regulating a purely local establishment. Indeed, Ollie did not advertise nationally, or even regionally, and had customers only from the local area. But the court found that some 42 percent of the supplies for Ollie’s restaurant had moved in the channels of interstate commerce. This was enough to sustain federal regulation based on the commerce clause. Katzenbach v. McClung, 379 US 294 (1964).
For nearly thirty years following, it was widely assumed that Congress could almost always find some interstate commerce connection for any law it might pass. It thus came as something of a shock in 1995 when the Rehnquist court decided U.S. v. Lopez. Lopez had been convicted under a federal law that prohibited possession of firearms within 1,000 feet of a school. The law was part of a twenty-year trend (roughly 1970 to 1990) for senators and congressmen to pass laws that were tough on crime. Lopez’s lawyer admitted that Lopez had had a gun within 1,000 feet of a San Antonio school yard but challenged the law itself, arguing that Congress exceeded its authority under the commerce clause in passing this legislation. The US government’s Solicitor General argued on behalf of the Department of Justice to the Supreme Court that Congress was within its constitutional rights under the commerce clause because education of the future workforce was the foundation for a sound economy and because guns at or near school yards detracted from students’ education. The court rejected this analysis, noting that with the government’s analysis, an interstate commerce connection could be conjured from almost anything. Lopez went free because the law itself was unconstitutional, according to the court.
Congress made no attempt to pass similar legislation after the case was decided. But in passing subsequent legislation, Congress was often careful to make a record as to why it believed it was addressing a problem that related to interstate commerce. In 1994, Congress passed the Violence Against Women Act (VAWA), having held hearings to establish why violence against women on a local level would impair interstate commerce. In 1994, while enrolled at Virginia Polytechnic Institute (Virginia Tech), Christy Brzonkala alleged that Antonio Morrison and James Crawford, both students and varsity football players at Virginia Tech, had raped her. In 1995, Brzonkala filed a complaint against Morrison and Crawford under Virginia Tech’s sexual assault policy. After a hearing, Morrison was found guilty of sexual assault and sentenced to immediate suspension for two semesters. Crawford was not punished. A second hearing again found Morrison guilty. After an appeal through the university’s administrative system, Morrison’s punishment was set aside, as it was found to be “excessive.” Ultimately, Brzonkala dropped out of the university. Brzonkala then sued Morrison, Crawford, and Virginia Tech in federal district court, alleging that Morrison’s and Crawford’s attack violated 42 USC Section 13981 (part of the VAWA), which provides a federal civil remedy for the victims of gender-motivated violence. Morrison and Crawford moved to dismiss Brzonkala’s suit on the ground that Section 13981’s civil remedy was unconstitutional. In dismissing the complaint, the district court found that Congress lacked authority to enact Section 13981 under either the commerce clause or the Fourteenth Amendment, which Congress had explicitly identified as the sources of federal authority for the VAWA. Ultimately, the court of appeals affirmed, as did the Supreme Court.
The Supreme Court held that Congress lacked the authority to enact a statute under the commerce clause or the Fourteenth Amendment because the statute did not regulate an activity that substantially affected interstate commerce nor did it redress harm caused by the state. Chief Justice William H. Rehnquist wrote for the court that “under our federal system that remedy must be provided by the Commonwealth of Virginia, and not by the United States.” Dissenting, Justice Stephen G. Breyer argued that the majority opinion “illustrates the difficulty of finding a workable judicial Commerce Clause touchstone.” Justice David H. Souter, dissenting, noted that VAWA contained a “mountain of data assembled by Congress…showing the effects of violence against women on interstate commerce.”
The absence of a workable judicial commerce clause touchstone remains. In 1996, California voters passed the Compassionate Use Act, legalizing marijuana for medical use. California’s law conflicted with the federal Controlled Substances Act (CSA), which banned possession of marijuana. After the Drug Enforcement Administration (DEA) seized doctor-prescribed marijuana from a patient’s home, a group of medical marijuana users sued the DEA and US Attorney General John Ashcroft in federal district court.
The medical marijuana users argued that the CSA—which Congress passed using its constitutional power to regulate interstate commerce—exceeded Congress’s commerce clause power. The district court ruled against the group, but the Ninth Circuit Court of Appeals reversed and ruled the CSA unconstitutional because it applied to medical marijuana use solely within one state. In doing so, the Ninth Circuit relied on U.S. v. Lopez (1995) and U.S. v. Morrison (2000) to say that using medical marijuana did not “substantially affect” interstate commerce and therefore could not be regulated by Congress.
But by a 6–3 majority, the Supreme Court held that the commerce clause gave Congress authority to prohibit the local cultivation and use of marijuana, despite state law to the contrary. Justice John Paul Stevens argued that the court’s precedents established Congress’s commerce clause power to regulate purely local activities that are part of a “class of activities” with a substantial effect on interstate commerce. The majority argued that Congress could ban local marijuana use because it was part of such a class of activities: the national marijuana market. Local use affected supply and demand in the national marijuana market, making the regulation of intrastate use “essential” to regulating the drug’s national market.
Notice how similar this reasoning is to the court’s earlier reasoning in Wickard v. Filburn (Section 4.6.2). In contrast, the court’s conservative wing was adamant that federal power had been exceeded. Justice Clarence Thomas’s dissent in Gonzales v. Raich stated that Raich’s local cultivation and consumption of marijuana was not “Commerce…among the several States.” Representing the “originalist” view that the Constitution should mostly mean what the Founders meant it to mean, he also said that in the early days of the republic, it would have been unthinkable that Congress could prohibit the local cultivation, possession, and consumption of marijuana.
The commerce clause is the basis on which the federal government regulates interstate economic activity. The phrase “interstate commerce” has been subject to differing interpretations by the Supreme Court over the past one hundred years. There are certain matters that are essentially local or intrastate, but the range of federal involvement in local matters is still considerable.
- Why would Congress have power under the Civil Rights Act of 1964 to require restaurants and hotels to not discriminate against interstate travelers on the basis of race, color, sex, religion, or national origin? Suppose the Holiday Restaurant near I-80 in Des Moines, Iowa, has a sign that says, “We reserve the right to refuse service to any Muslim or person of Middle Eastern descent.” Suppose also that the restaurant is very popular locally and that only 40 percent of its patrons are travelers on I-80. Are the owners of the Holiday Restaurant in violation of the Civil Rights Act of 1964? What would happen if the owners resisted enforcement by claiming that Title II of the act (relating to “public accommodations” such as hotels, motels, and restaurants) was unconstitutional?
- If the Supreme Court were to go back to the days of Hammer v. Dagenhart and rule that only goods and services involving interstate movement could be subject to federal law, what kinds of federal programs might be lacking a sound basis in the commerce clause? “Obamacare”? Medicare? Homeland security? Social Security? What other powers are granted to Congress under the Constitution to legislate for the general good of society?
4.3 Dormant Commerce Clause
- Understand that when Congress does not exercise its powers under the commerce clause, the Supreme Court may still limit state legislation that discriminates against interstate commerce or places an undue burden on interstate commerce.
- Distinguish between “discrimination” dormant-commerce-clause cases and “undue burden” dormant-commerce-clause cases.
Congress has the power to legislate under the commerce clause and often does legislate. For example, Congress might say that trucks moving on interstate highways must not be more than seventy feet in length. But if Congress does not exercise its powers and regulate in certain areas (such as the size and length of trucks on interstate highways), states may make their own rules. States may do so under the so-called historic police powers of states that were never yielded up to the federal government.
These police powers can be broadly exercised by states for purposes of health, education, welfare, safety, morals, and the environment. But the Supreme Court has reserved for itself the power to determine when state action is excessive, even when Congress has not used the commerce clause to regulate. This power is claimed to exist in the dormant commerce clause.
There are two ways that a state may violate the dormant commerce clause. If a state passes a law that is an “undue burden” on interstate commerce or that “discriminates” against interstate commerce, it will be struck down. Kassel v. Consolidated Freightways, in Section 4.7 "Summary and Exercises", is an example of a case where Iowa imposed an undue burden on interstate commerce by prohibiting double trailers on its highways. Kassel v. Consolidated Freightways, 450 US 662 (1981). Iowa’s prohibition was declared void when the Supreme Court found it to be an undue burden.
Discrimination cases such as Hunt v. Washington Apple Advertising Commission (Section 4.6 "Cases") are governed by a different standard. The court has been fairly inflexible here: if one state discriminates in its treatment of any article of commerce based on its state of origin, the court will strike down the law. For example, in Oregon Waste Systems v. Department of Environmental Quality, the state wanted to place a slightly higher charge on waste coming from out of state. Oregon Waste Systems v. Department of Environmental Quality, 511 US 93 (1994). The state’s reasoning was that in-state residents had already contributed to roads and other infrastructure and that tipping fees at waste facilities should reflect the prior contributions of in-state companies and residents. Out-of-state waste handlers who wanted to use Oregon landfills objected and won their dormant commerce clause claim that Oregon’s law discriminated “on its face” against interstate commerce. Under the Supreme Court’s rulings, anything that moves in channels of interstate commerce is “commerce,” even if someone is paying to get rid of something instead of buying something.
Thus the states are bound by Supreme Court decisions under the dormant commerce clause to do nothing that differentiates between articles of commerce that originate within the state and those that originate elsewhere. If Michigan were to let counties decide for themselves whether to take garbage from outside of the county or not, this could also be a discrimination based on a place of origin outside the state. (Suppose, for instance, each county were to decide not to take waste from outside the county; then all Michigan counties would effectively be excluding waste from outside of Michigan, which is discriminatory.) Fort Gratiot Sanitary Landfill v. Michigan Dep’t of Natural Resources, 504 US 353 (1992).
The Supreme Court probably would uphold any solid waste requirements that did not differentiate on the basis of origin. If, for example, all waste had to be inspected for specific hazards, then the law would apply equally to in-state and out-of-state garbage. Because this is the dormant commerce clause, Congress could still act (i.e., it could use its broad commerce clause powers) to say that states are free to keep out-of-state waste from coming into their own borders. But Congress has declined to do so. What follows is a statement from one of the US senators from Michigan, Carl Levin, in 2003, regarding the significant amounts of waste that were coming into Michigan from Toronto, Canada.
Dealing with Unwelcome Waste
Senator Carl Levin, January 2003
Michigan is facing an intolerable situation with regard to the importation of waste from other states and Canada.
Canada is the largest source of waste imports to Michigan. Approximately 65 truckloads of waste come in to Michigan per day from Toronto alone, and an estimated 110–130 trucks come in from Canada each day.
This problem isn’t going to get any better. Ontario’s waste shipments are growing as the Toronto area signs new contracts for waste disposal here and closes its two remaining landfills. At the beginning of 1999, the Toronto area was generating about 2.8 million tons of waste annually, about 700,000 tons of which were shipped to Michigan. By early this year, barring unforeseen developments, the entire 2.8 million tons will be shipped to Michigan for disposal.
Why can’t Canada dispose of its trash in Canada? They say that after 20 years of searching they have not been able to find a suitable Ontario site for Toronto’s garbage. Ontario has about 345,000 square miles compared to Michigan’s 57,000 square miles. With six times the land mass, that argument is laughable.
The Michigan Department of Environmental Quality estimates that, for every five years of disposal of Canadian waste at the current usage volume, Michigan is losing a full year of landfill capacity. The environmental impacts on landfills, including groundwater contamination, noise pollution and foul odors, are exacerbated by the significant increase in the use of our landfills from sources outside of Michigan.
I have teamed up with Senator Stabenow and Congressman Dingell to introduce legislation that would strengthen our ability to stop shipments of waste from Canada.
We have protections contained in a 17 year-old international agreement between the U.S. and Canada called the Agreement Concerning the Transboundary Movement of Hazardous Waste. The U.S. and Canada entered into this agreement in 1986 to allow the shipment of hazardous waste across the U.S./Canadian border for treatment, storage or disposal. In 1992, the two countries decided to add municipal solid waste to the agreement. To protect both countries, the agreement requires notification of shipments to the importing country and it also provides that the importing country may withdraw consent for shipments. Both reasons are evidence that these shipments were intended to be limited. However, the agreement’s provisions have not been enforced by the United States.
Canada could not export waste to Michigan without the 1986 agreement, but the U.S. has not implemented the provisions that are designed to protect the people of Michigan. Although those of us that introduced this legislation believe that the Environmental Protection Agency has the authority to enforce this agreement, they have not done so. Our bill would require the EPA [Environmental Protection Agency] to enforce the agreement.
In order to protect the health and welfare of the citizens of Michigan and our environment, we must consider the impact of the importation of trash on state and local recycling efforts, landfill capacity, air emissions, road deterioration resulting from increased vehicular traffic and public health and the environment.
Our bill would require the EPA to consider these factors in determining whether to accept imports of trash from Canada. It is my strong view that such a review should lead the EPA to say “no” to the status quo of trash imports.
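The figures in Senator Levin’s statement lend themselves to a quick arithmetic cross-check. The sketch below only recombines numbers quoted above; pairing the 65-trucks-per-day figure with the 700,000-tons-per-year figure, assuming year-round shipments, and reading the five-years-to-one-year capacity claim as a one-fifth share are all inferences made for illustration, not statements from the senator.

```python
# Illustrative arithmetic from the figures quoted in Senator Levin's statement.
# Pairing the 65 trucks/day with the ~700,000 tons/year Toronto figure, and
# assuming shipments run every day of the year, are inferences for illustration only.

tons_per_year_from_toronto = 700_000      # tons/year shipped to Michigan (start of 1999)
trucks_per_day_from_toronto = 65          # truckloads/day from Toronto alone
days_per_year = 365

tons_per_truck = tons_per_year_from_toronto / (trucks_per_day_from_toronto * days_per_year)
print(f"Implied payload: about {tons_per_truck:.0f} tons per truck")        # ~30 tons

# If the entire 2.8 million tons/year were eventually shipped at that payload:
projected_trucks_per_day = 2_800_000 / (tons_per_truck * days_per_year)
print(f"Projected traffic: about {projected_trucks_per_day:.0f} trucks per day")  # ~260

# The claim that five years of Canadian waste consumes one year of Michigan
# landfill capacity implies Canadian waste is roughly one-fifth of total intake.
print(f"Implied Canadian share of landfill intake: {1 / 5:.0%}")
```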
Where Congress does not act pursuant to its commerce clause powers, the states are free to legislate on matters of commerce under their historic police powers. However, the Supreme Court has set limits on such powers. Specifically, states may not impose undue burdens on interstate commerce and may not discriminate against articles in interstate commerce.
- Suppose that the state of New Jersey wishes to limit the amount of hazardous waste that enters into its landfills. The general assembly in New Jersey passes a law that specifically forbids any hazardous waste from entering into the state. All landfills are subject to tight regulations that will allow certain kinds of hazardous wastes originating in New Jersey to be put in New Jersey landfills but that impose significant criminal fines on landfill operators that accept out-of-state hazardous waste. The Baldessari Brothers Landfill in Linden, New Jersey, is fined for taking hazardous waste from a New York State transporter and appeals that ruling on the basis that New Jersey’s law is unconstitutional. What is the result?
- The state of Arizona determines through its legislature that trains passing through the state cannot be longer than seventy cars. There is some evidence that in Eastern US states longer trains pose some safety hazards. There is less evidence that long trains are a problem in Western states. Several major railroads find the Arizona legislation costly and burdensome and challenge the legislation after applied-for permits for longer trains are denied. What kind of dormant commerce clause challenge is this, and what would it take for the challenge to be successful?
4.4 Preemption: The Supremacy Clause
- Understand the role of the supremacy clause in the balance between state and federal power.
- Give examples of cases where state legislation is preempted by federal law and cases where state legislation is not preempted by federal law.
When Congress does use its power under the commerce clause, it can expressly state that it wishes to have exclusive regulatory authority. For example, when Congress determined in the 1950s to promote nuclear power (“atoms for peace”), it set up the Nuclear Regulatory Commission and provided a limitation of liability for nuclear power plants in case of a nuclear accident. The states were expressly told to stay out of the business of regulating nuclear power or the movement of nuclear materials. Thus Rochester, Minnesota, or Berkeley, California, could declare itself a nuclear-free zone, but federal law would preempt any such declaration. If Michigan wished to set safety standards at Detroit Edison’s Fermi II nuclear reactor that were more stringent than the federal Nuclear Regulatory Commission’s standards, Michigan’s standards would be preempted and thus be void.
Even where Congress does not expressly preempt state action, such action may be impliedly preempted. States cannot constitutionally pass laws that interfere with the accomplishment of the purposes of the federal law. Suppose, for example, that Congress passes a comprehensive law that sets standards for foreign vessels to enter the navigable waters and ports of the United States. If a state creates a law that sets standards that conflict with the federal law or sets standards so burdensome that they interfere with federal law, the doctrine of preemption will (in accordance with the supremacy clause) void the state law or whatever parts of it are inconsistent with federal law.
But Congress can allow what might appear to be inconsistencies; the existence of federal statutory standards does not always mean that local and state standards cannot be more stringent. If California wants cleaner air or water than other states, it can set stricter standards—nothing in the Clean Water Act or Clean Air Act forbids the state from setting stricter pollution standards. As the auto industry well knows, California has set stricter standards for auto emissions. Since the 1980s, most automakers have made both a federal car and a California car, because federal Clean Air Act emissions restrictions do not preempt more rigorous state standards.
Large industries and companies actually prefer regulation at the national level. It is easier for a large company or industry association to lobby in Washington, DC, than to lobby in fifty different states. Accordingly, industry often asks Congress to put preemptive language into its statutes. The tobacco industry is a case in point.
The cigarette warning legislation of the 1960s (where the federal government required warning labels on cigarette packages) effectively preempted state negligence claims based on failure to warn. When the family of a lifetime smoker who had died sued in New Jersey court, one cause of action was the company’s failure to warn of the dangers of its product. The Supreme Court reversed the jury’s award based on the federal preemption of failure to warn claims under state law. Cipollone v. Liggett Group, 505 US 504 (1992).
The Supremacy Clause
This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.
The preemption doctrine derives from the supremacy clause of the Constitution, which states that the “Constitution and the Laws of the United States…shall be the supreme Law of the Land…any Thing in the Constitutions or Laws of any State to the Contrary notwithstanding.” This means, of course, that any federal law—even a regulation of a federal agency—would control over any conflicting state law.
Preemption can be either express or implied. When Congress chooses to expressly preempt state law, the only question for courts becomes determining whether the challenged state law is one that the federal law is intended to preempt. Implied preemption presents more difficult issues. The court has to look beyond the express language of federal statutes to determine whether Congress has “occupied the field” in which the state is attempting to regulate, or whether a state law directly conflicts with federal law, or whether enforcement of the state law might frustrate federal purposes.
Federal “occupation of the field” occurs, according to the court in Pennsylvania v. Nelson (1956), when there is “no room” left for state regulation. Courts are to look to the pervasiveness of the federal scheme of regulation, the federal interest at stake, and the danger of frustration of federal goals in making the determination as to whether a challenged state law can stand.
In Silkwood v. Kerr-McGee (1984), the court, voting 5–4, found that a $10 million punitive damages award (in a case litigated by famed attorney Gerry Spence) against a nuclear power plant was not impliedly preempted by federal law. Even though the court had recently held that state regulation of the safety aspects of a federally licensed nuclear power plant was preempted, the court drew a different conclusion with respect to Congress’s desire to displace state tort law—even though the tort actions might be premised on a violation of federal safety regulations.
Cipollone v. Liggett Group (1992) was a closely watched case concerning the extent of an express preemption provision in two cigarette labeling laws of the 1960s. The case was a wrongful death action brought against tobacco companies on behalf of Rose Cipollone, a lung cancer victim who had started smoking cigarettes in the 1940s. The court considered the preemptive effect on state law of a provision that stated, “No requirement or prohibition based on smoking and health shall be imposed under State law with respect to the advertising or promotion of cigarettes.” The court concluded that several types of state tort actions were preempted by the provision but allowed other types to go forward.
In cases of conflicts between state and federal law, federal law will preempt (or control) state law because of the supremacy clause. Preemption can be express or implied. In cases where preemption is implied, the court usually finds that compliance with both state and federal law is not possible or that a federal regulatory scheme is comprehensive (i.e., “occupies the field”) and should not be modified by state actions.
- For many years, the United States engaged in discussions with friendly nations as to the reciprocal use of ports and harbors. These discussions led to various multilateral agreements between the nations as to the configuration of oceangoing vessels and how they would be piloted. At the same time, concern over oil spills in Puget Sound led the state of Washington to impose fairly strict standards on oil tankers and requirements for the training of oil tanker pilots. In addition, Washington’s state law imposed many other requirements that went above and beyond agreed-upon requirements in the international agreements negotiated by the federal government. Are the Washington state requirements preempted by federal law?
- The Federal Arbitration Act of 1925 requires that all contracts for arbitration be treated as any other contract at common law. Suppose that the state of Alabama wishes to protect its citizens from a variety of arbitration provisions that they might enter into unknowingly. Thus the legislation provides that all predispute arbitration clauses be in bold print, that they be of twelve-point font or larger, that they be clearly placed within the first two pages of any contract, and that they have a separate signature line where the customer, client, or patient acknowledges having read, understood, and signed the arbitration clause in addition to any other signatures required on the contract. The legislation does preserve the right of consumers to litigate in the event of a dispute arising with the product or service provider; that is, with this legislation, consumers will not unknowingly waive their right to a trial at common law. Is the Alabama law preempted by the Federal Arbitration Act?
4.5 Business and the Bill of Rights
- Understand and describe which articles in the Bill of Rights apply to business activities and how they apply.
- Explain the application of the Fourteenth Amendment—including the due process clause and the equal protection clause—to various rights enumerated in the original Bill of Rights.
We have already seen the Fourteenth Amendment’s application in Burger King v. Rudzewicz (Section 3.9 "Cases"). In that case, the court considered whether it was constitutionally correct for a court to assert personal jurisdiction over a nonresident. The states cannot constitutionally award a judgment against a nonresident if doing so would offend traditional notions of fair play and substantial justice. Even if the state’s long-arm statute would seem to allow such a judgment, other states should not give it full faith and credit (see Article IV of the Constitution). In short, a state’s long-arm statute cannot confer personal jurisdiction that the state cannot constitutionally claim.
The Bill of Rights (the first ten amendments to the Constitution) was originally meant to apply to federal actions only. During the twentieth century, the court began to apply selected rights to state action as well. So, for example, federal agents were prohibited from using evidence seized in violation of the Fourth Amendment, but state agents were not, until Mapp v. Ohio (1961), when the court applied the guarantees (rights) of the Fourth Amendment to state action as well. In this and in similar cases, the Fourteenth Amendment’s due process clause was the basis for the court’s action. The due process clause commanded that states provide due process in cases affecting the life, liberty, or property of US citizens, and the court saw in this command certain “fundamental guarantees” that states would have to observe. Over the years, most of the important guarantees in the Bill of Rights came to apply to state as well as federal action. The court refers to this process as selective incorporation.
Here are some very basic principles to remember:
- The guarantees of the Bill of Rights apply only to state and federal government action. They do not limit what a company or person in the private sector may do. For example, states may not impose censorship on the media or limit free speech in a way that offends the First Amendment, but your boss (in the private sector) may order you not to talk to the media.
- In some cases, a private company may be regarded as participating in “state action.” For example, a private defense contractor that gets 90 percent of its business from the federal government has been held to be public for purposes of enforcing the constitutional right to free speech (the company had a rule barring its employees from speaking out in public against its corporate position). It has even been argued that public regulation of private activity is sufficient to convert the private into public activity, thus subjecting it to the requirements of due process. But the Supreme Court rejected this extreme view in 1974 when it refused to require private power companies, regulated by the state, to give customers a hearing before cutting off electricity for failure to pay the bill. Jackson v. Metropolitan Edison Co., 419 US 345 (1974).
- States have rights, too. While “states’ rights” was a battle cry of Southern states before the Civil War, the question of what balance to strike between state sovereignty and federal union has never been simple. In Kimel v. Florida, for example, the Supreme Court found in the words of the Eleventh Amendment a basis for declaring that states may not have to obey certain federal statutes.
In part, the First Amendment states that “Congress shall make no law…abridging the freedom of speech, or of the press.” The Founding Fathers believed that democracy would work best if people (and the press) could talk or write freely, without governmental interference. But the First Amendment was also not intended to be as absolute as it sounded. Oliver Wendell Holmes’s famous dictum that the law does not permit you to shout “Fire!” in a crowded theater has seldom been answered, “But why not?” And no one in 1789 thought that defamation laws (torts for slander and libel) had been made unconstitutional. Moreover, because the apparent purpose of the First Amendment was to make sure that the nation had a continuing, vigorous debate over matters political, political speech has been given the highest level of protection over such other forms of speech as (1) “commercial speech,” (2) speech that can and should be limited by reasonable “time, place, and manner” restrictions, or (3) obscene speech.
Because of its higher level of protection, political speech can be false, malicious, mean-spirited, or even a pack of lies. A public official in the United States must be prepared to withstand all kinds of false accusations and cannot succeed in an action for defamation unless the defendant has acted with “malice” and “reckless disregard” of the truth. Public figures, such as CEOs of the largest US banks, must also be prepared to withstand accusations that are false. In any defamation action, truth is a defense, but a public figure or public official bringing a defamation action must prove not only that the defendant had the facts wrong but also that the defendant acted maliciously and with reckless disregard of the truth. Celebrities such as Lindsay Lohan and Jon Stewart have the same burden to go forward with a defamation action. It is for this reason that the National Enquirer writes exclusively about public figures, public officials, and celebrities; it is possible to say many things that aren’t completely true and still have the protection of the First Amendment.
Political speech is so highly protected that the court has recognized the right of people to support political candidates through campaign contributions and thus promote the particular viewpoints and speech of those candidates. Fearing the influence of money on politics, Congress has from time to time placed limitations on corporate contributions to political campaigns. But the Supreme Court has had mixed reactions over time. Initially, the court recognized the First Amendment right of a corporation to donate money, subject to certain limits. Buckley v. Valeo, 424 US 1 (1976). In another case, Austin v. Michigan Chamber of Commerce (1990), the Michigan Campaign Finance Act prohibited corporations from using treasury money for independent expenditures to support or oppose candidates in elections for state offices. But a corporation could make such expenditures if it set up an independent fund designated solely for political purposes. The law was passed on the assumption that “the unique legal and economic characteristics of corporations necessitate some regulation of their political expenditures to avoid corruption or the appearance of corruption.”
The Michigan Chamber of Commerce wanted to support a candidate for Michigan’s House of Representatives by using general funds to sponsor a newspaper advertisement and argued that as a nonprofit organization, it was not really like a business firm. The court disagreed and upheld the Michigan law. Justice Marshall found that the chamber was akin to a business group, given its activities, linkages with community business leaders, and high percentage of members (over 75 percent) that were business corporations. Furthermore, Justice Marshall found that the statute was narrowly crafted and implemented to achieve the important goal of maintaining integrity in the political process. But as you will see in Citizens United v. Federal Election Commission (Section 4.6 "Cases"), Austin was overruled; corporations are now recognized as “persons” with First Amendment political speech rights that cannot be impaired by Congress or the states unless there is a compelling governmental interest and the restrictions on those rights are “narrowly tailored.”
The Fourth Amendment provides, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
The court has read the Fourth Amendment to prohibit only those government searches or seizures that are “unreasonable.” Because of this, businesses that are in an industry that is “closely regulated” can be searched more frequently and can be searched without a warrant. In one case, an auto parts dealer at a junkyard was charged with receiving stolen auto parts. Part of his defense was to claim that the search that found incriminating evidence was unconstitutional. But the court found the search reasonable, because the dealer was in a “closely regulated industry.”
In the 1980s, Dow Chemical objected to an overflight by the US Environmental Protection Agency (EPA). The EPA had rented an airplane to fly over the Midland, Michigan, Dow plant, using an aerial mapping camera to photograph various pipes, ponds, and machinery that were not covered by a roof. Because the court’s precedents allowed governmental intrusions into “open fields,” the EPA search was ruled constitutional. Because the literal language of the Fourth Amendment protected “persons, houses, papers, and effects,” anything searched by the government in “open fields” was reasonable. (The court’s opinion suggested that if Dow had really wanted privacy from governmental intrusion, it could have covered the pipes and machinery that were otherwise outside and in open fields.)
Note again that constitutional guarantees like the Fourth Amendment apply to governmental action. Your employer or any private enterprise is not bound by constitutional limits. For example, if drug testing of all employees every week is done by a government agency, the employees may have a cause of action to object based on the Fourth Amendment. However, if a private employer begins the same kind of routine drug testing, employees have no constitutional arguments to make; they can simply leave that employer, or they may pursue whatever statutory or common-law remedies are available.
The Fifth Amendment states, “No person shall be…deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.”
The Fifth Amendment has three principal aspects: procedural due process, the takings clause, and substantive due process. In terms of procedural due process, the amendment prevents government from arbitrarily taking the life of a criminal defendant. In civil lawsuits, it is also constitutionally essential that the proceedings be fair. This is why, for example, the defendant in Burger King v. Rudzewicz had a serious constitutional argument, even though he lost.
The takings clause of the Fifth Amendment ensures that the government does not take private property without just compensation. In the international setting, governments that take private property engage in what is called expropriation. The standard under customary international law is that when governments do that, they must provide prompt, adequate, and effective compensation. This does not always happen, especially where foreign owners’ property is being expropriated. The guarantees of the Fifth Amendment (incorporated against state action by the Fourteenth Amendment) are available to property owners where state, county, or municipal government uses the power of eminent domain to take private property for public purposes. Just what is a public purpose is a matter of some debate. For example, if a city were to condemn economically viable businesses or neighborhoods to construct a baseball stadium with public money to entice a private enterprise (the baseball team) to stay, is a public purpose being served?
In Kelo v. City of New London, Mrs. Kelo and other residents fought the city of New London, in its attempt to use powers of eminent domain to create an industrial park and recreation area that would have Pfizer & Co. as a principal tenant. Kelo v. City of New London, 545 US 469 (2005). The city argued that increasing its tax base was a sufficient public purpose. In a very close decision, the Supreme Court determined that New London’s actions did not violate the takings clause. However, political reactions in various states resulted in a great deal of new state legislation that would limit the scope of public purpose in eminent domain takings and provide additional compensation to property owners in many cases.
In addition to the takings clause and aspects of procedural due process, the Fifth Amendment is also the source of what is called substantive due process. During the first third of the twentieth century, the Supreme Court often nullified state and federal laws using substantive due process. In 1905, for example, in Lochner v. New York, the Supreme Court voided a New York statute that limited the number of hours that bakers could work in a single week. New York had passed the law to protect the health of employees, but the court found that this law interfered with the basic constitutional right of private parties to freely contract with one another. Over the next thirty years, dozens of state and federal laws were struck down that aimed to improve working conditions, secure social welfare, or establish the rights of unions. However, in 1934, during the Great Depression, the court reversed itself and began upholding the kinds of laws it had struck down earlier.
Since then, the court has employed a two-tiered analysis of substantive due process claims. Under the first tier, legislation on economic matters, employment relations, and other business affairs is subject to minimal judicial scrutiny. This means that a law will be overturned only if it serves no rational government purpose. Under the second tier, legislation concerning fundamental liberties is subject to “heightened judicial scrutiny,” meaning that a law will be invalidated unless it is “narrowly tailored to serve a significant government purpose.”
The Supreme Court has identified two distinct categories of fundamental liberties. The first category includes most of the liberties expressly enumerated in the Bill of Rights. Through a process known as selective incorporation, the court has interpreted the due process clause of the Fourteenth Amendment to bar states from denying their residents the most important freedoms guaranteed in the first ten amendments to the federal Constitution. Only the Third Amendment right (against involuntary quartering of soldiers) and the Fifth Amendment right to be indicted by a grand jury have not been made applicable to the states. Because these rights are still not applicable to state governments, the Supreme Court is often said to have “selectively incorporated” the Bill of Rights into the due process clause of the Fourteenth Amendment.
The second category of fundamental liberties includes those liberties that are not expressly stated in the Bill of Rights but that can be seen as essential to the concepts of freedom and equality in a democratic society. These unstated liberties come from Supreme Court precedents, common law, moral philosophy, and deeply rooted traditions of US legal history. The Supreme Court has stressed that the word liberty cannot be defined by a definitive list of rights; rather, it must be viewed as a rational continuum of freedom through which every aspect of human behavior is protected from arbitrary impositions and random restraints. In this regard, as the Supreme Court has observed, the due process clause protects abstract liberty interests, including the right to personal autonomy, bodily integrity, self-dignity, and self-determination.
These liberty interests often are grouped to form a general right to privacy, which was first recognized in Griswold v. Connecticut (Section 4.6.1), where the Supreme Court struck down a state statute forbidding married adults from using, possessing, or distributing contraceptives on the ground that the law violated the sanctity of the marital relationship. According to Justice Douglas’s plurality opinion, this penumbra of privacy, though not expressly mentioned in the Bill of Rights, must be protected to establish a buffer zone or breathing space for those freedoms that are constitutionally enumerated.
But substantive due process has seen fairly limited use since the 1930s. During the 1990s, the Supreme Court was asked to recognize a general right to die under the doctrine of substantive due process. Although the court stopped short of establishing such a far-reaching right, certain patients may exercise a constitutional liberty to hasten their deaths under a narrow set of circumstances. In Cruzan v. Missouri Department of Health, the Supreme Court ruled that the due process clause guarantees the right of competent adults to make advance directives for the withdrawal of life-sustaining measures should they become incapacitated by a disability that leaves them in a persistent vegetative state. Cruzan v. Missouri Department of Health, 497 US 261 (1990). Once it has been established by clear and convincing evidence that a mentally incompetent and persistently vegetative patient made such a prior directive, a spouse, parent, or other appropriate guardian may seek to terminate any form of artificial hydration or nutrition.
Fourteenth Amendment: Due Process and Equal Protection Guarantees
The Fourteenth Amendment (1868) requires that states afford due process of law to all persons within their jurisdiction, including citizens of other states. This can be either an issue of procedural due process (as in Section 3.9 "Cases", Burger King v. Rudzewicz) or an issue of substantive due process. For substantive due process, consider what happened in an Alabama court not too long ago. BMW of North America, Inc. v. Gore, 517 U.S. 559 (1996).
The plaintiff, Dr. Ira Gore, bought a new BMW for $40,000 from a dealer in Alabama. He later discovered that the vehicle’s exterior had been slightly damaged in transit from Europe and had therefore been repainted by the North American distributor prior to his purchase. The vehicle was, by best estimates, worth about 10 percent less than he paid for it. The distributor, BMW of North America, had routinely sold slightly damaged cars as brand new if the damage could be fixed for less than 3 percent of the cost of the car. In the trial, Dr. Gore sought $4,000 in compensatory damages and also punitive damages. The Alabama trial jury considered that BMW was engaging in a fraudulent practice and wanted to punish the defendant for a number of frauds it estimated at somewhere around a thousand nationwide. The jury awarded not only the $4,000 in compensatory damages but also $4 million in punitive damages, which was later reduced to $2 million by the Alabama Supreme Court. On appeal to the US Supreme Court, the court found that punitive damages may not be “grossly excessive.” If they are, then they violate substantive due process. Whatever damages a state awards must be limited to what is reasonably necessary to vindicate the state’s legitimate interest in punishment and deterrence.
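The arithmetic behind the award is worth tracing, because the ratio of punitive to compensatory damages is what made it look “grossly excessive.” The sketch below simply recomputes the figures given above; treating the jury’s $4 million as per-car harm multiplied by its estimate of roughly a thousand frauds is a plausible reconstruction offered for illustration, not language from the opinion.

```python
# Recomputing the BMW of North America v. Gore figures quoted above.
# Reading the jury's award as per-car harm times its estimate of ~1,000 frauds
# nationwide is a reconstruction offered for illustration, not a holding.

purchase_price = 40_000
diminished_value_fraction = 0.10                            # repainted car worth ~10% less
compensatory = purchase_price * diminished_value_fraction   # $4,000 sought and awarded

estimated_frauds_nationwide = 1_000                         # the jury's rough estimate
punitive_jury = compensatory * estimated_frauds_nationwide  # $4,000,000
punitive_reduced = 2_000_000                                # after the Alabama Supreme Court's reduction

print(f"Compensatory damages: ${compensatory:,.0f}")
print(f"Jury punitive award:  ${punitive_jury:,.0f} ({punitive_jury / compensatory:.0f}:1 ratio)")
print(f"Reduced award:        ${punitive_reduced:,.0f} ({punitive_reduced / compensatory:.0f}:1 ratio)")
```

Even the reduced award was five hundred times the compensatory damages, which frames the Supreme Court’s concern that the punishment had lost any reasonable relation to the harm actually suffered.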
“Equal protection of the laws” is a phrase that originates in the Fourteenth Amendment, adopted in 1868. The amendment provides that no state shall “deny to any person within its jurisdiction the equal protection of the laws.” This is the equal protection clause. It means that, generally speaking, governments must treat people equally. Unfair classifications among people or corporations will not be permitted. A well-known example of unfair classification would be race discrimination: requiring white children and black children to attend different public schools or requiring “separate but equal” public services, such as water fountains or restrooms. Yet despite the clear intent of the 1868 amendment, “separate but equal” was the law of the land from Plessy v. Ferguson, 163 US 537 (1896), until Brown v. Board of Education (1954).
Governments make classifications every day, so not all classifications can be illegal under the equal protection clause. People with more income generally pay a greater percentage of their income in taxes. People with proper medical training are licensed to become doctors; people without that training cannot be licensed and commit a criminal offense if they do practice medicine. To know what classifications are permissible under the Fourteenth Amendment, we need to know what is being classified. The court has created three classifications, and the outcome of any equal protection case can usually be predicted by knowing how the court is likely to classify the case (the sketch following this list restates the three tiers schematically):
- Minimal scrutiny: economic and social relations. Government actions are usually upheld if there is a rational basis for them.
- Intermediate scrutiny: gender. Government classifications are sometimes upheld.
- Strict scrutiny: race, ethnicity, and fundamental rights. Classifications based on any of these are almost never upheld.
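For readers who find the framework easier to see as a lookup table, here is a minimal sketch restating the three tiers. The tier labels and tests track the list above; the helper function and its names are purely illustrative scaffolding.

```python
# A schematic restatement of the three equal protection tiers listed above.
# The tiers and tests follow the text; the lookup helper itself is illustrative only.

SCRUTINY_TIERS = {
    "economic or social regulation": ("minimal scrutiny",
                                      "upheld if rationally related to a legitimate government goal"),
    "gender":                        ("intermediate scrutiny",
                                      "upheld only if substantially related to an important government objective"),
    "race or ethnicity":             ("strict scrutiny",
                                      "upheld only if necessary to promote a compelling state interest"),
    "fundamental right":             ("strict scrutiny",
                                      "upheld only if necessary to promote a compelling state interest"),
}

def likely_scrutiny(classification: str) -> str:
    """Return the tier and test the court would likely apply to a classification."""
    tier, test = SCRUTINY_TIERS.get(
        classification, ("unclassified", "not covered by this simplified table"))
    return f"{classification}: {tier} -- {test}"

if __name__ == "__main__":
    for c in ("economic or social regulation", "gender", "race or ethnicity"):
        print(likely_scrutiny(c))
```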
Under minimal scrutiny for economic and social regulation, laws that regulate economic or social issues are presumed valid and will be upheld if they are rationally related to legitimate goals of government. So, for example, if the city of New Orleans limits the number of street vendors to some rational number (more than one but fewer than the total number that could possibly fit on the sidewalks), the local ordinance would not be overturned as a violation of equal protection.
A gender-based classification, by contrast, triggers intermediate scrutiny. For example, suppose that the New Orleans city council decreed that all street vendors must be women, thinking that would attract even more tourism. A classification like this, based on sex, will have to meet a sterner test than a classification resulting from economic or social regulation. A law like this would have to substantially relate to important government objectives. Increasingly, courts have nullified government sex classifications as societal concern with gender equality has grown. (See Shannon Faulkner’s case against The Citadel, an all-male state school, and the similar challenge to the Virginia Military Institute. United States v. Virginia, 518 US 515 (1996).)
Suppose, however, that the city of New Orleans decided that no one of Middle Eastern heritage could drive a taxicab or be a street vendor. That kind of classification would be examined with strict scrutiny to see if there was any compelling justification for it. As noted, classifications such as this one are almost never upheld. The law would be upheld only if it were necessary to promote a compelling state interest. Very few laws that have a racial or ethnic classification meet that test.
The strict scrutiny test will be applied to classifications involving racial and ethnic criteria as well as classifications that interfere with a fundamental right. In Palmore v. Sidoti, the state refused to award custody to the mother because her new spouse was racially different from the child. Palmore v. Sidoti, 466 US 429 (1984). This practice was declared unconstitutional because the state had made a racial classification; this was presumptively invalid, and the government could not show a compelling need to enforce such a classification through its law. Government action that interferes with a fundamental right will also receive strict scrutiny. When New York State gave an employment preference to veterans who had been state residents at the time of entering the military, the court declared that veterans who were new to the state were less likely to get jobs and that therefore the statute interfered with the right to travel, which was deemed a fundamental right. Atty. Gen. of New York v. Soto-Lopez, 476 US 898 (1986).
The Bill of Rights has applied to federal actions from the start; through the Fourteenth Amendment, most of its guarantees now apply to state actions as well. Both the Bill of Rights and the Fourteenth Amendment apply to business in various ways, but it is important to remember that the rights conferred are rights against governmental action, not against the actions of private enterprise.
- John Hanks works at ProLogis. The company decides to institute a drug-testing policy. John is a good and longtime employee but enjoys smoking marijuana on the weekends. The drug testing will involve urine samples and, semiannually, a hair sample. It is nearly certain that the drug-testing protocol that ProLogis proposes will find that Hanks is a marijuana user. The company has made it clear that it will have zero tolerance for any kind of nonprescribed controlled substances. John and several fellow employees wish to go to court to challenge the proposed testing as “an unreasonable search and seizure.” Can he possibly succeed?
- Larry Reed, majority leader in the Senate, is attacked in his reelection campaign by a series of ads sponsored by a corporation (Global Defense, Inc.) that does not like his voting record. The corporation is upset that Reed would not write a special provision that would favor Global Defense in a defense appropriations bill. The ads run constantly on television and radio in the weeks immediately preceding election day and contain numerous falsehoods. For example, in order to keep the government running financially, Reed found it necessary to vote for a bill that included a last-minute rider, sponsored by a member of the opposing party who wanted to privatize all programs for the handicapped, that defunded a small government program for the handicapped. The ad is largely paid for by Global Defense and depicts a handicapped child being helped by the existing program and large letters saying “Does Larry Reed Just Not Care?” The ad proclaims that it is sponsored by Citizens Who Care for a Better Tomorrow. Is this protected speech? Why or why not? Can Reed sue for defamation? Why or why not?
Griswold v. Connecticut
Griswold v. Connecticut
381 U.S. 479 (U.S. Supreme Court 1965)
A nineteenth-century Connecticut law made the use, possession, or distribution of birth control devices illegal. The law also prohibited anyone from giving information about such devices. The executive director and medical director of a planned parenthood association were found guilty of giving out such information to a married couple that wished to delay having children for a few years. The directors were fined $100 each.
They appealed throughout the Connecticut state court system, arguing that the state law violated (infringed) a basic or fundamental right of privacy of a married couple: to live together and have sex together without the restraining power of the state to tell them they may legally have intercourse but not if they use condoms or other birth control devices. At each level (trial court, court of appeals, and Connecticut Supreme Court), the Connecticut courts upheld the constitutionality of the convictions.
Majority Opinion by Justice William O. Douglas
We do not sit as a super legislature to determine the wisdom, need, and propriety of laws that touch economic problems, business affairs, or social conditions. The [Connecticut] law, however, operates directly on an intimate relation of husband and wife and their physician’s role in one aspect of that relation.
[Previous] cases suggest that specific guarantees in the Bill of Rights have penumbras, formed by emanations from those guarantees that help give them life and substance.…Various guarantees create zones of privacy. The right of association contained in the penumbra of the First Amendment is one.…The Third Amendment in its prohibition against the quartering of soldiers “in any house” in time of peace without the consent of the owner is another facet of that privacy. The Fourth Amendment explicitly affirms the “right of the people to be secure in their persons, houses, papers and effects, against unreasonable searches and seizures.” The Fifth Amendment in its Self-Incrimination Clause enables the citizen to create a zone of privacy which the government may not force him to surrender to his detriment. The Ninth Amendment provides: “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”
The Fourth and Fifth Amendments were described…as protection against all governmental invasions “of the sanctity of a man’s home and the privacies of life.” We recently referred in Mapp v. Ohio…to the Fourth Amendment as creating a “right to privacy, no less important than any other right carefully and particularly reserved to the people.”
[The law in question here], in forbidding the use of contraceptives rather than regulating their manufacture or sale, seeks to achieve its goals by having a maximum destructive impact on [the marital] relationship. Such a law cannot stand.…Would we allow the police to search the sacred precincts of marital bedrooms for telltale signs of the use of contraceptives? The very idea is repulsive to the notions of privacy surrounding the marital relationship.
We deal with a right of privacy older than the Bill of Rights—older than our political parties, older than our school system. Marriage is a coming together for better or for worse, hopefully enduring, and intimate to the degree of being sacred. It is an association that promotes a way of life, not causes; a harmony in living, not political faiths; a bilateral loyalty, not commercial or social projects. Yet it is an association for as noble a purpose as any involved in our prior decisions.
Mr. Justice Stewart, whom Mr. Justice Black joins, dissenting.
Since 1879 Connecticut has had on its books a law which forbids the use of contraceptives by anyone. I think this is an uncommonly silly law. As a practical matter, the law is obviously unenforceable, except in the oblique context of the present case. As a philosophical matter, I believe the use of contraceptives in the relationship of marriage should be left to personal and private choice, based upon each individual’s moral, ethical, and religious beliefs. As a matter of social policy, I think professional counsel about methods of birth control should be available to all, so that each individual’s choice can be meaningfully made. But we are not asked in this case to say whether we think this law is unwise, or even asinine. We are asked to hold that it violates the United States Constitution. And that I cannot do.
In the course of its opinion the Court refers to no less than six Amendments to the Constitution: the First, the Third, the Fourth, the Fifth, the Ninth, and the Fourteenth. But the Court does not say which of these Amendments, if any, it thinks is infringed by this Connecticut law.
As to the First, Third, Fourth, and Fifth Amendments, I can find nothing in any of them to invalidate this Connecticut law, even assuming that all those Amendments are fully applicable against the States. It has not even been argued that this is a law “respecting an establishment of religion, or prohibiting the free exercise thereof.” And surely, unless the solemn process of constitutional adjudication is to descend to the level of a play on words, there is not involved here any abridgment of “the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” No soldier has been quartered in any house. There has been no search, and no seizure. Nobody has been compelled to be a witness against himself.
The Court also quotes the Ninth Amendment, and my Brother Goldberg’s concurring opinion relies heavily upon it. But to say that the Ninth Amendment has anything to do with this case is to turn somersaults with history. The Ninth Amendment, like its companion the Tenth, which this Court held “states but a truism that all is retained which has not been surrendered,” United States v. Darby, 312 U.S. 100, 124, was framed by James Madison and adopted by the States simply to make clear that the adoption of the Bill of Rights did not alter the plan that the Federal Government was to be a government of express and limited powers, and that all rights and powers not delegated to it were retained by the people and the individual States. Until today no member of this Court has ever suggested that the Ninth Amendment meant anything else, and the idea that a federal court could ever use the Ninth Amendment to annul a law passed by the elected representatives of the people of the State of Connecticut would have caused James Madison no little wonder.
What provision of the Constitution, then, does make this state law invalid? The Court says it is the right of privacy “created by several fundamental constitutional guarantees.” With all deference, I can find no such general right of privacy in the Bill of Rights, in any other part of the Constitution, or in any case ever before decided by this Court.
At the oral argument in this case we were told that the Connecticut law does not “conform to current community standards.” But it is not the function of this Court to decide cases on the basis of community standards. We are here to decide cases “agreeably to the Constitution and laws of the United States.” It is the essence of judicial duty to subordinate our own personal views, our own ideas of what legislation is wise and what is not. If, as I should surely hope, the law before us does not reflect the standards of the people of Connecticut, the people of Connecticut can freely exercise their true Ninth and Tenth Amendment rights to persuade their elected representatives to repeal it. That is the constitutional way to take this law off the books.
- Which opinion is the strict constructionist opinion here—Justice Douglas’s or that of Justices Stewart and Black?
- What would have happened if the Supreme Court had allowed the Connecticut Supreme Court decision to stand and followed Justice Black’s reasoning? Is it likely that the citizens of Connecticut would have persuaded their elected representatives to repeal the law challenged here?
Wickard v. Filburn
Wickard v. Filburn
317 U.S. 111 (U.S. Supreme Court 1942)
Mr. Justice Jackson delivered the opinion of the Court.
Mr. Filburn for many years past has owned and operated a small farm in Montgomery County, Ohio, maintaining a herd of dairy cattle, selling milk, raising poultry, and selling poultry and eggs. It has been his practice to raise a small acreage of winter wheat, sown in the Fall and harvested in the following July; to sell a portion of the crop; to feed part to poultry and livestock on the farm, some of which is sold; to use some in making flour for home consumption; and to keep the rest for the following seeding.
His 1941 wheat acreage allotment was 11.1 acres and a normal yield of 20.1 bushels of wheat an acre. He sowed, however, 23 acres, and harvested from his 11.9 acres of excess acreage 239 bushels, which under the terms of the Act as amended on May 26, 1941, constituted farm marketing excess, subject to a penalty of 49 cents a bushel, or $117.11 in all.
The general scheme of the Agricultural Adjustment Act of 1938 as related to wheat is to control the volume moving in interstate and foreign commerce in order to avoid surpluses and shortages and the consequent abnormally low or high wheat prices and obstructions to commerce. [T]he Secretary of Agriculture is directed to ascertain and proclaim each year a national acreage allotment for the next crop of wheat, which is then apportioned to the states and their counties, and is eventually broken up into allotments for individual farms.
It is urged that under the Commerce Clause of the Constitution, Article I, § 8, clause 3, Congress does not possess the power it has in this instance sought to exercise. The question would merit little consideration since our decision in United States v. Darby, 312 U.S. 100, sustaining the federal power to regulate production of goods for commerce, except for the fact that this Act extends federal regulation to production not intended in any part for commerce but wholly for consumption on the farm.
Kassel v. Consolidated Freightways Corp.
Kassel v. Consolidated Freightways Corp.
450 U.S. 662 (U.S. Supreme Court 1981)
JUSTICE POWELL announced the judgment of the Court and delivered an opinion, in which JUSTICE WHITE, JUSTICE BLACKMUN, and JUSTICE STEVENS joined.
The question is whether an Iowa statute that prohibits the use of certain large trucks within the State unconstitutionally burdens interstate commerce.
Appellee Consolidated Freightways Corporation of Delaware (Consolidated) is one of the largest common carriers in the country: it offers service in 48 States under a certificate of public convenience and necessity issued by the Interstate Commerce Commission. Among other routes, Consolidated carries commodities through Iowa on Interstate 80, the principal east-west route linking New York, Chicago, and the west coast, and on Interstate 35, a major north-south route.
Consolidated mainly uses two kinds of trucks. One consists of a three-axle tractor pulling a 40-foot two-axle trailer. This unit, commonly called a single, or “semi,” is 55 feet in length overall. Such trucks have long been used on the Nation’s highways. Consolidated also uses a two-axle tractor pulling a single-axle trailer which, in turn, pulls a single-axle dolly and a second single-axle trailer. This combination, known as a double, or twin, is 65 feet long overall. Many trucking companies, including Consolidated, increasingly prefer to use doubles to ship certain kinds of commodities. Doubles have larger capacities, and the trailers can be detached and routed separately if necessary. Consolidated would like to use 65-foot doubles on many of its trips through Iowa.
The State of Iowa, however, by statute, restricts the length of vehicles that may use its highways. Unlike all other States in the West and Midwest, Iowa generally prohibits the use of 65-foot doubles within its borders.
Because of Iowa’s statutory scheme, Consolidated cannot use its 65-foot doubles to move commodities through the State. Instead, the company must do one of four things: (i) use 55-foot singles; (ii) use 60-foot doubles; (iii) detach the trailers of a 65-foot double and shuttle each through the State separately; or (iv) divert 65-foot doubles around Iowa. Dissatisfied with these options, Consolidated filed this suit in the District Court averring that Iowa’s statutory scheme unconstitutionally burdens interstate commerce. Iowa defended the law as a reasonable safety measure enacted pursuant to its police power. The State asserted that 65-foot doubles are more dangerous than 55-foot singles and, in any event, that the law promotes safety and reduces road wear within the State by diverting much truck traffic to other states.
In a 14-day trial, both sides adduced evidence on safety and on the burden on interstate commerce imposed by Iowa’s law. On the question of safety, the District Court found that the “evidence clearly establishes that the twin is as safe as the semi.” 475 F.Supp. 544, 549 (SD Iowa 1979). For that reason, “there is no valid safety reason for barring twins from Iowa’s highways because of their configuration.…The evidence convincingly, if not overwhelmingly, establishes that the 65-foot twin is as safe as, if not safer than, the 60-foot twin and the 55-foot semi.…”
“Twins and semis have different characteristics. Twins are more maneuverable, are less sensitive to wind, and create less splash and spray. However, they are more likely than semis to jackknife or upset. They can be backed only for a short distance. The negative characteristics are not such that they render the twin less safe than semis overall. Semis are more stable, but are more likely to ‘rear-end’ another vehicle.”
In light of these findings, the District Court applied the standard we enunciated in Raymond Motor Transportation, Inc. v. Rice, 434 U.S. 429 (1978), and concluded that the state law impermissibly burdened interstate commerce: “[T]he balance here must be struck in favor of the federal interests. The total effect of the law as a safety measure in reducing accidents and casualties is so slight and problematical that it does not outweigh the national interest in keeping interstate commerce free from interferences that seriously impede it.”
The Court of Appeals for the Eighth Circuit affirmed. 612 F.2d 1064 (1979). It accepted the District Court’s finding that 65-foot doubles were as safe as 55-foot singles. Id. at 1069. Thus, the only apparent safety benefit to Iowa was that resulting from forcing large trucks to detour around the State, thereby reducing overall truck traffic on Iowa’s highways. The Court of Appeals noted that this was not a constitutionally permissible interest. It also commented that the several statutory exemptions identified above, such as those applicable to border cities and the shipment of livestock, suggested that the law, in effect, benefited Iowa residents at the expense of interstate traffic. Id. at 1070-1071. The combination of these exemptions weakened the presumption of validity normally accorded a state safety regulation. For these reasons, the Court of Appeals agreed with the District Court that the Iowa statute unconstitutionally burdened interstate commerce.
Iowa appealed, and we noted probable jurisdiction. 446 U.S. 950 (1980). We now affirm.
It is unnecessary to review in detail the evolution of the principles of Commerce Clause adjudication. The Clause is both a “prolific source of national power and an equally prolific source of conflict with legislation of the state[s].” H. P. Hood & Sons, Inc. v. Du Mond, 336 U.S. 525, 534 (1949). The Clause permits Congress to legislate when it perceives that the national welfare is not furthered by the independent actions of the States. It is now well established, also, that the Clause itself is “a limitation upon state power even without congressional implementation.” Hunt v. Washington Apple Advertising Comm’n, 432 U.S. 333 at 350 (1977). The Clause requires that some aspects of trade generally must remain free from interference by the States. When a State ventures excessively into the regulation of these aspects of commerce, it “trespasses upon national interests,” Great A&P Tea Co. v. Cottrell, 424 U.S. 366, 373 (1976), and the courts will hold the state regulation invalid under the Clause alone.
The Commerce Clause does not, of course, invalidate all state restrictions on commerce. It has long been recognized that, “in the absence of conflicting legislation by Congress, there is a residuum of power in the state to make laws governing matters of local concern which nevertheless in some measure affect interstate commerce or even, to some extent, regulate it.” Southern Pacific Co. v. Arizona, 325 U.S. 761 (1945).
The extent of permissible state regulation is not always easy to measure. It may be said with confidence, however, that a State’s power to regulate commerce is never greater than in matters traditionally of local concern. Washington Apple Advertising Comm’n, supra, at 350. For example, regulations that touch upon safety—especially highway safety—are those that “the Court has been most reluctant to invalidate.” Raymond, supra, at 443 (and other cases cited). Indeed, “if safety justifications are not illusory, the Court will not second-guess legislative judgment about their importance in comparison with related burdens on interstate commerce.” Raymond, supra, at 449. Those who would challenge such bona fide safety regulations must overcome a “strong presumption of validity.” Bibb v. Navajo Freight Lines, Inc., 359 U.S. 520 (1959).
But the incantation of a purpose to promote the public health or safety does not insulate a state law from Commerce Clause attack. Regulations designed for that salutary purpose nevertheless may further the purpose so marginally, and interfere with commerce so substantially, as to be invalid under the Commerce Clause. In the Court’s recent unanimous decision in Raymond we declined to “accept the State’s contention that the inquiry under the Commerce Clause is ended without a weighing of the asserted safety purpose against the degree of interference with interstate commerce.” This “weighing” by a court requires—and indeed the constitutionality of the state regulation depends on—“a sensitive consideration of the weight and nature of the state regulatory concern in light of the extent of the burden imposed on the course of interstate commerce.” Id. at 441; accord, Pike v. Bruce Church, Inc., 397 U.S. 137 at 142 (1970); Bibb, supra, at 525-530.
Applying these general principles, we conclude that the Iowa truck length limitations unconstitutionally burden interstate commerce.
In Raymond Motor Transportation, Inc. v. Rice, the Court held that a Wisconsin statute that precluded the use of 65-foot doubles violated the Commerce Clause. This case is Raymond revisited. Here, as in Raymond, the State failed to present any persuasive evidence that 65-foot doubles are less safe than 55-foot singles. Moreover, Iowa’s law is now out of step with the laws of all other Midwestern and Western States. Iowa thus substantially burdens the interstate flow of goods by truck. In the absence of congressional action to set uniform standards, some burdens associated with state safety regulations must be tolerated. But where, as here, the State’s safety interest has been found to be illusory, and its regulations impair significantly the federal interest in efficient and safe interstate transportation, the state law cannot be harmonized with the Commerce Clause.
Iowa made a more serious effort to support the safety rationale of its law than did Wisconsin in Raymond, but its effort was no more persuasive. As noted above, the District Court found that the “evidence clearly establishes that the twin is as safe as the semi.” The record supports this finding. The trial focused on a comparison of the performance of the two kinds of trucks in various safety categories. The evidence showed, and the District Court found, that the 65-foot double was at least the equal of the 55-foot single in the ability to brake, turn, and maneuver. The double, because of its axle placement, produces less splash and spray in wet weather. And, because of its articulation in the middle, the double is less susceptible to dangerous “off-tracking,” and to wind.
None of these findings is seriously disputed by Iowa. Indeed, the State points to only three ways in which the 55-foot single is even arguably superior: singles take less time to be passed and to clear intersections; they may back up for longer distances; and they are somewhat less likely to jackknife.
The first two of these characteristics are of limited relevance on modern interstate highways. As the District Court found, the negligible difference in the time required to pass, and to cross intersections, is insignificant on 4-lane divided highways, because passing does not require crossing into oncoming traffic lanes, Raymond, 434 U.S. at 444, and interstates have few, if any, intersections. The concern over backing capability also is insignificant, because it seldom is necessary to back up on an interstate. In any event, no evidence suggested any difference in backing capability between the 60-foot doubles that Iowa permits and the 65-foot doubles that it bans. Similarly, although doubles tend to jackknife somewhat more than singles, 65-foot doubles actually are less likely to jackknife than 60-foot doubles.
Statistical studies supported the view that 65-foot doubles are at least as safe overall as 55-foot singles and 60-foot doubles. One such study, which the District Court credited, reviewed Consolidated’s comparative accident experience in 1978 with its own singles and doubles. Each kind of truck was driven 56 million miles on identical routes. The singles were involved in 100 accidents resulting in 27 injuries and one fatality. The 65-foot doubles were involved in 106 accidents resulting in 17 injuries and one fatality. Iowa’s expert statistician admitted that this study provided “moderately strong evidence” that singles have a higher injury rate than doubles. Another study, prepared by the Iowa Department of Transportation at the request of the state legislature, concluded that “[s]ixty-five foot twin trailer combinations have not been shown by experiences in other states to be less safe than 60-foot twin trailer combinations or conventional tractor-semitrailers.”
In sum, although Iowa introduced more evidence on the question of safety than did Wisconsin in Raymond, the record as a whole was not more favorable to the State.
Consolidated, meanwhile, demonstrated that Iowa’s law substantially burdens interstate commerce. Trucking companies that wish to continue to use 65-foot doubles must route them around Iowa or detach the trailers of the doubles and ship them through separately. Alternatively, trucking companies must use the smaller 55-foot singles or 60-foot doubles permitted under Iowa law. Each of these options engenders inefficiency and added expense. The record shows that Iowa’s law added about $12.6 million each year to the costs of trucking companies.
Consolidated alone incurred about $2 million per year in increased costs.
In addition to increasing the costs of the trucking companies (and, indirectly, of the service to consumers), Iowa’s law may aggravate, rather than ameliorate, the problem of highway accidents. Fifty-five-foot singles carry less freight than 65-foot doubles. Either more small trucks must be used to carry the same quantity of goods through Iowa or the same number of larger trucks must drive longer distances to bypass Iowa. In either case, as the District Court noted, the restriction requires more highway miles to be driven to transport the same quantity of goods. Other things being equal, accidents are proportional to distance traveled. Thus, if 65-foot doubles are as safe as 55-foot singles, Iowa’s law tends to increase the number of accidents and to shift the incidence of them from Iowa to other States.
In sum, the statutory exemptions, their history, and the arguments Iowa has advanced in support of its law in this litigation all suggest that the deference traditionally accorded a State’s safety judgment is not warranted. See Raymond, supra, at 444-447. The controlling factors thus are the findings of the District Court, accepted by the Court of Appeals, with respect to the relative safety of the types of trucks at issue, and the substantiality of the burden on interstate commerce.
Because Iowa has imposed this burden without any significant countervailing safety interest, its statute violates the Commerce Clause. The judgment of the Court of Appeals is affirmed.
It is so ordered.
- Under the Constitution, what gives Iowa the right to make rules regarding the size or configuration of trucks upon highways within the state?
- Did Iowa try to exempt trucking lines based in Iowa, or was the statutory rule nondiscriminatory as to the origin of trucks that traveled on Iowa highways?
- Are there any federal size or weight standards noted in the case? Is there any kind of truck size or weight that could be limited by Iowa law, or must Iowa simply accept federal standards or, if none, impose no standards at all?
Hunt v. Washington Apple Advertising Commission
Hunt v. Washington Apple Advertising Commission
432 U.S. 333 (U.S. Supreme Court 1977)
MR. CHIEF JUSTICE BURGER delivered the opinion of the Court.
In 1973, North Carolina enacted a statute which required, inter alia, all closed containers of apples sold, offered for sale, or shipped into the State to bear “no grade other than the applicable U.S. grade or standard.”…Washington State is the Nation’s largest producer of apples, its crops accounting for approximately 30% of all apples grown domestically and nearly half of all apples shipped in closed containers in interstate commerce. [Because] of the importance of the apple industry to the State, its legislature has undertaken to protect and enhance the reputation of Washington apples by establishing a stringent, mandatory inspection program [that] requires all apples shipped in interstate commerce to be tested under strict quality standards and graded accordingly. In all cases, the Washington State grades [are] the equivalent of, or superior to, the comparable grades and standards adopted by the [U.S. Dept. of] Agriculture (USDA).
[In] 1972, the North Carolina Board of Agriculture adopted an administrative regulation, unique in the 50 States, which in effect required all closed containers of apples shipped into or sold in the State to display either the applicable USDA grade or a notice indicating no classification. State grades were expressly prohibited. In addition to its obvious consequence—prohibiting the display of Washington State apple grades on containers of apples shipped into North Carolina—the regulation presented the Washington apple industry with a marketing problem of potentially nationwide significance. Washington apple growers annually ship in commerce approximately 40 million closed containers of apples, nearly 500,000 of which eventually find their way into North Carolina, stamped with the applicable Washington State variety and grade. [Compliance] with North Carolina’s unique regulation would have required Washington growers to obliterate the printed labels on containers shipped to North Carolina, thus giving their product a damaged appearance. Alternatively, they could have changed their marketing practices to accommodate the needs of the North Carolina market, i.e., repack apples to be shipped to North Carolina in containers bearing only the USDA grade, and/or store the estimated portion of the harvest destined for that market in such special containers. As a last resort, they could discontinue the use of the preprinted containers entirely. None of these costly and less efficient options was very attractive to the industry. Moreover, in the event a number of other States followed North Carolina’s lead, the resultant inability to display the Washington grades could force the Washington growers to abandon the State’s expensive inspection and grading system which their customers had come to know and rely on over the 60-odd years of its existence.…
Unsuccessful in its attempts to secure administrative relief [with North Carolina], the Commission instituted this action challenging the constitutionality of the statute. [The] District Court found that the North Carolina statute, while neutral on its face, actually discriminated against Washington State growers and dealers in favor of their local counterparts [and] concluded that this discrimination [was] not justified by the asserted local interest—the elimination of deception and confusion from the marketplace—arguably furthered by the [statute].
[North Carolina] maintains that [the] burdens on the interstate sale of Washington apples were far outweighed by the local benefits flowing from what they contend was a valid exercise of North Carolina’s [police powers]. Prior to the statute’s enactment,…apples from 13 different States were shipped into North Carolina for sale. Seven of those States, including [Washington], had their own grading systems which, while differing in their standards, used similar descriptive labels (e.g., fancy, extra fancy, etc.). This multiplicity of inconsistent state grades [posed] dangers of deception and confusion not only in the North Carolina market, but in the Nation as a whole. The North Carolina statute, appellants claim, was enacted to eliminate this source of deception and confusion. [Moreover], it is contended that North Carolina sought to accomplish this goal of uniformity in an evenhanded manner as evidenced by the fact that its statute applies to all apples sold in closed containers in the State without regard to their point of origin.
[As] the appellants properly point out, not every exercise of state authority imposing some burden on the free flow of commerce is invalid, [especially] when the State acts to protect its citizenry in matters pertaining to the sale of foodstuffs. By the same token, however, a finding that state legislation furthers matters of legitimate local concern, even in the health and consumer protection areas, does not end the inquiry. Rather, when such state legislation comes into conflict with the Commerce Clause’s overriding requirement of a national “common market,” we are confronted with the task of effecting an accommodation of the competing national and local interests. We turn to that task.
As the District Court correctly found, the challenged statute has the practical effect of not only burdening interstate sales of Washington apples, but also discriminating against them. This discrimination takes various forms. The first, and most obvious, is the statute’s consequence of raising the costs of doing business in the North Carolina market for Washington apple growers and dealers, while leaving those of their North Carolina counterparts unaffected. [This] disparate effect results from the fact that North Carolina apple producers, unlike their Washington competitors, were not forced to alter their marketing practices in order to comply with the statute. They were still free to market their wares under the USDA grade or none at all as they had done prior to the statute’s enactment. Obviously, the increased costs imposed by the statute would tend to shield the local apple industry from the competition of Washington apple growers and dealers who are already at a competitive disadvantage because of their great distance from the North Carolina market.
Second, the statute has the effect of stripping away from the Washington apple industry the competitive and economic advantages it has earned for itself through its expensive inspection and grading system. The record demonstrates that the Washington apple-grading system has gained nationwide acceptance in the apple trade. [The record] contains numerous affidavits [stating a] preference [for] apples graded under the Washington, as opposed to the USDA, system because of the former’s greater consistency, its emphasis on color, and its supporting mandatory inspections. Once again, the statute had no similar impact on the North Carolina apple industry and thus operated to its benefit.
Third, by prohibiting Washington growers and dealers from marketing apples under their State’s grades, the statute has a leveling effect which insidiously operates to the advantage of local apple producers. [With] free market forces at work, Washington sellers would normally enjoy a distinct market advantage vis-à-vis local producers in those categories where the Washington grade is superior. However, because of the statute’s operation, Washington apples which would otherwise qualify for and be sold under the superior Washington grades will now have to be marketed under their inferior USDA counterparts. Such “downgrading” offers the North Carolina apple industry the very sort of protection against competing out-of-state products that the Commerce Clause was designed to prohibit. At worst, it will have the effect of an embargo against those Washington apples in the superior grades as Washington dealers withhold them from the North Carolina market. At best, it will deprive Washington sellers of the market premium that such apples would otherwise command.
Despite the statute’s facial neutrality, the Commission suggests that its discriminatory impact on interstate commerce was not an unintended by-product, and there are some indications in the record to that effect. The most glaring is the response of the North Carolina Agriculture Commissioner to the Commission’s request for an exemption following the statute’s passage in which he indicated that before he could support such an exemption, he would “want to have the sentiment from our apple producers since they were mainly responsible for this legislation being passed.” [Moreover], we find it somewhat suspect that North Carolina singled out only closed containers of apples, the very means by which apples are transported in commerce, to effectuate the statute’s ostensible consumer protection purpose when apples are not generally sold at retail in their shipping containers. However, we need not ascribe an economic protection motive to the North Carolina Legislature to resolve this case; we conclude that the challenged statute cannot stand insofar as it prohibits the display of Washington State grades even if enacted for the declared purpose of protecting consumers from deception and fraud in the marketplace.
Finally, we note that any potential for confusion and deception created by the Washington grades was not of the type that led to the statute’s enactment. Since Washington grades are in all cases equal or superior to their USDA counterparts, they could only “deceive” or “confuse” a consumer to his benefit, hardly a harmful result.
In addition, it appears that nondiscriminatory alternatives to the outright ban of Washington State grades are readily available. For example, North Carolina could effectuate its goal by permitting out-of-state growers to utilize state grades only if they also marked their shipments with the applicable USDA label. In that case, the USDA grade would serve as a benchmark against which the consumer could evaluate the quality of the various state grades.…
[The court affirmed the lower court’s holding that the North Carolina statute was unconstitutional.]
- Was the North Carolina law discriminatory on its face? Was it, possibly, an undue burden on interstate commerce? Why wouldn’t it be?
- What evidence was there of discriminatory intent behind the North Carolina law? Did that evidence even matter? Why or why not?
Citizens United v. Federal Election Commission
Citizens United v. Federal Election Commission
558 U.S. ____; 130 S.Ct. 876 (U.S. Supreme Court 2010)
Justice Kennedy delivered the opinion of the Court.
Federal law prohibits corporations and unions from using their general treasury funds to make independent expenditures for speech defined as an “electioneering communication” or for speech expressly advocating the election or defeat of a candidate. 2 U.S.C. §441b. Limits on electioneering communications were upheld in McConnell v. Federal Election Comm’n, 540 U.S. 93, 203–209 (2003). The holding of McConnell rested to a large extent on an earlier case, Austin v. Michigan Chamber of Commerce, 494 U.S. 652 (1990). Austin had held that political speech may be banned based on the speaker’s corporate identity.
In this case we are asked to reconsider Austin and, in effect, McConnell. It has been noted that “Austin was a significant departure from ancient First Amendment principles,” Federal Election Comm’n v. Wisconsin Right to Life, Inc., 551 U.S. 449, 490 (2007) (WRTL) (Scalia, J., concurring in part and concurring in judgment). We agree with that conclusion and hold that stare decisis does not compel the continued acceptance of Austin. The Government may regulate corporate political speech through disclaimer and disclosure requirements, but it may not suppress that speech altogether. We turn to the case now before us.
Citizens United is a nonprofit corporation. It has an annual budget of about $12 million. Most of its funds are from donations by individuals; but, in addition, it accepts a small portion of its funds from for-profit corporations.
In January 2008, Citizens United released a film entitled Hillary: The Movie. We refer to the film as Hillary. It is a 90-minute documentary about then-Senator Hillary Clinton, who was a candidate in the Democratic Party’s 2008 Presidential primary elections. Hillary mentions Senator Clinton by name and depicts interviews with political commentators and other persons, most of them quite critical of Senator Clinton.…
In December 2007, a cable company offered, for a payment of $1.2 million, to make Hillary available on a video-on-demand channel called “Elections ’08.”…Citizens United was prepared to pay for the video-on-demand; and to promote the film, it produced two 10-second ads and one 30-second ad for Hillary. Each ad includes a short (and, in our view, pejorative) statement about Senator Clinton, followed by the name of the movie and the movie’s Website address. Citizens United desired to promote the video-on-demand offering by running advertisements on broadcast and cable television.
Before the Bipartisan Campaign Reform Act of 2002 (BCRA), federal law prohibited—and still does prohibit—corporations and unions from using general treasury funds to make direct contributions to candidates or independent expenditures that expressly advocate the election or defeat of a candidate, through any form of media, in connection with certain qualified federal elections.…BCRA §203 amended §441b to prohibit any “electioneering communication” as well. An electioneering communication is defined as “any broadcast, cable, or satellite communication” that “refers to a clearly identified candidate for Federal office” and is made within 30 days of a primary or 60 days of a general election. §434(f)(3)(A). The Federal Election Commission’s (FEC) regulations further define an electioneering communication as a communication that is “publicly distributed.” 11 CFR §100.29(a)(2) (2009). “In the case of a candidate for nomination for President…publicly distributed means” that the communication “[c]an be received by 50,000 or more persons in a State where a primary election…is being held within 30 days.” 11 CFR §100.29(b)(3)(ii). Corporations and unions are barred from using their general treasury funds for express advocacy or electioneering communications. They may establish, however, a “separate segregated fund” (known as a political action committee, or PAC) for these purposes. 2 U.S.C. §441b(b)(2). The moneys received by the segregated fund are limited to donations from stockholders and employees of the corporation or, in the case of unions, members of the union. Ibid.
Citizens United wanted to make Hillary available through video-on-demand within 30 days of the 2008 primary elections. It feared, however, that both the film and the ads would be covered by §441b’s ban on corporate-funded independent expenditures, thus subjecting the corporation to civil and criminal penalties under §437g. In December 2007, Citizens United sought declaratory and injunctive relief against the FEC. It argued that (1) §441b is unconstitutional as applied to Hillary; and (2) BCRA’s disclaimer and disclosure requirements, BCRA §§201 and 311, are unconstitutional as applied to Hillary and to the three ads for the movie.
The District Court denied Citizens United’s motion for a preliminary injunction, and then granted the FEC’s motion for summary judgment.
The court held that §441b was facially constitutional under McConnell, and that §441b was constitutional as applied to Hillary because it was “susceptible of no other interpretation than to inform the electorate that Senator Clinton is unfit for office, that the United States would be a dangerous place in a President Hillary Clinton world, and that viewers should vote against her.” 530 F. Supp. 2d, at 279. The court also rejected Citizens United’s challenge to BCRA’s disclaimer and disclosure requirements. It noted that “the Supreme Court has written approvingly of disclosure provisions triggered by political speech even though the speech itself was constitutionally protected under the First Amendment.” Id. at 281.
[Omitted: the court considers whether it is possible to reject the BCRA without declaring certain provisions unconstitutional. The court concludes it cannot find a basis to reject the BCRA that does not involve constitutional issues.]
The First Amendment provides that “Congress shall make no law…abridging the freedom of speech.” Laws enacted to control or suppress speech may operate at different points in the speech process.…The law before us is an outright ban, backed by criminal sanctions. Section 441b makes it a felony for all corporations—including nonprofit advocacy corporations—either to expressly advocate the election or defeat of candidates or to broadcast electioneering communications within 30 days of a primary election and 60 days of a general election. Thus, the following acts would all be felonies under §441b: The Sierra Club runs an ad, within the crucial phase of 60 days before the general election, that exhorts the public to disapprove of a Congressman who favors logging in national forests; the National Rifle Association publishes a book urging the public to vote for the challenger because the incumbent U.S. Senator supports a handgun ban; and the American Civil Liberties Union creates a Web site telling the public to vote for a Presidential candidate in light of that candidate’s defense of free speech. These prohibitions are classic examples of censorship.
Section 441b is a ban on corporate speech notwithstanding the fact that a PAC created by a corporation can still speak. PACs are burdensome alternatives; they are expensive to administer and subject to extensive regulations. For example, every PAC must appoint a treasurer, forward donations to the treasurer promptly, keep detailed records of the identities of the persons making donations, preserve receipts for three years, and file an organization statement and report changes to this information within 10 days.
And that is just the beginning. PACs must file detailed monthly reports with the FEC, which are due at different times depending on the type of election that is about to occur.…
PACs have to comply with these regulations just to speak. This might explain why fewer than 2,000 of the millions of corporations in this country have PACs. PACs, furthermore, must exist before they can speak. Given the onerous restrictions, a corporation may not be able to establish a PAC in time to make its views known regarding candidates and issues in a current campaign.
Section 441b’s prohibition on corporate independent expenditures is thus a ban on speech. As a “restriction on the amount of money a person or group can spend on political communication during a campaign,” that statute “necessarily reduces the quantity of expression by restricting the number of issues discussed, the depth of their exploration, and the size of the audience reached.” Buckley v. Valeo, 424 U.S. 1 at 19 (1976).…
Speech is an essential mechanism of democracy, for it is the means to hold officials accountable to the people. See Buckley, supra, at 14–15 (“In a republic where the people are sovereign, the ability of the citizenry to make informed choices among candidates for office is essential.”) The right of citizens to inquire, to hear, to speak, and to use information to reach consensus is a precondition to enlightened self-government and a necessary means to protect it. The First Amendment “‘has its fullest and most urgent application’ to speech uttered during a campaign for political office.”
For these reasons, political speech must prevail against laws that would suppress it, whether by design or inadvertence. Laws that burden political speech are “subject to strict scrutiny,” which requires the Government to prove that the restriction “furthers a compelling interest and is narrowly tailored to achieve that interest.”
The Court has recognized that First Amendment protection extends to corporations. This protection has been extended by explicit holdings to the context of political speech. Under the rationale of these precedents, political speech does not lose First Amendment protection “simply because its source is a corporation.” Bellotti, supra, at 784. The Court has thus rejected the argument that political speech of corporations or other associations should be treated differently under the First Amendment simply because such associations are not “natural persons.”
The purpose and effect of this law is to prevent corporations, including small and nonprofit corporations, from presenting both facts and opinions to the public. This makes Austin’s antidistortion rationale all the more an aberration. “[T]he First Amendment protects the right of corporations to petition legislative and administrative bodies.” Bellotti, 435 U.S., at 792, n. 31.…
Even if §441b’s expenditure ban were constitutional, wealthy corporations could still lobby elected officials, although smaller corporations may not have the resources to do so. And wealthy individuals and unincorporated associations can spend unlimited amounts on independent expenditures. See, e.g., WRTL, 551 U.S., at 503–504 (opinion of Scalia, J.) (“In the 2004 election cycle, a mere 24 individuals contributed an astounding total of $142 million to [26 U.S.C. §527 organizations]”). Yet certain disfavored associations of citizens—those that have taken on the corporate form—are penalized for engaging in the same political speech.
When Government seeks to use its full power, including the criminal law, to command where a person may get his or her information or what distrusted source he or she may not hear, it uses censorship to control thought. This is unlawful. The First Amendment confirms the freedom to think for ourselves.
What we have said also shows the invalidity of other arguments made by the Government. For the most part relinquishing the anti-distortion rationale, the Government falls back on the argument that corporate political speech can be banned in order to prevent corruption or its appearance.…
When Congress finds that a problem exists, we must give that finding due deference; but Congress may not choose an unconstitutional remedy. If elected officials succumb to improper influences from independent expenditures; if they surrender their best judgment; and if they put expediency before principle, then surely there is cause for concern. We must give weight to attempts by Congress to seek to dispel either the appearance or the reality of these influences. The remedies enacted by law, however, must comply with the First Amendment; and, it is our law and our tradition that more speech, not less, is the governing rule. An outright ban on corporate political speech during the critical preelection period is not a permissible remedy. Here Congress has created categorical bans on speech that are asymmetrical to preventing quid pro quo corruption.
Our precedent is to be respected unless the most convincing of reasons demonstrates that adherence to it puts us on a course that is sure error. “Beyond workability, the relevant factors in deciding whether to adhere to the principle of stare decisis include the antiquity of the precedent, the reliance interests at stake, and of course whether the decision was well reasoned.” [citing prior cases]
These considerations counsel in favor of rejecting Austin, which itself contravened this Court’s earlier precedents in Buckley and Bellotti. “This Court has not hesitated to overrule decisions offensive to the First Amendment.” WRTL, 551 U.S., at 500 (opinion of Scalia, J.). “[S]tare decisis is a principle of policy and not a mechanical formula of adherence to the latest decision.” Helvering v. Hallock, 309 U.S. 106 at 119 (1940).
Austin is undermined by experience since its announcement. Political speech is so ingrained in our culture that speakers find ways to circumvent campaign finance laws. See, e.g., McConnell, 540 U.S., at 176–177 (“Given BCRA’s tighter restrictions on the raising and spending of soft money, the incentives…to exploit [26 U.S.C. §527] organizations will only increase”). Our Nation’s speech dynamic is changing, and informative voices should not have to circumvent onerous restrictions to exercise their First Amendment rights. Speakers have become adept at presenting citizens with sound bites, talking points, and scripted messages that dominate the 24-hour news cycle. Corporations, like individuals, do not have monolithic views. On certain topics corporations may possess valuable expertise, leaving them the best equipped to point out errors or fallacies in speech of all sorts, including the speech of candidates and elected officials.
Rapid changes in technology—and the creative dynamic inherent in the concept of free expression—counsel against upholding a law that restricts political speech in certain media or by certain speakers. Today, 30-second television ads may be the most effective way to convey a political message. Soon, however, it may be that Internet sources, such as blogs and social networking Web sites, will provide citizens with significant information about political candidates and issues. Yet, §441b would seem to ban a blog post expressly advocating the election or defeat of a candidate if that blog were created with corporate funds. The First Amendment does not permit Congress to make these categorical distinctions based on the corporate identity of the speaker and the content of the political speech.
Due consideration leads to this conclusion: Austin should be and now is overruled. We return to the principle established in Buckley and Bellotti that the Government may not suppress political speech on the basis of the speaker’s corporate identity. No sufficient governmental interest justifies limits on the political speech of nonprofit or for-profit corporations.
When word concerning the plot of the movie Mr. Smith Goes to Washington reached the circles of Government, some officials sought, by persuasion, to discourage its distribution. See Smoodin, “Compulsory” Viewing for Every Citizen: Mr. Smith and the Rhetoric of Reception, 35 Cinema Journal 3, 19, and n. 52 (Winter 1996) (citing Mr. Smith Riles Washington, Time, Oct. 30, 1939, p. 49); Nugent, Capra’s Capitol Offense, N. Y. Times, Oct. 29, 1939, p. X5. Under Austin, though, officials could have done more than discourage its distribution—they could have banned the film. After all, it, like Hillary, was speech funded by a corporation that was critical of Members of Congress. Mr. Smith Goes to Washington may be fiction and caricature; but fiction and caricature can be a powerful force.
Modern day movies, television comedies, or skits on YouTube.com might portray public officials or public policies in unflattering ways. Yet if a covered transmission during the blackout period creates the background for candidate endorsement or opposition, a felony occurs solely because a corporation, other than an exempt media corporation, has made the “purchase, payment, distribution, loan, advance, deposit, or gift of money or anything of value” in order to engage in political speech. 2 U.S.C. §431(9)(A)(i). Speech would be suppressed in the realm where its necessity is most evident: in the public dialogue preceding a real election. Governments are often hostile to speech, but under our law and our tradition it seems stranger than fiction for our Government to make this political speech a crime. Yet this is the statute’s purpose and design.
Some members of the public might consider Hillary to be insightful and instructive; some might find it to be neither high art nor a fair discussion on how to set the Nation’s course; still others simply might suspend judgment on these points but decide to think more about issues and candidates. Those choices and assessments, however, are not for the Government to make. “The First Amendment underwrites the freedom to experiment and to create in the realm of thought and speech. Citizens must be free to use new forms, and new forums, for the expression of ideas. The civic discourse belongs to the people, and the Government may not prescribe the means used to conduct it.” McConnell, supra, at 341 (opinion of Kennedy, J.).
The judgment of the District Court is reversed with respect to the constitutionality of 2 U.S.C. §441b’s restrictions on corporate independent expenditures. The case is remanded for further proceedings consistent with this opinion.
It is so ordered.
- What does the case say about disclosure? Corporations have a right of free speech under the First Amendment and may exercise that right through unrestricted contributions of money to political parties and candidates. Can the government condition that right by requiring that the parties and candidates disclose to the public the amount and origin of the contribution? What would justify such a disclosure requirement?
- Are a corporation’s contributions to political parties and candidates tax deductible as a business expense? Should they be?
- How is the donation of money equivalent to speech? Is this a strict construction of the Constitution to hold that it is?
- Based on the Court’s description of the Austin case, what purpose do you think the Austin court was trying to achieve by limiting corporate campaign contributions? Was that purpose consistent (or inconsistent) with anything in the Constitution, or is the Constitution essentially silent on this issue?
4.7 Summary and Exercises
The U.S. Constitution sets the framework for all other laws of the United States, at both the federal and the state level. It creates a shared balance of power between states and the federal government (federalism) and shared power among the branches of government (separation of powers), establishes individual rights against governmental action (Bill of Rights), and provides for federal oversight of matters affecting interstate commerce and commerce with foreign nations. Knowing the contours of the U.S. legal system is not possible without understanding the role of the U.S. Constitution.
The Constitution is difficult to amend. Thus when the Supreme Court uses its power of judicial review to determine that a law is unconstitutional, it actually shapes what the Constitution means. New meanings that emerge must do so by the process of amendment or by the passage of time and new appointments to the court. Because justices serve for life, the court changes its philosophical outlook slowly.
The Bill of Rights is an especially important piece of the Constitutional framework. It provides legal causes of action for infringements of individual rights by government, state or federal. Through the due process clause of the Fifth Amendment and the Fourteenth Amendment, both procedural and (to some extent) substantive due process rights are given to individuals.
For many years, the Supreme Court believed that “commercial speech” was entitled to less protection than other forms of speech. One defining element of commercial speech is that its dominant theme is to propose a commercial transaction. This kind of speech is protected by the First Amendment, but the government is permitted to regulate it more closely than other forms of speech. However, the government must make reasonable distinctions, must narrowly tailor the rules restricting commercial speech, and must show that government has a legitimate goal that the law furthers.
- Edward Salib owned a Winchell’s Donut House in Mesa, Arizona. To attract customers, he displayed large signs in store windows. The city ordered him to remove the signs because they violated the city’s sign code, which prohibited covering more than 30 percent of a store’s windows with signs. Salib sued, claiming that the sign code violated his First Amendment rights. What was the result, and why?
- Jennifer is a freshman at her local public high school. Her sister, Jackie, attends a nearby private high school. Neither school allows them to join its respective wrestling team; only boys can wrestle at either school. Does either of them have a winning case based on the equal protection clause of the Fourteenth Amendment?
- Employees of the U.S. Treasury Department who work at the border crossing between the United States and Mexico learn that they will be subject to routine drug testing. The Customs Bureau, which is a division of the Treasury Department, announces this policy along with its reasoning: since customs agents must routinely search for drugs coming into the United States, it makes sense that border guards must themselves be completely drug-free. Many border guards do not use drugs, have no intention of using drugs, and object to the invasion of their privacy. What is the constitutional basis for their objection?
- Happy Time Chevrolet employs Jim Bydalek as a salesman. Bydalek takes part in a Gay Pride March in Los Angeles, is interviewed by a local news camera crew, and reports that he is gay and proud of it. His employer is not, and he is fired. Does he have any constitutional causes of action against his employer?
- You begin work at the Happy-Go-Lucky Corporation on Halloween. On your second day at work, you wear a political button on your coat, supporting your choice for US senator in the upcoming election. Your boss, who is of a different political persuasion, looks at the button and says, “Take that stupid button off or you’re fired.” Has your boss violated your constitutional rights?
- David Lucas paid $975,000 for two residential parcels on the Isle of Palms near Charleston, South Carolina. His intention was to build houses on them. Two years later, the South Carolina legislature passed a statute that prohibited building beachfront properties. The purpose was to leave the dunes system in place to mitigate the effects of hurricanes and strong storms. The South Carolina Coastal Commission created the rules and regulations with substantial input from the community and from experts and with protection of the dune system primarily in mind. People had been building on the shoreline for years, with harmful results to localities and the state treasury. When Lucas applied for permits to build two houses near the shoreline, his permits were rejected. He sued, arguing that the South Carolina legislation had effectively “taken” his property. At trial, South Carolina conceded that because of the legislation, Lucas’s property was effectively worth zero. Has there been a taking under the Fifth Amendment (as incorporated through the Fourteenth Amendment), and if so, what should the state owe to Lucas? Suppose that Lucas could have made an additional $1 million by building a house on each of his parcels. Is he entitled to recover his original purchase price or his potential profits?
Harvey filed a suit against the state of Colorado, claiming that a Colorado state law violates the commerce clause. The court will agree if the statute
- places an undue burden on interstate commerce
- promotes the public health, safety, morals, or general welfare of Colorado
- regulates economic activities within the state’s borders
- a and b
- b and c
The state legislature in Maine enacts a law that directly conflicts with a federal law. Mapco Industries, located in Portland, Maine, cannot comply with both the state and the federal law.
- Because of federalism, the state law will have priority, as long as Maine is using its police powers.
- Because there’s a conflict, both laws are invalid; the state and the federal government will have to work out a compromise of some sort.
- The federal law preempts the state law.
- Both laws govern concurrently.
Hannah, who lives in Ada, is the owner of Superior Enterprises, Inc. She believes that certain actions in the state of Ohio infringe on her federal constitutional rights, especially those found in the Bill of Rights. Most of these rights apply to the states under
- the supremacy clause
- the protection clause
- the due process clause of the Fourteenth Amendment
- the Tenth Amendment
Minnesota enacts a statute that bans all advertising that is in “bad taste,” “vulgar,” or “indecent.” In Michigan, Aaron Calloway and his brother, Clarence “Cab” Calloway, create a unique beer that they decide to call Old Fart Ale. In their marketing, the brothers use a label on which an older man in a dirty T-shirt is sitting in an easy chair, looking disheveled, with three days’ growth of stubble on his chin. It appears that the man is in the process of belching. He is also holding a can of Old Fart Ale. The Minnesota liquor commission orders all Minnesota restaurants, bars, and grocery stores to remove Old Fart Ale from their shelves. The state statute and the commission’s order are likely to be held by a court to be
- a violation of the Tenth Amendment
- a violation of the First Amendment
- a violation of the Calloways’ right to equal protection of the laws
- a violation of the commerce clause, since only the federal laws can prevent an article of commerce from entering into Minnesota’s market
Raunch Unlimited, a Virginia partnership, sells smut whenever and wherever it can. Some of its material is “obscene” (meeting the Supreme Court’s definition under Miller v. California) and includes child pornography. North Carolina has a statute that criminalizes obscenity. What are possible results if a store in Raleigh, North Carolina, carries Raunch merchandise?
- The partners could be arrested in North Carolina and may well be convicted.
- The materials in Raleigh may be the basis for a criminal conviction.
- The materials are protected under the First Amendment’s right of free speech.
- The materials are protected under state law.
- a and b
In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than the Nyquist rate. Theoretically a bandwidth-limited signal can be perfectly reconstructed if sampled above the Nyquist rate, which is twice the highest frequency in the signal. Oversampling improves resolution, reduces noise and helps avoid aliasing and phase distortion by relaxing anti-aliasing filter performance requirements.
A signal is said to be oversampled by a factor of N if it is sampled at N times the Nyquist rate.
There are three main reasons for performing oversampling: to relax the requirements on the analog anti-aliasing filter, to increase the effective resolution of the conversion, and to reduce noise.
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampled signal, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, digital filters are easier to implement than comparable analog filters.
In practice, oversampling is implemented in order to achieve cheaper higher-resolution A/D and D/A conversion. For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the signal-to-noise ratio at the voltage level by a factor of 16 (the square root of the number of samples averaged), adding 4 bits to the resolution and producing a single sample with 24-bit resolution.
The number of samples required to get n bits of additional data precision is

number of samples = (2^n)^2 = 2^(2n).

To get the mean sample scaled up to an integer with n additional bits, the sum of the 2^(2n) samples is divided by 2^n:

scaled result = (sum of 2^(2n) consecutive samples) / 2^n.
This averaging is only possible if the signal contains equally distributed noise which is large enough to be observed by the A/D converter. If not, in the case of a stationary input signal, all samples would have the same value and the resulting average would be identical to this value; so in this case, oversampling would have made no improvement. (In similar cases where the A/D converter sees no noise and the input signal is changing over time, oversampling still improves the result, but to an inconsistent/unpredictable extent.) This is an interesting counter-intuitive example where adding some dithering noise to the input signal can improve (rather than degrade) the final result, because the dither noise allows oversampling to work to improve resolution (or dynamic range). In many practical applications, a small increase in noise is well worth a substantial increase in measurement resolution. In practice, the dithering noise can often be placed outside the frequency range of interest to the measurement, so that this noise can be subsequently filtered out in the digital domain, resulting in a final measurement (in the frequency range of interest) with both higher resolution and lower noise.
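To make this concrete, here is a minimal NumPy sketch (not from the article; the signal level, LSB size, and dither amplitude are illustrative assumptions) showing that averaging 2^(2n) dithered samples of a stationary input recovers roughly n extra bits of resolution, while averaging undithered samples does not:

```python
# Minimal sketch: averaging 2**(2*n) dithered samples adds ~n bits of resolution.
# All numbers here are illustrative assumptions, not values from the article.
import numpy as np

rng = np.random.default_rng(0)

lsb = 1.0            # quantizer step (1 least significant bit)
true_value = 10.3    # stationary input, deliberately between quantizer steps
n_extra_bits = 4
num_samples = (2 ** n_extra_bits) ** 2   # 2**(2n) = 256 samples

def adc(x):
    """Ideal quantizer: rounds to the nearest LSB."""
    return np.round(x / lsb) * lsb

# Without dither every sample quantizes to the same code, so averaging cannot help.
plain = adc(np.full(num_samples, true_value))

# With about 1 LSB of uniform dither, the quantization error averages out.
dither = rng.uniform(-0.5 * lsb, 0.5 * lsb, num_samples)
dithered = adc(true_value + dither)

print("no dither :", plain.mean())      # stuck at 10.0
print("dithered  :", dithered.mean())   # close to the true 10.3
```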
If multiple samples are taken of the same quantity with uncorrelated noise added to each sample, then averaging N samples reduces the noise power by a factor of 1/N. If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by factor of 4 which corresponds to a factor of 2 improvement in terms of voltage.[note 1]
Certain kinds of A/D converters known as delta-sigma converters produce disproportionately more quantization noise in the upper portion of their output spectrum. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, a final result with less noise (over the entire band of the converter) can be obtained. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
For example, consider a signal with a bandwidth or highest frequency of B = 100 Hz. The sampling theorem states that the sampling frequency would have to be greater than 200 Hz. Sampling at four times that rate requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ((fs/2) − B = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency were 200 Hz.
Achieving an anti-aliasing filter with 0 Hz transition band is unrealistic whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult to create.
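As a quick sanity check, the arithmetic of the example above can be written out directly (the values are the ones assumed in the text):

```python
# Quick check of the transition-band arithmetic from the example above.
B = 100.0                      # signal bandwidth, Hz
nyquist_rate = 2 * B           # minimum sampling frequency, Hz
fs = 4 * nyquist_rate          # oversampling by a factor of 4 -> 800 Hz
transition_band = fs / 2 - B   # room left for the anti-aliasing filter roll-off
print(nyquist_rate, fs, transition_band)   # 200.0 800.0 300.0
```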
Oversampling in reconstruction
The term oversampling is also used to denote a process used in the reconstruction phase of digital-to-analog conversion, in which an intermediate high sampling rate is used between the digital input and the analogue output. Here, samples are interpolated in the digital domain to add additional samples in between, thereby converting the data to a higher sample rate, which is a form of upsampling. When the resulting higher-rate samples are converted to analog, a less complex/expensive analog low pass filter is required to remove the high-frequency content, which will consist of reflected images of the real signal created by the zero-order hold of the digital-to-analog converter. Essentially, this is a way to shift some of the complexity of the filtering into the digital domain and achieves the same benefit as oversampling in analog-to-digital conversion.
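The interpolation step can be sketched roughly as zero-stuffing followed by digital low-pass filtering. The NumPy code below is an illustrative toy rather than a production DAC filter; the windowed-sinc filter length and window choice are arbitrary assumptions.

```python
# Rough sketch of interpolation for reconstruction oversampling: insert zeros
# between samples to raise the rate, then low-pass filter so the image spectra
# are removed digitally. Filter design here is deliberately simple.
import numpy as np

def upsample(x, factor):
    """Zero-stuff x to `factor` times the original rate."""
    y = np.zeros(len(x) * factor)
    y[::factor] = x
    return y

def lowpass_interpolate(x, factor, taps=63):
    """Zero-stuff, then filter with a windowed-sinc low-pass at the old Nyquist."""
    stuffed = upsample(x, factor)
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / factor) * np.hamming(taps)   # cutoff near the original Nyquist
    h *= factor / h.sum()                        # restore amplitude lost to zero-stuffing
    return np.convolve(stuffed, h, mode="same")

# Example: a low-frequency sine sampled at 8 points per cycle, raised 4x.
t = np.arange(32)
x = np.sin(2 * np.pi * t / 8)
x4 = lowpass_interpolate(x, 4)
print(len(x), len(x4))   # 32 128
```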
- Oversampling and undersampling in data analysis
- Oversampled binary image sensor
- Note 1: A system's signal-to-noise ratio cannot necessarily be increased by simple over-sampling, since noise samples are partially correlated (only some portion of the noise due to sampling and analog-to-digital conversion will be uncorrelated).
- Nauman Uppal (2004-08-30). "Upsampling vs. Oversampling for Digital Audio". Retrieved 2012-10-06. "Without increasing the sample rate, we would need to design a very sharp filter that would have to cutoff at just past 20kHz and be 80-100dB down at 22kHz. Such a filter is not only very difficult and expensive to implement, but may sacrifice some of the audible spectrum in its rolloff."
- See standard error (statistics)
Mean, Median, Mode, and Range
The word “average” is a description that can mean many different things. It can describe many different measurable quantities in many different aspects of life. Some examples are:
grade point average
The word “average” can also be used to describe quantities that cannot be measured mathematically. The following examples use the word “average” to describe something as “ordinary” or “typical”:
feeling “about average”
Since this is a math lesson, we will focus on different ways to find the (measurable) average of a set of numbers. Average can be represented by several different mathematical words. The words mean, median, and mode are each mathematical words that can be used to describe the concept of “average.” In this lesson, you will learn the definitions of each of these words (mean, median, and mode) as well as a variety of situations where average may be best represented by each word.
Begin with the following problem:
Intro Problem: The girls on Alicia’s swim team swam a length of the pool and compared their times. There were 10 girls and their times were 15, 15, 17, 18, 19, 19, 19, 21, 24, and 42 seconds. What was their average time?
The question “What was the average time?” is one that can be answered using several different mathematical interpretations of the word “average.”
The mean of a group of numbers is the sum of the values divided by the number of values.
The median of a group of numbers is the “middle number” of the group when they are written from smallest to largest.
The mode of a group of numbers is the value that occurs most often.
In the intro problem, the mean, median, and mode can be found as follows:
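One way to carry out the calculation is with Python's built-in statistics module (shown here as a verification aid, not part of the original lesson):

```python
# Checking the intro problem with the standard library's statistics module.
from statistics import mean, median, mode

times = [15, 15, 17, 18, 19, 19, 19, 21, 24, 42]

print(mean(times))    # 20.9  (the sum is 209, divided by 10 swimmers)
print(median(times))  # 19    (average of the two middle values, 19 and 19)
print(mode(times))    # 19    (19 occurs three times, more than any other value)
```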
So the mean, median, and mode are 20.9, 19, and 19 respectively. Note that in finding the median there were two middle numbers, so they were averaged to find the actual median here. Most people think of the mean as the “average”, but in fact each of these three numbers can represent the average of the data. We can take a closer look at the data to determine which of these averages is the best representation of the data. A line plot can be used as a visual representation of the data, showing where the mean, median, and mode fall on the number line.
The mean of a data set is generally thought of as the best way to analyze the average of a set. However, in this case, it appears that the mean is higher than seven of the data points, about the same as one of them, and lower than the other two. The median and mode each appear to represent a more centralized location among the data, which would make them better candidates to represent the average of the data set.
It often helps to remember what the data represents in order to analyze it effectively. In this case the data represents the amount of time (in seconds) that it takes for 10 girls to swim a length of the pool. The average of this data could represent how long it takes the average girl to swim across the pool. The fastest and slowest girls are certainly not average. In this case, the slowest girl took about twice as long to swim the length as everyone else. On the number line, all the numbers are clumped together except for the 42. A number that is very different from all the others is called an outlier. Outliers have no effect on the mode and a nominal effect on the median since they are not directly used to calculate either one. They make a big difference in the mean since the size of the outlier is used directly in the calculation of the mean.
There may be a reason one of the girls took so much longer than the others. Maybe she had to stop during the length. Perhaps it was her first day on the swim team. Whatever the reason, her time is not a good representation of the average swimmer. The mean of the other 9 girls can be found by adding their times and dividing by 9. The sum of the times of the other 9 girls is 167 seconds, so the mean is 167 ÷ 9 ≈ 18.6.
The mean of 18.6 is a better representation of the average time it took to swim the length. The median and mode both remain at 19. The most common representation for the “average” is the mean, so the mean should be used in most cases. However, when the mean and median are far apart, it is usually the result of an outlier (or a math mistake), so take a closer look at the data before moving on.
Mean, median, and mode are all measures of central tendency. When comparing data, it can also be useful to find the distance between the highest and lowest numbers. The range can be found by subtracting the lowest number from the highest number in a data set.
Example 1: William played in 8 basketball games and scored the following point totals: 13, 8, 5, 18, 11, 12, 8, 22. Find the mean, median, mode, and range of his scores.
First put the numbers in order: 5, 8, 8, 11, 12, 13, 18, 22.
Mean: 5 + 8 + 8 + 11 + 12 + 13 + 18 + 22 = 97. 97 ÷ 8 = 12.125.
Median: Middle numbers are 11 & 12, so the median is 11.5
Mode: The number 8 appears twice, so the mode is 8.
Range: 22 – 5 = 17, so the range is 17.
In basketball, as in other sports, the mean is generally used to represent the scoring average. In this case, the mean and median are pretty close to each other, which is a good sign that we did the problem correctly. There are no outliers here, so the best number to use for the average is the mean. This data is most commonly represented by saying William scores an average of 12.125 points per game.
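Before trying the practice problems, here is a small optional helper (one possible way to organize it, using only Python's standard library) that reports all four measures at once; the run below checks Example 1.

```python
# Optional helper that reports all four measures for a list of numbers.
from statistics import mean, median, multimode

def describe(data):
    modes = multimode(data)  # all values tied for the highest count
    return {
        "mean": mean(data),
        "median": median(data),
        # Report None when every value occurs equally often ("no mode").
        "mode": modes if len(modes) < len(set(data)) else None,
        "range": max(data) - min(data),
    }

# Checking Example 1 (William's basketball scores):
print(describe([13, 8, 5, 18, 11, 12, 8, 22]))
# {'mean': 12.125, 'median': 11.5, 'mode': [8], 'range': 17}
```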
Find the mean, median, mode, and range for each problem.
1) Heights of classmates (inches): 54, 56, 56, 57, 59, 64, 65
2) Distances run in training (miles): 3, 3, 3, 5, 3, 3, 3, 7, 3, 3
3) Golf scores: 90, 102, 88, 96, 91
Answer the question.
4) What is an outlier?
5) Which measure of central tendency is generally used to describe an average?
Scroll down for answers...
1) Mean is 58.7
Median is 57
Mode is 56
Range is 11
2) Mean is 3.6
Median is 3
Mode is 3
Range is 4
3) Mean is 93.4
Median is 91
There is no mode
Range is 14
4) An outlier is a number in a data set that is far different from the other numbers in the set.
5) The mean is the measure of central tendency generally used to describe an average.
A turboprop engine is a turbine engine that drives an aircraft propeller.
An aircraft propeller, or airscrew, converts rotary motion from an engine or other power source into a swirling slipstream which pushes the propeller forwards or backwards. It comprises a rotating power-driven hub, to which are attached several radial airfoil-section blades such that the whole assembly rotates about a longitudinal axis. The blade pitch may be fixed, manually variable to a few set positions, or of the automatically-variable "constant-speed" type.
In its simplest form a turboprop consists of an intake, compressor, combustor, turbine, and a propelling nozzle. Air is drawn into the intake and compressed by the compressor. Fuel is then added to the compressed air in the combustor, where the fuel-air mixture then combusts. The hot combustion gases expand through the turbine. Some of the power generated by the turbine is used to drive the compressor. The rest is transmitted through the reduction gearing to the propeller. Further expansion of the gases occurs in the propelling nozzle, where the gases exhaust to atmospheric pressure. The propelling nozzle provides a relatively small proportion of the thrust generated by a turboprop.
A combustor is a component or area of a gas turbine, ramjet, or scramjet engine where combustion takes place. It is also known as a burner, combustion chamber or flame holder. In a gas turbine engine, the combustor or combustion chamber is fed high pressure air by the compression system. The combustor then heats this air at constant pressure. After heating, air passes from the combustor through the nozzle guide vanes to the turbine. In the case of ramjet or scramjet engines, the air is fed directly to the nozzle.
A turbine is a rotary mechanical device that extracts energy from a fluid flow and converts it into useful work. The work produced by a turbine can be used for generating electrical power when combined with a generator. A turbine is a turbomachine with at least one moving part called a rotor assembly, which is a shaft or drum with blades attached. Moving fluid acts on the blades so that they move and impart rotational energy to the rotor. Early turbine examples are windmills and waterwheels.
A propelling nozzle is a nozzle that converts the internal energy of a working gas into propulsive force; it is the nozzle, which forms a jet, that separates a gas turbine (acting as a gas generator) from a jet engine.
In contrast to a turbojet, the engine's exhaust gases do not generally contain enough energy to create significant thrust, since almost all of the engine's power is used to drive the propeller.
The turbojet is an airbreathing jet engine, typically used in aircraft. It consists of a gas turbine with a propelling nozzle. The gas turbine has an air inlet, a compressor, a combustion chamber, and a turbine. The compressed air from the compressor is heated by the fuel in the combustion chamber and then allowed to expand through the turbine. The turbine exhaust is then expanded in the propelling nozzle where it is accelerated to high speed to provide thrust. Two engineers, Frank Whittle in the United Kingdom and Hans von Ohain in Germany, developed the concept independently into practical engines during the late 1930s.
Exhaust gas or flue gas is emitted as a result of the combustion of fuels such as natural gas, gasoline, petrol, biodiesel blends, diesel fuel, fuel oil, or coal. According to the type of engine, it is discharged into the atmosphere through an exhaust pipe, flue gas stack, or propelling nozzle. It often disperses downwind in a pattern called an exhaust plume.
Exhaust thrust in a turboprop is sacrificed in favour of shaft power, which is obtained by extracting additional power (beyond that necessary to drive the compressor) from turbine expansion. Owing to the additional expansion in the turbine system, the residual energy in the exhaust jet is low. Consequently, the exhaust jet typically produces around 10% or less of the total thrust. A higher proportion of the thrust comes from the propeller at low speeds and less at higher speeds.
Turboprops can have bypass ratios up to 50–100, although the propulsion airflow is less clearly defined for propellers than for fans.
The bypass ratio (BPR) of a turbofan engine is the ratio between the mass flow rate of the bypass stream to the mass flow rate entering the core. A 10:1 bypass ratio, for example, means that 10 kg of air passes through the bypass duct for every 1 kg of air passing through the core.
The propeller is coupled to the turbine through a reduction gear that converts the high RPM/low torque output to low RPM/high torque. The propeller itself is normally a constant-speed (variable pitch) type similar to that used with larger reciprocating aircraft engines.
Revolutions per minute is the number of turns in one minute. It is a unit of rotational speed or the frequency of rotation around a fixed axis.
Torque, moment, or moment of force is the rotational equivalent of linear force. The concept originated with the studies of Archimedes on the usage of levers. Just as a linear force is a push or a pull, a torque can be thought of as a twist to an object. The symbol for torque is typically τ, the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M.
A constant-speed propeller is a variable-pitch aircraft propeller that automatically changes its blade pitch in order to maintain a chosen rotational speed. The power delivered is proportional to the arithmetic product of rotational speed and torque, and the propeller operation places emphasis on torque. The operation better suits modern engines, particularly supercharged and gas turbine types.
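To illustrate the torque/speed trade made by the reduction gear, here is a small worked example (the RPM, torque, and gear-ratio figures are invented for illustration, not taken from any particular engine): shaft power P = torque × angular speed stays roughly constant across an ideal gearbox, while torque rises as rotational speed falls.

```python
# Illustrative only: made-up numbers showing that an ideal reduction gear
# trades rotational speed for torque at constant shaft power.
import math

def shaft_power_kw(torque_nm, rpm):
    omega = 2 * math.pi * rpm / 60.0      # angular speed in rad/s
    return torque_nm * omega / 1000.0     # power in kW

turbine_rpm, turbine_torque = 30000, 500   # high speed, low torque (hypothetical)
gear_ratio = 15                            # hypothetical reduction ratio
prop_rpm = turbine_rpm / gear_ratio        # 2000 RPM at the propeller
prop_torque = turbine_torque * gear_ratio  # 7500 N*m, ignoring gear losses

print(shaft_power_kw(turbine_torque, turbine_rpm))  # ~1570.8 kW at the turbine
print(shaft_power_kw(prop_torque, prop_rpm))        # same power at the propeller
```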
Unlike the small diameter fans used in turbofan jet engines, the propeller has a large diameter that lets it accelerate a large volume of air. This permits a lower airstream velocity for a given amount of thrust. As it is more efficient at low speeds to accelerate a large amount of air by a small degree than a small amount of air by a large degree, a low disc loading (thrust per disc area) increases the aircraft's energy efficiency, and this reduces the fuel use.
Propellers lose efficiency as aircraft speed increases, so turboprops are normally not used on high-speed aircraft above Mach 0.6–0.7. However, propfan engines, which are very similar to turboprop engines, can cruise at flight speeds approaching Mach 0.75. To increase propeller efficiency, a mechanism can be used to alter the pitch relative to the airspeed. A variable-pitch propeller, also called a controllable-pitch propeller, can also be used to generate negative thrust while decelerating on the runway. Additionally, in the event of an engine failure, the pitch can be adjusted so that the blades are aligned with the airflow (called feathering), thus minimizing the drag of the non-functioning propeller.
While most modern turbojet and turbofan engines use axial-flow compressors, turboprop engines usually contain at least one stage of centrifugal compression. Centrifugal compressors have the advantage of being simple and lightweight, at the expense of a streamlined shape.
While the power turbine may be integral with the gas generator section, many turboprops today feature a free power turbine on a separate coaxial shaft. This enables the propeller to rotate freely, independent of compressor speed. Residual thrust on a turboshaft is avoided by further expansion in the turbine system and/or truncating and turning the exhaust 180 degrees, to produce two opposing jets. Apart from the above, there is very little difference between a turboprop and a turboshaft.
Alan Arnold Griffith had published a paper on turbine design in 1926. Subsequent work at the Royal Aircraft Establishment investigated axial turbine designs that could be used to supply power to a shaft and thence a propeller. From 1929, Frank Whittle began work on centrifugal turbine designs that would deliver pure jet thrust.
The world's first turboprop was designed by the Hungarian mechanical engineer György Jendrassik. Jendrassik published a turboprop idea in 1928, and on 12 March 1929 he patented his invention. In 1938, he built a small-scale (100 hp; 74.6 kW) experimental gas turbine. The larger Jendrassik Cs-1, with a predicted output of 1,000 bhp, was produced and tested at the Ganz Works in Budapest between 1937 and 1941. It was of axial-flow design with 15 compressor and 7 turbine stages, an annular combustion chamber and many other modern features. First run in 1940, combustion problems limited its output to 400 bhp. In 1941, the engine was abandoned due to the war, and the factory was turned over to conventional engine production. The world's first turboprop engine to go into mass production was designed by a German engineer, Max Adolf Mueller, in 1942.
The first mention of turboprop engines in the general public press was in the February 1944 issue of the British aviation publication Flight, which included a detailed cutaway drawing of what a possible future turboprop engine could look like. The drawing was very close to what the future Rolls-Royce Trent would look like. The first British turboprop engine was the Rolls-Royce RB.50 Trent, a converted Derwent II fitted with reduction gear and a Rotol 7 ft 11 in (2.41 m) five-bladed propeller. Two Trents were fitted to Gloster Meteor EE227 (the sole "Trent-Meteor"), which thus became the world's first turboprop-powered aircraft, albeit a test-bed not intended for production. It first flew on 20 September 1945. From their experience with the Trent, Rolls-Royce developed the Rolls-Royce Clyde, the first turboprop engine to be fully type certificated for military and civil use, and the Dart, which became one of the most reliable turboprop engines ever built. Dart production continued for more than fifty years. The Dart-powered Vickers Viscount was the first turboprop aircraft of any kind to go into production and sold in large numbers. It was also the first four-engined turboprop. Its first flight was on 16 July 1948. The world's first single-engined turboprop aircraft was the Armstrong Siddeley Mamba-powered Boulton Paul Balliol, which first flew on 24 March 1948.
The Soviet Union built on German World War II development by Junkers Motorenwerke, while BMW, Heinkel-Hirth and Daimler-Benz also developed and partially tested designs. While the Soviet Union had the technology to create the airframe for a jet-powered strategic bomber comparable to Boeing's B-52 Stratofortress, they instead produced the Tupolev Tu-95 Bear, powered with four Kuznetsov NK-12 turboprops, mated to eight contra-rotating propellers (two per nacelle) with supersonic tip speeds to achieve maximum cruise speeds in excess of 575 mph, faster than many of the first jet aircraft and comparable to jet cruising speeds for most missions. The Bear would serve as their most successful long-range combat and surveillance aircraft and a symbol of Soviet power projection through the end of the 20th century. The USA would incorporate contra-rotating turboprop engines, such as the ill-fated twin-turbine Allison T40 (essentially a twinned-up pair of Allison T38 turboprop engines driving contra-rotating propellers), into a series of experimental aircraft during the 1950s, with aircraft powered by the T40, like the Convair R3Y Tradewind flying boat, never entering U.S. Navy service.
The first American turboprop engine was the General Electric XT31, first used in the experimental Consolidated Vultee XP-81. The XP-81 first flew in December 1945, the first aircraft to use a combination of turboprop and turbojet power. The technology of Allison's earlier T38 design evolved into the Allison T56, with quartets of T56s being used to power the Lockheed Electra airliner, its military maritime patrol derivative the P-3 Orion, and the widely produced C-130 Hercules military transport aircraft. One of the most produced turboprop engines used in civil aviation is the Pratt & Whitney Canada PT6 engine.
The first turbine-powered, shaft-driven helicopter was the Kaman K-225, a development of Charles Kaman's K-125 synchropter, which used a Boeing T50 turboshaft engine to power it on 11 December 1951.
Compared to turbofans, turboprops are most efficient at flight speeds below 725 km/h (450 mph; 390 knots) because the jet velocity of the propeller (and exhaust) is relatively low. Modern turboprop airliners operate at nearly the same speed as small regional jet airliners but burn two-thirds of the fuel per passenger. However, compared to a turbojet (which can fly at high altitude for enhanced speed and fuel efficiency) a propeller aircraft has a lower ceiling.
The most common application of turboprop engines in civilian aviation is in small commuter aircraft, where their greater power and reliability offset their higher initial cost and fuel consumption. Turboprop-powered aircraft have become popular for bush airplanes such as the Cessna Caravan and Quest Kodiak, as jet fuel is easier to obtain in remote areas than avgas. Due to the high price of turboprop engines, they are mostly used where high-performance short-takeoff-and-landing (STOL) capability and efficiency at modest flight speeds are required.
Turboprop engines are generally used on small subsonic aircraft, but the Tupolev Tu-114 can reach 470 kt (870 km/h, 541 mph). Large military and civil aircraft, such as the Lockheed L-188 Electra and the Tupolev Tu-95, have also used turboprop power. The Airbus A400M is powered by four Europrop TP400 engines, which are the second most powerful turboprop engines ever produced, after the eleven megawatt-output Kuznetsov NK-12.
In 2017, the most widespread turboprop airliners in service were the ATR 42/72 (950 aircraft), Bombardier Q400 (506), Dash 8-100/200/300 (374), Beechcraft 1900 (328), de Havilland Canada DHC-6 Twin Otter (270), and Saab 340 (225). Less widespread and older airliners include the BAe Jetstream 31, Embraer EMB 120 Brasilia, Fairchild Swearingen Metroliner, Dornier 328, Saab 2000, Xian MA60, MA600 and MA700, and Fokker 27 and 50.
Turboprop business aircraft include the Piper Meridian, Socata TBM, Pilatus PC-12, Piaggio P.180 Avanti, Beechcraft King Air and Super King Air. In April 2017, there were 14,311 business turboprops in the worldwide fleet.
Between 2012 and 2016, the ATSB observed 417 events with turboprop aircraft, 83 per year, over 1.4 million flight hours: 2.2 per 10,000 hours. Three were “high risk” involving engine malfunction and unplanned landing in single‑engine Cessna 208 Caravans, four “medium risk” and 96% “low risk”. Two occurrences resulted in minor injuries due to engine malfunction and terrain collision in agricultural aircraft and five accidents involved aerial work: four in agriculture and one in an air ambulance.
Jane's All the World's Aircraft, 2005–2006.
|Manufacturer||Designation||Dry weight (kg)||Takeoff rating (kW)||Application|
|DEMC||WJ5E||720||2130||Harbin SH-5, Xi'an Y-7|
|Europrop International||TP400-D6||1800||8203||Airbus A400M|
|General Electric||CT7-9||365||1447||CASA/IPTN CN-235, Let L-610, Saab 340, Sukhoi Su-80|
|General Electric||H80 Series||200||550 - 625||Thrush Model 510, Let 410NG, Let L-410 Turbolet UVP-E, CAIGA Primus 150, Nextant G90XT|
|General Electric||T64-P4D||538||2535||Aeritalia G.222, de Havilland Canada DHC-5 Buffalo, Kawasaki P-2J|
|Honeywell||TPE331 Series||150 - 275||478 - 1650||Aero/Rockwell Turbo Commander 680/690/840/960/1000, Antonov An-38, Ayres Thrush, BAe Jetstream 31/32, BAe Jetstream 41, CASA C-212 Aviocar, Cessna 441 Conquest II, Dornier Do 228, Fairchild Swearingen Metroliner, General Atomics MQ-9 Reaper, Grumman, Mitsubishi MU-2, North American Rockwell OV-10 Bronco, Piper PA-42 Cheyenne, RUAG Do 228NG, Short SC.7 Skyvan, Short Tucano, Swearingen Merlin|
|Honeywell||LTP 101-700||147||522||Air Tractor AT-302, Piaggio P.166|
|KKBM||NK-12MV||1900||11033||Antonov An-22, Tupolev Tu-95, Tupolev Tu-114|
|Klimov||TV7-117S||530||2100||Ilyushin Il-112, Ilyushin Il-114|
|Progress||AI20M||1040||2940||Antonov An-12, Antonov An-32, Ilyushin Il-18|
|Progress||AI24T||600||1880||Antonov An-24, Antonov An-26, Antonov An-30|
|LHTEC||LHTEC T800||517||2013||AgustaWestland Super Lynx 300 (CTS800-4N), AgustaWestland AW159 Lynx Wildcat (CTS800-4N), Ayres LM200 Loadmaster (LHTEC CTP800-4T) (aircraft not built), Sikorsky X2 (T800-LHT-801), TAI/AgustaWestland T-129 (CTS800-4A)|
|OMKB||TVD-20||240||1081||Antonov An-3, Antonov An-38|
|Pratt & Whitney Canada||PT-6 Series||149 - 260||430 - 1500||Air Tractor AT-502, Air Tractor AT-602, Air Tractor AT-802, Beechcraft Model 99, Beechcraft King Air, Beechcraft Super King Air, Beechcraft 1900, Beechcraft T-6 Texan II, Cessna 208 Caravan, Cessna 425 Corsair/Conquest I, de Havilland Canada DHC-6 Twin Otter, Harbin Y-12, Embraer EMB 110 Bandeirante, Let L-410 Turbolet, Piaggio P.180 Avanti, Pilatus PC-6 Porter, Pilatus PC-12, Piper PA-42 Cheyenne, Piper PA-46-500TP Meridian, Shorts 360, Daher TBM 700, Daher TBM 850, Daher TBM 900, Embraer EMB 314 Super Tucano|
|Pratt & Whitney Canada||PW120||418||1491||ATR 42-300/320|
|Pratt & Whitney Canada||PW121||425||1603||ATR 42-300/320, Bombardier Dash 8 Q100|
|Pratt & Whitney Canada||PW123 C/D||450||1603||Bombardier Dash 8 Q300|
|Pratt & Whitney Canada||PW126 C/D||450||1950||BAe ATP|
|Pratt & Whitney Canada||PW127||481||2051||ATR 72|
|Pratt & Whitney Canada||PW150A||717||3781||Bombardier Dash 8 Q400|
|Rolls-Royce||Dart Mk 536||569||1700||Avro 748, Fokker F27, Vickers Viscount|
|Rolls-Royce||Tyne 21||569||4500||Aeritalia G.222, Breguet Atlantic, Transall C-160|
|Rolls-Royce||250-B17||88.4||313||Fuji T-7, Britten-Norman Turbine Islander, O&N Cessna 210, Soloy Cessna 206, Propjet Bonanza|
|Rolls-Royce||Allison T56||828 - 880||3424 - 3910||P-3 Orion, E-2 Hawkeye, C-2 Greyhound, C-130 Hercules|
|Rolls-Royce||AE2100D2, D3||702||3424||Alenia C-27J Spartan, Lockheed Martin C-130J Super Hercules|
|Turbomeca||Arrius 1D||111||313||Socata TB 31 Omega|
|Walter||M601 Series||200||560||Let L-410 Turbolet, Aerocomp Comp Air 10 XL, Aerocomp Comp Air 7, Ayres Thrush, Dornier Do 28, Lancair Propjet, Let Z-37T, Let L-420, Myasishchev M-101T, PAC FU-24 Fletcher, Progress Rysachok, PZL-106 Kruk, PZL-130 Orlik, SM-92T Turbo Finist|
A jet engine is a type of reaction engine discharging a fast-moving jet that generates thrust by jet propulsion. This broad definition includes airbreathing jet engines. In general, jet engines are combustion engines.
The turbofan or fanjet is a type of airbreathing jet engine that is widely used in aircraft propulsion. The word "turbofan" is a portmanteau of "turbine" and "fan": the turbo portion refers to a gas turbine engine which achieves mechanical energy from combustion, and the fan, a ducted fan that uses the mechanical energy from the gas turbine to accelerate air rearwards. Thus, whereas all the air taken in by a turbojet passes through the turbine, in a turbofan some of that air bypasses the turbine. A turbofan thus can be thought of as a turbojet being used to drive a ducted fan, with both of these contributing to the thrust.
An aircraft engine is a component of the propulsion system for an aircraft that generates mechanical power. Aircraft engines are almost always either lightweight piston engines or gas turbines, except for small multicopter UAVs which are almost always electric aircraft.
A jet aircraft is an aircraft propelled by jet engines.
Thrust-specific fuel consumption (TSFC) is the fuel efficiency of an engine design with respect to thrust output. TSFC may also be thought of as fuel consumption (grams/second) per unit of thrust. It is thus thrust-specific, meaning that the fuel consumption is divided by the thrust.
A motorjet is a rudimentary type of jet engine which is sometimes referred to as thermojet, a term now commonly used to describe a particular and completely unrelated pulsejet design.
An afterburner is a component present on some jet engines, mostly those used on military supersonic aircraft. Its purpose is to provide an increase in thrust, usually for supersonic flight, takeoff, and combat situations. Afterburning is achieved by injecting additional fuel into the jet pipe downstream of the turbine. Afterburning significantly increases thrust without the weight of an additional engine, but at the cost of very high fuel consumption and decreased fuel efficiency, limiting its practical use to short bursts.
The Rolls-Royce RB.80 Conway was the first turbofan in the world to enter service. Development started at Rolls-Royce in the 1940s, but it was used only briefly in the late 1950s and early 1960s before other turbofan designs were introduced that replaced it. The Conway powered versions of the Handley Page Victor, Vickers VC10, Boeing 707-420 and Douglas DC-8-40. The name "Conway" is the English spelling of the River Conwy, in Wales, in keeping with Rolls' use of river names for gas turbine engines.
A turboshaft engine is a form of gas turbine that is optimized to produce shaft power rather than jet thrust.
This article outlines the important developments in the history of the development of the air-breathing (duct) jet engine. Although the most common type, the gas turbine powered jet engine, was certainly a 20th-century invention, many of the needed advances in theory and technology leading to this invention were made well before this time.
The General Electric CJ805 is a jet engine which was developed by GE Aviation in the late 1950s. It was a civilian version of the J79 and differed only in detail. It was developed in two versions. The basic CJ805-3 was a turbojet and powered the Convair 880, while CJ805-23, a turbofan derivative, powered the Convair 990 airliners.
The Rolls-Royce RB.50 Trent was the first Rolls-Royce turboprop engine.
A number of aircraft have been claimed to be the fastest propeller-driven aircraft. This article presents the current record holders for several sub-classes of propeller-driven aircraft that hold recognized, documented speed records in level flight. Fédération Aéronautique Internationale (FAI) records are the basis for this article. Other contenders and their claims are discussed, but only those made under controlled conditions and measured by outside observers. Pilots during World War II sometimes claimed to have reached supersonic speeds in propeller-driven fighters during emergency dives, but these speeds are not included as accepted records. Neither are speeds recorded in a dive during high-speed tests with the Supermarine Spitfire, including Squadron Leader J.R. Tobin's 606 mph in a 45° dive in a Mark XI Spitfire and Squadron Leader Anthony F. Martindale's breaking 620 mph in the same aircraft in April 1944. Flight Lieutenant Edward Powles' 690 mph in Spitfire PR.XIX PS852 during an emergency dive while carrying out spying flights over China on 5 February 1952 is also discounted. This would otherwise be the highest speed ever recorded for a piston-engined aircraft.
The air turborocket is a form of combined-cycle jet engine. The basic layout includes a gas generator, which produces high pressure gas, that drives a turbine/compressor assembly which compresses atmospheric air into a combustion chamber. This mixture is then combusted before leaving the device through a nozzle and creating thrust.
An airbreathing jet engine is a jet engine propelled by a jet of hot exhaust gases formed from air that is forced into the engine by several stages of centrifugal, axial or ram compression, which is then heated and expanded through a nozzle. They are typically gas turbine engines. The majority of the mass flow through an airbreathing jet engine is provided by air taken from outside of the engine and heated internally, using energy stored in the form of fuel.
The jet engine has a long history, from early steam devices in the 2nd century BC to the modern turbofans and scramjets.
A powered aircraft is an aircraft that uses onboard propulsion with mechanical power generated by an aircraft engine of some kind.
Like many other languages, English has wide variation in pronunciation, both historically and from dialect to dialect. In general, however, the regional dialects of English share a largely similar (but not identical) phonological system. Among other things, most dialects have vowel reduction in unstressed syllables and a complex set of phonological features that distinguish fortis and lenis consonants (stops, affricates, and fricatives). Most dialects of English preserve the consonant /w/ (spelled w) and many preserve /θ, ð/ (spelled th), while most other Germanic languages have shifted them to /v/ and /t, d/: compare English will /wɪl/ and then /ðɛn/ with German will [vɪl] ("want") and denn [dɛn] ("because").
Phonological analysis of English often concentrates on or uses, as a reference point, one or more of the prestige or standard accents, such as Received Pronunciation for England, General American for the United States, and General Australian for Australia. Nevertheless, many other dialects of English are spoken, which have developed independently from these standardized accents, particularly regional dialects. Information about these standardized accents functions only as a limited guide to all of English phonology, which one can later expand upon once one becomes more familiar with some of the many other dialects of English that are spoken.
- 1 Phonemes
- 2 Lexical stress
- 3 Phonotactics
- 4 Prosody
- 5 History of English pronunciation
- 6 See also
- 7 References
- 8 Bibliography
- 9 External links
A phoneme of a language or dialect is an abstraction of a speech sound or of a group of different sounds which are all perceived to have the same function by speakers of that particular language or dialect. For example, the English word "through" consists of three phonemes: the initial "th" sound, the "r" sound, and an "oo" vowel sound. Notice that the phonemes in this and many other English words do not always correspond directly to the letters used to spell them (English orthography is not as strongly phonemic as that of many other languages).
The number and distribution of phonemes in English vary from dialect to dialect, and also depend on the interpretation of the individual researcher. The number of consonant phonemes is generally put at 24 (or slightly more). The number of vowels is subject to greater variation; in the system presented on this page there are 20 vowel phonemes in Received Pronunciation, 14–16 in General American and 20–21 in Australian English. The pronunciation keys used in dictionaries generally contain a slightly greater number of symbols than this, to take account of certain sounds used in foreign words and certain noticeable distinctions that may not be, strictly speaking, phonemic.
The following table shows the 24 consonant phonemes found in most dialects of English, in addition to /x/, whose distribution is more limited. Fortis consonants are always voiceless, aspirated in syllable onset (except in clusters beginning with /s/), and sometimes also glottalized to an extent in syllable coda (most likely to occur with /t/, see T-glottalization), while lenis consonants are always unaspirated and un-glottalized, and generally partially or fully voiced.
- Most varieties of English have syllabic consonants in some words, principally [l̩, m̩, n̩], for example at the end of bottle, rhythm and button. In such cases, no phonetic vowel is pronounced between the last two consonants, and the last consonant forms a syllable on its own. Syllabic consonants are generally transcribed with a vertical line under the consonant letter, so that phonetic transcription of bottle would be [ˈbɒtl̩], [ˈbɑɾl̩], or [ˈbɔɾl̩] in RP, GA, and Australian respectively, and for button [ˈbʌʔn̩]. In theory, such consonants could be analyzed as individual phonemes. However, this would add several extra consonant phonemes to the inventory for English, and phonologists prefer to identify syllabic nasals and liquids phonemically as /əC/. Thus button is phonemically /ˈbʌtən/ or /ˈbɐtən/ and bottle is phonemically /ˈbɒtəl/, /ˈbɑtəl/, or /ˈbɔtəl/.
- The voiceless velar fricative /x/ is mainly used in Hiberno-, Scottish, South African and Welsh English; words with /x/ in Scottish accents tend to be pronounced with /k/ in other dialects. The velar fricative sometimes appears in recent loanwords such as chutzpah. Many speakers of White South African English realize /x/ as uvular [χ].
- In some conservative accents in Scotland, Ireland, the southern United States, and New England, the digraph ⟨wh⟩ in words like which and whine represents a voiceless w sound [ʍ], a voiceless labiovelar fricative or approximant, which contrasts with the voiced w of witch and wine. In most dialects, this sound is lost, and is pronounced as a voiced w (the wine–whine merger). Phonemically this sound is analysed as a consonant cluster /hw/, rather than as a separate phoneme */ʍ/. Thus which and whine are transcribed phonemically as /hwɪtʃ/ and /hwaɪn/. This does not mean that such speakers actually pronounce [h] followed by [w]: the phonemic transcription /hw/ is simply a convenient way of representing a single sound [ʍ] without analysing such dialects as having an extra phoneme.
- Similarly, the sound at the beginning of huge in most accents[verification needed] is a voiceless palatal fricative [ç], but this is analysed phonemically as the consonant cluster /hj/ so that huge is transcribed /hjuːdʒ/. As with /hw/, this does not mean that speakers pronounce [h] followed by [j]; the phonemic transcription /hj/ is simply a convenient way of representing the single sound [ç]. The yod-dropping found in Norfolk dialect means that the traditional Norfolk pronunciation of huge is [hʊudʒ] and not [çuːdʒ].
- This phoneme is conventionally transcribed with the basic Latin letter ⟨r⟩ (the IPA symbol for the alveolar trill), even though its pronunciation is usually a postalveolar approximant [ɹ̠]. The trill does exist but it is rare, found only in Scottish dialects and sporadically in Received Pronunciation preceding a stressed vowel in highly emphatic speech or when placing special emphasis on a word. See Pronunciation of English /r/.
- The postalveolar consonants /tʃ, dʒ, ʃ, ʒ, r/ are also often slightly labialized: [tʃʷ dʒʷ ʃʷ ʒʷ ɹ̠ʷ].
The following table shows typical examples of the occurrence of the above consonant phonemes in words.
- The pronunciation of /l/ varies by dialect:
- Received Pronunciation has two main allophones of /l/: the clear or plain [l], and the dark or velarized [ɫ]. The clear variant is used before vowels when they are in the same syllable, and the dark variant when the /l/ precedes a consonant or is in syllable-final position before silence.
- In South Wales, Ireland, and the Caribbean, /l/ is often always clear, and in North Wales, Scotland, Australia, New Zealand and Canada it is always dark.
- In General American, /l/ is generally dark, but to varying degrees: before stressed vowels it is neutral or only slightly velarized. In southern U.S. accents it is noticeably clear between vowels, and in some other positions.
- In urban accents across England and Scotland, as well as New Zealand and some parts of the United States, /l/ can be pronounced as an approximant or semivowel ([w], [o], [ʊ]) at the end of a syllable (l-vocalization).
- Depending on dialect, /r/ has at least the following allophones in varieties of English around the world:
- postalveolar approximant [ɹ̠] (the most common realization of the /r/ phoneme, occurring in most dialects, RP and General American included)
- retroflex approximant [ɻ] (occurs in most Irish dialects and some American dialects)
- labiodental approximant [ʋ] (occurs in south-east England and some London accents; known as r-labialization)
- alveolar flap [ɾ] (occurs in most Scottish and some South African dialects, some conservative dialects in England and Ireland; not to be confused with flapping of /t/ and /d/)
- alveolar trill [r] (occurs in some very conservative Scottish dialects)
- voiced uvular fricative [ʁ] (occurs in northern Northumbria, largely disappeared; known as the Northumbrian burr)
- In most dialects /r/ is labialized [ɹ̠ʷ] in many positions, as in reed [ɹ̠ʷiːd] and tree [tɹ̠̊ʷiː]; in the latter case, the /t/ may be slightly labialized as well.
- In some rhotic accents, such as General American, /r/ when not followed by a vowel is realized as an r-coloring of the preceding vowel or its coda: nurse [ˈnɝs], butter [ˈbʌtɚ].
- The distinctions between the nasals are neutralized in some environments. For example, before a final /p/, /t/ or /k/ there is nearly always only one nasal sound that can appear in each case: [m], [n] or [ŋ] respectively (as in the words limp, lint, link – note that the n of link is pronounced [ŋ]). This effect can even occur across syllable or word boundaries, particularly in stressed syllables: synchrony is pronounced [ˈsɪŋkɹəni] whereas synchronic may be pronounced either as [sɪŋˈkɹɒnɪk] or as [sɪnˈkɹɒnɪk]. For other possible syllable-final combinations, see § Coda in the Phonotactics section below.
In most dialects, the fortis stops and affricate /p, t, tʃ, k/ have various different allophones, and are distinguished from the lenis stops and affricate /b, d, dʒ, ɡ/ by several phonetic features.
- The allophones of the fortes /p, t, tʃ, k/ include:
- aspirated [pʰ, tʰ, kʰ] when they occur at the beginning of a word, as in tomato, trip, or at the beginning of a stressed syllable in the middle of a word, as in potato. They are unaspirated [p, t, k] after /s/ within the same syllable, as in stan, span, scan, and at the ends of syllables, as in mat, map, mac. The voiceless fricatives are always unaspirated, but a notable exception occurs in English-speaking areas of Wales, where they are often aspirated.
- In many accents of English, fortis stops /p, t, k, tʃ/ are glottalized in some positions. This may be heard either as a glottal stop preceding the oral closure ("pre-glottalization" or "glottal reinforcement") or as a substitution of the glottal stop [ʔ] for the oral stop (glottal replacement). /tʃ/ can only be pre-glottalized. Pre-glottalization normally occurs in British and American English when the fortis consonant phoneme is followed by another consonant or when the consonant is in final position. Thus football and catching are often pronounced [ˈfʊʔtbɔːl] and [ˈkæʔtʃɪŋ], respectively. Glottal replacement often happens in cases such as those just given, so that football is frequently pronounced [ˈfʊʔbɔːl]. In addition, however, glottal replacement is increasingly common in British English when /t/ occurs between vowels if the preceding vowel is stressed; thus getting better is often pronounced by younger speakers as [ˈɡeʔɪŋ ˌbeʔə]. Such t-glottalization also occurs in many British regional accents, including Cockney, where it can also occur at the end of words, and where /p/ and /k/ are sometimes treated the same way.
- Among stops, both fortes and lenes:
- May have no audible release [p̚, b̚, t̚, d̚, k̚, ɡ̚] in the word-final position. These allophones are more common in North America than Great Britain.
- Always have a 'masked release' before another plosive or affricate (as in rubbed [ˈrʌˑb̚d̥]), i.e. the release of the first stop is made after the closure of the second stop. This also applies when the following stop is homorganic (articulated in the same place), as in top player. A notable exception to this is Welsh English, where stops are usually released in this environment.
- The affricates /tʃ, dʒ/ have a mandatory fricative release in all environments.
- Very often in the United States and Canada, and less frequently in Australia and New Zealand, both /t/ and /d/ can be pronounced as a voiced flap [ɾ] in certain positions: when they come between a preceding stressed vowel (possibly with intervening /r/) and precede an unstressed vowel or syllabic /l/. Examples include water, bottle, petal, peddle (the last two words sound alike when flapped). The flap may even appear at word boundaries, as in put it on. When the combination /nt/ appears in such positions, some American speakers pronounce it as a nasalized flap that may become indistinguishable from /n/, so winter [ˈwɪɾ̃ɚ] may be pronounced similarly or identically to winner [ˈwɪnɚ].
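Because the flapping rule is stated purely in terms of the surrounding segments, it can be sketched as a small procedure. The Python fragment below is a rough illustration only; the token format (a trailing "1" marking a stressed vowel, "0" an unstressed vowel or syllabic /l/) is an assumption of this sketch, not a standard transcription.

```python
# Sketch of the flapping rule described above: /t/ or /d/ between a preceding
# stressed vowel (possibly with an intervening /r/) and a following unstressed
# vowel or syllabic /l/ surfaces as the flap [ɾ].
def is_stressed(tok):
    return tok.endswith("1")

def is_weak(tok):
    return tok.endswith("0")   # unstressed vowel or syllabic /l/

def flap(tokens):
    out = list(tokens)
    for i, tok in enumerate(tokens):
        if tok not in ("t", "d"):
            continue
        j = i - 1
        if j >= 0 and tokens[j] == "r":   # allow an intervening /r/
            j -= 1
        before_ok = j >= 0 and is_stressed(tokens[j])
        after_ok = i + 1 < len(tokens) and is_weak(tokens[i + 1])
        if before_ok and after_ok:
            out[i] = "ɾ"
    return out

print(flap(["w", "ɑ1", "t", "ər0"]))   # water: the /t/ surfaces as [ɾ]
print(flap(["ɑ1", "t", "ɑ1"]))         # no flap: the following vowel is stressed
```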
English has a particularly large number of vowel phonemes, and on top of that the vowels of English differ considerably between dialects. Because of this, corresponding vowels may be transcribed with various symbols depending on the dialect under consideration. When considering English as a whole, lexical sets are often used, each named by a word containing the vowel or vowels in question. For example, the LOT set consists of words which, like lot, have /ɒ/ in Received Pronunciation and /ɑ/ in General American. The "LOT vowel" then refers to the vowel that appears in those words in whichever dialect is being considered, or (at a greater level of abstraction) to a diaphoneme which transcends all dialects. A commonly used system of lexical sets, due to John C. Wells, is presented below; for each set, the corresponding phonemes are given for RP (first column) and General American (second column), using the notation that will be used on this page.
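Because a lexical set is essentially a key that maps to a different phoneme in each dialect, the idea can be modelled as a small lookup table. The sketch below is illustrative only; it includes just the sets whose RP and General American values are stated in the surrounding text, and the dictionary name and layout are choices of the example rather than any standard resource.

```python
# Illustrative model of Wells lexical sets: one set, one vowel per dialect.
LEXICAL_SETS = {
    #  set        RP            General American
    "LOT":    {"RP": "ɒ",  "GA": "ɑ"},
    "CLOTH":  {"RP": "ɒ",  "GA": "ɔ"},   # "generally /ɔ/ in the CLOTH words"
    "BATH":   {"RP": "ɑː", "GA": "æ"},
    "GOAT":   {"RP": "əʊ", "GA": "oʊ"},
}

def vowel_of(lexical_set: str, dialect: str) -> str:
    """Return the vowel used in the given lexical set for the given dialect."""
    return LEXICAL_SETS[lexical_set][dialect]

print(vowel_of("LOT", "RP"), vowel_of("LOT", "GA"))   # ɒ ɑ
```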
For a table that shows the pronunciations of these vowels in a wider range of English dialects, see IPA chart for English dialects.
The following tables show the vowel phonemes of three standard varieties of English. The notation system used here for Received Pronunciation (RP) is fairly standard; the others less so. For different ways of transcribing General American, see § Transcription variants below. The feature descriptions given here (front, close, etc.) are abstracted somewhat; the actual pronunciations of these vowels are somewhat more accurately conveyed by the IPA symbols used (see Vowel for a chart indicating the meanings of these symbols; though note also the points listed below the following tables).
- RP transcriptions use /e/ rather than /ɛ/ largely for convenience and historical tradition; it does not necessarily represent a different sound from the General American phoneme, although the RP vowel may be described as somewhat less open than the American one.
- Although the notation /ʌ/ is used for the vowel of STRUT in RP and GenAm, the actual pronunciation is closer to a near-open central vowel [ɐ]. The symbol ⟨ʌ⟩ continues to be used for reasons of tradition (it was historically a back vowel) and because it is still back in some other varieties.
The differences between these tables can be explained as follows:
- In General American, the vowels [ə], [ʌ] and [ɜ] may be considered allophones of a single phoneme, since they occur in complementary distribution: [ə] in unstressed syllables (also r-colored [ɚ]), [ʌ] in stressed syllables not before [r], and [ɜ] in stressed syllables before [r] (that is r-colored [ɝ]).
- General American lacks a phoneme corresponding to RP /ɒ/ (LOT, CLOTH), instead using /ɑ/ in the LOT words and generally /ɔ/ in the CLOTH words. In a few North American accents, namely in Eastern New England (Boston), Western Pennsylvania (Pittsburgh), and to some degree in the Pacific Northwest (Seattle, Portland) and Canadian English, LOT words do not have the vowel of PALM (the father–bother merger has not occurred) but instead merge with CLOTH/THOUGHT.
- The different notations used for the vowel of GOAT in RP and General American (/əʊ/ and /oʊ/) reflect a difference in the most common phonetic realizations of that vowel.
- The triphthongs given in the RP table are usually regarded as sequences of two phonemes (a diphthong plus /ə/); however, in RP, these sequences frequently undergo smoothing into single diphthongs or even monophthongs.
- The different notations used here for some of the Australian vowels reflect the phonetic realization of those vowels in Australian: a central [ʉː] rather than [uː] in GOOSE, a more closed [e] rather than [ɛ] in DRESS, an open-mid [ɔ] rather than traditional RP's [ɒ] in LOT and CLOTH, a close-mid [oː] rather than mid [ɔː] in THOUGHT, NORTH and FORCE (here the difference lies almost entirely in transcription rather than pronunciation), an opener [ɐ] rather than somewhat closer [ʌ] in STRUT, a fronted [ɐː] rather than [ɑː] in CALM and START, and somewhat different pronunciations of most of the diphthongs. Note that central [ʉː] in GOOSE and open-mid [ɔ] in LOT are possible realizations in modern RP; in the case of the latter vowel, it is even more common than the traditional open [ɒ].
- The Australian monophthong /eː/ corresponds to the RP diphthong /eə/ (SQUARE).
- Australian has the bad–lad split, with distinctive short and long variants in various words of the TRAP set: a long phoneme /æː/ in words like bad contrasts with a short /æ/ in words like lad. (A similar split is found in the accents of some speakers in southern England.)
- The vowel /ʊə/ is often omitted from descriptions of Australian, as for most speakers it has split into the long monophthong /oː/ (e.g. poor, sure) or the sequence /ʉː.ə/ (e.g. cure, lure).
Other points to be noted are these:
- The vowel /æ/ is coming to be pronounced more open (approaching [a]) by many modern RP speakers. In American speech, however, there is a tendency for it to become more closed, tenser and even diphthongized (to something like [eə]), particularly in certain environments, such as before a nasal consonant. Some American accents, for example those of New York City, Philadelphia and Baltimore, make a marginal phonemic distinction between /æ/ and /eə/, although the two occur largely in mutually exclusive environments. See æ-tensing.
- A significant number of words (the BATH group) have /æ/ in General American, but /ɑː/ in RP (and mostly /ɐː/ in Australian).
- Most speakers in Canada outside of the Maritime Provinces, and some speakers in the United States, do not distinguish /ɑ/ from /ɔ/, except before /r/ (see cot–caught merger). However, evidence by Labov et al. suggests that in dialects without the merger /ɑ/ and /ɔ/ may not actually be assonant, especially in dialects with the horse–hoarse merger.
- In General American and Canadian (which are rhotic accents, where /r/ is pronounced in positions where it does not precede a vowel), many of the vowels can be r-colored by way of realization of a following /r/. This is often transcribed phonetically using a vowel symbol with an added retroflexion diacritic [ ˞ ]; thus the symbol [ɚ] has been created for an r-colored schwa (sometimes called schwar) as in LETTER, and the vowel of START can be modified to make [ɑ˞] so that the word start may be transcribed [stɑ˞t]. Alternatively, the START vowel might be written [stɑɚt] to indicate an r-colored offglide. The vowel /ɜ/ (as in NURSE) is generally always r-colored in these dialects, and this can be written [ɝ] (or as a syllabic [ɹ̩]).
- In RP and other dialects, many words from the CURE group are coming to be pronounced by an increasing number of speakers with the NORTH vowel (so sure is often pronounced like shore). Also the RP vowels /ɛə/ and /ʊə/ may be monophthongized to [ɛː] and [oː] respectively.
- The vowels of FLEECE and GOOSE are commonly pronounced as narrow diphthongs, approaching [ɪi] and [ʊu], in RP. Near-RP speakers may have particularly marked diphthongization of the type [əi] and [əu ~ əʉ], respectively. In General American, the pronunciation varies between a monophthong and a diphthong.
Allophones of vowels
Listed here are some of the significant cases of allophony of vowels found within standard English dialects.
- There is a tendency for many vowels to be pronounced with greater length in open syllables than closed syllables, and with greater length in syllables ending with a voiced consonant than with a voiceless one. For example, the /aɪ/ in advise is longer than that in advice.
- In many accents of English, tense vowels undergo breaking before /l/, resulting in pronunciations like [piəl] for peel, [puəl] for pool, [peəl] for pail, and [poəl] for pole.
- In RP, the vowel /əʊ/ may be pronounced more back, as [ɒʊ], before syllable-final /l/, as in goal. In Australian English the vowel /əʉ/ is similarly backed to [ɔʊ] before /l/. A similar phenomenon may occur in Southern American English.
- The vowel /ə/ is often pronounced [ɐ] in open syllables.
- The PRICE and MOUTH diphthongs may be pronounced with a less open starting point when followed by a voiceless consonant; this is chiefly a feature of Canadian speech (Canadian raising), but is also found in parts of the United States. Thus writer may be distinguished from rider even when flapping causes the /t/ and /d/ to be pronounced identically.
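Canadian raising can likewise be stated as a conditioned substitution. In the hedged sketch below, the raised outputs [ʌɪ] and [ʌʊ] are common transcriptions used purely for illustration (the text above says only "a less open starting point"), and the set of voiceless consonants is simplified.

```python
# Sketch of the Canadian-raising pattern: /aɪ/ and /aʊ/ get a raised (less
# open) starting point before a voiceless consonant.
RAISED = {"aɪ": "ʌɪ", "aʊ": "ʌʊ"}
VOICELESS = set("ptkfθsʃ") | {"tʃ"}   # simplified inventory for the example

def raise_diphthong(diph: str, following: str) -> str:
    """Return the raised variant when the next (underlying) consonant is voiceless."""
    return RAISED.get(diph, diph) if following in VOICELESS else diph

print(raise_diphthong("aɪ", "t"))   # writer -> raised starting point
print(raise_diphthong("aɪ", "d"))   # rider  -> unraised [aɪ]
```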
Unstressed syllables in English may contain almost any vowel, but there are certain sounds—characterized by central position and weakness—that are particularly often found as the nuclei of syllables of this type. These include:
- schwa, [ə], as in COMMA and (in non-rhotic dialects) LETTER (panda–pander merger); also in many other positions such as about, photograph, paddock, etc. This sound is essentially restricted to unstressed syllables. In the approach presented here it is identified as a phoneme /ə/, although other analyses do not have a separate phoneme for schwa and regard it as a reduction or neutralization of other vowels in syllables with the lowest degree of stress.
- r-colored schwa, [ɚ], as in LETTER in General American and some other rhotic dialects, which can be identified with the underlying sequence /ər/.
- syllabic consonants: [l̩] as in bottle, [n̩] as in button, [m̩] as in rhythm. These may be phonemized either as a plain consonant or as a schwa followed by a consonant; for example button may be represented as /ˈbʌtn̩/ or /ˈbʌtən/ (see above under Consonants).
- [ɪ], as in roses and making. This can be identified with the phoneme /ɪ/, although in unstressed syllables it may be pronounced more centrally (in American tradition the barred i symbol ⟨ɨ⟩ is used here), and for some speakers (particularly in Australian and New Zealand and some American English) it is merged with /ə/ in these syllables (weak vowel merger). Among speakers who retain the distinction there are many cases where free variation between /ɪ/ and /ə/ is found, as in the second syllable of typical. (The OED has recently adopted the symbol ⟨ᵻ⟩ to indicate such cases.)
- [ʊ], as in argument, today, for which similar considerations apply as in the case of [ɪ]. (The symbol ⟨ᵿ⟩ is sometimes used in these cases, similarly to ⟨ᵻ⟩.) Some speakers may also have a rounded schwa, [ɵ], used in words like omission [ɵˈmɪʃən].
- [i], as in happy, coffee, in many dialects (others have [ɪ] in this position). The phonemic status of this [i] is not easy to establish. Some authors consider it to correspond phonemically with a close front vowel that is neither the vowel of KIT nor that of FLEECE; it occurs chiefly in contexts where the contrast between these vowels is neutralized, implying that it represents an archiphoneme, which may be written /i/. Many speakers, however, do have a contrast in pairs of words like studied and studded or taxis and taxes; the contrast may be [i] vs. [ɪ], [ɪ] vs. [ə] or [i] vs. [ə], hence some authors consider that the happY-vowel should be identified phonemically either with the vowel of KIT or that of FLEECE, depending on speaker. See also happy-tensing.
- [u], as in influence, to each. This is the back rounded counterpart to [i] described above; its phonemic status is treated in the same works as cited there.
Vowel reduction in unstressed syllables is a significant feature of English. Syllables of the types listed above often correspond to a syllable containing a different vowel ("full vowel") used in other forms of the same morpheme where that syllable is stressed. For example, the first o in photograph, being stressed, is pronounced with the GOAT vowel, but in photography, where it is unstressed, it is reduced to schwa. Also, certain common words (a, an, of, for, etc.) are pronounced with a schwa when they are unstressed, although they have different vowels when they are in a stressed position (see Weak and strong forms in English).
Some unstressed syllables, however, retain full (unreduced) vowels, i.e. vowels other than those listed above. Examples are the /æ/ in ambition and the /aɪ/ in finite. Some phonologists regard such syllables as not being fully unstressed (they may describe them as having tertiary stress); some dictionaries have marked such syllables as having secondary stress. However linguists such as Ladefoged and Bolinger (1986) regard this as a difference purely of vowel quality and not of stress, and thus argue that vowel reduction itself is phonemic in English. Examples of words where vowel reduction seems to be distinctive for some speakers include chickaree vs. chicory (the latter has the reduced vowel of HAPPY, whereas the former has the FLEECE vowel without reduction), and Pharaoh vs. farrow (both have the GOAT vowel, but in the latter word it may reduce to [ɵ]).
The choice of which symbols to use for phonemic transcriptions may reflect theoretical assumptions or claims on the part of the transcriber. English "tense" and "lax" vowels are distinguished by a synergy of features, such as height, length, and contour (monophthong vs. diphthong); different traditions in the linguistic literature emphasize different features. For example, if the primary feature is thought to be vowel height, then the non-reduced vowels of General American English may be represented according to the adjacent table. If, on the other hand, vowel length is considered to be the deciding factor, the symbols in the center table below may be chosen (this convention has sometimes been used because the publisher did not have IPA fonts available, though that is seldom an issue any longer). The rightmost table lists the corresponding lexical sets.
If vowel transition is taken to be paramount, then the chart may look like one of these:
Many linguists combine more than one of these features in their transcriptions, suggesting they consider the phonemic differences to be more complex than a single feature.
Lexical stress is phonemic in English. For example, the noun increase and the verb increase are distinguished by the positioning of the stress on the first syllable in the former, and on the second syllable in the latter. (See initial-stress-derived noun.) Stressed syllables in English are louder than non-stressed syllables, as well as being longer and having a higher pitch.
In traditional approaches, in any English word consisting of more than one syllable, each syllable is ascribed one of three degrees of stress: primary, secondary or unstressed. Ordinarily, in each such word there will be exactly one syllable with primary stress, possibly one syllable having secondary stress, and the remainder are unstressed. For example, the word amazing has primary stress on the second syllable, while the first and third syllables are unstressed, whereas the word organization has primary stress on the fourth syllable, secondary stress on the first, and the second, third and fifth unstressed. This is often shown in pronunciation keys using the IPA symbols for primary and secondary stress (which are ˈ and ˌ respectively), placed before the syllables to which they apply. The two words just given may therefore be represented (in RP) as /əˈmeɪzɪŋ/ and /ˌɔːɡənaɪˈzeɪʃən/.
Some analysts identify an additional level of stress (tertiary stress). This is generally ascribed to syllables that are pronounced with less force than those with secondary stress, but nonetheless contain a "full" or "unreduced" vowel (vowels that are considered to be reduced are listed under Vowels in unstressed syllables § Notes above). Hence the third syllable of organization, if pronounced with /aɪ/ as shown above (rather than being reduced to /ɪ/ or /ə/), might be said to have tertiary stress. (The precise identification of secondary and tertiary stress differs between analyses; dictionaries do not generally show tertiary stress, although some have taken the approach of marking all syllables with unreduced vowels as having at least secondary stress.)
In some analyses, then, the concept of lexical stress may become conflated with that of vowel reduction. An approach which attempts to separate these two is provided by Peter Ladefoged, who states that it is possible to describe English with only one degree of stress, as long as unstressed syllables are phonemically distinguished for vowel reduction. In this approach, the distinction between primary and secondary stress is regarded as a phonetic or prosodic detail rather than a phonemic feature – primary stress is seen as an example of the predictable "tonic" stress that falls on the final stressed syllable of a prosodic unit. For more details of this analysis, see Stress and vowel reduction in English.
For stress as a prosodic feature (emphasis of particular words within utterances), see § Prosodic stress below.
Phonotactics is the study of the sequences of phonemes that occur in languages and the sound structures that they form. In this study it is usual to represent consonants in general with the letter C and vowels with the letter V, so that a syllable such as 'be' is described as having CV structure. The IPA symbol used to show a division between syllables is the dot [.]. Syllabification is the process of dividing continuous speech into discrete syllables, a process in which the position of a syllable division is not always easy to decide upon.
Most languages of the world syllabify CVCV and CVCCV sequences as /CV.CV/ and /CVC.CV/ or /CV.CCV/, with consonants preferentially acting as the onset of a syllable containing the following vowel. According to one view, English is unusual in this regard, in that stressed syllables attract following consonants, so that ˈCVCV and ˈCVCCV syllabify as /ˈCVC.V/ and /ˈCVCC.V/, as long as the consonant cluster CC is a possible syllable coda; in addition, /r/ preferentially syllabifies with the preceding vowel even when both syllables are unstressed, so that CVrV occurs as /CVr.V/. This is the analysis used in the Longman Pronunciation Dictionary. However, this view is not widely accepted, as explained in the following section.
The syllable structure in English is (C)³V(C)⁵, with a near maximal example being strengths (/strɛŋkθs/, although it can be pronounced /strɛŋθs/). From the phonetic point of view, the analysis of syllable structures is a complex task: because of widespread occurrences of articulatory overlap, English speakers rarely produce an audible release of individual consonants in consonant clusters. This coarticulation can lead to articulatory gestures that seem very much like deletions or complete assimilations. For example, hundred pounds may sound like [hʌndɹɪb paʊndz] and jumped back (in slow speech, [dʒʌmpt bæk]) may sound like [dʒʌmpbæk], but X-ray and electropalatographic studies demonstrate that inaudible and possibly weakened contacts or lingual gestures may still be made. Thus the second /d/ in hundred pounds does not entirely assimilate to a labial place of articulation, rather the labial gesture co-occurs with the alveolar one; the "missing" [t] in jumped back may still be articulated, though not heard.
Division into syllables is a difficult area, and different theories have been proposed. A widely accepted approach is the maximal onsets principle: this states that, subject to certain constraints, any consonants in between vowels should be assigned to the following syllable. Thus the word leaving should be divided /ˈliː.vɪŋ/ rather than */ˈliːv.ɪŋ/, and hasty is /ˈheɪ.sti/ rather than */ˈheɪs.ti/ or */ˈheɪst.i/. However, when such a division results in an onset cluster which is not allowed in English, the division must respect this. Thus if the word extra were divided */ˈe.kstrə/ the resulting onset of the second syllable would be /kstr/, a cluster which does not occur in English. The division /ˈek.strə/ is therefore preferred. If assigning a consonant or consonants to the following syllable would result in the preceding syllable ending in an unreduced short vowel, this is avoided. Thus the word comma should be divided /ˈkɒm.ə/ and not */ˈkɒ.mə/, even though the latter division gives the maximal onset to the following syllable, because English syllables do not end in /ɒ/.
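The maximal onsets principle lends itself to a simple greedy procedure: try the longest candidate onset first and fall back until a legal one is found. The Python sketch below is a toy illustration under that reading; the tiny LEGAL_ONSETS set stands in for the full onset inventory given in the Phonotactics section, and the additional constraint about syllables not ending in short vowels (as in comma) is deliberately ignored.

```python
# Toy sketch of the maximal onsets principle: assign the longest legal onset
# to the following syllable. LEGAL_ONSETS is a small stand-in for the real
# onset inventory; the checked-vowel constraint is not modelled here.
LEGAL_ONSETS = {"", "v", "t", "st", "str"}

def split_cluster(cluster):
    """Split an intervocalic cluster into (coda of syllable 1, onset of syllable 2)."""
    for i in range(len(cluster) + 1):      # longest candidate onset first
        onset = cluster[i:]
        if onset in LEGAL_ONSETS:
            return cluster[:i], onset
    return cluster, ""

print(split_cluster("v"))     # leaving -> ('', 'v')    /ˈliː.vɪŋ/
print(split_cluster("st"))    # hasty   -> ('', 'st')   /ˈheɪ.sti/
print(split_cluster("kstr"))  # extra   -> ('k', 'str') /ˈek.strə/
```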
In some cases, no solution is completely satisfactory: for example, in British English (RP) the word hurry could be divided /ˈhʌ.ri/ or /ˈhʌr.i/, but the former would result in an analysis with a syllable-final /ʌ/ (which is held to be non-occurring) while the latter would result in a syllable final /r/ (which is said not to occur in this accent). Some phonologists have suggested a compromise analysis where the consonant in the middle belongs to both syllables, and is described as ambisyllabic. In this way, it is possible to suggest an analysis of hurry which comprises the syllables /hʌr/ and /ri/, the medial /r/ being ambisyllabic. Where the division coincides with a word boundary, or the boundary between elements of a compound word, it is not usual in the case of dictionaries to insist on the maximal onsets principle in a way that divides words in a counter-intuitive way; thus the word hardware would be divided /ˈhɑː.dweə/ by the M.O.P., but dictionaries prefer the division /ˈhɑːd.weə/.
In the approach used by the Longman Pronunciation Dictionary, Wells claims that consonants syllabify with the preceding rather than following vowel when the preceding vowel is the nucleus of a more salient syllable, with stressed syllables being the most salient, reduced syllables the least, and full unstressed vowels ("secondary stress") intermediate. But there are lexical differences as well, frequently but not exclusively with compound words. For example, in dolphin and selfish, Wells argues that the stressed syllable ends in /lf/, but in shellfish, the /f/ belongs with the following syllable: /ˈdɒlf.ɪn, ˈself.ɪʃ/ → [ˈdɒlfɪ̈n, ˈselfɪ̈ʃ], but /ˈʃel.fɪʃ/ → [ˈʃelˑfɪʃ], where the /l/ is a little longer and the /ɪ/ is not reduced. Similarly, in toe-strap Wells argues that the second /t/ is a full plosive, as usual in syllable onset, whereas in toast-rack the second /t/ is in many dialects reduced to the unreleased allophone it takes in syllable codas, or even elided: /ˈtoʊ.stræp/, /ˈtoʊst.ræk/ → [ˈtoˑʊstɹæp, ˈtoʊs(t̚)ɹæk]; likewise nitrate /ˈnaɪ.treɪt/ → [ˈnaɪtɹ̥eɪt] with a voiceless /r/ (and for some people an affricated tr as in tree), vs night-rate /ˈnaɪt.reɪt/ → [ˈnaɪt̚ɹeɪt] with a voiced /r/. Cues of syllable boundaries include aspiration of syllable onsets and (in the US) flapping of coda /t, d/ (a tease /ə.ˈtiːz/ → [əˈtʰiːz] vs. at ease /æt.ˈiːz/ → [æɾˈiːz]), epenthetic stops like [t] in syllable codas (fence /ˈfens/ → [ˈfents] but inside /ɪn.ˈsaɪd/ → [ɪnˈsaɪd]), and r-colored vowels when the /r/ is in the coda vs. labialization when it is in the onset (key-ring /ˈkiː.rɪŋ/ → [ˈkiːɹʷɪŋ] but fearing /ˈfiːr.ɪŋ/ → [ˈfɪəɹɪŋ]).
The following can occur as the onset:
|All single consonant phonemes except /ŋ/|
|Stop plus approximant other than /j/: /pl/, /bl/, /kl/, /ɡl/, /pr/, /br/, /tr/, /dr/, /kr/, /ɡr/, /tw/, /dw/, /ɡw/, /kw/, /pw/||play, blood, clean, glove, prize, bring, tree, dream, crowd, green, twin, dwarf, language, quick, puissance|
|Voiceless fricative or /v/ plus approximant other than /j/: /fl/, /sl/, /θl/, /fr/, /θr/, /ʃr/, /hw/, /sw/, /θw/, /vw/||floor, sleep, thlipsis, friend, three, shrimp, what, swing, thwart, reservoir|
|Consonant plus /j/ (before /uː/ or its modified/reduced forms): /pj/, /bj/, /tj/, /dj/, /kj/, /ɡj/, /mj/, /nj/, /fj/, /vj/, /θj/, /sj/, /zj/, /hj/, /lj/||pure, beautiful, tube, during, cute, argue, music, new, few, view, thew, suit, Zeus, huge, lurid|
|/s/ plus voiceless stop: /sp/, /st/, /sk/||speak, stop, skill|
|/s/ plus nasal other than /ŋ/: /sm/, /sn/||smile, snow|
|/s/ plus voiceless fricative: /sf/, /sθ/||sphere, sthenic|
|/s/ plus voiceless stop plus approximant: /spl/, /skl/, /spr/, /str/, /skr/, /skw/, /smj/, /spj/, /stj/, /skj/||split, sclera, spring, street, scream, square, smew, spew, student, skewer|
|/s/ plus voiceless fricative plus approximant: /sfr/||sphragistics|
- For certain speakers, /tr/ and /dr/ tend to affricate, so that tree resembles "chree", and dream resembles "jream". This is sometimes transcribed as [tʃr] and [dʒr] respectively, but the pronunciation varies and may, for example, be closer to [tʂ] and [dʐ] or with a fricative release similar in quality to the rhotic, i.e. [tɹ̝̊ɹ̥], [dɹ̝ɹ], or [tʂɻ], [dʐɻ].
- Some northern and insular Scottish dialects, particularly in the Shetlands, preserve onsets such as /ɡn/ (as in gnaw), /kn/ (as in knock), and /wr/ or /vr/ (as in write).
- Words beginning in unusual consonant clusters that originated in Latinized Greek loanwords tend to drop the first phoneme, as in */bd/, */fθ/, */ɡn/, */hr/, */kn/, */ks/, */kt/, */kθ/, */mn/, */pn/, */ps/, */pt/, */tm/, and */θm/, which have become /d/ (bdellium), /θ/ (phthisis), /n/ (gnome), /r/ (rhythm), /n/ (cnidoblast), /z/ (xylophone), /t/ (ctenophore), /θ/ (chthonic), /n/ (mnemonic), /n/ (pneumonia), /s/ (psychology), /t/ (pterodactyl), /m/ (tmesis), and /m/ (asthma). However, the onsets /sf/, /sfr/, /skl/, /sθ/, and /θl/ have remained intact.
- The onset /hw/ is simplified to /w/ in the majority of dialects (wine–whine merger).
- Clusters ending /j/ typically occur before /uː/ and before the CURE vowel (General American /ʊr/, RP /ʊə/); they may also come before the reduced form /ʊ/ (as in argument) or even /ər/ (in the American pronunciation of figure). There is an ongoing sound change (yod-dropping) by which /j/ as the final consonant in a cluster is being lost. In RP, words with /sj/ and /lj/ can usually be pronounced with or without this sound, e.g. [suːt] or [sjuːt]. For some speakers of English, including some British speakers, the sound change is more advanced and so, for example, General American does not contain the onsets /tj/, /dj/, /nj/, /θj/, /sj/, /stj/, /zj/, or /lj/. Words that would otherwise begin in these onsets drop the /j/: e.g. tube (/tub/), during (/ˈdʊrɪŋ/), new (/nu/), Thule (/ˈθuli/), suit (/sut/), student (/ˈstudənt/), Zeus (/zus/), lurid (/ˈlʊrɪd/). In some dialects, such as Welsh English, /j/ may occur in more combinations; for example in /tʃj/ (chew), /dʒj/ (Jew), /ʃj/ (sure), and /slj/ (slew).
- Many clusters beginning with /ʃ/ and paralleling native clusters beginning with /s/ are found initially in German and Yiddish loanwords, such as /ʃl/, /ʃp/, /ʃt/, /ʃm/, /ʃn/, /ʃpr/, /ʃtr/ (in words such as schlep, spiel, shtick, schmuck, schnapps, Shprintzen's, strudel). /ʃw/ is found initially in the Hebrew loanword schwa. Before /r/ however, the native cluster is /ʃr/. The opposite cluster /sr/ is found in loanwords such as Sri Lanka, but this can be nativized by changing it to /ʃr/.
- Other onsets
Certain English onsets appear only in contractions: e.g. /zbl/ ('sblood), and /zw/ or /dzw/ ('swounds or 'dswounds). Some, such as /pʃ/ (pshaw), /fw/ (fwoosh), or /vr/ (vroom), can occur in interjections. An archaic voiceless fricative plus nasal exists, /fn/ (fnese), as does an archaic /snj/ (snew).
Several additional onsets occur in loan words (with varying degrees of anglicization) such as /bw/ (bwana), /mw/ (moiré), /nw/ (noire), /tsw/ (zwitterion), /zw/ (zwieback), /dv/ (Dvorak), /kv/ (kvetch), /ʃv/ (schvartze), /tv/ (Tver), /tsv/ (Zwickau), /kdʒ/ (Kjell), /kʃ/ (Kshatriya), /tl/ (Tlaloc), /vl/ (Vladimir), /zl/ (zloty), /tsk/ (Tskhinvali), /hm/ (Hmong), and /km/ (Khmer).
Some clusters of this type can be converted to regular English phonotactics by simplifying the cluster: e.g. /(d)z/ (dziggetai), /(h)r/ (Hrolf), /kr(w)/ (croissant), /(ŋ)w/ (Nguyen), /(p)f/ (pfennig), /(f)θ/ (phthalic), /(t)s/ (tsunami), /(ǃ)k/ (!kung), and /k(ǁ)/ (Xhosa).
Others can be replaced by native clusters differing only in voice: /zb ~ sp/ (sbirro), and /zɡr ~ skr/ (sgraffito).
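To make the structure of the onset table above concrete, the sketch below assembles a partial set of the listed onsets and tests candidate clusters against it. It is deliberately incomplete (only a few rows of the table are included) and is meant as a way of reading the table, not a full model of English phonotactics.

```python
# Partial membership test for legal syllable onsets, built from a few rows
# of the table above. Purely illustrative.
ONSETS = set()
# single consonants: every consonant phoneme except /ŋ/
ONSETS |= {"p", "t", "k", "b", "d", "ɡ", "tʃ", "dʒ", "m", "n", "f", "v",
           "θ", "ð", "s", "z", "ʃ", "h", "l", "r", "w", "j"}
# stop + approximant other than /j/
ONSETS |= {"pl", "bl", "kl", "ɡl", "pr", "br", "tr", "dr", "kr", "ɡr",
           "tw", "dw", "ɡw", "kw", "pw"}
# /s/ + voiceless stop, and /s/ + voiceless stop + approximant
ONSETS |= {"sp", "st", "sk"}
ONSETS |= {"spl", "skl", "spr", "str", "skr", "skw", "smj", "spj", "stj", "skj"}

def is_legal_onset(cluster: str) -> bool:
    """Check a candidate cluster against the (partial) onset inventory."""
    return cluster in ONSETS

for c in ("str", "pw", "ŋ", "kstr"):
    print(c, is_legal_onset(c))
```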
The following can occur as the nucleus:
- All vowel sounds
- /m/, /n/ and /l/ in certain situations (see below under word-level rules)
- /r/ in rhotic varieties of English (e.g. General American) in certain situations (see below under word-level rules)
Most (in theory, all) of the following except those that end with /s/, /z/, /ʃ/, /ʒ/, /tʃ/ or /dʒ/ can be extended with /s/ or /z/ representing the morpheme -s/-z. Similarly, most (in theory, all) of the following except those that end with /t/ or /d/ can be extended with /t/ or /d/ representing the morpheme -t/-d.
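The extension rule just stated can be expressed directly. The sketch below is illustrative only; representing a coda as a tuple of phoneme strings is an assumption of the example rather than anything in the source.

```python
# Sketch of the claim above: a coda can take the -s/-z morpheme unless it
# already ends in /s/, /z/, /ʃ/, /ʒ/, /tʃ/ or /dʒ/, and the -t/-d morpheme
# unless it already ends in /t/ or /d/.
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}

def can_add_s_or_z(coda):
    return coda[-1] not in SIBILANTS

def can_add_t_or_d(coda):
    return coda[-1] not in {"t", "d"}

print(can_add_s_or_z(("l", "p")))   # True:  help -> helps
print(can_add_s_or_z(("tʃ",)))      # False: belch already ends in an affricate
print(can_add_t_or_d(("n", "d")))   # False: end already ends in /d/
```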
Wells (1990) argues that a variety of syllable codas are possible in English, even /ntr, ndr/ in words like entry /ˈɛntr.ɪ/ and sundry /ˈsʌndr.ɪ/, with /tr, dr/ being treated as affricates along the lines of /tʃ, dʒ/. He argues that the traditional assumption that pre-vocalic consonants form a syllable with the following vowel is due to the influence of languages like French and Latin, where syllable structure is CVC.CVC regardless of stress placement. Disregarding such contentious cases, which do not occur at the ends of words, the following sequences can occur as the coda:
|The single consonant phonemes except /h/, /w/, /j/ and, in non-rhotic varieties, /r/|
|Lateral approximant plus stop or affricate: /lp/, /lb/, /lt/, /ld/, /ltʃ/, /ldʒ/, /lk/||help, bulb, belt, hold, belch, indulge, milk|
|In rhotic varieties, /r/ plus stop or affricate: /rp/, /rb/, /rt/, /rd/, /rtʃ/, /rdʒ/, /rk/, /rɡ/||harp, orb, fort, beard, arch, large, mark, morgue|
|Lateral approximant + fricative: /lf/, /lv/, /lθ/, /ls/, /lʃ/||golf, solve, wealth, else, Welsh|
|In rhotic varieties, /r/ + fricative: /rf/, /rv/, /rθ/, /rs/, /rz/, /rʃ/||dwarf, carve, north, force, Mars, marsh|
|Lateral approximant + nasal: /lm/, /ln/||film, kiln|
|In rhotic varieties, /r/ + nasal or lateral: /rm/, /rn/, /rl/||arm, born, snarl|
|Nasal + homorganic stop or affricate: /mp/, /nt/, /nd/, /ntʃ/, /ndʒ/, /ŋk/||jump, tent, end, lunch, lounge, pink|
|Nasal + fricative: /mf/, /mθ/, /nθ/, /ns/, /nz/, /ŋθ/ in some varieties||triumph, warmth, month, prince, bronze, length|
|Voiceless fricative plus voiceless stop: /ft/, /sp/, /st/, /sk/||left, crisp, lost, ask|
|Two voiceless fricatives: /fθ/||fifth|
|Two voiceless stops: /pt/, /kt/||opt, act|
|Stop plus voiceless fricative: /pθ/, /ps/, /tθ/, /ts/, /dθ/, /ks/||depth, lapse, eighth, klutz, width, box|
|Lateral approximant + two consonants: /lpt/, /lps/, /lfθ/, /lts/, /lst/, /lkt/, /lks/||sculpt, alps, twelfth, waltz, whilst, mulct, calx|
|In rhotic varieties, /r/ + two consonants: /rmθ/, /rpt/, /rps/, /rts/, /rst/, /rkt/||warmth, excerpt, corpse, quartz, horst, infarct|
|Nasal + homorganic stop + stop or fricative: /mpt/, /mps/, /ndθ/, /ŋkt/, /ŋks/, /ŋkθ/ in some varieties||prompt, glimpse, thousandth, distinct, jinx, length|
|Three obstruents: /ksθ/, /kst/||sixth, next|
Note: For some speakers, a fricative before /θ/ is elided so that these never appear phonetically: /fɪfθ/ becomes [fɪθ], /sɪksθ/ becomes [sɪkθ], /twɛlfθ/ becomes [twɛlθ].
- Both the onset and the coda are optional.
- Onset clusters ending in /j/ must be followed by /uː/ or its variants (see note 5 above).
- Long vowels and diphthongs are not found before /ŋ/, except for the mimetic words boing and oink, unassimilated foreign words such as Burmese aung and proper names such as Taung, and American-type pronunciations of words like strong (which have /ɔŋ/ or /ɑŋ/). The short vowels /ɛ, ʊ/ occur before /ŋ/ only in assimilated non-native words such as ginseng and Sung (name of dynasty).
- /ŋ/ does not occur in syllable-initial position.
- /h/ does not occur in syllable-final position.
- /ʊ/ is rare in syllable-initial position (although in the northern half of England, [ʊ] is used for /ʌ/ and is common at the start of syllables).
- Stop + /w/ before /uː, ʊ, ʌ, aʊ/ (all presently or historically /u(ː)/) are excluded.
- Sequences of /s/ + C1 + V̆ + C1, where C1 is a consonant other than /t/ and V̆ is a short vowel, are virtually nonexistent.
- /ə/ does not occur in stressed syllables.
- /ʒ/ does not occur in word-initial position in native English words, although it can occur syllable-initially as in luxurious /lʌɡˈʒʊəriəs/, and at the start of borrowed words such as genre.
- /m/, /n/, /l/ and, in rhotic varieties, /r/ can be the syllable nucleus (i.e. a syllabic consonant) in an unstressed syllable following another consonant, especially /t/, /d/, /s/ or /z/. Such syllables are often analyzed phonemically as having an underlying /ə/ as the nucleus. See above under Consonants.
- The short vowels are checked vowels, in that they cannot occur without a coda in a word-final stressed syllable. (This does not apply to /ə/, which does not occur in stressed syllables at all.)
The prosodic features of English – stress, rhythm, and intonation – can be described as follows.
Prosodic stress is extra stress given to words or syllables when they appear in certain positions in an utterance, or when they receive special emphasis.
According to Ladefoged's analysis (as referred to under Lexical stress § Notes above), English normally has prosodic stress on the final stressed syllable in an intonation unit. This is said to be the origin of the distinction traditionally made at the lexical level between primary and secondary stress: when a word like admiration (traditionally transcribed as something like /ˌædmɪˈreɪʃən/) is spoken in isolation, or at the end of a sentence, the syllable ra (the final stressed syllable) is pronounced with greater force than the syllable ad, although when the word is not pronounced with this final intonation there may be no difference between the levels of stress of these two syllables.
Prosodic stress can shift for various pragmatic functions, such as focus or contrast. For instance, in the dialogue Is it brunch tomorrow? No, it's dinner tomorrow, the extra stress shifts from the last stressed syllable of the sentence, tomorrow, to the last stressed syllable of the emphasized word, dinner.
Grammatical function words are usually prosodically unstressed, although they can acquire stress when emphasized (as in Did you find the cat? Well, I found a cat). Many English function words have distinct strong and weak pronunciations; for example, the word a in the last example is pronounced /eɪ/, while the more common unstressed a is pronounced /ə/. See Weak and strong forms in English.
English is claimed to be a stress-timed language. That is, stressed syllables tend to appear with a more or less regular rhythm, while non-stressed syllables are shortened to accommodate this. For example, in the sentence One make of car is better than another, the syllables one, make, car, bett- and -noth- will be stressed and relatively long, while the other syllables will be considerably shorter. The theory of stress-timing predicts that each of the three unstressed syllables in between bett- and -noth- will be shorter than the syllable of, which comes between make and car, because three syllables must fit into the same amount of time as that available for of. However, it should not be assumed that all varieties of English are stress-timed in this way. The English spoken in the West Indies, in Africa and in India is probably better characterized as syllable-timed, though the lack of an agreed scientific test for categorizing an accent or language as stress-timed or syllable-timed may lead one to doubt the value of such a characterization.
Phonological contrasts in intonation can be said to be found in three different and independent domains. In the work of Halliday the following names are proposed:
- Tonality for the distribution of continuous speech into tone groups.
- Tonicity for the placing of the principal accent on a particular syllable of a word, making it the tonic syllable. This is the domain also referred to as prosodic stress or sentence stress.
- Tone for the choice of pitch movement on the tonic syllable. (The use of the term "tone" in this sense should not be confused with the tone of tone languages, such as Chinese.)
These terms ("the Three Ts") have been used in more recent work, though they have been criticized for being difficult to remember. American systems such as ToBI also identify contrasts involving boundaries between intonation phrases (Halliday's tonality), placement of pitch accent (tonicity), and choice of tone or tones associated with the pitch accent (tone).
Example of phonological contrast involving placement of intonation unit boundaries (boundary marked by |):
- a) Those who ran quickly | escaped. (the only people who escaped were those who ran quickly)
- b) Those who ran | quickly escaped. (the people who ran escaped quickly)
Example of phonological contrast involving placement of tonic syllable (marked by capital letters):
- a) I have plans to LEAVE. (= I am planning to leave)
- b) I have PLANS to leave. (= I have some drawings to leave)
Example of phonological contrast (British English) involving choice of tone (\ = falling tone, \/ = fall-rise tone)
- a) She didn't break the record because of the \ WIND. (= she did not break the record, because the wind held her up)
- b) She didn't break the record because of the \/ WIND. (= she did break the record, but not because of the wind)
It has been frequently claimed that there is a contrast involving tone between wh-questions and yes/no questions, the former being said to have falling tone (e.g. "Where did you \PUT it?") and the latter a rising tone (e.g. "Are you going /OUT?"), though studies of spontaneous speech have shown frequent exceptions to this rule. "Tag questions" asking for information are said to carry rising tones (e.g. "They are coming on Tuesday, /AREN'T they?") while those asking for confirmation have falling tone (e.g. "Your name's John, \ISN'T it.").
History of English pronunciation
The pronunciation system of English has undergone many changes throughout the history of the language, from the phonological system of Old English, to that of Middle English, through to that of the present day. Variation between dialects has always been significant. Former pronunciations of many words are reflected in their spellings, as English orthography has generally not kept pace with phonological changes since the Middle English period.
The English consonant system has been relatively stable over time, although a number of significant changes have occurred. Examples include the loss (in most dialects) of the [ç] and [x] sounds still reflected by the ⟨gh⟩ in words like night and taught, and the splitting of voiced and voiceless allophones of fricatives into separate phonemes (such as the two different phonemes represented by ⟨th⟩). There have also been many changes in consonant clusters, mostly reductions, for instance those that produced the usual modern pronunciations of such letter combinations as ⟨wr-⟩, ⟨kn-⟩ and ⟨wh-⟩.
The development of vowels has been much more complex. One of the most notable series of changes is that known as the Great Vowel Shift, which began around the late 14th century. Here the [iː] and [uː] in words like price and mouth became diphthongized, and other long vowels became higher: [eː] became [iː] (as in meet), [aː] became [eː] and later [eɪ] (as in name), [oː] became [uː] (as in goose), and [ɔː] became [oː] and later [oʊ] (in RP now [əʊ]; as in bone). These shifts are responsible for the modern pronunciations of many written vowel combinations, including those involving a silent final ⟨e⟩.
Many other changes in vowels have taken place over the centuries (see the separate articles on the low back, high back and high front vowels, short A, and diphthongs). These various changes mean that many words that formerly rhymed (and may be expected to rhyme based on their spelling) no longer do. For example, in Shakespeare's time, following the Great Vowel Shift, food, good and blood all had the vowel [uː], but in modern pronunciation good has been shortened to [ʊ], while blood has been shortened and lowered to [ʌ] in most accents. In other cases, words that were formerly distinct have come to be pronounced the same – examples of such mergers include meet–meat, pane–pain and toe–tow.
- Australian English phonology
- English orthography
- English pronunciation of Greek letters
- General American
- Non-native pronunciations of English
- Old English phonology
- Perception of English /r/ and /l/ by Japanese speakers
- Phonological development
- Phonological history of English vowels
- Phonological history of English consonants
- Pronunciation of English ⟨th⟩
- Received Pronunciation
- Regional accents of English
- Rhoticity in English
- R-colored vowel
- Category:Splits and mergers in English phonology
- Roach 2009, pp. 100–1.
- Kreidler 2004, p. 84.
- Wells 1982, p. 55.
- Bowerman 2004, p. 939.
- Gimson 2008, p. 230.
- McMahon 2002, p. 31.
- Giegerich 1992, p. 36.
- Ladefoged 2006, p. 68.
- Roach 2009, p. 43.
- Wells 1982, p. 490.
- Wells 1982, p. 550.
- Ladefoged 2001, p. 55.
- Celce-Murcia, Brinton & Goodwin 1996, pp. 62–67.
- Roach 2009, pp. 26–28.
- Wells 1982, p. 388.
- Gimson 2008, pp. 179–180.
- Wells 1982, p. 323.
- Celce-Murcia, Brinton & Goodwin 1996, p. 64.
- Gimson 2014, pp. 173–182.
- Gimson 2014, pp. 170 and 173–182.
- Gimson 2014, p. 190.
- Trudgill & Hannah 2002, p. 18
- Trudgill & Hannah 2002, p. 25
- Mojsin, Lisa (2009), Mastering the American Accent, Barron's Educational Series, Inc., p. 36. "The t after n is often silent in American pronunciation. Instead of saying internet Americans will frequently say 'innernet.' This is fairly standard speech and is not considered overly casual or sloppy speech."
- Roach 2004, p. 242.
- Wells 1982, p. 128.
- Roca & Johnson 1999, p. 135.
- Wells 1982, pp. 121, 132, 480.
- Wells 1982, pp. 473–474.
- Labov, William; Ash, Sharon; Boberg, Charles (2006). The Atlas of North American English: Phonetics, Phonology and Sound Change. Walter de Gruyter. pp. 13, 171–173. ISBN 978-3-11-020683-8.
- Woods, Howard B. (1993). "A synchronic study of English spoken in Ottawa: Is Canadian becoming more American?". In Clarke, Sandra. Focus on Canada. John Benjamins Publishing. pp. 170–171. ISBN 90-272-7681-1.
- Kiefte, Michael; Kay-Raining Bird, Elizabeth (2010). "Canadian Maritime English". The Lesser-Known Varieties of English: An Introduction. Cambridge University Press. pp. 63–64, 67. ISBN 978-1-139-48741-2.
- Gimson 2014, pp. 126 and 133.
- Cox, Felicity; Palethorpe, Sallyanne (2007). "Illustrations of the IPA: Australian English". Journal of the International Phonetic Association. 37 (3). pp. 341–350.
- Wells 1982, p. 129.
- Labov, Ash & Boberg (2006)
- Roach 2004, p. 240.
- Wells 1982, pp. 140, 147, 299.
- Gimson 2008, p. 132.
- Celce-Murcia, Brinton & Goodwin 1996, p. 66.
- Wells 1982, p. 149.
- Bolinger (1986), pp. 347–360.
- Lewis, J. Windsor. "HappYland Reconnoitred". Retrieved 2012.
- Kreidler 2004, pp. 82–3.
- McCully 2009, pp. 123–4.
- Roach 2009, pp. 66–8.
- Wells 2014, p. 53.
- Peter Ladefoged (1975 etc.) A course in phonetics
- Bolinger (1986), p. 351.
- Bolinger (1986), p. 348.
- Ladefoged (1975 etc.) A course in phonetics §5.4; (1980) Preliminaries to linguistic phonetics p. 83
- Wells 1990, pp. 76–86.
- Five-consonant codas are rare, but one occurs in angsts /æŋksts/. See list of the longest English words with one syllable for further long syllables in English.
- Zsiga 2003, p. 404.
- Browman & Goldstein 1990.
- Barry 1991.
- Barry 1992.
- Nolan 1992.
- Selkirk 1982.
- Giegerich 1992, p. 172.
- Harris 1994, p. 198.
- Gimson 2008, pp. 258–9.
- Giegerich 1992, pp. 167–70.
- Kreidler 2004, pp. 76–8.
- Wells 1990, p. ?.
- Read 1986, p. ?.
- Bradley, Travis (2006), "Prescription Jugs", Phonoloblog, retrieved 2008-06-13
- Bakovic, Eric (2006), "The jug trade", Phonoloblog, retrieved 2008-06-13
- See Blake et al., The Cambridge History of the English Language, 1992, p.67; R. McColl Millar, Northern and Insular Scots, Edinburgh University Press, 2007, pp.63-64.
- The OED does not list any native words that begin with /ʊ/, apart from the mimetic oof!, ugh!, oops!, ook(y).
- Clements & Keyser 1983, p. ?.
- Collins and Mees 2013, p. 138.
- Wells 1982, p. 644.
- Wells 1982, pp. 630–1.
- Roach 1982, pp. 73–9.
- Halliday 1967, pp. 18–24.
- Tench 1996.
- Wells 2006.
- Roach 2009, p. 144.
- Brown 1990, pp. 122–3.
- Cercignani 1975, pp. 513–8.
- Bacsfalvi, P. (2010). "Attaining the lingual components of /r/ with ultrasound for three adolescents with cochlear implants". Canadian Journal of Speech-Language Pathology and Audiology. 34 (3): 206–217.
- Ball, M.; Lowry, O.; McInnis, L. (2006). "Distributional and stylistic variation in /r/-misarticulations: A case study". Clinical Linguistics & Phonetics. 20 (2–3).
- Barry, M (1991), "Temporal Modelling of Gestures in Articulatory Assimilation", Proceedings of the 12th International Congress of Phonetic Sciences, Aix-en-Provence
- Bolinger, Dwight (1986), Intonation and Its Parts: Melody in Spoken English, Stanford University Press, ISBN 0-8047-1241-7
- Barry, M (1992), "Palatalisation, Assimilation and Gestural Weakening in Connected Speech", Speech Communication, vol. 11, pp. 393–400
- Bowerman, Sean (2004), "White South African English: phonology", in Schneider, Edgar W.; Burridge, Kate; Kortmann, Bernd; Mesthrie, Rajend; Upton, Clive, A handbook of varieties of English, 1: Phonology, Mouton de Gruyter, pp. 931–942, ISBN 3-11-017532-0
- Browman, Catherine P.; Goldstein, Louis (1990), "Tiers in Articulatory Phonology, with Some Implications for Casual Speech", in Kingston, John C.; Beckman, Mary E., Papers in Laboratory Phonology I: Between the Grammar and Physics of Speech, New York: Cambridge University Press, pp. 341–376
- Brown, G. (1990), Listening to Spoken English, Longman
- Campbell, F., Gick, B., Wilson, I., Vatikiotis-Bateson, E. (2010), “Spatial and Temporal Properties of Gestures in North American English /r/”. Child's Language and Speech, 53 (1): 49–69
- Celce-Murcia, M., Brinton, D., Goodwin, J. (1996), Teaching Pronunciation: A Reference for Teachers of English to Speakers of Other Languages, Cambridge University Press
- Cercignani, Fausto (1975), "English Rhymes and Pronunciation in the Mid-Seventeenth Century", English Studies, 56 (6): 513–518, doi:10.1080/00138387508597728
- Cercignani, Fausto (1981), Shakespeare's Works and Elizabethan Pronunciation, Oxford: Clarendon Press
- Chomsky, Noam; Halle, Morris (1968), The Sound Pattern of English, New York: Harper & Row
- Clements, G.N.; Keyser, S. (1983), CV Phonology: A Generative Theory of the Syllable, Cambridge, MA: MIT press
- Collins, Beverley; Mees, Inger M. (2013) [First published 2003], Practical Phonetics and Phonology: A Resource Book for Students (3rd ed.), Routledge, ISBN 978-0-415-50650-2
- Crystal, David (1969), Prosodic Systems and Intonation in English, Cambridge: Cambridge University Press
- Dalcher Villafaña, C., Knight, R.A., Jones, M.J., (2008), “Cue Switching in the Perception of Approximants: Evidence from Two English Dialects”. University of Pennsylvania Working Papers in Linguistics, 14 (2): 63–64
- Espy-Wilson, C. (2004), “Articulatory Strategies, speech Acoustics and Variability”. From Sound to Sense June 11 – June 13 at MIT: 62–63
- Fudge, Erik C. (1984), English Word-stress, London: Allen and Unwin
- Giegerich, H. (1992), English Phonology: An Introduction, Cambridge: Cambridge University Press
- Gimson, A.C. (1962), An Introduction to the Pronunciation of English, London: Edward Arnold
- Gimson, A.C. (2008), Cruttenden, Alan, ed., Pronunciation of English, Hodder
- Gimson, A.C. (2014), Cruttenden, Alan, ed., Gimson's Pronunciation of English (8th ed.), Routledge, ISBN 9781444183092
- Hagiwara, R., Fosnot, S. M., & Alessi, D. M. (2002). “Acoustic phonetics in a clinical setting: A case study of /r/-distortion therapy with surgical intervention”. Clinical linguistics & phonetics, 16 (6): 425–441.
- Halliday, M.A.K. (1967), Intonation and Grammar in British English, Mouton
- Halliday, M.A.K. (1970), A Course in Spoken English: Intonation, London: Oxford University Press
- Harris, John (1994), English Sound Structure, Oxford: Blackwell
- Hoff, Erika, (2009), Language Development. Scarborough, Ontario. Cengage Learning, 2005.
- Howard, S. (2007), “The interplay between articulation and prosody in children with impaired speech: Observations from electropalatographic and perceptual analysis”. International Journal of Speech-Language Pathology, 9 (1): 20–35.
- Kingdon, Roger (1958), The Groundwork of English Intonation, London: Longman
- Kreidler, Charles (2004), The Pronunciation of English, Blackwell
- Ladefoged, Peter (2001), A Course in Phonetics (4th ed.; 5th ed. 2006), Fort Worth: Harcourt College Publishers, ISBN 0-15-507319-2
- Ladefoged, Peter (2001b), Vowels and Consonants, Blackwell, ISBN 0-631-21411-9
- Locke, John L. (1983), Phonological Acquisition and Change, New York: Academic Press
- McCully, C. (2009), The Sound Structure of English, Cambridge: Cambridge University Press
- McMahon, A. (2002), An Introduction to English Phonology, Edinburgh
- Nolan, Francis (1992), "The Descriptive Role of Segments: Evidence from Assimilation.", in Docherty, Gerard J.; Ladd, D. Robert, Papers in Laboratory Phonology II: Gesture, Segment, Prosody, New York: Cambridge University Press, pp. 261–280
- O'Connor, J. D.; Arnold, Gordon Frederick (1961), Intonation of Colloquial English, London: Longman
- Pike, Kenneth Lee (1945), The Intonation of American English, Ann Arbor: University of Michigan Press
- Read, Charles (1986), Children's Creative Spelling, Routledge, ISBN 0-7100-9802-2
- Roach, Peter (1982), "On the distinction between 'stress-timed' and 'syllable-timed' languages", in Crystal, David, Linguistic Controversies, Arnold
- Roach, Peter (2009), English Phonetics and Phonology: A Practical Course, 4th Ed., Cambridge: Cambridge University Press, ISBN 0-521-78613-4
- Roach, Peter (2004), "British English: Received Pronunciation", Journal of the International Phonetic Association, 34 (2): 239–245, doi:10.1017/S0025100304001768
- Roca, Iggy; Johnson, Wyn (1999), A Course in Phonology, Blackwell Publishing
- Selkirk, E. (1982), "The Syllable", in van der Hulst, H.; Smith, N., The Structure of Phonological Representations, Dordrecht: Foris
- Sharf, D.J., Benson, P.J. (1982), "Identification of synthesized /r/–/w/ continua for adult and child speakers". Journal of the Acoustical Society of America, 71 (4): 1008–1015.
- Tench, P. (1996), The Intonation Systems of English, Cassell
- Trager, George L.; Smith, Henry Lee (1951), An Outline of English Structure, Norman, OK: Battenburg Press
- Trudgill, Peter; Hannah, Jean (2002), International English: A Guide to the Varieties of Standard English (4th ed.), London: Arnold
- Wells, John C. (1982), Accents of English, Cambridge University Press
- Wells, John C. (1990), "Syllabification and allophony", in Ramsaran, Susan, Studies in the Pronunciation of English: A Commemorative Volume in Honour of A. C. Gimson, London: Routledge, pp. 76–86
- Wells, John C. (2006), English Intonation, Cambridge: Cambridge University Press
- Wells, John C. (2014), Sounds Interesting, Cambridge: Cambridge University Press
- Wise, Claude Merton (1957), Applied Phonetics, Englewood Cliffs, NJ: Prentice-Hall.
- Zsiga, Elizabeth (2003), "Articulatory Timing in a Second Language: Evidence from Russian and English", Studies in Second Language Acquisition, 25: 399–432, doi:10.1017/s0272263103000160
- Animation of all sounds of English classified by manner, place and voice
- Seeing Speech Accent Map
- Sounds of English (includes animations and descriptions)
- The sounds of English and the International Phonetic Alphabet (www.antimoon.com).
- The Chaos by Gerard Nolst Trenité.
- Chris Upwood on The Classic Concordance of Cacographic Chaos
Dec 1, 2017 - A collection of printable worksheets and activities on the nets of 3D shapes, widely shared on boards such as Ashleigh Cork's "Maths - Nets of a 3D Shape" on Pinterest. The net of a 3D shape is what the shape would look like if it were laid out flat; the unfolded shape is called the net of the solid. The worksheets ask pupils to identify which flat nets fold into which solids (cube, cuboid, rectangular prism, triangular prism, square-based pyramid, cylinder, cone and a selection of other prisms and pyramids), to match shapes to their nets, and to cut out, fold and glue the nets to build the solids, reinforcing ideas about faces, edges, vertices and surface area. The sheets are aimed at primary pupils, are available with and without tabs to aid sticking together, and printing on card is reported to work much better than ordinary paper.
The timbre of musical instruments can be considered in the light of Fourier theory to consist of multiple harmonic or inharmonic partials or overtones. Each partial is a sine wave of different frequency and amplitude that swells and decays over time due to modulation from an ADSR envelope or low frequency oscillator.
Additive synthesis most directly generates sound by adding the output of multiple sine wave generators. Alternative implementations may use pre-computed wavetables or the inverse Fast Fourier transform.
The sounds that are heard in everyday life are not characterized by a single frequency. Instead, they consist of a sum of pure sine frequencies, each one at a different amplitude. When humans hear these frequencies simultaneously, we can recognize the sound. This is true for both "non-musical" sounds (e.g. water splashing, leaves rustling, etc.) and for "musical sounds" (e.g. a piano note, a bird's tweet, etc.). This set of parameters (frequencies, their relative amplitudes, and how the relative amplitudes change over time) is encapsulated by the timbre of the sound. Fourier analysis is the technique that is used to determine these exact timbre parameters from an overall sound signal; conversely, the resulting set of frequencies and amplitudes is called the Fourier series of the original sound signal.
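As an illustration of this analysis step, the following short Python sketch (not part of the original text; it assumes NumPy and an artificially constructed test tone) uses the discrete Fourier transform to recover the frequencies and relative amplitudes of the partials in a signal:

```python
import numpy as np

# Illustrative sketch: estimate the partials (frequencies and relative amplitudes)
# of a synthetic test tone using the real FFT. Sample rate and tone are assumptions.
fs = 48000                                  # sample rate in Hz
t = np.arange(fs) / fs                      # exactly one second of samples

# A synthetic "instrument" tone: 220 Hz fundamental plus two weaker overtones.
signal = (1.00 * np.sin(2 * np.pi * 220 * t)
          + 0.50 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

spectrum = np.fft.rfft(signal)                    # complex spectrum of the real signal
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)    # frequency (Hz) of each bin
amps = 2 * np.abs(spectrum) / len(signal)         # per-bin sinusoid amplitude

# The strongest peaks are the partials that make up the timbre.
for i in np.argsort(amps)[-3:][::-1]:
    print(f"{freqs[i]:7.1f} Hz  amplitude {amps[i]:.2f}")
```

Running this prints the three partials at 220, 440 and 660 Hz with amplitudes 1.0, 0.5 and 0.25, i.e. exactly the parameters the tone was built from.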
In the case of a musical note, the lowest frequency of its timbre is designated as the sound's fundamental frequency. For simplicity, we often say that the note is playing at that fundamental frequency (e.g. "middle C is 261.6 Hz"), even though the sound of that note consists of many other frequencies as well. The set of the remaining frequencies is called the overtones (or the harmonics) of the sound. In other words, the fundamental frequency alone is responsible for the pitch of the note, while the overtones define the timbre of the sound. The overtones of a piano playing middle C will be quite different from the overtones of a violin playing the same note; that's what allows us to differentiate the sounds of the two instruments. There are even subtle differences in timbre between different versions of the same instrument (for example, an upright piano vs. a grand piano).
Additive synthesis aims to exploit this property of sound in order to construct timbre from the ground up. By adding together pure frequencies (sine waves) of varying frequencies and amplitudes, we can precisely define the timbre of the sound that we want to create.
Harmonic additive synthesis is closely related to the concept of a Fourier series, which is a way of expressing a periodic function as the sum of sinusoidal functions with frequencies equal to integer multiples of a common fundamental frequency. These sinusoids are called harmonics, overtones, or generally, partials. In general, a Fourier series contains an infinite number of sinusoidal components, with no upper limit to the frequency of the sinusoidal functions, and includes a DC component (one with a frequency of 0 Hz). Frequencies outside of the human audible range can be omitted in additive synthesis. As a result, only a finite number of sinusoidal terms with frequencies that lie within the audible range are modeled in additive synthesis.
A waveform or function $y(t)$ is said to be periodic if
$$y(t) = y(t + P)$$
for all $t$ and for some period $P$.
The Fourier series of a periodic function $y(t)$ is mathematically expressed as:
$$y(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k \cos(2\pi k f_0 t) + b_k \sin(2\pi k f_0 t)\right]$$
where $f_0 = 1/P$ is the fundamental frequency of the waveform and $a_k$ and $b_k$ are the Fourier coefficients of the $k$-th harmonic.
Being inaudible, the DC component $a_0/2$ and all components with frequencies higher than some finite limit, $K f_0$, are omitted in the following expressions of additive synthesis.
The simplest harmonic additive synthesis can be mathematically expressed as:
$$y(t) = \sum_{k=1}^{K} r_k \sin(2\pi k f_0 t + \phi_k)$$
where $y(t)$ is the synthesis output, $r_k$, $k f_0$, and $\phi_k$ are the amplitude, frequency, and phase offset, respectively, of the $k$-th harmonic partial of a total of $K$ harmonic partials, and $f_0$ is the fundamental frequency of the waveform and the frequency of the musical note.
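A minimal Python sketch of this summation, assuming NumPy and illustrative amplitude and phase values, might look like this:

```python
import numpy as np

def harmonic_additive(f0, amplitudes, phases, duration=1.0, fs=48000):
    """Sum K sine partials at integer multiples of f0, one term per harmonic,
    as in the harmonic additive synthesis expression above. The function name
    and default values are illustrative, not from the original text."""
    t = np.arange(int(duration * fs)) / fs
    y = np.zeros_like(t)
    for k, (r_k, phi_k) in enumerate(zip(amplitudes, phases), start=1):
        y += r_k * np.sin(2 * np.pi * k * f0 * t + phi_k)
    return y

# Example: an approximation of middle C (261.6 Hz) with four harmonics whose
# amplitudes fall off with harmonic number.
tone = harmonic_additive(261.6, amplitudes=[1.0, 0.5, 0.33, 0.25],
                         phases=[0.0, 0.0, 0.0, 0.0])
```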
Audio example: harmonic additive synthesis in which each harmonic has a time-dependent amplitude; the fundamental frequency is 440 Hz.
More generally, the amplitude of each harmonic can be prescribed as a function of time, $r_k(t)$, in which case the synthesis output is
$$y(t) = \sum_{k=1}^{K} r_k(t)\,\sin(2\pi k f_0 t + \phi_k).$$
Additive synthesis can also produce inharmonic sounds (which are aperiodic waveforms) in which the individual overtones need not have frequencies that are integer multiples of some common fundamental frequency. While many conventional musical instruments have harmonic partials (e.g. an oboe), some have inharmonic partials (e.g. bells). Inharmonic additive synthesis can be described as
$$y(t) = \sum_{k=1}^{K} r_k(t)\,\sin(2\pi f_k t + \phi_k)$$
where $f_k$ is the constant frequency of the $k$-th partial.
Audio example: inharmonic additive synthesis in which both the amplitude and frequency of each partial are time-dependent.
In the general case, the instantaneous frequency of a sinusoid is the derivative (with respect to time) of the argument of the sine or cosine function. If this frequency is represented in hertz, rather than in angular frequency form, then this derivative is divided by $2\pi$. This is the case whether the partial is harmonic or inharmonic and whether its frequency is constant or time-varying.
In the most general form, the frequency of each non-harmonic partial is a non-negative function of time, $f_k(t)$, yielding
$$y(t) = \sum_{k=1}^{K} r_k(t)\,\sin\!\left(2\pi \int_0^t f_k(u)\,du + \phi_k\right).$$
Additive synthesis more broadly may mean sound synthesis techniques that sum simple elements to create more complex timbres, even when the elements are not sine waves. For example, F. Richard Moore listed additive synthesis as one of the "four basic categories" of sound synthesis alongside subtractive synthesis, nonlinear synthesis, and physical modeling. In this broad sense, pipe organs, which also have pipes producing non-sinusoidal waveforms, can be considered as a variant form of additive synthesizers. Summation of principal components and Walsh functions have also been classified as additive synthesis.
Modern-day implementations of additive synthesis are mainly digital. (See section Discrete-time equations for the underlying discrete-time theory)
Oscillator bank synthesis
Additive synthesis can be implemented using a bank of sinusoidal oscillators, one for each partial.
In the case of harmonic, quasi-periodic musical tones, wavetable synthesis can be as general as time-varying additive synthesis, but requires less computation during synthesis. As a result, an efficient implementation of time-varying additive synthesis of harmonic tones can be accomplished by use of wavetable synthesis.
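A rough sketch of the idea, assuming NumPy, a 2048-sample table and illustrative harmonic amplitudes: one cycle of the summed harmonics is precomputed, and playback then only reads the table at a rate that gives the desired fundamental, rather than evaluating every sine at every sample.

```python
import numpy as np

# Illustrative sketch of single-cycle wavetable playback. Table size and
# harmonic amplitudes are assumptions, not values from the original text.
TABLE_SIZE = 2048
harmonic_amps = [1.0, 0.5, 0.33, 0.25]

cycle = np.arange(TABLE_SIZE) / TABLE_SIZE             # one period, 0..1
wavetable = sum(r * np.sin(2 * np.pi * (k + 1) * cycle)
                for k, r in enumerate(harmonic_amps))

def play_wavetable(f0, duration=1.0, fs=48000):
    """Read the precomputed cycle at a rate that yields fundamental f0."""
    n = np.arange(int(duration * fs))
    position = (n * f0 * TABLE_SIZE / fs) % TABLE_SIZE  # fractional table index
    return wavetable[position.astype(int)]              # nearest-neighbour lookup

tone = play_wavetable(440.0)
```

A production implementation would interpolate between table entries and cross-fade between several tables to make the amplitudes time-varying; the nearest-neighbour lookup here is kept deliberately simple.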
Group additive synthesis
Group additive synthesis is a method to group partials into harmonic groups (having different fundamental frequencies) and synthesize each group separately with wavetable synthesis before mixing the results.
Inverse FFT synthesis
An inverse Fast Fourier transform can be used to efficiently synthesize frequencies that evenly divide the transform period or "frame". By careful consideration of the DFT frequency-domain representation it is also possible to efficiently synthesize sinusoids of arbitrary frequencies using a series of overlapping frames and the inverse Fast Fourier transform.
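The simple case, in which every partial falls exactly on a transform bin, can be sketched as follows (assuming NumPy; the frame length, partial frequencies and amplitudes are illustrative):

```python
import numpy as np

# Illustrative sketch: synthesize one frame in which every partial falls exactly
# on an FFT bin, so a single inverse real FFT produces the summed sinusoids.
fs = 48000
frame_len = 4800                               # 0.1 s frame; bin spacing = fs/frame_len = 10 Hz
spectrum = np.zeros(frame_len // 2 + 1, dtype=complex)

for freq_hz, amp in [(440, 1.0), (880, 0.5), (1320, 0.25)]:
    k = int(round(freq_hz * frame_len / fs))   # bin index of this partial
    # -1j * amp * N/2 in bin k corresponds to amp * sin(2*pi*k*n/N) in the time domain.
    spectrum[k] = -1j * amp * frame_len / 2

frame = np.fft.irfft(spectrum, n=frame_len)    # one frame of synthesized audio
```

Arbitrary (non-bin-aligned) frequencies require the overlapping-frame approach mentioned above rather than this single-frame shortcut.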
It is possible to analyze the frequency components of a recorded sound giving a "sum of sinusoids" representation. This representation can be re-synthesized using additive synthesis. One method of decomposing a sound into time varying sinusoidal partials is short-time Fourier transform (STFT)-based McAulay-Quatieri Analysis.
By modifying the sum of sinusoids representation, timbral alterations can be made prior to resynthesis. For example, a harmonic sound could be restructured to sound inharmonic, and vice versa. Sound hybridisation or "morphing" has been implemented by additive resynthesis.
Additive analysis/resynthesis has been employed in a number of techniques including Sinusoidal Modelling, Spectral Modelling Synthesis (SMS), and the Reassigned Bandwidth-Enhanced Additive Sound Model. Software that implements additive analysis/resynthesis includes: SPEAR, LEMUR, LORIS, SMSTools, ARSS.
The New England Digital Synclavier had a resynthesis feature in which samples could be analyzed and converted into "timbre frames" that were part of its additive synthesis engine. The Technos Acxel, launched in 1987, used the additive analysis/resynthesis model in an FFT implementation.
The vocal synthesizer Vocaloid has also been implemented on the basis of additive analysis/resynthesis: its spectral voice model, called the Excitation plus Resonances (EpR) model, is an extension of Spectral Modeling Synthesis (SMS), and its diphone concatenative synthesis is processed using a spectral peak processing (SPP) technique similar to a modified phase-locked vocoder (an improved phase vocoder for formant processing). Using these techniques, spectral components (formants) consisting of purely harmonic partials can be appropriately transformed into the desired form for sound modeling, and a sequence of short samples (diphones or phonemes) constituting the desired phrase can be smoothly connected by interpolating matched partials and formant peaks in the inserted transition region between different samples. (See also Dynamic timbres)
Additive synthesis is used in electronic musical instruments. It is the principal sound generation technique used by Eminent organs.
Later, in the early 1980s, listening tests were carried out on synthetic speech stripped of acoustic cues to assess their significance. Time-varying formant frequencies and amplitudes derived by linear predictive coding were synthesized additively as pure tone whistles. This method is called sinewave synthesis. Composite sinusoidal modeling (CSM), used in a singing-speech synthesis feature of the Yamaha CX5M (1984), is known to use a similar approach, which was independently developed during 1966–1979. These methods are characterized by the extraction and recomposition of a set of significant spectral peaks corresponding to the several resonance modes occurring in the oral and nasal cavities, from an acoustics viewpoint. This principle was also utilized in a physical modeling synthesis method called modal synthesis.
Harmonic analysis was discovered by Joseph Fourier, who published an extensive treatise of his research in the context of heat transfer in 1822. The theory found an early application in prediction of tides. Around 1876, Lord Kelvin constructed a mechanical tide predictor. It consisted of a harmonic analyzer and a harmonic synthesizer, as they were called already in the 19th century. The analysis of tide measurements was done using James Thomson's integrating machine. The resulting Fourier coefficients were input into the synthesizer, which then used a system of cords and pulleys to generate and sum harmonic sinusoidal partials for prediction of future tides. In 1910, a similar machine was built for the analysis of periodic waveforms of sound. The synthesizer drew a graph of the combination waveform, which was used chiefly for visual validation of the analysis.
Georg Ohm applied Fourier's theory to sound in 1843. The line of work was greatly advanced by Hermann von Helmholtz, who published eight years' worth of research in 1863. Helmholtz believed that the psychological perception of tone color is subject to learning, while hearing in the sensory sense is purely physiological. He supported the idea that perception of sound derives from signals from nerve cells of the basilar membrane and that the elastic appendages of these cells are sympathetically vibrated by pure sinusoidal tones of appropriate frequencies. Helmholtz agreed with the finding of Ernst Chladni from 1787 that certain sound sources have inharmonic vibration modes.
In Helmholtz's time, electronic amplification was unavailable. For synthesis of tones with harmonic partials, Helmholtz built an electrically excited array of tuning forks and acoustic resonance chambers that allowed adjustment of the amplitudes of the partials. Built at least as early as 1862, these were in turn refined by Rudolph Koenig, who demonstrated his own setup in 1872. For harmonic synthesis, Koenig also built a large apparatus based on his wave siren. It was pneumatic and utilized cut-out tonewheels, and was criticized for the low purity of its partial tones. Tibia pipes of pipe organs also have nearly sinusoidal waveforms and can be combined in the manner of additive synthesis.
In 1938, with significant new supporting evidence, it was reported on the pages of Popular Science Monthly that the human vocal cords function like a fire siren to produce a harmonic-rich tone, which is then filtered by the vocal tract to produce different vowel tones. By that time, the additive Hammond organ was already on the market. Most early electronic organ makers thought it too expensive to manufacture the many oscillators required by additive organs, and began instead to build subtractive ones. In a 1940 Institute of Radio Engineers meeting, the head field engineer of Hammond elaborated on the company's new Novachord as having a "subtractive system", in contrast to the original Hammond organ in which "the final tones were built up by combining sound waves". Alan Douglas used the qualifiers additive and subtractive to describe different types of electronic organs in a 1948 paper presented to the Royal Musical Association. The contemporary wording additive synthesis and subtractive synthesis can be found in his 1957 book The Electrical Production of Music, in which he categorically lists three methods of forming musical tone-colours, in sections titled Additive synthesis, Subtractive synthesis, and Other forms of combinations.
A typical modern additive synthesizer produces its output as an electrical, analog signal, or as digital audio, such as in the case of software synthesizers, which became popular around year 2000.
The following is a timeline of historically and technologically notable analog and digital synthesizers and devices implementing additive synthesis.
| Research implementation or publication | Commercially available | Company or institution | Synthesizer or synthesis device | Description | Audio samples |
|---|---|---|---|---|---|
| 1900 | 1906 | New England Electric Music Company | Telharmonium | The first polyphonic, touch-sensitive music synthesizer. Implemented sinusoidal additive synthesis using tonewheels and alternators. Invented by Thaddeus Cahill. | no known recordings |
| 1933 | 1935 | Hammond Organ Company | Hammond Organ | An electronic additive synthesizer that was commercially more successful than the Telharmonium. Implemented sinusoidal additive synthesis using tonewheels and magnetic pickups. Invented by Laurens Hammond. | Model A |
| 1950 or earlier | | Haskins Laboratories | Pattern Playback | A speech synthesis system that controlled amplitudes of harmonic partials by a spectrogram that was either hand-drawn or an analysis result. The partials were generated by a multi-track optical tonewheel. | samples |
| 1958 | | | ANS | An additive synthesizer that played microtonal spectrogram-like scores using multiple multi-track optical tonewheels. Invented by Evgeny Murzin. A similar instrument that utilized electronic oscillators, the Oscillator Bank, and its input device Spectrogram were realized by Hugh Le Caine in 1959. | 1964 model |
| 1963 | | MIT | | An off-line system for digital spectral analysis and resynthesis of the attack and steady-state portions of musical instrument timbres by David Luce. | |
| 1964 | | University of Illinois | Harmonic Tone Generator | An electronic, harmonic additive synthesis system invented by James Beauchamp. | samples |
| 1974 or earlier | 1974 | RMI | Harmonic Synthesizer | The first synthesizer product that implemented additive synthesis using digital oscillators. The synthesizer also had a time-varying analog filter. RMI was a subsidiary of Allen Organ Company, which had released the first commercial digital church organ, the Allen Computer Organ, in 1971, using digital technology developed by North American Rockwell. | samples |
| 1974 | | EMS (London) | Digital Oscillator Bank | A bank of digital oscillators with arbitrary waveforms, individual frequency and amplitude controls, intended for use in analysis-resynthesis with the digital Analysing Filter Bank (AFB) also constructed at EMS. Also known as the DOB. | in The New Sound of Music |
| 1976 | 1976 | Fairlight | Qasar M8 | An all-digital synthesizer that used the Fast Fourier transform to create samples from interactively drawn amplitude envelopes of harmonics. | samples |
| 1977 | | Bell Labs | Digital Synthesizer | A real-time, digital additive synthesizer that has been called the first true digital synthesizer. Also known as the Alles Machine or Alice. | sample |
| 1979 | 1979 | New England Digital | Synclavier II | A commercial digital synthesizer that enabled development of timbre over time by smooth cross-fades between waveforms generated by additive synthesis. | Jon Appleton - Sashasonjon |
In digital implementations of additive synthesis, discrete-time equations are used in place of the continuous-time synthesis equations. A notational convention for discrete-time signals uses brackets, i.e. $y[n]$, and the argument $n$ can only take integer values. If the continuous-time synthesis output $y(t)$ is expected to be sufficiently bandlimited (below half the sampling rate, or $F_s/2$), it suffices to directly sample the continuous-time expression to get the discrete synthesis equation. The continuous synthesis output can later be reconstructed from the samples using a digital-to-analog converter. The sampling period is $T = 1/F_s$.
Beginning with the most general expression above,
$$y(t) = \sum_{k=1}^{K} r_k(t)\,\sin\!\left(2\pi \int_0^t f_k(u)\,du + \phi_k\right)$$
and sampling at discrete times $t = nT = n/F_s$ results in
$$y[n] = \sum_{k=1}^{K} r_k[n]\,\sin\!\left(\frac{2\pi}{F_s}\sum_{i=1}^{n} f_k[i] + \phi_k\right)$$
where
- $r_k[n] = r_k(nT)$ is the discrete-time varying amplitude envelope
- $f_k[n]$ is the discrete-time backward difference instantaneous frequency.
This is equivalent to
$$y[n] = \sum_{k=1}^{K} r_k[n]\,\sin(\theta_k[n])$$
where
$$\theta_k[n] = \theta_k[n-1] + \frac{2\pi}{F_s} f_k[n]$$
for all $n > 0$, with $\theta_k[0] = \phi_k$.
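A direct, unoptimized Python sketch of these discrete-time equations (assuming NumPy; the function name and the envelope data are illustrative) accumulates a running phase for each partial and scales it by its amplitude envelope:

```python
import numpy as np

def additive_discrete(amp_envelopes, freq_envelopes, phases, fs=48000):
    """Illustrative sketch of the discrete-time equations above.

    amp_envelopes, freq_envelopes: arrays of shape (K, N) holding r_k[n] and
    f_k[n] for K partials over N samples; phases holds the K offsets phi_k.
    Each partial keeps a running phase theta_k[n] = theta_k[n-1] + 2*pi*f_k[n]/fs.
    """
    amp = np.asarray(amp_envelopes, dtype=float)
    freq = np.asarray(freq_envelopes, dtype=float)
    theta = np.cumsum(2 * np.pi * freq / fs, axis=1) + np.asarray(phases)[:, None]
    return np.sum(amp * np.sin(theta), axis=0)

# Example: a decaying 440 Hz partial plus a softer 880 Hz partial with 6 Hz vibrato.
fs = 48000
n = np.arange(fs)
amps = np.vstack([np.linspace(1.0, 0.0, fs), np.full(fs, 0.3)])
freqs = np.vstack([np.full(fs, 440.0),
                   880.0 + 5.0 * np.sin(2 * np.pi * 6 * n / fs)])
y = additive_discrete(amps, freqs, phases=[0.0, 0.0], fs=fs)
```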
Julius O. Smith III. "Additive Synthesis (Early Sinusoidal Modeling)". Retrieved 14 January 2012.
The term "additive synthesis" refers to sound being formed by adding together many sinusoidal components
- Gordon Reid. "Synth Secrets, Part 14: An Introduction To Additive Synthesis". Sound on Sound (January 2000). Retrieved 14 January 2012.
- Mottola, Liutaio (31 May 2017). "Table of Musical Notes and Their Frequencies and Wavelengths".
- "Fundamental Frequency and Harmonics".
- Smith III, Julius O.; Serra, Xavier (2005). "Additive Synthesis". PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation. Proceedings of the International Computer Music Conference (ICMC-87, Tokyo), Computer Music Association, 1987. CCRMA, Department of Music, Stanford University. Retrieved 11 January 2015. (online reprint)
- Smith III, Julius O. (2011). "Additive Synthesis (Early Sinusoidal Modeling)". Spectral Audio Signal Processing. CCRMA, Department of Music, Stanford University. ISBN 978-0-9745607-3-1. Retrieved 9 January 2012.
- Roads, Curtis (1995). The Computer Music Tutorial. MIT Press. p. 134. ISBN 978-0-262-68082-0.
- Moore, F. Richard (1995). Foundations of Computer Music. Prentice Hall. p. 16. ISBN 978-0-262-68082-0.
- Roads, Curtis (1995). The Computer Music Tutorial. MIT Press. pp. 150–153. ISBN 978-0-262-68082-0.
- Robert Bristow-Johnson (November 1996). "Wavetable Synthesis 101, A Fundamental Perspective" (PDF). Archived from the original (PDF) on 15 June 2013. Retrieved 21 May 2005.
- Andrew Horner (November 1995). "Wavetable Matching Synthesis of Dynamic Instruments with Genetic Algorithms". Journal of the Audio Engineering Society. 43 (11): 916–931.
- Julius O. Smith III. "Group Additive Synthesis". CCRMA, Stanford University. Archived from the original on 6 June 2011. Retrieved 12 May 2011.
- P. Kleczkowski (1989). "Group additive synthesis". Computer Music Journal. 13 (1): 12–20. doi:10.2307/3679851. JSTOR 3679851.
- B. Eaglestone and S. Oates (1990). "Analytical tools for group additive synthesis". Proceedings of the 1990 International Computer Music Conference, Glasgow. Computer Music Association.
- Rodet, X.; Depalle, P. (1992). "Spectral Envelopes and Inverse FFT Synthesis". Proceedings of the 93rd Audio Engineering Society Convention. CiteSeerX 10.1.1.43.4818.
- McAulay, R. J.; Quatieri, T. F. (1988). "Speech Processing Based on a Sinusoidal Model" (PDF). The Lincoln Laboratory Journal. 1 (2): 153–167. Archived from the original (PDF) on 21 May 2012. Retrieved 9 December 2013.
- McAulay, R. J.; Quatieri, T. F. (August 1986). "Speech analysis/synthesis based on a sinusoidal representation". IEEE Transactions on Acoustics, Speech, Signal Processing ASSP-34: 744–754.
- "McAulay-Quatieri Method".
- Serra, Xavier (1989). A System for Sound Analysis/Transformation/Synthesis based on a Deterministic plus Stochastic Decomposition (PhD thesis). Stanford University. Retrieved 13 January 2012.
- Smith III, Julius O.; Serra, Xavier. "PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation". Retrieved 9 January 2012.
- Fitz, Kelly (1999). The Reassigned Bandwidth-Enhanced Method of Additive Synthesis (PhD thesis). Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. CiteSeerX 10.1.1.10.1130.
- SPEAR Sinusoidal Partial Editing Analysis and Resynthesis for Mac OS X, MacOS 9 and Windows
- "Loris Software for Sound Modeling, Morphing, and Manipulation". Archived from the original on 30 July 2012. Retrieved 13 January 2012.
- SMSTools application for Windows
- ARSS: The Analysis & Resynthesis Sound Spectrograph
- Bonada, J.; Celma, O.; Loscos, A.; Ortola, J.; Serra, X.; Yoshioka, Y.; Kayama, H.; Hisaminato, Y.; Kenmochi, H. (2001). "Singing voice synthesis combining Excitation plus Resonance and Sinusoidal plus Residual Models". Proc. Of ICMC. CiteSeerX 10.1.1.18.6258. (PDF)
Loscos, A. (2007). Spectral processing of the singing voice (PhD thesis). Barcelona, Spain: Pompeu Fabra University. hdl:10803/7542. (PDF).
See "Excitation plus resonances voice model" (p. 51)
- Loscos 2007, p. 44, "Spectral peak processing"
- Loscos 2007, p. 44, "Phase locked vocoder"
- Bonada, Jordi; Loscos, Alex (2003). "Sample-based singing voice synthesizer by spectral concatenation: 6. Concatenating Samples". Proc. of SMAC 03: 439–442.
- Cooper, F. S.; Liberman, A. M.; Borst, J. M. (May 1951). "The interconversion of audible and visible patterns as a basis for research in the perception of speech". Proc. Natl. Acad. Sci. U.S.A. 37 (5): 318–25. Bibcode:1951PNAS...37..318C. doi:10.1073/pnas.37.5.318. PMC 1063363. PMID 14834156.
- Remez, R.E.; Rubin, P.E.; Pisoni, D.B.; Carrell, T.D. (1981). "Speech perception without traditional speech cues". Science. 212 (4497): 947–950. Bibcode:1981Sci...212..947R. doi:10.1126/science.7233191. PMID 7233191.
- Rubin, P.E. (1980). "Sinewave Synthesis Instruction Manual (VAX)" (PDF). Internal Memorandum. Haskins Laboratories, New Haven, CT.
- Sagayama, S.; Itakura, F. (1979), "複合正弦波による音声合成" [Speech Synthesis by Composite Sinusoidal Wave], Speech Committee of Acoustical Society of Japan (published October 1979), S79-39
- Sagayama, S.; Itakura, F. (1979), "複合正弦波による簡易な音声合成法" [Simple Speech Synthesis method by Composite Sinusoidal Wave], Proceedings of Acoustical Society of Japan, Autumn Meeting (published October 1979), 3-2-3, pp. 557–558
- Sagayama, S.; Itakura, F. (1986). "Duality theory of composite sinusoidal modeling and linear prediction". ICASSP '86. IEEE International Conference on Acoustics, Speech, and Signal Processing. Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP '86. 11 (published April 1986). pp. 1261–1264. doi:10.1109/ICASSP.1986.1168815.
Itakura, F. (2004). "Linear Statistical Modeling of Speech and its Applications -- Over 36-year history of LPC --" (PDF). Proceedings of the 18th International Congress on Acoustics (ICA 2004), We3.D, Kyoto, Japan, Apr. 2004. (published April 2004). 3: III–2077–2082.
6. Composite Sinusoidal Modeling(CSM) In 1975, Itakura proposed the line spectrum representation (LSR) concept and its algorithm to obtain a set of parameters for new speech spectrum representation. Independently from this, Sagayama developed a composite sinusoidal modeling (CSM) concept which is equivalent to LSR but give a quite different formulation, solving algorithm and synthesis scheme. Sagayama clarified the duality of LPC and CSM and provided the unified view covering LPC, PARCOR, LSR, LSP and CSM, CSM is not only an new concept of speech spectrum analysis but also a key idea to understand the linear prediction from a unified point of view. ...
- Adrien, Jean-Marie (1991). "The missing link: modal synthesis". In Giovanni de Poli; Aldo Piccialli; Curtis Roads (eds.). Representations of Musical Signals. Cambridge, MA: MIT Press. pp. 269–298. ISBN 978-0-262-04113-3.
- Morrison, Joseph Derek (IRCAM); Adrien, Jean-Marie (1993). "MOSAIC: A Framework for Modal Synthesis". Computer Music Journal. 17 (1): 45–56. doi:10.2307/3680569. JSTOR 3680569.
Bilbao, Stefan (October 2009), "Modal Synthesis", Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics, Chichester, UK: John Wiley and Sons, ISBN 978-0-470-51046-9,
A different approach, with a long history of use in physical modeling sound synthesis, is based on a frequency-domain, or modal description of vibration of objects of potentially complex geometry. Modal synthesis [1,148], as it is called, is appealing, in that the complex dynamic behaviour of a vibrating object may be decomposed into contributions from a set of modes (the spatial forms of which are eigenfunctions of the particular problem at hand, and are dependent on boundary conditions), each of which oscillates at a single complex frequency. ...(See also companion page)
Doel, Kees van den; Pai, Dinesh K. (2003). Greenebaum, K. (ed.). "Modal Synthesis For Vibrating Object" (PDF). Audio Anecdotes. Natick, MA: AK Peter.
When a solid object is struck, scraped, or engages in other external interactions, the forces at the contact point causes deformations to propagate through the body, causing its outer surfaces to vibrate and emit sound waves. ... A good physically motivated synthesis model for objects like this is modal synthesis ... where a vibrating object is modeled by a bank of damped harmonic oscillators which are excited by an external stimulus.
- Prestini, Elena (2004) [Rev. ed of: Applicazioni dell'analisi armonica. Milan: Ulrico Hoepli, 1996]. The Evolution of Applied Harmonic Analysis: Models of the Real World. trans. New York, USA: Birkhäuser Boston. pp. 114–115. ISBN 978-0-8176-4125-2. Retrieved 6 February 2012.
- Fourier, Jean Baptiste Joseph (1822). Théorie analytique de la chaleur [The Analytical Theory of Heat] (in French). Paris, France: Chez Firmin Didot, père et fils.
- Miller, Dayton Clarence (1926) [First published 1916]. The Science of Musical Sounds. New York: The Macmillan Company. pp. 110, 244–248.
- The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. Taylor & Francis. 49: 490. 1875.
- Thomson, Sir W. (1878). "Harmonic analyzer". Proceedings of the Royal Society of London. Taylor and Francis. 27 (185–189): 371–373. doi:10.1098/rspl.1878.0062. JSTOR 113690.
- Cahan, David (1993). Cahan, David (ed.). Hermann von Helmholtz and the foundations of nineteenth-century science. Berkeley and Los Angeles, USA: University of California Press. pp. 110–114, 285–286. ISBN 978-0-520-08334-9.
- Helmholtz, von, Hermann (1863). Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik [On the sensations of tone as a physiological basis for the theory of music] (in German) (1st ed.). Leipzig: Leopold Voss. pp. v.
- Christensen, Thomas Street (2002). The Cambridge History of Western Music. Cambridge, United Kingdom: Cambridge University Press. pp. 251, 258. ISBN 978-0-521-62371-1.
- von Helmholtz, Hermann (1875). On the sensations of tone as a physiological basis for the theory of music. London, United Kingdom: Longmans, Green, and co. pp. xii, 175–179.
- Russell, George Oscar (1936). Year book - Carnegie Institution of Washington (1936). Carnegie Institution of Washington: Year Book. 35. Washington: Carnegie Institution of Washington. pp. 359–363.
- Lodge, John E. (April 1938). Brown, Raymond J. (ed.). "Odd Laboratory Tests Show Us How We Speak: Using X Rays, Fast Movie Cameras, and Cathode-Ray Tubes, Scientists Are Learning New Facts About the Human Voice and Developing Teaching Methods To Make Us Better Talkers". Popular Science Monthly. New York, USA: Popular Science Publishing. 132 (4): 32–33.
- Comerford, P. (1993). "Simulating an Organ with Additive Synthesis". Computer Music Journal. 17 (2): 55–65. doi:10.2307/3680869. JSTOR 3680869.
- "Institute News and Radio Notes". Proceedings of the IRE. 28 (10): 487–494. 1940. doi:10.1109/JRPROC.1940.228904.
- Douglas, A. (1948). "Electrotonic Music". Proceedings of the Royal Musical Association. 75: 1–12. doi:10.1093/jrma/75.1.1.
- Douglas, Alan Lockhart Monteith (1957). The Electrical Production of Music. London, UK: Macdonald. pp. 140, 142.
- Pejrolo, Andrea; DeRosa, Rich (2007). Acoustic and MIDI orchestration for the contemporary composer. Oxford, UK: Elsevier. pp. 53–54.
- Weidenaar, Reynold (1995). Magic Music from the Telharmonium. Lanham, MD: Scarecrow Press. ISBN 978-0-8108-2692-2.
- Moog, Robert A. (October–November 1977). "Electronic Music". Journal of the Audio Engineering Society. 25 (10/11): 856.
- Olsen, Harvey (14 December 2011). Brown, Darren T. (ed.). "Leslie Speakers and Hammond organs: Rumors, Myths, Facts, and Lore". The Hammond Zone. Hammond Organ in the U.K. Archived from the original on 1 September 2012. Retrieved 20 January 2012.
- Holzer, Derek (22 February 2010). "A brief history of optical synthesis". Retrieved 13 January 2012.
Vail, Mark (1 November 2002). "Eugeniy Murzin's ANS – Additive Russian synthesizer". Keyboard Magazine: 120.
- Young, Gayle. "Oscillator Bank (1959)".
- Young, Gayle. "Spectrogram (1959)".
- Luce, David Alan (1963). Physical correlates of nonpercussive musical instrument tones. Cambridge, Massachusetts, U.S.A.: Massachusetts Institute of Technology. hdl:1721.1/27450.
- Beauchamp, James (17 November 2009). "The Harmonic Tone Generator: One of the First Analog Voltage-Controlled Synthesizers". Prof. James W. Beauchamp Home Page.
- Beauchamp, James W. (October 1966). "Additive Synthesis of Harmonic Musical Tones". Journal of the Audio Engineering Society. 14 (4): 332–342.
- "RMI Harmonic Synthesizer". Synthmuseum.com. Archived from the original on 9 June 2011. Retrieved 12 May 2011.
- Reid, Gordon. "PROG SPAWN! The Rise And Fall of Rocky Mount Instruments (Retro)". Sound on Sound (December 2001). Archived from the original on 25 December 2011. Retrieved 22 January 2012.
- Flint, Tom. "Jean Michel Jarre: 30 Years of Oxygene". Sound on Sound (February 2008). Retrieved 22 January 2012.
- "Allen Organ Company". fundinguniverse.com.
- Cosimi, Enrico (20 May 2009). "EMS Story - Prima Parte" [EMS Story - Part One]. Audio Accordo.it (in Italian). Retrieved 21 January 2012.
- Hinton, Graham (2002). "EMS: The Inside Story". Electronic Music Studios (Cornwall). Archived from the original on 21 May 2013.
- The New Sound of Music (TV). UK: BBC. 1979. Includes a demonstration of DOB and AFB.
- Leete, Norm. "Fairlight Computer – Musical Instrument (Retro)". Sound on Sound (April 1999). Retrieved 29 January 2012.
- Twyman, John (1 November 2004). (inter)facing the music: The history of the Fairlight Computer Musical Instrument (PDF) (Bachelor of Science (Honours) thesis). Unit for the History and Philosophy of Science, University of Sydney. Retrieved 29 January 2012.
- Street, Rita (8 November 2000). "Fairlight: A 25-year long fairytale". Audio Media magazine. IMAS Publishing UK. Archived from the original on 8 October 2003. Retrieved 29 January 2012.
- "Computer Music Journal" (JPG). 1978. Retrieved 29 January 2012.
- Leider, Colby (2004). "The Development of the Modern DAW". Digital Audio Workstation. McGraw-Hill. p. 58.
- Chadabe, Joel (1997). Electric Sound. Upper Saddle River, N.J., U.S.A.: Prentice Hall. pp. 177–178, 186. ISBN 978-0-13-303231-4.
Key Stage 1 school lessons and activities: KS1 worksheets for Year 1 and Year 2 maths, English and ICT. Key Stage One contains fun learning activities designed for Years 1 and 2. All activities are designed for children aged between 6 and 8 and are created to build children's knowledge and skills.
The Year 1 worksheets and lessons section has a list of maths, English and ICT worksheets for children in Year 1. They can enjoy these while growing their skills, and the topics cover the basics in both subjects. So, have fun while gaining knowledge!
This section comprises Year 1 Literacy Activities, covering the skills of English grammar, comprehension, spelling, phonics and writing. Answers are marked at the end of each activity.
Our Year 1 Numeracy Activities offer interactive Key Stage 1 maths worksheets presenting the essential topics in the Year 1 maths curriculum, for instance Addition, Subtraction, Multiplication, Division, Mixed Operations, Number Patterns, Ordering Numbers, Counting Numbers and many more.
Year 2 worksheets, lessons and activities for Year 2 children. The topics cover all the important Year 2 skills in maths and English, and they are all easy to pick up and learn, so children can learn while having fun.
Year 2 Literacy Activities provide English activities for Year 2 children that cover the main areas of the English language, such as grammar, comprehension, spelling and writing. Children can check how they have done by marking the answers at the end of each activity.
The K8 School Key Stage 1 maths activities consist of effective maths activities covering many important areas of the Year 2 maths syllabus, for example Addition, Subtraction, Multiplication, Division, Mixed Operations, Number Patterns, Ordering Numbers, Counting Numbers etc.
Our Year 2 Numeracy Lessons help children with Year 2 maths lessons and activities by teaching the concepts and strategies for working out different calculations under topics such as addition, subtraction, multiplication, division and many more.
The British Agricultural Revolution, or Second Agricultural Revolution, was an unprecedented increase in agricultural production in Britain arising from increases in labour and land productivity between the mid-17th and late 19th centuries. Agricultural output grew faster than the population over the hundred-year period ending in 1770, and thereafter productivity remained among the highest in the world. This increase in the food supply contributed to the rapid growth of population in England and Wales, from 5.5 million in 1700 to over 9 million by 1801, though domestic production gave way increasingly to food imports in the nineteenth century as the population more than tripled to over 35 million. Using 1700 as a base year (=100), agricultural output per agricultural worker in Britain steadily increased from about 50 in 1500, to around 65 in 1550, to 90 in 1600, to over 100 by 1650, to over 150 by 1750, rapidly increasing to over 250 by 1850. The rise in productivity accelerated the decline of the agricultural share of the labour force, adding to the urban workforce on which industrialization depended: the Agricultural Revolution has therefore been cited as a cause of the Industrial Revolution.
However, historians continue to dispute when exactly such a "revolution" took place and of what it consisted. Rather than a single event, G. E. Mingay states that there were a "profusion of agricultural revolutions, one for two centuries before 1650, another emphasising the century after 1650, a third for the period 1750–1780, and a fourth for the middle decades of the nineteenth century". This has led more recent historians to argue that any general statements about "the Agricultural Revolution" are difficult to sustain.
One important change in farming methods was the move in crop rotation to turnips and clover in place of fallow. Turnips can be grown in winter and are deep-rooted, allowing them to gather minerals unavailable to shallow-rooted crops. Clover fixes nitrogen from the atmosphere into a form of fertiliser. This permitted the intensive arable cultivation of light soils on enclosed farms and provided fodder to support increased livestock numbers whose manure added further to soil fertility.
The British Agricultural Revolution was the result of the complex interaction of social, economic and farming technological changes. Major developments and innovations, discussed in the sections below, included new crop rotations, improved implements such as the Rotherham plough, new crops such as the turnip and the potato, enclosure, and the growth of national markets and improved transport.
[Table: crop yield net of seed. Yields are shown with the seed used to plant the crop subtracted, giving net yields; the average annual growth rate of agricultural output is per agricultural worker.]
One of the most important innovations of the British Agricultural Revolution was the development of the Norfolk four-course rotation, which greatly increased crop and livestock yields by improving soil fertility and reducing fallow.
Crop rotation is the practice of growing a series of dissimilar types of crops in the same area in sequential seasons to help restore plant nutrients and mitigate the build-up of pathogens and pests that often occurs when one plant species is continuously cropped. Rotation can also improve soil structure and fertility by alternating deep-rooted and shallow-rooted plants. Turnip roots, for example, can recover nutrients from deep under the soil. The Norfolk four-course system, as it is now known, rotates crops so that different crops are planted with the result that different kinds and quantities of nutrients are taken from the soil as the plants grow. An important feature of the Norfolk four-field system was that it used labour at times when demand was not at peak levels.
Planting cover crops such as turnips and clover was not permitted under the common field system because they interfered with access to the fields. Besides, other people's livestock could graze the turnips.
During the Middle Ages, the open field system had initially used a two-field crop rotation system, in which one field was left fallow or turned into pasture for a time to try to recover some of its plant nutrients. Later, a three-year, three-field crop rotation routine was employed: a cereal such as oats, rye, wheat or barley in one field, a legume like peas or beans in the second, and the third field left fallow. Normally from 10% to 30% of the arable land in a three-crop rotation system was fallow. Each field was rotated into a different crop nearly every year. Over the following two centuries, the regular planting of legumes such as peas and beans in the fields that were previously fallow slowly restored the fertility of some croplands. The planting of legumes helped to increase plant growth in the empty field due to the ability of the bacteria on legume roots to fix nitrogen (N2) from the air into the soil in a form that plants could use. Other crops that were occasionally grown were flax and members of the mustard family.
Convertible husbandry was the alternation of a field between pasture and grain. Because nitrogen builds up slowly over time in pasture, ploughing up pasture and planting grains resulted in high yields for a few years. A big disadvantage of convertible husbandry was the hard work in breaking up pastures and difficulty in establishing them. The significance of convertible husbandry is that it introduced pasture into the rotation.
The farmers in Flanders (in parts of France and current day Belgium) discovered a still more effective four-field crop rotation system, using turnips and clover (a legume) as forage crops to replace the three-year crop rotation fallow year.
The four-field rotation system allowed farmers to restore soil fertility and restore some of the plant nutrients removed with the crops. Turnips first show up in the probate records in England as early as 1638 but were not widely used till about 1750. Fallow land was about 20% of the arable area in England in 1700, before turnips and clover were extensively grown in the 1830s. Guano and nitrates from South America were introduced in the mid-19th century, and fallow steadily declined to reach only about 4% in 1900. Ideally, wheat, barley, turnips and clover would be planted in that order in each field in successive years. The turnips helped keep the weeds down and were an excellent forage crop: ruminant animals could eat their tops and roots through a large part of the summer and winter. There was no need to let the soil lie fallow, as clover would re-add nitrates (nitrogen-containing salts) to the soil. The clover made excellent pasture and hay fields as well as green manure when it was ploughed under after one or two years. The addition of clover and turnips allowed more animals to be kept through the winter, which in turn produced more milk, cheese, meat and manure, which maintained soil fertility and sustained the level of crop production.
The mix of crops also changed: the area under wheat rose by 1870 to 3.5 million acres (1.4m ha), barley to 2.25m acres (0.9m ha) and oats less dramatically to 2.75m acres (1.1m ha), while rye dwindled to 60,000 acres (25,000 ha), less than a tenth of its late medieval peak. Grain yields benefited from new and better seed alongside improved rotation and fertility: wheat yields increased by a quarter in the 18th century and nearly half in the 19th, averaging 30 bushels per acre (2,080 kg/ha) by the 1890s.
The Dutch acquired the iron-tipped, curved mouldboard, adjustable depth plough from the Chinese in the early 17th century. It had the advantage of being able to be pulled by one or two oxen compared to the six or eight needed by the heavy wheeled northern European plough. The Dutch plough was brought to Britain by Dutch contractors who were hired to drain East Anglian fens and Somerset moors. The plough was extremely successful on wet, boggy soil, but was soon used on ordinary land.
British improvements included Joseph Foljambe's cast iron plough (patented 1730), which combined an earlier Dutch design with a number of innovations. Its fittings and coulter were made of iron and the mouldboard and share were covered with an iron plate, making it easier to pull and more controllable than previous ploughs. By the 1760s Foljambe was making large numbers of these ploughs in a factory outside of Rotherham, England, using standard patterns with interchangeable parts. The plough was easy for a blacksmith to make, but by the end of the 18th century it was being made in rural foundries. By 1770 it was the cheapest and best plough available. It spread to Scotland, America, and France.
The Columbian exchange brought many new foodstuffs from the Americas to Eurasia, most of which took decades or centuries to catch on. Arguably the most important of these was the potato. Potatoes yielded about three times the calories per acre of wheat or barley, due in large part to taking only 3–4 months to mature versus 10 months for wheat. On top of this, potatoes had higher nutritive value than wheat, could be grown in even fallow and nutrient-poor soil, did not require any special tools, and were considered fairly appetizing. According to Langer, a single acre of potatoes could feed a family of five or six, plus a cow, for the better part of a year, an unprecedented level of production. By 1715 the potato was widespread in the Low Countries, the Rhineland, Southwestern Germany, and Eastern France, but took longer to spread elsewhere.
The Royal Society of London for Improving Natural Knowledge, established in 1660, almost immediately championed the potato, stressing its value as a substitute for wheat (particularly since famine periods for wheat overlapped with bumper periods for potatoes). The famines of 1740 buttressed their case. The mid-18th century was marked by rapid adoption of the potato by various European countries, especially in central Europe, as various wheat famines demonstrated its value. The potato had been grown in Ireland, a property of the English crown and common source of food exports, since the early 17th century and quickly spread, so that by the 18th century it had been firmly established as a staple food. It spread to England shortly after taking hold in Ireland, first being widely cultivated in Lancashire and around London, and by the mid-18th century it was esteemed and common. By the late 18th century, Sir Frederick Eden wrote that the potato had become "a constant standing dish, at every meal, breakfast excepted, at the tables of the Rich, as well as the Poor."
While not as vital as the potato, maize also contributed to the boost of Western European agricultural productivity. Maize also had far higher per-acre productivity than wheat (about two and a half times), grew at widely differing altitudes and in a variety of soils (though warmer climates were preferred), and, unlike wheat, could be harvested in successive years from the same plot of land. It was often grown alongside potatoes, as maize plants required wide spacing. Maize was cultivated in Spain from 1525 and Italy from 1530, contributing to their growing populations in the early modern era as it became a dietary staple in the 17th century (in Italy it was often made into polenta). It spread from northern Italy into Germany and beyond, becoming an important staple in the Habsburg monarchy (especially Hungary and Austria) by the late 17th century. Its spread started in southern France in 1565, and by the start of the 18th century it was the main food source of central and southern French peasants (it was more popular as animal fodder in the north).
In Europe, agriculture was feudal from the Middle Ages. In the traditional open field system, many subsistence farmers cropped strips of land in large fields held in common and divided the produce. They typically worked under the auspices of the aristocracy or the Catholic Church, who owned much of the land.
As early as the 12th century, some fields in England tilled under the open field system were enclosed into individually owned fields. The Black Death from 1348 onward accelerated the break-up of the feudal system in England. Many farms were bought by yeomen who enclosed their property and improved their use of the land. More secure control of the land allowed the owners to make innovations that improved their yields. Other husbandmen rented property they "share cropped" with the land owners. Many of these enclosures were accomplished by acts of Parliament in the 16th and 17th centuries.
The process of enclosing property accelerated in the 15th and 16th centuries. The more productive enclosed farms meant that fewer farmers were needed to work the same land, leaving many villagers without land and grazing rights. Many of them moved to the cities in search of work in the emerging factories of the Industrial Revolution. Others settled in the English colonies. English Poor Laws were enacted to help these newly poor.
Some practices of enclosure were denounced by the Church, and legislation was drawn up against it; but the large, enclosed fields were needed for the gains in agricultural productivity from the 16th to 18th centuries. This controversy led to a series of government acts, culminating in the General Enclosure Act of 1801 which sanctioned large-scale land reform.
The process of enclosure was largely complete by the end of the 18th century.
Regional markets were widespread by 1500 with about 800 locations in Britain. The most important development between the 16th century and the mid-19th century was the development of private marketing. By the 19th century, marketing was nationwide and the vast majority of agricultural production was for market rather than for the farmer and his family. The 16th-century market radius was about 10 miles, which could support a town of 10,000.
The next stage of development was trading between markets, requiring merchants, credit and forward sales, knowledge of markets and pricing and of supply and demand in different markets. Eventually, the market evolved into a national one driven by London and other growing cities. By 1700, there was a national market for wheat.
Legislation regulating middlemen required registration, addressed weights and measures, fixing of prices and collection of tolls by the government. Market regulations were eased in 1663 when people were allowed some self-regulation to hold inventory, but it was forbidden to withhold commodities from the market in an effort to increase prices. In the late 18th century, the idea of self-regulation was gaining acceptance.
The lack of internal tariffs, customs barriers and feudal tolls made Britain "the largest coherent market in Europe".
High wagon transportation costs made it uneconomical to ship commodities very far outside the market radius by road, generally limiting shipment to less than 20 or 30 miles to market or to a navigable waterway. Water transport was, and in some cases still is, much more efficient than land transport. In the early 19th century it cost as much to transport a ton of freight 32 miles by wagon over an unimproved road as it did to ship it 3000 miles across the Atlantic.A horse could pull at most one ton of freight on a Macadam road, which was multi-layer stone covered and crowned, with side drainage. But a single horse could pull a barge weighing over 30 tons.
Commerce was aided by the expansion of roads and inland waterways. Road transport capacity grew three- to fourfold between 1500 and 1700.
Railroads would eventually reduce the cost of land transport by over 95%; however they did not become important until after 1850.
Another way to get more land was to convert some pasture land into arable land and recover fen land and some pastures. It is estimated that the amount of arable land in Britain grew by 10–30% through these land conversions.
The British Agricultural Revolution was aided by land maintenance advancements in Flanders and the Netherlands. Due to the large and dense population of Flanders and Holland, farmers there were forced to take maximum advantage of every bit of usable land; the country had become a pioneer in canal building, soil restoration and maintenance, soil drainage, and land reclamation technology. Dutch experts like Cornelius Vermuyden brought some of this technology to Britain.
Water-meadows were utilised in the late 16th to the 20th centuries and allowed earlier pasturing of livestock after they were wintered on hay. This increased livestock yields, giving more hides, meat, milk, and manure as well as better hay crops.
With the development of regional markets and eventually a national market, aided by improved transportation infrastructures, farmers were no longer dependent on their local market and were less subject to having to sell at low prices into an oversupplied local market and not being able to sell their surpluses to distant localities that were experiencing shortages. They also became less subject to price fixing regulations. Farming became a business rather than solely a means of subsistence.
Under free-market capitalism, farmers had to remain competitive. To be successful, farmers had to become effective managers who incorporated the latest farming innovations in order to be low cost producers.
In England, Robert Bakewell and Thomas Coke introduced selective breeding as a scientific practice from the mid-18th century, mating together two animals with particularly desirable characteristics and also using inbreeding (the mating of close relatives, such as father and daughter or brother and sister) to stabilise certain qualities and reduce genetic diversity in desirable animal programmes. Arguably, Bakewell's most important breeding programme was with sheep. Using native stock, he was able to quickly select for large, yet fine-boned sheep with long, lustrous wool. The Lincoln Longwool was improved by Bakewell, and in turn the Lincoln was used to develop the subsequent breed, named the New (or Dishley) Leicester. It was hornless and had a square, meaty body with straight top lines.
Bakewell was also the first to breed cattle to be used primarily for beef. Previously, cattle were first and foremost kept for pulling ploughs as oxen or for dairy uses, with beef from surplus males as an additional bonus, but he crossed long-horned heifers and a Westmoreland bull to eventually create the Dishley Longhorn. As more and more farmers followed his lead, farm animals increased dramatically in size and quality. The average weight of a bull sold for slaughter at Smithfield was reported around 1700 as 370 pounds (170 kg), though this is considered a low estimate: by 1786, weights of 840 pounds (380 kg) were reported, though other contemporary indicators suggest an increase of around a quarter over the intervening century.
In 1300, the average milk cow produced 100 gallons of milk annually, a figure that rose throughout the early modern era: 140 gallons in 1400–1449; 162 in 1450–1499; 212 in 1550–1599; 243 in 1600–1649; 272 in 1650–1699; 319 in 1700–1749; 366 in 1750–1799; and 420 in 1800–1849. Beef output per animal rose even faster, from 168 lbs in 1300 to 251 in 1450–1499, 317 in 1550–1599, 356 in 1600–1649, 400 in 1650–1699, 449 in 1700–1749, 504 in 1750–1799, and 566 in 1800–1849.
Besides the organic fertilisers in manure, new fertilisers were slowly discovered. Massive sodium nitrate (NaNO3) deposits found in the Atacama Desert, Chile, were brought under the control of British financiers such as John Thomas North, and imports began. Chile allowed the export of these sodium nitrates, letting the British use their capital to develop the mining while imposing a hefty export tax to enrich its treasury. Massive deposits of sea bird guano (11–16% N, 8–12% phosphate, and 2–3% potash) were found and began to be imported after about 1830. Significant quantities of potash, obtained from the ashes of trees burned in opening new agricultural lands, were also imported. By-products of the British meat industry, such as bones from the knackers' yards, were ground or crushed and sold as fertiliser; by about 1840 some 30,000 tons of bones were being processed (worth about £150,000). An unusual alternative to bones was the millions of tons of fossils called coprolites found in South East England. When these were dissolved in sulphuric acid they yielded a high-phosphate mixture (called "super phosphate") that plants could absorb readily, increasing crop yields. Mining coprolite and processing it for fertiliser soon developed into a major industry—the first commercial fertiliser. Higher yield-per-acre crops were also planted: potatoes went from about 300,000 acres in 1800 to about 400,000 acres in 1850, with a further increase to about 500,000 in 1900. Labour productivity slowly increased at about 0.6% per year. With more capital invested and more organic and inorganic fertilisers, crop yields improved and the amount of food grown increased at about 0.5% per year—not enough to keep up with population growth.
Great Britain contained about 10.8 million people in 1801, 20.7 million in 1851 and 37.1 million by 1901. This corresponds to an annual population growth rate of 1.3% in 1801-1851 and 1.2% in 1851–1901, twice the rate of agricultural output growth. In addition to land for cultivation there was also a demand for pasture land to support more livestock. The growth of arable acreage slowed from the 1830s and went into reverse from the 1870s in the face of cheaper grain imports, and wheat acreage nearly halved from 1870 to 1900.
The recovery of food imports after the Napoleonic Wars (1803–1815) and the resumption of American trade following the War of 1812 (1812–1815) led to the enactment in 1815 of the Corn Laws (protective tariffs) to protect cereal grain producers in Britain against foreign competition. These laws were only removed in 1846 after the onset of the Great Irish Famine, in which a potato blight ruined most of the Irish potato crop and brought famine to the Irish people from 1846 to 1850. Though the blight also struck Scotland, Wales, England, and much of Continental Europe, its effect there was far less severe since potatoes constituted a much smaller percentage of the diet than in Ireland. Hundreds of thousands died in the famine and millions more emigrated to England, Wales, Scotland, Canada, Australia, Europe, and the United States, reducing the population from about 8.5 million in 1845 to 4.3 million by 1921.
Between 1873 and 1879 British agriculture suffered from wet summers that damaged grain crops. Cattle farmers were hit by foot-and-mouth disease, and sheep farmers by sheep liver rot. The poor harvests, however, masked a greater threat to British agriculture: growing imports of foodstuffs from abroad. The development of the steam ship and the development of extensive railway networks in Britain and in the United States allowed U.S. farmers with much larger and more productive farms to export hard grain to Britain at a price that undercut the British farmers. At the same time, large amounts of cheap corned beef started to arrive from Argentina, and the opening of the Suez Canal in 1869 and the development of refrigerator ships (reefers) in about 1880 opened the British market to cheap meat and wool from Australia, New Zealand, and Argentina. The Long Depression was a worldwide economic recession that began in 1873 and ended around 1896. It hit the agricultural sector hard and was the most severe in Europe and the United States, which had been experiencing strong economic growth fuelled by the Second Industrial Revolution in the decade following the American Civil War. By 1900 half the meat eaten in Britain came from abroad and tropical fruits such as bananas were also being imported on the new refrigerator ships.
Before the introduction of the seed drill, the common practice was to plant seeds by broadcasting (evenly throwing) them across the ground by hand on the prepared soil and then lightly harrowing the soil to cover the seed. Seeds left on top of the ground were eaten by birds, insects, and mice. There was no control over spacing, and seeds were planted either too close together or too far apart. Alternatively, seeds could be laboriously planted one by one using a hoe and/or a shovel. Cutting down on wasted seed was important because the ratio of seeds harvested to seeds planted at that time was only around four or five.
The seed drill was introduced from China to Italy in the mid-16th century, where it was patented by the Venetian Senate. Jethro Tull invented an improved seed drill in 1701. It was a mechanical seeder which distributed seeds evenly across a plot of land and at the correct depth. Tull's seed drill was very expensive and fragile and therefore did not have much of an impact. The technology to manufacture affordable and reliable machinery, including agricultural machinery, improved dramatically in the last half of the nineteenth century.
The Agricultural Revolution was part of a long process of improvement, but sound advice on farming began to appear in England in the mid-17th century, from writers such as Samuel Hartlib, Walter Blith and others, and the overall agricultural productivity of Britain started to grow significantly only in the period of the Agricultural Revolution. It is estimated that total agricultural output grew 2.7-fold between 1700 and 1870 and output per worker at a similar rate.
Despite its name, the Agricultural Revolution in Britain did not result in overall productivity per hectare of agricultural area as high as in China, where intensive cultivation (including multiple annual cropping in many areas) had been practiced for many centuries.
The Agricultural Revolution in Britain proved to be a major turning point in history, allowing the population to far exceed earlier peaks and sustain the country's rise to industrial pre-eminence. Towards the end of the 19th century, the substantial gains in British agricultural productivity were rapidly offset by competition from cheaper imports, made possible by the exploitation of new lands and advances in transportation, refrigeration, and other technologies.
Crop rotation is the practice of growing a series of different types of crops in the same area across a sequence of growing seasons. It reduces reliance on one set of nutrients, pest and weed pressure, and the probability of developing resistant pest and weeds.
In agriculture, monoculture is the practice of growing a single crop, plant, or livestock species, variety, or breed in a field or farming system at a time. Polyculture, where more than one crop species is grown in the same space at the same time, is the alternative to monoculture.
Jethro Tull was an English agriculturist from Berkshire who helped to bring about the British Agricultural Revolution of the 18th century. He perfected a horse-drawn seed drill in 1700 that economically sowed the seeds in neat rows, and later developed a horse-drawn hoe. Tull's methods were adopted by many landowners and helped to provide the basis for modern agriculture.
Intensive agriculture, also known as intensive farming, conventional, or industrial agriculture, is a type of agriculture, both of crop plants and of animals, with higher levels of input and output per unit of agricultural land area. It is characterized by a low fallow ratio, higher use of inputs such as capital and labour, and higher crop yields per unit land area.
The open-field system was the prevalent agricultural system in much of Europe during the Middle Ages and lasted into the 20th century in Russia, Iran, and Turkey. Under the open-field system, each manor or village had two or three large fields, usually several hundred acres each, which were divided into many narrow strips of land. The strips or selions were cultivated by individuals or peasant families, often called tenants or serfs. The holdings of a manor also included woodland and pasture areas for common usage and fields belonging to the lord of the manor and the religious authorities, usually Roman Catholics in medieval Western Europe. The farmers customarily lived in individual houses in a nucleated village with a much larger manor house and church nearby. The open-field system necessitated co-operation among the inhabitants of the manor.
Enclosure or Inclosure is a term, used in English landownership, that refers to the appropriation of "waste" or "common land" enclosing it and by doing so depriving commoners of their ancient rights of access and privilege. Agreements to enclose land could be either through a "formal" or "informal" process. The process could normally be accomplished in three ways. First there was the creation of "closes", taken out of larger common fields by their owners. Secondly, there was enclosure by proprietors, owners who acted together, usually small farmers or squires, leading to the enclosure of whole parishes. Finally there were enclosures by Acts of Parliament.
Shifting cultivation is an agricultural system in which plots of land are cultivated temporarily, then abandoned while post-disturbance fallow vegetation is allowed to freely grow while the cultivator moves on to another plot. The period of cultivation is usually terminated when the soil shows signs of exhaustion or, more commonly, when the field is overrun by weeds. The period of time during which the field is cultivated is usually shorter than the period over which the land is allowed to regenerate by lying fallow.
Slash-and-burn agriculture is a farming method that involves the cutting and burning of plants in a forest or woodland to create a field called a swidden. The method begins by cutting down the trees and woody plants in an area. The downed vegetation, or "slash", is then left to dry, usually right before the rainiest part of the year. Then, the biomass is burned, resulting in a nutrient-rich layer of ash which makes the soil fertile, as well as temporarily eliminating weed and pest species. After about three to five years, the plot's productivity decreases due to depletion of nutrients along with weed and pest invasion, causing the farmers to abandon the field and move over to a new area. The time it takes for a swidden to recover depends on the location and can be as little as five years to more than twenty years, after which the plot can be slashed and burned again, repeating the cycle. In Bangladesh and India, the practice is known as jhum or jhoom.
Agriculture in the Russian Empire throughout the 19th and 20th centuries represented a major world force, yet it lagged technologically behind other developed countries. Imperial Russia was amongst the largest exporters of agricultural produce, especially wheat. The Free Economic Society, active from 1765 to 1919, made continuing efforts to improve farming techniques.
The three-field system is a regime of crop rotation that was used in China since the Eastern Zhou period and in medieval and early-modern Europe. Crop rotation is the practice of growing a series of different types of crops in the same area in sequential seasons.
In agriculture, the yield is a measurement of the amount of a crop grown, or product such as wool, meat or milk produced, per unit area of land. The seed ratio is another way of calculating yields.
The history of agriculture records the domestication of plants and animals and the development and dissemination of techniques for raising them productively. Agriculture began independently in different parts of the globe, and included a diverse range of taxa. At least eleven separate regions of the Old and New World were involved as independent centers of origin.
Intensive crop farming is a modern industrialized form of crop farming. Intensive crop farming's methods include innovation in agricultural machinery, farming methods, genetic engineering technology, techniques for achieving economies of scale in production, the creation of new markets for consumption, patent protection of genetic information, and global trade. These methods are widespread in developed nations.
Agriculture in the United Kingdom uses 69% of the country's land area, employs 1.5% of its workforce and contributes 0.6% of its gross value added. The UK produces less than 60% of the food it consumes.
Agriculture in England is today intensive, highly mechanised, and efficient by European standards, producing about 60% of food needs with only 2% of the labour force. It contributes around 2% of GDP. Around two thirds of production is devoted to livestock, one third to arable crops. Agriculture is heavily subsidised by the European Union's Common Agricultural Policy, and it is not known how large a sector it would be if the market were unregulated. Some argue that the GDP from the farming sector is a small return on the subsidies given, while others argue that the subsidy boosts food security and is therefore justified in the same way that defence spending is.
Farming systems in India are strategically utilized according to the locations where they are most suitable. The farming systems that significantly contribute to the agriculture of India are subsistence farming, organic farming, and industrial farming. Regions throughout India differ in the types of farming they use; some are based on horticulture, ley farming, agroforestry, and many more. Due to India's geographical location, certain parts experience different climates, affecting each region's agricultural productivity differently. India is very dependent on its monsoon cycle for large crop yields. India's agriculture has an extensive history going back at least 9,000 years; agriculture was established throughout most of the subcontinent by 6000–5000 BP. During the 5th millennium BP, in the alluvial plains of the Indus River in Pakistan, the old cities of Mohenjo-Daro and Harappa saw the establishment of an organized, urban farming culture. That society, known as the Harappan or Indus civilization, flourished until shortly after 4000 BP; it was much more extensive than those of Egypt or Babylonia and appeared earlier than analogous societies in northern China. Currently, the country holds the second position in agricultural production in the world. In 2007, agriculture and related industries made up more than 16% of India's GDP. Despite the steady decline in agriculture's contribution to the country's GDP, agriculture is the biggest industry in the country and plays a key role in its socio-economic growth. India is the second-largest producer of wheat, rice, cotton, sugarcane, silk, groundnuts, and dozens of other crops. It is also the second-biggest harvester of vegetables and fruit, representing 8.6% and 10.9% of overall production, respectively. The major fruits produced by India are mangoes, papayas, sapota, and bananas. India also has the largest livestock population in the world, at 281 million head. In 2008, the country housed the second-largest number of cattle in the world, with 175 million.
Convertible husbandry, also known as alternate husbandry or up-and-down husbandry, is a method of farming whereby strips of arable farmland were temporarily converted into grass pasture, known as leys. These remained under grass for up to 10 years before being ploughed under again, while some eventually became permanent pasturage. It was a process used during the 16th century through the 19th century by "which a higher proportion of land was used to support increasing numbers of livestock in many parts of England." Its adoption was an important component of the British Agricultural Revolution.
Agriculture in the Middle Ages describes the farming practices, crops, technology, and agricultural society and economy of Europe from the fall of the Western Roman Empire in 476 to approximately 1500. The Middle Ages are sometimes called the Medieval Age or Period. The Middle Ages are also divided into the Early, High, and Late Middle Ages. The early modern period followed the Middle Ages.
The Norfolk four-course system is a method of agriculture that involves crop rotation. Unlike earlier methods such as the three-field system, the Norfolk system is marked by an absence of a fallow year. Instead, four different crops are grown in each year of a four-year cycle: wheat, turnips, barley, and clover or undergrass.
This glossary of agriculture is a list of definitions of terms and concepts used in agriculture, its sub-disciplines, and related fields. For other glossaries relevant to agricultural science, see Glossary of biology, Glossary of ecology, Glossary of environmental science, and Glossary of botany.
Change of Coordinate Systems
- The same object, here a circle, can look completely different depending on which coordinate system is used.
It is a common practice in mathematics to use different coordinate systems to solve different problems. Suppose we take a set of points in regular x-y Cartesian Coordinates, represented by ordered pairs such as (1,2), then multiply their x-components by two, meaning (1,2) in the old coordinates is matched with (2,2) in the new coordinates.
Under this transformation, a set of points would become stretched in the horizontal x-direction since each point becomes further from the vertical y-axis (except for points originally on the y-axis, which remain on the axis). A set of points that was originally contained by a circle in the old coordinates would be contained by a stretched-out ellipse in the new coordinate system, as shown in this page's main image.
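As a rough illustration of this stretching map, the short Python sketch below sends points of the unit circle to points of an ellipse; the helper name stretch_x and the factor of two are assumptions chosen to match the (1,2) to (2,2) example above.

```python
import math

def stretch_x(point, factor=2.0):
    """Map (x, y) in the old coordinates to (factor * x, y) in the new ones."""
    x, y = point
    return (factor * x, y)

# Sample points on the unit circle x^2 + y^2 = 1 in the old coordinates.
old_points = [(math.cos(t), math.sin(t)) for t in
              [k * 2 * math.pi / 8 for k in range(8)]]
new_points = [stretch_x(p) for p in old_points]

# After the stretch the same points satisfy (x/2)^2 + y^2 = 1,
# the equation of an ellipse elongated along the x-axis.
for x, y in new_points:
    assert abs((x / 2.0) ** 2 + y ** 2 - 1.0) < 1e-9

print(stretch_x((1, 2)))  # (2.0, 2) -- matches the (1,2) -> (2,2) example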
Points can even be transferred to a different kind of coordinate system. A common example is mapping rectangular Cartesian Coordinates to Polar Coordinates. Each point's distance from the origin, R, and angle from the x-axis, θ, are used as coordinates in the Polar Coordinate system. Thus a disk in Cartesian Coordinates is mapped to a rectangle in Polar Coordinates: each origin-centered circle consists of points equidistant from the origin, with angle from the x-axis ranging from zero to 2π radians. Each of these circles is thus mapped to a straight line of length 2π in Polar Coordinates. Since the distance from the origin of these circles ranges from zero to the radius of the disk, a set of lines is created in Polar Coordinates which together form a rectangle.
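The disk-to-rectangle mapping can likewise be sketched in a few lines of Python; the helper name to_polar and the disk radius of 3 are purely illustrative assumptions.

```python
import math

def to_polar(x, y):
    """Map Cartesian (x, y) to polar (R, theta), with theta in [0, 2*pi)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    return r, theta

# Grid sample of a disk of radius 3 centred on the origin.
disk = [(i / 10.0, j / 10.0)
        for i in range(-30, 31) for j in range(-30, 31)
        if i * i + j * j <= 900]
polar_image = [to_polar(x, y) for x, y in disk]

# Every image point lies in the rectangle 0 <= R <= 3, 0 <= theta < 2*pi.
assert all(0.0 <= r <= 3.0 + 1e-9 and 0.0 <= th < 2 * math.pi
           for r, th in polar_image)
print(to_polar(1.0, 1.0))  # (1.4142..., 0.7853...): distance from origin and angle
```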
A More Mathematical Explanation
Points in one space undergo a transformation of some kind to be mapped to points in another space.
When a voltage is applied to the primary of a transformer, the magnetic flux created in the primary links the secondary winding through mutual induction, which induces a voltage in the secondary. The induced voltage depends on the rate of change of this flux, dφ/dt, as the current rises from zero to its maximum value.
The magnetic lines of flux pass through the secondary winding, and the number of turns in the secondary winding determines the induced voltage. The amount of induced voltage is thus
E = N × (dφ/dt)
where N = number of turns in the secondary winding and dφ/dt = rate of change of the magnetic flux.
The frequency of this generated voltage will be the same as the frequency of the primary voltage. The peak amplitude of the output voltage will be affected if the magnetic loss is high.
The efficiency of a transformer
The amount or intensity of power loss in a transformer determines the efficiency of the transformer. Efficiency can be understood in terms of the power loss between the primary and secondary of the transformer.
Therefore, the ratio of the power output of the secondary winding to the power input of the primary winding can be termed the efficiency of the transformer. It can be written as:
Efficiency = Output power / Input power
Efficiency is usually denoted by η. The above equation is valid for an ideal transformer, where there is no loss and the entire energy at the input is transferred to the output.
Therefore, if losses are to be considered and efficiency is to be calculated under practical conditions, the equation given below should be used:
Efficiency = Output power / (Output power + Losses)
Otherwise, it can also be written as
Efficiency = (Input power − Losses) / Input power
It should be noted that input, output, and loss are all expressed in terms of power, i.e. in Watts.
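As a minimal sketch of the efficiency calculation, assuming hypothetical power figures in watts (the function name transformer_efficiency is illustrative):

```python
def transformer_efficiency(output_power_w, losses_w):
    """Efficiency as output power divided by input power (output + losses), all in watts."""
    input_power_w = output_power_w + losses_w
    return output_power_w / input_power_w

# Hypothetical figures: 9,500 W delivered by the secondary, 500 W of total losses.
eta = transformer_efficiency(9_500, 500)
print(f"Efficiency = {eta:.1%}")  # Efficiency = 95.0%
```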
Now consider that both the primary and secondary coils have a single turn each. If one volt is applied to the single turn of the primary, then in the ideal, lossless case the current flows and the resulting magnetic field induces the same one volt in the secondary. Hence the voltage on both sides is the same.
But the magnetic flux varies sinusoidally, which means
∅ = ∅m sin(ωt)
Then the basic relation between the induced emf E and a coil winding of N turns is
E = 4.44 × f × N × ∅m
where:
- f = flux frequency in hertz = ω/2π.
- N = number of coil windings.
- ∅m = maximum value of the flux in webers.
This is known as the transformer emf equation.
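A small Python sketch of the emf equation, using hypothetical values for the supply frequency, turn count, and peak flux:

```python
def transformer_emf(frequency_hz, turns, peak_flux_wb):
    """RMS induced emf from the transformer emf equation E = 4.44 * f * N * flux_max.

    The factor 4.44 is 2 * pi / sqrt(2), converting peak sinusoidal flux to RMS emf.
    """
    return 4.44 * frequency_hz * turns * peak_flux_wb

# Hypothetical values: 50 Hz supply, 200 turns, 5 mWb peak flux.
print(transformer_emf(50, 200, 0.005))  # about 222 volts (RMS)
```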
Since the alternating flux is produced by the alternating voltage, and it is this alternating flux that induces a current in the secondary coil, only alternating current (AC) can make a transformer work. Hence a transformer does not operate on DC.
Power of a Transformer
When an ideal transformer is assumed to be without losses, the power of the transformer remains constant, because the product of the voltage V and the current I is constant.
We can say that the power in the primary is equal to the power in the secondary as the transformer takes care of that. If the transformer steps up the voltage or increases the voltage, the current is reduced and if the voltage is stepped down, the current is increased to keep the output power constant.
Hence the primary power is equal to the secondary power:
VP × IP × cos∅P = VS × IS × cos∅S
where ∅P = primary phase angle and ∅S = secondary phase angle.
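A minimal sketch of this ideal power balance, assuming equal phase angles on both sides and hypothetical primary-side figures (the function name secondary_current is illustrative):

```python
def secondary_current(primary_voltage, primary_current, secondary_voltage):
    """Secondary current of an ideal transformer, from VP * IP = VS * IS (equal phase angles assumed)."""
    return primary_voltage * primary_current / secondary_voltage

# Hypothetical step-up example: 230 V and 10 A on the primary, stepped up to 460 V.
vp, ip, vs = 230.0, 10.0, 460.0
i_s = secondary_current(vp, ip, vs)
print(vp * ip, vs * i_s)  # 2300.0 2300.0 -- power is unchanged while the current halves
```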
Related Tutorial: Types of Transformers.
Loss in Transformer
In practical applications, any device has some disadvantages. The main losses that occur in a transformer are copper loss, core loss, and flux leakage.
Copper loss is the energy lost as heat generated by the current flowing through the windings of the transformer. These losses are also called "I2R losses" or "I squared R losses" because the energy lost per second increases with the square of the current through the winding and is proportional to the electrical resistance of the winding.
It can be written as an equation (a numerical sketch follows the list below):
Copper loss = IP² × RP + IS² × RS
where:
- IP = Primary Current.
- RP = Primary Resistance.
- IS = Secondary Current.
- RS = Secondary Resistance.
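A numerical sketch of the copper-loss formula, with hypothetical currents and winding resistances:

```python
def copper_loss(ip, rp, i_s, rs):
    """Total copper (I squared R) loss of both windings, in watts."""
    return ip ** 2 * rp + i_s ** 2 * rs

# Hypothetical values: 10 A through 0.5 ohm (primary), 5 A through 1.2 ohm (secondary).
print(copper_loss(10.0, 0.5, 5.0, 1.2))  # 80.0 W dissipated as heat in the windings
```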
Core loss is also called iron loss. These disadvantages depend on the main material used. They are of two types, hysteresis and eddy current loss.
- Hysteresis Loss − As the magnetic flux in the core reverses direction in step with the alternating voltage, the core's magnetisation repeatedly rises, falls, and reverses. Each reversal dissipates some energy in the core material; this loss is called hysteresis loss.
- Eddy Current Loss − The varying magnetic field is intended to induce current only in the secondary winding, but it also induces voltages in the core and other nearby conducting material. The resulting currents circulate continuously within the core, producing an energy loss called eddy current loss.
- Flux Leakage − Although the flux linkages are strong enough to produce the required voltage, in practice some flux leaks out of the core and results in a loss of energy. Though this loss is small, it becomes significant in high energy applications.
Constitution of the United Kingdom
The constitution of the United Kingdom is the sum of laws and principles that make up the body politic of the United Kingdom. It concerns both the relationship between the individual and the state, and the functioning of the legislature, the executive and judiciary. Unlike many other nations, the UK has no single constitutional document. This is sometimes expressed by stating that it has an uncodified or "unwritten" constitution. Much of the British constitution is embodied in written documents, within statutes, court judgments, works of authority and treaties. The constitution has other unwritten sources, including parliamentary constitutional conventions.
Since the Glorious Revolution in 1688, the bedrock of the legislative British constitution has been described as the doctrine of parliamentary sovereignty, according to which the statutes passed by Parliament are the UK's supreme and final source of law. It follows that Parliament can change the constitution simply by passing new Acts of Parliament, but in some instances courts have stated obiter (making it clear that they were not purporting to decide lateral issues) that they were content to assume that the following proposition was correct: ‘that Parliament can effectively tie the hands of its successors, if it passes a statute which provides that any future legislation on a specified subject shall be enacted only with certain specified consents’. There is some debate about whether the cardinal doctrine remains valid, particularly in light of the UK's membership in the European Union.
- Magna Carta 1215 — clauses 1, 9, and 29, as enumerated in 1297, remain in statute; asserts the freedom of the English church, the liberties of the City of London among others, and establishes the right to due process
- Bill of Rights 1689 — secures parliamentary supremacy over the monarch, the result of the Glorious Revolution
- Crown and Parliament Recognition Act 1689 — confirms the succession to the throne and the validity of the laws passed by the Convention Parliament
- Act of Settlement 1701 — settles the succession of the Crown
- Acts of Union 1707 — union of England and Scotland
- Act of Union 1800 — union of Great Britain and Ireland
- Parliament Acts 1911 and 1949 — asserts the supremacy of the House of Commons by limiting the legislation-blocking powers of the House of Lords
- Life Peerages Act 1958 — establishes standards for the creation of life peers which gives the Prime Minister the ability to change the composition of the House of Lords
- Emergency Powers Act 1964 — provides power to employ members of the armed forces in work of national importance
- European Communities Act 1972 — incorporates European law into UK law
- House of Commons Disqualification Act 1975 — prohibits certain categories of people, such as judges, from becoming members of the House of Commons
- Ministerial and Other Salaries Act 1975 — governs ministerial salaries
- British Nationality Act 1981 — revises the basis of British nationality law
- Senior Courts Act 1981 (originally Supreme Court Act 1981) — defines the structure of the Senior Courts (then called the Supreme Court) of England and Wales
- Representation of the People Act 1983 — updates the British electoral process
- Scotland Act 1998 — creates the Scottish Parliament and devolves certain powers to it
- Government of Wales Act 1998 — creates the Welsh Assembly and devolves certain powers to it
- Northern Ireland Act 1998 — creates the Northern Ireland Assembly and devolves certain powers to it
- Human Rights Act 1998 — incorporates the European Convention on Human Rights into UK law
- House of Lords Act 1999 — reforms the House of Lords removing most hereditary peers
- Civil Contingencies Act 2004 — establishes a framework for national and local emergency planning and response
Since then, the following statutes of a constitutional nature have become law:
- Constitutional Reform Act 2005 — creates the Supreme Court of the United Kingdom and guarantees judicial independence
- Constitutional Reform and Governance Act 2010 — reforms the Royal Prerogative and makes other significant changes
- Fixed-term Parliaments Act 2011 — introduces fixed-term parliaments of 5 years
- Succession to the Crown Act 2013 — alters the laws of succession to the British throne
General constitutional principles
Acts of Parliament are bills which have received the approval of Parliament – that is, the Monarch, the House of Lords and the House of Commons. On rare occasions, the House of Commons uses the "Parliament Acts" (the Parliament Act 1911 and the Parliament Act 1949) to pass legislation without the approval of the House of Lords. It is unheard of in modern times for the Monarch to refuse to assent to a bill, though the possibility was contemplated by George V in relation to the fiercely controversial Government of Ireland Act 1914. Acts of Parliament are among the most important sources of the constitution. According to the traditional view, Parliament has the ability to legislate however it wishes on any subject it wishes. For example, most of the iconic mediaeval statute known as Magna Carta has been repealed since 1828, despite previously being regarded as sacrosanct. It has traditionally been the case that the courts are barred from questioning any Act of Parliament, a principle that can be traced back to the mediaeval period. On the other hand, this principle has not been without its dissidents and critics over the centuries, and attitudes among the judiciary in this area may be changing. One consequence of the principle of parliamentary sovereignty is that there is no hierarchy among Acts of Parliament: all parliamentary legislation is, in principle, of equal validity and effectiveness. However, the judgment of Lord Justice Laws in the Thoburn case in 2002 indicated that there may be a special class of "constitutional statutes" such as Magna Carta, the Human Rights Act 1998, the European Communities Act 1972, the Act of Union and Bill of Rights which have a higher status than other legislation. This part of his judgment was "obiter" (i.e. not binding) – and, indeed, was controversial. It remains to be seen whether the doctrine will be accepted by other judges.
Treaties do not, on ratification, automatically become incorporated into UK law. Important treaties have been incorporated into domestic law by means of Acts of Parliament. The European Convention on Human Rights, for example, was given "further effect" into domestic law through the preamble of the Human Rights Act 1998. Also, the Treaty of Union of 1707 was important in creating the unitary state which exists today. The treaty was between the governments of England and Scotland and was put into effect by two Acts of Union which were passed by the Parliaments of England and Scotland, respectively. The Treaty, along with the subsequent Acts, brought into existence the Kingdom of Great Britain, uniting the Kingdom of England and the Kingdom of Scotland.
Common law legal systems exist in Northern Ireland and in England and Wales but not in Scotland which has a hybrid system (see Scots law) which involves a great deal of Common Law. Court judgments also commonly form a source of the constitution: generally speaking in English Law, judgments of the higher courts form precedents or case law that binds lower courts and judges; Scots Law does not accord the same status to precedent and judgments in one legal system do not have a direct effect in the other legal systems. Historically important court judgments include those in the Case of Proclamations, the Ship money case and Entick v Carrington, all of which imposed limits on the power of the executive. A constitutional precedent applicable to British colonies is Campbell v. Hall, which effectively extended those same constitutional limitations to any territory which has been granted a representative assembly.
Many British constitutional conventions are ancient in origin, though others (like the Salisbury Convention) date from within living memory. Such conventions, which include the duty of the Monarch to act on the advice of his or her ministers, are not formally enforceable in a court of law; rather, they are primarily observed "because of the political difficulties which arise if they are not."
Works of authority is the formal name for works that are sometimes cited as interpretations of aspects of the UK constitution. Most are works written by nineteenth- or early-twentieth-century constitutionalists, in particular A. V. Dicey, Walter Bagehot and Erskine May.
In the 19th century, A. V. Dicey, a highly influential constitutional scholar and lawyer, wrote of the twin pillars of the British constitution in his classic work Introduction to the Study of the Law of the Constitution (1885). These pillars are the principle of Parliamentary sovereignty and the rule of law. Parliamentary sovereignty means that Parliament is the supreme law-making body: its Acts are the highest source of English law (the concept of parliamentary sovereignty is disputed in Scots law, see MacCormick v Lord Advocate).
According to the doctrine of parliamentary sovereignty, Parliament may pass any legislation that it wishes. Historically, "No Act of Parliament can be unconstitutional, for the law of the land knows not the word or the idea." By contrast, in countries with a codified constitution, the legislature is normally forbidden from passing laws that contradict that constitution: constitutional amendments require a special procedure that is more arduous than that for regular laws.
There are many Acts of Parliament which themselves have constitutional significance. For example, Parliament has the power to determine the length of its term. By the Parliament Acts 1911 and 1949, the maximum length of a term of parliament is five years but this may be extended with the consent of both Houses. This power was most recently used during World War II to extend the lifetime of the 1935 parliament in annual increments up to 1945. Parliament also has the power to change the make-up of its constituent houses and the relation between them. Examples include the House of Lords Act 1999 which changed the membership of the House of Lords, the Parliament Acts 1911 and 1949 which altered the relationship between the House of Commons and the House of Lords and the Reform Act 1832 which made changes to the system used to elect members of the House of Commons.
The power extended to Parliament includes the power to determine the line of succession to the British throne. This power was used to pass His Majesty's Declaration of Abdication Act 1936, which gave constitutional effect to the abdication of Edward VIII and removed any of his putative descendants from the succession, and most recently to pass the Succession to the Crown Act 2013, which changed the succession to the throne to absolute primogeniture (not dependent on gender) and also removed the disqualification of marrying a Roman Catholic. Parliament also has the power to remove or regulate the executive powers of the Monarch.
Parliament consists of the Monarch, the House of Commons and the House of Lords. The House of Commons consists of more than 600 members elected by the people from single-member constituencies under a first past the post system. Following the passage of the House of Lords Act 1999, the House of Lords consists of 26 bishops of the Church of England (Lords Spiritual), 92 representatives of the hereditary peers and several hundred life peers. The power to nominate bishops of the Church of England and to create hereditary and life peers is exercised by the Monarch, on the advice of the prime minister. By the Parliament Acts 1911 and 1949 legislation may, in certain circumstances, be passed without the approval of the House of Lords. Although all legislation must receive the approval of the Monarch (Royal Assent), no Monarch has withheld such assent since 1708.
The House of Commons alone possesses the power to pass a motion of no confidence in the Government, which requires the Government either to resign or seek fresh elections (this principle was codified in the Fixed-term Parliaments Act 2011—see below for more details). Such a motion does not require passage by the Lords or Royal Assent.
Parliament traditionally also has the power to remove individual members of the government by impeachment (with the Commons initiating the impeachment and the Lords trying the case), although this power has not been used since 1806. By the Constitutional Reform Act 2005 it has the power to remove individual judges from office for misconduct.
Rule of law
The rule of law was A. V. Dicey's second core principle of the UK constitution: the idea that all laws and government actions conform to certain principles. These include the equal application of the law: everyone is equal before the law and no person is above it, including those in power. Another is that no person is punishable in body or goods without a breach of the law: as held in Entick v Carrington, unless there is a clear breach of the law, persons are free to do anything the law does not forbid, and there can be no punishment without such a breach.
Unity and devolution
The United Kingdom comprises four countries: England, Wales, Scotland and Northern Ireland. Nevertheless, it is a unitary state, not a federation (like Australia, Argentina, Brazil, Canada, Germany, Russia or the United States), nor a confederation (like pre-1847 Switzerland or the former Serbia and Montenegro). Although Scotland, Wales and Northern Ireland have possessed legislatures and executives, England does not (see West Lothian question). The authority of all these bodies is dependent on Acts of Parliament, and they can in principle be abolished at the will of the Parliament of the United Kingdom. A historical example of a legislature that was created by Act of Parliament and later abolished is the Parliament of Northern Ireland, which was set up by the Government of Ireland Act 1920 and abolished, in response to political violence in Northern Ireland, by the Northern Ireland Constitution Act 1973 (Northern Ireland has since been given another legislative assembly under the Northern Ireland Act 1998). The Greater London Council was abolished in 1986 by the Local Government Act 1985 and a similar institution, the Greater London Authority, was established in 2000 by the Greater London Authority Act 1999.
Parliament contains no chamber comparable to the United States Senate (which has equal representation from each state of the USA) or the German Bundesrat (whose membership is selected by the governments of the States of Germany). England contains over 80% of the UK's population, produces over 80% of its combined gross domestic product and contains the capital city.
In England the established church is the Church of England. In Scotland, Wales and Northern Ireland, there is no state church: their respective state churches were disestablished (that is, not disbanded but had their "established" status abolished) by the Church of Scotland Act 1921, the Welsh Church Act 1914, and the Irish Church Act 1869. England and Wales share the same legal system, while Scotland and Northern Ireland each have their own distinct systems. These distinctions were created as a result of the United Kingdom being created by the union of separate countries according to the terms of the 1706 Treaty of Union, ratified by the 1707 Acts of Union.
Reforms since 1997 have decentralised the UK by setting up a devolved Scottish Parliament and assemblies in Wales and Northern Ireland. The UK was formed as a unitary state, though Scotland and England retained separate legal systems. Some commentators have stated the UK is now a "quasi-federal" state: it is only "quasi" federal, because (unlike the other components of the UK) England has no legislature of its own, and is directly ruled from Westminster (the devolved bodies are not sovereign and could, in theory at least, be repealed by Parliament – unlike "true" federations, such as the United States, where the constituent states share sovereignty with the federal government). Attempts to extend devolution to the various regions of England have stalled, and the fact that Parliament functions both as a British and as an English legislature has created some dissatisfaction (the so-called "West Lothian question").
European Union membership
Under European Law, as developed by the ECJ, the EC Treaty created a "new legal order" under which the validity of European Union law cannot be impeded by national law; though the UK, like a number of other EU members, does not share the ECJ's monist interpretation unconditionally, it accepts the supremacy of EU law in practice. Because, in the UK, international law is treated as a separate body of law, EU law is enforceable only on the basis of an Act of Parliament, such as the European Communities Act 1972, which provides for the supremacy of EU law. The supremacy of EU law was confirmed by the House of Lords in the Factortame litigation, in which part of the Merchant Shipping Act 1988 was "disapplied" because it conflicted with EU law. In his judgment in Factortame, Lord Bridge wrote:
[T]he supremacy within the European Community of Community law over the national law of member states ... was certainly well established in the jurisprudence of the European Court of Justice long before the United Kingdom joined the Community. Thus, whatever limitation of its sovereignty Parliament accepted when it enacted the European Communities Act 1972 was entirely voluntary. Under the terms of the Act of 1972 it has always been clear that it was the duty of a United Kingdom court, when delivering final judgment, to override any rule of national law found to be in conflict with any directly enforceable rule of Community law. ... Thus there is nothing in any way novel in according supremacy to rules of Community law in those areas to which they apply and to insist that, in the protection of rights under Community law, national courts must not be inhibited by rules of national law from granting interim relief in appropriate cases is no more than a logical recognition of that supremacy.
In 2015, the Court of Appeal disapplied parts of the State Immunity Act 1978 on the grounds that it conflicted with article 47 of the Charter of Fundamental Rights of the European Union. The case concerned two workers who wished to sue the Sudanese embassy in London for violations of employment law.
On one analysis, EU law is simply a subcategory of international law that depends for its effect on a series of international treaties (notably the Treaty of Rome and the Maastricht Treaty). It therefore has effect in the UK only to the extent that Parliament permits it to have effect, by means of statutes such as the European Communities Act 1972, and Parliament could, as a matter of British law, unilaterally bar the application of EU law in the UK simply by legislating to that effect. However, at least in the views of some British authorities, the doctrine of implied repeal, which applies to normal statutes, does not apply to "constitutional statutes", meaning that any statute that was to have precedence over EU law (thus disapplying the 1972 European Communities Act) would have to provide for this expressly or in such a way as to make the inference "irresistible". The actual legal effect of a statute enacted with the express intention of taking precedence over EU law is as yet unclear. However, it has been stated that if Parliament were to expressly repudiate its treaty obligations the courts would be obliged to give effect to a corresponding statute:
If the time should come when our Parliament deliberately passes an Act – with the intention of repudiating the Treaty or any provision of it – or intentionally of acting inconsistently with it – and says so in express terms – then ... it would be the duty of our courts to follow the statute of our Parliament. — Lord Denning, Macarthys Ltd v Smith, ICR at p. 789
In 2011 parliament passed the European Union Act 2011 which states in clause 18 (Status of EU law dependent on continuing statutory basis): "Directly applicable or directly effective EU law (that is, the rights, powers, liabilities, obligations, restrictions, remedies and procedures referred to in section 2(1) of the European Communities Act 1972) falls to be recognised and available in law in the United Kingdom only by virtue of that Act or where it is required to be recognised and available in law by virtue of any other Act."
Following the accession of the UK to European Economic Community (now the European Union) in 1972, the UK became bound by European law and more importantly, the principle of the supremacy of European Union law. According to this principle, which was outlined by the European Court of Justice in 1964 in the case of Costa v. ENEL, laws of member states that conflict with EU laws must be disapplied by member states' courts. The conflict between the principles of the primacy of EU law and of parliamentary supremacy was illustrated in the judgment in Thoburn v Sunderland City Council, which held that the European Communities Act 1972, the Act that initiated British involvement in the EU, could not be implicitly repealed simply by the passing of subsequent legislation inconsistent with European law. The court went further and suggested that the 1972 Act formed part of a category of special "constitutional statutes" that were not subject to implied repeal. This exception to the doctrine of implied repeal was something of a novelty, though the court stated that it remained open for Parliament to expressly repeal the Act. It is politically inconceivable at the present time that Parliament would do so and constitutional lawyers have also questioned whether such a step would be as straightforward in its legal effects as it might seem. The Thoburn judgment was handed down only by the Divisional Court (part of the High Court), which occupies a relatively low level in the legal system.
- Relating to monarchy
- The Monarch shall grant the Royal Assent to all Bills passed by Parliament (the Royal Assent was last refused by Queen Anne in 1708, for the Scottish Militia Bill 1708, on the advice of her ministers).
- The monarch will ask the leader of the majority party in the House of Commons to form a government, and if there is no majority party, the person who appears most likely to command the confidence of the House of Commons to serve as Prime Minister and form a government.
- The monarch will ask a member of the House of Commons (rather than the House of Lords or someone outside Parliament) to form a government. It remains possible, however, for a caretaker Prime Minister to be drawn from the House of Lords.
- All ministers are to be drawn from the House of Commons or the House of Lords.
- The House of Lords will accept any legislation that was in the Government's manifesto (the Salisbury Convention) – in recent years this convention has been broken by the Lords, though the composition of the Lords (which was the justification for the convention) has radically changed since the convention was introduced.
- Individual Ministerial Responsibility
- Cabinet collective responsibility
The United Kingdom is a constitutional monarchy, and succession to the British throne is hereditary. The monarch, or Sovereign, is the Head of State of the United Kingdom and amongst several roles is notably the Commander-in-chief of the British Armed Forces.
Parliament is bicameral, with two houses — the House of Commons and the House of Lords; the monarch formally forms a third element of Parliament (see Queen-in-Parliament). The House of Commons, which unlike the House of Lords is democratically elected, has supremacy by virtue of the Parliament Act 1911 and Parliament Act 1949. An Act of Parliament of the United Kingdom is primary legislation and Parliament can (and does) alter the British constitution by passing such Acts.
Under the British constitution, sweeping executive powers, known as the royal prerogative, are nominally vested in the monarch. In exercising these powers the monarch normally defers to the advice of the prime minister or other ministers. This principle, which can be traced back to the Restoration, was most famously articulated by the Victorian writer Walter Bagehot as "the Queen reigns, but she does not rule". The precise extent of the royal prerogative has never formally been delineated, but in 2004, Her Majesty's Government published some of the powers, in order to be more transparent:
- Domestic powers
- The power to dismiss and appoint a Prime Minister
- The power to dismiss and appoint other ministers
- The power to summon and prorogue Parliament
- The power to grant or refuse Royal Assent to bills (making them valid and law)
- The power to commission officers in the Armed Forces
- The power to command the Armed Forces of the United Kingdom
- The power to appoint Queen's Counsel
- The power to issue and withdraw passports
- The power to grant prerogative of mercy (though capital punishment is abolished, this power is still used to remedy errors in sentence calculation)
- The power to grant honours
- The power to create corporations by Royal Charter
- The power to appoint bishops and archbishops of the Church of England.
- Foreign powers
- The power to ratify and make treaties
- The power to declare war and peace
- The power to deploy the Armed Forces overseas
- The power to recognise states
- The power to credit and receive diplomats
The most important prerogative still personally exercised by the monarch is the choice of whom to appoint Prime Minister. The most recent occasion when the monarch has had to exercise these powers was in February 1974, when Edward Heath resigned from the position of prime minister after failing to win an overall majority at the General Election or to negotiate a coalition. Queen Elizabeth II appointed Harold Wilson, leader of the Labour Party, as prime minister, exercising her prerogative after extensive consultation with the Privy Council. The Labour Party had the largest number of seats in the House of Commons, but not an overall majority. The 2010 general election also resulted in a hung parliament. After several days of negotiations, between the parties, Queen Elizabeth II invited David Cameron to form a government on the advice of the outgoing prime minister Gordon Brown.
The monarch formerly enjoyed the power to dissolve Parliament (normally on the request of the prime minister). However, this power was explicitly removed from the monarch by the Fixed-term Parliaments Act 2011. The last monarch to dismiss a prime minister who had not suffered a defeat on a motion of confidence in the House of Commons, or to appoint a prime minister who clearly did not enjoy a majority in that House, was William IV who in 1834 dismissed the Government of Lord Melbourne, replacing him with Robert Peel (The Duke of Wellington briefly heading a caretaker ministry as Peel was on holiday in Italy at the time). Peel resigned after failing to win the 1835 General Election — prior to the 1832 Reform Act, which reduced the number of rotten and pocket boroughs, it would have been very unusual for a government with Royal backing to be defeated in this way.
Queen Victoria was the last monarch to veto a ministerial appointment. In 1892, she refused William Ewart Gladstone's advice to include Henry Labouchère (a radical who had insulted the Royal Family) in the Cabinet. The last monarch to veto legislation passed by Parliament was Queen Anne, who withheld assent from the Scottish Militia Bill 1708. However, the possibility that a royal veto might be exercised independently by the monarch remained for at least two further centuries. Pitt the Younger resigned in 1801 when George III made clear that he would veto Catholic Emancipation, which he regarded as a breach of his oath to uphold the Church of England—the measure did not pass until 1829 when George IV was persuaded to drop his opposition. As late as 1914, George V took legal advice on withholding the Royal Assent from the Third Irish Home Rule Bill, which the Liberal government was pushing through parliament having recently removed the Lords' veto (Parliament Act 1911) and in the teeth of threatened armed resistance in Ulster. The King decided that he should not withhold the Assent without "convincing evidence that it would avert a national disaster, or at least have a tranquillizing effect on the distracting conditions of the time".
The Royal Prerogative is not unlimited; this was established in the Case of Proclamations (1610), which confirmed that no new prerogative can be created and that Parliament can abolish individual prerogatives. However, as part of Parliamentary Sovereignty, Parliament could create new prerogatives if it so wished regardless. Parliament possesses the power to remove powers from the Royal Prerogative: this was done in the Fixed-term Parliaments Act 2011 which removed the Royal Prerogative to dissolve Parliament. However, the monarch's consent is required before Parliament may pass legislation removing such powers: this was seen when the second reading of the Military Action Against Iraq (Parliamentary Approval) Bill, which would have removed the monarch's ability to authorize military action without Parliamentary approval, had to be abandoned because the monarch (on the advice of her government) refused to grant such consent.
The monarch's approval ("Queen's consent") is required before Parliament may debate or pass proposed legislation affecting the Royal Prerogative, or the hereditary revenues, personal property, or personal interests of the Crown, the Duchy of Lancaster, or the Duchy of Cornwall. The consent of the Duke of Cornwall (who is also the Prince of Wales) is also required before Parliament may debate or pass proposed legislation affecting the Duchy of Cornwall.
Cabinet and government
It is the monarch's constitutional duty to appoint a Prime Minister who can command support of a majority in the House of Commons. When one party has an absolute majority in the House of Commons, the monarch appoints the leader of that party as prime minister. When there is a hung parliament, or the identity of the leader of the majority party is not clear (as was often the case for the Conservative Party up to the 1960s, and for all parties in the nineteenth century), the monarch has more flexibility in his or her choice. The monarch appoints and dismisses other ministers on the advice of the prime minister (and such appointments and dismissals occur quite frequently as part of cabinet reshuffles). The prime minister, together with other ministers, form the Government. The Government often includes ministers whose posts are sinecures (such as the Chancellor of the Duchy of Lancaster) or ministers with no specific responsibilities (minister without portfolio): such positions may be used by the prime minister as a form of patronage, or to reward officials such as the Chairman of the ruling Party with a governmental salary.
If the Commons votes against the Government on a motion of no confidence, the Fixed-term Parliaments Act 2011 specifies that Parliament automatically dissolves unless a subsequent motion of confidence is passed within fourteen days. The Prime Minister and government would have the option of resigning in order to allow a replacement government the chance to obtain a vote of confidence within the required timeframe, or remaining in office to fight the subsequent general election.
The Government usually resigns immediately after defeat in a general election, though this is not strictly required. For example, Stanley Baldwin's government lost its majority in the general election of December 1923, but did not resign until defeated in a confidence vote in January 1924.
The prime minister and all other ministers take office immediately upon appointment by the monarch. In the United Kingdom, unlike many other countries, there is no requirement for a formal vote of approval by the legislature (either of the Government as a whole or of its individual members) before they may assume office.
The prime minister and all other Ministers normally serve concurrently as members of the House of Commons or House of Lords, and are obliged by collective responsibility to cast their Parliamentary votes for the Government's position, regardless of their personal feelings or the interests of their constituents. The prime minister is normally a member of the House of Commons. The last prime minister to be a member of the House of Lords was Alec Douglas-Home; however, he resigned from the Lords and became a member of the Commons shortly after his appointment as prime minister in 1963 (there was a period of about two weeks during which he served as prime minister despite belonging to neither House). The last prime minister to serve a full administration from the House of Lords was Robert Cecil, 3rd Marquess of Salisbury, who served until 1902.
Thus the executive ("Her Majesty's Government") is "fused" with Parliament. Because of a number of factors, including the decline of the monarch and the House of Lords as independent political actors, an electoral system that tends to produce absolute majorities for one party in the Commons, and the strength of party discipline in the Commons (including the built-in payroll vote in favour of the Government), the prime minister tends to have sweeping powers checked only by the need to retain the support of his or her own MPs. The phrase elective dictatorship was coined by former Lord Chancellor Quintin Hogg in 1976 to highlight the enormous potential power of government afforded by the constitution.
The need of a prime minister to retain the support of her own MPs was illustrated by the case of Margaret Thatcher, who resigned in 1990 after being challenged for the leadership of the Conservative Party. The strength of party discipline within the Commons, enforced by the whip system, is shown by the fact that the two most recent motions of no confidence in which a Government was defeated occurred in 1924 and 1979.
There are three regional judicial systems in the United Kingdom: that of England and Wales, that of Scotland, and that of Northern Ireland. Under the Constitutional Reform Act 2005 the final court of appeal for all cases, other than Scottish criminal, is the newly seated Supreme Court of the United Kingdom: for Scottish criminal cases, the final court of appeal remains the High Court of Justiciary. Furthermore, the Constitutional Reform Act guaranteed the independence of the judiciary, a concept that emerged from the Act of Settlement 1701.
Vacancies in the Supreme Court are filled by the monarch based on the recommendation of a special selection commission consisting of that Court's President, Deputy President, and members of the judicial appointment commissions for the three judicial systems of the UK. The choice of the commission may be vetoed by the Lord Chancellor (a government minister). Members of the Supreme Court may be removed from office by Parliament, but only for misconduct.
Judges may not sit or vote in either House of Parliament (before the 2005 Act, they had been permitted to sit and vote in the House of Lords).
Church of England
The Church of England is the established church in England (i.e., not in Scotland, Wales or Northern Ireland). The monarch is ex officio Supreme Governor of the Church of England, and is required by the Act of Settlement 1701 to "join in communion with the Church of England". As part of the coronation ceremony, the monarch swears an oath to "maintain and preserve inviolably the settlement of the Church of England, and the doctrine, worship, discipline, and government thereof, as by law established in England" before being crowned by the senior cleric of the Church, the Archbishop of Canterbury – a similar oath concerning the established Church of Scotland, which is a Presbyterian church, having already been given by the new monarch in his or her Accession Council. All clergy of the Church swear an oath of allegiance to the monarch before taking office.
Parliament retains authority to pass laws regulating the Church of England. In practice, much of this authority is delegated to the Church's General Synod. The appointment of bishops and archbishops of the Church falls within the royal prerogative. In current practice, the Prime Minister makes the choice from two candidates submitted by a commission of prominent Church members, then passes his choice on to the monarch. The Prime Minister plays this role even though he himself is not required to be a member of the Church of England or even a Christian—for example Clement Attlee was an agnostic who described himself as "incapable of religious feeling".
Unlike many nations in continental Europe, the United Kingdom does not directly fund the established church with public money (although many publicly funded voluntary aided schools are run by religious foundations, including those of the Church of England). Instead, the Church of England relies on donations, land and investments.
Citizens and the state
Civil liberties and human rights
Nationality and immigration
- History of British nationality law
- Historical immigration to Great Britain
- Immigration to the United Kingdom since Irish independence
Administrative law is often called "public law". Administrative law restricts the exercise of the government's power over public administration; it covers areas such as policing, prisons, urban planning, education, the environment and immigration. It ensures the exercise of the government's power takes place within a legislative framework. This means the legal responsibilities of governmental bodies are properly defined and, at the same time, the rights and interests of the country's citizens are protected from the misuse or abuse of government power over public administration.
An example of administrative law in practice is the 1999 case of R. v. North and East Devon Health Authority which held that a disabled woman told by a health authority she would have a "home for life" in a facility had a substantive legitimate expectation the authority would not shut it down.
Nature of the constitution
The legal scholar Eric Barendt argues that the uncodified nature of the United Kingdom constitution does not mean it should not be characterised as a "constitution", but also claims that the lack of an effective separation of powers, and the fact that parliamentary sovereignty allows Parliament to overrule fundamental rights, makes it to some extent a 'facade' constitution.
A. V. Dicey identified that ultimately "the electorate are politically sovereign," and Parliament is legally sovereign. Barendt argues that the greater political party discipline in the House of Commons that has evolved since Dicey's era, and the reduction in checks on governmental power, have led to an excessively powerful government that is not legally constrained by the observance of fundamental rights. A codified constitution would impose limits on what Parliament could do. To date, the Parliament of the UK has no limit on its power other than the possibility of extra-parliamentary action (by the people) and of other sovereign states (pursuant to treaties made by Parliament and otherwise).
Proponents of a codified constitution argue it would strengthen the legal protection of democracy and freedom. As a strong advocate of the "unwritten constitution", Dicey highlighted that English rights were embedded in the general English common law of personal liberty, and "the institutions and manners of the nation". Opponents of a codified constitution argue that the country is not based on a founding document that tells its citizens who they are and what they can do. There is also a belief that any unwarranted encroachment on the spirit of constitutional authority would be stiffly resisted by the British people, a perception expounded by the 19th century American judge Justice Bradley in the course of delivering his opinion in a case heard in Louisiana in 1873: "England has no written constitution, it is true; but it has an unwritten one, resting in the acknowledged, and frequently declared, privileges of Parliament and the people, to violate which in any material respect would produce a revolution in an hour."
The Labour government under Prime Minister Tony Blair instituted constitutional reforms in the late 1990s and early-to-mid-2000s. The effective incorporation of the European Convention on Human Rights into UK law through the Human Rights Act 1998 has granted citizens specific positive rights and given the judiciary some power to enforce them. The courts can advise Parliament of primary legislation that conflicts with the Act by means of "Declarations of Incompatibility" – however Parliament is not bound to amend the law nor can the judiciary void any statute – and can refuse to enforce, or "strike down", any incompatible secondary legislation. Any actions of government authorities that violate Convention rights are illegal except if mandated by an Act of Parliament.
Changes also include the Constitutional Reform Act 2005 which alters the structure of the House of Lords to separate its judicial and legislative functions. For example, the legislative, judicial and executive functions of the Lord Chancellor are now shared between the Lord Chancellor (executive), Lord Chief Justice (judicial) and the newly created post of Lord Speaker (legislative). The role of Law Lord (a member of the judiciary in the House of Lords) was abolished by transferring them to the new Supreme Court of the United Kingdom in October 2009.
Gordon Brown launched a "Governance of Britain" process when he took over as PM in 2007. This was an ongoing process of constitutional reform with the Ministry of Justice as lead ministry. The Constitutional Reform and Governance Act 2010 is a piece of constitutional legislation. It enshrines in statute the impartiality and integrity of the UK Civil Service and the principle of open and fair recruitment. It enshrines in law the Ponsonby Rule which requires that treaties are laid before Parliament before they can be ratified.
The Coalition Government, formed in May 2010, proposed a series of further constitutional reforms in its coalition agreement. Consequently, the Parliamentary Voting System and Constituencies Act 2011 and the Fixed-term Parliaments Act 2011 were passed. These Acts provided for reducing the number of MPs in the House of Commons from 650 to 600, changing the way the UK is divided into parliamentary constituencies, holding a referendum on changing the system used to elect MPs, and removing the power to dissolve Parliament from the monarch. The Coalition also promised to introduce legislation on reform of the House of Lords. In the referendum, the proposed change to the voting system was rejected by 67% of voters, and all reforms regarding the voting system were therefore dropped. Conservative opposition forced the government to drop House of Lords reform, in response to which the Liberal Democrats said they would refuse to support changes to constituency boundaries, believing such changes favoured the Conservatives.
- Constitutional government
- House of Lords Constitution Committee
- Political and Constitutional Reform Select Committee
- Parliament in the Making
- Ministry of Justice
- Royal Commission on the Constitution (United Kingdom)
- Rights of Englishmen
- Power Inquiry
- Constitutional status of Cornwall
- Statute of Rhuddlan
- The Constitution Society
- Unlock Democracy
- Cabinet Office - Constitutional Reform
- Guardian Special Report – Constitutional Reform
- United Kingdom Constitutional Law Association blog on Constitutional Reform
- The Constitution Society feature on What is the British Constitution?
- LSE - A New UK Constitution
- UCL Constitution Unit - About
- Democratic Audit UK
- The Parliament and Constitution Centre
- Constitutional Law Chronology
- Full Constitution of England - Constitute Project
Scatter plots and line graphs are the most common ways to display bivariate data (data with two variables).
- A scatter plot is generally used when displaying data from two variables that may or may not be directly related, and when neither of the variables is under the direct control of the researcher. The primary function of a scatter plot is to visualize the strength of correlation between the two plotted variables. The number of sunburned swimmers at the local pool each day for a month would be an example of a data set that would best be displayed as a scatter plot, since neither the weather nor the number of swimmers present is under the control of the researcher.
- A line graph is appropriate when comparing two variables that are believed to be related, and when one of the variables is under the direct control of the researcher. The primary use of a line graph is to determine the trend between the two graphed variables. The mileage of a particular car compared to speed of travel would be a good example, since the mileage is certainly correlated to the speed and the speed can be directly controlled by the researcher.
To create a line or scatter plot of data, you must first identify your two variables as either dependent or independent. An independent (or input) variable may also be referred to as the explanatory variable, and has values that are assigned to it. A dependent (or output) variable may also be called the response variable, and has values that result from the values of the input variable. By convention, the independent variable is plotted on the horizontal axis, and the dependent variable is plotted on the vertical axis.
Then, you must organize your data so that it is easy to see how a given input value relates to a given output value. By convention this is done with a ‘T’ chart or a two-column graph, with the input value on the left and the output value on the right, or vertically with the input on the top and output on the bottom.
Once you have the table constructed, start with the first pair of values and move across your horizontal axis to the first input value and up the vertical axis to the associated output value. Continue the process until all of your points have been graphed.
Once all of your points have been plotted, if you are creating a scatter plot, you’re done! If you are creating a line graph, start at your minimum input value and connect the points as you move to the right on the input axis.
Note 1: A broken-line graph is a type of line graph that is used when it is necessary to show change over time. A line is used to join the values, but the line has no defined slope. However, the points are meaningful, and they all represent an important part of the graph.
Note 2: A double line graph is a type of line graph that is used to show a comparison. To create a double line graph simply create two line graphs on the same set of axes, one for each data set.
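To make Note 2 concrete, here is a minimal sketch of a double line graph using R's base graphics; the two data series and their values are hypothetical, invented purely for illustration.

# Two hypothetical data series measured over the same input values
input <- 1:6
series_1 <- c(2, 4, 5, 7, 8, 10)
series_2 <- c(1, 3, 3, 6, 7, 9)

# Draw the first line graph, then add the second series on the same axes
plot(input, series_1, type = "l", col = "blue",
     xlab = "Input", ylab = "Output",
     ylim = range(c(series_1, series_2)))
lines(input, series_2, col = "red")
legend("topleft", legend = c("Series 1", "Series 2"),
       col = c("blue", "red"), lty = 1)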
Construct a scatter plot from the given values.
Solution: The data here is already organized into associated input and output values, so you simply need to create a graph with a horizontal and vertical axis on which to plot the points.
Notice that I have only drawn the positive portion of each axis here, since the table of values was all positive.
Now we just plot the points from the table, starting with the first vertical pair: Input = 1, Output = 2. Incidentally, when describing a single point of bivariate data, the conventional method of writing it is in the form (input, output), i.e. (x, y). So our first point would be (1, 2), the second would be (3, 4), and so on.
Now we fill in the values on the graph, starting with (1, 2). Beginning at the lower-left corner, which represents (0, 0), move 1 point to the right and 2 points up. The second point is 3 points to the right and 4 points up. Continue until all 10 points are graphed. Since the question asks specifically for a scatter plot, once the individual points are plotted, we are done.
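For readers who want to reproduce these steps in software, the sketch below uses R's base graphics. Since the full table of values is not reproduced here, the ten points are assumed, for illustration only, to continue the (1, 2), (3, 4) pattern described above.

# Hypothetical points continuing the (1, 2), (3, 4) pattern from the example
input  <- seq(1, 19, by = 2)
output <- seq(2, 20, by = 2)

# Scatter plot: each (input, output) pair drawn as an individual point
plot(input, output, type = "p", xlab = "Input", ylab = "Output")

# Line graph: the same points, joined from the minimum input value rightwards
plot(input, output, type = "l", xlab = "Input", ylab = "Output")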
Interpreting Scatter Plots and Line Graphs
- Two variables with a strong correlation will appear as a number of points occurring in a clear and recognizable linear pattern. The line does not need to be straight, but it should be consistent and not exactly horizontal or vertical.
- Two variables with a weak correlation will appear as a much more scattered field of points, with only a little indication of points falling into a line of any sort.
- A linear relationship appears as a straight line either rising or falling as the independent variable values increase. If the line rises to the right, it indicates a direct relationship. If the line falls to the right, it indicates an inverse relationship.
- A non-linear relationship may take the form of any number of curved lines, and may indicate a squared relationship (dependent variable is the square of the independent), a square root relationship (dependent variable is the square root of the independent), an inverse square (dependent variable is one divided by the square of the independent), or many other possibilities.
- A positive correlation appears as a recognizable line with a positive slope . A line has a positive slope when an increase in the independent variable is accompanied by an increase in the dependent variable (the line rises as you move to the right).
- A negative correlation appears as a recognizable line with a negative slope. As the independent variable increases, the dependent variable decreases (the line falls as you move to the right); see the sketch after this list.
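As a rough illustration of these ideas in R, the sketch below simulates one strongly positive (direct) relationship and one negative (inverse) relationship, plots them, and computes their correlation coefficients; the data are simulated, not taken from any example in the text.

set.seed(1)
x <- 1:50

# Strong positive (direct) relationship: y rises as x rises
y_pos <- 2 * x + rnorm(50, sd = 5)

# Negative (inverse) relationship: y falls as x rises
y_neg <- 100 - 1.5 * x + rnorm(50, sd = 5)

plot(x, y_pos)   # points fall along a rising line
plot(x, y_neg)   # points fall along a falling line

cor(x, y_pos)    # close to +1
cor(x, y_neg)    # close to -1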
Dr Harold White, a physicist at Nasa’s Johnson Space Centre, is conducting research to create a warp in space and time.
These so called “warp bubbles” could eventually allow spacecraft to travel at speeds that appear to exceed the speed of light.
Essentially the warp creates a fold in the fabric of space and time that allows an object inside to travel a much greater distance in a shorter time.
This would allow a spacecraft to overcome one of the central laws in physics – that nothing can exceed the speed of light.
However, Dr White claims warp bubbles are theoretically possible and has now begun work to create warp bubbles for the first time in the laboratory.
Warp drives were an essential part of the Starship Enterprise in the long-running science fiction series Star Trek, allowing the crew to travel between the distant worlds they were exploring.
Dr White presented an update on his work at the Icarus Interstellar Congress, where scientists gathered to discuss ways of travelling between the stars.
He said that they have already begun conducting experiments and have generated some results that suggest they are making progress towards being able to generate a warp bubble.
He said: “One of the problems is the amount of energy that would be required to create a space warp.
“We found two mechanisms that can reduce the amount of energy that would be required to create a space warp.
“It was this significant reduction in energy requirements that encouraged us to go on to generate some kind of manifestation of it in the lab.
“This is not something that you can bolt to a spacecraft, this is science trying to go through to find existence proof of the physics.
“It is the first step you want to take to move from the maths to an experimental set up.”
Currently the furthest mankind has managed to venture is to the edge of our own solar system, around 11.6 billion miles from Earth.
The Voyager 1 spacecraft, which was launched in 1977, is currently on the cusp of becoming the first man-made object to leave our solar system.
However, it would take it 75,000 years to reach our nearest star, Alpha Centauri, which is around 4.3 light years away.
Dr White, however, believes that by containing a spacecraft inside a warp bubble, it could travel over larger distances without having to break the speed of light.
The bubble would compress space and time in front of it.
Dr White is building on work by a Mexican scientist called Miguel Alcubierre who estimated that it would be possible to achieve this if an object had negative mass.
Dr White believes a spacecraft would need to be surrounded by a ring of exotic matter known as negative vacuum energy.
He found that by changing the shape of the bubble and oscillating its intensity, it was possible to reduce the amount of energy that would be required.
Dr White and his team have now set up their equipment in a laboratory that was once used to develop technology for the Apollo space missions in the 1960s.
The project, which he has named Eagleworks, uses a high-voltage capacitor ring that is charged up and discharged as a laser is fired through the centre.
Dr White is looking for changes in the way the light passes through it that may indicate the photons have passed through a warp bubble.
Their experiments have shown this may be possible but Dr White insisted it was too early to say anything definitive about what they have achieved.
He said: “We have two separate labs that have been working on this and I think we have some potential non-null results that are intriguing.
“However, these results are far from conclusive and it is way too early to say anything definitive, so we will continue to investigate.”
For those hoping that they may soon be able take a trip to our nearest stellar neighbours, Dr White believes they may still have some time to wait.
He said it could take anything from 20 to 200 years before such a spacecraft could be created.
Experiments that have attempted to break the speed of light in the past have ultimately proved to be unsuccessful. Albert Einstein proposed that nothing can travel faster than the speed of light in a vacuum.
Physicists based at CERN near Geneva, however, stunned the scientific world two years ago by claiming to have shown that particles could move faster than the speed of light.
However, they were later shown to have made a mistake and the extra speed was due to a faulty wire connection in timing equipment.
Dr White and his colleagues, however, have already envisaged what a warp drive spacecraft would look like.
Their design would consist of a central section shaped like an American football, where the crew and equipment would be housed, surrounded by one or two rings attached to it by pylons.
These rings would contain the exotic matter that would generate the warp field but it would also require engines to drive the spacecraft forward.
While it may still be some time before interstellar travel becomes possible, spaceships exploiting warp bubbles could be used to reduce the travel time in our own solar system, reducing journeys that take years to weeks or months.
Dr White told the conference: “What is necessary to make the trick work is the presence of the ring around the space craft. It would have exotic matter or negative vacuum energy.
“You will still need some kind of main propulsion system to make the thing work.”
In an interview with New Scientist he added: “You would have an initial velocity as you set off, and then when you turn on the ring of negative vacuum energy it augments your velocity.
“Space would contract in front of the spacecraft and expand behind it, sending you sliding through warped space-time and covering the distance at a much quicker rate.
“It would be like watching a film in fast forward.
“We are very much in the science rather than the technology phase.
“We have got some very specific and controlled steps to take to create a proof of concept, to show we have properly understood and applied the math and physics.”
Factors in R programming play an integral role in data analysis, forming the foundation of categorical variables that allow statistical modeling and data visualization to be effective and meaningful. They are extensively used in data wrangling and preprocessing, making them an essential tool for any data analyst or data scientist working with R.
This article aims to shed light on the importance, structure, and functionality of factors in R programming. We will explore how factors can be created, manipulated, and utilized to make robust data analysis and insights generation possible.
II. Understanding the Concept of Factors in R
In R, a factor is a data structure used for fields that take a limited number of different values, also known as categorical data. The information can be ordered (ordinal), such as ‘Low’, ‘Medium’, ‘High’, or unordered (nominal), such as ‘Male’, ‘Female’. Factors are stored as a vector of integer values with a corresponding set of character values to use when the factor is displayed.
Factors are central to many statistical procedures and are especially useful in statistical modeling where they serve as categorical variables. Using factors, we can categorize and order the data, facilitating data interpretation, and paving the way for powerful statistical analysis.
III. Creating Factors in R
Creating factors in R is relatively straightforward. The factor() function is used to encode a vector as a factor. The function takes a vector as input and outputs a factor with levels (categories).
# Creating a factor from a character vector
sex_vector <- c("Male", "Female", "Male", "Female")
sex_factor <- factor(sex_vector)
print(sex_factor)
In the above example, sex_vector is a character vector that is transformed into the factor sex_factor using the factor() function. When printed, sex_factor displays two levels, Female and Male (by default, factor() sorts the levels alphabetically).
IV. Manipulating Factors
Manipulating factors involves changing the levels of a factor, ordering the levels, or modifying the labels. The levels() function can be used to access or set the levels of a factor.
# Changing the levels of a factor
levels(sex_factor) <- c("F", "M")
print(sex_factor)
In this case, the labels “Female” and “Male” are changed to “F” and “M”, respectively. For ordered factors, the ordered() function can be used, which creates an ordered factor, a type of factor where the order of the levels is meaningful.
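As a brief illustration, the following sketch creates an ordered factor from a hypothetical set of severity ratings; the data values are invented for this example.

# An ordered factor: the levels have a meaningful order (Low < Medium < High)
severity <- ordered(c("Low", "High", "Medium", "Low"),
                    levels = c("Low", "Medium", "High"))
print(severity)
severity > "Low"   # comparisons respect the ordering of the levels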
V. Factors in Data Frames
Factors are commonly found in data frames, the primary data structure for storing data tables in R. When character vectors are included in a data frame, they are often converted to factors for efficient storage and ease of analysis. The str() function can be used to check the structure of a data frame and verify whether a variable has been read as a factor. To prevent automatic conversion to factors, the argument stringsAsFactors = FALSE can be passed when creating the data frame.
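The sketch below illustrates this with a small hypothetical data frame. Note that in R 4.0.0 and later, stringsAsFactors defaults to FALSE, so the conversion must be requested explicitly if it is wanted.

# A hypothetical data frame; stringsAsFactors controls conversion to factors
patients <- data.frame(id = 1:4,
                       sex = c("Male", "Female", "Male", "Female"),
                       stringsAsFactors = TRUE)   # request conversion
str(patients)   # shows 'sex' as a Factor with 2 levels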
VI. Using Factors in Data Analysis
Factors are pivotal in data analysis. They are involved in various data operations, such as data summarization, tabulation, and visualization. Furthermore, factors are integral to statistical modeling techniques. For example, in regression models, factors can be used to represent categorical independent variables.
In the ggplot2 package, factors are used to divide data into groups and represent these groups on the axes of a plot. The ordering of factor levels can be manipulated to control the order of categories in the plot, as the sketch below illustrates.
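The following sketch, using invented data, shows a simple tabulation of a factor and a ggplot2 bar chart whose category order follows the order of the factor levels (it assumes the ggplot2 package is installed).

library(ggplot2)

# Hypothetical grouping variable with an explicit level order
df <- data.frame(group = factor(c("B", "A", "C", "A", "B", "B"),
                                levels = c("A", "B", "C")))

table(df$group)   # tabulation of the factor levels

# The bars appear in the order A, B, C, following the factor levels
ggplot(df, aes(x = group)) + geom_bar()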
Factors in R programming are indispensable for working with categorical data. They streamline data manipulation and analysis, providing a simple yet powerful way of handling categories and groups within datasets. Understanding and utilizing factors is critical for anyone seeking to harness the full potential of R for data analysis and visualization.
Parasitism is a non-mutual symbiotic relationship between species, where one species, the parasite, benefits at the expense of the other, the host. Traditionally parasite referred primarily to organisms visible to the naked eye, or macroparasites (such as helminths). Parasite now includes microparasites, which are typically smaller, such as viruses and bacteria. Some examples of parasites include the plants mistletoe and cuscuta, and animals such as hookworms.
Unlike predators, parasites do not kill their host, are generally much smaller than their host, and will often live in or on their host for an extended period. Both are special cases of consumer-resource interactions. Parasites show a high degree of specialization, and reproduce at a faster rate than their hosts. Classic examples of parasitism include interactions between vertebrate hosts and tapeworms, flukes, the Plasmodium species, and fleas. Parasitism differs from the parasitoid relationship because parasitoids generally kill their hosts.
Parasites reduce host biological fitness through general or specialized pathology, ranging from parasitic castration and impairment of secondary sex characteristics to the modification of host behavior. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, e.g. food, water, heat, habitat, and transmission. Although parasitism applies unambiguously to many cases, it is part of a continuum of types of interactions between species, rather than an exclusive category. In many cases, it is difficult to demonstrate that the host is harmed. In others, there may be no apparent specialization on the part of the parasite, or the interaction between the organisms may be short-lived.
- 1 Etymology
- 2 Types
- 3 Host defenses
- 4 Evolutionary aspects
- 5 Ecology
- 6 Adaptation
- 7 Value
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
First used in English in 1539, the word parasite comes from the Medieval French parasite, from the Latin parasitus, the latinisation of the Greek παράσιτος (parasitos), "one who eats at the table of another", itself from παρά (para), "beside, by", + σῖτος (sitos), "wheat". Coined in English in 1611, the word parasitism comes from the Greek παρά (para) + σιτισμός (sitismos), "feeding, fattening".
Parasites are classified based on their interactions with their hosts and on their life cycles. An obligate parasite is totally dependent on the host to complete its life cycle, while a facultative parasite is not.
Parasites that live on the surface of the host are called ectoparasites (e.g. some mites). Those that live inside the host are called endoparasites (including all parasitic worms). Endoparasites can exist in one of two forms: intercellular parasites (inhabiting spaces in the host’s body) or intracellular parasites (inhabiting cells in the host’s body). Intracellular parasites, such as protozoa, bacteria or viruses, tend to rely on a third organism, generally known as the carrier or vector, which transmits them to the host. An example of this interaction is the transmission of malaria, caused by a protozoan of the genus Plasmodium, to humans by the bite of an anopheline mosquito. Parasites living in an intermediate position, being half-ectoparasites and half-endoparasites, are sometimes called mesoparasites.
An epiparasite is one that feeds on another parasite. This relationship is also sometimes referred to as hyperparasitism, exemplified by a protozoan (the hyperparasite) living in the digestive tract of a flea living on a dog.
Social parasites take advantage of interactions between members of social organisms such as ants or termites. An example is Phengaris arion, a butterfly whose larvae employ mimicry to parasitize certain species of ants. In kleptoparasitism, parasites appropriate food gathered by the host. An example is the brood parasitism practiced by cuckoos and cowbirds, which do not build nests of their own and leave their eggs in nests of other species. The host behaves as a "babysitter" as they raise the young as their own. If the host removes the cuckoo's eggs, some cuckoos will return and attack the nest to compel host birds to remain subject to this parasitism.
Intraspecific social parasitism may also occur. One example of this is parasitic nursing, where some individuals take milk from unrelated females. In wedge-capped capuchins, higher ranking females sometimes take milk from low ranking females without any reciprocation. The high ranking females benefit at the expense of the low ranking females.
Parasitism can take the form of isolated cheating or exploitation among more generalized mutualistic interactions. For example, broad classes of plants and fungi exchange carbon and nutrients in common mutualistic mycorrhizal relationships; however, some plant species known as myco-heterotrophs "cheat" by taking carbon from a fungus rather than donating it.
An adelpho-parasite is a parasite in which the host species is closely related to the parasite, often being a member of the same family or genus. An example of this is the citrus blackfly parasitoid, Encarsia perplexa, unmated females of which may lay haploid eggs in the fully developed larvae of their own species. These result in the production of male offspring. The marine worm Bonellia viridis has a similar reproductive strategy, although the larvae are planktonic.
Autoinfection is the infection of a primary host with a parasite, particularly a helminth, in such a way that the complete life cycle of the parasite happens in a single organism, without the involvement of another host. The primary host is therefore at the same time the secondary host of the parasite. Some of the organisms in which autoinfection occurs are Strongyloides stercoralis, Enterobius vermicularis, Taenia solium, and Hymenolepis nana. Strongyloidiasis, for example, involves premature transformation of noninfective larvae into infective larvae, which can then penetrate the intestinal mucosa (internal autoinfection) or the skin of the perineal area (external autoinfection). Infection can be maintained by repeated migratory cycles for the remainder of the person's life.
The first line of defense against invading parasites is the skin. Skin is made up of layers of dead cells and acts as a physical barrier to invading organisms. These dead cells contain the protein keratin, which makes skin tough and waterproof. Most microorganisms need a moist environment to survive, so keeping the skin dry helps prevent invading organisms from colonizing it. The skin also secretes sebum, which is toxic to most microorganisms.
The mouth contains saliva, which helps prevent foreign organisms from entering the body orally. Saliva, like tears, contains lysozyme, an enzyme that breaks down the cell walls of invading microorganisms.
Should an organism pass the mouth, the stomach is the next line of defense. The stomach contains hydrochloric acid and other gastric secretions, which give it a pH of around 2. This acidity helps kill most microorganisms that try to invade the body through the gastrointestinal tract.
Parasites can also invade the body through the eyes. The lashes on the eyelids help prevent invading microorganisms from entering the eye in the first place, and even if microorganisms do get into the eye, tears contain the enzyme lysozyme, which kills most of them.
Should a parasite enter the body, the immune system is a vertebrate’s major defense against parasitic invasion. The immune system is made up of different families of molecules, including serum proteins and pattern recognition receptors (PRRs). PRRs are intracellular and cell-surface receptors that activate dendritic cells, which in turn activate the lymphocytes of the adaptive immune system, such as T cells and antibody-producing B cells, whose variable receptors recognize parasites.
In response to parasitic attack, plants activate a series of metabolic and biochemical pathways that enact defensive responses. For example, parasitic invasion increases activity in the jasmonic acid (JA) and salicylic acid (SA) signaling pathways. These pathways produce chemicals and defensive molecules that help fight off the attack, and different biochemical pathways are activated by different parasites. In general, two types of response can be activated by these pathways: plants can initiate either a specific or a non-specific response. Specific responses involve gene-for-gene recognition between the plant and the parasite, mediated by the ability of the plant’s cell receptors to recognize and bind molecules located on the cell surface of the parasite. Once the plant’s receptors recognize the parasite, the plant localizes its defensive compounds to that area, creating a hypersensitive response. This form of defense localizes the area of attack and keeps the parasite from spreading, and it also prevents the plant from wasting energy by increasing defenses where they are not needed. However, specific defensive responses only target particular parasites; if the plant lacks the ability to recognize a parasite, the specific defense response will not be activated. Non-specific defensive responses work against all parasites. These responses are active over time and are systemic, meaning that they are not confined to one area of the plant but spread throughout the entire organism. Non-specific responses are, however, energetically costly, since the plant has to ensure that the genes producing them are always expressed.
Parasitism has arisen independently many times. Depending on the definition used, as many as half of all animals have at least one parasitic phase in their life cycles, and parasitism is also frequent among plants and fungi. Almost all free-living animals are host to one or more parasitic taxa.
Parasites evolve in response to their hosts' defences, sometimes in a manner specific to a particular host taxon and specializing to the point where they infect only a single species. Such narrow host specificity can be costly over evolutionary time, however, if the host species becomes extinct. Therefore many parasites can infect a variety of more or less closely related host species, with different success rates.
In turn, host defenses coevolve in response to attacks by parasites. Theoretically, parasites may have an advantage in this evolutionary arms race because their generation time commonly is shorter. Hosts reproduce less quickly than parasites, and therefore have fewer chances to adapt than their parasites do over a given span of time.
Long-term coevolution sometimes leads to a relatively stable relationship tending to commensalism or mutualism, as, all else being equal, it is in the evolutionary interest of the parasite that its host thrives. A parasite may evolve to become less harmful for its host or a host may evolve to cope with the unavoidable presence of a parasite—to the point that the parasite's absence causes the host harm. For example, although animals infected with parasitic worms are often clearly harmed, and therefore parasitized, such infections may also reduce the prevalence and effects of autoimmune disorders in animal hosts, including humans.
Competition between parasites tends to favor faster-reproducing and therefore more virulent parasites. Parasites whose life cycle involves the death of the host, in order to exit the present host and sometimes to enter the next, evolve to be more virulent, or even alter the behavior or other properties of the host to make it more vulnerable to predators. Parasites that are transmitted largely to the offspring of their current host tend to become less virulent, or even mutualistic, so that their hosts reproduce more effectively.
The presumption of a shared evolutionary history between parasites and hosts can sometimes elucidate how host taxa are related. For instance, there has been dispute about whether flamingos are more closely related to the storks and their relatives, or to ducks, geese and their relatives. The fact that flamingos share parasites with ducks and geese is evidence these groups may be more closely related to each other than either is to storks.
Parasitism is part of one explanation for the evolution of secondary sex characteristics seen in breeding males throughout the animal world, such as the plumage of male peacocks and manes of male lions. According to this theory, female hosts select males for breeding based on such characteristics because they indicate resistance to parasites and other disease.
In rare cases, a parasite may even undergo co-speciation with its host. One particularly remarkable example of co-speciation exists between the simian foamy virus (SFV) and its primate hosts. In one study, the phylogenies of SFV polymerase and the mitochondrial cytochrome oxidase subunit II from African and Asian primates were compared. Surprisingly, the phylogenetic trees were very congruent in branching order and divergence times. Thus, the simian foamy viruses may have co-speciated with Old World primates for at least 30 million years.
A single parasite species usually has an aggregated distribution across host individuals: most hosts harbor few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology, because it makes the use of standard parametric statistics inappropriate. Several authors instead recommend log-transforming the data before applying parametric tests, or using non-parametric statistics. However, these approaches can give rise to further problems, so modern quantitative parasitology is based on more advanced biostatistical methods.
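To make the statistical point concrete, the following Python sketch is a hypothetical illustration, not code from any of the studies cited here; the negative binomial (gamma-Poisson) model, the parameter values, and the function names are assumptions chosen purely for demonstration. It simulates aggregated parasite burdens in two host samples and compares a raw parametric test with the log-transformed and non-parametric alternatives mentioned above.

```python
# Hypothetical illustration (not from any cited study) of why aggregated parasite
# counts cause trouble for parametric statistics, and of the two remedies the text
# mentions: log-transformation and non-parametric tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_burdens(mean_burden, k, n_hosts, rng):
    """Draw aggregated parasite counts as a gamma-Poisson (negative binomial) mixture.

    A small dispersion parameter k gives strong aggregation: most hosts carry
    few parasites while a few hosts carry very many.
    """
    individual_rates = rng.gamma(shape=k, scale=mean_burden / k, size=n_hosts)
    return rng.poisson(individual_rates)

group_a = simulate_burdens(mean_burden=10, k=0.5, n_hosts=60, rng=rng)
group_b = simulate_burdens(mean_burden=14, k=0.5, n_hosts=60, rng=rng)

# Aggregation shows up as a variance far exceeding the mean.
print("group A mean:", group_a.mean(), "variance:", group_a.var())

# Raw parametric t-test: questionable, because the counts are heavily right-skewed.
print("raw t-test p =", stats.ttest_ind(group_a, group_b, equal_var=False).pvalue)

# Remedy 1: log(x + 1) transformation before a parametric test.
print("log-transformed t-test p =",
      stats.ttest_ind(np.log1p(group_a), np.log1p(group_b), equal_var=False).pvalue)

# Remedy 2: a non-parametric rank test.
print("Mann-Whitney U p =", stats.mannwhitneyu(group_a, group_b).pvalue)
```

With strong aggregation the sample variance greatly exceeds the mean, which is why a raw t-test on such counts is unreliable and why transformed or rank-based tests are usually preferred.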
Hosts represent discrete habitat patches that can be occupied by parasites. A hierarchical set of terminology has come into use to describe parasite assemblages at different host scales.
- Infrapopulation: all the parasites of one species in a single individual host.
- Metapopulation: all the parasites of one species in a host population.
- Infracommunity: all the parasites of all species in a single individual host.
- Component community: all the parasites of all species in a host population.
- Compound community: all the parasites of all species in all host species in an ecosystem.
The diversity ecology of parasites differs markedly from that of free-living organisms. For free-living organisms, diversity ecology features many strong conceptual frameworks including Robert MacArthur and E. O. Wilson's theory of island biogeography, Jared Diamond's assembly rules and, more recently, null models such as Stephen Hubbell's unified neutral theory of biodiversity and biogeography. Frameworks are not so well-developed for parasites and in many ways they do not fit the free-living models. For example, island biogeography is predicated on fixed spatial relationships between habitat patches ("sinks"), usually with reference to a mainland ("source"). Parasites inhabit hosts, which represent mobile habitat patches with dynamic spatial relationships. There is no true "mainland" other than the sum of hosts (host population), so parasite component communities in host populations are metacommunities.
Nonetheless, different types of parasite assemblages have been recognized in host individuals and populations, and many of the patterns observed for free-living organisms are also pervasive among parasite assemblages. The most prominent of these is the interactive-isolationist continuum. This proposes that parasite assemblages occur along a cline from interactive communities, where niches are saturated and interspecific competition is high, to isolationist communities, where there are many vacant niches and interspecific interaction is not as important as stochastic factors in providing structure to the community. Whether this is so, or whether community patterns simply reflect the sum of underlying species distributions (no real "structure" to the community), has not yet been established.
Parasites infect sympatric hosts (those within the same geographical area) more effectively. This phenomenon supports the Red Queen hypothesis, which states that interactions between species, such as hosts and parasites, lead to constant natural selection for adaptation and counter-adaptation. Because parasites track the locally common host phenotypes, they are less infective to allopatric hosts, those from a different geographical region.
Experiments published in 2000 analyzed two snail populations from two different sources, Lake Ianthe and Lake Poerua in New Zealand. The populations were exposed to two pure sources of parasites (a digenetic trematode) taken from the same lakes. In the experiment, the snails were exposed to their sympatric parasites, to allopatric parasites, and to a mixed source of parasites. The results suggest that the parasites were far more effective at infecting their sympatric snails than their allopatric snails. Although the allopatric snails were still infected, infectivity was much lower than in the sympatric snails. Hence, the parasites were found to have adapted to infecting local populations of snails.
Parasites have a variety of methods of infecting hosts. For example, Acanthamoeba enters the body when the environment is not hostile, and Strongyloides stercoralis enters the body when a host steps barefoot on infected ground. Many parasites enter the food of their hosts and wait to be eaten. Plasmodium malariae uses a mosquito vector to transmit malaria, and Loa loa parasites use deer flies to enter hosts.
Parasites inhabit living organisms and therefore face problems that free-living organisms do not. Hosts, the only habitats in which parasites can survive, actively try to avoid, repel, and destroy parasites. Parasites employ numerous strategies for getting from one host to another, a process sometimes referred to as parasite transmission or colonization.
Some endoparasites infect their host by penetrating its external surface, while others must be ingested. Once inside the host, adult endoparasites need to shed offspring into the external environment to infect other hosts. Many adult endoparasites reside in the host’s gastrointestinal tract, where offspring can be shed along with host excreta. Adult stages of tapeworms, thorny-headed worms and most flukes use this method.
Larval stages of endoparasites often infect sites in the host other than the blood or gastrointestinal tract. In many such cases, larval endoparasites require their host to be consumed by the next host in the parasite’s life cycle in order to survive and reproduce. Alternatively, larval endoparasites may shed free-living transmission stages that migrate through the host’s tissue into the external environment, where they actively search for or await ingestion by other hosts. The foregoing strategies are used, variously, by larval stages of tapeworms, thorny-headed worms, flukes and parasitic roundworms.
Some ectoparasites, such as monogenean worms, rely on direct contact between hosts. Ectoparasitic arthropods may rely on host-host contact (e.g. many lice), shed eggs that survive off the host (e.g. fleas), or wait in the external environment for an encounter with a host (e.g. ticks). Some aquatic leeches locate hosts by sensing movement and only attach when certain temperature and chemical cues are present.
Some parasites modify host behavior to make transmission to other hosts more likely. For example, in California salt marshes, the fluke Euhaplorchis californiensis reduces the ability of its killifish host to avoid predators. This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan Toxoplasma gondii, a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odors, but rats infected with T. gondii are drawn to this scent, which may increase transmission to feline hosts.
Roles in ecosystems
Modifying the behavior of infected hosts, to make transmission to other hosts more likely to occur, is one way parasites can affect the structure of ecosystems. For example, in the case of Euhaplorchis californiensis (discussed above) it is plausible that the local predator and prey species might be different if this parasite were absent from the system.
Although parasites are often omitted in depictions of food webs, they usually occupy the top position. Parasites can function like keystone species, reducing the dominance of superior competitors and allowing competing species to co-exist.
Many parasites require multiple hosts of different species to complete their life cycles and rely on predator-prey or other stable ecological interactions to get from one host to another. In this sense, the parasites in an ecosystem reflect the health of that system.
Although parasites are generally considered to be harmful, the eradication of all parasites would not necessarily be beneficial. Parasites account for as much as or more than half of life's diversity; they perform an important ecological role (by weakening prey) that ecosystems would take some time to adapt to; and without parasites organisms may eventually tend to asexual reproduction, diminishing the diversity of sexually dimorphic traits. Parasites provide an opportunity for the transfer of genetic material between species. On rare, but significant, occasions this may facilitate evolutionary changes that would not otherwise occur, or that would otherwise take even longer.
- Consumer-resource systems
- Endosymbiotic theory
- Human parasites
- Intestinal parasite
- List of human parasitic diseases
- List of parasites (human)
- List of parasitic organisms
- Monoxenous development
- Parasitic plant
- Parasitoid wasp
- The Extended Phenotype
- Claude Combes, The Art of being a Parasite, U. of Chicago Press, 2005
- Getz WM (2011). "Biomass transformation webs provide a unified approach to consumer-resource modelling". Ecol. Lett. 14 (2): 113–24. doi:10.1111/j.1461-0248.2010.01566.x. PMC 3032891. PMID 21199247.
- "The Differences Between Parasites and Parasitoids". BugLife. Retrieved 2013-07-19.
- Godfray HC (2004). "Parasitoids". Current Biology Magazine 14 (12): R456. doi:10.1016/j.cub.2004.06.004. PMID 15203011.
- παράσιτος, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- παρά, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- σῖτος, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- σιτισμός, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
- "Pathogenic Parasitic Infections". PEOI. Retrieved 2013-07-18.
- Thomas JA, Schönrogge K, Bonelli S, Barbero F, Balletto E (2010). "Corruption of ant acoustical signals by mimetic social parasites: Maculinea butterflies achieve elevated status in host societies by mimicking the acoustics of queen ants". Commun Integr Biol 3 (2): 169–71. doi:10.4161/cib.3.2.10603. PMC 2889977. PMID 20585513.
- "Bullies of the Bird World". National Wildlife Magazine. Aug/Sep 1997, Vol. 35 No. 5
- O'Brien, Timothy G. (1988). "Parasitic nursing behavior in the wedge-capped capuchin monkey (Cebus olivaceus)". American Journal of Primatology 16 (4): 341–344. doi:10.1002/ajp.1350160406.
- Featured Creatures
- Larry Gonick and Mark Wheelis, The Cartoon Guide to Genetics. HarperCollins, 1991.
- Host-Parasite Interactions Innate Defenses of the Host. Retrieved from University of Colorado website: http://www.colorado.edu/outreach/BSI/k12activities/interactive/innatedefenses.pdf
- Maizels RM (2009). "Parasite immunomodulation and polymorphisms of the immune system". J. Biol. 8 (7): 62. doi:10.1186/jbiol166. PMC 2736671. PMID 19664200.
- Runyon JB, Mescher MC, De Moraes CM (2010). "Plant defenses against parasitic plants show similarities to those induced by herbivores and pathogens". Plant Signal Behav 5 (8): 929–31. doi:10.4161/psb.5.8.11772. PMC 3115164. PMID 20495380.
- Hatcher, J. M. & Dunn, M. A. (2011). Parasites in Ecological Communities. Cambridge, UK: Cambridge University Press.
- Frank SA (2000). "Specific and non-specific defense against parasitic attack". J. Theor. Biol. 202 (4): 283–304. doi:10.1006/jtbi.1999.1054. PMID 10666361.
- Price, P.W. 1980. Evolutionary Biology of Parasites. Princeton University Press, Princeton
- Wolff, Ewan D. S.; Steven W. Salisbury; John R. Horner; David J. Varricchio (2009). "Common Avian Infection Plagued the Tyrant Dinosaurs". In Hansen, Dennis Marinus. PLoS ONE 4 (9): e7288. doi:10.1371/journal.pone.0007288. PMC 2748709. PMID 19789646. Retrieved 2013-07-08.
- Rook GA (2007). "The hygiene hypothesis and the increasing prevalence of chronic inflammatory disorders". Transactions of the Royal Society of Tropical Medicine and Hygiene 101 (11): 1072–4. doi:10.1016/j.trstmh.2007.05.014. PMID 17619029.
- Switzer WM, Salemi M, Shanmugam V, Gao F, Cong ME, Kuiken C, Bhullar V, Beer BE, Vallet D, Gautier-Hion A, Tooze Z, Villinger F, Holmes EC, Heneine W (2005). "Ancient co-speciation of simian foamy viruses and primates". Nature 434 (7031): 376–80. doi:10.1038/nature03341. PMID 15772660.
- Rózsa L, Reiczigel J, Majoros G (2000). "Quantifying parasites in samples of hosts". J. Parasitol. 86 (2): 228–32. doi:10.1645/0022-3395(2000)086[0228:QPISOH]2.0.CO;2. PMID 10780537.
- Lively CM, Dybdahl MF (2000). "Parasite adaptation to locally common host genotypes". Nature 405 (6787): 679–81. doi:10.1038/35015069. PMID 10864323.
- Lafferty KD, Morris AK (1996). "Altered behavior of parasitized killifish increases susceptibility to predation by bird final hosts". Ecology 77.
- Berdoy M, Webster JP, Macdonald DW (2000). "Fatal attraction in rats infected with Toxoplasma gondii". Proc. Biol. Sci. 267 (1452): 1591–4. doi:10.1098/rspb.2000.1182. PMC 1690701. PMID 11007336.
- Holt RD (2010). "IJEE Soapbox". Israel Journal of Ecology and Evolution 56 (3): 239–250. doi:10.1560/IJEE.56.3-4.239.
- Parasitology Parasites Zoonoses—(Polish/English) over 50 movies (Filmoteka) and over 250 photos (Fotogaleria/Photogallery) with human and animal parasites.
- Aberystwyth University: Parasitology—class outline with links to full text articles on parasitism and parasitology.
- KSU: Parasitology Research—parasitology articles and links.
- Medical Parasitology—online textbook.
- Division of Parasitic Diseases, Centers for Disease Control and Prevention
- VCU Virtual Parasite Project—Virtual Parasite Project at Virginia Commonwealth University's Center for the Study of Biological Complexity
- Parasites World—Parasites articles and links.
- Parasitic and Parasitoid Alien Species in Science Fiction Movies
- Toxoplasma gondii in the Subarctic and Arctic
The following is a brief overview of the extensive research on the factors that lead to achievement as it relates to the current study. Research on intelligence, growth and fixed mindsets, motivation, and mindset interventions will be explored in order to understand how mindsets relate to student achievement scores. The literature review will present the following topic areas: the incremental theory of intelligence (mindset theory); the historical background of mindset theory through current research and interventions; and the impact of interventions at the middle school level, including gender, adolescent development, motivation, and other factors facing current middle school aged students. It will also include a brief explanation of the noncognitive mindset curriculum from the GEAR UP Iowa (Leuwerke, 2016a) initiative, the lessons that lead to the intervention presented in this study, and the factors that impact the academic achievement of middle school students.
Incremental Theory of Intelligence
A growth mindset, previously known as the incremental theory of intelligence, is the system of beliefs that intelligence and abilities are malleable and can improve or grow with effort (Hong, Chiu, Dweck, & Wan, 1999; King, McInerney, & Watkins, 2012). When students believe they have control over their own ability to learn and believe they have the potential to improve, their motivation and drive increase (Esparza, Shumow, & Schmidt, 2014). This can be true in both educational and non-educational settings. For example, an adolescent participating on a sports team for the first time might not be able to play at the same level as others on the team. If they are in a fixed mindset, they might believe that they will always perform worse despite further practice. However, if they are in a growth mindset and believe they can improve their abilities, they are more likely to continue to practice, work hard, and expand their ability and skills. Developing this mindset can improve their skills and may even help them surpass other players on the team. Teaching students about growth mindset can have effects that last well beyond one year in one classroom. If students develop the ability to transfer their mindset learning to situations outside the classroom, countless opportunities would open to them (Blackwell, Trzesniewski, & Dweck, 2007).
Growth and Fixed Mindsets
There are two forms of mindsets, growth and fixed. A growth mindset is based on the belief that the primary traits of an individual may be built through commitment and hard work (Blackwell et al., 2007; Dweck, 2007; Hochanadel & Finamore, 2015). Such persons learn from their failures, accept challenges, and persist in times of setbacks. They see academic challenges, for example, as an opportunity to learn and improve rather than a threat to their ability. They are inspired by others’ successes and view effort as the way to mastery. According to Yeager, Romero, Paunesku, Hulleman, Schneider, Hinojosa … & Trott (2016), intelligence is perceived as a malleable quality that may be developed over time. In contrast, a fixed-minded person perceives his or her intelligence as a static, inherent quality that cannot be changed. These individuals see an urgent need to repeatedly prove themselves (Blackwell et al., 2007; Dweck, 2007; Hong et al., 1999). Rather than having a desire to learn from academic challenges, they prefer problems that are easy to solve and that will make them feel and appear smarter (Li, Zhou, Zhang, Xiong, Nie & Fang, 2017). Instead of embracing challenges, they avoid them. In addition, they see effort as something that should not be necessary if they were smart enough to figure out the problem, and they see the success of other people as a threat. Hochanadel and Finamore (2015) state that when intelligence is viewed as a fixed trait that cannot be built, the fixed-minded assumption follows that a person’s IQ dictates their future success and failure.
Emerging studies on brain development have set the stage to transform the future course of education. The perception of a ‘static’ brain or mindset is decidedly no longer meaningful (Esparza et al., 2014). A newer discovery regarding the brain is that humans have the ability, and the choice, to make their brains change (Perrone-McGovern, Simon-Dack, Beduna, Williams & Esche, 2015). This view contradicts the previously held idea that the brain remains the same. Alfred Binet, the first test developer to undertake intelligence measurement, argued that intelligence is malleable and can be altered by educational practice (Butler, 2000).
As one of the foremost researchers of mindset theories of our time, Dweck (2007) outlines two theories of intelligence and their corresponding mindsets. According to her, the theory which views intelligence as constant or static is referred to as entity theory. Vandewalle (2012) notes that people who hold this view are seen to possess a fixed mindset. As hypothesized by the author, this theory emanates from the belief that a certain innate capacity is what dictates success. Conversely, a growth mindset, or incremental theory, stems from the argument that abilities and talents may be built in different ways, such as through persistence, good teaching, and effort.
Over the years, middle school has been regarded as a time of great transition for students. During this time, middle school students often emphasize ability, self-assessment, social comparison, and competition at a time of the adolescent’s increased self-focus (King et al., 2012). Burnette, O’boyle, VanEpps, Pollack & Finkel (2013) explored two studies conducted by Bandura associated with middle-grade students’ theories of intelligence. The students could select an activity which differed in difficulty. The first two alternatives demonstrated a performance objective; the first was very easy and the other more difficult, but both were deemed manageable for the participants. The final alternative depicted a learning goal, which was a completely new concept to the learners. The findings revealed a clearly significant link between students’ goal choices and their theories of intelligence: “the more the students held an intelligence entity theory, the more likely they were to select a performance objective, whilst the more they had incremental theory, the more likely they were to choose the learning” (Dweck, 2000, pp. 20-21). Therefore, students with a growth mindset choose to take on more challenging tasks that will help them to learn more and achieve higher.
The major difference in behavior between students with different mindsets is the effort they direct toward learning something new. Dweck (2000) points out that people associated with entity theory feel that the more effort they have to expend to succeed, the less smart they perceive themselves to be. These students were described as learners who normally show helplessness in class and who easily give up in order to avoid their fear of failure. By contrast, students with a growth mindset, or those holding to the incremental theory, are willing to try their best to acquire knowledge of new things regardless of their difficulty. According to Good, Aronson, and Inzlicht (2003), promoting incremental theory urges students to value effort, to persevere when encountering hard tasks, and to set goals. The difference between these theories or mindsets contributes to varying academic performance. It should be noted, however, that fostering an incremental theory among learners can stem from the teacher’s own professional growth mindset. The teacher should believe internally that every student can learn and grow in his or her intellectual capacity (Good et al., 2003).
The Foundation of Growth Mindset
The breakthrough of Dweck and Mueller regarding incremental theory started with research completed in 1998. They proposed that “praise for intelligence can undermine the performance and motivation of children” (p. 33). To test this view, the authors carried out an experiment comparing praise for being smart (ability-praise) with praise for hard work (effort-praise). A total of 128 fifth grade students (58 boys and 70 girls) aged between 10 and 12 years were involved in the research. The participants took an intelligence test and were informed, regardless of their actual score, that they got a score of 80%. They were also told that the score they achieved on the test was very high. Following this feedback, they were praised in different ways. About one third of the participants were praised for their capacity, aligning with a fixed mindset: they were praised for their intelligence and told that they were smart. Another third were praised for their effort and hard work, aligning with a growth mindset. The rest of the students acted as a control and thus received no further feedback about their performance. After the praise, the students were asked whether they preferred working on a learning goal or on a performance goal, and were given four options. The first three choices represented a performance goal while the last option represented a learning goal. The selection of a goal was influenced by the type of praise the children received. Those who were praised for being “smart” had a greater probability of choosing the easy tasks, whereas the effort-praised students tended to select the more difficult tasks. The control group, by contrast, was divided roughly equally (Dweck & Mueller, 1998). Overall, the study provided insightful information with regard to the foundation of the growth mindset.
Characteristics of Mindset
According to Brougham (2016), it is possible for one person to hold both forms of mindset. An individual who possesses more than one type of mindset would make decisions depending on the situation they are currently experiencing. To demonstrate this, students might have a fixed mindset in one discipline, say literacy, and a growth mindset in another, say science. People with growth mindsets and fixed mindsets display clear differences, and these depend heavily on the situations surrounding them (Dweck, 2000; Hochanadel & Finamore, 2015; Schroder, Yalch, Dawood, Callahan, Donnellan, & Moser, 2017). Consequently, it is important to explore the characteristics of mindset so that we have a clear understanding of what exactly is being discussed and analyzed.
Views on Failure
As mentioned in the previous subsections, the differences between fixed and growth mindsets are clearly evident, and one scenario where this is apparent is their views on failure. Those with a growth mindset are directed toward learning goals and, as such, see failure positively and even try to improve and do new things so as to succeed in the future (Chao, Visaria, Mukhopadhyay, & Dehejia, 2017). In their view, failing creates the need to put forth additional effort, as well as to enhance their self-instruction and self-monitoring. On the contrary, Chao et al. (2017) state that those with a fixed mindset perceive failure negatively, are discouraged by it, and believe that it reveals their low intelligence. As a result of feeling discouraged, they are reluctant to exert further effort, since they believe that failure demonstrates their inability. They believe that by failing to achieve a given task, they will not be in a position to be successful in it later because “situations that made them fail would still remain” (Chao et al., 2017, p. 43). Thus, they feel helpless in the classroom after their failure and may then label themselves as an overall failure rather than recognizing that they failed at one particular task.
Blackwell et al. (2007) contend that because the words teachers choose have a great effect on the behavior of students, it is important for educators to offer praise during the process of learning. The authors suggest that if students exert sufficient effort, for instance, it would be important for the educator to use statements such as “I like the way you tried all kinds of strategies on that math problem until you finally got it” (p. 37). With respect to learners who master concepts with less effort, they propose that educators can give comments like, “That was too easy for you. Let’s do something more challenging that you will learn from” (p. 37). On the other hand, for those who really tried but did not perform well, the emphasis is again on the process instead of ability: “I liked the effort you put in. Let’s work together some more and figure out what you don’t understand” (p. 37).
Effort in Response to Challenges
The study by Dweck (1998) shows that mindset has a substantial influence on performance and behavior, specifically in the face of challenging activities. This research indicated that many people with fixed mindsets would try to avoid challenging conditions if offered the choice, since they are overly concerned about past failures. From the fixed mindset perspective, failure is an exhibition of a lack of ability, and therefore of a lack of intelligence or capability. In contrast, those with a growth mindset see failure or struggle as something natural in the process of learning and, more importantly, as an opportunity to improve. In her argument, she holds that mindset has the capacity to affect all areas of an individual’s life, ranging from professional and personal choices to academic success. In the academic field, for example, mindset plays a very critical role. Those with a growth mindset are more likely to go on and persist as they struggle, whereas students with a fixed mindset are more likely not to continue striving. In her work, Dweck (1998; 2007) has indicated that cues coming from teachers and parents concerning performance can influence learners’ future actions and beliefs.
In the past it was widely accepted, and in certain present teaching environments it is still accepted, that the way to develop self-confidence as a learner is to provide tasks in which students are likely to be successful (Blackwell et al., 2007; Chao et al., 2017; Dweck, 2000). Pittaway (2012) argues that a number of recent studies have, however, suggested that this methodology is not effective, as it normally creates many future issues and misconceptions. Not permitting learners to be stretched and challenged inhibits their proximal development and causes the more intelligent learners to be bored, which in turn creates a false sense of entitlement (Yeager et al., 2016). Thus, it would be wiser to extend students’ learning experience by setting additional goals instead of rewarding them with free time, particularly after they complete a relatively easy activity (Wolters, Fan, & Daugherty, 2013). This is important for educators, since establishing challenging learning goals is associated with improved educational outcomes for learners. Lopez and Louis (2009) and O’Neill (2000) note that a wide variety of studies, both in laboratory settings and in the field, have demonstrated that SMART goals greatly impact performance. These goals act to increase persistence at tasks, mobilize effort, and focus attention.
Unlike people with a fixed mindset, who do not set future goals, people with a growth mindset are associated with setting new goals as they incorporate and reach current ones. Fixed-minded individuals might simply say they have finished a particular task and that it was too easy. Eroglu and Unlu (2015) argue that the more classrooms are organized around growing and stretching students’ learning, and around being more comfortable with challenge and confusion, the more they will instill incremental theory. They argue that, contrary to the popular view, one should praise neither ability nor talent but the process. So, what is it that needs to be praised? The authors point out that a bouncing-back mentality, resilience, persistence, strategies, and effort should be praised, because doing so offers positive feedback, reinforces a safe and secure learning environment, and leads to a positive relationship between students and teachers (Eroglu & Unlu, 2015; Wolters et al., 2013). Therefore, educators who have a growth mindset will understand their students’ brain neuroplasticity more efficiently and effectively and will assist them in setting goals so that every one of them can reach their full potential.
Mindset and Neuroscience
Recent findings and developments in neuroscience have supported the concepts that underlie the principles of the growth mindset. Li et al. (2017) point out that research on brain plasticity has shown how the connections between neurons can change with experience. With practice, neural networks grow new connections, strengthen existing ones, and build insulation that enhances impulse transmission (King, McInerney, & Watkins, 2012). Scientific discoveries associated with neurons have in fact shown that a person’s neural growth is related to the actions they take, such as practicing good sleep habits, asking questions, and using good strategies (Li et al., 2017).
This is consistent with the mindset studies that have found that intelligence is not something fixed, but instead, learning takes place via extensive interactions between teachers, students, and their settings. These changes are especially seen in the brain in which learning results in the strengthening and forming of new neural connections. This brain development and plasticity progress during an individual’s lifetime. Studying brain plasticity may assist students in developing a growth mindset (King et al., 2012).
Additionally, neuroscientific findings have been gaining traction as researchers try to understand the relationship between achievement and mindset. People who believe their brains can grow behave in a different manner (Li et al., 2017). To answer the question of whether, and how, mindsets can change, psychologists conducted a series of studies and interventions which demonstrated that it is indeed possible for a person’s mindset to move from fixed to growth. When this transformation happens, it results in increased achievement and motivation (Blackwell et al., 2007; Li et al., 2017). As an example, in a study of seventh graders at an inner-city New York City school, Blackwell et al. (2007) divided learners into two groups for a workshop on study skills and the brain. The control group, 50% of the students, learned about the stages of memory, while the other 50% were trained in a growth mindset. Additionally, these students were trained in how to use this idea in schoolwork. The findings revealed that, in comparison to the control group, three times more students in the growth mindset group indicated an improvement in motivation and effort. After training, the growth-mindset group continued to improve, but the control group continued to decline (Blackwell et al., 2007).
Gender and Mindset
Underrepresentation of women and girls in science, technology, engineering, and mathematics (STEM) fields is increasingly prevalent (Apple, Smith, Moon & Revelle, 2016). Gender can be an impactful factor in achievement scores, in classroom grades as well as on standardized tests (Duckworth, Seligman & Harris, 2006). Duckworth et al. (2006) report that throughout elementary, middle, and high school, girls are more likely to have a growth mindset and to have better interpersonal skills, including demonstrating self-control, which in turn helps them achieve more academically than boys. Possessing self-control skills has been found to be highly correlated with other interpersonal skills and with the belief that an individual’s intelligence can grow (Westrick, Robbins, Radunzel, & Schmidt, 2015).
The dissimilarities in academic performance between genders could also help to explain possible variation in beliefs and motivation between them. As an example, male students have a greater level of positive mindset and self-belief in STEM compared to girls, largely due to the observation that these disciplines appear to be male-dominated. Generally, it is these beliefs that determine the level of motivation concerning academic performance (Tuwor & Sossou, 2008). Tuwor and Sossou (2008) note that female students attribute their average performance in STEM fields to a lack of commitment in those fields. In addition, they point out that females attribute failure in STEM fields to other aspects, namely a low level of intelligence and a failure to understand concepts. On the contrary, a number of male students associate their better performance in these areas with interest, abilities, and innate intelligence. Not all males perform better in these subjects, and this has been connected to external factors, namely inadequate support from teachers (Apple et al., 2016). In the end, these psychological patterns in the performance of both males and females shape whether the two genders develop a positive, or growth, mindset toward certain subjects.
Gender and mindset have been argued to have a large impact on motivation as well as academic performance, and as a result, stereotypes develop. As mentioned previously in this chapter, male students are believed to be good performers in science and mathematics, whereas female students are believed to be good performers in languages (Apple et al., 2016). It is unclear, however, whether the different perceptions of these subjects are based on gender variation or are shaped by the mindsets of teachers, students, and parents. A number of studies have been carried out to determine the exact causes of this phenomenon. In a study by Tuwor and Sossou (2008), the authors investigated whether these differences are simply based on existing stereotypes or on gender variation. Their results revealed that it was not actually gender that had an impact on academic performance. Instead, it is the level of femininity and masculinity among students that impacts this performance. According to Tuwor and Sossou (2008), the desire for self-motivation and achievement is a mindset in the feminine gender, whereas the succeeding mindset exists in the masculine gender. Thus, the authors suggest that it is necessary for all educational partners to ensure that efforts are directed at eliminating students’ present mindsets if they are negative. They also recommend that stakeholders work with students to build a positive mindset in every subject in order to help them perform better. This line of research leads to the idea that there is no specific subject that can be viewed as absolutely hard or easy for a particular gender or race compared to another. Rather, it is the stereotypes and mindsets which lead to the significant existing variations.
Interventions at the Middle School Level
Middle school students in particular go through periods of physical, emotional, and social change that can influence the changing or development of their mindset (Yeager et al., 2016). The brain in middle schoolers undergoes many changes during the adolescent years. More importantly, theories of intelligence provide credibility to the belief that intelligence is not predetermined. Teachers and students need to understand that intelligence is built through learning and is not fixed, as most adolescents believe. To assist students in achieving a growth mindset, educators have to understand the manner in which their actions impact mindsets. With regard to adolescents, ensuring strong adult and peer relationships is critical so that adolescents are motivated physically, emotionally, and socially. Mindset interventions can be helpful in moving them from a fixed to a growth mindset. These interventions, however, need to be adapted to the populations they aim to serve. In adolescents, for instance, mindset interventions can impact self-beliefs with respect to social belonging and academics.
If adolescents are asked about their goals, many might give responses such as staying physically fit, fitting in with friends, and performing well at school (Curtis, 2015). However, any of their goals might go unexpectedly unachieved due to a variety of different variables. Examples include gaining more weight, feeling excluded by friends, or failing a test. When they encounter such hindrances on the way to attaining their objectives, a good number easily give up (Curtis, 2015). This can be explained using mindset theory, which argues that young people have varying beliefs regarding whether their abilities are fixed or can be enhanced with effort. Burnette et al. (2013) and Dweck (2000) have documented many examples relating to mindsets and found a relationship between mindsets and how people respond to challenges with regard to their health, social, and academic goals.
Interventions can be modeled and assessed for teaching adolescents to successfully embrace a growth mindset. Growth mindset interventions, for instance, have successfully improved students’ achievement with respect to the challenges in their social lives and academic performance (Blackwell et al., 2007). Such interventions might be especially crucial for students facing discrimination concerning their academic performance because of gender, race, or a perception that they are not expected to succeed (Vandewalle, 2012). The success of mindset-based interventions has been increased by implementing them at large scale across different high schools with the help of computer technology, resulting in improved academic achievement, especially among those who are underperforming (Vandewalle, 2012). Apart from pragmatic techniques promoting growth mindsets, the language used by parents and teachers when interacting with adolescents provides an opportunity to encourage their incremental beliefs (Esparza et al., 2014). Additionally, praising and focusing feedback on progress and effort, even in small daily interactions, may be an effective way of encouraging adolescents to be resilient instead of helpless when they experience difficulties in the pursuit of their goals (Chao et al., 2017).
Thus, mindsets impact our achievement, motivation, and goals across a number of areas. It is the responsibility of teachers, parents, and those who work with adolescents to assist them in learning to perceive a challenge as an opportunity rather than as a failure. Helping students to build growth mindsets can make it possible for them to overcome anxiety, spur achievement, and remain happy while in the process of achieving their goals.
Students with lower grades were shown to have a lower work ethic and fewer skills for dealing with the rigor of higher-level academics (Jacobs, Lanza, Osgood, Eccles, & Wigfield, 2002). If students do not have the interpersonal skills to work with others, including their teacher, it can cause a major deficit in their education (Jacobs et al., 2002). Lacking interpersonal skills, or not possessing a growth mindset, could be the difference between a student being proficient and successful in their future endeavours and a student not learning the content required for success, leaving them with less motivation and fewer skills as they attempt to continue their education. Bandura’s counseling theory states that “children perform better and are more motivated to select increasing challenging tasks when they believe that they have the ability to accomplish a particular task” (Jacobs et al., 2002, p. 13).
Self-discovery is defined by the Webster Dictionary (2017) as “the act or process of gaining knowledge or understanding of your abilities, character, and feelings.” Understanding one’s self and being able to discover one’s abilities, character, and feelings is a powerful skill that can lead to greater overall success for a student (Smokowski, Quo, Wu, Evans, Cotter, & Bacallao, 2016). Self-understanding, together with the belief that abilities can improve, can help students to appreciate and value themselves and their abilities. If students value their own understanding and opinions, they are more likely to be confident in their educational endeavors, which in turn leads to higher academic achievement.
Good et al. (2003) conducted a study of African-American, Hispanic, and low-income students and found that when those students were mentored with a growth mindset platform, the risk of identifying with the stereotyped group they were part of was reduced. Students were more likely to overcome the stereotype of being at risk because of their change in mindset. The mentors challenged and encouraged students to learn about themselves and their current ability and skill levels. They also encouraged them to view their intelligence and abilities as something that could change. In an online experiment by Paunesku (2015), high school students received a 45-minute online mindset training that they viewed on their own. Students who received the growth mindset training showed improvement in their GPAs as a result. The previously mentioned studies focused on high school students, so the question can be raised: can a similar mindset intervention at the middle school level show the same level of impact?
Moreover, a study examining the relationship between student engagement and academic achievement reported that the two factors were mutually predictive of each other (Chase, 2014). Students who are more engaged in school are more likely to achieve at a higher academic level (Balfanz, Herzog & Mac Iver, 2007). Developing a growth mindset can help students to be more engaged in their education because they believe that they can improve, and they can therefore develop greater motivation toward their achievement.
Whereas certain expectations are widely accepted, namely enthusiastic delivery, command of the subject matter, and tidy classrooms, there are no general standards relating to teaching in middle-level schools or colleges (Chan, 2012). The overarching objective might be for students to master the material; however, there are a number of often contradictory philosophies as to how to accomplish this effectively. In addition, the material included may differ substantially depending on the teaching philosophy in technical fields such as engineering at the college level (Hochanadel & Finamore, 2015). Numerous differing views on what actually constitutes effective teaching emerge from the philosophical differences that surround motivation.
Motivation is a personal facet, and it is challenging to change consciously. It is generally believed that a person’s success depends on the many motivations that one has. For students in middle schools this comes down to two forms of motivation: intrinsic and extrinsic (Chao et al., 2017). According to Schroder et al. (2017), students who are intrinsically motivated are more likely to demonstrate a growth mindset, which creates the belief that success is the result of hard work and not only innate ability. The study demonstrates that a growth mindset in fact not only assists the individual while in school but also makes them more likely to be successful in all dimensions of life. Because a growth mindset and intrinsic motivation are interrelated, the authors claim that the intrinsically motivated have higher chances of succeeding (Schroder et al., 2017). On the contrary, because a fixed mindset was correlated with being extrinsically motivated, the point is made that such individuals have a smaller chance of succeeding. Thus, student motivation might be more essential than first believed, particularly following Dweck’s (2000) work indicating that growth mindset and intrinsic motivation are interconnected.
Additionally, Hong et al. (1999) use a growth mindset intervention that is founded on an achievement motivation model. The model’s primary pillar is that students have varying theories concerning their ability. These theories, in turn, help them understand and make meaning of their surroundings and guide their behavioral and decisional options. While certain people believe that their ability is incremental, others believe theirs is fixed.
Impact of Mindset on Student Achievement
Adolescents who effectively comprehend that the brain can improve, particularly those who have a growth mindset, perform much better academically because they hold an empowering perception about learning (Butler, 2000; Romero et al., 2014; Dweck, 2000). Such students emphasize improvement and see effort as a means of further developing their abilities. Additionally, such children view failure as a natural component of the learning process (Nussbaum and Dweck, 2008). Conversely, those who have a fixed mindset, that is, those who believe that the mind is not incremental, appear to emphasize judgment in these situations. They tend to be more focused on demonstrating that they are smart (Romero et al., 2014). As a result, students with a fixed mindset try to avoid conditions where they may have to engage in hard work or may fail.
Students with a growth mindset respond more positively in challenging situations and therefore perform more effectively at school, improving their academic performance. In one study, Nussbaum and Dweck (2008) aimed to investigate the type of feedback individuals would look for after they were involved in a challenge. The authors gave subjects a challenging task and thereafter informed them that they had not performed well on the test. Following this, participants were asked if they wanted to look at the tests of others who had performed either better or worse than they did. The findings revealed that those with a growth mindset chose to gain more knowledge from those who had performed better than they had. On the other hand, individuals with a fixed mindset appeared more focused on making themselves feel better by looking at the tests of colleagues they had outperformed.
Another study by Blackwell et al. (2007) involved middle school students and explored the effect of a growth mindset and a fixed mindset on their math achievement, a subject that many students find both challenging and difficult. Their findings revealed that students with a fixed mindset earned lower grades in math over time when compared to those with a growth mindset, who improved significantly. In addition, mindsets have also been shown to inspire people to take on more advanced academic challenges. Romero et al. (2014) showed that students with a growth mindset had a greater chance of being admitted to advanced math during the research period.
Instead of focusing only on good performance, a growth mindset focuses on overall learning. This is clearly evident when the inside of the brain is examined (Mangels, Butterfield, Lamb, Good & Dweck, 2006). In their study, Mangels et al. (2006) brought participants into a laboratory and placed EEG caps on their heads to measure brain activity in different situations. While measuring brain activity, they asked the participants trivia questions of little personal importance. Then, the researchers told them whether or not they were right; that is, the participants were provided with performance feedback. The results demonstrated that those with a fixed mindset and those with a growth mindset both had active brains when they were informed of this feedback, showing that all the participants attended to it. However, when subjects were then told the correct response, the brains of those with a growth mindset were substantially more active than those with a fixed mindset. The latter group tuned out after learning whether they were correct or incorrect; they were not concerned with getting to know the right answer. In the end, the researchers gave the participants a pop quiz using the same trivia questions, and those with a growth mindset outperformed their counterparts (Mangels et al., 2006).
Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Iowa
GEAR UP is a program that aims to increase the proportion of low-income students who are prepared to enter high school and subsequently become successful in education beyond the secondary level. The GEAR UP Iowa curriculum was the outcome of a competitive grant received from the Department of Education (Leuwerke, 2016a). It accomplishes its goals by offering partnership grants and curricula to schools, providing support services to high-poverty middle and high schools (Leuwerke, 2016b). The state of Iowa was one of several states that received grant funding for a GEAR UP program.
The program was developed for a number of reasons. First, it was designed to highlight the significance of education in today’s world. Additionally, it was meant to encourage middle and high school students to establish strong career and educational goals. It also strives to provide parents with the resources and information they need to remain active participants in their children’s education and to assist them in developing clear plans for their future (Leuwerke, 2016b). The program was also designed to help students learn how they could prepare for, enter, and succeed in post-secondary colleges and training. Finally, the program was developed to give educators the tools and the training needed to enhance both the achievement of and the academic expectations for students in the classroom. It also compels cooperation among community-based organizations, businesses, state and local education entities, institutions of higher learning, and K-12 schools (Leuwerke, 2016b).
The program is highly applicable for use in middle-level schools (Leuwerke, 2016a). Fogg and Harrington (2015) point out that the program has enhanced the likelihood that less fortunate students enroll in college through interventions starting in middle schools in Rhode Island, another state that received GEAR UP funding. They point out that the program has had significant benefits for students who have moved through middle school as well as high school. Fogg and Harrington (2015) also found that being a part of the GEAR UP program enhanced students’ success and made them significantly more likely to graduate from high school.
The GEAR UP Iowa program builds on curriculum research that shows the fundamental role noncognitive factors play in students’ persistence in school and academic success (Leuwerke, 2016b). The curriculum gives administrators, counselors, and teachers information regarding the strengths and weaknesses of students’ mindsets. The GEAR UP Iowa program has been used by middle school students to more effectively understand their strengths and to establish solid, firm plans to improve on their weaknesses (Glaser & Warick, 2016).
The organization of the curriculum also reveals its emphasis on mindset. Its first five lessons emphasize growth mindset, assist students in comprehending a growth mindset, and finally engage them in a number of tasks to develop their growth mindset independently. The next four sets of lessons examine ways to inculcate optimism among students, along with time management, personal responsibility, and goal-setting lessons. This study will specifically use the first five lessons, which focus on growth mindset.
The literature review has presented a number of topic areas, namely the incremental theory of intelligence, or mindset theory, and its historical background through current research and interventions. Also included was a discussion of the impact of interventions at the middle school level, including gender, adolescent development, motivation, and other factors facing current middle school aged students. A brief explanation of the noncognitive mindset lessons from the GEAR UP Iowa initiative was also explored. There are two forms of mindsets, fixed and growth. The latter is based on the belief that the primary traits of an individual may be built through commitment and hard work (Hong et al., 1999; King et al., 2012). Furthermore, people with a growth mindset learn from their failures, accept challenges, and persist in times of setbacks. Conversely, a fixed-minded person perceives their intelligence as a static, inherent quality that cannot be changed (Blackwell et al., 2007; Dweck, 2007; Hochanadel & Finamore, 2015). These different mindsets reveal distinct and contrasting characteristics in aspects related to failure, praise, effort in response to challenges, and goal setting. The breakthrough of Dweck and Mueller (1998) regarding incremental theory started with their research in 1998 and has since been followed by many additional studies. Recent findings and developments in neuroscience have also supported the concepts which underlie growth mindset (Li et al., 2017). In order to ensure effectiveness in education, interventions at the middle school level need to be designed to cater to all students. Finally, many researchers have shown a growth mindset to have a positive effect on the academic performance or achievement of students.
Apple, L., Smith, K., Moon, Z., & Revelle, G. (2016). Teaching modules using e-textile activities to engage female middle-school students in STEM interest. Journal of Family and Consumer Sciences, 108(1), 44-47.
Augustine, K. A. (2014). Teacher Mentors: Lived Experiences Mentoring At-Risk Middle School Students (Doctoral dissertation, Drake University).
Balfanz, R., Herzog, L., & Mac Iver, D. J. (2007). Preventing student disengagement and keeping students on the graduation path in urban middle-grades schools: Early identification and effective interventions. Educational Psychologist, 42(4), 223-235.
Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child development, 78(1), 246-263.
Brougham, L. (2016). Impact of a Growth Mindset Intervention on Academic Performance of Students at Two Urban High Schools (Doctoral dissertation, University of Missouri-Saint Louis).
Burnette, J. L., O’boyle, E. H., VanEpps, E. M., Pollack, J. M., & Finkel, E. J. (2013). Mind-sets matter: A meta-analytic review of implicit theories and self-regulation.
Butler, R. (2000). Making judgments about ability: the role of implicit theories of ability in moderating inferences from temporal and social comparison information.
Chan, D. W. (2012). Life satisfaction, happiness, and the growth mindset of healthy and unhealthy perfectionists among Hong Kong Chinese gifted students. Roeper Review, 34(4), 224-233.
Chao, M. M., Visaria, S., Mukhopadhyay, A., & Dehejia, R. (2017). Do rewards reinforce the growth mindset?: Joint effects of the growth mindset and incentive schemes in a field intervention. Journal of Experimental Psychology: General, 146(10), 1402.
Curtis, A. C. (2015). Defining adolescence. Journal of Adolescent and Family Health, 7(2), 2.
Duckworth, A., Seligman, M., & Harris, K. R. (2006). Self-Discipline Gives Girls the Edge: Gender in Self-Discipline, Grades, and Achievement Test Scores. Journal of Educational Psychology, 98(1), 198-208.
Dweck, C. S. (2000). Self-theories: Their role in motivation, personality, and development. Psychology Press.
Dweck, C. (2007). Mindset: The new psychology of success. New York: Random House.
Dweck, C. (2010). Even geniuses work hard. Educational Leadership, 68(1), 16-20.
Dweck, C. S. (2012). Mindsets and human nature: Promoting change in the Middle East, the schoolyard, the racial divide, and willpower. American Psychologist, 67(8), 614-622.
Eroglu, C., & Unlu, H. (2015). Self-Efficacy: Its Effects on Physical Education Teacher Candidates’ Attitudes toward the Teaching Profession. Educational Sciences: Theory and Practice, 15(1), 201-212.
Esparza, J., Shumow, L., & Schmidt, J. A. (2014). Growth Mindset of Gifted Seventh Grade Students in Science. NCSSSMST Journal, 19(1), 6-13.
Fogg, N. P., & Harrington, P. E. (2015). Evidence-Based Research: The Impact of the College Crusade GEAR UP Program in RI. New England Journal of Higher Education.
Glaser, E., & Warick, C. (2016). What does the research say about early awareness strategies for college access and success. Washington, DC: National College Access Network.
Good, C., Aronson, J., & Inzlicht, M. (2003). Improving adolescents’ standardized test performance: An intervention to reduce the effects of stereotype threat. Journal of Applied Developmental Psychology, 24(6), 645-662.
Hochanadel, A., & Finamore, D. (2015). Fixed and growth mindset in education and how grit helps students persist in the face of adversity. Journal of International Education Research, 11(1), 47.
Hong, Y. Y., Chiu, C. Y., Dweck, C. S., Lin, D. M. S., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77(3), 588.
Jacobs, J. E., Lanza, S., Osgood, D. W., Eccles, J. S., & Wigfield, A. (2002). Changes in children’s self‐competence and values: Gender and domain differences across grades one through twelve. Child Development, 73(2), 509-527.
King, R. B., McInerney, D. M., & Watkins, D. A. (2012). How you think about your intelligence determines how you feel in school: The role of theories of intelligence on academic emotions. Learning and Individual Differences, 22(6), 814-819.
Leuwerke, W. (2016a). GEAR UP Iowa Noncognitive Curriculum [White paper]. Des Moines: Iowa College Aid.
Leuwerke, W. (2016b). Program Evaluation Report 2016 GEAR UP Iowa 2.0: Results from Series 1 Evaluation and Noncognitive Guidance Curriculum Study [White paper]. Des Moines: Iowa College Aid.
Li, P., Zhou, N., Zhang, Y., Xiong, Q., Nie, R., & Fang, X. (2017). Incremental Theory of Intelligence Moderated the Relationship between Prior Achievement and School Engagement in Chinese High School Students. Frontiers in Psychology, 8, 1703.
Lopez, S. J., & Louis, M. C. (2009). The principles of strengths-based education. Journal of College and Character, 10(4).
Pittaway, S. M. (2012). Student and staff engagement: Developing an engagement framework in a Faculty of Education. Australian Journal of Teacher Education, 37(4), 3.
Mangels, J. A., Butterfield, B., Lamb, J., Good, C., & Dweck, C. S. (2006). Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model. Social cognitive and affective neuroscience, 1(2), 75-86.
Mueller, C. M., & Dweck, C. S. (1998). Praise for intelligence can undermine children’s motivation and performance. Journal of Personality and Social Psychology, 75(1), 33.
Nussbaum, A. D., & Dweck, C. S. (2008). Defensiveness versus remediation: Self-theories and modes of self-esteem maintenance. Personality and Social Psychology Bulletin, 34(5), 599-612.
Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological Science, 26, 784–793. doi:10.1177/0956797615571017
Perrone-McGovern, K. M., Simon-Dack, S. L., Beduna, K. N., Williams, C. C., & Esche, A. M. (2015). Emotions, cognitions, and well-being: The role of perfectionism, emotional overexcitability, and emotion regulation. Journal for the Education of the Gifted, 38(4), 343-357.
Romero, C., Master, A., Paunesku, D., Dweck, C. S., & Gross, J. J. (2014). Academic and emotional functioning in middle school: the role of implicit theories. Emotion, 14(2), 227.
Schroder, H. S., Yalch, M. M., Dawood, S., Callahan, C. P., Donnellan, M. B., & Moser, J. S. (2017). Growth mindset of anxiety buffers the link between stressful life events and psychological distress and coping strategies. Personality and Individual Differences, 110, 23-26.
Smokowski, P. R., Guo, S., Wu, Q., Evans, C. B. R., Cotter, K. L., & Bacallao, M. (2016). Evaluating dosage effects for the positive action program: How implementation impacts internalizing symptoms, aggression, school hassles, and self-esteem. American Journal of Orthopsychiatry, 86(3), 310-322. doi: http://dx.doi.org/10.1037/ort0000167
Tuwor, T., & Sossou, M.-A. (2008). Gender Discrimination and Education in West Africa: Strategies for Maintaining Girls in School. International Journal of Inclusive Education, 12(4), 363-379.
Webster Dictionary. (2017, September 11). Definition of Self-understanding. Retrieved from https://www.merriam-webster.com/dictionary/self
Westrick, P. A., Le, H., Robbins, S. B., Radunzel, J. R., & Schmidt, F. L. (2015). College Performance and Retention: A Meta-Analysis of the Predictive Validities of ACT Scores, High School Grades, and SES. Educational Assessment, 20(1), 23-45. doi:10.1080/10627197.2015.997614
Wolters, C. A., Fan, W., & Daugherty, S. G. (2013). Examining achievement goals and causal attributions together as predictors of academic functioning. The Journal of Experimental Education, 81(3), 295-321.
Vandewalle, D. (2012). A growth and fixed mindset exposition of the value of conceptual clarity. Industrial and Organizational Psychology, 5(3), 301-305.
Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., … & Trott, J. (2016). Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374.
Researchers have used seismic data to look inside Mars for the first time.
Since early 2019, researchers have been recording and analyzing marsquakes as part of the InSight mission, which relies on a seismometer whose data acquisition and control electronics were developed at ETH Zurich. The data will help determine the formation and evolution of Mars and, by extension, the entire solar system.
We know that Earth is made up of shells: a thin crust of light, solid rock surrounds a thick mantle of heavy, viscous rock, which in turn envelopes a core consisting mainly of iron and nickel. Terrestrial planets, including Mars, have been assumed to have a similar structure.
“Now seismic data has confirmed that Mars presumably was once completely molten before dividing into the crust, mantle, and core we see today, but that these are different from Earth’s,” says Amir Khan, a scientist at the Institute of Geophysics at ETH Zurich and at the Physics Institute at the University of Zurich. He and colleague Simon Stähler analyzed data from NASA’s InSight mission, which ETH Zurich participates in under the leadership of professor Domenico Giardini.
The crust, mantle, and core of Mars
The researchers have discovered that the Martian crust under the probe’s landing site near the Martian equator is between 15 and 47 kilometers (9.3 to 29 miles) thick. Such a thin crust must contain a relatively high proportion of radioactive elements, which calls into question previous models of the chemical composition of the entire crust.
Beneath the crust comes the mantle with the lithosphere of more solid rock reaching 400–600 kilometers down—twice as deep as on Earth. This could be because there is now only one continental plate on Mars, in contrast to Earth with its seven large mobile plates. “The thick lithosphere fits well with the model of Mars as a ‘one-plate planet,'” Khan concludes.
The measurements also show that the Martian mantle is mineralogically similar to Earth’s upper mantle. “In that sense, the Martian mantle is a simpler version of Earth’s mantle.” But the seismology also reveals differences in chemical composition. The Martian mantle, for example, contains more iron than Earth’s. However, theories as to the complexity of the layering of the Martian mantle also depend on the size of the underlying core—and here, too, the researchers have come to new conclusions.
The Martian core has a radius of about 1,840 kilometers (1,143 miles), making it a good 200 kilometers (124 miles) larger than had been assumed 15 years ago, when the InSight mission was planned.
“Having determined the radius of the core, we can now calculate its density,” Stähler says.
“If the core radius is large, the density of the core must be relatively low,” he explains: “That means the core must contain a large proportion of lighter elements in addition to iron and nickel.” These include sulphur, oxygen, carbon, and hydrogen, and make up an unexpectedly large proportion. The researchers conclude that the composition of the entire planet is not yet fully understood. Nonetheless, the current investigations confirm that the core is liquid—as suspected—even if Mars no longer has a magnetic field.
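To see how a measured core radius feeds into a density estimate, here is a minimal back-of-the-envelope sketch: it computes the core volume and the mean density implied by an assumed core mass. The core mass fraction used below is a hypothetical placeholder for illustration, not a figure reported by the InSight team.

```python
# Back-of-the-envelope sketch: core volume from the reported radius, and the
# mean density implied by an assumed core mass.  The core mass fraction is a
# hypothetical placeholder, not a value reported by the InSight team.
import math

R_CORE_M = 1840e3                   # core radius from the article, in metres
MARS_MASS_KG = 6.417e23             # well-established total mass of Mars
ASSUMED_CORE_MASS_FRACTION = 0.25   # purely illustrative assumption

core_volume = (4.0 / 3.0) * math.pi * R_CORE_M ** 3
core_density = ASSUMED_CORE_MASS_FRACTION * MARS_MASS_KG / core_volume

print(f"Core volume: {core_volume:.3e} m^3")
print(f"Implied mean core density: {core_density:.0f} kg/m^3")
```

The larger the radius for a given core mass, the lower the implied density, which is the reasoning behind the quoted conclusion about lighter elements.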
The researchers obtained the new results by analyzing various seismic waves generated by marsquakes. “We could already see different waves in the InSight data, so we knew how far away from the lander these quake epicenters were on Mars,” Giardini says.
To be able to say something about a planet’s inner structure calls for quake waves that are reflected at or below the surface or at the core. Now, for the first time, researchers have succeeded in observing and analyzing such waves on Mars.
“The InSight mission was a unique opportunity to capture this data,” Giardini says. The data stream will end in a year when the lander’s solar cells are no longer able to produce enough power. “But we’re far from finished analyzing all the data—Mars still presents us with many mysteries, most notably whether it formed at the same time and from the same material as our Earth.”
It is especially important to understand how the internal dynamics of Mars led it to lose its active magnetic field and all surface water. “This will give us an idea of whether and how these processes might be occurring on our planet,” Giardini explains. “That’s our reason why we are on Mars, to study its anatomy.”
Source: ETH Zurich
A transformer is a device that transfers electrical energy from one circuit to another by magnetic coupling without requiring relative motion between its parts. It usually comprises two or more coupled windings, and, in most cases, a core to concentrate magnetic flux.
An alternating voltage applied to one winding creates a time-varying magnetic flux in the core, which induces a voltage in the other windings. Varying the relative number of turns between primary and secondary windings determines the ratio of the input and output voltages, thus transforming the voltage by stepping it up or down between circuits.
The transformer principle was demonstrated in 1831 by Faraday, though practical designs did not appear until the 1880s. Within less than a decade, the transformer was instrumental during the "War of Currents" in seeing alternating current systems triumph over their direct current counterparts, a position of dominance they have retained ever since. The transformer has since shaped the electricity supply industry, permitting the economic transmission of power over long distances. All but a fraction of the world's electrical power has passed through a series of transformers by the time it reaches the consumer.
Amongst the simplest of electrical machines, the transformer is also one of the most efficient, with large units attaining performances in excess of 99.75 percent. Transformers come in a range of sizes, from a thumbnail-sized coupling transformer hidden inside a stage microphone to huge giga-VA-rated units used to interconnect portions of national power grids. All operate with the same basic principles and with many similarities in their parts, though a variety of transformer designs exist to perform specialized roles throughout home and industry.
Michael Faraday built the first transformer in 1831, although he used it only to demonstrate the principle of electromagnetic induction and did not foresee its practical uses. Russian engineer Pavel Yablochkov in 1876 invented a lighting system based on a set of induction coils, where primary windings were connected to a source of alternating current and secondary windings could be connected to several "electric candles". The patent claimed the system could "provide separate supply to several lighting fixtures with different luminous intensities from a single source of electric power." Evidently, the induction coil in this system operated as a transformer.
Lucien Gaulard and John Dixon Gibbs first exhibited a device with an open iron core, called a 'secondary generator', in London in 1882 and then sold the idea to the American company Westinghouse. This may have been the first practical power transformer. They also exhibited the invention in Turin in 1884, where it was adopted for an electric lighting system.
William Stanley, an engineer for Westinghouse, built the first commercial device in 1885 after George Westinghouse had bought Gaulard and Gibbs' patents. The core was made from interlocking E-shaped iron plates. This design was first used commercially in 1886. Hungarian engineers Zipernowsky, Bláthy and Déri from the Ganz company in Budapest created the efficient "ZBD" closed-core model in 1885 based on the design by Gaulard and Gibbs. Their patent application made the first use of the word "transformer". Russian engineer Mikhail Dolivo-Dobrovolsky developed the first three-phase transformer in 1889. In 1891 Nikola Tesla invented the Tesla coil, an air-cored, dual-tuned resonant transformer for generating very high voltages at high frequency.
Audio frequency transformers (at the time called repeating coils) were used by the earliest experimenters in the development of the telephone. While new technologies have made transformers in some electronics applications obsolete, transformers are still found in many electronic devices. Transformers are essential for high voltage power transmission, which makes long distance transmission economically practical. This advantage was the principal factor in the selection of alternating current power transmission in the "War of Currents" in the late 1880s. Many others have patents on transformers.
Coupling by mutual induction
The principles of the transformer are illustrated by consideration of a hypothetical ideal transformer consisting of two windings of zero resistance around a core of negligible reluctance. A voltage applied to the primary winding causes a current, which develops a magnetomotive force (MMF) in the core. The current required to create the MMF is termed the magnetising current; in the ideal transformer it is considered to be negligible. The MMF drives flux around the magnetic circuit of the core.
An electromotive force (EMF) is induced across each winding, an effect known as mutual inductance. The windings in the ideal transformer have no resistance and so the EMFs are equal in magnitude to the measured terminal voltages. In accordance with Faraday's law of induction, they are proportional to the rate of change of flux:

$v_P = N_P \dfrac{d\Phi_P}{dt}$ and $v_S = N_S \dfrac{d\Phi_S}{dt}$

where:
- $v_P$ and $v_S$ are the induced EMFs across primary and secondary windings,
- $N_P$ and $N_S$ are the numbers of turns in the primary and secondary windings,
- $d\Phi_P/dt$ and $d\Phi_S/dt$ are the time derivatives of the flux linking the primary and secondary windings.

In the ideal transformer, all flux produced by the primary winding also links the secondary, and so $\Phi_P = \Phi_S$, from which the well-known transformer equation follows:

$\dfrac{v_P}{v_S} = \dfrac{N_P}{N_S}$
The ratio of primary to secondary voltage is therefore the same as the ratio of the number of turns; put another way, the volts per turn are the same in both windings.
If a load impedance is connected to the secondary winding, a current will flow in the secondary circuit so created. The current develops an MMF over the secondary winding in opposition to that of the primary winding, so acting to cancel the flux in the core. The now decreased flux reduces the primary EMF, causing current in the primary circuit to increase to exactly offset the effect of the secondary MMF, and returning the flux to its former value. The core flux thus remains the same regardless of the secondary current, provided the primary voltage is sustained. In this way, the electrical energy fed into the primary circuit is delivered to the secondary circuit.
The primary and secondary MMFs differ only to the extent of the negligible magnetising current and may be equated, and so: $I_P N_P = I_S N_S$, from which the transformer current relationship emerges: $\dfrac{I_P}{I_S} = \dfrac{N_S}{N_P}$
From consideration of the voltage and current relationships, it may be readily shown that impedance in one circuit is transformed by the square of the turns ratio, a secondary impedance $Z_S$ thus appearing to the primary circuit to have a value of $Z_S\left(\dfrac{N_P}{N_S}\right)^2$.
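As a quick numerical illustration of these ideal-transformer relationships, the sketch below applies the voltage, current, and impedance ratios derived above; the turns counts, supply voltage, and load value are arbitrary example figures, not taken from any particular design.

```python
# Minimal sketch of the ideal-transformer relationships derived above.
# All numeric values are arbitrary illustrative choices.

def ideal_transformer(v_primary, n_primary, n_secondary, z_load):
    """Return secondary voltage, primary/secondary currents, and the load
    impedance as seen ("referred") from the primary side."""
    turns_ratio = n_primary / n_secondary
    v_secondary = v_primary / turns_ratio      # v_P / v_S = N_P / N_S
    i_secondary = v_secondary / z_load         # Ohm's law in the secondary circuit
    i_primary = i_secondary / turns_ratio      # I_P / I_S = N_S / N_P
    z_referred = z_load * turns_ratio ** 2     # impedance scales with the square of the turns ratio
    return v_secondary, i_primary, i_secondary, z_referred

# Example: 240 V applied to a 10:1 step-down transformer feeding an 8-ohm load.
v_s, i_p, i_s, z_ref = ideal_transformer(240.0, 1000, 100, 8.0)
print(f"Secondary voltage:  {v_s:.1f} V")      # 24.0 V
print(f"Secondary current:  {i_s:.2f} A")      # 3.00 A
print(f"Primary current:    {i_p:.2f} A")      # 0.30 A
print(f"Referred impedance: {z_ref:.0f} ohm")  # 800 ohm
```

For a 10:1 step-down ratio, the 8-ohm load on the secondary appears to the primary circuit as 800 ohms, consistent with the square-of-the-ratio rule.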
The ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings. Such flux is termed leakage flux, and manifests itself as self-inductance in series with the mutually coupled transformer windings. Leakage is not itself directly a source of power loss, but results in poorer voltage regulation, causing the secondary voltage to fail to be directly proportional to the primary, particularly under heavy load. Distribution transformers are therefore normally designed to have very low leakage inductance.
However, in some applications, leakage can be a desirable property, and long magnetic paths, air gaps, or magnetic bypass shunts may be deliberately introduced to a transformer's design to limit the short-circuit current it will supply. Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury vapor lamps, and neon signs; or for safely handling loads that become periodically short-circuited such as electric arc welders. Air gaps are also used to keep a transformer from saturating, especially audio-frequency transformers that have a DC component added.
Effect of frequency
The time-derivative term in Faraday's Law implies that the flux in the core is the integral of the applied voltage. An ideal transformer would, at least hypothetically, work under direct-current excitation, with the core flux increasing linearly with time. In practice, the flux would rise very rapidly to the point where magnetic saturation of the core occurred and the transformer would cease to function as such. All practical transformers must therefore operate under alternating (or pulsed) current conditions.
The EMF of a transformer at a given flux density increases with frequency, an effect predicted by the universal transformer EMF equation. By operating at higher frequencies, transformers can be physically more compact without reaching saturation, and a given core is able to transfer more power. However, efficiency becomes poorer, with properties such as core loss and conductor skin effect also increasing with frequency. Aircraft and military equipment traditionally employ 400 Hz power supplies since the decrease in efficiency is more than offset by the reduction in core and winding weight.
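For reference, the universal transformer EMF equation mentioned above is usually written, for sinusoidal flux, as

$$E_{\text{rms}} = \frac{2\pi}{\sqrt{2}}\, f N a B_{\text{peak}} \approx 4.44\, f N a B_{\text{peak}}$$

where $f$ is the supply frequency, $N$ the number of turns, $a$ the core cross-sectional area, and $B_{\text{peak}}$ the peak flux density. At a fixed EMF per turn, raising the frequency allows a smaller core area (or lower flux density) before saturation is reached, which is the basis of the weight saving noted for 400 Hz equipment.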
In general, operation of a transformer at its designed voltage but at a higher frequency than intended will lead to reduced magnetising current. At a frequency lower than the design value, with the rated voltage applied, the magnetising current may increase to an excessive level. Operation of a transformer at other than its design frequency may require assessment of voltages, losses, and cooling to establish if safe operation is practical. For example, transformers may need to be equipped with "volts per hertz" over-excitation relays to protect the transformer from overvoltage at higher than rated frequency.
An ideal transformer would have no energy losses, and would therefore be 100 percent efficient. Despite the transformer being amongst the most efficient of electrical machines, with experimental models using superconducting windings achieving efficiencies of 99.85 percent, energy is dissipated in the windings, core, and surrounding structures. Larger transformers are generally more efficient, and those rated for electricity distribution usually perform better than 95 percent. A small transformer such as a plug-in "power brick" used for low-power consumer electronics may be less than 85 percent efficient.
Transformer losses are attributable to several causes and may be differentiated between those originating in the windings, sometimes termed copper loss, and those arising from the magnetic circuit, sometimes termed iron loss. The losses vary with load current, and may furthermore be expressed as "no-load" or "full-load" loss, or at an intermediate loading. Winding resistance dominates load losses, whereas hysteresis and eddy current losses contribute over 99 percent of the no-load loss; a numerical sketch of this trade-off follows the list of loss mechanisms below.
Losses in the transformer arise from:
- Winding resistance
- Current flowing through the windings causes resistive heating of the conductors. At higher frequencies, skin effect and proximity effect create additional winding resistance and losses.
- Eddy currents
- Ferromagnetic materials are also good conductors, and a solid core made from such a material also constitutes a single short-circuited turn throughout its entire length. Induced eddy currents therefore circulate within the core in a plane normal to the flux, and are responsible for resistive heating of the core material.
- Hysteresis losses
- Each time the magnetic field is reversed, a small amount of energy is lost to hysteresis within the magnetic core, the amount being dependent on the particular core material.
- Magnetostriction
- Magnetic flux in the core causes it to physically expand and contract slightly with the alternating magnetic field, an effect known as magnetostriction. This produces the familiar buzzing sound, and in turn causes losses due to frictional heating in susceptible cores.
- Mechanical losses
- In addition to magnetostriction, the alternating magnetic field causes fluctuating electromagnetic forces between the primary and secondary windings. These incite vibrations within nearby metalwork, adding to the buzzing noise, and consuming a small amount of power.
- Stray losses
- Not all the magnetic field produced by the primary is intercepted by the secondary. A portion of the leakage flux may induce eddy currents within nearby conductive objects, such as the transformer's support structure, and be converted to heat.
- Cooling system
- Large power transformers may be equipped with cooling fans, oil pumps or water-cooled heat exchangers designed to remove heat. The power used to operate the cooling system is typically considered part of the losses of the transformer.
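Because the winding (copper) loss grows with the square of the load current while the core (iron) loss stays roughly constant at a given voltage, efficiency varies with loading. A minimal sketch of this trade-off, using invented ratings purely for illustration:

```python
# Illustrative efficiency-versus-load calculation: copper loss scales with the
# square of the per-unit load, while iron (core) loss stays roughly constant.
# All ratings below are invented for the example.

RATED_VA = 50_000.0          # hypothetical 50 kVA transformer
FULL_LOAD_COPPER_W = 800.0   # copper loss at full load (hypothetical)
IRON_LOSS_W = 300.0          # no-load / core loss (hypothetical)
POWER_FACTOR = 0.9

for load_pu in (0.25, 0.5, 0.75, 1.0):
    output_w = RATED_VA * load_pu * POWER_FACTOR
    losses_w = IRON_LOSS_W + FULL_LOAD_COPPER_W * load_pu ** 2
    efficiency = output_w / (output_w + losses_w)
    print(f"Load {load_pu:4.0%}: efficiency = {efficiency:.2%}")
```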
The physical limitations of the practical transformer may be brought together as an equivalent circuit model built around an ideal lossless transformer. Power loss in the windings is current-dependent and is easily represented as in-series resistances RP and RS. Flux leakage results in a fraction of the applied voltage dropped without contributing to the mutual coupling, and thus can be modelled as self-inductances XP and XS in series with the perfectly-coupled region. Iron losses are caused mostly by hysteresis and eddy current effects in the core, and tend to be proportional to the square of the core flux for operation at a given frequency. Since the core flux is proportional to the applied voltage, the iron loss can be represented by a resistance RC in parallel with the ideal transformer.
A core with finite permeability requires a magnetising current IM to maintain the mutual flux in the core. The magnetising current is in phase with the flux; saturation effects cause the relationship between the two to be non-linear, but for simplicity this effect tends to be ignored in most circuit equivalents. With a sinusoidal supply, the core flux lags the induced EMF by 90° and this effect can be modelled as a magnetising reactance XM in parallel with the core loss component. RC and XM are sometimes together termed the magnetising branch of the model. If the secondary winding is made open-circuit, the current taken by the magnetising branch represents the transformer's no-load current.
The secondary impedances RS and XS are frequently moved (or "referred") to the primary side after multiplying the components by the impedance scaling factor $(N_P/N_S)^2$.
The resulting model is sometimes termed the "exact equivalent circuit," though it retains a number of approximations, such as an assumption of linearity. Analysis may be simplified by moving the magnetising branch to the left of the primary impedance, an implicit assumption that the magnetising current is low, and then summing primary and referred secondary impedances.
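A small sketch of this referral and simplification step, with invented component values; the names mirror the labels used above (RP, XP, RS, XS, RC, XM) and are not taken from any real transformer:

```python
# Sketch of the simplified equivalent circuit described above: the secondary
# impedance is referred to the primary side and the magnetising branch is
# moved to the input terminals.  All component values are invented examples.

TURNS_RATIO = 10.0            # N_P / N_S (hypothetical)
R_P, X_P = 0.5, 1.2           # primary winding resistance and leakage reactance (ohms)
R_S, X_S = 0.006, 0.014       # secondary winding resistance and leakage reactance (ohms)
R_C, X_M = 2500.0, 800.0      # core-loss resistance and magnetising reactance (ohms)

scale = TURNS_RATIO ** 2                                  # impedance scaling factor (N_P/N_S)^2
z_series = complex(R_P + R_S * scale, X_P + X_S * scale)  # combined series impedance

# Magnetising branch: R_C in parallel with jX_M, placed at the input terminals.
z_mag = (R_C * 1j * X_M) / (R_C + 1j * X_M)

v_primary = 2400.0                     # applied primary voltage (volts)
i_no_load = v_primary / z_mag          # current drawn with the secondary open-circuited
print(f"Referred series impedance: {z_series.real:.2f} + j{z_series.imag:.2f} ohms")
print(f"No-load current magnitude: {abs(i_no_load):.2f} A")
```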
Transformer types and uses
A variety of specialised transformer designs has been created to fulfil certain engineering applications. The numerous applications to which transformers are adapted lead them to be classified in many ways:
- By power level: from a fraction of a volt-ampere (VA) to over a thousand MVA;
- By frequency range: power-, audio-, or radio frequency;
- By voltage class: from a few volts to hundreds of kilovolts;
- By cooling type: air cooled, oil filled, fan cooled, or water cooled;
- By application function: such as power supply, impedance matching, or circuit isolation;
- By end purpose: distribution, rectifier, arc furnace, amplifier output;
- By winding turns ratio: step-up, step-down, isolating (near equal ratio), variable.
Transformers for use at power or audio frequencies typically have cores made of high permeability silicon steel. By concentrating the magnetic flux, more of it usefully links both primary and secondary windings, and the magnetising current is greatly reduced. Early transformer developers soon realised that cores constructed from solid iron resulted in prohibitive eddy-current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires. Later designs constructed the core by stacking layers of thin steel laminations, a principle still in use. Each lamination is insulated from its neighbors by a coat of non-conducting paint. The universal transformer equation indicates a minimum cross-sectional area for the core to avoid saturation.
The effect of laminations is to confine eddy currents to highly elliptical paths that enclose little flux, and so reduce their magnitude. Thinner laminations reduce losses, but are more laborious and expensive to construct. Thin laminations are generally used on high frequency transformers, with some types of very thin steel laminations able to operate up to 10 kHz.
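The benefit of thin laminations can also be seen in the classical thin-sheet approximation for eddy-current loss per unit volume (assuming uniform sinusoidal flux and a lamination much thinner than the skin depth):

$$P_e \approx \frac{\pi^2 f^2 B_{\text{peak}}^2 t^2}{6\rho}$$

where $t$ is the lamination thickness and $\rho$ the resistivity of the core material; halving the thickness cuts this loss component by roughly a factor of four.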
One common design of laminated core is made from interleaved stacks of E-shaped steel sheets capped with I-shaped pieces, leading to its name of "E-I transformer". The cut-core or C-core type is made by winding a steel strip around a rectangular form and then bonding the layers together. It is then cut in two, forming two C shapes, and the core assembled by binding the two C halves together with a steel strap. They have the advantage that the flux is always oriented parallel to the metal grains, reducing reluctance.
A steel core's remanence means that it retains a static magnetic field when power is removed. When power is then reapplied, the residual field will cause a high inrush current until the effect of the remanent magnetism is reduced, usually after a few cycles of the applied alternating current. Overcurrent protection devices such as fuses must be selected to allow this harmless inrush to pass. On transformers connected to long overhead power transmission lines, induced currents due to geomagnetic disturbances during solar storms can cause saturation of the core, and false operation of transformer protection devices.
Distribution transformers can achieve low off-load losses by using cores made with low-loss, high-permeability silicon steel or amorphous (non-crystalline) steel, so-called "metallic glass." The high initial cost of the core material is offset over the life of the transformer by its lower losses at light load.
Powdered iron cores are used in circuits (such as switch-mode power supplies) that operate above mains frequencies and up to a few tens of kilohertz. These materials combine high magnetic permeability with high bulk electrical resistivity. For frequencies extending to beyond the VHF band, cores made from non-conductive magnetic ceramic materials called ferrites are common. Some radio-frequency transformers also have moveable cores (sometimes called 'slugs') which allow adjustment of the coupling coefficient (and bandwidth) of tuned radio-frequency circuits.
High-frequency transformers may also use air cores. These eliminate the loss due to hysteresis in the core material. Such transformers maintain high coupling efficiency (low stray field loss) by overlapping the primary and secondary windings.
Toroidal transformers are built around a ring-shaped core, which is made from a long strip of silicon steel or permalloy wound into a coil, from powdered iron, or ferrite, depending on operating frequency. The strip construction ensures that the grain boundaries are optimally aligned, improving the transformer's efficiency by reducing the core's reluctance. The closed ring shape eliminates air gaps inherent in the construction of an E-I core. The cross-section of the ring is usually square or rectangular, but more expensive cores with circular cross-sections are also available. The primary and secondary coils are often wound concentrically to cover the entire surface of the core. This minimises the length of wire needed, and also provides screening to minimize the core's magnetic field from generating electromagnetic interference.
Ferrite toroid cores are used at higher frequencies, typically between a few tens of kilohertz to a megahertz, to reduce losses, physical size, and weight of switch-mode power supplies.
Toroidal transformers are more efficient than the cheaper laminated E-I types of similar power level. Other advantages, compared to E-I types, include smaller size (about half), lower weight (about half), less mechanical hum (making them superior in audio amplifiers), lower exterior magnetic field (about one tenth), low off-load losses (making them more efficient in standby circuits), single-bolt mounting, and more choice of shapes. This last point means that, for a given power output, either a wide, flat toroid or a tall, narrow one with the same electrical properties can be chosen, depending on the space available. The main disadvantages are higher cost and limited size.
A drawback of toroidal transformer construction is the higher cost of windings. As a consequence, toroidal transformers are uncommon above ratings of a few kVA. Small distribution transformers may achieve some of the benefits of a toroidal core by splitting it and forcing it open, then inserting a bobbin containing primary and secondary windings.
When fitting a toroidal transformer, it is important to avoid making an unintentional short-circuit through the core. This can happen if the steel mounting bolt in the middle of the core is allowed to touch metalwork at both ends, making a loop of conductive material that passes through the hole in the toroid. Such a loop could result in a dangerously large current flowing in the bolt.
The conducting material used for the windings depends upon the application, but in all cases the individual turns must be electrically insulated from each other and from the other windings. For small power and signal transformers, the coils are often wound from enamelled magnet wire, such as Formvar wire. Larger power transformers operating at high voltages may be wound with copper or aluminium rectangular conductors insulated by oil-impregnated paper. Strip conductors are used for very heavy currents. High frequency transformers operating in the tens to hundreds of kilohertz will have windings made of Litz wire to minimize the skin effect losses in the conductors. Large power transformers use multiple-stranded conductors as well, since even at low power frequencies non-uniform distribution of current would otherwise exist in high-current windings. Each strand is individually insulated, and the strands are arranged so that at certain points in the winding, or throughout the whole winding, each portion occupies different relative positions in the complete conductor. This transposition equalizes the current flowing in each strand of the conductor, and reduces eddy current losses in the winding itself. The stranded conductor is also more flexible than a solid conductor of similar size, aiding manufacture.
For signal transformers, the windings may be arranged in a way to minimise leakage inductance and stray capacitance to improve high-frequency response. This can be done by splitting up each coil into sections, and those sections placed in layers between the sections of the other winding. This is known as a stacked type or interleaved winding.
Both the primary and secondary windings on power transformers may have external connections, called taps, to intermediate points on the winding to allow selection of the voltage ratio. The taps may be connected to an automatic, on-load tap changer for voltage regulation of distribution circuits. Audio-frequency transformers, used for the distribution of audio to public address loudspeakers, have taps to allow adjustment of impedance to each speaker. A center-tapped transformer is often used in the output stage of an audio power amplifier in a push-pull circuit. Modulation transformers in AM transmitters are very similar.
The turns of the windings must be insulated from each other to ensure that the current travels through the entire winding. The potential difference between adjacent turns is usually small, so that enamel insulation may suffice for small power transformers. Supplemental sheet or tape insulation is usually employed between winding layers in larger transformers.
The transformer may also be immersed in transformer oil that provides further insulation. Although the oil is primarily used to cool the transformer, it also helps to reduce the formation of corona discharge within high voltage transformers. By cooling the windings, the insulation will not break down as easily due to heat. To ensure that the insulating capability of the transformer oil does not deteriorate, the transformer casing is completely sealed against moisture ingress. Thus the oil serves as both a cooling medium to remove heat from the core and coil, and as part of the insulation system.
Certain power transformers have the windings protected by epoxy resin. By impregnating the transformer with epoxy under a vacuum, air spaces within the windings are replaced with epoxy, thereby sealing the windings and helping to prevent the possible formation of corona and absorption of dirt or water. This produces transformers suitable for damp or dirty environments, but at increased manufacturing cost.
Basic Impulse Insulation Level (BIL)
Outdoor electrical distribution systems are subject to lightning surges. Even if the lightning strikes the line some distance from the transformer, voltage surges can travel down the line and into the transformer. High voltage switches and circuit breakers can also create similar voltage surges when they are opened and closed. Both types of surges have steep wave fronts and can be very damaging to electrical equipment. To minimize the effects of these surges, the electrical system is protected by lightning arresters, but they do not completely prevent the surge from reaching the transformer. The basic impulse level (BIL) of the transformer measures its ability to withstand these surges. All 600 volt and below transformers are rated 10 kV BIL. The 2400 and 4160 volt transformers are rated 25 kV BIL.
Where transformers are intended for minimum electrostatic coupling between primary and secondary circuits, an electrostatic shield can be placed between windings to reduce the capacitance between primary and secondary windings. The shield may be a single layer of metal foil, insulated where it overlaps to prevent it acting as a shorted turn, or a single layer winding between primary and secondary. The shield is connected to earth ground.
Transformers may also be enclosed by magnetic shields, electrostatic shields, or both to prevent outside interference from affecting the operation of the transformer, or to prevent the transformer from affecting the operation of nearby devices that may be sensitive to stray fields such as CRTs.
Small signal transformers do not generate significant amounts of heat. Power transformers rated up to a few kilowatts rely on natural convective air-cooling. Specific provision must be made for cooling of high-power transformers. Transformers handling higher power, or having a high duty cycle can be fan-cooled.
Some dry transformers are enclosed in pressurized tanks and are cooled by nitrogen or sulphur hexafluoride gas.
The windings of high-power or high-voltage transformers are immersed in transformer oil, a highly refined mineral oil that is stable at high temperatures. Large transformers to be used indoors must use a non-flammable liquid. Formerly, polychlorinated biphenyl (PCB) was used because it is highly stable and did not pose a fire hazard in indoor power transformers. Due to the stability and toxic effects of PCB by-products, and its accumulation in the environment, it is no longer permitted in new equipment. Old transformers that still contain PCB should be examined on a weekly basis for leakage. If a transformer is found to be leaking, it should be taken out of service and professionally decontaminated or scrapped in an environmentally safe manner. Today, non-toxic, stable silicone-based oils or fluorinated hydrocarbons may be used where the expense of a fire-resistant liquid offsets the additional building cost for a transformer vault. Other less-flammable fluids such as canola oil may be used, but all fire-resistant fluids have some drawbacks in performance, cost, or toxicity compared with mineral oil.
The oil cools the transformer, and provides part of the electrical insulation between internal live parts. It has to be stable at high temperatures so that a small short or arc will not cause a breakdown or fire. The oil-filled tank may have radiators through which the oil circulates by natural convection. Very large or high-power transformers (with capacities of millions of watts) may have cooling fans, oil pumps and even oil to water heat exchangers. Oil-filled transformers undergo prolonged drying processes, using vapor-phase heat transfer, electrical self-heating, the application of a vacuum, or combinations of these, to ensure that the transformer is completely free of water vapor before the cooling oil is introduced. This helps prevent electrical breakdown under load.
Oil-filled power transformers may be equipped with Buchholz relays, safety devices that sense gas build-up inside the transformer (a side effect of an electric arc inside the windings) and switch off the transformer.
Experimental power transformers in the 2 MVA range have been built with superconducting windings, which eliminate the copper losses but not the core steel loss. These are cooled by liquid nitrogen or helium.
Very small transformers will have wire leads connected directly to the ends of the coils, and brought out to the base of the unit for circuit connections. Larger transformers may have heavy bolted terminals, bus bars or high-voltage insulated bushings made of polymers or porcelain. A large bushing can be a complex structure since it must provide electrical insulation without letting the transformer leak oil.
Small transformers often have no enclosure. Transformers may have a shield enclosure, as described above. Larger units may be enclosed to prevent contact with live parts, and to contain the cooling medium (oil or pressurized gas).
“Humans have impacted the ocean in a more dramatic fashion than merely capturing fish,” explained marine ecologist Ryan Heneghan from the Queensland University of Technology.
“It seems that we have broken the size spectrum – one of the largest power law distributions known in nature.”
The power law can be used to describe many things in biology, from patterns of cascading neural activity to the foraging journeys of various species. It describes a relationship in which one quantity varies in proportion to a power of another, whatever their starting points.
In the case of a particular type of power law, first described in a paper led by Raymond W. Sheldon in 1972 and now known as the 'Sheldon spectrum', the two quantities are the body size of an organism and its abundance: as organisms get larger, there tend to be consistently fewer individuals within each size class.
For example, while krill are 12 orders of magnitude (about a trillion times) smaller than tuna, they're also 12 orders of magnitude more abundant than tuna. So hypothetically, all the tuna flesh in the world combined (tuna biomass) is roughly the same amount (to within the same order of magnitude at least) as all the krill biomass in the world.
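A toy calculation shows the bookkeeping behind this claim; the numbers are invented, and the only assumption is the Sheldon-like scaling in which abundance falls roughly as the inverse of body mass.

```python
# Toy illustration of the Sheldon spectrum: if abundance scales roughly as
# 1 / body_mass, then biomass (abundance x mass) comes out roughly constant
# in every logarithmic size class.  Numbers are invented for the illustration.

size_classes_g = [10 ** k for k in range(-6, 7)]  # body masses from 1 microgram to 1 tonne
REFERENCE_BIOMASS_G = 1.0e15                      # arbitrary biomass per size class

for mass_g in size_classes_g:
    abundance = REFERENCE_BIOMASS_G / mass_g      # Sheldon-like scaling: N ~ 1/M
    biomass = abundance * mass_g                  # identical for every size class
    print(f"mass {mass_g:10.0e} g   abundance {abundance:12.3e}   biomass {biomass:.1e} g")
```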
Since it was first proposed in 1972, scientists had only tested this natural scaling pattern within limited groups of species in aquatic environments, at relatively small scales. From marine plankton to fish in fresh water, the pattern held true: the biomass of larger, less abundant species was roughly equivalent to the biomass of smaller yet more abundant species.
Now, Max Planck Institute ecologist Ian Hatton and colleagues have looked to see if this law also reflects what’s happening on a global scale.
“One of the biggest challenges to comparing organisms spanning bacteria to whales is the enormous differences in scale,” says Hatton.
“The ratio of their masses is equivalent to that between a human being and the entire Earth. We estimated organisms at the small end of the scale from more than 200,000 water samples collected globally, but larger marine life required completely different methods.”
Using historical data, the team confirmed the Sheldon spectrum fit this relationship globally for pre-industrial oceanic conditions (before 1850). Across 12 groups of sea life, including bacteria, algae, zooplankton, fish and mammals, over 33,000 grid points of the global ocean, roughly equal amounts of biomass occurred in each size category of organism.
“We were amazed to see that each order of magnitude size class contains approximately 1 gigaton of biomass globally,” says McGill University geoscientist Eric Galbraith.
(Figure: Ian Hatton et al., Science Advances, 2021)
Hatton and team discussed possible explanations for this, including limitations set by factors such as predator-prey interactions, metabolism, growth rates, reproduction and mortality. Many of these factors also scale with an organism’s size. But they’re all speculation at this point.
“The fact that marine life is evenly distributed across sizes is remarkable,” said Galbraith. “We don’t understand why it would need to be this way – why couldn’t there be much more small things than large things? Or an ideal size that lies in the middle? In that sense, the results highlight how much we don’t understand about the ecosystem.”
There were two exceptions to the rule, however, at the two extremes of the size scale examined. Bacteria were more abundant than the law predicted, and whales far less. Again, why this is so remains a complete mystery.
The researchers then compared these findings to the same analysis applied to present day samples and data. While the power law still mostly applied, there was a stark disruption to its pattern evident with larger organisms.
“Human impacts appear to have significantly truncated the upper one-third of the spectrum,” the team wrote in their paper. “Humans have not merely replaced the ocean’s top predators but have instead, through the cumulative impact of the past two centuries, fundamentally altered the flow of energy through the ecosystem.”
While fishes compose less than 3 percent of annual human food consumption, the team found we’ve reduced fish and marine mammal biomass by 60 percent since the 1800s. It’s even worse for Earth’s most giant living animals – historical hunting has left us with a 90 percent reduction of whales.
This really highlights the inefficiency of industrial fishing, Galbraith notes. Our current strategies waste orders of magnitude more biomass, and the energy it holds, than we actually consume. Nor have we replaced the role that biomass once played, despite now being one of the largest vertebrate species by biomass.
Around 2.7 gigatonnes have been lost from the largest species groups in the oceans, whereas humans make up around 0.4 gigatonnes. Further work is needed to understand how this massive loss in biomass affects the oceans, the team wrote.
“The good news is that we can reverse the imbalance we’ve created, by reducing the number of active fishing vessels around the world,” Galbraith says. “Reducing overfishing will also help make fisheries more profitable and sustainable – it’s a potential win-win, if we can get our act together.”
Research links species extinction to the origin of goods in global trade

Borneo's orangutans are threatened by palm oil production. JEFTA IMAGES / BARCROFT

5 JAN 2017 – 00:53 CET

Humans are beginning to admit that we are like a meteorite about to trigger the next mass extinction of species on planet Earth. But we still lack a great deal of information about the size of this collective meteorite and the scale of the devastation we will cause together. We know, for example, that the massive exploitation of natural resources is one of the major factors associated with the destruction of biodiversity, but more data are needed to connect this phenomenon with our outsized consumption.

A pioneering study, released this Wednesday, shows how much responsibility global trade bears for the mass extinction of species around the world, drawing a clear correlation between the shopping baskets of the biggest consumer countries and the savage pressures crushing natural treasures. The coffee someone drinks in the US, for example, is linked to deforestation in Central America, where that coffee is grown and which is the habitat of the beleaguered spider monkey, the most threatened monkey on the planet.

"At least one third of biodiversity threats worldwide are linked to production for international trade," say the authors of the study published in Nature Ecology & Evolution. In their work, they mapped locations around the planet that are home to almost 7,000 threatened species, establishing their connection to the consumption chains of the US, China, and Japan. In this way, it is easy to see how animals at risk in particular parts of the world suffer from the demand for goods by the biggest consumers.

For example, the lynx and dozens of other species on the Iberian Peninsula suffer from the pressure of agricultural production that supplies European and North American markets. "The significant US footprint on biodiversity in southern Spain and Portugal, linked to impacts on a number of threatened fish and bird species, is worth noting, since these countries are rarely perceived as threat hotspots," the authors state in the study.

In Brazil, the main threat lies in the south, on the Brazilian plateau, due to extensive agriculture and livestock farming, not in the Amazon

"What this work shows us is that humans are robbing the planet," sums up David Nogués-Bravo, a macroecology specialist at the University of Copenhagen. Nogués-Bravo, who did not take part in the study, says that human impacts on nature can be pictured as a whirlpool that swallows the diversity of living things on Earth. "This vortex is made up of three knots: power, food, and money. Our species' capacity to suck energy and resources from the planet is almost unlimited, and that is what is driving the sixth mass extinction in the history of the Earth," the ecologist warns.

For him, both the approach and the results are highly pertinent, because they put into perspective the losses of biodiversity, mainly in developing tropical countries, and the flows of demand that originate in the richest and most industrialized countries.

"The entire planet has become a farm; everything is devoted to supplying ever more goods," criticizes Juan Carlos del Olmo, secretary general of the conservation organization WWF in Spain. "The biggest driver of biodiversity destruction is food production on a brutal scale," he points out. The study's authors report, for example, their surprise at finding that the main source of threat to Brazil's natural treasures is not in the Amazon. "Despite the great attention devoted to the Amazon rainforest, the North American footprint in Brazil is greater in the south, on the Brazilian plateau, where extensive agriculture and livestock farming are practiced," the paper stresses.

"Humans are robbing the planet. Our species' capacity to suck energy and resources from the planet is almost unlimited," sums up Nogués-Bravo

"And the ecological footprint keeps growing," adds Del Olmo, "but reducing that footprint is not easy; we cannot promote responsible consumption if we then throw away 25% of what is produced." How can the negative influence of these flows be changed? "With this top-down footprint approach, we examine all threatened species and economic activity together, which is why it can be difficult to establish clear links between consumption, trade, and impact," Keiichiro Kanemoto of Shinshu University, one of the study's authors, admitted to EL PAÍS.

"We need to see where we import from and where the threatened species are. Our map can help companies make a careful selection of their inputs and thus ease the impacts on biodiversity," says Kanemoto. According to the researcher, if companies provide information on their products about the threats to species in their supply chains, consumers will be able to choose biodiversity-friendly products in their everyday lives.

The strawberries that drown the lynx

"We hope companies will compare our maps with their sourcing locations and then reconsider their supply chains, and we want to work with them to start taking real action," says Kanemoto. Along these lines, Del Olmo says that WWF's work has long been moving toward this focus: making sure every participant in the chain knows the impact on biodiversity, so that industry, suppliers, and consumers avoid the goods that cause the most damage at their origin. In other words, making everyone aware that coffee puts the spider monkey at risk, just as palm oil threatens the orangutan in Indonesia.

The study by Kanemoto and his colleagues highlights how unexpected Spain's appearance is as a region with major biodiversity problems caused by consumption beyond its borders. They point specifically to the lynx, which reigns over the Doñana National and Natural Park in the south of the country and was once the most threatened feline on Earth, among other reasons because of habitat loss. "From a biodiversity standpoint, Spain is the Borneo of Europe. For the large species the fight is being fought, but the small biodiversity (amphibians, birds, and fish) is disappearing at a brutal pace," laments Del Olmo.

The director of WWF in Spain cites strawberries as an example: the water that once fed the Doñana marshes is now used on the thousands of hectares devoted to strawberry cultivation. That area accounts for 60% of strawberry production in Spain, and half of the water used comes from illegal wells, which dry out the surrounding area. "The brutal use of water and land, the impact of agriculture to export products all over the world, leaves the aquifers dry. We do not notice it, but the impact is staggering," explains Del Olmo. And he adds: "That is why we tell the big retail chains: do not buy from those who use illegal wells and are destroying biodiversity. Reward those who do things right."
Changes in the composition of plant species may have implications for the entire food chain, including humans.
Extreme and successive floods and droughts, such as those the rivers of the Amazon have experienced over the past two decades, can lead to the exclusion of tree species and to colonization by other species that are less tolerant of flooding.
That is what studies by researchers associated with the Ecology, Monitoring and Sustainable Use of Wetlands Group (Maua) of the National Institute for Amazonian Research (Inpa/MCTI), in Manaus, indicate. Since 2013 the group has taken part in the Long-Term Ecological Research Program (Peld) through Peld-Maua.
During the 1970s, for example, the annual maximum levels of the Negro River stayed several meters above the average flood level, and the receding of the waters was not pronounced, resulting in the inundation of several plant populations for consecutive years. This caused the exclusion of many shrub and tree species on the low-lying topographies of igapós in the Central Amazon region, as in the case of macacarecuia (Eschweilera tenuifolia).
“It is believed that these phenomena may be a consequence of ongoing climate change, but they may also derive from natural variations in the hydrological cycle. The studies carried out under Peld-Maua aim to confirm the origin of these phenomena using information on vegetation growth,” says the coordinator of Peld-Maua, Inpa researcher Maria Teresa Fernandez Piedade.
Consecutive years of droughts or floods can exceed the adaptive capacity of tree species, especially of populations established at the extremes of their optimal distribution along the flooding gradient (the combination of different inundation levels to which floodplain areas are subject).
According to Piedade, since the vegetation sustains the fauna of these environments, changes in the composition of plant species may have implications for the entire food chain, including humans. “The tree vegetation of Amazonian floodplains is well adapted to the annual dynamics of rising and falling waters,” the researcher notes.
In her view, determining how tolerant the tree species of these environments and their associated fauna, such as fish and rodents, are to extreme periods, and understanding how they respond to the alternation between normal and extreme flooded and non-flooded phases, is a major challenge and forms the basis for their sustainable use and preservation.
According to Piedade, wetlands (várzeas, igapós, buriti palm swamps, and other types) cover about 30% of the Amazon region and are of fundamental ecological and economic importance. She explains that in the várzea multiple economic activities are traditionally carried out, such as fishing and family farming, whereas in the igapós, which are poorer in nutrients and in plant and animal species, fewer economic activities are practiced. In the flooded campinas/campinaranas these activities are even more limited.
“The ecology, functioning, and limitations for certain economic practices in the várzeas are fairly well known, but in the blackwater igapós and in the flooded campinas/campinaranas these aspects are still little studied,” says Piedade. “Although these environments are known to be fragile, expanding and making available information about them is essential,” she adds.
Entitled “Monitoring and modeling of two large Amazonian wetland ecosystems under climate change scenarios,” Peld-Maua is a project funded by the National Council for Scientific and Technological Development (CNPq) and also receives funds from the Amazonas State Research Foundation (Fapeam). It is part of the MCTI action plan “Science, Technology and Innovation for Nature and Climate.”
The Peld Program focuses on establishing permanent research sites in diverse ecosystems across the country, integrated into networks for the development and monitoring of long-term ecological research. There are currently 31 active research sites.
Peld-Maua is managed by Inpa, in Manaus. Its deputy coordinator is Inpa researcher Jochen Schöngart, and its database coordinator is researcher Florian Wittmann, of the Biogeochemistry Department of the Max Planck Institute for Chemistry, headquartered in Mainz, Germany.
The coordinator of Peld-Maua explains that activities began three years ago. “In the first phase, to be completed in 2016, Peld-Maua prioritized studies in one igapó environment and one flooded campinarana environment, but the studies are expected to continue and be expanded to other Amazonian floodplain types,” says Piedade.
Peld-Maua conducts studies in the flooded areas of the igapó forests of Jaú National Park (Parna Jaú), a conservation unit located between the municipalities of Novo Airão and Barcelos, in Amazonas, and along the water-table depth gradients of the campina/campinarana forests in the Uatumã Sustainable Development Reserve (RDS), situated between the municipalities of São Sebastião do Uatumã and Itapiranga, also in Amazonas.
According to Piedade, given the connectivity between floodplain environments and adjacent terra firme or other formations, the study sites were chosen in environments where these gradients can also be assessed. “This increases the possibilities for comparative work,” she stresses.
Peld-Maua aims to relate the structure, floristic composition, and dynamics of seed-bearing plants (phanerogams) in two wetland ecosystems of the Central Amazon to soil and water-availability (hydro-edaphic) factors, through long-term monitoring, in order to understand possible impacts on, and responses of, the vegetation under changing rainfall and hydrological regimes.
So far, the program has supported five master’s dissertations and one doctoral thesis. In addition to the completed studies, two postdoctoral projects, six doctorates, and four master’s projects are under way. As for training, two fellows of the Institutional Capacity Building Program (PCI) have concluded their activities and two are carrying out their projects, while two fellows of the Technological Development Grant program (DTI) and two undergraduate research (Pibic) students have carried out their projects with Peld-Maua.
An elite group of climate scientists wants governments to consider the risk of the planet warming by 4 to 7 degrees Celsius, which would cause the collapse of civilization; the analysis is now getting under way in Brazil
By Claudio Angelo, of the OC –
A video shown to a small audience last Monday in Brasília depicted, without euphemisms, what could happen to the planet if global warming spun out of control and reached 4°C to 7°C. Images of burning forests, dead crops, and floods followed one another while a narrator foretold “mass deaths for people who do not have air conditioning 24 hours a day” and “forced migrations.” “We will become part of an extinct environment,” she pronounced. The fact that the city was in the middle of a heat wave, having recorded days earlier its highest temperature since its founding, helped set the mood.
On a small stage, in white armchairs, a group made up mostly of middle-aged men watched the screening. Among them were members of the elite of climate science, such as Carlos Afonso Nobre and José Marengo, members of the Intergovernmental Panel on Climate Change, and Sir David King, the United Kingdom’s Special Representative for Climate Change.
Not long ago, these same men would have dismissed the video’s claims as alarmism or science fiction. Today, it is their own research that underpins the apocalyptic scenarios painted there.
The scientists gathered in Brasília are part of an international group assembled by David King in 2013 to produce a risk assessment of extreme climate change. The work began in the US, India, China, and the UK and is now starting in Brazil. It proceeds from the premise that the probability of the Earth warming by more than 4°C is low, but the potential consequences are so dramatic that governments should take them into account when making decisions about emission cuts and adaptation.
“This is a very different view of climate change,” said King, a South African-born physicist who served for years as chief scientific adviser to Prime Minister Tony Blair. “The IPCC has done a great job, but we need an assessment of the risk that something catastrophic linked to climate change will happen.”
He cited as an example the worst-case climate scenarios projected for China: sea-level rise affecting the country’s east coast, home to 200 million people; failures of the rice harvest, which have a 5% to 10% chance of occurring even with modest temperature increases; and heat waves beyond the physiological limits of human adaptation.
“With more than three days of temperatures above 40°C and high humidity you cannot shed heat through sweating, and you die,” said King.
With warming of 4°C to 7°C, multiple stresses could strike at once in several parts of the world. “We are looking at massive losses of life,” said King. “It would be the collapse of civilization.”
Heading for 4°C
The climate models used by the IPCC project different temperature changes according to the concentration of carbon dioxide in the atmosphere. These scenarios are called RCPs, short for “representative concentration pathways,” and they measure how much the planet’s radiation balance changes, in watts per square meter. They range from 2.6 W/m², the scenario compatible with keeping warming to the 2°C target that the UN considers the “safe” limit, to 8.5 W/m², which is where the current pace of emissions is taking humanity.
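As a rough, hedged illustration of why those forcing numbers map onto the temperature targets discussed here, the sketch below multiplies each RCP forcing by an assumed equilibrium climate-sensitivity parameter of about 0.8 °C per W/m² (a commonly cited ballpark, not a figure from this article, and far cruder than the models the IPCC actually uses):

```python
# Back-of-the-envelope: eventual warming ~ sensitivity * radiative forcing.
# The sensitivity value (0.8 degC per W/m2, roughly equivalent to ~3 degC per
# CO2 doubling) is an assumed ballpark, not taken from the article.
SENSITIVITY_C_PER_WM2 = 0.8

rcp_forcings_wm2 = {"RCP2.6": 2.6, "RCP4.5": 4.5, "RCP6.0": 6.0, "RCP8.5": 8.5}

for name, forcing in rcp_forcings_wm2.items():
    print(f"{name}: ~{SENSITIVITY_C_PER_WM2 * forcing:.1f} degC of eventual warming")
# RCP2.6 lands near the 2 degC target, while RCP8.5 lands well past 4 degC,
# consistent with the risk framing described in the article.
```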
“RCP 8.5 gives us a nearly 100% probability of warming exceeding 4°C by the end of this century,” said Sir David King. And what would be the chances of exceeding 7°C? By the end of the century, low. “I am old, so I am fine. But I have two grandchildren who will live to the end of the century, and they will want grandchildren too. Do we not care about the future?”
According to Carlos Nobre, assessing and preventing the risks of extreme warming is like buying home insurance: even if the probability of disaster is low, it is something you cannot afford not to do, because the costs of the impact are essentially impossible to manage.
For Brazil, these risks are manifold: they range from a 30% reduction in the flow of the main rivers to serious damage to agribusiness and species extinctions. Regional scenarios drawn from the IPCC models already point to warming of up to 8°C in some regions of the country this century, which would render those areas essentially uninhabitable for long periods.
“Even if we limit emissions to 1 trillion tonnes of CO2 [the limit compatible with 2°C], we could still exceed 3°C,” said the scientist, currently president of Capes (Coordination for the Improvement of Higher Education Personnel).
According to him, there is no path other than limiting CO2 concentrations in the atmosphere to 350 parts per million. The problem is that we already passed 400 parts per million in 2014, and the pledges registered by countries for the Paris agreement are not even enough to guarantee the 1-trillion-tonne limit.
The only woman on the panel, Beatriz Oliveira of Fiocruz, pointed to the risk of many people in Brazil literally dying of heat, especially in the North and Northeast regions. “You could stay outdoors and carry out outside activities for 30 minutes at most. The rest of the day would have to be spent in air conditioning,” she said.
Questioned by the audience at the end of the event, the researcher mentioned a single upside of extreme warming: a reduction in the incidence of insect-borne diseases such as dengue. “Not even the mosquito survives,” she said. (Observatório do Clima/ #Envolverde)
Earlier this year I received a phone call from an unknown number. “This is the National Geographic Channel. Is it true that you are a shark anthropologist?” I paused— “Yes, I guess you can say that.” “Great, we are doing a program about sharks and are asking experts why sharks attack at certain times and in certain places more than others. Can you tell me a bit about your work?”
My interest in sharks began in 2005 during an internship at a resort in Papua New Guinea. Ten miles from shore and ninety feet below the surface, a twelve-foot hammerhead shark swam straight at me, stopping only three feet away before turning to rejoin its group. As it moved gracefully into the deep, I caught my breath and returned to the surface.
Four years later, I was working on a dive boat in South Florida when a sport-fishing boat motored past with a large grey hammerhead hung from its rigging. For a brief moment, I thought it was the shark I encountered years before. And why couldn’t it be? Like whales, most species of sharks are highly migratory. They have little respect for exclusive economic zones, marine protected areas, or any other enclosures. What might appear as absolute freedom in these animals has led to the production of an abstract image of sharks as transgressive predators, menaces to society, and worthy targets of sport. Regardless of what the category of the shark has become, the individual animal hanging from that fishing boat was certainly dead—no longer a terrible monster.
This incident took place in 2009, just after Rob Stewart’s film Sharkwater revealed the decimation of global shark populations by the finning industry. Considering the importance of sharks to healthy marine ecosystems, surely it was wrong to continue killing them for sport. Thinking I might do some good, I spoke with the captain of the boat about their catch.
“Couldn’t you release them from now on?” I asked.
“They normally die during the fight.”
“Well, what about fishing for something else?”
“Sailfish and marlin are not in season,” he said. “And besides, the clients are paying for the experience, and they want their photo taken with the big sharks.”
“Yes, but hammerhead populations are in serious decline,” I said.
“We catch plenty of them, and easily too. More this year than last.”
I was stuck. How could I prove something was threatened when local knowledge suggests otherwise? Even worse, how could anyone prove sharks were in decline when, as free-roaming marine animals, they cannot be easily counted?
That same year, National Geographic aired a documentary entitled Drain the Ocean. The promotional abstract read: “In this special, we look at what most call ‘The Final Frontier.’ Using the newest data from scientists all over the world and the latest advancements in computer generated imaging, we are able to explore some of the most dramatic landscapes the Earth has to offer.” This was exactly what my argument lacked—quantitative support through technological innovation. If computers could reveal the geological truths of this invisible realm, perhaps they could also reveal the ecological truths of a planet in decline—dolphins tangled in drift nets, massive whales with harpoons rusting in their backs, and dwindling populations of sharks swishing their tails through the muddy terrain. If this could be done, then maybe I could convince the fisherman that killing sharks for money was wrong.
But draining the ocean is not yet possible, nor should it be. Even if through some technological means we could illuminate the other seventy percent of our planet, the lives and the forms of relationality between humans and marine animals (however contentious they may be) would change at the moment of discovery. In trying to protect sharks, neither scientific nor emotional appeals alone are sufficient to effect social change. There remains a mystery of what oceanic animals do, how they do it, and exactly how many are required to keep doing what they do. If this mystery were completely resolved, the result would be equally harmful to marine life and to those who make their living upon the sea; for this unknown marks the distinction between our terrestrial selves and aquatic others, and is therefore what makes knowledge of the ocean (and thus ourselves) possible.
An Anthropology of the Ocean
My phone call with National Geographic didn’t last long. The producer ended it by saying, “Your work sounds interesting, but we are looking for more evidence about why these attacks are occurring. Could you recommend a good marine biologist?” I did, and promptly hung up. I thought about our conversation—I don’t even know what a shark anthropologist is, and I’m supposed to be one!
As human interests are directed into the sea in the form of extractive industry, state securitization, renewable energy, and conservation enclosure, we find ourselves as a species grappling with the politics and hermeneutics of the life aquatic. Responding to this with continued interest in the protection of marine life and forms of relationality, I have begun to sketch an Anthropology of the Ocean. Working alongside indigenous fishing communities, ecologists, and oceanographers, and drawing on the work of fellow anthropologists like Stefan Helmreich, such an approach examines how oceanic spaces and bodies are imagined, explored, and controlled, and how rights to marine resources are established and translated across social, spatial, and categorical boundaries.
Within this framework, an Anthropology of Sharks could do the following: 1) draw upon the history of anthropological theory and method to ask how valuable spaces become ‘final frontiers,’ 2) describe how these produced frontiers are explored, claimed, enclosed—in short, how they are settled, and 3) reveal the forms of dispossession and disenchantment that occur when such settlement attempts to cultivate spaces that have already been occupied by other ways of being and knowing. Putting a multispecies twist on subaltern studies and postcolonial anthropology, this approach would not only ask if the shark could “speak,” but if and how it might be heard amid the cacophony of other voices.
Boulder, Colo., USA – The Campanian Ignimbrite (CI) eruption in Italy 40,000 years ago was one of the largest volcanic cataclysms in Europe and injected a significant amount of sulfur-dioxide (SO2) into the stratosphere. Scientists have long debated whether this eruption contributed to the final extinction of the Neanderthals. This new study by Benjamin A. Black and colleagues tests this hypothesis with a sophisticated climate model.
Black and colleagues write that the CI eruption approximately coincided with the final decline of Neanderthals as well as with dramatic territorial and cultural advances among anatomically modern humans. Because of this, the roles of climate, hominin competition, and volcanic sulfur cooling and acid deposition have been vigorously debated as causes of Neanderthal extinction.
They point out, however, that the decline of Neanderthals in Europe began well before the CI eruption: “Radiocarbon dating has shown that at the time of the CI eruption, anatomically modern humans had already arrived in Europe, and the range of Neanderthals had steadily diminished. Work at five sites in the Mediterranean indicates that anatomically modern humans were established in these locations by then as well.”
“While the precise implications of the CI eruption for cultures and livelihoods are best understood in the context of archaeological data sets,” write Black and colleagues, the results of their study quantitatively describe the magnitude and distribution of the volcanic cooling and acid deposition that ancient hominin communities experienced coincident with the final decline of the Neanderthals.
In their climate simulations, Black and colleagues found that the largest temperature decreases after the eruption occurred in Eastern Europe and Asia and sidestepped the areas where the final Neanderthal populations were living (Western Europe). Therefore, the authors conclude that the eruption was probably insufficient to trigger Neanderthal extinction.
However, the abrupt cold spell that followed the eruption would still have significantly impacted day-to-day life for Neanderthals and early humans in Europe. Black and colleagues point out that temperatures in Western Europe would have decreased by an average of 2 to 4 degrees Celsius during the year following the eruption. These unusual conditions, they write, may have directly influenced survival and day-to-day life for Neanderthals and anatomically modern humans alike, and emphasize the resilience of anatomically modern humans in the face of abrupt and adverse changes in the environment.
FEATURED ARTICLE Campanian Ignimbrite volcanism, climate, and the final decline of the Neanderthals
Benjamin A. Black et al., University of California, Berkeley, California, USA. Published online ahead of print on 19 March 2015; http://dx.doi.org/10.1130/G36514.1.
Let me tell it the old way, then the new way. See which worries you most.
First version: Easter Island is a small 63-square-mile patch of land — more than a thousand miles from the next inhabited spot in the Pacific Ocean. In A.D. 1200 (or thereabouts), a small group of Polynesians — it might have been a single family — made their way there, settled in and began to farm. When they arrived, the place was covered with trees — as many as 16 million of them, some towering 100 feet high.
These settlers were farmers, practicing slash-and-burn agriculture, so they burned down woods, opened spaces, and began to multiply. Pretty soon the island had too many people, too few trees, and then, in only a few generations, no trees at all.
As Jared Diamond tells it in his best-selling book, Collapse, Easter Island is the “clearest example of a society that destroyed itself by overexploiting its own resources.” Once tree clearing started, it didn’t stop until the whole forest was gone. Diamond called this self-destructive behavior “ecocide” and warned that Easter Island’s fate could one day be our own.
When Captain James Cook visited there in 1774, his crew counted roughly 700 islanders (from an earlier population of thousands), living marginal lives, their canoes reduced to patched fragments of driftwood.
And that has become the lesson of Easter Island — that we don’t dare abuse the plants and animals around us, because if we do, we will, all of us, go down together.
And yet, puzzlingly, these same people had managed to carve enormous statues — almost a thousand of them, with giant, hollow-eyed, gaunt faces, some weighing 75 tons. The statues faced not outward, not to the sea, but inward, toward the now empty, denuded landscape. When Captain Cook saw them, many of these “moai” had been toppled and lay face down, in abject defeat.
OK, that’s the story we all know, the Collapse story. The new one is very different.
A Story Of Success?
It comes from two anthropologists, Terry Hunt and Carl Lipo, from the University of Hawaii. They say, “Rather than a case of abject failure,” what happened to the people on Easter Island “is an unlikely story of success.”
Success? How could anyone call what happened on Easter Island a “success?”
Well, I’ve taken a look at their book, The Statues That Walked, and oddly enough they’ve got a case, although I’ll say in advance what they call “success” strikes me as just as scary — maybe scarier.
Here’s their argument: Professors Hunt and Lipo say fossil hunters and paleobotanists have found no hard evidence that the first Polynesian settlers set fire to the forest to clear land — what’s called “large scale prehistoric farming.” The trees did die, no question. But instead of fire, Hunt and Lipo blame rats.
Polynesian rats (Rattus exulans) stowed away on those canoes, Hunt and Lipo say, and once they landed, with no enemies and lots of palm roots to eat, they went on a binge, eating and destroying tree after tree, and multiplying at a furious rate. As a reviewer in The Wall Street Journal reported,
In laboratory settings, Polynesian rat populations can double in 47 days. Throw a breeding pair into an island with no predators and abundant food and arithmetic suggests the result … If the animals multiplied as they did in Hawaii, the authors calculate, [Easter Island] would quickly have housed between two and three million. Among the favorite food sources of R. exulans are tree seeds and tree sprouts. Humans surely cleared some of the forest, but the real damage would have come from the rats that prevented new growth.
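As a quick sanity check on that arithmetic, the sketch below compounds a founding pair of rats using the 47-day doubling time quoted above; it is illustrative only and ignores food limits, mortality, and the island’s carrying capacity:

```python
# Illustrative exponential growth: a founding pair doubling every 47 days.
# Ignores carrying capacity and mortality; purely to show the arithmetic scale.
doubling_time_days = 47
population = 2
days = 0

while population < 3_000_000:
    population *= 2
    days += doubling_time_days

print(f"~{population:,} rats after {days} days (~{days / 365:.1f} years)")
# About 21 doublings, i.e. roughly 2.7 years, to pass three million rats.
```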
As the trees went, so did 20 other forest plants, six land birds and several sea birds. So there was definitely less choice in food, a much narrower diet, and yet people continued to live on Easter Island, and food, it seems, was not their big problem.
Rat Meat, Anybody?
For one thing, they could eat rats. As J.B. MacKinnon reports in his new book, The Once and Future World, archeologists examined ancient garbage heaps on Easter Island looking for discarded bones and found “that 60 percent of the bones came from introduced rats.”
So they’d found a meat substitute.
What’s more, though the island hadn’t much water and its soil wasn’t rich, the islanders took stones, broke them into bits, and scattered them onto open fields creating an uneven surface. When wind blew in off the sea, the bumpy rocks produced more turbulent airflow, “releasing mineral nutrients in the rock,” J.B. MacKinnon says, which gave the soil just enough of a nutrient boost to support basic vegetables. One tenth of the island had these scattered rock “gardens,” and they produced enough food, “to sustain a population density similar to places like Oklahoma, Colorado, Sweden and New Zealand today.”
According to MacKinnon, scientists say that Easter Island skeletons from that time show “less malnutrition than people in Europe.” When a Dutch explorer, Jacob Roggeveen, happened by in 1722, he wrote that islanders didn’t ask for food. They wanted European hats instead. And, of course, starving folks typically don’t have the time or energy to carve and shove 70-ton statues around their island.
A ‘Success’ Story?
Why is this a success story?
Because, say the Hawaiian anthropologists, clans and families on Easter Island didn’t fall apart. It’s true, the island became desolate, emptier. The ecosystem was severely compromised. And yet, say the anthropologists, Easter Islanders didn’t disappear. They adjusted. They had no lumber to build canoes to go deep-sea fishing. They had fewer birds to hunt. They didn’t have coconuts. But they kept going on rat meat and small helpings of vegetables. They made do.
One niggling question: If everybody was eating enough, why did the population decline? Probably, the professors say, from sexually transmitted diseases after Europeans came visiting.
OK, maybe there was no “ecocide.” But is this good news? Should we celebrate?
I wonder. What we have here are two scenarios ostensibly about Easter Island’s past, but really about what might be our planet’s future. The first scenario — an ecological collapse — nobody wants that. But let’s think about this new alternative — where humans degrade their environment but somehow “muddle through.” Is that better? In some ways, I think this “success” story is just as scary.
The Danger Of ‘Success’
What if the planet’s ecosystem, as J.B. MacKinnon puts it, “is reduced to a ruin, yet its people endure, worshipping their gods and coveting status objects while surviving on some futuristic equivalent of the Easter Islanders’ rat meat and rock gardens?”
Humans are a very adaptable species. We’ve seen people grow used to slums, adjust to concentration camps, learn to live with what fate hands them. If our future is to continuously degrade our planet, lose plant after plant, animal after animal, forgetting what we once enjoyed, adjusting to lesser circumstances, never shouting, “That’s It!” — always making do, I wouldn’t call that “success.”
The Lesson? Remember Tang, The Breakfast Drink
People can’t remember what their great-grandparents saw, ate and loved about the world. They only know what they know. To prevent an ecological crisis, we must become alarmed. That’s when we’ll act. The new Easter Island story suggests that humans may never hit the alarm.
It’s like the story people used to tell about Tang, a sad, flat synthetic orange juice popularized by NASA. If you know what real orange juice tastes like, Tang is no achievement. But if you are on a 50-year voyage, if you lose the memory of real orange juice, then gradually, you begin to think Tang is delicious.
On Easter Island, people learned to live with less and forgot what it was like to have more. Maybe that will happen to us. There’s a lesson here. It’s not a happy one.
As MacKinnon puts it: “If you’re waiting for an ecological crisis to persuade human beings to change their troubled relationship with nature — you could be waiting a long, long time.”
The first assessment by the Intergovernmental Platform on Biodiversity and Ecosystem Services will address pollinators, pollination, and food production. The work is coordinated by a British researcher and a Brazilian researcher (photo: Wikimedia)
Agência FAPESP – A group of 75 researchers from various member countries of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), which brings together 119 nations from every region of the world, will carry out a global assessment of pollinators, pollination, and food production.
The scope of the project was presented last Wednesday (September 17) in São Paulo, in the FAPESP auditorium, at a meeting of members of the independent intergovernmental body, which is devoted to organizing knowledge about the world’s biodiversity and ecosystem services.
“The idea of the work is to assess all existing knowledge about pollination in the world and identify the studies needed in the field to help decision-makers in each country formulate public policies for preserving this and other ecosystem services provided by pollinating animals,” Vera Imperatriz Fonseca, of the Institute of Biosciences of the University of São Paulo (USP) and the Vale Institute of Technology for Sustainable Development (ITVDS), told Agência FAPESP.
“We are getting to know the problem [of the world pollination crisis] better. Now we need to identify solutions,” said the researcher, who coordinates the assessment alongside Simon Potts, a professor at the University of Reading in the United Kingdom.
According to Fonseca, there are more than 100,000 species of invertebrate pollinators in the world, of which 20,000 are bees. Besides pollinating insects, which will be the focus of the report, there are also about 1,200 species of vertebrate pollinators, such as birds, bats, and other mammals, as well as reptiles.
An estimated 75% of the world’s crops and between 78% and 94% of the planet’s wild flowering plants depend on animal pollination, the researcher noted.
“There are around 300,000 species of wild flowering plants that depend on insect pollination,” said Fonseca. “The estimated annual value of this ecosystem service provided by insects in agriculture is US$361 billion. For the maintenance of biodiversity, though, it is incalculable,” she said.
In recent years there has been a loss of native species of pollinating insects around the world, caused by, among other factors, the deforestation of natural areas near croplands, pesticide use, and the emergence of pathogens.
If the decline of pollinating insect species becomes a trend, it could jeopardize agricultural productivity and, consequently, food security in the coming decades, the researcher said.
“The world population will grow considerably by 2050 and it will be necessary to produce a large amount of food with higher agricultural yields, in a scenario made worse by climate change. Insect pollination can help solve this problem,” Fonseca said.
According to an international study published in the journal Current Biology, the management of bee hives used by farmers for pollination, such as the domesticated honey bee Apis mellifera L., which is raised widely around the world, is estimated to have increased by about 45% between 1950 and 2000.
The agricultural area dependent on pollination, however, grew by more than 300% over the same period, the study’s authors point out.
“Even though the management of pollinating bee species has increased, we need far more than we currently have to meet agriculture’s needs,” Fonseca said.
The decline of pollinator species worldwide is encouraging manual pollination in many countries. In China, for example, trade in pollen for this purpose is common, the researcher said.
“In the absence of animals to do the pollinating, important crops such as oil palm and apple have been pollinated by hand. In Brazil, passion fruit, tomato, and other crops are pollinated manually,” she said.
Lack of data
According to Fonseca, there are already data on the decline of species of bees, hoverflies (syrphids), and butterflies in Europe, the United States, the Middle East, and Japan.
An international study published in the Journal of Apicultural Research reported losses of approximately 30% of Apis mellifera L. colonies as a result of infestation by the mite Varroa destructor, which shortens the bees’ lives and, consequently, reduces their pollination activity on flowers, especially in Northern Hemisphere countries.
In Europe, bee colony losses caused by the mite can reach 53% and, in the Middle East, 85%, the study’s authors indicate. However, there are still no estimates of colony and species losses on continents such as South America, Africa, and Oceania.
“We have no data for these continents. We need objective information to build a worldwide pollination database in order to define conservation strategies in each country,” Fonseca said. “We also need to assess the effects of pesticides on the disappearance of bees in agricultural areas, which have been the subject of studies and of action by regulatory agencies in Brazil.”
Another major gap to be filled concerns studies of the interactions between native pollinating bee species and species raised for pollination, such as Apis mellifera L.
An international study published in 2013 indicated that when Apis mellifera L. and solitary bees work the same crop, the pollination rate increases significantly, because they avoid each other on the flowers and change foraging sites more often, Fonseca explained.
According to the researcher, one solution for pollination in extensive agricultural areas has been the use of pollinator colonies from mass rearing, such as Bombus terrestris bees, which are produced on a large scale and even exported.
In 2004, one million colonies of this bee were produced for use in agriculture.
In South America, Chile was the first country to introduce these bees to pollinate fruit and vegetables. In some areas where it was introduced, however, this exotic bee species has proved invasive and highly capable of occupying new territory.
“We need to study the interaction between the species further, to identify where they coexist, what each one contributes to pollination, and whether that interaction is positive or negative,” Fonseca said.
“In addition, the spread of diseases to native bee species is a cause for concern and should be a research focus in the coming years,” she said.
According to Fonseca, the IPBES assessment, entitled Pollinators, Pollination and Food Production, is being drafted and is expected to be completed at the end of 2015.
In addition to a technical report with six chapters of 30 pages each, the assessment is also expected to include a text aimed at policymakers on the subject, she said.
“The pollination assessment should help step up efforts to tackle the disappearance of pollinator species around the world, which is urgent and of great political and economic relevance, because it affects food production,” she said.
The assessment will be the first thematic diagnosis carried out by IPBES and is expected to be made available to the general public in December 2015. Over the coming years the panel plans to produce similar assessments on other topics, such as invasive species, habitat restoration, and future biodiversity scenarios.
One strategy adopted to make the thematic assessments more integrated was the creation of task forces, dedicated to promoting professional and institutional capacity building, improving the management of scientific data and information, and integrating traditional indigenous knowledge and local research into the scientific process, which will help produce the final text.
“IPBES works in partnership with the FAO [the United Nations Food and Agriculture Organization], UNEP [the United Nations Environment Programme], the CBD [Convention on Biological Diversity], UNESCO [the United Nations Educational, Scientific and Cultural Organization], and all previous efforts that have dealt with the topic of pollination,” Fonseca said.
Pollination was the first topic chosen by the platform’s member countries, among other reasons because it is a global problem and a large body of studies on the subject already exists, said Carlos Joly, coordinator of the FAPESP Research Program on Biodiversity Characterization, Conservation, Restoration and Sustainable Use (BIOTA-FAPESP) and a member of the IPBES Multidisciplinary Expert Panel.
“Since there is already a very large body of data on this topic, we thought a synthesis could be produced quickly. Moreover, the topic has a very large global impact, mainly because it is associated with food production,” Joly said.
The 75 researchers taking part in the project were nominated by the IPBES Multidisciplinary Expert Panel, based on nominations received from the platform’s member countries and observers.
Two members of the group are chosen to coordinate the work, one from a developed country and one from a developing nation.
“The invitation to and selection of Professor Vera Imperatriz Fonseca as coordinator of the assessment reflects the quality of the science carried out in this field in Brazil and her experience in working on national assessments,” Joly said. “We would like to have more Brazilian researchers involved in preparing the IPBES assessments.”
Summary: Scientists believe some tropical species may be able to evolve and adapt to the effects of climate change. The new findings suggest some sensitive rainforest-restricted species may survive climate change and avoid extinction, but only if the change is not too abrupt and not dramatically beyond the conditions that a species currently experiences.
Scientists believe some tropical species may be able to evolve and adapt to the effects of climate change.
The new findings, published in the journal Proceedings of the Royal Society B, suggest some sensitive rainforest-restricted species may survive climate change and avoid extinction, but only if the change is not too abrupt and not dramatically beyond the conditions that a species currently experiences.
Previous research offered a bleak prospect for tropical species’ adaptation to climate change, but researchers from Monash University now believe the situation may not be quite so hopeless.
One of the lead researchers, Dr Belinda Van Heerwaarden said the impact of climate change on the world’s biodiversity is largely unknown.
“Whilst many believe some species have the evolutionary potential to adapt, no one really knows for sure, and there are fears that some could become extinct.”
Dr Van Heerwaarden and Dr Carla M. Sgrò, from the Faculty of Science, built on an experiment from the 2000s in which tropical flies native to Australian rainforests, Drosophila birchii, were taken out of the damp rainforest and exposed to very dry conditions, mimicking the effects of potential climate change.
In the original experiment the flies died within hours and despite rescuing those that survived longest and allowing them to breed for over 50 generations, the flies were no more resistant, suggesting they didn’t have the evolutionary capacity to survive.
In Dr Van Heerwaarden and Dr Sgrò’s version they changed the conditions from 10 per cent to 35 per cent humidity.
“The first experiment tested whether the flies could survive in 10 per cent relative humidity. That’s an extreme level that’s well beyond the changes projected for the wet tropics under climate change scenarios over the next 30 years.”
“In our test we decreased the humidity to 35 per cent, which is much more relevant to predictions of how dry the environment will become in the next 30 to 50 years. We discovered that when you change the environment, you get a totally different answer,” Dr Van Heerwaarden said.
Whilst on average most of the flies died after just 12 hours, some survived a little longer than others. By comparing different families of flies, the researchers discovered the difference in the flies’ resistance is influenced by their genes.
To test this theory, the longest-living flies were rescued and allowed to breed. After just five generations, one species evolved to survive 23 per cent longer in 35 per cent humidity.
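A quick back-of-the-envelope, under the assumption (not stated in the article) that the improvement compounded evenly across those five generations, shows how modest a per-generation gain is needed to reach a 23 per cent total response:

```python
# Illustrative only: assume the 23% total improvement in desiccation survival
# accumulated multiplicatively over 5 generations of selection.
total_gain = 1.23
generations = 5

per_generation = total_gain ** (1 / generations) - 1
print(f"~{per_generation * 100:.1f}% improvement per generation")
# ~4.2% per generation compounds to a 23% gain over five generations.
```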
As well as looking at the potential impact of climate change, the research also highlights the importance of genetic diversity within species.
Dr Sgrò said this finding suggests there is genetic variation present in these flies, which means they can evolve in response to climate change.
“Tropical species make up the vast majority of the world’s biodiversity, and climatic models predict these will be the most vulnerable to climate change. However, these models do not consider the extent to which evolutionary responses may buffer the negative impacts of climate change.”
“Our research indicates that the genes that help flies temporarily survive extreme dryness are not the same as those that help them resist more moderate conditions. The second set of genes are the ones that enable these flies to adapt,” she said.
“We have much work to do but this experiment gives us hope that some tropical species have the capacity to survive climate change,” said Dr Sgrò.
The results mean that other species thought to be at serious risk might have some hope of persisting a little longer under climate change than previously thought.
The next phase of the research study will see Dr Van Heerwaarden and Dr Carla M. Sgrò investigate whether the climatic stress tolerated by the tropical flies extends to other species.
B. van Heerwaarden, C. M. Sgrò. Is adaptation to climate change really constrained in niche specialists? Proceedings of the Royal Society B: Biological Sciences, 2014; 281 (1790): 20140396. DOI: 10.1098/rspb.2014.0396
Summary: Global decline of wildlife populations is driving increases in violent conflicts, organized crime and child labor around the world, according to experts. Researchers call for biologists to join forces with experts such as economists, political scientists, criminologists, public health officials and international development specialists to collectively tackle a complex challenge.
Global decline of wildlife populations is driving increases in violent conflicts, organized crime and child labor around the world, according to a policy paper led by researchers at the University of California, Berkeley. The authors call for biologists to join forces with experts such as economists, political scientists, criminologists, public health officials and international development specialists to collectively tackle a complex challenge.
The paper, to be published Thursday, July 24, in the journal Science, highlights how losses of food and employment from wildlife decline cause increases in human trafficking and other crime, as well as foster political instability.
“This paper is about recognizing wildlife decline as a source of social conflict rather than a symptom,” said lead author Justin Brashares, associate professor of ecology and conservation at UC Berkeley’s Department of Environmental Science, Policy and Management. “Billions of people rely directly and indirectly on wild sources of meat for income and sustenance, and this resource is declining. It’s not surprising that the loss of this critical piece of human livelihoods has huge social consequences. Yet, both conservation and political science have generally overlooked these fundamental connections.”
Fishing and the rise of piracy
Fewer animals to hunt and fewer fish to catch demand ever greater harvesting effort. Laborers — many of whom are children — are often sold to fishing boats and forced to work 18-20 hour days at sea for years without pay.
“Impoverished families are relying upon these resources for their livelihoods, so we can’t apply economic models that prescribe increases in prices or reduced demand as supplies become scarce,” said Brashares. “Instead, as more labor is needed to capture scarce wild animals and fish, hunters and fishers use children as a source of cheap labor. Hundreds of thousands of impoverished families are selling their kids to work in harsh conditions.”
The authors connected the rise of piracy and maritime violence in Somalia to battles over fishing rights. What began as an effort to repel foreign vessels illegally trawling through Somali waters escalated into hijacking fishing — and then non-fishing — vessels for ransom.
“Surprisingly few people recognize that competition for fish stocks led to the birth of Somali piracy,” said Brashares. “For Somali fishermen, and for hundreds of millions of others, fish and wildlife were their only source of livelihood, so when that was threatened by international fishing fleets, drastic measures were taken.”
The authors also compared wildlife poaching to the drug trade, noting that huge profits from trafficking luxury wildlife goods, such as elephant tusks and rhino horns, have attracted guerilla groups and crime syndicates worldwide. They pointed to the Lord’s Resistance Army, al-Shabab and Boko Haram as groups known to use wildlife poaching to fund terrorist attacks.
Holistic solutions required
“This paper begins to touch the tip of the iceberg about issues on wildlife decline, and in doing so the authors offer a provocative and completely necessary perspective about the holistic nature of the causes and consequences of wildlife declines,” said Meredith Gore, a Michigan State University associate professor in the nascent field of conservation criminology who was not part of the study.
As potential models for this integrated approach, the authors point to organizations and initiatives in the field of climate change, such as the Intergovernmental Panel on Climate Change, and the United for Wildlife Collaboration. But the paper notes that those global efforts must also be accompanied by multi-pronged approaches that address wildlife declines at a local and regional scale.
“The most important bit from this article, I think, is that we need to better understand the factors that underlie fish and wildlife declines from a local perspective, and that interdisciplinary approaches are likely the best option for facilitating this understanding,” said Gore.
The authors give examples of local governments heading off social tension, such as the granting of exclusive rights to hunting and fishing grounds to locals in Fiji, and the control of management zones in Namibia to reduce poaching and improve the livelihoods of local populations.
“This prescribed re-visioning of why we should conserve wildlife helps make clearer what the stakes are in this game,” said UC Santa Barbara assistant professor Douglas McCauley, a co-author who began this work as a postdoctoral researcher in Brashares’ lab. “Losses of wildlife essentially pull the rug out from underneath societies that depend on these resources. We are not just losing species. We are losing children, breaking apart communities, and fostering crime. This makes wildlife conservation a more important job than it ever has been.”
J. S. Brashares, B. Abrahms, K. J. Fiorella, C. D. Golden, C. E. Hojnowski, R. A. Marsh, D. J. McCauley, T. A. Nunez, K. Seto, L. Withey. Wildlife decline and social conflict. Science, 2014; 345 (6195): 376 DOI: 10.1126/science.1256734
Em uma coletânea de estudos sobre a crise e os desafios do imenso número de extinções causadas pelos humanos, revista ressalta as implicações da ‘defaunação’ dos ecossistemas.
A triste conclusão de que as nossas florestas, além de estarem em um processo contínuo de desmatamento, estão vazias, cada vez mais depauperadas da vida que as constitui, é o foco de uma série especial da revista Science.
A publicação chama a atenção para um termo que deve se tornar cada vez mais conhecido, a ‘defaunação’: a atual biodiversidade animal, produto de 3,5 bilhões de anos de evolução, apesar da extrema riqueza, está decaindo em níveis que podem estar alcançando um ponto sem volta.
Segundo cientistas, tal perda parece estar contribuindo com o que classificam como o início do sexto evento de extinção biológica em massa – ao contrário dos outros, que tiveram causas naturais, nós seríamos os culpados, devido às chamadas atividades antrópicas.
“Muito permanece desconhecido sobre a ‘defaunação do antropoceno’; essas brechas no conhecimento prejudicam a nossa capacidade de prever e limitar os seus impactos. Porém, claramente, a defaunação é tanto um componente perverso da sexta extinção em massa do planeta quanto uma grande causadora da mudança ecológica global”, concluíram pesquisadores no artigo ‘Defaunação no Antropoceno‘.
Na abertura da revista, um dos editores, Sacha Vignieri, lembra que, há alguns milhares de anos, o planeta servia de lar para espetaculares animais de grande porte, como mamutes, tartarugas gigantes, tigres-dente-de-sabre, entre outros.
Porém, evidências apontam o ser humano como o grande culpado pelo desaparecimento desses animais, afirma o editor.
E infelizmente, a tendência parece longe de mudar, e com ela, toda uma série de funções dos ecossistemas, das quais depende a nossa vida, são alteradas de formas dramáticas.
Como mostram os artigos na Science, os impactos da perda da fauna vão desde o empobrecimento da cobertura vegetal até a redução na produção agrícola devido à falta de polinizadores, passando pelo aumento de doenças, a erosão do solo, os impactos na qualidade da água, entre outros. Ou seja, os efeitos da perda de uma única espécie são sistêmicos.
De acordo com o estudo ‘Defaunação no Antropoceno‘, as populações de vertebrados declinaram em uma média de mais de um quarto nos últimos quarenta anos. Isso fica extremamente evidente quando qualquer um de nós caminha nos remanescentes de Mata Atlântica: é realmente muito difícil encontrar animais de médio e grande portes.
Pelo menos 322 espécies de vertebrados foram extintas desde 1500, e esse número só não é maior porque não conhecemos todas as espécies que já habitaram ou ainda residem em nossas florestas.
Se a situação é complicada para os vertebrados, que são muito mais conhecidos, é angustiante imaginar o tamanho da crise para os invertebrados, como os insetos, muito menos estudados.
“Apesar de menos de 1% das 1,4 milhão de espécies de invertebrados descritas terem sido avaliadas quanto à ameaça pela IUCN, das analisadas, cerca de 40% são consideradas ameaçadas”, afirma o estudo.
Certamente, a resolução dessa crise do Antropoceno não é simples.
As causas dessas perdas são bem conhecidas – caça, fragmentação dos habitats, uso de agrotóxicos, poluição, etc. –, e as tentativas para reverter essas tendências estão aumentando, como a reintrodução da fauna.
Os autores escrevem que a meta mais tradicional, de ter populações selvagens autosustentadas em paisagens pristinas intocadas pela influência humana, é “cada vez mais inalcançável”. Assim, eles sugerem que criar a “selva”, em vez de restaurá-la, é o caminho mais prático para avançar.
Entretanto, os desafios para reverter as extinções estão se mostrando muito desafiadores, e as pesquisas atuais mostram que, “se não conseguirmos acabar ou reverter as taxas dessas perdas, significará mais para o nosso futuro do apenas que corações desiludidos ou uma floresta vazia”, disse Vignieri, o editor do especial na Science.
Rodolfo Dirzo, professor da Universidade de Stanford – um dos autores de Defaunação no Antropoceno –, argumenta que reduzir imediatamente as taxas de alteração dos habitats e a sobre-exploração ajudaria, mas que isso precisaria ser feito de acordo com as características de cada região e situação.
Ele espera que a sensibilização sobre a atual extinção em massa e suas consequências ajude a desencadear mudanças.
“Os animais importam para as pessoas, mas no equilíbrio, eles importam menos do que a alimentação, emprego, energia, dinheiro e desenvolvimento. Enquanto continuarmos a enxergar os animais nos ecossistemas como tão irrelevantes para essas necessidades básicas, os animais perderão”, disseram Joshua Tewksbury e Haldre Rogers no artigo “Um futuro rico em animais”.
Researchers warn of the risks of human-driven defaunation (Fapesp)
Agência FAPESP – The American scientific journal Science has just published a special issue on the consequences of the disappearance of animal species for the planet’s biodiversity and for the very future of humanity.
“During the Pleistocene, only tens of thousands of years ago, our planet sustained large, spectacular animals. Mammoths, ‘terror birds,’ giant tortoises and saber-toothed tigers, as well as much less familiar species such as giant ground sloths (some of which reached 7 meters in height) and glyptodonts (which looked like armadillos the size of automobiles), roamed freely,” says the introduction to the special issue.
“Since then, however, the number and diversity of animal species on Earth have declined consistently and steadily. Today we are left with a relatively depauperate fauna, and we continue to see the rapid extinction of animal species. Although some debate persists, most of the evidence suggests that humans were responsible for the extinction of this Pleistocene fauna, and we continue to drive animal extinctions through the destruction of wild lands, hunting for consumption or luxury, and the persecution of species we regard as threats or competitors,” the text notes.
The special issue features articles in which researchers from several countries identify animal species that are disappearing, the complex factors behind the process of defaunation, and the difficulties of putting effective conservation alternatives into practice.
One of the articles in the special issue, Defaunation in the Anthropocene, counts among its authors Professor Mauro Galetti of the Department of Ecology at Universidade Estadual Paulista (Unesp), Rio Claro campus, who leads research projects within the BIOTA-FAPESP program.
Galetti’s article, written in collaboration with researchers from the United States, Mexico, and the United Kingdom, stresses that the world is undergoing one of the largest animal extinctions in its history.
According to the authors, human action is the main driver of the global wave of biodiversity loss. Yet human impacts on animal biodiversity represent a still under-recognized form of global environmental change.
“Of the terrestrial vertebrates, 322 species have become extinct since 1500, and populations of the remaining species show an average 25% decline in abundance,” the authors write.
“Such animal declines will cascade onto ecosystem functioning and human well-being. Much remains unknown about ‘Anthropocene defaunation’; these knowledge gaps hinder our capacity to predict and limit its impacts. Clearly, however, defaunation is both a pervasive component of the planet’s sixth mass extinction and also a major driver of global ecological change,” they emphasize.
According to Galetti and colleagues, of all current animal species – estimated at between 5 million and 9 million – the world loses between 11,000 and 58,000 species every year. And that does not include declines in animal abundance within populations, that is, species that are slowly dying out.
“Science has been concerned with the impact of species extinctions, but the problem also involves the local extinction of populations. Some species may not be globally threatened yet may be locally extinct. This local extinction of animals affects the functioning of natural ecosystems that are vital to humans. In the work now published, we compiled population data on large mammals, such as rhinoceroses, gorillas and lions, and also on invertebrates, such as butterflies. One in four vertebrate species has declining populations,” Galetti said in an interview with the Unesp website.
“Most researchers analyze human effects on species extinction; in this work we focused on the local extinction of populations. The extinction of a species has a great impact, and the reduction of animal populations has an even greater impact on ecosystems,” he said.
“We do not inherit the earth from our ancestors; we borrow it from our children.” – Native American proverb
March through June 2014 were the hottest on record globally, according to the Japan Meteorological Agency. In May – officially the hottest May on record globally – the average temperature of the planet was 0.74 degrees Celsius above the 20th century baseline, according to data from the National Oceanic and Atmospheric Administration. The trend is clear: 2013 was the 37th consecutive year of above-average global temperatures, and since the Industrial Revolution began, the earth has warmed by 0.85 degrees Celsius. Several scientific reports and climate models show that on current trajectories (business as usual), we will see at least a 6-degree Celsius increase by 2100.
In the last decade alone, record high temperatures across the United States have outnumbered record low temperatures two to one, and the trend is both continuing and escalating.
While a single extreme weather event is not proof of anthropogenic climate disruption (ACD), the increasing intensity and frequency of these events are. And recent months have seen many of these.
A record-breaking heat wave gripped India in June, as temperatures hovered at 46 degrees Celsius, sometimes reaching 48 degrees Celsius. Delhi’s 22 million residents experienced widespread blackouts and rioting, as the heat claimed hundreds of lives.
Also in June, Central Europe cooked in unseasonably extreme heat, with Berlin experiencing temperatures over 32 degrees Celsius, which is more than 12 degrees hotter than normal.
At the same time, at least four people died in Japan, and another 1,637 were hospitalized as temperatures reached nearly 38 degrees Celsius.
The spacecraft will have plenty to study, since earth’s current carbon dioxide concentration is now the highest in recorded history.
A recent report by the Natural Resources Defense Council warned that summers in the future are likely to bring increased suffering, with more poison ivy and biting insects, and decreasing quality of air and water.
As farmers struggle to cope with growing demand for food from a swelling global population, they are moving towards crops designed to meet those needs and to withstand more extreme climate conditions. However, a warning from an agricultural research group suggests that these efforts may inadvertently be increasing global malnutrition. “When I was young, we used to feed on amaranth vegetables, guava fruits, wild berries, jackfruits and many other crops that used to grow wild in our area. But today, all these crops are not easily available because people have cleared the fields to plant high yielding crops such as kales and cabbages which I am told have inferior nutritional values,” Denzel Niyirora, a primary school teacher in Kigali, said in the report.
The stunning desert landscape of Joshua Tree National Park is now in jeopardy, as Joshua trees are beginning to die out due to ACD.
Another study, this one published in the journal Polar Biology, revealed that birds on Alaska’s North Slope are nesting earlier in order to keep pace with earlier snowmelt.
Antarctic emperor penguin colonies could decline by more than half in under 100 years, according to a recent study – and another showed that at least two Antarctic penguin species are losing ground in their fight for survival amidst the increasing impacts of ACD, as the Antarctic Peninsula is one of the most rapidly warming regions on earth. The scientists who authored the report warned that these penguins’ fate is only one example of this type of impact from ACD on the planet’s species, and warned that they “expect many more will be identified as global warming proceeds and biodiversity declines.”
Given that the planet’s oceans absorb more than 90 percent of the excess heat, along with roughly a quarter of the carbon dioxide, generated by our emissions, it should come as no surprise that they are in great peril.
This is confirmed by a recent report that shows the world’s oceans are on the brink of collapse, and in need of rescue within five years, if it’s not already too late.
While the macro outlook is bleak, the micro perspective sheds light on the reasons why.
In Cambodia, Tonle Sap Lake is one of the most productive freshwater ecosystems on earth. However, it is also in grave danger from overfishing, the destruction of its mangrove forests, an upstream dam and dry seasons that are growing both longer and hotter due to ACD.
Anomalies in the planet’s marine life continue. Jellyfish that can reach 120 feet in length are undergoing massive blooms and taking over wider swaths of ocean as the seas warm from ACD.
The Pacific island group of Kiribati – home to 100,000 people – is literally disappearing underwater, as rising sea levels swallow the land. In fact, Kiribati’s president recently purchased eight square miles of land 1,200 miles away on Fiji’s second largest island, in order to have a plan B for the residents of his disappearing country.
Closer to home here in the United States, most of the families living on Isle de Jean Charles, Louisiana, have been forced to flee their multi-generational home due to rising sea levels, increasingly powerful storms, and coastal erosion hurried along by oil drilling and levee projects.
Looking at the bigger picture, a recently released US climate report revealed that at least half a trillion dollars of property in the country will be underwater by 2100 due to rising seas.
Meanwhile, the tropical region of the planet, which covers 130 countries and territories around the equator, is expanding and heating up as ACD progresses.
Residential neighborhoods in Oakland, California – near the coast – are likely to be flooded by both rising seas and increasingly intense storms, according to ecologists and local area planners.
On the East Coast, ocean acidification from ACD and lowered oxygen levels in estuaries are threatening South Carolina’s coastal marine life and the seafood industry that depends upon it.
Record-setting “100-year” flooding events in the US Midwest are now becoming more the rule than the exception, thanks to ACD.
Even Fairbanks, Alaska received one-quarter of its total average annual rainfall in a 24-hour period earlier this summer – not long after the area had already received roughly half its average annual rainfall in just a two-week period.
Rising sea levels are gobbling up the coast of Virginia so quickly now that partisan political debate over ACD is also falling by the wayside, as both Republicans and Democrats are working together to figure out what to do about the crisis.
Reuters released a report showing how “Coastal flooding along the densely populated Eastern Seaboard of the United States has surged in recent years . . . with the number of days a year that tidal waters reached or exceeded NOAA flood thresholds more than tripling in many places during the past four decades.”
Flooding from rising seas is already having a massive impact in many other disparate areas of the world: After torrential rain and flooding killed at least a dozen people in Bulgaria this summer, the country continues to struggle with damage from the flooding as it begins to tally the economic costs of the disasters.
In China, rain and flooding plunged large areas of the Jiangxi and Hunan Provinces into emergency response mode. Hundreds of thousands were impacted.
The region of the globe bordering the Indian Ocean stretching from Indonesia to Kenya is now seen as being another bulls-eye target for ACD, as the impacts there are expected to triple the frequency of both drought and flooding in the coming decades, according to a recent study.
Another study revealed how dust in the wind – of which there is far more than usual due to spreading drought – is quickening the melting of Greenland’s embattled ice sheet, which is already losing somewhere between 200 and 450 billion tons of ice annually. The study showed that the increased dust on the ice will contribute to another 27 billion tons of ice lost.
Down in Antarctica, rising temperatures are causing a species of moss to thrive, to the detriment of other marine creatures in that fragile ecosystem.
Up in the Arctic, the shrinking ice cap is causing drastic changes to be made in the upcoming 10th edition of the National Geographic Atlas of the World. Geographers with the organization say it is the most striking change ever seen in the history of the publication.
A UK science team predicted that this year’s minimum sea ice extent will likely be similar to last year’s, which is bad news for the ever-shrinking ice cap. Many scientists now predict the ice cap will begin to vanish entirely for short periods of the summer beginning next year.
Canada’s recently released national climate assessment revealed how the country is struggling with melting permafrost as ACD progresses. One example occurred in 2006, when thinning ice roads forced a diamond mine to fly in fuel rather than truck it in over the ice, at an additional cost of $11.25 million.
Arctic birds’ breeding calendars are also being impacted. As ACD causes earlier Arctic melting each season, researchers are now warning of long-ranging adverse impacts on the breeding success of migratory birds there.
In addition to the aforementioned dust causing the Greenland ice sheet to melt faster, industrial dust, pollutants and soil, blown over thousands of miles around the globe, are settling on ice sheets from the Himalaya to the Arctic, causing them to melt faster.
At the same time, multi-year drought continues to take a massive toll across the central and western United States. It has turned millions of acres of federal rangeland to dust and has left desolate a massive swath of land stretching from the Pacific Coast to the Rocky Mountains. ACD, invasive plants and now continually record-breaking wildfire seasons have brought ranchers across the West to the breaking point.
Drought continues to drive up food prices across the United States, and particularly prices of produce grown in California’s Central Valley. As usual, it is the poor who suffer the most, as increasing food prices, growing unemployment and more challenging access to clean water continue to escalate their struggle to survive.
California’s drought continues to have massive and myriad impacts across the state, as a staggering one-third of the state entered the worst stage of drought. Even colonies of honeybees are collapsing, due in part to there being far less of the natural forage they need to make their honey.
The snowpack in California is dramatically diminished as well. While snowpack has historically provided one-third of the state’s water supply, after three years of very low snowfall, battles have begun within the state over how to share the dwindling water from what was once a massive frozen reservoir.
The drought in Oklahoma is raising the specter of a return to the nightmarish dust bowl conditions there in the 1930s.
Recently, and for the first time, the state of Arizona has warned that water shortages could hit Tucson and Phoenix as soon as five years from now due to ongoing drought, increasing demand for water and declining water levels in Lake Mead.
This is a particularly bad outlook, given that the Lake Mead reservoir, the largest in the country, dropped to its lowest level since it was filled in the 1930s. Its decline is reflective of 14 years of ongoing drought, coupled with an increasing disparity between the natural flow rate of the Colorado River that feeds it and the ever-increasing demands for its water from the cities and farms of the increasingly arid Southwest.
Given the now chronic water crises in both Arizona and California, the next water war between the two states looms large. The one-two punch of ACD and overconsumption has left the Colorado River, upon which both states heavily rely, in long-term decline.
Yet it is not just Arizona and California that are experiencing an ongoing water crisis due to ACD impacts – it is the entire southwestern United States. The naturally dry region is now experiencing dramatically extreme impacts that scientists are linking to ACD.
The water crisis spawned by ACD continues to reverberate globally.
North Korea even recently mobilized its army in order to protect crops as the country’s reservoirs, streams and rivers ran dry amidst a long-term drought. The army was tasked with making sure residents did not take more than their standard allotment of water.
The converging crises of the ongoing global population explosion, the accompanying burgeoning middle class, and increasingly dramatic impacts caused by ACD are straining global water supplies more than ever before, causing governments to examine how to manage populations in a world with less and less water.
A recent report provides a rather apocalyptic forecast for people living in Arizona: It predicts diminishing crop production, escalating electricity bills and thousands of people dying of extreme heat in that state alone.
In fact, another report from the Natural Resources Defense Council found experts predicting that excessive heat generated from ACD will likely kill more than 150,000 Americans by the end of the century, and that is only in the 40 largest cities in the country.
Poor air quality – and the diseases it triggers – are some of the main reasons why public health experts in Canada now believe that ACD is the most critical health issue facing Canadians.
Another recent study shows, unequivocally, that city-dwellers around the world should expect more polluted air that lingers in their metropolis for days on end, as a result of ACD continuing to change wind and rainfall patterns across the planet.
As heat and humidity increase with the growing impacts of ACD, we can now expect to see life-altering results across southern US cities, as has long been predicted. However, we can expect this in our larger northern cities as well, including Seattle, Chicago and New York; the intensifications are on course to make these areas unsuitable for outdoor activity during the summer.
Recently generated predictive maps show how many extremely hot days you might have to suffer through when you are older. They show clearly that if we continue with business as usual – refusing to address ACD with the wartime-level response warranted to mitigate the damage – those born now who are still here in 2100 will experience heat extremes unlike anything we’ve seen to date when they venture outside in the summer.
A new study published in Nature Geoscience revealed how the increasing frequency and severity of forest fires across the planet are accelerating the melting of the Greenland ice sheet, as soot landing on the ice reduces its reflectivity. The ice sheet is melting at ever-increasing speed; if all of it melts, sea levels will rise 24 feet globally.
Down in Australia, the southern region of the country can now expect drier winters as a new study linked drying trends there, which have been occurring over the last few decades, to ACD.
On the other side of the globe, in Canada’s Northwest Territories, the region is battling its worst fires since the 1990s, bringing attention to the likelihood that ACD is amplifying the severity of northern wildfires.
A recently published global atlas of deaths and economic losses caused by wildfires, drought, flooding and other ACD-augmented weather extremes, revealed how such disasters are increasing worldwide, setting back development projects by years, if not decades, according to its publishers.
Denial and Reality
Never underestimate the power of denial.
Rep. Jeff Miller (R-Florida) was asked by an MSNBC journalist if he was concerned about the fact that most voters believe scientists on the issue of ACD. His response, a page out of the Republican deniers’ handbook, is particularly impressive:
Miller: It changes. It gets hot; it gets cold. It’s done it for as long as we have measured the climate.
MSNBC: But man-made, isn’t that the question?
Miller: Then why did the dinosaurs go extinct? Were there men that were causing – were there cars running around at that point, that were causing global warming? No. The climate has changed since earth was created.
Another impressive act of denial came from prominent Kentucky State Senate Majority Whip Brandon Smith, a Republican. At a recent hearing, Smith argued that carbon emissions from coal-burning power plants couldn’t possibly be causing ACD because Mars is also experiencing a global temperature rise, and there are no coal plants generating carbon emissions on Mars. He even stated that Mars was the same temperature as Earth.
“I think that in academia, we all agree that the temperature on Mars is exactly as it is here. Nobody will dispute that,” Smith said.
“Yet there are no coal mines on Mars; there’s no factories on Mars that I’m aware of,” he added. “So I think what we’re looking at is something much greater than what we’re going to do.”
During a recent interview on CNBC, Princeton University professor and chairman of the Marshall Institute William Happer was called out on the fact that ExxonMobil had provided nearly $1 million for the Institute.
Happer compared the “hype” about ACD to the Holocaust, and when asked about his 2009 comparison of climate science to Nazi propaganda, he said, “The comment I made was, the demonization of carbon dioxide is just like the demonization of the poor Jews under Hitler. Carbon dioxide is actually a benefit to the world, and so were the Jews.”
Happer, who was introduced as an “industry expert” on the program, has not published one peer-reviewed paper on ACD.
The ACD-denier group that supports politicians and “scientists” of this type, Heartland (a free-market think tank with a $6 million annual budget) hosted a July conference in Las Vegas for deniers. One of Heartland’s former funders is ExxonMobil, and one of the panels at the conference was titled, “Global Warming As a Social Movement.” The leaders of the conference vowed to “keep doubt alive.”
Australian Prime Minister Tony Abbott used a recent trip abroad to attempt to build support for a coalition aimed at derailing international efforts towards dealing with ACD.
He is simply following the lead of former Prime Minister John Howard, who teamed up with former US President George W. Bush and Canadian Prime Minister Stephen Harper to form a climate-denial triumvirate whose goal was to stop efforts aimed at dealing with ACD, in addition to working actively to undermine the Kyoto Protocol.
Meanwhile, Rupert Murdoch has said that ACD should be approached with great skepticism. He said that if global temperatures increased 3 degrees Celsius over the next 100 years, “At the very most one of those [degrees] would be manmade.” He did not provide the science he used to generate this calculation.
In Canada, Vancouver-based Pacific Future Energy Corporation claimed that a $10 billion oil sands refinery it wants to build on the coast of British Columbia would be the “world’s greenest.”
Miami, a low-lying city literally on the front lines of ACD impacts, is being inundated by rising sea levels as its predominantly Republican leadership – made up of ACD deniers – chooses to ignore the facts and continue forward with major coastal construction projects.
Back to reality, the BBC recently ordered its journalists to cease giving any more TV airtime to ACD deniers.
Benton County, Oregon, has created a Climate Change Adaptation Plan that provides strategies for the communities there to deal with future impacts of ACD.
Despite the millions of dollars annually being pumped into ACD denial campaigns, a recent poll shows that by a 2-to-1 margin, Americans would be willing to pay more to combat ACD impacts, and most would also vote to support a candidate who aims to address the issue.
Another recent report on the economic costs that ACD is expected to generate in the United States pegged losses well into the hundreds of billions of dollars by 2100. Property losses from hurricanes and coastal storms are expected to total around $35 billion, crop yields are expected to decline by 14 percent, and electricity costs to keep people cool are expected to rise by $12 billion annually, to name a few examples.
The bipartisan report also noted that more than a million coastal homes and businesses could flood repeatedly before ultimately being destroyed.
The World Council of Churches, a group that represents more than half a billion Christians, announced that it would pull all its investments out of fossil fuels because the investments were no longer “ethical.”
US Interior Secretary Sally Jewell told reporters recently that she is witnessing ACD’s impacts in practically every national park she visits.
A June report by the UN University’s Institute for Environment and Human Security warned that ACD-driven mass migrations are already happening, and urged countries to immediately create adaptation plans to resettle populations and avoid conflict.
For anyone who wonders how much impact humans have on the planet on a daily basis, take a few moments to ponder the impact of commercial airline emissions alone over a 24-hour period by watching this astounding video.
Lastly, a landmark study released in June by an international group of scientists concluded that Earth is on the brink of a mass extinction event comparable in scale to that which caused the dinosaurs to go extinct 65 million years ago.
The study says extinction rates are now 1,000 times higher than normal, and pegged ACD as the driving cause.
To dodge extinction, the three-banded armadillo gets a national conservation plan
The tatu-bola (three-banded armadillo), chosen as the mascot of the World Cup in Brazil, is an animal threatened with extinction due to the destruction of its habitats in the Caatinga and the Cerrado, in addition to suffering from hunting. The small mammal is also at risk of losing an important ally in the fight for its preservation: the work carried out by the Fundação Museu do Homem Americano (FUMDHAM) in Serra da Capivara National Park, in Piauí, is threatened by a lack of funds.
According to Professor Rute Maria Gonçalves de Andrade, of FUMDHAM’s fiscal council, all the work carried out over more than 40 years is under threat; besides the animal species that will be left unprotected, more than 100 people could lose their jobs. “Unfortunately, the foundation has not been receiving funds from ICMBio, nor from IPHAN – even though the Park has been declared by UNESCO a Natural and Historical Heritage of Humanity site – in sufficient amounts or on the schedules needed to carry out this management,” she lamented.
Through its Communications Division, the Instituto Chico Mendes de Conservação da Biodiversidade (ICMBio) stated that “there has been no interruption of transfers to the Fundação Museu do Homem Americano. In 2014, R$ 400,000 in environmental compensation funds were transferred, and a further R$ 300,000, from a parliamentary amendment, is expected.” The Instituto do Patrimônio Histórico e Artístico Nacional (IPHAN), an agency of the Ministry of Culture (MinC), had not commented by the time this edition closed, owing to a strike by its staff.
Since the 1990s, the FUMDHAM team, led by its president, the archaeologist Niède Guidon, has carried out actions to preserve the local fauna, which has been decisive in keeping the population densities of many vertebrates in the park in balance. This work consists of keeping the Park’s natural water reservoirs, known as caldeirões, clean and full – along with others that were built – so that the animals have water during the dry season.
For Rute, this is the ideal moment to call attention to efforts to preserve Brazilian fauna and flora. “Perhaps it would be important for the World Cup to be the starting point of a major national campaign in favor of the Conservation Units that, against great odds, preserve the tatu-bola,” she suggested.
So far, the conservation work on the tatu-bola – whose scientific name is Tolypeutes tricinctus – is yielding good results. According to Professor Rute, it is one of the animals that make up the Park’s fauna. “Enforcement against hunting and the work of supplying water during the dry season have made it possible to maintain the population of this mammal species, which is endemic to the Caatinga biome,” she said.
National Conservation Plan – The biologist Leandro Jerusalinsky, coordinator of the Centro Nacional de Pesquisa e Conservação de Primatas Brasileiros (CPB/ICMBio) in João Pessoa (PB), is part of the National Action Plan for the Conservation of the Tatu-bola (PAN Tatu-bola). The idea is to consolidate a strategy to reduce the extinction risk of two species. “The plan’s overall objective is, within five years, to reduce the extinction risk of Tolypeutes tricinctus, which inhabits the Caatinga and the Cerrado, to the Vulnerable category, and to adequately assess the conservation status of Tolypeutes matacus, found in the Pantanal and the Cerrado,” he explained.
According to Jerusalinsky, the PAN Tatu-bola will help conserve these species by clearly establishing the priority actions for reversing or mitigating the main impacts on them: habitat loss and fragmentation, hunting, and lack of knowledge. “In this way, the various institutions involved in research, enforcement and environmental licensing, for example, will be able to adopt these actions in their work, helping us to understand and protect the three-banded armadillos,” he added.
The PAN was drawn up by a group of specialists on these species based at teaching and research institutions such as the Universidade Federal de Minas Gerais (UFMG), the Universidade Federal de Sergipe (UFS), the Universidade de São Paulo (USP), the Universidade do Vale do Rio São Francisco (UNIVASF), the Universidade Federal da Paraíba (UFPB), the Universidade Estadual do Mato Grosso (UNEMAT) and EMBRAPA Pantanal, in addition to ICMBio itself.
Summary: Was it humankind or climate change that caused the extinction of a considerable number of large mammals about the time of the last Ice Age? Researchers have carried out the first global analysis of the extinction of the large animals, and the conclusion is clear — humans are to blame. The study unequivocally points to humans as the cause of the mass extinction of large animals all over the world during the course of the last 100,000 years.
Skeleton of a giant ground sloth at the Los Angeles County Museum of Natural History, circa 1920. Credit: Public Domain, via Wikimedia Commons
Was it humankind or climate change that caused the extinction of a considerable number of large mammals about the time of the last Ice Age? Researchers at Aarhus University have carried out the first global analysis of the extinction of the large animals, and the conclusion is clear — humans are to blame. A new study unequivocally points to humans as the cause of the mass extinction of large animals all over the world during the course of the last 100,000 years.
“Our results strongly underline the fact that human expansion throughout the world has meant an enormous loss of large animals,” says Postdoctoral Fellow Søren Faurby, Aarhus University.
Was it due to climate change?
For almost 50 years, scientists have been discussing what led to the mass extinction of large animals (also known as megafauna) during and immediately after the last Ice Age.
One of two leading theories states that the large animals became extinct as a result of climate change. There were significant climate changes, especially towards the end of the last Ice Age — just as there had been during previous Ice Ages — and this meant that many species no longer had the potential to find suitable habitats and they died out as a result. However, because the last Ice Age was just one in a long series of Ice Ages, it is puzzling that a corresponding extinction of large animals did not take place during the earlier ones.
Theory of overkill
The other theory concerning the extinction of the animals is ‘overkill’. Modern man spread from Africa to all parts of the world during the course of a little more than the last 100,000 years. In simple terms, the overkill hypothesis states that modern man exterminated many of the large animal species on arrival in the new continents. This was either because their populations could not withstand human hunting, or for indirect reasons such as the loss of their prey, which were also hunted by humans.
First global mapping
In their study, the researchers produced the first global analysis and relatively fine-grained mapping of all the large mammals (with a body weight of at least 10 kg) that existed during the period 132,000-1,000 years ago — the period during which the extinction in question took place. They were thus able to study the geographical variation in the percentage of large species that became extinct on a much finer scale than previously achieved.
The researchers found that a total of 177 species of large mammals disappeared during this period — a massive loss. Africa ‘only’ lost 18 species and Europe 19, while Asia lost 38 species, Australia and the surrounding area 26, North America 43 and South America a total of 62 species of large mammals.
The extinction of the large animals took place in virtually all climate zones and affected cold-adapted species such as woolly mammoths, temperate species such as forest elephants and giant deer, and tropical species such as giant cape buffalo and some giant sloths. It was observed on virtually every continent, although a particularly large number of animals became extinct in North and South America, where species including sabre-toothed cats, mastodons, giant sloths and giant armadillos disappeared, and in Australia, which lost animals such as giant kangaroos, giant wombats and marsupial lions. There were also fairly large losses in Europe and Asia, including a number of elephants, rhinoceroses and giant deer.
Weak climate effect
The results show that the correlation between climate change — i.e. the variation in temperature and precipitation between glacials and interglacials — and the loss of megafauna is weak, and can only be seen in one sub-region, namely Eurasia (Europe and Asia). “The significant loss of megafauna all over the world can therefore not be explained by climate change, even though it has definitely played a role as a driving force in changing the distribution of some species of animals. Reindeer and polar foxes were found in Central Europe during the Ice Age, for example, but they withdrew northwards as the climate became warmer,” says Postdoctoral Fellow Christopher Sandom, Aarhus University.
Extinction linked to humans
On the other hand, the results show a very strong correlation between the extinction and the history of human expansion. “We consistently find very large rates of extinction in areas where there had been no contact between wildlife and primitive human races, and which were suddenly confronted by fully developed modern humans (Homo sapiens). In general, at least 30% of the large species of animals disappeared from all such areas,” says Professor Jens-Christian Svenning, Aarhus University.
The researchers’ geographical analysis thereby points very strongly at humans as the cause of the loss of most of the large animals.
The results also draw a straight line from the prehistoric extinction of large animals via the historical regional or global extermination due to hunting (American bison, European bison, quagga, Eurasian wild horse or tarpan, and many others) to the current critical situation for a considerable number of large animals as a result of poaching and hunting (e.g. the rhino poaching epidemic).
C. Sandom, S. Faurby, B. Sandel, J.-C. Svenning. Global late Quaternary megafauna extinctions linked to humans, not climate change. Proceedings of the Royal Society B: Biological Sciences, 2014; 281 (1787): 20133254. DOI: 10.1098/rspb.2013.3254
Human action on nature is as destructive as the phenomenon that caused the end of the dinosaurs
Human action has accelerated species extinction a thousandfold, according to a study published this week in the journal Science. New technologies for mapping deforestation and habitat destruction have allowed a revision of the figures that had served as the basis for international meetings such as the Convention on Biological Diversity (CBD).
Without urgent action, the impact humans are having on the environment will bring about the sixth great mass extinction in the planet’s history – one of the previous ones was the disappearance of the dinosaurs.
It is not simple to estimate how many species have gone extinct since the beginning of the 20th century, since, by some estimates, only 3.6% of them are known to scientists. To calculate the speed of extinctions, the scientists built a mathematical model based on the percentage of known species that have disappeared relative to the total, and extrapolated the results.
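Rate calculations of this kind are commonly summarized as extinctions per million species-years (E/MSY). The following is a rough, purely illustrative sketch of the arithmetic, not the authors’ actual model; the input numbers are invented, and the 0.1 E/MSY background rate is simply a commonly cited reference value.

```python
# Illustrative only: a toy extinction-rate calculation expressed in E/MSY
# (extinctions per million species-years). The inputs are hypothetical
# placeholders, not figures from the study discussed above.

def extinctions_per_msy(extinctions: int, species_monitored: int, years: float) -> float:
    """Scale observed extinctions to a rate per million species-years."""
    species_years = species_monitored * years
    return extinctions / species_years * 1_000_000

observed = extinctions_per_msy(extinctions=100, species_monitored=40_000, years=100)  # 25.0
background = 0.1  # a commonly cited pre-human "background" rate, in E/MSY
print(f"observed: {observed:.1f} E/MSY, roughly {observed / background:.0f}x background")
```

The ratio between the observed and background rates is what statements like “a thousand times faster than normal” refer to.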
The study argues that the Red List of Threatened Species should be radically expanded – the publication would then include 160,000 species at risk of extinction, instead of the 70,000 it covers today. Updating the list could lead to the creation of new environmental conservation policies.
“Today we have new technologies to detect deforestation and to analyze the movements of each species,” says Clinton Jones, co-author of the study and a researcher at the Instituto de Pesquisas Ecológicas (Ipê) in Brazil. “Most species live outside protected areas, so understanding the changes in their ecosystems is vital. It is an opportunity to update maps of the impacts on, and threats to, each area.”
Stuart Limm, a co-author of the survey and professor of Conservation Ecology at Duke University (USA), points out that there is still a “crater” between what researchers know and what they do not know about the planet’s biodiversity. Technology, however, is filling that gap, as well as extending access to scientific data to amateurs. Online databases and even smartphone apps make it easier to identify species.
“When we combine land-use information with the observations of millions of amateur scientists, we can track biodiversity and its threats much better,” he notes. “Even so, we need to develop still more sophisticated technologies to know what the species extinction rate really is.”
Humans have eliminated top predators and other large species. The African savannas, for example, once covered 13.5 million km²; today, lions have only 1 million km² at their disposal. It is one example of how shrinking ranges contribute to extinctions.
“We know that many terrestrial species occupy small areas, some smaller than the state of Rio de Janeiro,” warns Jones. “Species confined to small ranges are more vulnerable to extinction. We need to concentrate our conservation projects on these places.”
One of the most critical spots is the Atlantic Forest, one of the 34 regions of the planet with the largest number of endemic species – that is, species that occur nowhere else – facing the risk of extinction.
“The remaining forest is degraded, and there are many endemic species in all of its environments, from the soil to the mountains,” Jones stresses. “Its preservation should be a global priority.”
The oceans are even less well protected. Only about 2% of their species are thought to be known.
The study’s coordinator said the situation has not worsened, because “the universe analyzed has quintupled”; the Minister of the Environment announced a package of measures for Brazilian fauna
The study Avaliação do Risco de Extinção da Fauna Brasileira (Assessment of the Extinction Risk of Brazilian Fauna), carried out by 929 specialists between 2010 and 2014, shows that 1,051 animal species are currently threatened with extinction. In the first edition, in 2003, there were 627.
“The situation has not worsened. The universe analyzed has quintupled, hence the larger list,” said Marcelo Marcelino, director of biodiversity research, assessment and monitoring at the Instituto Chico Mendes, who coordinated the work.
In all, 7,647 species were assessed. Of that total, 11 were considered extinct and 121 saw their status worsen. The situation deteriorated, for example, for the tatu-bola. “Its habitat, the Caatinga, has been shrinking. Moreover, the species is very vulnerable to hunting,” he added. For another 126 species, the threat has been reduced but still persists.
The work shows that 77 species have moved out of the at-risk category – among them the humpback whale. In 2012, 15,000 individuals were counted, significantly more than the 9,000 found in 2008. Two species of uakari monkeys and the peixe-grama fish also left the danger category.
The figures were presented this Thursday, the 22nd, by the Minister of the Environment, Izabella Teixeira. Alongside the assessment, she announced a package of measures to try to preserve Brazilian fauna. Among the actions is a five-year moratorium on fishing for and selling piracatinga.
The rule, which takes effect in January 2015, is intended to protect the Amazon river dolphin and caimans, which are used as bait. “We are going to create a group to try to find alternatives to this practice,” said Izabella. The accidental catching and sale of the hammerhead and lombo-preto sharks are also banned, effective immediately. Both measures were adopted in partnership with the Ministry of Fisheries and Aquaculture.
Izabella also announced the creation of an enforcement task force – formed by Ibama, ICMBio and the Federal Police – to combat the hunting of threatened fauna such as the Amazonian manatee, the pink river dolphin, Lear’s macaw, the jaguar, the tatu-bola, sharks and freshwater rays, as well as the extension of the bolsa verde (green grant) to economically vulnerable communities in regions considered important for the conservation of species threatened with extinction. The grant will be R$ 100 per month.
Bringing extinct animals back to life is really happening — and it’s going to be very, very cool. Unless it ends up being very, very bad.
By Nathaniel Rich, Feb. 27, 2014
Credit: Stephen Wilkes for The New York Times; Woolly Mammoth, Royal BC Museum, Victoria, British Columbia
The first time Ben Novak saw a passenger pigeon, he fell to his knees and remained in that position, speechless, for 20 minutes. He was 16. At 13, Novak vowed to devote his life to resurrecting extinct animals. At 14, he saw a photograph of a passenger pigeon in an Audubon Society book and “fell in love.” But he didn’t know that the Science Museum of Minnesota, which he was then visiting with a summer program for North Dakotan high-school students, had them in their collection, so he was shocked when he came across a cabinet containing two stuffed pigeons, a male and a female, mounted in lifelike poses. He was overcome by awe, sadness and the birds’ physical beauty: their bright auburn breasts, slate-gray backs and the dusting of iridescence around their napes that, depending on the light and angle, appeared purple, fuchsia or green. Before his chaperones dragged him out of the room, Novak snapped a photograph with his disposable camera. The flash was too strong, however, and when the film was processed several weeks later, he was haunted to discover that the photograph hadn’t developed. It was blank, just a flash of white light.
In the decade since, Novak has visited 339 passenger pigeons — at the Burke Museum in Seattle, the Carnegie Museum of Natural History in Pittsburgh, the American Museum of Natural History in New York and Harvard’s Ornithology Department, which has 145 specimens, including eight pigeon corpses preserved in jars of ethanol, 31 eggs and a partly albino pigeon. There are 1,532 passenger-pigeon specimens left on Earth. On Sept. 1, 1914, Martha, the last captive passenger pigeon, died at the Cincinnati Zoo. She outlasted George, the penultimate survivor of her species and her only companion, by four years. As news spread of her species’ imminent extinction, Martha became a minor tourist attraction. In her final years, whether depressed or just old, she barely moved. Underwhelmed zoo visitors threw fistfuls of sand at her to elicit a reaction. When she finally died, her body was taken to the Cincinnati Ice Company, frozen in a 300-pound ice cube and shipped by train to the Smithsonian Institution, where she was stuffed and mounted and visited, 99 years later, by Ben Novak.
The fact that we can pinpoint the death of the last known passenger pigeon is one of many peculiarities that distinguish the species. Many thousands of species go extinct every year, but we tend to be unaware of their passing, because we’re unaware of the existence of most species. The passenger pigeon’s decline was impossible to ignore, because as recently as the 1880s, it was the most populous vertebrate in North America. It made up as much as 40 percent of the continent’s bird population. In “A Feathered River Across the Sky,” Joel Greenberg suggests that the species’ population “may have exceeded that of every other bird on earth.” In 1860, a naturalist observed a single flock that he estimated to contain 3,717,120,000 pigeons. By comparison, there are currently 260 million rock pigeons in existence. A single passenger-pigeon nesting ground once occupied an area as large as 850 square miles, or 37 Manhattans.
The species’ incredible abundance was an enticement to mass slaughter. The birds were hunted for their meat, which was sold by the ton (at the higher end of the market, Delmonico’s served pigeon cutlets); for their oil and feathers; and for sport. Even so, their rapid decline — from approximately five billion to extinction within a few decades — baffled most Americans. Science magazine published an article claiming that the birds had all fled to the Arizona desert. Others hypothesized that the pigeons had taken refuge in the Chilean pine forests or somewhere east of Puget Sound or in Australia. Another theory held that every passenger pigeon had joined a single megaflock and disappeared into the Bermuda Triangle.
Stewart Brand, who was born in Rockford, Ill., in 1938, has never forgotten the mournful way his mother spoke about passenger pigeons when he was a child. During summers, the Brands vacationed near the top of Michigan’s mitten, not far from Pigeon River, one of the hundreds of American places named after the species. (Michigan alone has four Pigeon Rivers, four Pigeon Lakes, two Pigeon Creeks, Pigeon Cove, Pigeon Hill and Pigeon Point). Old-timers told stories about the pigeon that to Brand assumed a mythic quality. They said that the flocks were so large they blotted out the sun.
Brand’s compassion for the natural world has taken many diverse forms, but none more broadly influential than the Whole Earth Catalog, which he founded in 1968 and edited until 1984. Brand has said that the catalog, a dense compendium of environmentalist tools and practices, among other things, “encouraged individual power.” As it turned out, Whole Earth’s success gave Brand more power than most individuals, allowing him intimate access to the world’s most imaginative thinkers and patrons wealthy enough to finance those thinkers’ most ambitious ideas. In the last two decades, several of these ideas have materialized under the aegis of the Long Now Foundation, a nonprofit organization that Brand helped to establish in 1996 to support projects designed to inspire “long-term responsibility.” Among these projects are a 300-foot-tall clock designed to tick uninterruptedly for the next 10,000 years, financed by a $42 million investment from the Amazon.com founder Jeff Bezos and situated inside an excavated mountain that Bezos owns near Van Horn, Tex.; and a disk of pure nickel inscribed with 1,500 languages that has been mounted on the Rosetta space probe, which this year is scheduled to land on Comet 67P/Churyumov-Gerasimenko, 500 million miles from earth.
Three years ago Brand invited the zoologist Tim Flannery, a friend, to speak at Long Now’s Seminar About Long-Term Thinking, a monthly series held in San Francisco. The theme of the talk was “Is Mass Extinction of Life on Earth Inevitable?” In the question-and-answer period that followed, Brand, grasping for a silver lining, mentioned a novel approach to ecological conservation that was gaining wider public attention: the resurrection of extinct species, like the woolly mammoth, aided by new genomic technologies developed by the Harvard molecular biologist George Church. “It gives people hope when rewilding occurs — when the wolves come back, when the buffalo come back,” Brand said at the seminar. He paused. “I suppose we could get passenger pigeons back. I hadn’t thought of that before.”
Brand became obsessed with the idea. Reviving an extinct species was exactly the kind of ambitious, interdisciplinary and slightly loopy project that appealed to him. Three weeks after his conversation with Flannery, Brand sent an email to Church and the biologist Edward O. Wilson:
Dear Ed and George . . .
The death of the last passenger pigeon in 1914 was an event that broke the public’s heart and persuaded everyone that extinction is the core of humanity’s relation with nature.
George, could we bring the bird back through genetic techniques? I recall chatting with Ed in front of a stuffed passenger pigeon at the Comparative Zoology Museum [at Harvard, where Wilson is a faculty emeritus], and I know of other stuffed birds at the Smithsonian and in Toronto, presumably replete with the requisite genes. Surely it would be easier than reviving the woolly mammoth, which you have espoused.
The environmental and conservation movements have mired themselves in a tragic view of life. The return of the passenger pigeon could shake them out of it — and invite them to embrace prudent biotechnology as a Green tool instead of menace in this century. . . . I would gladly set up a nonprofit to fund the passenger pigeon revival. . . .
Wild scheme. Could be fun. Could improve things. It could, as they say, advance the story.
Passenger Pigeon: Extinct 1914. Billions of the pigeons were alive just a few decades earlier. Like the other animals shown here, it has been proposed for de-extinction projects. Credit: Stephen Wilkes for The New York Times. Passenger pigeon, Museum of Comparative Zoology, Harvard University.
What do you think?
In less than three hours, Church responded with a detailed plan to return “a flock of millions to billions” of passenger pigeons to the planet.
In February 2012, Church hosted a symposium at Harvard Medical School called “Bringing Back the Passenger Pigeon.” Church gave a demonstration of his new genome-editing technology, and other biologists and avian specialists expressed enthusiasm for the idea. “De-extinction went from concept to potential reality right before our eyes,” said Ryan Phelan, Brand’s wife, an entrepreneur who founded an early consumer medical-genetics company. “We realized that we could do it not only for the passenger pigeon, but for other species. There was so much interest and so many ideas that we needed to create an infrastructure around it. It was like, ‘Oh, my God, look at what we’ve unleashed.’ ” Phelan, 61, became executive director of the new project, which they named Revive & Restore.
Several months later, the National Geographic Society hosted a larger conference to debate the scientific and ethical questions raised by the prospect of “de-extinction.” Brand and Phelan invited 36 of the world’s leading genetic engineers and biologists, among them Stanley Temple, a founder of conservation biology; Oliver Ryder, director of the San Diego Zoo’s Frozen Zoo, which stockpiles frozen cells of endangered species; and Sergey Zimov, who has created an experimental preserve in Siberia called Pleistocene Park, which he hopes to populate with woolly mammoths.
To Brand’s idea that the pigeon project would provide “a beacon of hope for conservation,” conference attendees added a number of ecological arguments in support of de-extinction. Just as the loss of a species decreases the richness of an ecosystem, the addition of new animals could achieve the opposite effect. The grazing habits of mammoths, for instance, might encourage the growth of a variety of grasses, which could help to protect the Arctic permafrost from melting — a benefit with global significance, as the Arctic permafrost contains two to three times as much carbon as the world’s rain forests. “We’ve framed it in terms of conservation,” Brand told me. “We’re bringing back the mammoth to restore the steppe in the Arctic. One or two mammoths is not a success. 100,000 mammoths is a success.”
A less scientific, if more persuasive, argument was advanced by the ethicist Hank Greely and the law professor Jacob Sherkow, both of Stanford. De-extinction should be pursued, they argued in a paper published in Science, because it would be really cool. “This may be the biggest attraction and possibly the biggest benefit of de-extinction. It would surely be very cool to see a living woolly mammoth.”
Ben Novak needed no convincing. When he heard that Revive & Restore had decided to resurrect the passenger pigeon, he sent an email to Church, who forwarded it to Brand and Phelan. “Passenger pigeons have been my passion in life for a very long time,” Novak wrote. “Any way I can be part of this work would be my honor.”
Behind the biohazard signs and double-encoded security doors that mark the entrance of the paleogenomics lab at the University of California, Santa Cruz, I found no mastodon tusks, dinosaur eggs or mosquitoes trapped in amber — only a sterile, largely empty room in which Novak and several graduate students were busy checking their Gmail accounts. The only visible work in progress was Metroplex, a giant Transformers figurine that Novak constructed, which was hunched over his keyboard like a dead robot.
Novak, who is 27, hastened to assure me that the construction of the passenger-pigeon genome was also underway. In fact, it had been for years. Beth Shapiro, one of the scientists who runs the lab, began to sequence the species’ DNA in 2001, a decade before Brand had his big idea. The sequencing process is now in its data-analysis phase, which leaves Novak, who studied ecology in college, but has no advanced scientific degrees, time to consult on academic papers about de-extinction, write his own paper about the ecological relationship between passenger pigeons and chestnut trees and correspond with the scientists behind the world’s other species-resurrection efforts. These include the Uruz project, which is selectively breeding cattle to create a new subspecies that resembles aurochs, a form of wild ox, extinct since 1627; a group hoping to use genetic methods to revive the heath hen, extinct since 1932; and the Lazarus Project, which is trying to revive an Australian frog, extinct for 30 years, that gave birth through its mouth.
As Brand and Phelan’s only full-time employee at Revive & Restore, Novak fields emails sent by scientists eager to begin work on new candidates for de-extinction, like the California grizzly bear, the Carolina parakeet, the Tasmanian tiger, Steller’s sea cow and the great auk, which hasn’t been seen since 1844, when the last two known members of its species were strangled by Icelandic fishermen. Because de-extinction requires collaboration from a number of different disciplines, Phelan sees Revive & Restore as a “facilitator,” helping to connect geneticists, molecular biologists, synthetic biologists and conservation biologists. She also hopes that Revive & Restore’s support will enable experimental projects to proceed. She and Novak realize that the new discipline of de-extinction will advance regardless of their involvement, but, she says, “We just want it to happen responsibly.”
When Novak joined Shapiro’s lab, he knew nothing about Santa Cruz and nobody there. A year later, apart from an occasional dinner on the Brands’ tugboat in Sausalito, little has changed. Novak is largely left alone with his thoughts and his dead animals. But it has always been this way for Novak, who grew up in a house three miles from his closest neighbor, halfway between Williston, the eighth-largest city in North Dakota, and Alexander, which has a population of 269. As a boy, Novak often took solitary hikes through the badlands near his home, exploring a vast petrified forest that runs through the Sentinel Butte formation. Fifty million years ago, that part of western North Dakota resembled the Florida Everglades. Novak frequently came across vertebrae, phalanges and rib fragments of extinct crocodiles and champsosaurs.
This was two hours north of Elkhorn Ranch, where Theodore Roosevelt developed the theories about wildlife protection that led to the preservation of 230 million acres of land. The local schools emphasized conservation in their science classes. In sixth grade, Novak was astonished to learn that he was living in the middle of a mass extinction. (Scientists predict that changes made by human beings to the composition of the atmosphere could kill off a quarter of the planet’s mammal species, a fifth of its reptiles and a sixth of its birds by 2050.) “I felt a certain amount of solidarity with these species,” he told me. “Maybe because I spent so much time alone.”
Great Auk: Not seen since 1844, when Icelandic fishermen strangled the last known survivors. Credit: Stephen Wilkes for The New York Times. Great Auk, Museum of Comparative Zoology, Harvard University.
After graduating from Montana State University in Bozeman, Novak applied to study under Beth Shapiro, who had already begun to sequence passenger-pigeon DNA. He was rejected. “I appreciated his devotion to the bird,” she told me, “but I worried that his zeal might interfere with his ability to do serious science.” Novak instead entered a graduate program at the McMaster Ancient DNA Center in Hamilton, Ontario, where he worked on the sequencing of mastodon DNA. But he remained obsessed by passenger pigeons. He decided that, if he couldn’t join Shapiro’s lab, he would sequence the pigeon’s genome himself. He needed tissue samples, so he sent letters to every museum he could find that possessed the stuffed specimens. He was denied more than 30 times before Chicago’s Field Museum sent him a tiny slice of a pigeon’s toe. A lab in Toronto conducted the sequencing for a little more than $2,500, which Novak raised from his family and friends. He had just begun to analyze the data when he learned about Revive & Restore.
After Novak was hired, Shapiro offered him office space at the U.C.S.C. paleogenomics lab, where he could witness the sequencing work as it happened. Now, when asked what he does for a living, Novak says that his job is to resurrect the passenger pigeon.
Novak is tall, solemn, polite and stiff in conversation, until the conversation turns to passenger pigeons, which it always does. One of the few times I saw him laugh was when I asked whether de-extinction might turn out to be impossible. He reminded me that it has already happened. More than 10 years ago, a team that included Alberto Fernández-Arias (now a Revive & Restore adviser) resurrected a bucardo, a subspecies of mountain goat also known as the Pyrenean ibex, that went extinct in 2000. The last surviving bucardo was a 13-year-old female named Celia. Before she died — her skull was crushed by a falling tree — Fernández-Arias extracted skin scrapings from one of her ears and froze them in liquid nitrogen. Using the same cloning technology that created Dolly the sheep, the first cloned mammal, the team used Celia’s DNA to create embryos that were implanted in the wombs of 57 goats. One of the does successfully brought her egg to term on July 30, 2003. “To our knowledge,” wrote the scientists, “this is the first animal born from an extinct subspecies.” But it didn’t live long. After struggling to breathe for several minutes, the kid choked to death.
This cloning method, called somatic cell nuclear transfer, can be used only on species for which we have cellular material. For species like the passenger pigeon that had the misfortune of going extinct before the advent of cryopreservation, a more complicated process is required. The first step is to reconstruct the species’ genome. This is difficult, because DNA begins to decay as soon as an organism dies. The DNA also mixes with the DNA of other organisms with which it comes into contact, like fungus, bacteria and other animals. If you imagine a strand of DNA as a book, then the DNA of a long-dead animal is a shuffled pile of torn pages, some of the scraps as long as a paragraph, others a single sentence or just a few words. The scraps are not in the right order, and many of them belong to other books. And the book is an epic: The passenger pigeon’s genome is about 1.2 billion base pairs long. If you imagine each base pair as a word, then the book of the passenger pigeon would be four million pages long.
There is a shortcut. The genome of a closely related species will have a high proportion of identical DNA, so it can serve as a blueprint, or “scaffold.” The passenger pigeon’s closest genetic relative is the band-tailed pigeon, which Shapiro is now sequencing. By comparing the fragments of passenger-pigeon DNA with the genomes of similar species, researchers can assemble an approximation of an actual passenger-pigeon genome. How close an approximation, it will be impossible to know. As with any translation, there may be errors of grammar, clumsy phrases and perhaps a few missing passages, but the book will be legible. It should, at least, tell a good story.
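To make the scaffold idea concrete, here is a minimal, purely illustrative Python sketch. The sequences, placement rule and mismatch threshold are all invented for the example; real ancient-DNA pipelines use dedicated aligners and far more careful handling of damage, contamination and repeats. The sketch only shows the logic described above: each surviving fragment is placed against the relative's genome, fragments that match too poorly are set aside as pages from "other books," and the reconstructed sequence is read off as a base-by-base vote, falling back to the scaffold wherever no fragment covers a position.

```python
# Toy reference-guided ("scaffold") assembly: place short, degraded fragments
# against a close relative's genome and read off a consensus. Illustration only;
# all sequences and thresholds below are invented.
from collections import Counter

def place_fragment(fragment: str, scaffold: str) -> tuple[int, int]:
    """Return (best_offset, mismatches) for the fragment against the scaffold."""
    best = (0, len(fragment) + 1)
    for offset in range(len(scaffold) - len(fragment) + 1):
        window = scaffold[offset:offset + len(fragment)]
        mismatches = sum(a != b for a, b in zip(fragment, window))
        if mismatches < best[1]:
            best = (offset, mismatches)
    return best

def consensus(scaffold: str, fragments: list[str], max_mismatch_frac: float = 0.2) -> str:
    """Vote base-by-base from placed fragments; fall back to the scaffold base."""
    votes = [Counter() for _ in scaffold]
    for frag in fragments:
        offset, mismatches = place_fragment(frag, scaffold)
        if mismatches / len(frag) > max_mismatch_frac:
            continue  # too different -- likely contamination from another organism
        for i, base in enumerate(frag):
            votes[offset + i][base] += 1
    return "".join(v.most_common(1)[0][0] if v else ref
                   for v, ref in zip(votes, scaffold))

if __name__ == "__main__":
    band_tailed = "ACGTTGCAAGGCTTACGGATCCGTA"                      # stand-in for the relative's genome
    ancient_reads = ["TTGCATGG", "GGCTTACG", "CCGTA", "AAAAAAA"]  # last read is "foreign" and gets dropped
    print(consensus(band_tailed, ancient_reads))
```

On a real genome of 1.2 billion bases this brute-force placement would be hopelessly slow; the point is only the shape of the procedure, not its efficiency.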
Shapiro hopes to complete this part of the process in the coming months. At that point, the researchers will have, on their hard drives, a working passenger-pigeon genome. If you opened the file on a computer screen, you would see a chain of 1.2 billion letters, all of them A, G, C or T. Shapiro hopes to publish an analysis of the genome by Sept. 1, in time for the centenary of Martha’s death.
Woolly Mammoth Became extinct about 4,000 years ago. Credit Stephen Wilkes for The New York Times; Woolly Mammoth, Royal BC Museum, Victoria, British Columbia
That, unfortunately, is the easy part. Next the genome will have to be inscribed into a living cell. This is even more complicated than it sounds. Molecular biologists will begin by trying to culture germ cells from a band-tailed pigeon. Cell culturing is the process by which living tissue is made to grow in a petri dish. Bird cells can be especially difficult to culture. They strongly prefer not to exist outside of a body. “For birds,” Novak said, “this is the hump to get over.” But it is largely a question of trial and error — a question, in other words, of time, which Revive & Restore has in abundance.
Should scientists succeed in culturing a band-tailed-pigeon germ cell, they will begin to tinker with its genetic code. Biologists describe this as a “cut-and-paste job.” They will replace chunks of band-tailed-pigeon DNA with synthesized chunks of passenger-pigeon DNA, until the cell’s genome matches their working passenger-pigeon genome. They will be aided in this process by a fantastical new technology, invented by George Church, with the appropriately runic name of MAGE (Multiplex Automated Genome Engineering). MAGE is nicknamed the “evolution machine” because it can introduce the equivalent of millions of years of genetic mutations within minutes. After MAGE works its magic, scientists will have in their petri dishes living passenger-pigeon cells, or at least what they will call passenger-pigeon cells.
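A small, hypothetical sketch of the bookkeeping behind that "cut-and-paste job": given the band-tailed-pigeon sequence and the reconstructed passenger-pigeon sequence for the same region, list the stretches that would have to be swapped. Real genome editing with tools like MAGE works on living cells and is vastly more involved; this only illustrates the comparison step, on invented, already-aligned sequences of equal length.

```python
# Toy comparison of two aligned sequences: report each contiguous run of
# differences as a (position, old_chunk, new_chunk) edit. Illustration only.
def diff_regions(band_tailed: str, passenger: str) -> list[tuple[int, str, str]]:
    """Return (start, old_chunk, new_chunk) for each contiguous run of differences."""
    assert len(band_tailed) == len(passenger), "toy example assumes aligned, equal-length sequences"
    regions, start = [], None
    for i, (a, b) in enumerate(zip(band_tailed, passenger)):
        if a != b and start is None:
            start = i                      # a differing run begins
        elif a == b and start is not None:
            regions.append((start, band_tailed[start:i], passenger[start:i]))
            start = None                   # the run has ended
    if start is not None:
        regions.append((start, band_tailed[start:], passenger[start:]))
    return regions

if __name__ == "__main__":
    band_tailed = "ACGTTGCAAGGCTTACG"
    passenger   = "ACGTAGCAAGGCTAACG"
    for start, old, new in diff_regions(band_tailed, passenger):
        print(f"position {start}: replace {old!r} with {new!r}")
```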
The biologists would next introduce these living cells into a band-tailed-pigeon embryo. No hocus-pocus is involved here: You chop off the top of a pigeon egg, inject the passenger-pigeon cells inside and cover the hole with a material that looks like Saran wrap. The genetically engineered germ cells integrate into the embryo; into its gonads, to be specific. When the chick hatches, it should look and act like a band-tailed pigeon. But it will have a secret. If it is a male, it carries passenger-pigeon sperm; if it is a female, its eggs are passenger-pigeon eggs. These creatures — band-tailed pigeons on the outside and passenger pigeons on the inside — are called “chimeras” (from the Middle English for “wild fantasy”). Chimeras would be bred with one another in an effort to produce passenger pigeons. Novak hopes to observe the birth of his first passenger-pigeon chick by 2020, though he suspects 2025 is more likely.
At that point, the de-extinction process would move from the lab to the coop. Developmental and behavioral biologists would take over, just in time to answer some difficult questions. Chicks imitate their parents’ behavior. How do you raise a passenger pigeon without parents of its own species? And how do you train band-tailed pigeons to nurture the strange spawn that emerge from their eggs, chicks that, to them, might seem monstrous: an avian Rosemary’s Baby?
Despite the genetic similarity between the two pigeon species, significant differences remain. Band-tailed pigeons are a western bird and migrate vast distances north and south; passenger pigeons lived in the eastern half of the continent and had no fixed migration patterns. In order to ease the transition between band-tailed parents and passenger chicks, a Revive & Restore partner will soon begin to breed a flock of band-tailed pigeons to resemble passenger pigeons. They will try to alter the birds’ diets, migration habits and environment. The behavior of each subsequent generation will more closely resemble that of their genetic cousins. “Eventually,” Novak said, “we’ll have band-tailed pigeons that are faux-passenger-pigeon parents.” As unlikely as this sounds, there is a strong precedent; surrogate species have been used extensively in pigeon breeding.
During the breeding process, small modifications would be made to the genome in order to ensure genetic diversity within the new population. After three to five years, some of the birds would be moved to a large outdoor aviary, where they would be exposed to nature for the first time: trees, weather, bacteria. Small-population biologists will be consulted, as will biologists who study species reintroduction. Other animals would gradually be introduced into the aviary, one at a time. The pigeons would be transferred between aviaries to simulate their hopscotching migratory patterns. Ecologists will study how the birds affect their environment and are affected by it. After about 10 years, some of the birds in the aviary would be set free into the wild, monitored by G.P.S. chips implanted under their skin. The project will be considered a full success when the population in the wild is capable of perpetuating itself without the addition of new pigeons from the aviary. Novak expects this to occur as early as 25 years after the first birds are let into the wild, or 2060. And he hopes that he will be there to witness it.
While Novak’s pigeons are reproducing, Revive & Restore will have embarked on a parallel course with a number of other species, both extinct and endangered. Besides the woolly mammoth, candidates include the black-footed ferret, the Caribbean monk seal, the golden lion tamarin, the ivory-billed woodpecker and the northern white rhinoceros, a species that is down to its final handful of members. For endangered species with tiny populations, scientists would introduce genetic diversity to offset inbreeding. For species threatened by contagion, an effort would be made to fortify their DNA with genes that make them disease-resistant. Millions of North American bats have died in the past decade from white-nose syndrome, a disease named after a deadly fungus that was likely imported from Europe. Many European bat species appear to be immune to the fungus; if the gene responsible for this immunity is identified, one theory holds that it could be synthesized and injected into North American bats. The scientific term for this type of genetic intervention is “facilitated adaptation.” A better name for Revive & Restore would be Revive & Restore & Improve.
This optimistic, soft-focus fantasy of de-extinction, while thrilling to Ben Novak, is disturbing to many conservation biologists, who consider it a threat to their entire discipline and even to the environmental movement. At a recent Revive & Restore conference and in articles appearing in both the popular and academic press since then, they have articulated their litany of criticisms at an increasingly high pitch. In response, particularly in recent months, supporters of de-extinction have more aggressively begun to advance their counterarguments. “We have answers for every question,” Novak told me. “We’ve been thinking about this a long time.”
The first question posed by conservationists addresses the logic of bringing back an animal whose native habitat has disappeared. Why go through all the trouble just to have the animal go extinct all over again? While this criticism is valid for some species, the passenger pigeon should be especially well suited to survive in new habitats, because it had no specific native habitat to begin with. It was an opportunistic eater, devouring a wide range of nuts and acorns and flying wherever there was food.
There is also anxiety about disease. “Pathogens in the environment are constantly evolving, and animals are developing new immune systems,” said Doug Armstrong, a conservation biologist in New Zealand who studies the reintroduction of species. “If you recreate a species genetically and release it, and that genotype is based on a bird from a 100-year-old environment, you probably will increase risk.” A revived passenger pigeon might be a vector for modern diseases. But this concern, said David Haussler, the co-founder of the Genome 10K Project, is overblown. “There’s always this fear that somehow, if we do it, we’re going to accidentally make something horrible, because only nature can really do it right. But nature is totally random. Nature makes monsters. Nature makes threats. Many of the things that are most threatening to us are a product of nature. Revive & Restore is not going to tip the balance in any way.” (Some scientists have speculated that, by competing for acorns with rodents and deer, the passenger pigeon could bring about a decrease in Lyme disease.)
More pressing to conservationists is a practical anxiety: Money. De-extinction is a flashy new competitor for patronage. As the conservationist David Ehrenfeld said at a Revive & Restore conference: “If it works, de-extinction will only target a very few species and is extremely expensive. Will it divert conservation dollars from tried-and-true conservation measures that already work, which are already short of funds?” This argument can be made for any conservation strategy, says the ecologist Josh Donlan, an adviser to Revive & Restore. “In my view,” Donlan wrote in a paper that is scheduled to be published in the forthcoming issue of Frontiers of Biogeography, “[the] conservation strategies are not mutually exclusive — a point conservation scientists tend to overlook.” So far this prediction has held up. Much of the money spent so far for sequencing the passenger-pigeon genome has been provided by Beth Shapiro’s U.C.S.C. research budget. Revive & Restore’s budget, which was $350,000 last year, has been raised largely from tech millionaires who are not known for supporting ecological causes.
De-extinction also poses a rhetorical threat to conservation biologists. The specter of extinction has been the conservation movement’s most powerful argument. What if extinction begins to be seen as a temporary inconvenience? The ecologist Daniel Simberloff raised a related concern. “It’s at best a technofix dealing with a few species,” he told me. “Technofixes for environmental problems are band-aids for massive hemorrhages. To the extent that the public, who will never be terribly well informed on the larger issue, thinks that we can just go and resurrect a species, it is extremely dangerous. . . . De-extinction suggests that we can technofix our way out of environmental issues generally, and that’s very, very bad.”
The extinct heath hen, a candidate for resurrection. Credit Stephen Wilkes for The New York Times. Heath hen: Museum of Comparative Zoology, Harvard University.
Ben Novak — who trails Simberloff in professional stature by a doctorate, hundreds of scientific publications and a pair of lifetime-achievement awards — rejects this view. “This is about an expansion of the field, not a reduction,” he says. “We get asked these big questions, but no one is asking people who work on elephants why they’re not working with giraffes, when giraffes need a lot more conservation work than elephants do. Nobody asks the people who work on rhinos why they aren’t working on the Arctic pollinators that are being devastated by climate change. The panda program rarely gets criticized, even though that project is completely pointless in the grand scheme of biodiversity on this planet, because the panda is a cute animal.” If the success of de-extinction, or even its failure, increases public awareness of the threats of mass extinction, Novak says, then it will have been a triumph.
How will we decide which species to resurrect? Some have questioned the logic of beginning with a pigeon. “Do you think that wealthy people on the East Coast are going to want billions of passenger pigeons flying over their freshly manicured lawns and just-waxed S.U.V.s?” asked Shapiro, whose involvement in the passenger-pigeon project will end once she finishes analyzing its genome. (She is writing a book about the challenges of de-extinction.) In an attempt to develop scientific criteria, the New Zealand zoologist Philip Seddon recently published a 10-point checklist to determine the suitability of any species for revival, taking into account causes of its extinction, possible threats it might face upon resurrection and man’s ability to destroy the species “in the event of unacceptable ecological or socioeconomic impacts.” If passenger pigeons, in other words, turn out to be an environmental scourge — if, following nature’s example, we create a monster — will we be able to kill them off? (The answer: Yes, we’ve done it before.)
But the most visceral argument against de-extinction is animal cruelty. Consider the 56 female mountain goats who were unable to bring to term the deformed bucardo embryos that were implanted in their wombs. Or the bucardo that was born and lived only a few minutes, gasping for breath, before dying of a lung deformity? “Is it fair to do this to these animals?” Shapiro asked. “Is ‘because we feel guilty’ a good-enough reason?” Stewart Brand made a utilitarian counterargument: “We’re going to go through some suffering, because you try a lot of times, and you get ones that don’t take. On the other hand, if you can bring bucardos back, then how many would get to live that would not have gotten to live?”
And, finally, what will the courts make of packs of woolly mammoths and millions of passenger pigeons let loose on the continent? In “How to Permit Your Mammoth,” published in The Stanford Environmental Law Journal, Norman F. Carlin asks whether revived species should be protected by the Endangered Species Act or regulated as a genetically modified organism. He concludes that revived species, “as products of human ingenuity,” should be eligible for patenting.
This question of “human ingenuity” approaches one of the least commented upon but most significant points about de-extinction. The term “de-extinction” is misleading. Passenger pigeons will not rise from the grave. Instead, band-tailed-pigeon DNA will be altered to resemble passenger-pigeon DNA. But we won’t know how closely the new pigeon will resemble the extinct pigeon until it is born; even then, we’ll only be able to compare physical characteristics with precision. Our understanding of the passenger pigeon’s behavior derives entirely from historical accounts. While many of these, including John James Audubon’s chapter on the pigeon in “Ornithological Biography,” are vividly written, few are scientific in nature. “There are a million things that you cannot predict about an organism just from having its genome sequence,” said Ed Green, a biomolecular engineer who works on genome-sequencing technology in the U.C.S.C. paleogenomics lab. Shapiro said: “It’s just one guess. And it’s not even a very good guess.”
Shapiro is no more sanguine about the woolly-mammoth project. “You’re never going to get a genetic clone of a mammoth,” she said. “What’s going to happen, I imagine, is that someone, maybe George Church, is going to insert some genes into the Asian-elephant genome that make it slightly hairier. That would be just a tiny portion of the genome manipulated, but a few years later, you have a thing born that is an elephant, only hairier, and the press will write, ‘George Church has cloned a mammoth!’ ” Church, though he plans to do more than just alter the gene for hairiness, concedes the point. “I would like to have an elephant that likes the cold weather,” he told me. “Whether you call it a ‘mammoth’ or not, I don’t care.”
Tasmanian Tiger Also known as the thylacine, it was last spotted in Tasmania in 1930. Credit Stephen Wilkes for The New York Times. Tasmanian Tiger, Mammalogy Department, American Museum of Natural History.
There is no authoritative definition of “species.” The most widely accepted definition describes a group of organisms that can procreate with one another and produce fertile offspring, but there are many exceptions. De-extinction operates under a different definition altogether. Revive & Restore hopes to create a bird that interacts with its ecosystem as the passenger pigeon did. If the new bird fills the same ecological niche, it will be successful; if not, back to the petri dish. “It’s ecological resurrection, not species resurrection,” Shapiro says. A similar logic informs the restoration of Renaissance paintings. If you visit “The Last Supper” in the refectory of the Convent of Santa Maria delle Grazie in Milan, you won’t see a single speck of paint from the brush of Leonardo da Vinci. You will see a mural with the same proportions and design as the original, and you may feel the same sense of awe as the refectory’s parishioners felt in 1498, but the original artwork disappeared centuries ago. Philosophers call this Theseus’ Paradox, a reference to the ship that Theseus sailed back to Athens from Crete after he had slain the Minotaur. The ship, Plutarch writes, was preserved by the Athenians, who “took away the old planks as they decayed, putting in new and stronger timber in their place.” Theseus’ ship, therefore, “became a standing example among the philosophers . . . one side holding that the ship remained the same, and the other contending that it was not the same.”
What does it matter whether Passenger Pigeon 2.0 is a real passenger pigeon or a persuasive impostor? If the new, synthetically created bird enriches the ecology of the forests it populates, few people, including conservationists, will object. The genetically adjusted birds would hardly be the first aspect of the deciduous forest ecosystem to bear man’s influence; invasive species, disease, deforestation and a toxic atmosphere have engineered forests that would be unrecognizable to the continent’s earliest European settlers. When human beings first arrived, the continent was populated by camels, eight-foot beavers and 550-pound ground sloths. “People grow up with this idea that the nature they see is ‘natural,’ ” Novak says, “but there’s been no real ‘natural’ element to the earth the entire time humans have been around.”
The earth is about to become a lot less “natural.” Biologists have already created new forms of bacteria in the lab, modified the genetic code of countless living species and cloned dogs, cats, wolves and water buffalo, but the engineering of novel vertebrates — of breathing, flying, defecating pigeons — will represent a milestone for synthetic biology. This is the fact that will overwhelm all arguments against de-extinction. Thanks, perhaps, to “Jurassic Park,” popular sentiment already is behind it. (“That movie has done a lot for de-extinction,” Stewart Brand told me in all earnestness.) In a 2010 poll by the Pew Research Center, half of the respondents agreed that “an extinct animal will be brought back.” Among Americans, belief in de-extinction trails belief in evolution by only 10 percentage points. “Our assumption from the beginning has been that this is coming anyway,” Brand said, “so what’s the most benign form it can take?”
What is coming will go well beyond the resurrection of extinct species. For millenniums, we have customized our environment, our vegetables and our animals, through breeding, fertilization and pollination. Synthetic biology offers far more sophisticated tools. The creation of novel organisms, like new animals, plants and bacteria, will transform human medicine, agriculture, energy production and much else. De-extinction “is the most conservative, earliest application of this technology,” says Danny Hillis, a Long Now board member and a prolific inventor who pioneered the technology that is the basis for most supercomputers. Hillis mentioned Marshall McLuhan’s observation that the content of a new medium is the old medium: that each new technology, when first introduced, recreates the familiar technology it will supersede. Early television shows were filmed radio shows. Early movies were filmed stage plays. Synthetic biology, in the same way, may gain widespread public acceptance through the resurrection of lost animals for which we have nostalgia. “Using the tool to recreate old things,” Hillis said, “is a much more comfortable way to get engaged with the power of the tool.”
“By the end of this decade we’ll seem incredibly conservative,” Brand said. “A lot of this stuff is going to become part of the standard tool kit. I would guess that within a decade or two, most of the major conservation organizations will have de-extinction as part of the portfolio of their activities.” He said he hoped to see the birth of a baby woolly mammoth in his lifetime. The opening line of the first Whole Earth Catalog was “We are as gods and might as well get good at it.” Brand has revised this motto to: “We are as gods and HAVE to get good at it.” De-extinction is a good way to practice.
A passion for bringing a lost pigeon back to life is hardly inconsistent with scientific inquiry. Ben Novak insists that he is motivated purely by ecological concerns. “To some people, it might be about making some crazy new pet or zoo animal, but that’s not our organization,” he told me. The scientists who work beside him in the paleogenomics lab — who hear his daily passenger-pigeon rhapsodies — suspect a second motivation. “I’m a biologist, I’ve seen people passionate about animals before,” Andre Soares, a young Brazilian member of Shapiro’s staff, said, “but I’ve never seen anyone this passionate.” He laughed. “It’s not like he ever saw the pigeon flying around. And it’s not like a dinosaur, a massive beast that walked around millions of years ago. No, it’s just a pigeon. I don’t know why he loves them so much.”
I repeated what Novak told me, that the passenger-pigeon project was “all under the framework of conservation.” Soares shook his head. “I think the birds are his thing,” he said.
Ed Green, the biomolecular engineer down the hall, was more succinct. “The passenger pigeon,” he said, “makes Ben want to write poetry.”
Around the world, bees are dying in unprecedented numbers. While scientists hypothesize pesticides and habitat loss are to blame, the exact causes are still unclear. Gardeners and farmers are concerned about the fate of their bee-pollinated food and looking to the scientific community for information about how and why the bee populations are declining.
Unfortunately, money is tight as scientists struggle to gain the funding and resources for extensive bee studies.
Marie Clifford and Susan Waters, graduate researchers at the University of Washington in Seattle, have found a way to get around scarce research funding: citizen scientists. The Urban Pollination Project (UPP), which they co-founded in 2011, takes Seattle community gardeners and trains them to collect data on local bees. By tapping into citizen-scientist efforts, Clifford and Waters can gather data from 35 Seattle community gardens – a scale of research otherwise beyond their own resources and funding.
“Citizen science,” Clifford says, “allows scientists to address much broader scale questions than they might be able to address themselves.”
The citizen scientist gardeners at the Urban Pollination Project measure, count, and weigh tomatoes to understand how varying degrees of pollination affect tomato growth. They also pollinate the tomato flowers using a tuning fork, and are trained in bee identification. Their observations provide insight into what species of bees visit various Seattle community gardens.
Observations like these led to a sighting of the western bumblebee, a native bumblebee thought to be extinct, by bee enthusiast Will Peterman. With citizen scientists performing observations around the city, Clifford and Waters hope to better understand which bees are pollinating our cities.
In about five years, Clifford and Waters hope to have enough data to draw conclusions about what bumblebees need to survive in urban environments, such as how much and what kind of habitat is required. As the project continues, Clifford and Waters want to get more gardeners involved.
Both bumblebees and a 128 Hertz tuning fork vibrate at the perfect frequency to pollinate tomato plants. The vibration can literally “shake” the pollen out of tomato plant flowers. Photo credit: Sarah Vaira.
While UPP works with Seattle gardeners to track where bumblebees nest and forage, other citizen-science projects, such as iNaturalist and eBird, allow anyone with a smartphone or digital camera to help identify plants and animals. These kinds of identification projects can help scientists predict animal and plant behavior.
“[With citizen science] you can achieve things that you would never be able to achieve with a more standard set of funds and time and energy,” says Waters, “[This is] a kind of knowledge that is ultimately really useful … and it connects people to their local environment.”
Research published today in the journal Nature Climate Change looked at 50,000 globally widespread and common species and found that more than one half of the plants and one third of the animals will lose more than half of their climatic range by 2080 if nothing is done to reduce the amount of global warming and slow it down.
This means that geographic ranges of common plants and animals will shrink globally and biodiversity will decline almost everywhere.
Plants, reptiles and particularly amphibians are expected to be at highest risk. Sub-Saharan Africa, Central America, Amazonia and Australia would lose the most species of plants and animals. And a major loss of plant species is projected for North Africa, Central Asia and South-eastern Europe.
But acting quickly to mitigate climate change could reduce losses by 60 per cent and buy an additional 40 years for species to adapt. This is because this mitigation would slow and then stop global temperatures from rising by more than two degrees Celsius relative to pre-industrial times (1765). Without this mitigation, global temperatures could rise by 4 degrees Celsius by 2100.
The study was led by Dr Rachel Warren from UEA’s School of Environmental Sciences and the Tyndall Centre for Climate Change Research. Collaborators include Dr Jeremy VanDerWal at James Cook University in Australia and Dr Jeff Price, also at UEA’s School of Environmental Sciences and the Tyndall Centre. The research was funded by the Natural Environment Research Council (NERC).
Dr Warren said: “While there has been much research on the effect of climate change on rare and endangered species, little has been known about how an increase in global temperature will affect more common species.
“This broader issue of potential range loss in widespread species is a serious concern as even small declines in these species can significantly disrupt ecosystems.
“Our research predicts that climate change will greatly reduce the diversity of even very common species found in most parts of the world. This loss of global-scale biodiversity would significantly impoverish the biosphere and the ecosystem services it provides.
“We looked at the effect of rising global temperatures, but other symptoms of climate change such as extreme weather events, pests, and diseases mean that our estimates are probably conservative. Animals in particular may decline more as our predictions will be compounded by a loss of food from plants.
“There will also be a knock-on effect for humans because these species are important for things like water and air purification, flood control, nutrient cycling, and eco-tourism.
“The good news is that our research provides crucial new evidence of how swift action to reduce CO2 and other greenhouse gases can prevent the biodiversity loss by reducing the amount of global warming to 2 degrees Celsius rather than 4 degrees. This would also buy time — up to four decades — for plants and animals to adapt to the remaining 2 degrees of climate change.”
The research team quantified the benefits of acting now to mitigate climate change and found that up to 60 per cent of the projected climatic range loss for biodiversity can be avoided.
Dr Warren said: “Prompt and stringent action to reduce greenhouse gas emissions globally would reduce these biodiversity losses by 60 per cent if global emissions peak in 2016, or by 40 per cent if emissions peak in 2030, showing that early action is very beneficial. This will both reduce the amount of climate change and also slow climate change down, making it easier for species and humans to adapt.”
Information on the current distributions of the species used in this research came from the datasets shared online by hundreds of volunteers, scientists and natural history collections through the Global Biodiversity Information Facility (GBIF).
Co-author Dr Jeff Price, also from UEA’s School of Environmental Sciences, said: “Without free and open access to massive amounts of data such as those made available online through GBIF, no individual researcher is able to contact every country, every museum, every scientist holding the data and pull it all together. So this research would not be possible without GBIF and its global community of researchers and volunteers who make their data freely available.”
R. Warren, J. VanDerWal, J. Price, J. A. Welbergen, I. Atkinson, et al. Quantifying the benefit of early climate change mitigation in avoiding biodiversity loss. Nature Climate Change, 2013. DOI: 10.1038/nclimate1887
Apr. 8, 2013 — At some point, scientists may be able to bring back extinct animals, and perhaps early humans, raising questions of ethics and environmental disruption.
Within a few decades, scientists may be able to bring back the dodo bird from extinction, a possibility that raises a host of ethical questions, says Stanford law Professor Hank Greely. (Credit: Frederick William Frohawk/Public domain image)
Twenty years after the release of Jurassic Park, the dream of bringing back the dinosaurs remains science fiction. But scientists predict that within 15 years they will be able to revive some more recently extinct species, such as the dodo or the passenger pigeon, raising the question of whether or not they should — just because they can.
In the April 5 issue of Science, Stanford law Professor Hank Greely identifies the ethical landmines of this new concept of de-extinction.
“I view this piece as the first framing of the issues,” said Greely, director of the Stanford Center for Law and the Biosciences. “I don’t think it’s the end of the story, rather I think it’s the start of a discussion about how we should deal with de-extinction.”
In “What If Extinction Is Not Forever?” Greely lays out potential benefits of de-extinction, from creating new scientific knowledge to restoring lost ecosystems. But the biggest benefit, Greely believes, is the “wonder” factor.
“It would certainly be cool to see a living saber-toothed cat,” Greely said. “‘Wonder’ may not seem like a substantive benefit, but a lot of science — such as the Mars rover — is done because of it.”
Greely became interested in the ethics of de-extinction in 1999 when one of his students wrote a paper on the implications of bringing back woolly mammoths.
“He didn’t have his science right — which wasn’t his fault because approaches on how to do this have changed in the last 13 years — but it made me realize this was a really interesting topic,” Greely said.
Scientists are currently working on three different approaches to restore lost plants and animals. In cloning, scientists use genetic material from the extinct species to create an exact modern copy. Selective breeding tries to give a closely-related modern species the characteristics of its extinct relative. With genetic engineering, the DNA of a modern species is edited until it closely matches the extinct species.
All of these techniques would bring back only the physical animal or plant.
“If we bring the passenger pigeon back, there’s no reason to believe it will act the same way as it did in 1850,” said co-author Jacob Sherkow, a fellow at the Stanford Center for Law and the Biosciences. “Many traits are culturally learned. Migration patterns change when not taught from generation to generation.”
Many newly revived species could cause unexpected problems if brought into the modern world. A reintroduced species could become a carrier for a deadly disease or an unintentional threat to a nearby ecosystem, Greely says.
“It’s a little odd to consider these things ‘alien’ species because they were here before we were,” he said. “But the ‘here’ they were in is very different than it is now. They could turn out to be pests in this new environment.”
When asked whether government policies are keeping up with the new threat, Greely answers “no.”
“But that’s neither surprising nor particularly concerning,” he said. “It will be a while before any revived species is going to be present and able to be released into the environment.”
Greely and Sherkow recommend that the government leave de-extinction research to private companies and focus on drafting new regulations. Sherkow says the biggest legal and ethical challenge of de-extinction concerns our own long-lost ancestors.
“Bringing back a hominid raises the question, ‘Is it a person?’ If we bring back a mammoth or pigeon, there’s a very good existing ethical and legal framework for how to treat research animals. We don’t have very good ethical considerations of creating and keeping a person in a lab,” said Sherkow. “That’s a far cry from the type of de-extinction programs going on now, but it highlights the slippery slope problem that ethicists are famous for considering.”
J. S. Sherkow, H. T. Greely. What If Extinction Is Not Forever? Science, 2013; 340 (6128): 32. DOI: 10.1126/science.1236965
Researchers create an action plan to preserve the muriqui monkey. Read the interview that one of the researchers responsible for the project gave to CH On-line.

By: Mariana Rocha, Ciência Hoje On-line

Published 19/03/2013 | Updated 20/03/2013

The largest non-human primate in the Americas, the muriqui suffers from rampant deforestation and from hunting for human consumption. (photo: Sinara Conessa/Flickr – CC BY 2.0)

Fewer than three thousand individuals. That is all that remains of the muriqui monkey in the Mata Atlântica. A strong candidate to become the mascot of the 2016 Olympics, the primate risks disappearing from the forests because of rampant deforestation and hunting for human consumption. To reverse this picture, researchers are drawing up strategies to guarantee the muriqui’s survival.

By 2020, the PAN Muriquis aims to lower the primate’s extinction risk by at least one level. The goal is for the northern muriqui to be reclassified as endangered and the southern muriqui as vulnerable.

Mascot candidate

Of indigenous origin, the word muriqui means “gentle people of the forest” and describes an animal with a peaceful, sociable temperament. Its habit of hugging its companions has made the monkey a strong candidate to represent the Brazilian hosts at the 2016 Olympics.

See the video of the campaign in favor of the muriqui as mascot of the 2016 Olympics

The campaign to elect the primate as the event’s mascot has the support of institutions involved in the PAN Muriquis and highlights the need to preserve it.

To learn more about the efforts to guarantee the muriqui monkey’s survival, CH On-line spoke with Maurício Talebi, a bioanthropologist at the Universidade Federal de São Paulo-Diadema and co-author of the PAN Muriquis.

How and when did the development of the PAN Muriquis begin?

The project grew out of the Species Survival Plan, a conceptual tool of the Species Survival Commission (SSC), a division of the International Union for Conservation of Nature. The SSC carries out activities for the conservation of many threatened species around the planet. The document describing the PAN Muriqui’s actions began to be developed in 2003 by ICMBio and was finalized in 2010. The planning involved several sectors of society, including government, universities and non-governmental organizations.

What are the main human activities that harm the muriquis?

Wild muriqui populations face a number of threats. The main ones are habitat loss, illegal hunting, low investment in surveillance and enforcement, low rates of reproduction in captivity, and the fragmentation of habitat into forest islands.

You coordinate the monitoring of muriqui populations at several sites – some of which have been studied for 20 years. How does this help in planning actions to preserve the muriquis?

Long-term research is fundamental for several reasons. It is important to gather information on the animals at different times of the year, to know how they behave in response to environmental variation, and to understand how they organize their daily lives and carry out tasks vital to survival. We also learn which environmental variables should be taken into account when reforesting these primates’ habitat. In addition, these studies train future generations of researchers. Our research group at the Associação Pró-muriqui has trained more than 200 undergraduate and graduate students over the past ten years.

What are the main difficulties in carrying out the PAN Muriquis?

The main limiting factor is the scarcity of funding. We currently have qualified people to carry out this work, but we lack the resources to pay for it. One of the goals is to create a financial fund that would make it possible to execute all of the plan’s actions. We regret that, in Brazil, funds for habitat and species conservation are still practically nonexistent.

Do you believe that the muriqui’s candidacy for mascot of the 2016 Olympics can help preserve the primate?
Certainly. Most Brazilians do not know that the largest (non-human) primate in the Americas occurs exclusively in our country. If it is confirmed as the Olympic mascot, the muriqui will become known globally, and various sectors of the economy will take an interest in investing in such a powerful emblem. That will make it possible to raise resources for the efforts that a small number of Brazilians and students are making toward research on and conservation of the species. Awareness at the national and international levels could generate resources for us to keep working, and so help ensure that future generations can see the muriqui, live and in color, in its natural habitat.

This article was updated to include the following correction:

Besides Rio de Janeiro and São Paulo, the southern muriqui is found in the north of Paraná. (20/03/2013)

Researchers will map the places in Rio where the largest primate in the Americas, and a candidate for mascot of the 2016 Olympics, still holds on.

Only 300 muriquis remain in the state of Rio de Janeiro. They are threatened by shrinking forest areas, by hunting and by diseases transmitted by other animals. At risk of extinction, the largest primate in the Americas and candidate for mascot of the 2016 Olympic Games also suffers from a general lack of information. Researchers head into the field in January and, within two years, intend to complete the first population census and georeferencing of the mono-carvoeiro, as this exclusively Brazilian monkey is also known.

A task force of 20 researchers will cover 350,000 hectares of forest in the state. Besides the census and the georeferencing, they intend to collect genetic material, observe habits and behavior, analyze the animals’ diet and identify the plants that serve as food, all in order to understand how the muriquis interact with their environment. The work, which will cost around R$ 5.5 million, will serve as the scientific basis for a state plan to protect the monkey. That document should guide everything from the siting of new preservation areas to the choice of plant species used in reforestation programs, always taking the animal’s preferences into account.

The initiative is part of a broader set of measures, including the campaign to choose the Olympic mascot, environmental education programs and advertising, intended to make the muriqui a known and protected animal. The goal is to create the conditions that allow the population to grow and, above all, to remove the species from the extinction list.

“The muriqui will certainly serve as a model for other scientific studies. The boto-cinza dolphin, for example, will also receive state investment for scientific research,” says the state secretary of the environment, Carlos Minc.

The project, officially called “Conservação do Muriqui no Rio de Janeiro: levantamento da situação da espécie para a elaboração de um plano de ação estadual” (Conservation of the Muriqui in Rio de Janeiro: a survey of the species’ situation for the preparation of a state action plan), will mobilize specialists from the NGO Ecoatlântica, the Instituto Estadual do Ambiente (Inea), the Jardim Botânico, the primatology centers of Brazil and of Rio de Janeiro, Fiocruz, UFF and UFRJ, among other institutions.
Mapping the muriqui’s habits will not be easy. At the slightest noise it flees, with such agility that it is practically impossible to pursue. The animal’s ease of movement in the forest, reminiscent of the agility of an Olympic athlete, is one of the arguments for making the muriqui the mascot of the Rio Olympic Games. To observe this skittish creature up close, researchers will have to climb mountains and push into hard-to-reach places.

The strategy will be to divide the specialists into ten groups. On the first expeditions, they will spread out across the state in search of reports and traces of muriquis. At sites where there is some likelihood of finding the monkey, all of them will gather to sweep the area, counting animals and collecting material. When possible, animals will be captured with the help of tranquilizer dart guns; in those cases, blood will be drawn and the animal will be tagged.

“The muriqui is a genetic bank. We have no idea today what condition the green areas are really in. For example, when we study the feces and analyze the seeds we find, I am almost certain we will identify new species of Mata Atlântica flora,” explains Paula Breves, a veterinarian and president of the NGO Ecoatlântica. “The Jardim Botânico will be responsible for analyzing the flora. UFF will georeference the information: the threat map, the botanical study. There will be many maps. The Fiocruz team will develop environmental education activities, for example working with farmers on preventing brush fires.”

The specialists also intend to confirm that Rio is the only state in the federation where it is possible to find not only the southern muriqui (Brachyteles arachnoides), which also occurs in the forests of São Paulo and the far north of Paraná, but also the northern muriqui (Brachyteles hypoxanthus).

“No other state has both species of the animal. We are going to try to identify the northern muriqui in Itatiaia,” says Daniela Pires e Albuquerque, a technician at Inea.

There are physical differences between the northern muriqui, which is less pigmented, and the southern one, which is apparently darker. The study will allow a comparison between the two species, since little is known today about the southern muriqui. So much so that one of the hypotheses to be tested is that they are not two distinct species, but that one is a subspecies of the other.

“We have serious doubts about whether they really are two distinct species, or whether one of them is a subspecies. We are going to try to understand this, because so far there has been no genetic study of the southern muriqui,” Paula points out. “The research will not generate information only about the muriqui. Any animal that appears will be identified. We will use cameras to photograph anything that moves in a given area. Even birds: whatever is observed, we will record. It will be a secondary result that will generate important information for the parks.”

The researchers will pay special attention to areas with signs of the muriqui’s presence, above all the state parks of Desengano (which spreads across Santa Maria Madalena, São Fidélis and Campos), Três Picos (Cachoeiras de Macacu, Friburgo, Teresópolis, Guapimirim and Silva Jardim) and Cunhambebe (Mangaratiba, Rio Claro, Angra and Itaguaí); the national parks of Serra dos Órgãos (Teresópolis, Guapimirim, Magé and Petrópolis) and Itatiaia; the Área de Proteção Ambiental do Cairuçu; and the Reserva Ecológica da Juatinga (both in Paraty).

“This field study is fundamental for the preservation of the muriqui,” Paula sums up. “We still have reports of hunting, in Cunhambebe just a month ago. The nice thing is that we are already getting calls from owners of forested land asking what they can do to help the muriqui, what they can plant. That is fantastic.”

Another important site for specialists is the Centro de Primatologia do Rio de Janeiro (CPRJ), in Guapimirim. Maintained by Inea, it houses 22 primate species and 230 animals. Researchers, however, are in short supply. Only the head of the unit, Alcides Pissinatti, carries out scientific work, dividing his time with the local administration. The CPRJ receives visiting researchers, but they have no formal ties to the center. Inea says it plans to hire a veterinarian in the next round of public hiring.
“With muriquis in captivity, it is possible to learn the biology and behavior of the species. We have six animals, the most recent of which was born on February 5, 2012,” Pissinatti reports. “The ideal would be to have around 30 animals, which cannot all be from the same family.”

Lack of space – Unlike the muriqui of the state of Rio, which suffers from a lack of scientific information, the northern muriqui (Brachyteles hypoxanthus), above all the animals living in the Feliciano Miguel Abdala reserve in Caratinga, in eastern Minas Gerais, has been studied for about 30 years by a group of researchers led by the American primatologist Karen Strier, a researcher and professor at the University of Wisconsin-Madison. Over that period the monkey’s population has jumped from 60 to about 200. If, on the one hand, that growth shows the success of the preservation measures, on the other it shows the problems of keeping the muriqui confined to small conservation units. Space is already running out.

This situation is causing changes in the muriqui’s behavior. The monkeys spend more time on the ground, so as to have places other than the treetops, and they seek out neighboring forests, which are not always safe. For this reason, environmentalists want to create a corridor linking the conservation units, with the goal of giving the largest primate in the Americas more room to expand.

“The forest has its limits. As the population grows, where do the muriquis go? It is the same situation as a family: when it grows, it needs to move to a bigger house or find more space,” Karen explains.

Researchers have also noted an increase in the number of males. For the specialist, this may be a way of controlling the number of monkeys. If the population grew too much, there would be competition among the animals. At this point, the expectation is that the population’s growth rate will slow.

“Nobody understands how this mechanism works, but when there is overpopulation, more males are born. The population grows more when there are more females,” Karen says. “Muriquis are among the most peaceful species in the world. Their behavior is free of aggression; they do not fight. Their canine teeth are very small. Among them, in many respects, there is no hierarchy. They live in an egalitarian society.”

Instead of fighting, muriquis have the habit of hugging one another. According to the researcher, this is a way of greeting a companion, and if something frightens them, they hug to feel more confident. Males have no dominance over females. When mating, the males of most other species become very aggressive and competition is fierce. Among muriquis there is no dispute between males, who share the females. Researchers report cases in which males wait in line for their turn with a female.

“I have seen five males mating within 11 minutes, without a single fight. That is why muriquis have been compared to hippies: peace and love,” Karen says. “They show us that it is possible to live in a society, even at high population density, without fights, without disputes, and with a great deal of tolerance, patience and pacifism. These days I take inspiration from the muriqui’s behavior. When I see, after 30 years of work, that the species is growing and that the problem now is finding new protected areas for this population, I feel more hopeful. There is a solution, and it is simple. The monkeys themselves are showing us what they need: more preserved and protected forests.”
RYAN PHELAN is the Executive Director of Revive and Restore, a project within The Long Now Foundation, with a mission to provide deep ecological enrichment through extinct species revival.
[ ED. NOTE: The following conversation took place at the seventh annual Science Foo Camp (SciFoo), hosted by Nature, Digital Science, O’Reilly Media, and Google, August 3 – 5, 2012, at the Googleplex in Mountain View, California. Special thanks to Philip Campbell of Nature, Timo Hannay of Digital Science, Tim O’Reilly of O’Reilly Media (“Foo” stands for “friends of O’Reilly”), and Chris DiBona and Cat Allman of Google. —JB ]
TO BRING BACK THE EXTINCT
[RYAN PHELAN:] The big question that I’m asking right now is: If we could bring back an extinct species, should we? Could we? Should we? How does it benefit society? How does it advance the science? And the truth is, we’re just at the beginning of trying to figure all this out. I got inspired really thinking about this through my involvement with George Church, and I’ve been on the periphery of an organization that he started called The Personal Genome Project. Over the last seven years I’ve been working primarily in personalized medicine, keeping my eye on the application of genomic medicine in different areas, and the growth of genomics and the shocking drop in the price of sequencing, and what that means to all different areas of science.
One thing led to another and we started talking with George about what it would mean if we could actually apply this towards the de-extinction of species. It turns out, of course, that in George’s lab he’s pioneering in all these methods. Right now, George’s approach of basically editing the genome starts to make the concept of bringing something back really plausible.
There are right now probably three different methods that are being used to contemplate bringing back species. The most traditional is what they refer to as back breeding, and we see that going on right now with the ancient cattle called aurochs. Basically, what they do is they start by taking the strains of cattle that are closest to the ancient aurochs and try to breed back in much the way they do with plant biology and hybridization.
The other area that is being done is in cloning, and the best example of that is with the Spanish Pyrenean ibex (a wild mountain goat). They actually were able to get some cellular matter from the last remaining ibex to clone. The Spanish scientists that did all that work feel that that cloning is completely viable. The truth is that when they did that ibex, it only lasted seven minutes, because of a particular lung frailty. That’s quite common in cloning anything. That is just something that cloning technology has to deal with, so he feels really confident if he had funding he could clone an extinct species now without a problem, and solve the lung issue.
The third concept is the one that we’re focused on right now: genome editing that George Church is pioneering. The way it would work (and again, I’m not the scientist here, George is better to explain it) the idea would be to take the most closely related extant living species and actually compare it genomically with the extinct species, and basically gene by gene match it, and edit it accordingly.
The species of choice right now that we’re looking at helping, aiding, and abetting, is the passenger pigeon, and the passenger pigeon, as you may know, is an iconic bird that had flocks in the billions just over a hundred years ago. A hundred and fifty years ago the passenger pigeon darkened the sky when it would pass. They say that these flocks were so thick in the sky that when they passed it could take a mile for a flock of birds to go by. They would darken the sky. It’s an amazing concept. We don’t have anything like that today. Then, in just 30 years, it went from being the most prolific bird to being extinct. Why does that matter? Well, it matters for a lot of reasons. What was going on ecologically there? What did that bird bring to that whole eastern deciduous forest? God knows, it had a tremendous impact. I think we’re just now trying to figure out what that impact might be like today if you were to reintroduce it.
The idea with the passenger pigeon is to take a closely related relative, which is the band-tailed pigeon, and sequence that genome. We’re sequencing that right now at Harvard, with an intern that we’re helping to fund, named Ben Novak. Right now we’re in the process of doing that work, and then they will basically edit the band-tail genome until the band-tail walks, and talks, and flies like a passenger pigeon. That’s how resurgence will occur.
We’re using the term “resurgence” because as you can imagine, there’s a lot of controversy over if you could bring back an extinct species, is it invasive? Would it become an invasive species? And is this a bad thing?
We’re in the process of starting a new organization. It’s called Revive and Restore. If we were to say it has a mission, it’s to help rethink extinction, to basically bring back extinct species if it’s the right thing to do. We’re contemplating the ethics involved in all this. This fall we’ll have a conference that we’re sponsoring in Washington DC, and I think it’s going to be thrilling. We’re bringing in 25 to 35 of the scientists from all over the world that are actually doing de-extinction work, from the Korean team that’s working on the wooly mammoth, to the New Zealand and Australian teams that are de-extincting some species yet to be identified. They’re calling it the Lazarus Project. We don’t really know what it is. It could be the Moa. There are different theories about what it is. But, hopefully, in the fall we’ll learn more about that.
We’ll be talking with these scientists about the different technologies that they’re deploying, of which this genome hybridization technique that George is doing is going to be one, and I’m sure there are others. We’ll be talking about the ethics of re-wilding. It’s one thing to actually bring back a species in the lab. It’s another to actually release it into the wild. And so we’ll be talking to scientists that are working in captive breeding, like the San Diego Zoo, with the California condor. We’ll be talking with the frozen zoos that are doing this kind of banking of genetic material, and trying to figure out what kind of ethical framework we could create, so that when these scientists actually start to succeed in these fields we can somehow socialize this in the public discourse.
What I fear, quite honestly, is backlash that we’ve seen around genetically modified foods, that these organisms will be deemed genetically modified, which, of course, they are. This is genome engineering, and there may be way too much of a concern over what happens when they go into the wild.
One of the fundamental questions here is, is extinction a good thing? Is it “nature’s way”? And if it’s nature’s way, who in the world says anyone should go about changing nature’s way? If something was meant to go extinct, then who are we to screw around with it and bring it back? I don’t think it’s really nature’s way. I think that the extinction that we’ve seen since man is 99.9 percent caused by man.
I’m going to just take the passenger pigeon as an example, not because it’s my favorite bird, but because it’s so iconic. If we are the ones that are responsible for blasting it out of the sky, do we have a little bit of responsibility to think about bringing it back now that we have science that can easily allow for it? I say “easily,” but in the scheme of things, it’s still going to be a lot of heavy lifting to help make this happen.
What does all this mean to the average citizen? A good example of a reintroduction of a species is the peregrine falcon. The peregrine falcon had actually gone extinct as a species in the East. For many of us bird lovers, we love the peregrine falcon. We love seeing that bird fly and soar like it does. But, it was really only through captive breeding and a reintroduction of a sub-species from the Rocky Mountain area that we even have a peregrine now flourishing on the East Coast. Where the peregrine falcon really wants to nest is on bridges or on the sides of skyscrapers, and that bird is now evolving into a bird that is better adapted for working in an urban environment.
What’s going to happen is, even if we were to have a passenger pigeon, they’re not going to be in the flocks of the billions any more. Their impact on agriculture will be lessened, because of an obvious reduction in flock size. The truth is, if anything happened with that bird, we know it’s a tremendous game bird that people loved, and probably people would be shooting it for good meat, good game.
One question is: If you could actually bring back anything, would you bring back the California grizzly bear? A species that could eat people? Well, we recently were at the California Academy of Sciences, up front and personal with “Monarch”, the last California grizzly, a beautiful specimen there, and we were joking, and not really joking, saying, “Well, what if you could genome edit the California grizzly so that it didn’t like the taste of people?” That would be kind of interesting! Big megafauna, good for the land, but take the fear of it out for people. The truth is all of this could someday be possible.
Some people have said to us, “Well, are you one click away from “Jurassic Park” here?” The truth is, we’re not. “Jurassic Park” was a good movie, if that, but the science is not there at all today, and the reason for that is that we don’t have a close relative of the dinosaurs. We just don’t have it. The only reason that this concept of bringing back an extinct species works right now is if you can take those genomes and actually edit them based on either a close living relative, or you’ve got viable cell tissue, and we don’t have that. So right now that one is not a worry. But could it be someday? Sure.
The concept of Revive and Restore is an idea that might well blossom on the West Coast, here in Silicon Valley, but the truth is that the pressures that I think all these scientists who are working in de-extinction worldwide will feel will be around this whole question of: Who are you to play God and bring back an extinct species? Who are you to introduce something that could be “invasive”? Whether it’s in academia or it’s being done in industry, I think the science is going to be challenged around this really intriguing issue. That’s why I think an organization like Revive and Restore can actually help with the public discourse.
Somebody has to responsibly help the industry and academia think through these heady issues, and I think we’re going to start that dialogue this fall. But in the absence of it, what we’re going to see is the, “Oh, my God, we’re cloning this dangerous species again,” or we’re doing something horrific with our chicken to avoid the Avian flu. These things are going to happen.
Everyone wants to bring up the Neanderthals, and interestingly enough, anyone who’s working around the Neanderthal genome is reluctant to participate in our fall workshop, because the last thing they want is to be criticized or implicated in bringing back a Neanderthal. It’s just verboten.
I’ve been dealing with this whole genetic exceptionalism now for almost a decade with personalized medicine. There has always been a hypersensitivity to anything genetic and I’m looking forward to when we get over that.
The most interesting part of all this is going to be where the science goes, what we learn from doing this. It’s not going to be necessarily about bringing back something. It’s going to be about what we learn.
Just like everything that we know that’s really innovative in science, you never know the unintended benefits or what the outcomes are going to be. Specifically, around the study of extinct species we’re going to probably learn what made them vulnerable to extinction. The implications for endangered species are tremendous. We don’t really know why things go extinct. We can surmise, but right now we could actually start to look at the genetic level, at what some of these contributory factors were, and I think that’s really exciting.
THE REALITY CLUB:
Jennifer Jacquet: To the question of who is Ryan Phelan, or anybody else, to bring an extinct species back I would counter: who was anyone to make these animals extinct to begin with? An estimated 869 species have gone officially and, so far, irreversibly extinct just since the 16th century, and 290 more species are considered critically endangered and possibly extinct — and in almost all cases the finger points to humans. Many of these disappearances, like the Tasmanian tiger, the Great auk, and the Steller’s sea cow, were precipitated by a relatively small group that never asked their fellow earthlings, let alone future generations, if they wanted these animals gone forever. Should the entire group have been queried, my guess is that its majority, certainly in the case of the large, delicate, and vegetarian Steller’s sea cow, would have answered in a resounding “No.” (Admittedly the response might be different in the case of the saber-toothed cat, for instance, which went extinct not long after the invention of farming.) To be in favor of human-induced extinction seems one of the pillars of myopia.
But what is a genome edited songbird brought back from extinction to do against the poachers in the Mediterranean? What happens when the reconstituted baby Yangtze River dolphin (last seen in 2005) is released into still sullied Chinese waters? We already have captive-bred tigers, but that hasn’t stopped the habitat fragmentation and human takeover that has led to fewer than 3500 wild tigers (there were 100,000 in 1900) today in India. In other words, does this technical solution, which is elegant and scientifically interesting, as Phelan points out, distract from old boring problems? Or does it necessitate more work on pollution, habitat loss, and human behavior because the species that would be the usual victims now have a shot at immortality?
SPARQL (pronounced "sparkle") is the query language for the Semantic Web. Along with RDF and OWL, it is one of the three core technologies of the Semantic Web.
This lesson introduces the SPARQL query language, starting with simple queries. Future lessons will build on this material with more advanced SPARQL concepts.
SPARQL is a recursive acronym, which stands for SPARQL Protocol and RDF Query Language.
SPARQL consists of two parts: query language and protocol. The query part of that is pretty straightforward. SQL is used to query relational data. XQuery is used to query XML data. SPARQL is used to query RDF data. Despite this similarity, SPARQL differs in that it was designed to operate over disconnected sources over a network in addition to a local database.
In particular, the SPARQL protocol allows transmitting SPARQL queries and results between a client and a SPARQL engine via HTTP. We can take advantage of that fact to query live, public SPARQL endpoints, as we'll see later in this tutorial. A SPARQL endpoint is simply a server that exposes its data via the SPARQL protocol.
We'll cover the importance of the SPARQL protocol later in the lesson, after introducing some more basic SPARQL concepts.
At its most basic, a SPARQL query is an RDF graph with variables. For example, consider the following RDF graph:
ex:juan foaf:name "Juan Sequeda" .
ex:juan foaf:based_near ex:Austin .
Now consider a version of the previous RDF graph that has variables instead of values:
?x foaf:name ?y .
?x foaf:based_near ?z .
Note that variables in SPARQL queries start with a question mark (?).
At first blush, this is not very different from RDF itself, and that's intentional. SPARQL queries are based on the concept of graph pattern matching. A basic SPARQL query is simply a graph pattern with some variables. Data that is returned via a query is said to match the pattern.
In the example above, we can see that foaf:name and foaf:based_near can be matched against the actual RDF data from the real graph. In doing so, we bind ?x to ex:juan, ?y to "Juan Sequeda" and ?z to ex:Austin to get the actual RDF graph we showed above.
Before we go on to show a full SPARQL query, let's summarize the vocabulary:
- Graph pattern. Specifying a graph pattern, which is just RDF using some variables.
- Matching. When RDF data matches a specific graph pattern.
- Binding. When a specific value in RDF is bound to a variable in a graph pattern.
What does a SPARQL query look like?
The following SPARQL query has all the major components from SPARQL:
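(The graph URI and the data here are illustrative; the query simply selects and sorts the names found in the foaf data shown earlier.)

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name
FROM <http://example.com/dataset.rdf>
WHERE {
    ?x foaf:name ?name .
}
ORDER BY ?name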
Let's look at each component in turn.
The PREFIX keyword describes prefix declarations for abbreviating URIs. Without a prefix, you would have to use the entire URI in the query (<http://xmlns.com/foaf/0.1/name>). Create a prefix by using a string (foaf) to reference a part of the URI (<http://xmlns.com/foaf/0.1/>). When you use the abbreviation (foaf:name), it appends the string after the colon (:) to the URI that is referenced by the prefix string.
The SELECT keyword is the most popular of the 4 possible return clauses (more on the others later). If you've used SQL, SELECT serves very much the same function in SPARQL, which is simply to return data matching some conditions. In particular, SELECT queries return data represented in a simple table, where each matching result is a row, and each column is the value for a specific variable. Using our SPARQL query above in which we SELECT ?name, the result would be a table with one column and as many rows as match the query. The variable ?x is not returned.
The FROM keyword defines the RDF dataset which is being queried. There is an optional clause, FROM NAMED, which is used when you want to query a named graph.
The WHERE clause specifies the query graph pattern to be matched. This is the heart of the query. A graph pattern, as mentioned above, is, in essence, RDF with variables.
Finally, ORDER BY is one of the several possible solution modifiers, which are used to rearrange the query results. Other solution modifiers are LIMIT and OFFSET.
In addition to SELECT, there are three other very important return clauses that you can use: ASK, DESCRIBE, and CONSTRUCT.
ASK queries check if there is at least one result for a given query pattern. The result is true or false.
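For instance, an ASK query against the foaf data from earlier in this lesson (using the same prefix) returns true if anyone in the data has that name:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
ASK { ?x foaf:name "Juan Sequeda" . }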
DESCRIBE queries return an RDF graph that describes a resource. The implementation of this return form is up to each query engine, so you won't see it used nearly as often as the other return clauses.
CONSTRUCT queries return an RDF graph that is created from a template specified as part of the query itself. That is, a new RDF graph is created by taking the results of a query pattern and filling in the values of variables that occur in the construct template. CONSTRUCT is used to transform RDF data (for example, into a different graph structure and with a different vocabulary than the source data).
CONSTRUCT queries are useful if you have RDF data that was automatically generated and would like to transform it using well-known vocabularies, or if you have RDF data using vocabulary from one ontology but need to translate it to another ontology. After SELECT this is the most common type of query in practice, and a major reason why agreeing on every aspect of an OWL ontology ahead of time is not necessary. Translation using CONSTRUCT is relatively cheap.
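As a sketch of such a translation (the vCard vocabulary is used here purely as an illustrative target), the following query rewrites foaf:name triples as vcard:fn triples:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX vcard: <http://www.w3.org/2006/vcard/ns#>
CONSTRUCT { ?x vcard:fn ?name . }
WHERE { ?x foaf:name ?name . }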
In the next SPARQL lesson, we'll show a number of examples of each kind of query.
The SPARQL protocol enables SPARQL queries over simple HTTP requests. A SPARQL endpoint is simply a service that implements the SPARQL protocol.
For example, if you do a curl on the following:
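(The endpoint and query shown here are an illustrative example: a request against the public DBpedia endpoint, with the SPARQL text SELECT ?s WHERE { ?s ?p ?o } LIMIT 1 URL-encoded into the query parameter.)

curl "http://dbpedia.org/sparql?query=SELECT+%3Fs+WHERE+%7B+%3Fs+%3Fp+%3Fo+%7D+LIMIT+1"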
The response is the following:
HTTP/1.1 200 OK
Date: Mon, 21 May 2012 23:43:38 GMT
Content-Type: application/sparql-results+xml; charset=UTF-8
Server: Virtuoso/06.04.3132 (Linux) x86_64-generic-linux-glibc25-64 VDB
This means that SPARQL is basically an API!
SPARQL as Federation
What this means is that data exposed via SPARQL on any server can be queried by any SPARQL client. This is a fundamental difference between SPARQL and other query languages, such as SQL, which assume that all data being queried is local and conforms to a single model. With SPARQL—and especially when using CONSTRUCT—data from multiple places can be combined dynamically, as needed, to create new forms of information.
Going back to the basic SPARQL syntax shown above, the FROM clause can be used to specify named graphs that sit on any server. We'll show exactly how to merge data from multiple sources in more advanced SPARQL lessons.
So how do you expose data via SPARQL? RDF databases typically include a SPARQL endpoint by default. Even non-RDF data sources can be exposed using a SPARQL endpoint. Future lessons will present ways to turn existing relational databases into SPARQL endpoints, making them part of the Semantic Web.
Note: we won't go into the details of the SPARQL protocol query encoding here since you're much more likely to be writing actual SPARQL and using tools to issue the queries over HTTP for you.
This is a brief introduction to the SPARQL query language. In the next lesson we will look at each of the return clauses in the context of real-world queries that you can run on a public SPARQL endpoint.
After 20 years, NASA’s Wind spacecraft is still going strong and helping scientists understand the forces that buffet near-Earth space.
The end of 2014 marks two decades of data from a NASA mission called Wind. Wind – along with 17 other missions – is part of what’s called the Heliophysics Systems Observatory, a fleet of spacecraft dedicated to understanding how the sun and its giant explosions affect Earth, the planets, and beyond.
Wind launched on November 1, 1994, with the goal of characterizing the constant stream of particles from the sun called the solar wind. With particle observations once every 3 seconds, and 11 magnetic measurements every second, Wind measurements were – and still are – the highest cadence solar wind observations for any near-Earth spacecraft.
During its more than 20 years in space, Wind has taken up position at various spots around our planet to help determine how near-Earth space interacts with incoming energy and particles from the sun. Assessing the complex variations of the charged particles making up the solar wind cannot be done from a single point in space. That would be like trying to understand the entire Earth’s weather system from a single collection station in Washington, D.C. So, Wind was part of a game-changing idea: launch several missions to work in tandem to understand how the dynamic magnetosphere surrounding Earth reacts to the sun. Sitting at a point between Earth and the sun, Wind was the vanguard, observing the solar wind.
“We had a fairly simple original objective,” said Adam Szabo, the project scientist for Wind at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “The number one question was to find out how the solar wind was driving changes in the magnetosphere.”
The original flotilla, named the Global Geospace Science (GGS) campaign, was composed of the Polar spacecraft observing Earth’s magnetosphere in high latitudes, Equator-S making equatorial magnetospheric measurements, and the Japanese Geotail patrolling the elongated magnetotail — the long ribbon of magnetosphere that trails behind Earth, away from the sun. The original GGS program was rapidly extended with additional missions to form the International Solar Terrestrial Program, or ISTP.
With its mandate to watch the frontlines, Wind was sent into orbit around what’s called a Lagrangian point, a point that experiences balanced gravity from both the sun and Earth. Wind took up residence in an elliptical orbit around the first Lagrangian point (L1), lying between Earth and the sun, some 932,000 miles (1,500,000 kilometers) away from Earth. While several satellites have since been in a similar orbit, Wind was only the second spacecraft ever to orbit L1.
In 1997, another solar wind monitor joined the L1 neighborhood. The Advanced Composition Explorer, or ACE, was designed both to measure properties of the incoming solar wind, and to give scientists advanced notice of larger, more intense eruptions from the sun, such as coronal mass ejections, or CMEs. At their worst, CMEs can compress the magnetosphere so severely that satellites suddenly find themselves outside that protective bubble, exposed to harsh solar radiation. The compression can also set off vibrations in the magnetosphere that can induce electrical surges in power grids on Earth.
NASA decided to take advantage of having two spacecraft monitoring the solar wind by moving Wind to the second Lagrange point (L2), a point on the other side of Earth from the sun. L2 is some 1.1 million miles (1.8 million kilometers) down the magnetotail, four times the distance to the moon. From this new location, Wind was able to provide measurements from deeper in the magnetotail than any other missions have done.
Working together, ACE and Wind unraveled even more mysteries about the solar wind, helping answer questions such as, did the observations on one side correlate to what was happening on the other? Did any particular occurrence stay coherent over long distances or did they change as they moved?
During this time frame, the ISTP missions helped scientists understand more about the size of events in the magnetosphere. At a distance of under 90,000 miles (145,000 kilometers), what one satellite observed could be correlated to measurements from the other. That means that knowing what one satellite saw could perhaps be used to predict what might be seen elsewhere in the magnetosphere, as long as it was less than 90,000 miles away. At greater distances, however, any given blast of energy or particles moving through the magnetosphere simply changed too much to be predictable.
From 2000-2003, Wind moved through a variety of positions, including off to the side of the magnetosphere, 1.5 million miles (2.4 million kilometers) away from Earth, and a return trip to the magnetotail. In 2004, Wind was moved back to the L1 point permanently.
“In its position at L1, Wind has witnessed a handful of first ever sightings of different kinds of electromagnetic waves traveling by in the solar wind,” said Lynn Wilson, deputy project scientist for Wind at Goddard. “In space where a particle could travel 100 million miles (160 million kilometers) before hitting another one, these waves simply can’t be working the same way sound or water waves do, pushing material along. It has opened up whole areas of research trying to understand these unexpected properties.”
Wind continues to work with other spacecraft — and is even looking to the future. In 2018, NASA will launch a new mission called Solar Probe Plus that will go to within 3.8 million miles (6.1 million kilometers) of the sun to explore what happens within the solar wind near the sun. One big mystery is the question of what keeps the solar wind heated. One would think that the solar wind would cool down as it expands and travels away from the sun, but it remains hotter than expected. Some intrinsic activity within the wind must continue to generate heat. It is known that magnetic reconnection – a process in which magnetic energy is converted into heat and acceleration of particles – is part of the process. In sync with this endeavor, Wind has searched for the signatures of magnetic reconnection closer to home.
“The question we had was whether magnetic reconnection could ever happen in the low-density solar wind, where things are not as dynamic as in the sun’s atmosphere,” said Szabo. “Wind found signatures of reconnection, but they weren’t violent reactions like what happens closer to the sun. These were subtle, lower energy events, and the signature was thin streams of particles accelerating outward, which we call reconnection jets.”
These jets last for such short periods of time that the 3-second data collection on Wind is just barely fast enough to capture them – an example of how Wind’s high cadence measurements still shine 20 years after launch, and how its mission continues to offer important data for scientists.
Despite having a planned mission of five years, Wind was built with the hope of lasting much longer. Wind has enough fuel to keep it in orbit around L1 until 2074, and every effort has been made to reduce stress on its instruments in order to maintain their longevity.
Please note: this article is based on information about the previous version of the Security+ exam (SY0-401), which expired in May of 2018. For updated information, please see our up-to-date Security+ listing.
Appropriate use of security controls can provide a number of behind-the-scenes security measures: deterrents, prevention, detection, and so on. The three primary goals of security, namely confidentiality, integrity, and availability (also known as the CIA triad), are common to most organizations. In addition to the CIA triad, though, organizations should also focus on providing security for the physical environment, protecting human life, and ensuring the implementation of other safety procedures.
What Do I Need to Know about Confidentiality?
Confidentiality prevents unauthorized disclosure of data and information to the bad guys. For example, when data is in transit over a network, confidentiality ensures that an attacker cannot intercept it for nefarious purposes. This means that when a user sends data or a text message over a network, only the intended recipient can receive it. Confidentiality can be ensured by deploying three basic security controls: encryption, access controls, and steganography.
Encryption is a process of converting electronic data or information into code, called ciphertext, to prevent unauthorized access. Only authorized parties can understand this ciphertext. Two popular encryption techniques, symmetric and asymmetric, are used to encrypt and decrypt data. Encryption converts data into ciphertext, which is an unreadable form of data, whereas the decryption process converts ciphertext back into readable data.
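As a rough sketch of symmetric encryption in practice (shown in Python with the third-party cryptography package purely for illustration; the exam itself does not require code), note how the same shared key both encrypts and decrypts:

from cryptography.fernet import Fernet  # third-party package, chosen only for illustration

key = Fernet.generate_key()                            # the shared secret key
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"confidential message")   # unreadable to anyone without the key
plaintext = cipher.decrypt(ciphertext)                 # only key holders can recover the data
print(plaintext.decode())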
Access controls and permissions are used to restrict access to valuable data. A user can obtain only the level of access that is granted by the system administrator.
Steganography is a technique of hiding data or information in another type of data. Steganography can be applied to images or to audio or video files. Since media files can be large, security experts use them for steganographic transmissions. Data hidden in a large media file is difficult to detect or tamper with.
How Does Integrity Protect Essential Data and Information?
Integrity is a security service that protects data and information from damage or deliberate manipulation. It is essential for any business or e-commerce website. Integrity ensures that, when data has been communicated or stored, it has not been manipulated, changed, or altered in storage media or even after transit. Integrity checks use various methods, including hashing, digital signatures, certificates, and non-repudiation.
Hashing protects data and information against unauthorized modification. It operates by producing a unique identifier, which can be a fingerprint, checksum, or hash value, through a hash function or algorithm. Popular hash functions include MD2, MD4, MD5, and the secure hash algorithm (SHA-1). Attackers often try to reverse hash matching, for example when cracking passwords.
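A minimal sketch using Python's standard hashlib module (SHA-256 is used here rather than the older algorithms listed above): the point is that any change to the input produces a completely different hash value.

import hashlib

message = b"transfer $100 to account 42"
print(hashlib.sha256(message).hexdigest())   # fingerprint of the original message
print(hashlib.sha256(b"transfer $900 to account 42").hexdigest())   # a small change yields a very different digest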
Digital signatures: A digital signature is a mathematical technique used to validate the integrity and authenticity of a message, digital document, or software. Digital signatures are intended to solve the problems of impersonation and tampering with data while in transit.
Certificates: A digital certificate proves the identity of the user who sends a message. It only verifies the source of the message (sender), rather than proving the quality or reliability of the message or the network on which that message was being transmitted.
Non-repudiation is the assurance that a sender cannot deny the authenticity of a message or data that is sent to a recipient. For example, email non-repudiation uses an email tracking method that ensures that a sending party cannot deny having sent the message or data and that the receiving party cannot deny having received that message or data.
What Do I Need to Know about Availability?
Availability is a security service that ensures that data and systems are available to authorized users in an effective and timely manner. Availability can be ensured through proper data backups, disaster recovery plans, and redundant systems. Availability also helps users accomplish their assigned tasks within a given time. The techniques described below ensure the availability of IT systems.
Redundancy is the use of alternate or secondary solutions. In an IT environment, redundancy provides alternate means to accomplish IT functions or perform tasks. Redundancy improves fault tolerance by reducing the chances of a single point of failure. If a primary system is compromised, operations can be switched over to redundant servers or backup systems so that work continues smoothly. Failover, or rollover, means redirecting traffic or workload to a backup system when the primary machine fails to perform.
Fault tolerance: Fault tolerance is the capability of a computer system or network device to continue its operations in the event of a failure or malfunction of any of its components, including hardware and software. Fault tolerance prevents the sudden failure of large systems (such as proxy servers, FTP servers, email servers, etc.) from interrupting the services provided to users. For example, VMware’s vSphere 6.x offers a data availability feature that accurately replicates a VMware virtual machine on an alternate physical host in case the main host server fails.
Patching is the process of applying updates to system or application software. The purpose of patching is to improve the usability or performance of the software by fixing its security vulnerabilities and bugs. Organizations often hire a patch management team that identifies what patch should be applied to which system when necessary.
Which Safety Procedures Are Necessary for the Security+ Exam?
The safety of personnel and facilities is a prerequisite to an organization’s overall security endeavor. In addition to the safety of human life, providing physical security for infrastructure and other important assets is also essential. The important aspects of safety and security are discussed below.
Fencing: Fencing marks a perimeter to differentiate between specifically protected and non-protected areas. It can take the form of concrete walls, chain-link fences, barbed wire, stripes painted on the ground, or invisible perimeters such as laser beams and heat detectors.
Lighting: Although lighting isn’t a strong deterrent, it can be an effective security tool to discourage intruders, prowlers, and trespassers. For better results, lighting should be combined with CCTV, dogs, guards, or any other form of intrusion detection system.
Locks: Gates and doors should be locked properly using hardware locks, electronic locks, or conventional locks that employ traditional metal keys, so that only authorized workers can unlock them. In addition, biometric locks are effective for authentication purposes. A biometric lock requires the user to present a biometric factor, such as a hand, finger, or retina, to the scanner. A person cannot enter a secured room unless his or her biometric factor is verified.
CCTV: Closed-circuit television (CCTV) is used to record events within and/or outside the secured environment. Security management typically installs CCTV cameras at entry and exit points and near resources and other valuable assets, in order to watch the movements of suspects.
Escape plans are designed to define alternate exit routes in the event of an emergency or disaster. Escape plans are often sketched on maps placed on the walls of the facility. An effective escape plan properly maps the positions of fire extinguishers and identifies alternate routes with arrows and explicit instructions, rather than offering a vague and unclear guide. Other essential elements of an escape plan include smoke alarms, a floor plan, clear escape routes, avoidance of elevators, and staff training.
InfoSec Security+ Boot Camp
The InfoSec Institute offers a Security+ Boot Camp that teaches you information theory and reinforces that theory with hands-on exercises that help you learn by doing.
Moreover, the InfoSec Institute has been one of the most awarded (42 industry awards) and trusted information security training vendors for 17 years.
InfoSec also offers thousands of articles on all manner of security topics.
Fractions and decimals represent the same thing: a part of a whole. For example, 0.25 and 1/4 both mean one-quarter or 25 percent. Converting some decimals -- such as those with more than two numbers after the decimal point -- to fractions makes it easier to visualize them. You can convert decimals to fractions quickly, using basic multiplication and division.
Converting Decimals to Fractions
Write the decimal number above the number 1, as if you are going to divide it. For example, if the decimal is 0.625, write 0.625/1. Multiply both 0.625 and 1 by the power of 10 with the same number of zeros as there are digits after the decimal point. Since 0.625 has three digits after the decimal point, multiply the numbers by 1,000, for example:
0.625 X 1,000 = 625
1 X 1,000 = 1,000
The top number is now a whole number, 625, and the 1 is now 1,000, yielding the fraction 625/1,000.
The fraction 625/1,000 can be reduced to a smaller fraction. Do this by dividing both the top and bottom number by a common divisor, which is a number that both numbers can be evenly divided by. Both of these numbers can be divided equally by 25, so use that number to reduce the fraction. The equations would be: 625/25 = 25 and 1,000/25 = 40. Write this as a fraction: 25/40. This fraction can be reduced once more. Both 25 and 40 are divisible by 5, so use 5 to reduce as follows: 25/5 = 5 and 40/5 = 8, yielding the fraction 5/8. Since 5/8 cannot be reduced any further, this is your final answer.
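You can check the arithmetic with a short script; this sketch uses Python's standard fractions module, which reduces fractions to lowest terms automatically:

from fractions import Fraction

print(Fraction(625, 1000))    # prints 5/8
print(Fraction("0.625"))      # same result, parsing the decimal string directly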
William Jones and his Circle: The Man who invented Pi
In 1706 a little-known mathematics teacher named William Jones first used a symbol to represent the platonic concept of pi, an ideal that in numerical terms can be approached, but never reached.
The history of the constant ratio of the circumference to the diameter of any circle is as old as man's desire to measure; whereas the symbol for this ratio known today as π (pi) dates from the early 18th century. Before this the ratio had been awkwardly referred to in medieval Latin as: quantitas in quam cum multiplicetur diameter, proveniet circumferencia (the quantity which, when the diameter is multiplied by it, yields the circumference).
It is widely believed that the great Swiss-born mathematician Leonhard Euler (1707-83) introduced the symbol π into common use. In fact it was first used in print in its modern sense in 1706, a year before Euler's birth, by a self-taught mathematics teacher, William Jones (1675-1749), in his second book Synopsis Palmariorum Matheseos, or A New Introduction to the Mathematics, based on his teaching notes.
Before the appearance of the symbol π, approximations such as 22/7 and 355/113 had also been used to express the ratio, which may have given the impression that it was a rational number. Though he did not prove it, Jones believed that π was an irrational number: an infinite, non-repeating sequence of digits that could never totally be expressed in numerical form. In Synopsis he wrote: '... the exact proportion between the diameter and the circumference can never be expressed in numbers...'. Consequently, a symbol was required to represent an ideal that can be approached but never reached. For this Jones recognised that only a pure platonic symbol would suffice.
The symbol π had been used in the previous century in a significantly different way by the rector and mathematician, William Oughtred (c. 1575-1660), in his book Clavis Mathematicae (first published in 1631). Oughtred used π to represent the circumference of a given circle, so that his π varied according to the circle's diameter, rather than representing the constant we know today. The circumference of a circle was known in those days as the 'periphery', hence the Greek equivalent 'π' of our letter 'p'. Jones's use of π was an important philosophical step which Oughtred had failed to make even though he had introduced other mathematical symbols, such as :: for proportion and 'x' as the symbol for multiplication.
On Oughtred's death in 1660 some books and papers from his fine mathematical library were acquired by the mathematician John Collins (1625-83), from whom they would eventually pass to Jones.
The irrationality of π was not proved until 1761 by Johann Lambert (1728-77), then in 1882 Ferdinand Lindemann (1852-1939) proved that π was a non-algebraic irrational number, a transcendental number (one which is not a solution of an algebraic equation, of any degree, with rational coefficients). The discovery that there are two types of irrational numbers, however, does not detract from Jones's achievement in recognising that the ratio of the circumference to the diameter could not be expressed as a rational number.
Beyond his first use of the symbol π, Jones is of interest because of his connection to a number of key mathematical, scientific and political characters of the 18th century. He was also responsible for developing one of the greatest scientific libraries and mathematical archives in the country which remained in the hands of the Macclesfield family, his patrons, for nearly 300 years.
Though Jones ended his life as part of the mathematical establishment, his origins were modest. He was born on a small farm on Anglesey in about 1675. His only formal education was at the local charity school where he showed mathematical aptitude and it was arranged for him to work in a merchant's counting house in London. Later he sailed to the West Indies and became interested in navigation; he then went on to be a mathematics master on a man-of-war. He was present at the battle of Vigo in October 1702 when the English successfully intercepted the Spanish treasure fleet as it was returning to the port in north-west Spain under French escort. While the victorious seamen went ashore in search of silver and the spoils of war, for Jones, according to an 1807 memoir by Baron Teignmouth, '... literary treasures were the sole plunder that he coveted.'
On his return to England Jones left the Navy and began to teach mathematics in London, probably initially in coffee houses where for a small fee customers could listen to a lecture. He also published his first book, A New Compendium of the Whole Art of Practical Navigation (1702). Not long after this Jones became tutor to Philip Yorke, later 1st Earl of Hardwicke (1690-1764), who became lord chancellor and provided an invaluable source of introductions for his tutor.
It was probably around 1706 that Jones first came to Isaac Newton's attention when he published Synopsis, in which he explained Newton's methods for calculus as well as other mathematical innovations. In 1708 Jones was able to acquire Collins's extensive library and archive, which contained several of Newton's letters and papers written in the 1670s. These would prove of great interest to Jones and useful to his reputation.
Born half a century apart, Collins and Jones never met, yet history will forever link them because of the library and mathematical archive that Collins started and Jones continued, arising from their shared passion for collecting books. The son of an impoverished minister, Collins was apprenticed to a bookseller. Essentially self-taught like Jones, he had also gone to sea and learned navigation. On his return to London he had earned his living as a teacher and an accountant. He held several increasingly lucrative posts and was adept at disentangling intricate accounts.
Collins's modest ambition had been to open a bookshop, but he was unable to accumulate enough capital. In 1667, however, he was elected to the Royal Society of which he became an indispensable member, assisting the official secretary Henry Oldenburg on mathematical subjects. Collins corresponded with Newton and with many of the leading English and foreign mathematicians of the day, drafting mathematical notes on behalf of the Society.
When Jones applied for the mastership of Christ's Hospital Mathematical School in 1709 he carried with him testimonials from Edmund Halley and Newton. In spite of these he was turned down. However Jones's former pupil, Philip Yorke, had by now embarked on his legal career and introduced his tutor to Sir Thomas Parker (1667-1732), a successful lawyer who was on his way to becoming the next lord chief justice in the following year. Jones joined his household and became tutor to his only son, George (c.1697-1764). This was the start of his life-long connection with the Parker family.
Around the time that Jones bought Collins's library and archive, Newton and the German mathematician Gottfried Leibniz (1646-1716) were in dispute over who invented calculus first. In Collins's mathematical papers, Jones had found a transcript of one of Newton's earliest treatments of calculus, De Analysi (1669), which in 1711 he arranged to have published. It had previously been circulated only privately. President of the Royal Society since 1703, Newton was reluctant to have his work published and jealously guarded his intellectual property. However, he recognised an ally in Jones.
In 1712 Jones joined the committee set up by the Royal Society to determine priority for the invention of calculus. Jones made the Collins papers with Newton's correspondence on calculus available to the committee and the resulting report on the dispute, published later that year, Commercium Epistolicum, was based largely upon them. Though anonymous, Commercium Epistolicum was edited by Newton himself and could hardly be viewed as impartial. Unsurprisingly it came down on Newton's side. (Today it is considered that both Newton and Leibniz discovered calculus independently though Leibniz's notation is superior to Newton's and is the one now in common use.)
By 1712 Jones was firmly positioned among the mathematical establishment. In 1718 his patron Sir Thomas Parker was made lord chancellor and in 1721 was ennobled as Earl of Macclesfield. By this time he had purchased Shirburn estate and castle for the then vast sum of £18,350. Shirburn castle became a home too for Jones who was, by then, almost a family member. Besides the law, Parker had a scholarly interest in many subjects including science and mathematics and was a generous patron of the arts as well as the sciences. He was influential in the appointment of Halley as astronomer royal in 1721.
But there was an obverse side to the first earl's character. It seems that together with his great abilities and ambition there was also a dangerous lust for wealth. He was accused of selling chancery masterships to the highest bidder and of allowing suitors' funds held in trust to be misused. Parker resigned as lord chancellor in 1725 but he was nevertheless impeached. His punishment was a fine of £30,000 and he was forced to spend six weeks in the Tower of London before the necessary money was raised to pay the fine. Some of his assets were sold and his name was struck from the roll of privy councillors but he did not have to forfeit Shirburn which remains in the Macclesfield family to this day. Some dignity was restored when in 1727 he was one of the pallbearers at Newton's funeral.
Thomas's son, George Parker, became an MP for Wallingford in 1722 and spent much of his time at Shirburn where, with Jones's guidance, he added to the library and archive that Jones had brought with him. George Parker developed an interest in astronomy and with the help of a friend, the astronomer James Bradley (who became the third Astronomer Royal in 1742 on the death of Halley), he built an astronomical observatory at Shirburn.
By 1718 Jones was dividing his time mainly between Shirburn and Tibbald's Court, near Red Lion Square, London. Among the many influential mathematicians, astronomers and natural philosophers he corresponded with was Roger Cotes (1682-1716), the first Plumian Professor of Astronomy at Cambridge and considered by many to be the most talented British mathematician of his generation after Newton. He had been entrusted with the revisions for the publication of the second edition of Newton's Principia.
Jones acted as a conduit between Newton and Cotes when relations between the two became strained. He clearly had influence and considerable tact. In one letter Cotes wrote to Jones: 'I must beg your assistance and management in an affair, which I cannot so properly undertake myself ...'. This was the delicate matter of suggesting to Newton an improvement in one of his methods. Newton had a difficult personality and had to be handled carefully. This Jones was able to do. The second, amended edition of Principia was published in 1713 to great acclaim.
Newton was a towering eminence over most of the period and many among the scientific community lived under his shadow. Jones also had an extensive correspondence with the astronomer and mathematician, John Machin (c.1686-1771), who served as secretary to the Royal Society for nearly 30 years from 1718. He was also on the Society's committee to investigate the invention of calculus. Professor of astronomy at Gresham College for nearly 40 years, Machin worked on lunar theory and considered himself an expert on the subject. In one letter to Jones, Machin used fanciful language to complain about Newton's lunar theory:
... she (the moon) has informed me that he (Newton) has abused her throughout the whole course of her life, giving out that she is guilty of such irregularities and enormities in all her ways and proceedings that no man alive is able to find where she is at any time.
He then went on to write that he, Machin, knew the moon's whereabouts and would therefore be able to claim the £10,000 which the 'Lord Treasurer' was offering for the discovery of longitude at sea; because his lunar theory would improve the accuracy of lunar tables.
Though Machin did not receive the reward, his lunar theory as described in Laws of the moon's motion according to gravity was appended to the 1729 English edition of Principia after Newton's death.
Machin had also worked on a series for the ratio of the circumference to the diameter which converged fairly rapidly. The result of his calculation was printed in Jones's 1706 book, 'true to above a 100 places; as computed by the accurate and ready pen of the truly ingenious Mr John Machin...'. Machin performed this by using an infinite series whose sum converged to π. In mathematical terms this means that no matter how many terms are summed there is always a difference, however small, between that sum and the value of the irrational number, π. In the infinite series, which Machin used, the terms alternate between being positive and negative so that the sum is alternately lower or higher than π.
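For reference, the identity Machin is generally credited with is π/4 = 4 arctan(1/5) - arctan(1/239); expanding each arctangent as the alternating series arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ... gives terms that shrink quickly, which is what made a 100-place computation feasible by hand.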
Jones also had correspondents abroad; one of particular interest was the Quaker scholar James Logan (1674-1751) who lived in America. Logan had been born in Ireland and was invited by William Penn, the Quaker leader and founder of Pennsylvania, to be his secretary. He prospered there and eventually bought a plantation, Stenton, where he retired in his early fifties to pursue his interests, including mathematics and botany. His own library of over 30,000 books was one of the most outstanding of the 18th century in America and was bequeathed to the city of Philadelphia.
In 1732 Logan wrote to Jones about an invention by, 'a young man here ... of an excellent natural genius'. This was Thomas Godfrey (1704-49), a glazier, who in October 1730 had invented an instrument that could be accurately used at sea because it had a single half-mirrored sight that lined up a reflected image of the sun with the horizon. Alternatively any two astronomical objects, for instance, the moon and a star could be lined up by moving a rotatable arm containing the mirror and reading off the angle from the scale. This meant that movement of a ship would not interfere with the angular measurement as both object and image would move together. It was an ingenious instrument. Logan considered that it could be used to find longitude at sea by the lunar method. The instrument is what we now know as Hadley's Quadrant, although it is in fact an octant. The attribution of this important invention was claimed both by America and by England. The English astronomer John Hadley (1682-1744) had made one of these instruments in the summer of 1730 and sent an account to the Royal Society the following May.
Logan had sent a personal letter describing Godfrey's invention to Halley, then Astronomer Royal, addressing him as 'Esteemed Friend'. It was a friendly communication as well as a scientific one and was not read to the Royal Society, as was customary. Logan asked Jones to make some enquiry about the omission. Jones subsequently raised the subject with the Society in January 1734 and Godfrey's claims to be the inventor of the instrument, though not the first, were established.
Some years later in 1736 Jones wrote to Logan, apologising for not having replied sooner, saying that:
... my affairs are such as require my constant application, and take up my mind so much that I have little, or no leasur (sic) to think of anything else: even the mathematics. I have scarce thought of it these 18 years past, and am now almost a stranger to all improvements made that way.
But there are letters in Jones's correspondence dating from after that time that are mathematical in subject. Perhaps he did not want to encourage Logan to send him further discoveries. Logan was a tireless correspondent and it appears that he wrote many more letters to Jones than Jones answered.
There were certainly other things on Jones's mind. Like many other men of science, Jones was intrigued with the problem of longitude and he wrote letters to the Royal Society on the subject of clocks keeping accurate time as the temperature changed.
He served as a council member of the Society and became its vice-president in 1749. His income was boosted by sinecures organised by his former pupils: he was made Secretary of the Peace through the influence of Hardwicke and Deputy Teller to the Exchequer with George Parker's help. Nevertheless, he also experienced financial crisis on more than one occasion when his bank collapsed, a frequent occurrence in those days.
Jones married a second time in 1731 to Mary Nix, 30 years his junior and they had three children. He was elected a Governor of the Foundling Hospital in 1747 when George Parker was vice-president. It was Parker who commissioned Hogarth's portrait of Jones. Although Jones looked impressive in this portrait, he is reported to have been 'a little short faced Welshman, and used to treat his mathematical friends with a great deal of roughness and freedom'. Even so, as we have seen, he knew how to be tactful when necessary and could show great kindness.
After he died in 1749, aged 74, it was reportedly said by John Robertson, a clerk and librarian to the Royal Society, that he 'died in better circumstances than usually falls to the lot of mathematicians'. His one surviving son, also called William, was only three years old at the time. Known as 'Oriental' Jones, he excelled as a linguist, philologist and expert in Hindu Law and was duly knighted.
In 1750 George Parker wrote a paper which was read to the Royal Society entitled Remarks upon the Solar and Lunar years. Parker was a principal proponent for the adoption of the Gregorian calendar and the change in 1752 of the new year from March 25th to January 1st. One might consider the revision of the calendar as part of William Jones's scientific legacy. The same year Parker was elected president of the Royal Society, a position he held until his death.
In his will, Jones left his 'study of books' to George Parker 'as a testimony of my acknowledgement of the many marks of his favour which I have received'. The scientific books Parker inherited from Jones, together with the archive of papers, remained in the library at Shirburn. Access to them had been severely restricted though it was acknowledged that they represented the most important collection of their kind in private hands. In 2000 the archive of letters and papers was offered to Cambridge University Library who purchased it for £6,370,000 with the aid of a grant from the Heritage Lottery Fund. The Macclesfield Library was finally sold at Sotheby's in 2005 in six massive sales that have replenished libraries throughout the world.
In his lifetime, Jones's ability to retain his patrons was important and he served them well. From a historical perspective though, Jones gave much more to the Macclesfields than he ever received from them and, in doing so, he left a great intellectual legacy to the world.
What are Strings?
A string is any value written inside either single or double quotes. Internally, R stores single-quoted strings as if they were written with double quotes.
Want to get certified in R! Learn R from top R experts and excel in your career with Intellipaat’s R Programming certification!
The quotes at the beginning and end of a string should be both single or double quotes and cannot be mixed.
x <- "This is a valid proper ' string"
y <- 'this is still valid as this one" double quote is used inside single quotes'
This is a valid proper ' string
this is still valid as this one" double quote is used inside single quotes
Enroll yourself in Online R Programming Training and give a head-start to your career in R programming!
The quote character used to delimit a string should not appear unescaped inside it: a single-quoted string cannot contain a bare single quote, and a double-quoted string cannot contain a bare double quote. The opening and closing quotes must also match.
a<- 'Incorrect string"
b <- 'no single quote' should be present within it'
Error: unexpected INCOMPLETE_STRING
Error: unexpected symbol in:
"b <- 'no single quote' should"
Concatenating Strings: paste() function
The paste() function combines strings and can take any number of arguments.
The basic syntax for the paste() function is:
paste(x, y, z, sep = " ", collapse = NULL)
- x, y, z represent any number of arguments
- sep specifies the separator placed between the arguments.
- collapse controls how the resulting strings are joined into a single string; it eliminates the space between separate strings but not between the words of the same string.
x <- "Welcome"
z <- "Intellipaat Services"
print(paste(x, y, z))
print(paste(x,y,z,sep = "_"))
print(paste(x,y,z, sep="", collapse=""))
Welcome to Intellipaat Services
Welcome_to_Intellipaat Services
WelcometoIntellipaat Services
Check out the top R Programming Interview Questions to learn what is expected from R professionals!
Formatting numbers and strings
Using format() function
Numbers and strings can be formatted to a specific style with the help of the format() function.
format(x, digits, nsmall, width, scientific, justify = c("left", "centre", "right", "none"))
Here x is a vector; digits is the total number of significant digits to display; nsmall is the minimum number of digits to the right of the decimal point; scientific takes TRUE or FALSE to control scientific notation; width pads the string with blank spaces to a minimum width; and justify aligns a string to the left, centre, or right.
#illustrating use of digit
dig <- format(12.3456789, digits = 8)
#illustrating scientific notation
ans <- format(c(5, 13.14521), scientific = TRUE)
#Illustrating justify use of strings
sol <- format("21.9", width = 6, justify = "left")
"5.000000e+00" "1.314521e+01"
"21.9 "
Are you interested in learning R Programming from experts? Enroll in our R Programming training Course in Bangalore now!
Counting characters in a string – nchar() function
The nchar() function counts the number of characters in a string, including blank spaces.
- nchar(a); where “a” is the vector input
s <- nchar("calculating number of charactersis upper case conversion")")
Changing the case – toupper() & tolower() functions
These functions change the case of the letters in a string.
#converting to upper case
ans <- toupper("This is upper case Transform")
"THIS IS UPPER CASE TRANSFORM"
Come to Intellipaat’s R Programming Community if you have more queries on R Programming!
Extracting components of a string via substring() function
Syntax: substring(a, first, last)
- where a is the character vector input
- first is the position in "a" at which extraction starts.
- last is the position in "a" at which extraction ends.
#Extracting characters from 3rd to 5th position
sol <- substring("Welcome", 3, 5)
"lco"
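Because first and last are vectorized, several slices can be pulled out in one call; a short sketch with illustrative positions of our own:

parts <- substring("Welcome", c(1, 3), c(4, 5))
"Welc" "lco"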
The aperture is a hole that controls the amount of light passing through to the camera sensor (or the film plane, for film cameras). It's one of the three key settings of exposure (ISO, shutter speed, aperture).
By adjusting the aperture, or f/stop as it is most often called, you not only control the amount of light you 'gather' but also introduce effects on your final image that you will need to understand. Depth of field (DOF, the area of sharpness through the image) is the most important, but there are also optical imperfections or enhancements. Knowing how your camera's lens aperture works will help you make informed choices about which other exposure settings to use, what creative effects or even errors may occur, and how these will affect the image.
1. Familiarize yourself with some of the basic concepts and terminology. You'll need to know these in order to make sense of the rest of the article.
Aperture or stop. This is the adjustable hole through which light passes on its way from the subject, through the lens, to the film (or digital sensor). Like the pinhole in a pinhole camera, it blocks rays of light except those that would, even without a lens, tend to form an inverted image by passing through that central point to a corresponding point in the opposite direction on the film. With a lens, it also blocks rays of light that would pass through far from the center, where the lens glass may less closely approximate (usually with various easy-to-make spherical surfaces) the shapes that would focus it perfectly (usually much more complex aspherical surfaces), causing aberrations.
- Every camera has an aperture, usually adjustable (and if not, the edges of the lens at least act as one), so the aperture size setting is what is normally called the "aperture".
- F-stop or simply aperture. This is the ratio of the focal length of the lens to the diameter of the aperture. This kind of measurement is used because a given focal ratio produces the same image brightness, requiring the same shutter speed for a given ISO setting (film speed or equivalent sensor light amplification), without regard to focal length.
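- As a quick worked example of this ratio (with illustrative numbers, not taken from the article): a 50 mm lens whose aperture opening is 25 mm across is at 50 ÷ 25 = f/2, while stopping the same lens down to f/8 shrinks the opening to 50 ÷ 8 ≈ 6.3 mm, letting in far less light.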
Iris diaphragm or simply iris. This is the device most cameras use to form and adjust the aperture. It consists of a series of overlapping thin metal blades that can swing toward the center of a hole in a flat metal ring. It forms a central hole that is perfectly round wide open, when the blades are out of the way, and constricts by pushing the blades toward the center of that hole to form a smaller polygonal hole (which may have curved edges).
- If your camera uses interchangeable lenses, or it is a "bridge" type digital camera, the lens will have an adjustable diaphragm iris. If your camera is a shirt-pocket sized "point-and-shoot" compact model, especially a lower priced model, it may have a "neutral density filter" instead of a diaphragm iris. Also, if the camera's mode dial includes "M", "Tv", and "Av", it almost certainly has an actual diaphragm iris; this applies even on small compact models. If the mode dial doesn't include these three settings, the camera might have a diaphragm, or it might only have an ND filter; the only way to know for sure is to read the specifications in the owner's manual, or read a detailed professional review (Google your camera's model name with the word "reviews", and you will probably find at least two or three reviews on the Internet). If your camera uses an ND filter, your ability to "fine tune" your settings and control depth of field and bokeh effects will be limited to whatever the fixed aperture of the lens provides. NOTE on Mode Dial settings: "M" stands for "Manual" - in this mode you have to set both the shutter speed and aperture. "Tv" is shutter speed priority: you manually set the shutter speed, and the camera's exposure computer sets an appropriate aperture. "Av" is "Aperture Priority" - you manually set the f-stop (aperture) that you want, typically to achieve a specific depth of field, and the camera's exposure computer decides what shutter speed to use.
- Most SLR cameras only close down the iris diaphragm, making it visible from the front of the lens, during an exposure or when the depth-of-field-preview function is activated.
- Stopping down means to use a smaller, or (depending on context) a relatively small aperture (large f/ number).
- Opening up means to use a larger, or (depending on context), a relatively large aperture (small f/ number).
- Wide open means to use the largest aperture (smallest f/number).
Depth of field is the specific front-to-back area, or (depending on context) the scope of the front-to-back area that appears fairly sharp. A smaller aperture increases depth of field and decreases the extent to which objects outside the depth of field are blurred. The precise extent of depth of field is somewhat subjective because focus drops off gradually from the precise distance of focus, and the noticeability of defocus depends on factors such as subject type, other sources of lack of sharpness, and viewing conditions.
- A relatively large depth-of-field is called deep; a relatively small depth-of-field is called shallow.
Aberrations are imperfections in a lens's ability to focus light sharply. Generally speaking, less-expensive and more-exotic types of lenses (such as superwides) have more severe aberrations.
- Aperture has no effect on linear distortion (straight lines appearing curved). Distortion often goes away toward the middle of a zoom lens's focal-length range, pictures can be composed to avoid drawing attention to it (for example, by keeping prominent, obviously straight lines such as buildings or horizons away from the frame edges), and it can be corrected in software, or automatically by some digital cameras.
- Diffraction is a basic aspect of the behavior of waves passing through small openings which limits the maximum sharpness of all lenses at smaller apertures.[1] It becomes increasingly apparent past f/11 or so, making a great camera and lens no better than a so-so one (albeit sometimes one exactly suited for a specific need such as great depth of field or a long shutter speed where lower sensitivity or a neutral-density filter is not available).
2. Understand depth of field. Depth of field is, formally, the range of object distances within which objects are imaged with acceptable sharpness. There is only one distance at which objects will be in perfect focus, but sharpness drops off gradually in front of and behind that distance. For a short distance in each direction, objects will be blurred so little that the film or sensor will be too coarse to detect any blurring; for a somewhat greater distance they will still appear "pretty" sharp in the final picture. The pairs of depth-of-field marks for certain apertures next to the focusing scale on a lens are good for estimating this latter measure.[2]
- Roughly one-third of the depth of field is in front of the focus distance, and two-thirds is behind (if not extending to infinity, since it is a phenomenon relating to the amount by which light rays from an object have to be bent to converge at a focal point and rays coming from far distances tend toward parallel.)
- Depth of field drops off gradually. Backgrounds and foregrounds will appear slightly soft, if not in focus, with a small aperture, but very blurred or unrecognizable with a wide aperture. Consider whether they are important and should be in focus, relevant for context and should be a little soft, or distracting and should be blurred.
- If you want great background blur but do not have quite enough depth of field for your subject, focus on the part that will draw the most attention, often the eyes.
- Depth of field generally appears to depend on, in addition to aperture, focal length (longer focal length gives less), format size (smaller film or sensor size gives more, assuming the same angle of view, i.e., equivalent focal length), and distance (there is much less at close focus distances).
So, if you want shallow depth of field, you can buy a super-fast lens (expensive), or zoom in (free) and set even a cheap smaller-aperture lens wide open.
- The artistic purpose of depth of field is to deliberately have the entire picture sharp or to "crop depth" by diffusing distracting foreground and/or background.
- A more practical purpose of depth of field is to set a small aperture and pre-focus the lens to the "hyperfocal distance" (the closest at which the depth of field extends to infinity from a given distance; see a table or the depth of field marks on the lens for the aperture chosen) or to an estimated distance, to be ready to take a picture quickly with a manual-focus camera or a subject moving too fast or unpredictably for autofocus (in which case you'll need a high shutter speed too).
- Remember that you normally won't see any of this through your viewfinder (or on your screen as you're composing). Modern cameras meter with the lens at its widest aperture, and only stop down the lens to its selected aperture at the moment of exposure. The depth-of-field preview function usually allows only a dim and imprecise view. (Disregard any odd patterns in the focusing screen view; they will not appear in the final picture.) What's more, viewfinders on modern digital SLRs and other autofocus cameras don't even show the true wide-open depth of field with a lens faster than f/2.8 or so (it's shallower than it looks; rely on autofocus, which is not subject to this limitation, when possible). A better option on digital cameras is to simply take the picture, then play it back and zoom in on your LCD to see whether the background is adequately sharp (or blurred).
3. Understand the interaction of aperture and instantaneous lighting (flash). A flash burst is normally so short that the flash component of an exposure is affected only by aperture. (Most 35mm and digital SLRs have a maximum "flash-sync" flash-compatible shutter speed; above that only a fraction of the frame would be exposed due to the way in which their "focal-plane" shutter works. Special high-speed-sync flash modes use a rapid burst of weak flashes, each exposing a fraction of the frame; they greatly reduce flash range and so are rarely helpful.) A wide aperture increases maximum flash range. It also increases effective fill-flash range by increasing the proportionate exposure from a flash and reducing the time during which ambient light is allowed in. A small aperture may be needed to prevent overexposure in close-ups due to a minimum output below which a flash cannot be reduced (indirect flash, which is inherently less efficient, can help in this situation). Many cameras can adjust the balance of flash and ambient lighting with "flash exposure compensation". A digital camera is best for complex flash setups because the results of instantaneous bursts of light are inherently non-intuitive, even though some studio flashes have "modeling lights" and some fancy portable flashes have modeling-light-like preview modes.
4. Test your lenses for optimal sharpness. All lenses are different and perform best at different apertures. Get out and shoot something with lots of fine texture at different apertures and compare the shots to figure out how your lens behaves at each. The subject should be essentially at "infinity" (30 feet or more with wide-angles, to hundreds of feet with tele-lenses; a distant stand of trees is generally good) to avoid confusing defocus with aberrations. Here are some hints as to what to look for:
- Nearly all lenses have lower contrast and are less sharp at their widest aperture, especially towards the corners of your image. This is especially true on point-and-shoot and cheaper lenses. Consequently, if you're going to have detail in the corners of your pictures that you want to keep sharp, then you'll want to use a smaller aperture. For flat subjects, f/8 is typically the sharpest aperture. For objects at varying distances a smaller aperture may be better for more depth of field.
- Most lenses will have some noticeable amount of light fall-off wide open. Light fall-off is where the edges of the picture are slightly darker than the centre of the picture. This can be a good thing for many photographs, especially portraits; it draws attention towards the centre of the photograph, which is why many people add falloff in post. But it's still good to know what you're getting. Falloff is usually invisible after about f/8.
- Zoom lenses can vary depending on how far in or out they are zoomed. Test for the above things at a few different zoom settings.
- Diffraction makes almost every lens's images softer at f/16 and smaller apertures, and conspicuously softer at f/22 and smaller.
- All of this is just something to think about for optimum clarity of a picture that already has as good a composition (including depth of field) as possible, and which will not be much more grossly marred by insufficient shutter speed causing camera-shake or subject blur, or by noise from excessive "sensitivity" (amplification).
- Don't waste film investigating this – check your lenses on a digital camera, check reviews, and in a pinch assume expensive or prime (non-zoom) lenses are best at f/8, cheap simple ones such as kit lenses are best at f/11, and cheap exotic ones such as superwides or lenses with wide or tele adapters are best at f/16. (With an adapter lens on a point and shoot, stop down as much as possible, perhaps by using the camera's aperture-priority mode – look in its menus.)
5. Understand aperture-related special effects.
Bokeh, a Japanese word often used to refer to the appearance of out-of-focus areas, especially highlights because those appear as bright blobs. Much has been written about the details of those out-of-focus blobs, which are sometimes brighter in the middle and sometimes a little brighter at the edges, like donuts, or some combination of the two, but at least one author rarely notices it except in bokeh articles. Most importantly, out-of-focus blurs are:
- Much larger and more diffuse at wider apertures.
- Soft-edged at the widest aperture, due to the perfectly round hole (the edge of a lens, rather than an iris blade).
- The shape of the diaphragm opening, when not at the widest aperture. This is most noticeable at wide apertures because they are large. This might be considered unattractive with a lens whose opening does not closely approximate a circle, such as a cheap lens with a five- or six-bladed diaphragm.
- Sometimes half-moons rather than circular toward the sides of images at very wide apertures, probably due to one of the lens elements not being as huge as it would have to be to fully illuminate all parts of the image at that aperture, or weirdly extended due to "coma" at very wide apertures (which is pretty much only an issue when taking pictures of lights at night).
- Prominently donut-like with mirror-type tele lenses, due to a central obstruction.
- Diffraction spikes forming sunstars. Very bright highlights, such as light bulbs at night or small specular reflections of sunlight, will be surrounded by "diffraction spikes" making "sunstars" at small apertures (they are formed by increased diffraction at the points of the polygonal hole formed by the iris). These will either have the same number of points as your lens has aperture blades (if you have an even number of them), due to overlapping of opposite-sides' spikes, or twice as many (if you have an odd number of aperture blades). They are fainter and less noticeable with lenses with many, many aperture blades (generally odd lenses such as old Leicas).
6. Get out and shoot. Most importantly (in terms of aperture, at least), control your depth of field. It's as simple as this: a smaller aperture means more depth of field, a larger aperture means less. A larger aperture also means more background blur. Here are some examples:
- Use a small aperture to force more depth of field.
- Remember that depth of field becomes shallower the closer you get. If you're doing macro photography, for example, you might want to stop down far more than you would for a landscape. Insect photographers often go way down to f/16 or smaller, and have to nuke their subjects with lots of artificial lighting.
Use a large aperture to force a shallow depth of field. This is great for portraits (much better than the silly automatic portrait scene modes), for example; use the largest aperture you have, lock your focus on the eyes, recompose and you'll find the background is thrown out of focus and is, consequently, made less distracting.
Remember that opening the aperture like this will cause faster shutter speeds to be chosen. In bright daylight, make sure you aren't causing your camera to max out its fastest shutter speed (typically 1/4000 on digital SLRs). Keep your ISO low to avoid this.
7. Shoot for special effects. If you're photographing lights at night, have adequate camera support, and want sunstars, use a small aperture. If you want large, perfectly rounded bokeh spots (albeit with some incomplete circles), use a wide-open aperture.
8. Shoot for fill-flash. Use a relatively large aperture and fast shutter speed if necessary to mix flash with daylight so the flash isn't overwhelmed.
9. Shoot for optimum technical image quality. If depth of field is not of primary importance (generally the case when pretty much everything in the picture is relatively far from the lens and will be in focus anyway), the shutter speed is high enough to avoid blur from camera shake, the ISO setting is low enough to avoid severe noise or other quality loss (generally the case in daytime), you don't need any aperture-related gimmicks, and any flash is powerful enough to balance with ambient light adequately, then set the aperture that gives the best detail with the particular lens being used.
10. Once you've chosen the lens aperture, try making the most of it with aperture-priority mode.
Question: How can I achieve a bokeh effect with a normal 50mm f/1.4 lens?
Community Answer: Switch your camera to manual or aperture-priority mode, then open the aperture all the way to f/1.4. The larger the distance between your subject and the background, the more blurred the background will be.
- There's plenty of wisdom embodied in the old saying: f/8 and don't be late. f/8 typically gives sufficient depth of field for most still subjects, and it's where 35mm and digital SLR lenses are typically at their sharpest (or close to it). Don't be afraid to use it – or program mode (a good mode to leave your camera on for whatever might pop up) – for interesting subjects that won't necessarily stand still for you to adjust your camera.
- Sometimes you have to compromise your choice of aperture to allow an adequate shutter speed or acceptable film speed or "sensitivity" (amplification) setting. You can also just let your camera's auto setting choose something for you to get the shot. Do it.
- Softness from diffraction and, to a lesser extent, defocus (which can create odd patterns rather than softness alone) can sometimes be mitigated by processing such as the "unsharp mask" function in your post-processing software; GIMP and Photoshop being two popular examples. This will strengthen soft edges though it cannot create fine detail that was not captured, and creates harsh erroneous detail if overused.
- If careful aperture selection will be very important to your picture and you have an automated camera, aperture-priority mode or program-shift (scrolling through the combinations of apertures and shutter speeds automatically determined to give proper exposure) are convenient ways to set it.
- All lenses have some distortion in them: there is no such thing as a "perfect" lens, even in Professional models that cost thousands of dollars. The good news is that name-brand lenses, such as those from Nikon, Canon, Pentax, Zeiss, Leica, Sony/Minolta, and Olympus, often have known "distortion correction" profiles that can be downloaded on the Internet and applied in post-processing software (in Adobe Photoshop and Adobe Camera RAW software, for example). Using the capabilities of good post-processing software and camera lens profiles can go a long way toward making photos with a lot of barrel or pincushion distortion look much more natural and pleasing to the eye. In this example of a wide angle panoramic landscape photograph, the problem is that "perspective distortion" and "barrel distortion" is causing the trees toward the outer edges of the image to lean inwards. It's pretty obvious that this is a lens distortion and that it's very unlikely that the trees were actually leaning this way.
- Now, here is the same image after Lens Profile and Vertical Distortion Corrections were applied in Adobe Camera RAW. The trees are now all more or less vertical, both in the center and at the edges of the scene, at the expense of a slight cropping of the image. The photograph looks much more pleasing to the eye, and doesn't have the distraction of the trees leaning inwards.
- Make "sunstars" with bright points of light, like streetlights, that are not so bright as the sun itself.
- Don't point a tele-lens, especially a very fast or long tele-lens, directly at the sun while attempting to make "sunstars", or for any other reason. You may damage your eye, or the camera's shutter or sensor.
- Don't point a cloth-shutter non-SLR camera, such as a Leica, toward the sun, except perhaps briefly to take a picture handheld, and even then only with a small aperture set. You may burn a hole in the shutter, which would require a somewhat expensive repair.
- ↑ Ken Rockwell on diffraction, http://www.kenrockwell.com/tech/diffraction.htm
- ↑ http://www.zeiss.com/C12567A8003B58B9/Contents-Frame/696A77A0FB8016CFC125697700547F1F
By the end of this section, you will be able to:
- Differentiate between resistance and resistivity
- Define the term conductivity
- Describe the electrical component known as a resistor
- State the relationship between resistance of a resistor and its length, cross-sectional area, and resistivity
- State the relationship between resistivity and temperature
What drives current? We can think of various devices—such as batteries, generators, wall outlets, and so on—that are necessary to maintain a current. All such devices create a potential difference and are referred to as voltage sources. When a voltage source is connected to a conductor, it applies a potential difference V that creates an electrical field. The electrical field, in turn, exerts force on free charges, causing current. The amount of current depends not only on the magnitude of the voltage, but also on the characteristics of the material that the current is flowing through. The material can resist the flow of the charges, and the measure of how much a material resists the flow of charges is known as the resistivity. This resistivity is crudely analogous to the friction between two materials that resists motion.
When a voltage is applied to a conductor, an electrical field is created, and charges in the conductor feel a force due to the electrical field. The current density $\vec{J}$ that results depends on the electrical field and the properties of the material. This dependence can be very complex. In some materials, including metals at a given temperature, the current density is approximately proportional to the electrical field. In these cases, the current density can be modeled as

$$\vec{J} = \sigma \vec{E},$$

where $\sigma$ is the electrical conductivity. The electrical conductivity is analogous to thermal conductivity and is a measure of a material's ability to conduct or transmit electricity. Conductors have a higher electrical conductivity than insulators. Since the electrical conductivity is $\sigma = J/E$, the units are

$$[\sigma] = \frac{[J]}{[E]} = \frac{\mathrm{A/m^2}}{\mathrm{V/m}} = \frac{\mathrm{A}}{\mathrm{V \cdot m}}.$$
Here, we define a unit named the ohm with the Greek symbol uppercase omega, $\Omega$. The unit is named after Georg Simon Ohm, whom we will discuss later in this chapter. The $\Omega$ is used to avoid confusion with the number 0. One ohm equals one volt per amp: $1\,\Omega = 1\,\mathrm{V/A}$. The units of electrical conductivity are therefore $(\Omega \cdot \mathrm{m})^{-1}$.
Conductivity is an intrinsic property of a material. Another intrinsic property of a material is the resistivity, or electrical resistivity. The resistivity of a material is a measure of how strongly a material opposes the flow of electrical current. The symbol for resistivity is the lowercase Greek letter rho, $\rho$, and resistivity is the reciprocal of electrical conductivity:

$$\rho = \frac{1}{\sigma}.$$
The unit of resistivity in SI units is the ohm-meter $(\Omega \cdot \mathrm{m})$. We can define the resistivity in terms of the electrical field and the current density:

$$\rho = \frac{E}{J}.$$
The greater the resistivity, the larger the field needed to produce a given current density. The lower the resistivity, the larger the current density produced by a given electrical field. Good conductors have a high conductivity and low resistivity. Good insulators have a low conductivity and a high resistivity. Table 9.1 lists resistivity and conductivity values for various materials.
|Material|Temperature Coefficient α (1/°C)|
|Manganin (Cu, Mn, Ni alloy)|0.000002|
|Constantan (Cu, Ni alloy)|0.00003|
|Nichrome (Ni, Fe, Cr alloy)|0.0004|
The materials listed in the table are separated into categories of conductors, semiconductors, and insulators, based on broad groupings of resistivity. Conductors have the smallest resistivity, and insulators have the largest; semiconductors have intermediate resistivity. Conductors have varying but large, free charge densities, whereas most charges in insulators are bound to atoms and are not free to move. Semiconductors are intermediate, having far fewer free charges than conductors, but having properties that make the number of free charges depend strongly on the type and amount of impurities in the semiconductor. These unique properties of semiconductors are put to use in modern electronics, as we will explore in later chapters.
Copper wires are routinely used for extension cords and house wiring for several reasons. Copper has the highest electrical conductivity, and therefore the lowest resistivity, of all nonprecious metals. Also important is the tensile strength, the maximum amount of tensile stress a material can withstand before breaking; copper has a high tensile strength. A third important characteristic is ductility, a measure of a material's ability to be drawn into wires and of its flexibility, and copper has a high ductility. Summarizing, for a conductor to be a suitable candidate for making wire, there are at least three important characteristics: low resistivity, high tensile strength, and high ductility. What other materials are used for wiring, and what are their advantages and disadvantages?
View this interactive simulation to see what the effects of the cross-sectional area, the length, and the resistivity of a wire are on the resistance of a conductor. Adjust the variables using slide bars and see if the resistance becomes smaller or larger.
Temperature Dependence of Resistivity
Looking back at Table 9.1, you will see a column labeled “Temperature Coefficient.” The resistivity of some materials has a strong temperature dependence. In some materials, such as copper, the resistivity increases with increasing temperature. In fact, in most conducting metals, the resistivity increases with increasing temperature. The increasing temperature causes increased vibrations of the atoms in the lattice structure of the metals, which impede the motion of the electrons. In other materials, such as carbon, the resistivity decreases with increasing temperature. In many materials, the dependence is approximately linear and can be modeled using a linear equation:

$$\rho \approx \rho_0 \left[1 + \alpha (T - T_0)\right],$$

where $\rho$ is the resistivity of the material at temperature $T$, $\alpha$ is the temperature coefficient of the material, and $\rho_0$ is the resistivity at the reference temperature $T_0$, usually taken as $T_0 = 20\,^\circ\mathrm{C}$.
Note also that the temperature coefficient is negative for the semiconductors listed in Table 9.1, meaning that their resistivity decreases with increasing temperature. They become better conductors at higher temperature, because increased thermal agitation increases the number of free charges available to carry current. This property of decreasing with temperature is also related to the type and amount of impurities present in the semiconductors.
We now consider the resistance of a wire or component. The resistance is a measure of how difficult it is to pass current through a wire or component. Resistance depends on the resistivity. The resistivity is a characteristic of the material used to fabricate a wire or other electrical component, whereas the resistance is a characteristic of the wire or component.
To calculate the resistance, consider a section of conducting wire with cross-sectional area $A$, length $L$, and resistivity $\rho$. A battery is connected across the conductor, providing a potential difference $V$ across it (Figure 9.13). The potential difference produces an electrical field that is proportional to the current density, according to $\vec{E} = \rho \vec{J}$.
The magnitude of the electrical field across the segment of the conductor is equal to the voltage divided by the length, $E = V/L$, and the magnitude of the current density is equal to the current divided by the cross-sectional area, $J = I/A$. Using this information and recalling that the electrical field is proportional to the resistivity and the current density, we can see that the voltage is proportional to the current:

$$E = \rho J \;\Rightarrow\; \frac{V}{L} = \rho \frac{I}{A} \;\Rightarrow\; V = \left(\rho \frac{L}{A}\right) I.$$
The ratio of the voltage to the current is defined as the resistance $R$:

$$R \equiv \frac{V}{I}.$$
The resistance of a cylindrical segment of a conductor is equal to the resistivity of the material times the length divided by the area:

$$R = \rho \frac{L}{A}.$$
The unit of resistance is the ohm, $\Omega$. For a given voltage, the higher the resistance, the lower the current.
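As a quick numerical illustration of this formula, here is a short R sketch using the copper-wire values from the worked example further below; the variable names are our own:

#resistance of a 5 m length of 12-gauge copper wire: R = rho * L / A
rho <- 1.68e-8        # resistivity of copper, ohm-meter
len <- 5.00           # length of the wire, m
r   <- 2.053e-3 / 2   # radius from the 2.053 mm diameter, m
A   <- pi * r^2       # cross-sectional area, m^2
R   <- rho * len / A
print(R)              # about 0.0254 ohm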
A common component in electronic circuits is the resistor. The resistor can be used to reduce current flow or provide a voltage drop. Figure 9.14 shows the symbols used for a resistor in schematic diagrams of a circuit. Two commonly used standards for circuit diagrams are provided by the American National Standard Institute (ANSI, pronounced “AN-see”) and the International Electrotechnical Commission (IEC). Both systems are commonly used. We use the ANSI standard in this text for its visual recognition, but we note that for larger, more complex circuits, the IEC standard may have a cleaner presentation, making it easier to read.
Material and shape dependence of resistance
A resistor can be modeled as a cylinder with a cross-sectional area $A$ and a length $L$, made of a material with a resistivity $\rho$ (Figure 9.15). The resistance of the resistor is $R = \rho L / A$.
The most common material used to make a resistor is carbon. A carbon track is wrapped around a ceramic core, and two copper leads are attached. A second type of resistor is the metal film resistor, which also has a ceramic core. The track is made from a metal oxide material, which has semiconductive properties similar to carbon. Again, copper leads are inserted into the ends of the resistor. The resistor is then painted and marked for identification. A resistor has four colored bands, as shown in Figure 9.16.
Resistances range over many orders of magnitude. Some ceramic insulators, such as those used to support power lines, have resistances of $10^{12}\,\Omega$ or more. A dry person may have a hand-to-foot resistance of $10^{5}\,\Omega$, whereas the resistance of the human heart is about $10^{3}\,\Omega$. A meter-long piece of large-diameter copper wire may have a resistance far below $1\,\Omega$, and superconductors have no resistance at all at low temperatures. As we have seen, resistance is related to the shape of an object and the material of which it is composed.
Current Density, Resistance, and Electrical Field for a Current-Carrying Wire: Calculate the current density, resistance, and electrical field of a 5-m length of copper wire with a diameter of 2.053 mm (12-gauge) carrying a current $I$.
Strategy: We can calculate the current density by first finding the cross-sectional area of the wire, $A = \pi r^2 = \pi (1.0265 \times 10^{-3}\,\mathrm{m})^2 = 3.31 \times 10^{-6}\,\mathrm{m}^2$, and using the definition of current density, $J = I/A$. The resistance can be found using the length of the wire, $L = 5.00\,\mathrm{m}$, the area, and the resistivity of copper, $\rho = 1.68 \times 10^{-8}\,\Omega\cdot\mathrm{m}$, where $R = \rho L / A$. The resistivity and the current density can be used to find the electrical field, $E = \rho J$.
Solution: First, we calculate the current density: $J = \dfrac{I}{A} = \dfrac{I}{3.31 \times 10^{-6}\,\mathrm{m}^2}$.
The resistance of the wire is $R = \rho \dfrac{L}{A} = (1.68 \times 10^{-8}\,\Omega\cdot\mathrm{m}) \dfrac{5.00\,\mathrm{m}}{3.31 \times 10^{-6}\,\mathrm{m}^2} = 2.54 \times 10^{-2}\,\Omega$.
Finally, we can find the electrical field: $E = \rho J$.
Significance: From these results, it is not surprising that copper is used for wires to carry current, because the resistance is quite small. Note that the current density and electrical field are independent of the length of the wire, but the voltage depends on the length.
The resistance of an object also depends on temperature, since $R$ is directly proportional to $\rho$. For a cylinder, we know $R = \rho L / A$, so if $L$ and $A$ do not change greatly with temperature, $R$ has the same temperature dependence as $\rho$. (Examination of the coefficients of linear expansion shows them to be about two orders of magnitude less than typical temperature coefficients of resistivity, so the effect of temperature on $L$ and $A$ is about two orders of magnitude less than on $\rho$.) Thus,
$$R = R_0 (1 + \alpha \Delta T)$$

is the temperature dependence of the resistance of an object, where $R_0$ is the original resistance (usually taken to be the resistance at $20\,^\circ\mathrm{C}$) and $R$ is the resistance after a temperature change $\Delta T$. The color code gives the resistance of the resistor at a temperature of $20\,^\circ\mathrm{C}$.
Numerous thermometers are based on the effect of temperature on resistance (Figure 9.17). One of the most common thermometers is based on the thermistor, a semiconductor crystal with a strong temperature dependence, the resistance of which is measured to obtain its temperature. The device is small, so that it quickly comes into thermal equilibrium with the part of a person it touches.
Calculating Resistance: Although caution must be used in applying $\rho = \rho_0 (1 + \alpha \Delta T)$ and $R = R_0 (1 + \alpha \Delta T)$ for temperature changes greater than about $100\,^\circ\mathrm{C}$, for tungsten the equations work reasonably well for very large temperature changes. A tungsten filament at $20\,^\circ\mathrm{C}$ has a resistance of $0.350\,\Omega$. What would the resistance be if the temperature is increased to $2850\,^\circ\mathrm{C}$?
Strategy: This is a straightforward application of $R = R_0 (1 + \alpha \Delta T)$, since the original resistance of the filament is given as $R_0 = 0.350\,\Omega$ and the temperature change is $\Delta T = 2830\,^\circ\mathrm{C}$.
Solution: The resistance of the hotter filament $R$ is obtained by entering known values into the above equation:

$$R = R_0 (1 + \alpha \Delta T) = (0.350\,\Omega)\left[1 + (4.5 \times 10^{-3}\,/^\circ\mathrm{C})(2830\,^\circ\mathrm{C})\right] = 4.8\,\Omega.$$
Significance: Notice that the resistance changes by more than a factor of 10 as the filament warms to the high temperature, and the current through the filament depends on the resistance of the filament and the voltage applied. If the filament is used in an incandescent light bulb, the initial current through the filament when the bulb is first energized will be higher than the current after the filament reaches the operating temperature.
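As a quick numerical check of this example, here is an R sketch; the tungsten coefficient used, α = 4.5 × 10⁻³/°C, is the standard tabulated value and is an assumption on our part:

R0    <- 0.350        # filament resistance at 20 C, ohm
alpha <- 4.5e-3       # assumed temperature coefficient of tungsten, 1/C
dT    <- 2850 - 20    # temperature change, C
R     <- R0 * (1 + alpha * dT)
print(R)              # about 4.8 ohm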
A strain gauge is an electrical device used to measure strain, as shown below. It consists of a flexible, insulating backing that supports a conducting foil pattern. The resistance of the foil changes as the backing is stretched. How does the strain gauge resistance change? Is the strain gauge affected by temperature changes?
The Resistance of Coaxial Cable: Long cables can sometimes act like antennas, picking up electronic noise, signals from other equipment and appliances. Coaxial cables are used for many applications that require this noise to be eliminated. For example, they can be found in the home in cable TV connections or other audiovisual connections. Coaxial cables consist of an inner conductor of radius $r_i$ surrounded by a second, outer concentric conductor with radius $r_o$ (Figure 9.18). The space between the two is normally filled with an insulator such as polyethylene plastic. A small amount of radial leakage current occurs between the two conductors. Determine the resistance of a coaxial cable of length $L$.
Strategy: We cannot use the equation $R = \rho \frac{L}{A}$ directly. Instead, we look at concentric cylindrical shells, with thickness $dr$, and integrate.
Solution: We first find an expression for $dR$ and then integrate from $r_i$ to $r_o$:

$$dR = \frac{\rho}{2\pi r L}\,dr, \qquad R = \int_{r_i}^{r_o} \frac{\rho}{2\pi L}\,\frac{dr}{r} = \frac{\rho}{2\pi L}\,\ln\frac{r_o}{r_i}.$$
Significance: The resistance of a coaxial cable depends on its length, the inner and outer radii, and the resistivity of the material separating the two conductors. Since this resistance is not infinite, a small leakage current occurs between the two conductors. This leakage current leads to the attenuation (or weakening) of the signal being sent through the cable.
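To get a feel for the magnitude of this leakage resistance, here is a short R sketch; every value below is an illustrative assumption of ours, since the text leaves the cable parameters general:

rho <- 1e13           # assumed resistivity of a polyethylene insulator, ohm-meter
len <- 10             # assumed cable length, m
ri  <- 0.5e-3         # assumed inner-conductor radius, m
ro  <- 3.0e-3         # assumed outer-conductor radius, m
R   <- rho / (2 * pi * len) * log(ro / ri)
print(R)              # about 2.9e11 ohm, an extremely large leakage resistance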
The resistance between the two conductors of a coaxial cable depends on the resistivity of the material separating the two conductors, the length of the cable, and the inner and outer radii of the two conductors. If you are designing a coaxial cable, how does the resistance between the two conductors depend on these variables?
View this simulation to see how the applied voltage and the resistance of the material the current flows through affect the current through the material. You can visualize how collisions between the electrons and the atoms of the material affect the temperature of the material.
The Thirty Years' War was a war fought primarily in Central Europe between 1618 and 1648. One of the most destructive conflicts in human history, it resulted in eight million fatalities not only from military engagements but also from violence, famine, and plague. Casualties were overwhelmingly and disproportionately inhabitants of the Holy Roman Empire, most of the rest being battle deaths from various foreign armies. In terms of proportional German casualties and destruction, it was surpassed only by the period January to May 1945; one of its enduring results was 19th-century Pan-Germanism, when it served as an example of the dangers of a divided Germany and became a key justification for the 1871 creation of the German Empire.
Initially a war between various Protestant and Catholic states in the fragmented Holy Roman Empire, it gradually developed into a more general conflict involving most of the European great powers. These states employed relatively large mercenary armies, and the war became less about religion and more of a continuation of the France–Habsburg rivalry for European political pre-eminence.
The war was preceded by the election of the new Holy Roman Emperor, Ferdinand II, who tried to impose religious uniformity on his domains, forcing Roman Catholicism on his peoples. The northern Protestant states, angered by the violation of their right to choose, which had been granted in the Peace of Augsburg, banded together to form the Protestant Union. Ferdinand II was a devout Roman Catholic and much less tolerant than his predecessor, Rudolf II, who had ruled from the largely Protestant city of Prague. Ferdinand's policies were considered strongly pro-Catholic and anti-Protestant.
These events caused widespread fears throughout northern and central Europe, and triggered the Protestant Bohemians living in the then relatively loose dominion of Habsburg Austria (and also within the Holy Roman Empire) to revolt against their nominal ruler, Ferdinand II. After the so-called Defenestration of Prague deposed the Emperor's representatives in Prague, the Protestant estates and Catholic Habsburgs started gathering allies for war. The Protestant Bohemians ousted the Habsburgs and elected the Calvinist Frederick V, Elector of the Rhenish Palatinate, as the new king of the Kingdom of Bohemia. Frederick took the offer without the support of the Protestant Union. The southern states, mainly Roman Catholic, were angered by this. Led by Bavaria, these states formed the Catholic League to expel Frederick in support of the Emperor. The Empire soon crushed the perceived Protestant rebellion in the Battle of White Mountain, executing leading Bohemian aristocrats shortly after. Protestant rulers across Europe unanimously condemned the Emperor's action.
After the atrocities committed in Bohemia, Saxony finally gave its support to the Protestant Union and decided to fight back. Sweden, at the time a rising military power, soon intervened in 1630 under its king Gustavus Adolphus, transforming what had been simply the Emperor's attempt to curb the Protestant states into a full-scale war in Europe. Habsburg Spain, wishing to finally crush the Dutch rebels in the Netherlands and the Dutch Republic (which was still a part of the Holy Roman Empire), intervened under the pretext of helping its dynastic Habsburg ally, Austria. No longer able to tolerate the encirclement of two major Habsburg powers on its borders, Catholic France entered the coalition on the side of the Protestants in order to counter the Habsburgs.
The Thirty Years' War devastated entire regions, resulting in high mortality, especially among the populations of the German and Italian states, the Crown of Bohemia, and the Southern Netherlands. Both mercenaries and soldiers in fighting armies traditionally looted or extorted tribute to get operating funds, which imposed severe hardships on the inhabitants of occupied territories. The war also bankrupted most of the combatant powers.
The Dutch Republic enjoyed contrasting fortune; it was removed from the Holy Roman Empire and was able to end its revolt against Spain in 1648 and subsequently enjoyed a time of great prosperity and development, known as the Dutch Golden Age, during which it became one of the world's foremost economic, colonial, and naval powers. The Thirty Years' War ended with the Treaty of Osnabrück and the Treaties of Münster, part of the wider Peace of Westphalia. The war altered the previous political order of European powers. The rise of Bourbon France, the curtailing of Habsburg ambition, and the ascendancy of Sweden as a great power created a new balance of power on the continent, with France emerging from the war strengthened and increasingly dominant in the latter part of the 17th century.
[Infobox: Thirty Years' War, part of the European wars of religion. Illustration: Les Grandes Misères de la guerre (The Great Miseries of War) by Jacques Callot, 1632. Belligerents: anti-Habsburg states and allies versus Habsburg states and allies. Total: 8,000,000 dead, of whom roughly 94% were Imperial subjects.]
The Peace of Augsburg (1555), signed by Charles V, Holy Roman Emperor, confirmed the result of the Diet of Speyer (1526), ending the war between German Lutherans and Catholics and establishing the principle of cuius regio, eius religio: each ruler could choose either Lutheranism or Catholicism within his own domain, and subjects who dissented were permitted to emigrate.
Although the Peace of Augsburg created a temporary end to hostilities, it did not resolve the underlying religious conflict, which was made yet more complex by the spread of Calvinism throughout Germany in the years that followed. This added a third major faith to the region, but its position was not recognized in any way by the Augsburg terms, to which only Catholicism and Lutheranism were parties.
The rulers of the nations neighboring the Holy Roman Empire also contributed to the outbreak of the Thirty Years' War:
The Holy Roman Empire was a fragmented collection of largely independent states (a fragmentation that the Peace of Westphalia would solidify). The position of the Holy Roman Emperor was mainly titular, but the emperors, from the House of Habsburg, also directly ruled a large portion of imperial territory (lands of the Archduchy of Austria and the Kingdom of Bohemia), as well as the Kingdom of Hungary. The Austrian domain was thus a major European power in its own right, ruling over some eight million subjects. Another branch of the House of Habsburg ruled over Spain and its empire, which included the Spanish Netherlands, southern Italy, the Philippines, and most of the Americas. In addition to Habsburg lands, the Holy Roman Empire contained several regional powers, such as the Duchy of Bavaria, the Electorate of Saxony, the Margraviate of Brandenburg, the Electorate of the Palatinate and the Landgraviate of Hesse. A vast number of minor independent duchies, free cities, abbeys, prince-bishoprics, and petty lordships (whose authority sometimes extended to no more than a single village) rounded out the empire. Apart from Austria and perhaps Bavaria, none of those entities was capable of national-level politics; alliances between family-related states were common, due partly to the frequent practice of partible inheritance, i.e. splitting a lord's inheritance among his various sons.
Religious tensions remained strong throughout the second half of the 16th century. The Peace of Augsburg began to unravel: some converted bishops refused to give up their bishoprics, and certain Habsburg and other Catholic rulers of the Holy Roman Empire and Spain sought to restore the power of Catholicism in the region. This was evident from the Cologne War (1583–88), a conflict initiated when the prince-archbishop of the city, Gebhard Truchsess von Waldburg, converted to Calvinism. As he was an imperial elector, this could have produced a Protestant majority in the College that elected the Holy Roman Emperor, a position that was always held by a Roman Catholic.
In the Cologne War, Spanish troops expelled the former prince-archbishop and replaced him with Ernst of Bavaria, a Roman Catholic. After this success, the Catholics regained peace, and the principle of cuius regio, eius religio began to be exerted more strictly in Bavaria, Würzburg, and other states. This forced Lutheran residents to choose between conversion or exile. Lutherans also witnessed the defection of the lords of the Palatinate (1560), Nassau (1578), Hesse-Kassel (1603), and Brandenburg (1613) to the new Calvinist faith. Thus, at the beginning of the 17th century, the Rhine lands and those south to the Danube were largely Catholic, while Lutherans predominated in the north, and Calvinists dominated in certain other areas, such as west-central Germany, Switzerland, and the Netherlands. Minorities of each creed existed almost everywhere, however. In some lordships and cities, the numbers of Calvinists, Catholics, and Lutherans were approximately equal.
Much to the consternation of their Spanish ruling cousins, the Habsburg emperors who followed Charles V (especially Ferdinand I and Maximilian II, but also Rudolf II, and his successor Matthias) were content to allow the princes of the empire to choose their own religious policies. These rulers avoided religious wars within the empire by allowing the different Christian faiths to spread without coercion. This angered those who sought religious uniformity. Meanwhile, Sweden and Denmark-Norway, both Lutheran kingdoms, sought to assist the Protestant cause in the Empire, and wanted to gain political and economic influence there, as well.
Religious tensions broke into violence in the German free city of Donauwörth in 1606. There, the Lutheran majority barred the Catholic residents of the Swabian town from holding an annual Markus procession, which provoked a riot called the 'battle of the flags'. This prompted foreign intervention by Duke Maximilian of Bavaria on behalf of the Catholics. After the violence ceased, Calvinists in Germany (who remained a minority) felt the most threatened. They banded together and formed the Protestant Union in 1608, under the leadership of the Elector Palatine Frederick IV, whose son, Frederick V, married Elizabeth Stuart, the Scottish-born daughter of King James VI of Scotland and I of England and Ireland. The establishment of the Union prompted the Catholics into banding together to form the Catholic League in 1609, under the leadership of Duke Maximilian.
Tensions escalated further in 1609, with the War of the Jülich Succession, which began when John William, Duke of Jülich-Cleves-Berg, the ruler of the strategically important United Duchies of Jülich-Cleves-Berg, died childless. Two rival claimants vied for the duchy. The first was Duchess Anna of Prussia, daughter of Duke John William's eldest sister, Marie Eleonore of Cleves. Anna was married to John Sigismund, Elector of Brandenburg. The second was Wolfgang William, Count Palatine of Neuburg, who was the son of Duke John William's second-eldest sister, Anna of Cleves. Duchess Anna of Prussia claimed Jülich-Cleves-Berg as the heir to the senior line, while Wolfgang William, Count Palatine of Neuburg, claimed Jülich-Cleves-Berg as Duke John William's eldest male heir. Both claimants were Protestants. In 1610, to prevent war between the rival claimants, the forces of Rudolf II, Holy Roman Emperor occupied Jülich-Cleves-Berg until the Aulic Council (Reichshofrat) resolved the dispute. However, several Protestant princes feared that the emperor Rudolf II, a Catholic, intended to keep Jülich-Cleves-Berg for himself to prevent the United Duchies falling into Protestant hands. Representatives of Henry IV of France and the Dutch Republic gathered forces to invade Jülich-Cleves-Berg, but these plans were cut short by the assassination of Henry IV by the Catholic fanatic François Ravaillac. Hoping to gain an advantage in the dispute, Wolfgang William converted to Catholicism; John Sigismund, though, converted to Calvinism (although Anna of Prussia stayed Lutheran). The dispute was settled in 1614 with the Treaty of Xanten, by which the United Duchies were dismantled: Jülich and Berg were awarded to Wolfgang William, while John Sigismund gained Cleves, Mark, and Ravensberg.
The background of the Dutch Revolt also has close relations to the events leading to the Thirty Years' War. It was widely known that the Twelve Years' Truce was set to expire in 1621, and throughout Europe it was recognized that at that time, Spain would attempt to reconquer the Dutch Republic. Forces under Ambrogio Spinola, 1st Marquis of the Balbases, the Genoese commander of the Spanish army, would be able to pass through friendly territories to reach the Dutch Republic. The only hostile state that stood in his way was the Electorate of the Palatinate. Spinola's preferred route would take him through the Republic of Genoa, the Duchy of Milan, the Val Telline, around hostile Switzerland bypassing it along the north shore of Lake Constance, then through Alsace, the Archbishopric of Strasbourg, the Electorate of the Palatinate, and then finally through the Archbishopric of Trier, Jülich and Berg, and on to the Dutch Republic. The Palatinate thus assumed a strategic importance in European affairs out of all proportion to its size. This explains why the Protestant James VI and I arranged for the marriage of his daughter Elizabeth Stuart to Frederick V, Elector Palatine in 1612, in spite of the social convention that a princess would only marry another royal.
By 1617, it was apparent that Matthias, Holy Roman Emperor and King of Bohemia, would die without an heir, with his lands going to his nearest male relative, his cousin Archduke Ferdinand II of Austria, heir-apparent and Crown Prince of Bohemia. With the Oñate treaty, Philip III of Spain agreed to this succession.
Ferdinand, educated by the Jesuits, was a staunch Catholic who wanted to impose religious uniformity on his lands. This made him highly unpopular in Protestant (primarily Hussite) Bohemia. The Bohemian nobility rejected Ferdinand, who had been elected Bohemian Crown Prince in 1617. Ferdinand's representatives were thrown out of a window in Prague and seriously injured, triggering the Thirty Years' War in 1618. This so-called Defenestration of Prague provoked open revolt in Bohemia, which had powerful foreign allies. Ferdinand was upset by the calculated insult, but his intolerant policies in his own lands had left him in a weak position. The Habsburg cause in the next few years would seem to suffer unrecoverable reverses. The Protestant cause seemed to wax toward a quick overall victory.
Without heirs, Emperor Matthias sought to assure an orderly transition during his lifetime by having his dynastic heir (the fiercely Catholic Ferdinand of Styria, later Ferdinand II, Holy Roman Emperor) elected to the separate royal thrones of Bohemia and Hungary. Some of the Protestant leaders of Bohemia feared they would be losing the religious rights granted to them by Emperor Rudolf II in his Letter of Majesty (1609). They preferred the Protestant Frederick V, elector of the Palatinate (successor of Frederick IV, the creator of the Protestant Union). However, other Protestants supported the stance taken by the Catholics, and in 1617, Ferdinand was duly elected by the Bohemian Estates to become the crown prince, and automatically upon the death of Matthias, the next king of Bohemia.
The king-elect then sent two Catholic councillors (Vilem Slavata of Chlum and Jaroslav Borzita of Martinice) as his representatives to Prague Castle in Prague in May 1618. Ferdinand had wanted them to administer the government in his absence. On 23 May 1618, an assembly of Protestants seized them and threw them (and also secretary Philip Fabricius) out of the palace window, which was some 21 m (69 ft) off the ground. Although injured, they survived. This event, known as the (Second) Defenestration of Prague, started the Bohemian Revolt. Soon afterward, the Bohemian conflict spread through all of the Bohemian Crown, including Bohemia, Silesia, Upper and Lower Lusatia, and Moravia. Moravia was already embroiled in a conflict between Catholics and Protestants. The religious conflict eventually spread across the whole continent of Europe and also increased the concerns of a Habsburg hegemony, involving France, Sweden, and a number of other countries.
The death of Emperor Matthias emboldened the rebellious Protestant leaders, who had been on the verge of a settlement. The weaknesses of both Ferdinand (now officially on the throne after the death of Emperor Matthias) and of the Bohemians themselves led to the spread of the war to western Germany. Ferdinand was compelled to call on his nephew, King Philip IV of Spain, for assistance.
The Bohemians, desperate for allies against the emperor, applied to be admitted into the Protestant Union, which was led by their original candidate for the Bohemian throne, the Calvinist Frederick V, Elector Palatine. The Bohemians hinted Frederick would become King of Bohemia if he allowed them to join the Union and come under its protection. However, similar offers were made by other members of the Bohemian Estates to the Duke of Savoy, the Elector of Saxony, and the Prince of Transylvania. The Austrians, who seemed to have intercepted every letter leaving Prague, made these duplicities public. This unraveled much of the support for the Bohemians, particularly in the court of Saxony. In spite of these issues surrounding their support, the rebellion initially favoured the Bohemians. They were joined in the revolt by much of Upper Austria, whose nobility was then chiefly Lutheran and Calvinist. Lower Austria revolted soon after, and in 1619, Count Thurn led an army to the walls of Vienna itself. Moreover, within the British Isles, Frederick V's cause became seen as that of Elizabeth Stuart, described by her supporters as "The Jewell of Europe", leading to a stream of tens of thousands of volunteers to her cause throughout the course of the Thirty Years' War. In the opening phase, an Anglo-Dutch regiment under Horace Vere headed to the Palatinate, a Scots-Dutch regiment under Colonel John Seton moved into Bohemia, and that was joined by a mixed "Regiment of Brittanes" (Scots and English) led by the Scottish Catholic Sir Andrew Gray. Seton's regiment was the last of the Protestant allies to leave the Bohemian theatre after tenaciously holding the town of Třeboň until 1622, and only departing once the rights of the citizens had been secured.
In the east, the Protestant Hungarian Prince of Transylvania, Gabriel Bethlen, led a spirited campaign into Hungary with the support of the Ottoman Sultan, Osman II. Fearful of the Catholic policies of Ferdinand II, Gabriel Bethlen requested a protectorate by Osman II, so "the Ottoman Empire became the one and only ally of great-power status which the rebellious Bohemian states could muster after they had shaken off Habsburg rule and had elected Frederick V as a Protestant king". Ambassadors were exchanged, with Heinrich Bitter visiting Constantinople in January 1620, and Mehmed Aga visiting Prague in July 1620. The Ottomans offered a force of 60,000 cavalry to Frederick and plans were made for an invasion of Poland with 400,000 troops, in exchange for the payment of an annual tribute to the sultan. These negotiations triggered the Polish–Ottoman War of 1620–21. The Ottomans defeated the Poles, who were supporting the Habsburgs in the Thirty Years' War, at the Battle of Cecora in September–October 1620, but were unable to intervene further before the Bohemian defeat at the Battle of the White Mountain in November 1620. Later, the Poles defeated the Ottomans at the Battle of Chocim, and the war ended with a return to the status quo.
The emperor, who had been preoccupied with the Uskok War, hurried to muster an army to stop the Bohemians and their allies from overwhelming his country. Count Bucquoy, the commander of the Imperial army, defeated the forces of the Protestant Union led by Count Mansfeld at the Battle of Sablat, on 10 June 1619. This cut off Count Thurn's communications with Prague, and he was forced to abandon his siege of Vienna. The Battle of Sablat also cost the Protestants an important ally – Savoy, long an opponent of Habsburg expansion. Savoy had already sent considerable sums of money to the Protestants and even troops to garrison fortresses in the Rhineland. The capture of Mansfeld's field chancery revealed the Savoyards' involvement, and they were forced to bow out of the war.
The Spanish sent an army from Brussels under Ambrogio Spinola to support the Emperor. In addition, the Spanish ambassador to Vienna, Don Íñigo Vélez de Oñate, persuaded Protestant Saxony to intervene against Bohemia in exchange for control over Lusatia. The Saxons invaded, and the Spanish army in the west prevented the Protestant Union's forces from assisting. Oñate conspired to transfer the electoral title from the Palatinate to the Duke of Bavaria in exchange for his support and that of the Catholic League.
The Catholic League's army pacified Upper Austria, while Imperial forces under Johann Tserclaes, Count of Tilly, pacified Lower Austria. The two armies united and moved north into Bohemia. Ferdinand II decisively defeated Frederick V at the Battle of White Mountain, near Prague, on 8 November 1620. Bohemia was subsequently re-Catholicized and remained in Habsburg hands for nearly 300 years.
This defeat led to the dissolution of the Protestant Union and the loss of Frederick V's holdings, despite the tenacious defence of Třeboň in Bohemia (under Colonel Seton) until 1622 and of Frankenthal (under Colonel Vere) the following year. Frederick was placed under the imperial ban, and his territories, the Rhenish Palatinate, were given to Catholic nobles. His title of elector of the Palatinate was given to his distant cousin, Duke Maximilian of Bavaria. Frederick, now landless, became a prominent exile abroad and tried to rally support for his cause in Sweden, the Netherlands, and Denmark-Norway.
This was a serious blow to Protestant ambitions in the region. As the rebellion collapsed, the widespread confiscation of property and suppression of the Bohemian nobility ensured the country would return to the Catholic side after more than two centuries of Hussite and other religious dissent. The Spanish, seeking to outflank the Dutch in preparation for renewal of the Eighty Years' War, took Frederick's lands, the Electorate of the Palatinate. The first phase of the war in eastern Germany ended 31 December 1621, when the prince of Transylvania and the emperor signed the Peace of Nikolsburg, which gave Transylvania a number of territories in Royal Hungary.
Some historians regard the period from 1621 to 1625 as a distinct portion of the Thirty Years' War, calling it the "Palatinate phase". With the catastrophic defeat of the Protestant army at White Mountain and the departure of the prince of Transylvania, greater Bohemia was pacified. However, the war in the Palatinate continued: famous mercenary leaders, particularly Count Ernst von Mansfeld, helped Frederick V to defend his territories, the Upper and Rhenish Palatinate. This phase of the war consisted of much smaller battles, mostly sieges conducted by the Imperial and Spanish armies. Mannheim and Heidelberg fell in 1622, and Frankenthal was finally transferred two years later, thus leaving the Palatinate in the hands of the Spaniards.
The remnants of the Protestant armies, led by Mansfeld and Duke Christian of Brunswick, withdrew into Dutch service. Although their arrival in the Netherlands did help to lift the siege of Bergen-op-Zoom (October 1622), the Dutch could not provide permanent shelter for them. They were paid off and sent to occupy neighboring East Frisia. Mansfeld remained in the Dutch Republic, but Christian wandered off to "assist" his kin in the Lower Saxon Circle, attracting the attentions of Count Tilly. With the news that Mansfeld would not be supporting him, Christian's army began a steady retreat toward the safety of the Dutch border. On 6 August 1623, ten miles short of the border, Tilly's more disciplined army caught up with them. In the ensuing Battle of Stadtlohn, Christian was decisively defeated, losing over four-fifths of his army, which had been some 15,000 strong. After this catastrophe, Frederick V, already in exile in The Hague and under growing pressure from his father-in-law, James I, to end his involvement in the war, was forced to abandon any hope of launching further campaigns. The Protestant rebellion had been crushed.
Following the Wars of Religion of 1562–1598, the Protestant Huguenots of France (mainly located in the southwestern provinces) had enjoyed two decades of internal peace under Henry IV, who was originally a Huguenot before converting to Catholicism, and had protected Protestants through the Edict of Nantes. His successor, Louis XIII, under the regency of his Italian Catholic mother, Marie de' Medici, was much less tolerant. The Huguenots responded to increasing persecution by arming themselves, forming independent political and military structures, establishing diplomatic contacts with foreign powers, and finally, openly revolting against the central power. The revolt became an international conflict with the involvement of England in the Anglo-French War (1627–29). The House of Stuart in England had been involved in attempts to secure peace in Europe (through the Spanish Match), and had intervened in the war against both Spain and France. However, defeat by the French (which indirectly led to the assassination of the English leader the Duke of Buckingham), lack of funds for war, and internal conflict between Charles I and his Parliament led to a redirection of English involvement in European affairs – much to the dismay of Protestant forces on the continent. This involved a continued reliance on the Anglo-Dutch brigade as the main agency of English military participation against the Habsburgs, although regiments also fought for Sweden thereafter. France remained the largest Catholic kingdom unaligned with the Habsburg powers, and would later actively wage war against Spain. The French Crown's response to the Huguenot rebellion was not so much a representation of the typical religious polarization of the Thirty Years' War, but rather an attempt at achieving national hegemony by an absolutist monarchy.
Peace following the Imperial victory at Stadtlohn (1623) proved short-lived, with conflict resuming at the initiation of Denmark–Norway. Danish involvement, referred to as the Low Saxon War or Kejserkrigen ("the Emperor's War"), began when Christian IV of Denmark, a Lutheran who also ruled as Duke of Holstein, a duchy within the Holy Roman Empire, helped the Lutheran rulers of the neighbouring principalities in what is now Lower Saxony by leading an army against the Imperial forces in 1625. Denmark-Norway had feared that the recent Catholic successes threatened its sovereignty as a Protestant nation. Christian IV had also profited greatly from his policies in northern Germany. For instance, in 1621, Hamburg had been forced to accept Danish sovereignty.
Denmark-Norway's King Christian IV had obtained for his kingdom a level of stability and wealth that was virtually unmatched elsewhere in Europe. Denmark-Norway was funded by tolls on the Øresund and also by extensive war reparations from Sweden. Denmark-Norway's cause was aided by France, which, together with Charles I, had agreed to help subsidize the war, not least because Christian was a blood uncle to both the Stuart king and his sister Elizabeth of Bohemia through their mother, Anne of Denmark. Some 13,700 Scottish soldiers were sent as allies to help Christian IV under the command of General Robert Maxwell, 1st Earl of Nithsdale. Moreover, some 6,000 English troops under Charles Morgan also eventually arrived to bolster the defence of Denmark-Norway, though it took longer for these to arrive than Christian hoped, not least due to the ongoing British campaigns against France and Spain. Thus, Christian, as war-leader of the Lower Saxon Circle, entered the war with an army of only 20,000 mercenaries, some allied troops from England and Scotland, and a national army 15,000 strong, leading them as Duke of Holstein rather than as King of Denmark-Norway.
To fight Christian, Ferdinand II employed the military help of Albrecht von Wallenstein, a Bohemian nobleman who had made himself rich from the confiscated estates of his Protestant countrymen. Wallenstein pledged his army, which numbered between 30,000 and 100,000 soldiers, to Ferdinand II in return for the right to plunder the captured territories. Christian, who knew nothing of Wallenstein's forces when he invaded, was forced to retire before the combined forces of Wallenstein and Tilly. Christian's mishaps continued when all of the allies he thought he had were forced aside: France was in the midst of a civil war, Sweden was at war with the Polish–Lithuanian Commonwealth, and neither Brandenburg nor Saxony was interested in changes to the tenuous peace in eastern Germany. Moreover, neither of the substantial British contingents arrived in time to prevent Wallenstein defeating Mansfeld's army at the Battle of Dessau Bridge (1626) or Tilly's victory at the Battle of Lutter (1626). Mansfeld died some months later of illness, apparently tuberculosis, in Dalmatia.
Wallenstein's army marched north, occupying Mecklenburg, Pomerania, and Jutland itself, but proved unable to take the Dano-Norwegian capital Copenhagen on the island of Zealand. Wallenstein lacked a fleet, and neither the Hanseatic ports nor the Poles would allow the building of an imperial fleet on the Baltic coast. He then laid siege to Stralsund, the only belligerent Baltic port with sufficient facilities to build a large fleet; it soon became clear, however, that the cost of continuing the war would far outweigh any gains from conquering the rest of Denmark. Wallenstein feared losing his northern German gains to a Danish-Swedish alliance, while Christian IV had suffered another defeat in the Battle of Wolgast (1628); both were ready to negotiate.
Negotiations concluded with the Treaty of Lübeck in 1629, which stated that Christian IV could retain control over Denmark-Norway (including the duchies of Sleswick and Holstein) if he would abandon his support for the Protestant German states. Thus, in the following two years, the Catholic powers subjugated more land. At this point, the Catholic League persuaded Ferdinand II to take back the Lutheran holdings that were, according to the Peace of Augsburg, rightfully the possession of the Catholic Church. Enumerated in the Edict of Restitution (1629), these possessions included two archbishoprics, 16 bishoprics, and hundreds of monasteries. In the same year, Gabriel Bethlen, the Calvinist prince of Transylvania, died. Only the port of Stralsund continued to hold out against Wallenstein and the emperor, having been bolstered by Scottish 'volunteers' who arrived from the Swedish army to support their countrymen already there in the service of Denmark-Norway. These men were led by Colonel Alexander Leslie, who became governor of the city. As Colonel Robert Monro recorded:
Sir Alexander Leslie being made Governour, he resolved for the credit of his Country-men, to make an out-fall upon the Enemy, and desirous to conferre the credit on his own Nation alone, being his first Essay in that Citie.
Leslie held Stralsund until 1630, using the port as a base to capture the surrounding towns and ports to provide a secure beach-head for a full-scale Swedish landing under Gustavus Adolphus.
Some in the court of Ferdinand II did not trust Wallenstein, believing he sought to join forces with the German princes and thus gain influence over the Emperor. Ferdinand II dismissed Wallenstein in 1630. He later recalled him, after the Swedes, led by King Gustavus Adolphus, had successfully invaded the Holy Roman Empire and turned the tables on the Catholics.
Like Christian IV before him, Gustavus Adolphus came to aid the German Lutherans, to forestall Catholic suzerainty in his back yard, and to obtain economic influence in the German states around the Baltic Sea. He was also concerned about the growing power of the Habsburg monarchy, and like Christian IV before him, was heavily subsidized by Cardinal Richelieu, the chief minister of Louis XIII of France, and by the Dutch. From 1630 to 1634, Swedish-led armies drove the Catholic forces back, regaining much of the lost Protestant territory. During his campaign, he managed to conquer half of the imperial kingdoms, making Sweden the leader of Protestantism in continental Europe until the Swedish Empire ended in 1721.
Swedish forces entered the Holy Roman Empire via the Duchy of Pomerania, which served as the Swedish bridgehead since the Treaty of Stettin (1630). After dismissing Wallenstein in 1630, Ferdinand II became dependent on the Catholic League. Gustavus Adolphus allied with France in the Treaty of Bärwalde (January 1631). France and Bavaria signed the secret Treaty of Fontainebleau (1631), but this was rendered irrelevant by Swedish attacks against Bavaria. At the Battle of Breitenfeld (1631), Gustavus Adolphus's forces defeated the Catholic League led by Tilly. A year later, they met again in another Protestant victory, this time accompanied by the death of Tilly. The upper hand had now switched from the Catholic side to the Protestant side, led by Sweden. In 1630, Sweden had paid at least 2,368,022 daler for its army of 42,000 men. In 1632, it contributed only one-fifth of that (476,439 daler) towards the cost of an army more than three times as large (149,000 men). This was possible due to subsidies from France, and the recruitment of prisoners (most of them taken at the Battle of Breitenfeld) into the Swedish army.
Before that time, Sweden had been at war with the Polish–Lithuanian Commonwealth and could not support the Protestant states properly. For that reason, King Gustavus Adolphus enlisted the support of the Russian Tsar Michael I, who was also fighting the Polish–Lithuanian Commonwealth in the hope of regaining Smolensk. While a separate conflict, the Smolensk War became an integral part of the wider Thirty Years' confrontation.
The majority of mercenaries recruited by Gustavus Adolphus were German, but Scottish soldiers were also very numerous. These were composed of some 12,000 Scots already in service before the Swedes entered the war under the command of General Sir James Spens and colonels such as Sir Alexander Leslie, Sir Patrick Ruthven, and Sir John Hepburn. These were joined by a further 8,000 men under the command of James Marquis Hamilton. The total number of Scots in Swedish service by the end of the war is estimated at some 30,000 men, no less than 15 of whom served with the rank of major-general or above.
With Tilly dead, Ferdinand II turned again to Wallenstein and his large army. Wallenstein marched south, threatening Gustavus Adolphus's supply chain. Gustavus Adolphus knew that Wallenstein was expecting the attack and was prepared for it, but saw no other option. Wallenstein and Gustavus Adolphus clashed at the Battle of Lützen (1632), where the Swedes prevailed, but Gustavus Adolphus was killed.
Ferdinand II's suspicion of Wallenstein resumed in 1633, when Wallenstein attempted to arbitrate the differences between the Catholic and Protestant sides. Ferdinand II may have feared that Wallenstein would switch sides, and arranged for his arrest after removing him from command. One of Wallenstein's soldiers, Captain Devereux, killed him when he attempted to contact the Swedes in the town hall of Eger (Cheb) on 25 February 1634. The same year, the Protestant forces, lacking Gustav's leadership, were smashed at the First Battle of Nördlingen by the Spanish-Imperial forces commanded by Cardinal-Infante Ferdinand.
By the spring of 1635, all Swedish resistance in the south of Germany had ended. After that, the Imperial and Protestant German sides met for negotiations, producing the Peace of Prague (1635), which entailed a delay in the enforcement of the Edict of Restitution for 40 years and allowed Protestant rulers to retain secularized bishoprics held by them in 1627. This protected the Lutheran rulers of northeastern Germany, but not those of the south and west (whose lands had been occupied by the imperial or league armies prior to 1627).
The treaty also provided for the union of the army of the emperor and the armies of the German states into a single army of the Holy Roman Empire (although John George I of Saxony and Maximilian I of Bavaria kept, as a practical matter, independent command of their own forces, now nominally components of the "imperial" army). Finally, German princes were forbidden from establishing alliances amongst themselves or with foreign powers, and amnesty was granted to any ruler who had taken up arms against the emperor after the arrival of the Swedes in 1630.
This treaty failed to satisfy France, however, because of the renewed strength it granted the Habsburgs. France then entered the conflict, beginning the final period of the Thirty Years' War. Sweden did not take part in the Peace of Prague and it continued the war together with France. Initially after the Peace of Prague, the Swedish armies were pushed back by the reinforced Imperial army north into Germany.
France, although mostly Roman Catholic, was a rival of the Holy Roman Empire and Spain. Cardinal Richelieu, the chief minister of King Louis XIII of France, considered the Habsburgs too powerful, since they held a number of territories on France's eastern border, including portions of the Low Countries. Richelieu had already begun intervening indirectly in the war in January 1631, when the French diplomat Hercule de Charnacé signed the Treaty of Bärwalde with Gustavus Adolphus, by which France agreed to support the Swedes with 1,000,000 livres each year in return for a Swedish promise to maintain an army in Germany against the Habsburgs. The treaty also stipulated that Sweden would not conclude a peace with the Holy Roman Emperor without first receiving France's approval.
After the Swedish rout at Nördlingen in September 1634 and the Peace of Prague in 1635, in which the Protestant German princes sued for peace with the Emperor, Sweden's ability to continue the war alone appeared doubtful, and Richelieu made the decision to enter into direct war against the Habsburgs. France declared war on Spain in May 1635 and the Holy Roman Empire in August 1636, opening offensives against the Habsburgs in Germany and the Low Countries. France aligned her strategy with the allied Swedes in Wismar (1636) and Hamburg (1638).
After the Peace of Prague, the Swedes reorganised the Royal Army under Johan Banér and created a new one, the Army of the Weser under the command of Alexander Leslie. The two army groups moved south from spring 1636, re-establishing alliances on the way including a revitalised one with Wilhelm of Hesse-Kassel. The two Swedish armies combined and confronted the Imperials at the Battle of Wittstock. Despite the odds being stacked against them, the Swedish army won. This success largely reversed many of the effects of their defeat at Nördlingen, albeit not without creating some tensions between Banér and Leslie.
Emperor Ferdinand II died in 1637 and was succeeded by his son Ferdinand III, who was strongly inclined toward ending the war through negotiations. His army did, however, win an important success at the Battle of Vlotho in 1638 against a combined Swedish-English-Palatine force. This victory effectively ended the involvement of the Palatinate in the war.
French military efforts met with disaster, and the Spanish counter-attacked, invading French territory. The Imperial general Johann von Werth and Spanish commander Cardinal-Infante Ferdinand of Spain ravaged the French provinces of Champagne, Burgundy, and Picardy, and even threatened Paris in 1636. Then the tide began to turn for the French. The Spanish army was repulsed by Bernhard of Saxe-Weimar, whose victory in the Battle of Breisach pushed the Habsburg armies back from the borders of France. Widespread but indecisive fighting then continued until 1640, with neither side gaining an advantage.
In 1640 the war reached a climax, and the tide turned clearly in favor of the French and against Spain, starting with the siege and capture of the fortress of Arras, which the French took from the Spanish after a siege lasting from 16 June to 9 August 1640. When Arras fell, the way was open for the French to take all of Flanders. The ensuing French campaign against the Spanish forces in Flanders culminated in a decisive French victory at the Battle of Rocroi in May 1643.
Meanwhile, an important act in the war was played out by the Swedes. After the battle of Wittstock, the Swedish army regained the initiative in the German campaign. In the Second Battle of Breitenfeld in 1642, outside Leipzig, the Swedish Field Marshal Lennart Torstenson defeated an army of the Holy Roman Empire led by Archduke Leopold Wilhelm of Austria and his deputy, Prince-General Ottavio Piccolomini, Duke of Amalfi. The imperial army suffered 20,000 casualties. In addition, the Swedish army took 5,000 prisoners and seized 46 guns, at a cost to themselves of 4,000 killed or wounded. The battle enabled Sweden to occupy Saxony and impressed on Ferdinand III the need to include Sweden, and not only France, in any peace negotiations.
Louis XIII died in 1643, leaving his five-year-old son Louis XIV on the throne. Mere days later, the French general Louis II de Bourbon, Duc d'Enghien (later 4th Prince de Condé, known as the Great Condé), defeated the Spanish army at the Battle of Rocroi in 1643. The same year, however, the French were defeated by the Imperial and Catholic League forces at the Battle of Tuttlingen. Cardinal Mazarin, Richelieu's successor as chief minister and soon to face the domestic crisis of the Fronde (which broke out in 1648), began working to end the war.
In 1643, Denmark-Norway made preparations to again intervene in the war, but on the imperial side (against Sweden). The Swedish marshal Lennart Torstenson expelled Danish prince Frederick from Bremen-Verden, gaining a stronghold south of Denmark-Norway and hindering Danish participation as mediators in the peace talks in Westphalia. Torstenson went on to occupy Jutland, and after the Royal Swedish Navy under Carl Gustaf Wrangel inflicted a decisive defeat on the Danish navy at the Battle of Fehmarn Belt on 13 October 1644, Denmark-Norway was forced to sue for peace. With Denmark-Norway out of the war, Torstenson then pursued the Imperial army under Gallas from Jutland south to Bohemia. At the Battle of Jankau near Prague, the Swedish army defeated the Imperial army and was able to occupy the Bohemian lands and threaten Prague as well as Vienna.
In 1645, a French army under Turenne was almost destroyed by the Bavarians at the Battle of Herbsthausen. However, reinforced by Louis II de Bourbon, Prince de Condé, it defeated its opponent in the Second Battle of Nördlingen. The last Catholic commander of note, Baron Franz von Mercy, died in the battle. Even so, the French army's effort on the Rhine had little result, in contrast to its string of victories in Flanders and Artois. The same year, the Swedes entered Austria and besieged Vienna, but they could not take the city and had to retreat. The siege of Brünn (Brno) in Moravia proved fruitless, as the Swedish army met with fierce resistance from the Habsburg forces. After five months, the Swedish army, severely worn out, had to withdraw.
On 14 March 1647, Bavaria, Cologne, France, and Sweden signed the Truce of Ulm. In 1648, the Swedes (commanded by Marshal Carl Gustaf Wrangel) and the French (led by Turenne) defeated the Imperial army at the Battle of Zusmarshausen, and Condé defeated the Spanish at Lens. However, an Imperial army led by Octavio Piccolomini managed to check the Franco-Swedish army in Bavaria, though their position remained fragile. The Battle of Prague in 1648 became the last action of the Thirty Years' War. The general Hans Christoff von Königsmarck, commanding Sweden's flying column, entered the city and captured Prague Castle (where the event that triggered the war – the Defenestration of Prague – took place, 30 years before). There, they captured many valuable treasures, including the Codex Gigas, which is still today preserved in Stockholm. However, they failed to conquer the right-bank part of Prague and the old city, which resisted until the end of the war. These results left only the Imperial territories of Austria safely in Habsburg hands.
News of the French victories in Flanders in 1640 provided strong encouragement to separatist movements against Habsburg Spain in the territories of Catalonia and Portugal. It had been the conscious goal of Cardinal Richelieu to promote a "war by diversion" against the Spanish, stirring up difficulties at home that might encourage them to withdraw from the wider war. To fight this war by diversion, Cardinal Richelieu had been supplying aid to the Catalans and Portuguese.
The Catalan revolt known as the Reapers' War had sprung up spontaneously in May 1640. The threat of an anti-Habsburg territory establishing a powerful base south of the Pyrenees caused an immediate reaction from the monarchy. The Habsburg government sent a large army of 26,000 men to crush the Catalan revolt. On its way to Barcelona, the Spanish army retook several cities, executing hundreds of prisoners, and a rebel army of the recently proclaimed Catalan Republic was defeated at Martorell, near Barcelona, on 23 January 1641. In response, the rebels reinforced their efforts, and the Catalan Generalitat obtained an important military victory over the Spanish army in the Battle of Montjuïc (26 January 1641), on the hill dominating the city of Barcelona. Perpinyà (Perpignan) was taken from the Spanish after a siege of 10 months, and the whole of Roussillon fell under direct French control. The Catalan ruling powers half-heartedly accepted the proclamation of Louis XIII of France as sovereign count of Barcelona, as Lluís I of Catalonia. For the next decade the Catalans fought under French vassalage, taking the initiative after Montjuïc. Meanwhile, increasing French control of political and administrative affairs, in particular in Northern Catalonia, and a firm military focus on the neighbouring Spanish kingdoms of Valencia and Aragon, in line with Richelieu's war against Spain, gradually undermined Catalan enthusiasm for the French.
In parallel, in December 1640, the Portuguese rose up against Spanish rule, and once again Richelieu supplied aid to the insurgents. The ensuing conflict with Spain brought Portugal into the Thirty Years' War as, at least, a peripheral player. From 1641 to 1668, the period during which the two nations were at war, Spain sought to isolate Portugal militarily and diplomatically, and Portugal tried to find the resources to maintain its independence through political alliances and maintenance of its colonial income.
The war by diversion in the Iberian Peninsula had its intended effect. Philip IV of Spain was reluctantly forced to divert his attention from the war in northern Europe to deal with his problems at home. Indeed, even at this time, some of Philip's advisers, including the Count of Oñate, were recommending that Philip withdraw from overseas commitments. With Trier, Alsace, and Lorraine all in French hands and the Dutch in charge of Limburg, the Channel and the North Sea, the "Spanish Road" connecting Habsburg Spain with the Habsburg possessions in the Netherlands and Austria was severed. Philip IV could no longer physically send reinforcements to the Low Countries. On 4 December 1642, Cardinal Richelieu died. However, his policy of war by diversion continued to pay dividends to France. Spain was unable to resist the continuing drumbeat of French victories—Gravelines was lost to the French in 1644, followed by Hulst in 1645 and Dunkirk in 1646. The Thirty Years' War would continue until 1648 when the Peace of Westphalia was signed.
The conflict between France and Spain continued in Catalonia until 1659, with a confrontation between two sovereigns and two Catalan governments, one based in Barcelona under the control of Spain and the other in Perpinyà under the occupation of France. In 1652 the French authorities renounced their claims to the Catalan territories south of the Pyrenees but retained control of Roussillon, leading to the signing of the Treaty of the Pyrenees in 1659, which finally ended the war between France and Spain with the partition of restive Catalonia between the two empires. The Portuguese Restoration War ended with the Treaty of Lisbon in 1668, which terminated the 60-year Iberian Union.
Over a four-year period, the warring parties (the Holy Roman Empire, France, and Sweden) were actively negotiating at Osnabrück and Münster in Westphalia. The end of the war was not brought about by one treaty, but instead by a group of treaties, building on earlier agreements such as the Treaty of Hamburg. On 15 May 1648, the Peace of Münster was signed between Spain and the Dutch Republic, ending the Eighty Years' War. Just over five months later, on 24 October, the Treaties of Münster and Osnabrück were signed, ending the Thirty Years' War.
The war ranks with the worst famines and plagues as the greatest medical catastrophe in modern European history. Lacking good census information, historians have extrapolated the experience of well-studied regions. John Theibault agrees with the conclusions in Günther Franz's Der Dreissigjährige Krieg und das Deutsche Volk (1940), that population losses were great but varied regionally (ranging as high as 50%) and says his estimates are the best available. The war killed soldiers and civilians directly, caused famines, destroyed livelihoods, disrupted commerce, postponed marriages and childbirth, and forced large numbers of people to relocate. The overall reduction of population in the German states was typically 25% to 40%. Some regions were affected much more than others. For example, Württemberg lost three-quarters of its population during the war. In the region of Brandenburg, the losses had amounted to half, while in some areas, an estimated two-thirds of the population died. Overall, the male population of the German states was reduced by almost half. The population of the Czech lands declined by a third due to war, disease, famine, and the expulsion of Protestant population. Much of the destruction of civilian lives and property was caused by the cruelty and greed of mercenary soldiers. Villages were especially easy prey to the marauding armies. Those that survived, like the small village of Drais near Mainz, would take almost a hundred years to recover. The Swedish armies alone may have destroyed up to 2,000 castles, 18,000 villages, and 1,500 towns in Germany, one-third of all German towns.
The war caused serious dislocations to both the economies and populations of central Europe, but may have done no more than seriously exacerbate changes that had begun earlier. Also, some historians contend that the human cost of the war may actually have improved the living standards of the survivors. According to Ulrich Pfister, Germany was one of the richest countries in Europe per capita in 1500, but ranked far lower in 1600. Then, it recovered during the 1600–1660 period, in part thanks to the demographic shock of the Thirty Years' War.
Pestilence of several kinds raged among combatants and civilians in Germany and surrounding lands from 1618 to 1648. Many features of the war spread disease. These included troop movements, the influx of soldiers from foreign countries, and the shifting locations of battle fronts. In addition, the displacement of civilian populations and the overcrowding of refugees into cities led to both disease and famine. Information about numerous epidemics is generally found in local chronicles, such as parish registers and tax records, that are often incomplete and may be exaggerated. The chronicles do show that epidemic disease was not a condition exclusive to war time, but was present in many parts of Germany for several decades prior to 1618.
When the Imperial and Danish armies clashed in Saxony and Thuringia during 1625 and 1626, disease and infection in local communities increased. Local chronicles repeatedly referred to "head disease", "Hungarian disease", and a "spotted" disease identified as typhus. After the Mantuan War, between France and the Habsburgs in Italy, the northern half of the Italian peninsula was in the throes of a bubonic plague epidemic (Italian Plague of 1629–1631). During the unsuccessful siege of Nuremberg, in 1632, civilians and soldiers in both the Imperial and Swedish armies succumbed to typhus and scurvy. Two years later, as the Imperial army pursued the defeated Swedes into southwest Germany, deaths from epidemics were high along the Rhine River. Bubonic plague continued to be a factor in the war. Beginning in 1634, Dresden, Munich, and smaller German communities such as Oberammergau recorded large numbers of plague casualties. In the last decades of the war, both typhus and dysentery had become endemic in Germany.
Contemporary records recall in harrowing detail what life was like: people were starving in huge numbers, and the Church even received reports of cannibalism.
Among the other great social traumas abetted by the war was a major outbreak of witch hunting. This violent wave of inquisitions first erupted in the territories of Franconia during the Danish intervention, and the hardship and turmoil the conflict had produced among the general population enabled the hysteria to spread quickly to other parts of Germany. Residents of areas that had been devastated not only by the conflict but also by the numerous crop failures, famines, and epidemics that accompanied it were quick to attribute these calamities to supernatural causes. In this tumultuous and highly volatile environment, allegations of witchcraft against neighbors and fellow citizens flourished. The sheer volume of trials and executions during this time would mark the period as the peak of the European witch-hunting phenomenon.
The persecutions began in the Bishopric of Würzburg, then under the leadership of Prince-Bishop Philipp Adolf von Ehrenberg. An ardent devotee of the Counter-Reformation, Ehrenberg was eager to consolidate Catholic political authority in the territories he administered. Beginning in 1626 Ehrenberg staged numerous mass trials for witchcraft in which all levels of society (including the nobility and the clergy) found themselves targeted in a relentless series of purges. By 1630, 219 men, women, and children had been burned at the stake in the city of Würzburg itself, while an estimated 900 people are believed to have been put to death in the rural areas of the province.
Concurrently with the events in Würzburg, Prince-Bishop Johann von Dornheim embarked upon a similar series of large-scale witch trials in the nearby territory of Bamberg. A specially designed Malefizhaus (‘crime house’) was erected, containing a torture chamber whose walls were adorned with Bible verses, in which to interrogate the accused. The Bamberg witch trials dragged on for five years and claimed upwards of 1,000 lives, among them Dorothea Flock and the city's long-time Bürgermeister (mayor), Johannes Junius. Meanwhile, 274 suspected witches were put to the torch in the Bishopric of Eichstätt in 1629, while another 50 perished in the adjacent Duchy of Palatinate-Neuburg that same year.
Elsewhere, the persecutions arrived in the wake of the early Imperial military successes. The witch hunts expanded into Baden following its reconquest by Tilly while the Imperial victory in the Palatinate opened the way for their eventual spread to the Rhineland. The Rhenish electorates of Mainz and Trier both witnessed mass burnings of suspected witches during this time. In Cologne the territory's Prince-Elector, Ferdinand of Bavaria, presided over a particularly infamous series of witchcraft trials that included the controversial prosecution of Katharina Henot, who was burned at the stake in 1627. During this time the witch hunts also continued their unchecked growth, as new and increased incidents of alleged witchcraft began surfacing in the territories of Westphalia.
The witch hunts reached their peak around the time of the Edict of Restitution in 1629, and much of the remaining institutional and popular enthusiasm for them faded in the aftermath of Sweden's entry into the war the following year. However, in Würzburg, the persecutions continued until the death of Ehrenberg in July 1631. The excesses of this period inspired the Jesuit scholar and poet Friedrich Spee (himself a former "witch confessor") to author his scathing legal and moral condemnation of the witch trials, the Cautio Criminalis. This influential work was later credited with bringing an end to the practice of witch-burning in some areas of Germany and with its gradual abolition throughout Europe.
The Thirty Years' War rearranged the European power structure. During the last decade of the conflict Spain showed clear signs of weakening. While Spain was fighting in France, Portugal – which had been under personal union with Spain for 60 years – acclaimed John IV of Braganza as king in 1640, and the House of Braganza became the new dynasty of Portugal. Spain was forced to accept the independence of the Dutch Republic in 1648, ending the Eighty Years' War. Bourbon France challenged Habsburg Spain's supremacy in the Franco-Spanish War (1635–59), gaining definitive ascendancy in the War of Devolution (1667–68) and the Franco-Dutch War (1672–78), under the leadership of Louis XIV. The war resulted in the partition of Catalonia between the Spanish and French empires in the Treaty of the Pyrenees.
The war resulted in increased autonomy for the constituent states of the Holy Roman Empire, limiting the power of the emperor and decentralizing authority in German-speaking central Europe. For Austria and Bavaria, the result of the war was ambiguous. Bavaria was defeated, devastated, and occupied, but it gained some territory as a result of the treaty in 1648. Austria had utterly failed in reasserting its authority in the empire, but it had successfully suppressed Protestantism in its own dominions. Compared to large parts of Germany, most of its territory was not significantly devastated, and its army was stronger after the war than it was before, unlike that of most other states of the empire. This, along with the shrewd diplomacy of Ferdinand III, allowed it to play an important role in the following decades and to regain some authority among the other German states to face the growing threats of the Ottoman Empire and France.
From 1643 to 1645, during the last years of the war, Sweden and Denmark-Norway fought the Torstenson War. The result of that conflict and the conclusion of the Thirty Years' War helped establish postwar Sweden as a major force in Europe.
The arrangements agreed upon in the Peace of Westphalia in 1648 were instrumental in laying the legal foundations of the modern sovereign nation-state. Aside from establishing fixed territorial boundaries for many of the countries involved in the ordeal (as well as for the newer ones created afterwards), the Peace of Westphalia changed the relationship of subjects to their rulers. Previously, many people had borne overlapping, sometimes conflicting political and religious allegiances. Henceforth, the inhabitants of a given state were understood to be subject first and foremost to the laws and edicts of their respective state authority, not to the claims of any other entity, be it religious or secular. This in turn made it easier to levy national armies of significant size, loyal to their state and its leader, so as to reduce the need to employ mercenaries, whose drawbacks had been exposed a century earlier in The Prince. Among the drawbacks were the depredations (such as the Schwedentrunk) and destruction caused by mercenary soldiers, which defied description and resulted in revulsion and hatred of the sponsor of the mercenaries; there would be no other figure such as Albrecht von Wallenstein, and the age of Landsknecht mercenaries would end.
The war also had more subtle consequences. It was the last major religious war in mainland Europe, ending the large-scale religious bloodshed accompanying the Reformation, which had begun over a century before. Other religious conflicts occurred until 1712, but only on a minor scale and no great wars.
The war also had consequences abroad, as the European powers extended their rivalry via naval power to overseas colonies. In 1630, a Dutch fleet of 70 ships took the rich sugar-exporting areas of Pernambuco (Brazil) from the Portuguese, though the Dutch would lose them by 1654. Fighting also took place in Africa and Asia.
Philip II and Philip III of Portugal used forts built from destroyed temples, including Fort Fredrick in Trincomalee and others in southern Ceylon such as Colombo and Galle Fort, to fight sea battles with the Dutch, Danish, French, and English. This marked the beginning of the island's loss of sovereignty. Later, the Dutch and English succeeded the Portuguese as colonial rulers of the island.
John IV of Portugal
King of Portugal from 1640 as a result of the national revolution, or restoration, which ended 60 years of Spanish rule.
Events from the year 1631 in Sweden.
1632 in Sweden
Events from the year 1632 in Sweden.
Albrecht von Wallenstein
Albrecht Wenzel Eusebius von Wallenstein (24 September 1583 – 25 February 1634), also von Waldstein (Czech: Albrecht Václav Eusebius z Valdštejna), was a Bohemian military leader and nobleman who gained prominence during the Thirty Years' War (1618–1648), on the Catholic side. His outstanding martial career made him one of the most influential men in the Holy Roman Empire by the time of his death. Wallenstein became the supreme commander of the armies of the Habsburg Emperor Ferdinand II and was a major figure of the Thirty Years' War.
Wallenstein was born in Heřmanice into a poor Protestant noble family. He acquired a multilingual university education across Europe and converted to Roman Catholicism in 1606. A marriage in 1609 to the wealthy widow of a Bohemian landowner gave Wallenstein access to considerable estates and wealth after her death at an early age in 1614. Three years later, Wallenstein embarked on a career as a military contractor by raising forces for the Emperor in the war against Venice.
Wallenstein fought for the Catholics against the Protestant Bohemian revolt in 1618 and was awarded further confiscated estates after the defeat of the rebels at White Mountain in 1620. A series of military victories against the Protestants raised Wallenstein's reputation in the Imperial court, and in 1625 he raised a large army of 50,000 men to further the Imperial cause. A year later, he administered a crushing defeat to the Protestants at Dessau Bridge. For his successes, Wallenstein became an Imperial count palatine and made himself ruler of the lands of the Duchy of Friedland in northern Bohemia. An imperial generalissimo by land, and Admiral of the Baltic Sea from 21 April 1628, Wallenstein found himself released from service on 13 August 1630 after Ferdinand grew wary of his ambition. Several Protestant victories over Catholic armies induced Ferdinand to recall Wallenstein, who then defeated the Swedish king Gustavus Adolphus at Alte Veste; Gustavus Adolphus was killed shortly afterwards at Lützen. Dissatisfied with the Emperor's treatment of him, Wallenstein considered allying with the Protestants. However, he was assassinated at Eger in Bohemia by one of the army's officials, with the emperor's approval.
Battle of Fleurus (1622)
The Battle of Fleurus of 29 August 1622 was fought in the Spanish Netherlands between a Spanish army and the Protestant forces of Ernst von Mansfeld and Christian of Brunswick during the Eighty Years' War and the Thirty Years' War. The bloody struggle left the Protestants mangled and the Spanish masters of the field, but unable to block the enemy's march.
Battle of Humenné
The Battle of Humenné (Hungarian: Homonnai csata, Polish: bitwa pod Humiennem or pierwsza odsiecz wiedeńska) took place on 22–23 November 1619 near Humenné (eastern Slovakia) during the first period of the Thirty Years' War between the Transylvanian army and the joined loyalist Hungarian and Polish forces of Lisowczycy. It was the only battle of that war to involve the Polish-Lithuanian Commonwealth.
The battle was won by the Polish cavalry led by Walenty Rogawski against the Transylvanian corps commanded by George Rákóczi, the future Prince of Transylvania.
Battle of Nördlingen (1634)
The Battle of Nördlingen (German: Schlacht bei Nördlingen; Spanish: Batalla de Nördlingen; Swedish: Slaget vid Nördlingen) was fought in 1634 during the Thirty Years' War, on 27 August (Julian calendar) or 6 September (Gregorian calendar). The Roman Catholic Imperial army, bolstered by 15,000 Spanish soldiers, won a crushing victory over the combined Protestant armies of Sweden and their German-Protestant allies (Heilbronn Alliance).
After the failure of the tercio system in the first Battle of Breitenfeld in 1631, the professional Spanish troops deployed at Nördlingen proved the tercio system could still contend with the deployment improvements devised by Maurice of Orange and Gustavus Adolphus of Sweden in their respective troops.
Battle of Rocroi
The Battle of Rocroi of 19 May 1643 resulted in the victory of a French army under the Duc d'Enghien against the Spanish army under General Francisco de Melo, only five days after the accession of Louis XIV to the French throne, late in the Thirty Years' War. The battle is considered by many to mark the end of the perceived invincibility of the Spanish tercios that had dominated European battlefields in the 16th century and the first half of the 17th century. After Rocroi, the Spanish abandoned the tercio system and began to use linear Dutch-style battalions like the French.
Battle of Sablat
The Battle of Sablat or Záblatí occurred on 10 June 1619, during the Bohemian period of the Thirty Years' War. The battle was fought between a Roman Catholic Imperial army led by Charles Bonaventure de Longueval, Count of Bucquoy and the Protestant army of Ernst von Mansfeld.
When Mansfeld was on his way to reinforce General Hohenlohe, who was besieging Budějovice (German: Budweis), Buquoy intercepted Mansfeld near the small village of Záblatí (German: Sablat), about 25 km (16 mi) northwest of Budějovice, and brought him to battle. Mansfeld suffered defeat, losing at least 1,500 infantry and his baggage train. As a result, the Bohemians had to lift the siege of Budějovice.
Battle of White Mountain
The Battle of White Mountain (Czech: Bitva na Bílé hoře, German: Schlacht am Weißen Berg) was an important battle in the early stages of the Thirty Years' War.
It was fought on 8 November 1620. An army of 15,000 Bohemians and mercenaries under Christian of Anhalt was defeated by 27,000 men of the combined armies of Ferdinand II, Holy Roman Emperor led by Charles Bonaventure de Longueval, Count of Bucquoy and the German Catholic League under Johann Tserclaes, Count of Tilly at Bílá Hora ("White Mountain") near Prague. The site is now part of the city of Prague.
The battle marked the end of the Bohemian period of the Thirty Years' War and decisively influenced the fate of the Czech lands for the next 300 years. Its aftermath drastically changed the religious landscape of the Czech lands after two centuries of Protestant dominance. Roman Catholicism remained the majority faith in the Czech lands until the late 20th century.
Bohemian Revolt
The Bohemian Revolt (German: Böhmischer Aufstand; Czech: České stavovské povstání; 1618–1620) was an uprising of the Bohemian estates against the rule of the Habsburg dynasty that began the Thirty Years' War. It was caused by both religious and power disputes. The estates were almost entirely Protestant, mostly Utraquist Hussite, but there was also a substantial German population that endorsed Lutheranism. The dispute culminated after several battles in the final Battle of White Mountain, where the estates suffered a decisive defeat. This began the re-Catholicisation of the Czech lands, but also expanded the scope of the Thirty Years' War by drawing Denmark and Sweden into it. The conflict spread to the rest of Europe and devastated vast areas of central Europe, including the Czech lands, which were particularly stricken by its violent atrocities.
Charles Bonaventure de Longueval, 2nd Count of Bucquoy
Charles Bonaventure de Longueval, Count of Bucquoy (Czech: Karel Bonaventura Buquoy, Spanish: Carlos Buenaventura de Longueval, Conde de Bucquoy, full name in French: Charles Bonaventure de Longueval comte de Bucquoy, German: Karl Bonaventura Graf von Buquoy) (Arras, 9 January 1571 – Nové Zámky, 10 July 1621) was a military commander who fought for the Spanish Netherlands during the Eighty Years' War and for the Holy Roman Empire during the Thirty Years' War.
Christian IV of Denmark
Christian IV (12 April 1577 – 28 February 1648) was king of Denmark and Norway and duke of Holstein and Schleswig from 1588 to 1648. His 59-year reign is the longest of Danish monarchs, and of Scandinavian monarchies.
A member of the house of Oldenburg, Christian began his personal rule of Denmark in 1596 at the age of 19. He is frequently remembered as one of the most popular, ambitious, and proactive Danish kings, having initiated many reforms and projects. Christian IV obtained for his kingdom a level of stability and wealth that was virtually unmatched elsewhere in Europe. He engaged Denmark in numerous wars, most notably the Thirty Years' War (1618–48), which devastated much of Germany, undermined the Danish economy, and cost Denmark some of its conquered territories.
He rebuilt and renamed the Norwegian capital Oslo as Christiania after himself, a name used until 1925.
Defenestrations of Prague
The Defenestrations of Prague (Czech: Pražská defenestrace, German: Prager Fenstersturz, Latin: Defenestratio Pragensis) were two incidents in the history of Bohemia in which multiple people were defenestrated (that is, thrown out of a window). The first occurred in 1419, and the second in 1618, although the term "Defenestration of Prague" more commonly refers to the second. Each helped to trigger a prolonged religious conflict inside Bohemia (the Hussite Wars, 1st defenestration) or beyond (Thirty Years' War, 2nd defenestration).
Peace of Westphalia
The Peace of Westphalia (German: Westfälischer Friede) was a series of peace treaties signed between May and October 1648 in the Westphalian cities of Osnabrück and Münster, largely ending the European wars of religion, including the Thirty Years' War. The treaties of Westphalia brought to an end a calamitous period of European history which caused the deaths of approximately eight million people. Scholars have identified Westphalia as the beginning of the modern international system, based on the concept of Westphalian sovereignty, though this interpretation has been seriously challenged.
The negotiation process was lengthy and complex. Talks took place in two different cities, as each side wanted to meet on territory under its own control. A total of 109 delegations arrived to represent the belligerent states, but not all delegations were present at the same time. Three treaties were signed to end each of the overlapping wars: the Peace of Münster, the Treaty of Münster, and the Treaty of Osnabrück. These treaties ended the Thirty Years' War (1618–1648) in the Holy Roman Empire, with the Habsburgs and their Catholic allies on one side, battling the Protestant powers (Sweden, Denmark, Dutch, and Holy Roman principalities) allied with France (Catholic but anti-Habsburg). The treaties also ended the Eighty Years' War (1568–1648) between Spain and the Dutch Republic, with Spain formally recognising the independence of the Dutch.
The Peace of Westphalia established the precedent of peace reached by diplomatic congress. A new system of political order arose in central Europe, based upon peaceful coexistence among sovereign states. Inter-state aggression was to be held in check by a balance of power, and a norm was established against interference in another state's domestic affairs. As European influence spread across the globe, these Westphalian principles, especially the concept of sovereign states, became central to international law and to the prevailing world order.
Prince Rupert of the Rhine
Prince Rupert of the Rhine, Duke of Cumberland (17 December 1619 – 29 November 1682) was a noted German soldier, admiral, scientist, sportsman, colonial governor and amateur artist during the 17th century. He first came to prominence as a Cavalier cavalry commander during the English Civil War.
Rupert was a younger son of the German prince Frederick V, Elector Palatine and his wife Elizabeth, the eldest daughter of James VI of Scotland and I of England. Thus Rupert was the nephew of King Charles I of England, who made him Duke of Cumberland and Earl of Holderness, and the first cousin of King Charles II of England. His sister Electress Sophia was the mother of George I of Great Britain.
Prince Rupert had a varied career. He was a soldier from a young age, fighting against Spain in the Netherlands during the Eighty Years' War (1568–1648), and against the Holy Roman Emperor in Germany during the Thirty Years' War (1618–1648). Aged 23, he was appointed commander of the Royalist cavalry during the English Civil War, becoming the archetypal Cavalier of the war and ultimately the senior Royalist general. He surrendered after the fall of Bristol and was banished from England. He served under Louis XIV of France against Spain, and then as a Royalist privateer in the Caribbean. Following the Restoration, Rupert returned to England, becoming a senior English naval commander during the Second and Third Anglo-Dutch wars, engaging in scientific invention, art, and serving as the first governor of the Hudson's Bay Company. Rupert died in England in 1682, aged 62.
Rupert is considered to have been a quick-thinking and energetic cavalry general, but ultimately undermined by his youthful impatience in dealing with his peers during the Civil War. In the Interregnum, Rupert continued the conflict against Parliament by sea from the Mediterranean to the Caribbean, showing considerable persistence in the face of adversity. As the head of the Royal Navy in his later years, he showed greater maturity and made impressive and long-lasting contributions to the Royal Navy's doctrine and development. As a colonial governor, Rupert shaped the political geography of modern Canada: Rupert's Land was named in his honour, and he was a founder of the Hudson's Bay Company. He also played a role in the early Atlantic slave trade. Rupert's varied and numerous scientific and administrative interests combined with his considerable artistic skills made him one of the more colourful individuals of the Restoration period.
Relief of Thionville
The Relief of Thionville took place from 6 to 7 June 1639, during the Thirty Years' War.

Sack of Magdeburg
The Sack of Magdeburg, also called Magdeburg Wedding (German: Magdeburger Hochzeit) or Magdeburg's Sacrifice (German: Magdeburgs Opfergang), was the destruction of the Protestant city of Magdeburg on 20 May 1631 by the Imperial Army and the forces of the Catholic League, resulting in the deaths of around 20,000 people, including both defenders and non-combatants. The event is considered the worst massacre of the Thirty Years' War. Magdeburg, then one of the largest cities in Germany, with well over 25,000 inhabitants in 1630, did not recover its importance until well into the 18th century.

Torstenson War
The Torstenson War, Hannibal controversy or Hannibal War (Norwegian: Hannibalsfeiden) was a short period of conflict between Sweden and Denmark–Norway from 1643 to 1645, towards the end of the Thirty Years' War. The names refer to the Swedish general Lennart Torstenson and the Norwegian governor-general Hannibal Sehested.
Denmark had withdrawn from the Thirty Years' War in the Treaty of Lübeck (1629). In the Second Treaty of Brömsebro (1645), which concluded the war, Denmark had to make huge territorial concessions and exempt Sweden from the Sound Dues, de facto acknowledging the end of the Danish dominium maris baltici. Danish efforts to reverse this result in the Second Northern, Scanian and Great Northern wars failed.

Treaty of Stettin (1653)
The Treaty of Stettin (German: Grenzrezeß von Stettin) of 4 May 1653 settled a dispute between Brandenburg and Sweden, who both claimed succession in the Duchy of Pomerania after the extinction of the local House of Pomerania during the Thirty Years' War. Brandenburg's claims were based on the Treaty of Grimnitz (1529), while Sweden's claims were based on the Treaty of Stettin (1630). The parties had agreed on a partition of the Swedish-held duchy in the Peace of Westphalia (1648), and with the Treaty of Stettin determined the actual border between the partitions. Western Pomerania became Swedish Pomerania, Farther Pomerania became Brandenburgian Pomerania.
Thirty Years' War
Treaties of the Thirty Years' War (1618–1648)
Nuclear fusion, the process that powers the sun and the stars, is heralded as the ultimate energy source for the future of mankind. The promise of nuclear fusion to provide clean and safe energy from abundant fuel resources continues to drive global research and development. However, the goal of reaching so-called "breakeven" energy conditions, whereby the energy produced from a fusion reaction is greater than the energy put in, has yet to be demonstrated. It is the role of ITER, an international collaborative experimental reactor, to achieve breakeven conditions and to demonstrate technologies that will allow fusion to be realized as a viable energy source. However, with significant delays and cost overruns to ITER, there has been increased interest in the development of other fusion reactor concepts, particularly by private-sector start-ups, all of which are exploring the possibility of an accelerated route to fusion. This chapter gives a comprehensive overview of nuclear fusion science and provides an account of current approaches and their progress towards the realization of future fusion energy power plants. The range of technical issues, associated technology development challenges and future commercial opportunities are explored, with a focus on magnetic confinement approaches.
- nuclear fusion
- power plant
- plant design
- plant operation
- environmental impact
1. Introduction: a brief history of nuclear fusion
Under enormous pressures and temperatures, two or more atomic nuclei are able to overcome the coulombic barrier and, through the quantum tunneling effect, join together to create a heavier nucleus, releasing enormous amounts of energy in the process. This reaction is called nuclear fusion. It is the process that combines lighter elements to create heavier elements, and the energy it releases is what powers the sun and the stars. Nuclear fusion has the potential to provide almost limitless energy for mankind, as its primary fuel sources are abundant, there is no risk of a runaway reaction or meltdown, and no long-lived high-level radioactive waste or harmful greenhouse emissions are produced (see Section 5). As such, the possibility of creating a star on earth and harnessing the energy from the fusion reaction is heralded as the solution to all of mankind's energy problems. The aim of this study is to provide an overview of current development efforts into nuclear fusion as an energy source.
Figure 1 illustrates the binding energy of atomic nuclei and shows the differences between the easily confused nuclear fusion and nuclear fission reactions. Nuclear fission involves the splitting of unstable heavy atomic nuclei (illustrated by the leftward arrow on the right-hand side of the figure), whereas fusion involves the fusing of light atomic nuclei (illustrated by the upward arrow on the left-hand side of the figure).
Nuclear fusion was first observed earlier than nuclear fission. In 1934, Oliphant, Harteck and Lord Rutherford bombarded deuterium ions into target compounds containing deuterium, and observed that a new isotope of hydrogen and a neutron had been produced. They theorized that a "hydrogen transmutation effect" had taken place, and it was later proven that this effect had in fact been the D-D fusion reaction (the reaction between two deuterium isotopes).
Although discovered prior to World War II, efforts to utilize the fusion reaction as a source of energy did not materialize until the 1950s. Meanwhile, scientific understanding of the nuclear fission reaction, and of the mechanisms by which energy could be produced from it, led to rapid commercialization of fission technology in the early 1960s. During the same period, progress in nuclear fusion research was slow, and the field was described as being in "purgatory" due to the relative lack of progress as compared with fission. However, unlike fission, which occurs spontaneously in certain elements in nature and whose reaction can be controlled relatively easily in manmade reactors, fusion occurs naturally only in stars (and in the supernovae of stars), where the intense gravitational pressure and high temperatures allow the fusion reaction to take place. Given the extremity of the conditions required, it was immediately clear that the task of mimicking a star and harnessing energy from the fusion reaction on Earth would be a significant challenge.
In 1965, promising experimental results were published by the Soviet Union from a novel nuclear fusion device called a tokamak. The tokamak, whose name is a Russian acronym derived from "toroidal'naya kamera s aksial'nym magnitnym polem" (which translates to "toroidal chamber with axial magnetic field"), is a donut-shaped device designed to confine a high temperature plasma using a magnetic field, as explained in more detail in Section 3. At first, experimental findings from tokamak experiments were largely ignored by the international fusion research community. However, by the beginning of the 1970s the efficacy of the tokamak became apparent, and many countries followed by developing their own tokamak machines. Notable tokamaks around the world include the Joint European Torus (JET) in the U.K. (designed, constructed and operated under the European Union and Euratom, starting in the late 1970s and continuing operation today) and the Japan Torus-60 (JT-60) in Japan (which is now being upgraded to JT-60SA, "Super Advanced").
Since the end of the Cold War, focus has shifted towards international collaboration on the development of fusion. Together, the European Union, India, Japan, Russia, the United States, South Korea and China are involved in the construction of the ITER tokamak (previously an acronym for International Thermonuclear Experimental Reactor, but now solely referred to as ITER, which is Latin for "the way"). A diagram showing the cross-section of ITER is shown in Figure 2. Under construction in Saint-Paul-lès-Durance, in Provence in southern France, ITER will be the largest fusion reactor in the world to date and is considered the next major step on the path towards fusion energy. The primary objective is for ITER to yield a fusion reaction that produces ten times more energy than is needed to sustain it (a fusion gain of Q = 10; see Section 2), but it will also demonstrate the scientific and technological feasibility of fusion energy using tokamaks. "First plasma" in ITER (the start of preliminary D-D operation) is currently scheduled for 2025, but the start of full power D-T operation (the reaction between deuterium and tritium), which will allow an attempt at achieving breakeven conditions, has been pushed back almost two decades from the original start date and will now begin in 2035.
2. Fundamentals of nuclear fusion science
During the fusion of two or more light atomic nuclei, the mass of the product of the fusion reaction is slightly less than the sum of the masses of the reactants. This difference in mass is converted into energy, as was theorized by Albert Einstein and later proven. The relationship between mass and energy is shown in Eq. (1), where E, m and c are the energy released, the mass difference, and the speed of light, respectively. In the case of a nuclear fusion reaction, the surplus binding energy is released as kinetic energy of the product particles, as detailed below.
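Eq. (1) itself does not survive in this text; from the definitions given, it is the standard mass-energy relation:

```latex
% Eq. (1): mass-energy equivalence, with m the mass difference (mass defect)
E = m c^{2} \tag{1}
```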
As shown in Figure 1, a helium-4 (4He) nucleus has the greatest binding energy per nucleon of any nucleus lighter than carbon-12 (12C), and it is therefore the most stable of the light elements. In terms of effectively utilizing energy from the nuclear fusion reaction, and producing a stable product, it is thus most desirable to fuse light atoms in reactions that produce a helium nucleus. Fusing lighter atomic nuclei has another significant advantage: the lower electric charge of lighter atoms leads to a reduced level of repulsion when interacting with other atomic nuclei, increasing the likelihood that a fusion reaction will occur. Nuclear fusion reactions between the hydrogen isotopes deuterium (2D) and tritium (3T), as shown in Eqs. (2)–(4), are therefore the best candidates for the fuel cycle in future fusion reactors.
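Eqs. (2)–(4) are also missing from this text. The assignment of the equation numbers is inferred from the surrounding discussion (Eqs. (2) and (3) are later identified as the two D-D branches), but the reactions and energy partitions below are the standard ones, consistent with the 14.1 MeV and 3.5 MeV figures quoted in Section 4.1:

```latex
% Eqs. (2)-(4), reconstructed: the two D-D branches and the D-T reaction
\mathrm{D} + \mathrm{D} \rightarrow \mathrm{T}\,(1.01\,\mathrm{MeV}) + \mathrm{p}\,(3.02\,\mathrm{MeV}) \tag{2}
\mathrm{D} + \mathrm{D} \rightarrow {}^{3}\mathrm{He}\,(0.82\,\mathrm{MeV}) + \mathrm{n}\,(2.45\,\mathrm{MeV}) \tag{3}
\mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He}\,(3.5\,\mathrm{MeV}) + \mathrm{n}\,(14.1\,\mathrm{MeV}) \tag{4}
```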
But of the three reactions shown, which offers the best option to be utilized as an energy source? The ease with which a nuclear fusion reaction occurs can be expressed by the reactivity, defined as the probability of a reaction occurring per unit time, per unit density of target nuclei. Reactivities of nuclear fusion reactions can be obtained by multiplying the nuclear cross section σ by the relative velocity v. Figure 3 shows the averaged reactivity <σv> of the reactions in Eqs. (2)–(4), as well as of other possible fusion reactions between light atomic nuclei. The lower the reactivity, the more extreme the conditions must be for the fusion reaction to occur. The figure shows that the reactivity between atomic nuclei of deuterium and tritium (the D-T reaction) is the most favorable, and it is for this reason that efforts are currently focused on producing a D-T fusion reactor. However, despite the fact that the reactivity of the D-T reaction makes it favorable from a physics perspective, complications surrounding the long-term availability of tritium, its unwanted chemical properties, and the higher energy neutrons produced by the reaction (as detailed in Section 6) mean that other fusion fuels that avoid the use of tritium may be preferable. Of these, the D-D fusion reaction, as shown in Eqs. (2) and (3), as well as other aneutronic fusion reactions (reactions not resulting in the production of neutrons), are considered to be the best long-term options for future fusion reactors.
Although the D-T fusion reaction requires the lowest kinetic temperature of the candidate reactions, extremely high temperatures, in the order of tens of keV, are still required. Fusion reactors must be designed to provide and contain the conditions needed for nuclear fusion reactions to occur. In a fusion reactor, atoms of deuterium and tritium are heated to very high temperatures. At high temperatures, the electrons surrounding an atom separate from the nucleus, forming an ionized and electrically conductive substance called a plasma (plasma is the fourth state of matter). For fusion to occur, the plasma containing the fusion fuels must reach the required thermal (kinetic) energy, which means the plasma must be both heated and contained. A plasma can be contained by magnetic fields, as its constituent particles are electrically charged. Being electrically conductive, it is also possible to induce a current in the plasma. There are a number of ways fusion plasmas can be controlled, and these are explained in Section 3.
To generate net positive energy from a fusion reaction, the energy released by the reaction must be greater than the energy required to induce it. In the case of a fusion reactor, the relevant quantity is the ratio of the energy output from nuclear fusion reactions in the plasma to the energy supplied to sustain the plasma, known as the fusion energy gain Q, or Qfus. The conditions required to achieve Q = 1, the point at which the energy produced equals the energy put in, are known as scientific breakeven conditions. In the case of a fusion power plant, auxiliary system power requirements and inefficiencies in the production of electricity mean that scientific breakeven conditions are not sufficient for commercial operation. Instead, the energy production of the fusion reactor must be compared against the total energy consumption of the whole fusion power plant. This is known as the engineering gain Qeng. Similarly, the conditions required to achieve Qeng = 1 are known as "engineering breakeven," and achieving these conditions is the true goal on the pathway to the realization of fusion energy.
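The distinction between Qfus and Qeng can be made concrete with some rough numbers. The sketch below uses purely illustrative values; the conversion efficiency and total plant power draw are assumptions for the example, not design figures from any machine:

```python
# Illustrative sketch of scientific vs. engineering gain.
# All numbers are assumed for illustration only.

P_fusion = 500.0   # fusion power released in the plasma [MW]
P_heating = 50.0   # external heating power delivered to the plasma [MW]
Q_fus = P_fusion / P_heating            # scientific gain: 10.0

eta = 0.33         # assumed thermal-to-electric conversion efficiency
P_gross = (P_fusion + P_heating) * eta  # gross electric output [MWe]
P_plant = 250.0    # assumed total plant draw: heating systems,
                   # cryogenics, pumps, controls [MWe]
Q_eng = P_gross / P_plant               # engineering gain: ~0.73

# Q_fus = 10 yet Q_eng < 1: a plasma can be well past scientific
# breakeven while the plant as a whole still consumes net power.
print(f"Q_fus = {Q_fus:.1f}, Q_eng = {Q_eng:.2f}")
```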
There are three ways to improve the value of Q in order to get closer to fusion conditions. The first two involve increasing the rate of the fusion reaction (increasing the output energy) whilst simultaneously reducing the level of external heating needed (decreasing the input energy). This is expressed by the volumetric rate of the fusion reaction, f, as in Eq. (5), where n is the density of the fuel and <σv> is the averaged reactivity. Since <σv> is approximately proportional to the square of T in the relevant operating range, the volumetric rate of fusion reaction f is proportional to n²T². The rate of the fusion reaction thus depends on both the density and the temperature of the plasma, and increasing the temperature and the density are two of the ways to increase Q.
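Eq. (5) does not survive in this text; a standard form of the volumetric reaction rate, consistent with the description here, is the following (the factor of 1/4 assumes an equal D-T mix, an assumption not stated in the original):

```latex
% Eq. (5), reconstructed: volumetric fusion reaction rate; for a
% 50:50 D-T mix, n_D = n_T = n/2.
f = n_{\mathrm{D}}\, n_{\mathrm{T}}\, \langle \sigma v \rangle
  = \frac{n^{2}}{4}\, \langle \sigma v \rangle \tag{5}
```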
The third way to increase Q pertains to how efficiently a fusion plasma maintains its high-temperature, high-density conditions. This is quantified by the energy confinement time τE, expressed by Eq. (6), where W and Pheat are the thermal energy and the heating power of the plasma, respectively. The confinement time τE is the first-order decay time constant of the plasma thermal energy when the heating power is switched off (Pheat = 0), and is a measure of how well a fusion plasma can be contained.
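Eq. (6), reconstructed from the definitions given (steady-state form):

```latex
% Eq. (6), reconstructed: energy confinement time in steady state
\tau_{E} = \frac{W}{P_{\mathrm{heat}}} \tag{6}
```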
In summary, Qfus is closely linked to the plasma density, the plasma temperature, and how well the thermal energy is retained (the confinement time). All three must be increased to achieve the conditions required for nuclear fusion. These three factors combine as nTτE, which is known as the fusion triple product, and the required value of this product is known as the Lawson criterion. The triple product is used to evaluate the performance of a fusion reactor, and efforts have seen its value increase steadily over time, although little improvement has been made in the past two decades.
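As a rough worked example, the triple product for a hypothetical plasma can be checked against the commonly cited approximate D-T ignition requirement; the plasma parameters and the threshold below are illustrative assumptions, not figures from this chapter:

```python
# Illustrative triple-product check for a D-T plasma.
n     = 1.0e20   # plasma density [m^-3] (assumed)
T     = 15.0     # ion temperature [keV] (assumed)
tau_E = 2.0      # energy confinement time [s] (assumed)

triple_product = n * T * tau_E   # [keV s m^-3]
threshold = 3.0e21               # commonly cited approximate D-T
                                 # ignition requirement near T ~ 15 keV

print(f"n*T*tau_E = {triple_product:.2e} keV s/m^3")
print("meets approximate ignition criterion:", triple_product >= threshold)
```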
3. Nuclear fusion reactors
3.1. Approaches to fusion reactors
Although several approaches to controlling and containing a fusion plasma exist, the two primary approaches being explored are based on the concepts of magnetic confinement and inertial confinement.
Magnetic confinement fusion (MCF) reactors are the more advanced of the two approaches, and they utilize magnetic fields generated by electromagnetic coils to confine a fusion plasma in a donut-shaped (torus) vessel. There are two primary types of torus-shaped fusion devices. The tokamak, such as ITER (as introduced in Section 1), utilizes magnetic coils arranged around a torus-shaped vessel, which generate a toroidal magnetic field to confine the plasma, and uses a secondary poloidal magnetic field to drive the current in the plasma. Tokamak variants, such as the spherical tokamak design, which has a lower aspect ratio (the ratio of the major radius to the minor radius of the torus), exhibit different and potentially better plasma performance, but with the tradeoff of increased difficulty in engineering design.
Another magnetic confinement concept is the stellarator, which uses magnetic coils in a helical configuration around the plasma vessel, creating a twisted magnetic field that confines the plasma without the need to drive a large current through it. The differences between tokamak and stellarator systems are illustrated in Figure 4. The stellarator is considered to be a potential long-term solution, and stellarator-based fusion reactors are actively being explored, but like the spherical tokamak the concept presents a great challenge in engineering design.
Unlike magnetic confinement approaches, inertial confinement fusion (ICF) approaches attempt to externally heat and compress fusion fuel targets to achieve the very high temperatures and even higher densities required to initiate nuclear fusion. For most ICF concepts and approaches, high power lasers are used to compress and heat the fuel.
A third approach, which exploits the parameter space between the conditions needed for magnetic and inertial confinement, has gained traction in recent years and is receiving much scientific, and even commercial, attention. Magnetized target fusion (MTF), sometimes known as magnetized inertial fusion (MIF), looks to exploit the use of higher density plasmas than MCF approaches, but lower power lasers and other drivers than those used in ICF approaches. MTF may offer a unique route to fusion, and the accelerated development of a number of unique concepts has seen significant support, particularly in the United States of America, where the U.S. ARPA-E (Advanced Research Projects Agency-Energy) "ALPHA" program has provided support for exploration of the magnetized target fusion route.
3.2. Progress in reactor development
As described in Section 2, nuclear fusion reactors are often evaluated by their ability to achieve high plasma density n, confinement time τE, and temperature T. As such, the history of fusion reactors is best viewed as a history of the improvement of the fusion gain Q on the Lawson diagram. The Lawson diagram in Figure 5 illustrates the progress in fusion reactor development, showing progression towards the Lawson criterion, with the central ion temperature (T) on the horizontal axis and the product of plasma density and energy confinement time (nτE) on the vertical axis. The diagram shows that since the 1970s fusion reactors have seen a steady improvement towards scientific breakeven conditions (Q = 1). However, whilst the scientific community waits on the delayed ITER project to begin operation, progress towards breakeven has stagnated over the past two decades: the focus on ensuring ITER's success has diverted effort, resources (in both funding and manpower) and time away from the exploration of other pathways, and even of alternative tokamak concepts.
4. Nuclear fusion power plant design and operation
4.1. Harnessing the energy from the fusion reaction
All information presented here pertains only to the D-T fusion reaction, as the majority of development efforts are based on the D-T fuel cycle. However, it is worth mentioning that aneutronic fusion fuels, such as the proton-boron-11 reaction, or those involving helium-3, are considered to present promising and viable alternatives for long-term use as fuels for fusion energy. A comprehensive overview of the range of potential fuel cycles for future fusion reactors can be found in the literature.
The primary energy released by the D-T fusion reaction is in the form of kinetic energy, carried by the products of the reaction. Of the two products, the majority of the energy is carried by the neutron (14.1 MeV), with the remainder carried by the helium nucleus (3.5 MeV). As the helium nucleus carries a positive charge, it is affected by the magnetic fields of the reactor, and as such the majority of the kinetic energy carried by helium nuclei from fusion reactions remains in the plasma, where the transferred energy provides a self-heating effect that helps sustain the fusion reaction. However, the neutrons, being uncharged particles, do not remain in the plasma and instead deposit their energy as heat in the walls of the reactor. Fusion power plant concepts intended for energy production will capture the energy carried by the neutrons in a blanket surrounding the reactor. The heat energy captured by the blanket will be extracted and converted into electricity through a thermodynamic cycle. It should be noted that whilst the neutrons transfer useful energy, they also have the potential to cause significant radiation damage. This is a major issue for future fusion reactors and must be designed for (see Section 5.1).
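The roughly 80/20 split between blanket heating and plasma self-heating follows directly from the figures quoted above; a quick check:

```python
# Energy partition of the D-T reaction products, from the figures above.
E_neutron = 14.1   # MeV, carried by the neutron to the blanket
E_alpha   = 3.5    # MeV, retained in the plasma (self-heating)
E_total   = E_neutron + E_alpha   # 17.6 MeV per reaction

print(f"blanket share: {E_neutron / E_total:.0%}")   # ~80%
print(f"plasma share:  {E_alpha / E_total:.0%}")     # ~20%
```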
4.2. Energy production
The Rankine cycle is a closed steam turbine system used to generate electricity by converting energy from a heat source. A standard Rankine cycle follows a four-stage process. Water enters a boiler, where energy provided by a heat source (in this case a fusion blanket, heated primarily by the energy deposited by neutrons) converts it into saturated steam. The saturated steam passes through a steam turbine, where it expands, transferring its energy to the turbine as rotational energy, which is used to turn a generator and produce electricity. Following the expansion through the turbine, the resulting wet steam enters the condenser, where it is converted back into the liquid phase. Finally, the liquid water passes through a pump, which returns the working fluid from the low pressure of the condenser to the high pressure of the boiler, and the cycle repeats. Currently, the Rankine cycle, as well as variations such as the reheat and regenerative Rankine cycles, is widely used at coal, oil and nuclear fission power plants. Because fusion reactors, like fission reactors, will produce high-grade heat, fusion power plants of the future are also expected to employ a Rankine cycle.
The Brayton cycle is now utilized at many natural gas power plants. As nuclear fusion reactors have the potential to operate at high temperatures, fusion power plants of the future operating on the Brayton cycle also have the potential to achieve a higher energy production efficiency than systems using a Rankine cycle. Proposals to use fusion in more advanced electricity generation cycles include the possibility of using the Integrated Gasification Fuel Cell (IGFC) cycle or the magnetohydrodynamic (MHD) generator cycle. Indeed, the potential for fusion to produce high-grade process heat opens a number of avenues for future energy generation technology. Novel ideas for process heat applications of nuclear fusion, for purposes such as hydrogen production, high-temperature salt water desalination, or biomass gasification, could facilitate the deep decarbonization of a larger proportion of primary energy markets, allowing fusion technology to better support ever-increasing global energy demand.
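The efficiency advantage of higher-temperature cycles can be illustrated with the Carnot bound. The source temperatures below are assumptions chosen for illustration, and real cycles operate well below this ideal limit; the point is simply that higher-grade heat raises the ceiling:

```python
# Carnot upper bound on thermal efficiency for two assumed source
# temperatures, illustrating why higher-grade heat helps.
def carnot_limit(t_hot_k: float, t_cold_k: float = 300.0) -> float:
    """Ideal (Carnot) efficiency between a hot source and a cold sink [K]."""
    return 1.0 - t_cold_k / t_hot_k

steam_rankine = carnot_limit(823.0)   # ~550 C steam (assumed) -> ~0.64 limit
gas_brayton   = carnot_limit(1173.0)  # ~900 C gas   (assumed) -> ~0.74 limit

print(f"Rankine-like limit: {steam_rankine:.2f}")
print(f"Brayton-like limit: {gas_brayton:.2f}")
```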
4.3. Operation modes
There are two proposed modes of plant operation for electricity production in fusion power plants. The first is steady-state mode, which would allow the plant to generate electricity at a constant rate, as is the case in current nuclear fission power plants. Alternatively, fusion power plants could operate in pulsed mode, whereby the reactor system alternates between a plasma burning period (concept designs envisage burn periods ranging from 30 minutes to several hours) and a shut-off period (also known as a dwell period) to recharge for the next pulse. Some plant concepts based on a pulsed operational mode are designed with thermal reservoirs that use residual heat to enable continued electricity generation during dwell periods. Concepts that cannot manage continuous energy production in pulsed mode are considered intermittent and thus may not be viable as an electricity generating source, but may still be useful for process heat applications, as detailed in Section 4.2.
An alternative is to design smaller ("compact") fusion reactor modules, which then operate together in a modular power plant configuration. By designing a power plant so that, of a set of fusion reactor modules, some are operational whilst others are in a dwell period, intermittent fusion devices could still prove viable for electricity production (a minimal scheduling sketch follows below). A modular power plant configuration also opens up the possibility of load-following and co-generation, by switching on a greater number of modules to provide electricity at times of high grid demand and diverting output to process heat applications at times of low grid demand. This concept is possible with some of the approaches being explored by various fusion initiatives, and is suggested in the literature (see Section 6), as well as by an array of concepts employing fission SMRs (Small Modular Reactors), which share many similarities with the modular fusion power plant concept [21, 23].
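As a toy illustration of the staggering idea, the sketch below checks that staggered pulse schedules keep some modules burning at all times. Every timing and module count here is invented for the example, not taken from any plant design:

```python
# Toy model: staggered pulsed modules keeping plant output continuous.
# All numbers are illustrative assumptions, not design figures.
burn, dwell = 60, 20                 # minutes per module: burn, then dwell
cycle = burn + dwell
n_modules = 4
starts = [i * cycle / n_modules for i in range(n_modules)]  # staggered starts

def modules_burning(t: float) -> int:
    """Number of modules in their burn phase at time t [minutes]."""
    return sum(((t - s) % cycle) < burn for s in starts)

# With a 75% duty cycle and 4 staggered modules, at least 3 are always
# burning, so the plant as a whole never stops producing.
print(min(modules_burning(t) for t in range(0, 2 * cycle)))  # -> 3
```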
5. Challenges to the realization of a nuclear fusion power plant
5.1. Science, engineering and technology
The science, engineering and technology challenges ahead on the route to commercial fusion are vast and wide-ranging. Principally, for magnetic confinement D-T reactor concepts, the primary technical issues that must be overcome are:
- Stable operation of fusion plasmas
- Design and development of a heat exhaust system (known as the divertor)
- Development of neutron-resistant fusion materials
- Development of tritium breeding technology
- Development of reliable magnet systems
For the success of any fusion device, the operation and control of a high-performance plasma is crucial. The development of reliable plasma regimes, with mitigation procedures that prevent instabilities and disruptions in the plasma from causing damage to the walls of the reactor, is the subject of much current research around the globe and is a primary focus of the ITER project. Further, to handle the heat from the plasma, and to remove the helium "ash" (the alpha particles) produced by the D-T fusion reaction itself, a plasma heat exhaust, known as a divertor, is also required. An integrated divertor design must be developed to be effective at handling the intense heat (10 MW/m² is the design basis for ITER) and the high neutron loads over the long operational timescales required for a fusion power plant [15, 24]. Divertors are specific to the tokamak approach, but any MCF power plant concept, and perhaps even MTF approaches, will have to consider a power handling and plasma exhaust system.
In addition to materials needed for the divertor, plasma facing materials (sometimes known as the first wall) must also be developed to provide radiation shielding for the protection of the magnets, diagnostics and control equipment, as well as workers and the environment (using a bio-shield), whilst simultaneously allowing neutrons through to the tritium breeding blanket, where the deposited energy is used to produce electricity and to breed new fuel to sustain the fusion fuel cycle (see below). The requirements of fusion materials differ from those of materials used in nuclear fission reactors. The neutrons from the D-T fusion reaction are of much higher energy, and with the reduction of nuclear waste and safety in mind, materials for fusion are subject to judicious selection to ensure that long-lived radioactive waste is not produced through the interaction of fusion neutrons with the surrounding reactor structure. In eliminating certain isotopes, the list of materials available for use in fusion reactors becomes significantly limited, adding a further constraint to an already difficult problem. An example of the trade-offs is apparent in the development of Reduced Activation Ferritic Martensitic (RAFM) steels for fusion applications, which upon neutron irradiation better retain their properties and do not produce long-lived radioactive waste, but which suffer from other performance limitations and a more limited thermal operating range.
Neutron resistant materials also play a critical role in the structure of the tritium breeding blanket systems. The tritium breeding systems have two primary purposes: to breed new tritium fuel through the interaction of D-T fusion neutrons with lithium, and to capture and extract the energy carried by the neutrons in the form of heat so that energy can be produced (see Section 4). Challenges in the design of breeding blankets are wide-ranging. Materials selection, the removal of heat and the associated thermal hydraulic challenges, as well as the breeding mechanism itself, all present disparate problems that require an integrated solution. To date, no proof-of-concept for tritium breeding technology has been demonstrated; a range of designs exist, and preliminary testing and computer modeling have been the focus in the absence of experimental data. However, even if breeding technology is developed, issues surrounding the sustainability of breeding blankets may present an additional hurdle, as discussed in Section 5.5 [15, 26].
The last of the core challenges for fusion is the development of efficient superconducting magnets, which are required to provide the magnetic field to contain a fusion plasma. Until recently, most effort was focused on the use of low temperature superconducting (LTS) magnets, which are capable of carrying the high fields and currents necessary for large scale magnetic confinement fusion reactors, but which are large in size and must be cooled to liquid helium temperatures (~4 K) at significant cryogenic cost. Recent developments in magnet technology have seen the emergence of high-temperature superconductors (HTS), which can carry greater currents at higher field than LTS, and with greater cryogenic efficiency owing to the higher operating temperature ("high-temperature" is a misnomer that refers to potential high-performance magnet operation at 20–30 K, rather than 4 K). Because HTS magnets are capable of operating at higher field, developments in HTS may lead to more efficient, smaller fusion reactors [22, 27].
5.2. Safety and radioactive waste

Unlike nuclear fission reactors, nuclear fusion reactors carry no risk of a runaway reaction or meltdown. In the case of any abnormality in fusion reactor conditions, such as a spike in plasma pressure or density, the plasma will dissociate and collapse, and the fusion reaction will cease. The level of decay heat in a fusion reactor after the termination of the plasma is very low compared with fission reactors, which must be cooled after shutdown to prevent core melt. In principle, nuclear fusion power plants do not require an Emergency Core Cooling System (ECCS), as even in a Loss of Cooling Accident (LOCA) the plasma inside the reactor would dissociate due to the influx of impurities from the reactor vessel walls as the surfaces heat up in the absence of coolant. In such an event, once the plasma has dissociated, all that remains is residual decay heat, for which studies suggest that the small temperature increases do not lead to melting; decay heat in a fusion power plant is therefore considered a low safety risk. Despite this, such accident scenarios will still be considered using the rigorous method of Probabilistic Risk Assessment (PRA).
Nuclear fusion power plants will not produce high level or transuranic radioactive waste like that produced by fission power plants. However, they will still produce large quantities of intermediate level waste, as a result of the high energy neutrons and of the in-vessel tritium-contaminated (tritiated) dust that becomes embedded in the reactor walls and components. Radioactive waste from fusion is unavoidable, even with efforts to develop materials such as RAFM steels to reduce the radioactivity and quantity of waste from the reactor structure. Another important example of the impossibility of avoiding radioactive waste from fusion lies in the selection of breeding blanket materials: the neutron irradiation of lead, a crucial breeding material (neutron multiplier), can result in the production of the isotope polonium-210, which is a strong alpha emitter. Both issues present a challenge, as the waste from fusion power plants will remain significantly radioactive for a number of decades, perhaps even presenting a higher level of radiological risk in the short term than the waste produced in fission reactors, and tritiated materials will require novel handling techniques. While the long-term risks associated with radioactive materials from fusion are considered to be lower than those associated with waste from fission reactors, which can last for millions of years, it is likely that a similar level of regulation and licensing will be required to ensure that plant design and waste handling are fit for purpose, safe, and factored into design and costing.
5.3. Nuclear proliferation and security risks
Nuclear fusion power plant concepts are generally considered to have a lower risk of nuclear proliferation. Nuclear fusion power plants will not handle any currently designated special nuclear materials; those currently safeguarded are 239Pu, 233U and enriched uranium (235U). However, it is not inconceivable that weapons-grade 239Pu or 233U could be produced using the neutrons from a fusion reactor by replacing the blanket materials with natural uranium or thorium. Moreover, tritium, a primary fuel for fusion, can be used to boost the yield of thermonuclear fission and fusion weapons, and thus careful accountancy of the fuel will be required. While the nuclear proliferation and security risks of nuclear fusion power plants are significantly lower than those of fission power plants, it is likely that stringent safeguarding for fusion power plants will be required, developed in accordance with International Atomic Energy Agency (IAEA) recommendations.
5.4. Environmental impacts
Although fusion power plants will release small quantities of tritium, within already defined limits, they will not produce greenhouse gases or other air pollutants. As a result, the environmental impacts associated with nuclear fusion power plants will instead be primarily attributed to construction, operation and maintenance, including fuel supply chains and waste disposal. Environmental Life Cycle Assessments (LCA) suggest that life cycle greenhouse gas emissions of nuclear fusion electricity generation will be somewhere between 6 and 12 g CO2-equivalent per kWh of electricity production. This is in line with recent estimates for renewables and with current light water nuclear power plants (5.7 g/kWh), and an order of magnitude lower than for coal power plants (270 g/kWh) [36, 37, 38].
5.5. Fuel and material resources

The fuels of a D-T nuclear fusion power plant are deuterium and tritium. Deuterium is an isotope of hydrogen with an isotopic abundance of 150 ppm, or about 1 atom in 6700 atoms of hydrogen. As such, deuterium is abundant in seawater and can be extracted using well-established separation processes. Tritium, on the other hand, does not occur in nature in any significant quantity, and is only produced for commercial purposes as a by-product in heavy water CANDU fission reactors. Tritium is a radioactive isotope, decaying with a half-life of 12.3 years, and with supply coming only from CANDU reactors, supply is severely limited: a global stockpile of only around 30 kg is available for commercial use worldwide (and the same stockpile must supply ITER with almost 20 kg). Given that commercial fusion reactors will require 55.6 kg of tritium per year per GW (thermal) of operation, future fusion power plants cannot depend on an external supply of CANDU tritium (or otherwise) for commercial operation. Instead, tritium is expected to be produced by neutron interaction with lithium, specifically the isotope lithium-6, in breeding blankets, under the reaction shown in Eq. (7).
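Eq. (7) itself does not survive in this text; the standard lithium-6 breeding reaction, which is presumably what was shown, is:

```latex
% Eq. (7), reconstructed: tritium breeding from lithium-6
\mathrm{n} + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He}\,(2.05\,\mathrm{MeV}) + \mathrm{T}\,(2.73\,\mathrm{MeV}) \tag{7}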
The quantity of tritium produced in the breeding blanket must be greater than that used by the fusion reactor, and therefore the reactor must have a TBR (tritium breeding ratio) above 1 in order to achieve “tritium self-sufficiency”. Therefore, although the fuel itself that is required for fusion is tritium, the consumable fuel for a fusion power plant is in fact lithium.
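The quoted consumption figure of roughly 55.6 kg of tritium per GW(thermal)-year can be sanity-checked from the 17.6 MeV released per D-T reaction; the following is a back-of-envelope reconstruction, not a calculation from the original text:

```python
# Back-of-envelope check of tritium burn per GW(thermal)-year.
MEV_TO_J = 1.602e-13            # joules per MeV
E_DT = 17.6 * MEV_TO_J          # energy per D-T reaction [J]

P_th = 1.0e9                    # 1 GW of fusion thermal power [W]
reactions_per_s = P_th / E_DT   # one triton burned per reaction

SECONDS_PER_YEAR = 3.156e7
N_per_year = reactions_per_s * SECONDS_PER_YEAR

AVOGADRO = 6.022e23
M_TRITIUM = 3.016               # g/mol
kg_per_year = N_per_year / AVOGADRO * M_TRITIUM / 1000.0

print(f"~{kg_per_year:.1f} kg of tritium per GW(th)-year")  # ~56 kg
```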
On lithium and deuterium sources alone, it is estimated that nuclear fusion power plants could provide the electricity needs of humanity for tens of millions of years (from 14 million to 23 million years). This underlies the claim that the resources for nuclear fusion are 'virtually unlimited.' Current terrestrial deposits of lithium are estimated at 53 million tons. Given that a nuclear fusion power plant with an electrical output of 1 GWe requires between 10 and 35 tons of lithium over its operational lifetime, 2500 such 1 GWe fusion power plants would require up to around 90,000 tons, notwithstanding competition for lithium from advanced technologies such as large scale battery storage. The picture is more complex, however, as many fusion breeder concepts rely on the use of lithium-6 rather than natural lithium. Lithium-6 has an isotopic abundance of only 7.5%, and therefore obtaining 90,000 tons of lithium-6 would require a total of 1.2 million tons of natural lithium. Even so, this is only around 2% of the currently known terrestrial deposits, and a backstop also exists in the form of seawater, in which the abundance of lithium and some other key minerals is relatively high. Thus, although production costs would likely increase, lithium could be procured from seawater in the future [40, 41]. Even with competition for lithium, resources appear plentiful for the purposes of fusion, particularly since technological advancement towards D-D and aneutronic fuel cycles may eventually avoid the need for tritium production altogether.
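The arithmetic behind these estimates is straightforward, using the figures as quoted above:

```python
# Reproducing the lithium resource arithmetic quoted above.
plants = 2500
li_per_plant_t = 35.0                  # upper bound, tons per 1 GWe lifetime
li6_needed_t = plants * li_per_plant_t # ~87,500 t, i.e. "up to ~90,000 t"

LI6_ABUNDANCE = 0.075                  # isotopic fraction of lithium-6
natural_li_t = li6_needed_t / LI6_ABUNDANCE   # ~1.17 million t

RESERVES_T = 53.0e6                    # known terrestrial deposits [t]
print(f"natural lithium needed: {natural_li_t:.2e} t")
print(f"fraction of reserves:   {natural_li_t / RESERVES_T:.1%}")  # ~2%
```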
However, resource limitations do exist with other critical materials required for future nuclear fusion reactors. There are potentially significant issues in the supply of helium gas for the cryogenic cooling systems, beryllium for the tritium breeder blanket, and some critical metals that are required for construction of the fusion reactor structure.
Helium resources are expected to be of limited availability for future fusion reactors, and thus improving the efficiency of cooling systems, as well as efforts to reduce and recycle the overall helium inventory, will be needed to ensure the longevity of the current supply. As above, the lack of tritium available from external sources necessitates the inclusion of a tritium breeding blanket, which makes lithium the primary consumable fuel. However, as even enriched lithium-6 breeding blankets are expected to be insufficient to achieve a TBR > 1, beryllium will be used as a neutron multiplier to increase the neutron yield and give a higher TBR. Total current global deposits of beryllium are estimated at 100,000 to 150,000 tons, and the quantity of beryllium required per reactor is in the order of 400 tons per GWe. Current beryllium deposits would therefore be far from sufficient to support 2500 GWe of installed fusion reactors using beryllium as the neutron multiplier in the tritium breeding blanket. Fortunately, lead-based breeding blankets, which also provide neutron multiplication and as such offer a substitution option, are being explored, as lead is abundant and cheap. Structural materials such as vanadium and niobium are not abundant, and although recycling or even extraction from seawater may be possible, alternative metals for alloying should be sought for longer-term fusion reactors.
6. First-Of-A-Kind fusion power plants
6.1. DEMO projects
In anticipation of the successful demonstration of the technical feasibility of tokamak-based nuclear fusion power plants in ITER, many nations around the world are now proposing Demonstration Nuclear Fusion Power Plant (DEMO) designs. DEMO reactors will be based on the design, engineering and operational experience of ITER, and a DEMO is expected to be the First-Of-A-Kind (FOAK) commercially viable fusion power demonstrator in the world (even though it may never supply power to the electricity grid).
SlimCS is a DEMO power plant proposed by JAEA (the Japan Atomic Energy Agency, whose fusion activities were reformed into QST in 2016). SlimCS will have a fusion thermal output of 2.95 GW and an electrical output of 1 GW, and it will assess the economic viability of a large-scale fusion power plant. The reactor is of similar size to ITER, with a major radius of 5.5 m and an aspect ratio of 2.6. The Japanese government has publicly announced that the decision to construct a DEMO reactor will be made in the 2030s, in order to realize the commercialization of fusion energy by the middle of the twenty-first century. As this puts the SlimCS schedule in the same timeframe as the operation of ITER, it is uncertain to what extent ITER will inform SlimCS.
The European Union has a dedicated team within EUROfusion focused on developing the design of a European version of a DEMO fusion device, EU DEMO. Similarly, EU DEMO is considered to be the last step before the full-scale commercial roll-out of fusion energy technology. EU DEMO is primarily designed to be a pulsed machine, but is expected to deliver long pulse durations with only a short dwell time. The fusion thermal output is currently envisaged to be in the order of 2 GW, with an electrical output of 500 MW, but the design is only at a conceptual stage [44, 45].
6.2. Innovative approaches by private companies
Due to delays and cost overruns in ITER, questions have been raised over whether the ITER pathway is the best route to fusion energy. This has led to increasing uncertainty over future involvement and project funding, most notably from the United States of America. Such issues with the ITER project have not helped to shift the longstanding perception that commercial fusion is "always 30 years away". However, alternative fusion energy concepts are being developed in parallel to the ITER project and are slowly increasing in technological maturity, and such activities have attracted increased international interest over recent years. Delays to the public fusion program, combined with novel ideas, disruptive technologies, and an injection of private funding, have led to the birth of a number of private-sector start-ups, all looking for a faster route to fusion. Both Tokamak Energy Ltd in the UK and Commonwealth Fusion Systems, a spin-out company from MIT in the US, are developing tokamak variants that operate on alternative high-performance plasma regimes and make use of the benefits of HTS magnets [22, 27].
Non-tokamak reactor concepts explore entirely different configurations and consider different ways of initiating, heating and sustaining plasmas. The ARPA-E ALPHA program in the United States of America has supported a number of start-ups exploring the physics space between inertial and magnetic confinement fusion, with the vision that it may lead to an "easier" route to fusion. This approach is intended to support a number of promising concepts simultaneously, spreading the risk of failure and thereby increasing the chances of success [17, 47]. General Fusion, a Canadian start-up company, is developing a reactor based on an entirely novel acoustically-driven system, which will operate in pulsed mode. TAE Technologies (formerly Tri-Alpha Energy), a US-based start-up, is exploring the possibilities of proton-boron-11 fusion, opting to avoid the complications that arise from the D-T fuel cycle, and is already looking at medical applications as a potentially important market. Of further interest, Lockheed Martin has an internal "Skunk Works" team dedicated to developing a novel fusion reactor approach. Although few details have been released, the reactor concept is a magnetic cusp device; patents have been filed, but progress towards the realization of fusion energy from the magnetic cusp device is largely being kept secret. Numerous other fusion start-ups exist, all with the goal of delivering commercial fusion energy. Whether or not these efforts are on the road to success remains to be seen, but a "new fusion race" and the competition it brings is expected to spark technological advancement in a multitude of areas that will likely benefit all in the fusion community, and those outside it, in the pursuit of the holy grail: commercially viable fusion energy.
7. Conclusions: the road to a nuclear fusion power plant
Nuclear fusion has received frequent cynicism, with the longstanding quip that it is "always 30 years away," in reference to the fact that since the 1970s fusion scientists have continually predicted that fusion energy will take 30 years to become commercial. It appears that this has always been the case, and critics say it always will be. With this in mind, it could appear disingenuous to make the same statement here, but the realization of a commercial fusion power plant is indeed expected in around 30 years' time. To conclude this overview study, Figure 6 provides a summary of current efforts, showing key concepts and expected milestones on the pathway to commercial nuclear fusion energy.
This review study has highlighted the current plans for the development of fusion to deliver on the promise of fusion energy. Current plans to realize fusion power are continuously updated; however, they should be treated with caution, as they are subject to uncertainties, unknown obstacles to technological progression, and resource limitations in funding and manpower, all of which may limit the ability to achieve future goals in a timely manner. At the current time, however, it is expected that fusion energy will become a reality in less than 30 years. Every effort should be made to ensure this timescale is realized, so that fusion can fulfill its potential and make a much-needed impact on global energy.
The authors would like to thank the Open University for their support of this work.
Conflict of interest
A proportion of author Richard Pearson's research is sponsored by Tokamak Energy Ltd., UK.
In the second half of the twentieth century the word ghetto in American culture was used to describe overpopulation and poverty in urban settings. Sections of cities, usually housing African American or Latino residents or recent migrants, came to be referred to by this term. It communicated a kind of substandard living that could usually be ascribed to persistent discrimination against such communities, and against immigrants in general. In some instances, a sense of belonging and self-identification emerged from these negative connotations.
A sense of belonging evolved from the racial homogeneity and the experience of shared persecution within the confines of the ghetto. African American or Latino ghettos do not always contain dilapidated buildings or deteriorating housing projects; they may signify home, places with an authentic racial identity or "soul" that yields a yearning for life and an overpowering drive to rise above the immediate physical surroundings. This powerful image has been aptly captured in popular culture, especially literature. In the early twentieth century there were descriptions of a "negro ghetto" in Langston Hughes's plays and in his poem "The Heart of Harlem" (1945). In the latter Hughes captures the essence of this term:
The buildings in Harlem are brick and stone
And the streets are long and wide
But Harlem's much more than these alone
Harlem is what's inside.
This theme was echoed in the later work of other African American authors such as Countee Cullen, Claude McKay, Ralph Ellison, and Lorraine Hansberry. What links these writers is their reference to the mean streets of the ghetto, where life was hard but, despite poverty, crime, and rampant drug activity, dreams could be born that would transport people to a better way of life.
The derivation of the word ghetto is important. The Oxford English Dictionary, in seeking to trace its etymology, admits to a lack of clarity. There is tacit acceptance among scholars, however, that the word derives from the Italian verb gettare (to pour or to cast), a reference to the foundry existing in the city-state of Venice in the early 1500s. Nearly a hundred years later in Thomas Coryat's Coryat's Crudities (1611), the word first appeared in written form in the English language: "a place where the whole fraternity of the Jews dwelleth together, which is called the Ghetto."
From this passage it can be extrapolated that the early history of the word refers to a distinct section of a city, usually separated from the rest of the city by walls or gates. The people who lived within that walled section of the city were Jews. The connotation of negativity and discrimination followed the word from that point onward.
Origin of the Concept
The Jews who lived in Venice were mostly traders and moneylenders by profession. The presence of Jewish moneylenders played an important role in overcoming the religious prohibition, among both Christians and Jews, on collecting interest for loans made to members of one's own faith. As pointed out by Benjamin Ravid in 1992,
The Jewish moneylenders not only helped to solve the socioeconomic problems of an increasingly urbanized society, but also made it less necessary for Christians to violate church law by lending money at interest to fellow Christians. Consequently the Venetian government periodically renewed charters allowing Jews to engage in money lending down to the end of the Republic in 1797.
The beliefs of the Jewish minorities in Venice, across Italy, and throughout Europe stood in stark opposition to the dominant Christian culture of the Renaissance. As a result, the incumbent powers in Venice and the city's population targeted the Jewish community. Laws were passed that required Jews to be grouped together to prevent free movement, especially at night. Another regulation required the Jewish population to wear a star-shaped yellow badge and yellow beret to differentiate them from the Christian majority. This public identification not only enabled the authorities to easily identify Jews, but also attracted taunts and social cruelties. The discrimination was compounded by strict migration laws that prevented the Jewish population from growing through immigration.
The combination of social factors at play during that historical period and the creation of specific laws aimed at the Jewish community introduced the word ghetto into the lexicon. Discriminated against in mainstream society, Jewish traders and moneylenders were forced to remain together. The strict regulations requiring them to live in a specific area of the city implied that they had to live within a section that could be easily monitored. The area near the foundry in Venice was ideal for such purposes. Persistent discrimination coupled with the passage of further laws identified this group to the authorities and the rest of the city's residents, making them subject to abuse. This provided additional motivation for Jews to live inside their own territory, where they were less likely to be subjected to derision.
Accounts of the time suggest that the ghetto itself did not necessarily signify a deterioration in living standards and status. Rather, for many Jews it represented a middle ground between unconditional acceptance and complete expulsion and exclusion. Residing within the ghetto allowed them to pursue their way of life and trade without interference. The ghetto appears to have been a place where Jewish culture and identity thrived.
Shades of Meaning
The word ghetto encompasses several strands of meaning that need to be identified and differentiated. At least three different connotations exist: (1) voluntary Jewish quarters; (2) quarters assigned to the Jews, either for their convenience or protection, or as an inducement for them to settle in a particular area; and (3) an area that was compulsorily Jewish and where no Christians were allowed to live.
These distinctions largely resulted from clerical pressures, social circumstances, and especially the edicts of the Nazi regime. Also important to understanding the meaning of the term is an examination of the environs in which the ghetto typically existed and the reaction of Jews when confronted with compulsory or optional living quarters.
There is little doubt that in the late medieval period many Jews, like modern immigrant groups of the twenty-first century, chose freely to live in close proximity to each other. This desire was often driven by the very practical needs of living a shared religious and social life that was significantly different from that of the rest of the population. This tendency was apparently reinforced in the eleventh and twelfth centuries, when secular authorities in Germanic lands as well as in Reconquista Spain offered their Jewish populations specific quarters. It is important to note that the Jewish quarters at this stage were not compulsory, nor were they used as a means of segregation. Rather, they were provided as an incentive for Jewish traders to conduct their trades within cities.
During this era there was regular contact between Jews and their Christian neighbors, despite the occasional recalcitrance of the Catholic Church, which frowned on such relations. This was captured in the stipulation adopted by the Third Lateran Council in 1179 discouraging Catholics from living among Jews. It was primarily this decree that led many European cities, including Venice, to pass legislation segregating Jews. As a result, Jewish quarters commonly were populated exclusively by Jews, with non-Jews, mainly Catholics, often prevented by law and emerging custom from living in these areas.
Developments in Venice
In Venice itself, Jews were allowed to settle anywhere within the city, with no concerted group settlement except for a brief period between 1382 and 1397. It was also common for Jews to settle on the mainland across the lagoon from Venice, in Padua and Mestre, with the city of Venice allowing them to seek refuge within the city in the event of war. Within this context Jews fled to Venice from neighboring regions during the War of the League of Cambrai in 1509. When Venice successfully defended itself, acquiring the surrounding mainland territories, the refugees were ordered to return home. However, exceptions were made for Jews when city authorities realized the benefits of permitting this population to remain in Venice.
The principal reason for this decision was the potential revenue that might be collected from wealthy Jewish traders at a time of state penury caused by the expense of the recent war. The Jewish community's continued presence in the city would also assure the close proximity of moneylenders for the poor, whose numbers had risen sharply after the war. These circumstances are reflected in the city charter of 1513, which allowed Jews to live in the city and guaranteed their freedom to continue moneylending activities there.
Role of the Church
The enlightened attitude of the Venetian government stood in sharp contrast to the views of civil society and the Church. The clergy regularly preached and incited hatred against the Jews, notably at Easter time, when there were often calls for their expulsion from the city. The delicate balance between polity and Church was resolved by a move on the part of the Venetian government in 1516 that sought to placate such sentiments, and it is to this move that the growing use of the term ghetto can be directly attributed. In a document passed by the Venetian Senate on March 29, the city government agreed to the Jews' continued presence in the city as moneylenders, but indicated that they could no longer dwell anywhere in the city or enjoy freedom of movement by day and night. Instead, the legislation stipulated that all Jews would be required to live on an island referred to as the ghetto nuovo (the new ghetto). To guarantee that Jews lived within this area and remained confined within it at night, gates were erected at two locations. These gates were locked at sunset and only reopened the next morning at sunrise. Jews caught outside the gates between sunset and sunrise could be fined prohibitive amounts.
For the legislation to take effect, the Christians who lived within the area designated as the ghetto nuovo were required to vacate their homes. Landlords of properties within the newly formed ghetto were also allowed to charge their new Jewish tenants rents that were one-third higher than those paid by their former Christian tenants, with the increments exempt from any form of taxation.
Evolution of the Venetian Ghetto
The concept of the ghetto as it is understood in the contemporary world, although shaped by many aspects of current reality, may be traced back to the actions of the Venetian Senate in 1516. Many Jews initially resisted the stipulation that required them to leave their abodes and move to the newly gated area. In addition, while many of the Jews already lived in close proximity to each other, they strongly objected to the idea of being segregated in the manner proposed by the Senate. However, the Venetian government was adamant about its policy, and because it made some concessions in terms of the area's administration, the community gradually accepted the stricture. It was clearly preferable to being cast out of the city altogether and forced to trade from the mainland.
Records also reveal 1541 to be a significant date in distinguishing the Venetian ghetto from the radical concept of ghetto that the Nazis advanced nearly four hundred years later. That year a group of Levantine Jewish merchants visited the city and then approached the authorities, complaining that the existing ghetto was not large enough for them to both reside in and use for the storage of their merchandise. The Venetian government investigated the complaint and found it to be valid. Recognizing the value of the Jewish community in attracting trade to the city, it ordered that the ghetto be extended by appropriating a neighboring area that contained twenty dwellings. This amalgamation was accomplished by building a wall and a footbridge between the ghetto vecchio (old ghetto) and the ghetto nuovo. Thus unlike the Nazis, Venetian authorities did engage in a dialogue with the Jewish community and instigated measures to increase their comfort.
The Concept Spreads
The ghetto and the phenomenon of segregating Jewish populations were not confined to Venice alone. With the driving force being the pressure exerted by the Church on the regulation of the Jewish community and its interactions with Christians, the practice of restricting Jews to specific areas within cities became widespread. This trend was consolidated by the papal bull that Pope Paul IV issued shortly after his selection as pontiff in 1555. Cum Nimis Absurdum required all Jews in the papal states to live on a single street and, if necessary, adjacent streets, with the area clearly separated from the living space of Christians and with a single entrance and exit. Thus, Jews in Rome were required to move into a designated quarter as a result of this edict, and a subsequent reference to the area as a ghetto is contained in Pope Pius IV's papal bull of 1562, entitled Dudum a Felicis.
This trend was repeated across Italy, with similar measures reported in Tuscany and Florence (1571) and Siena (1572). In each case the area in which the Jews were required to live was referred to as a ghetto. The word also entered the lexicon of the Jewish community; it appears in Hebrew documents of the Jews of Padua. From 1582 onward this community engaged in similar discussions with the authorities, which resulted in the creation of a ghetto there in 1601 after Padua had gained its independence from Venice.
In Venice the use of the term rose steadily after the extension of the Jewish quarter in 1541. A second negotiation for additional space occurred in 1633 and it resulted in the designation of a third ghetto area called ghetto nuovissimo, also physically linked to the two earlier ghettos. However, this third ghetto area was not located on the site of the previous foundry. Thus, while the former two ghettos owed their names to the existence of foundries on the land prior to their redesignation as segregated places for Jews, the new ghetto had never been a foundry. It was simply referred to as a "ghetto" since it was the newest enclosed quarter for Jews. Thus, as pointed out by Ravid in 1992, "the term ghetto had come full circle in its city of origin: from an original specific usage as a foundry in Venice, to a generic usage in other cities designating a compulsory segregated, walled-in Jewish quarter with no relation to a foundry, and then to that generic usage also in Venice."
Although the first official ghetto evolved in Venice and can be directly linked to the Senate ruling of 1516, it would be incorrect to suggest that it represented the first segregation of Jews. Prior to that date, there had been quarters in cities that were populated primarily by Jews. An example is the Jewish quarter in Frankfurt established in 1462, predating the Venetian ghetto by more than fifty years. Thus, although the first ghetto was established in Venice in 1516, it was such only in a purely technical, linguistic sense. In a wider context, one that recognizes what a ghetto signifies, the concept of a compulsory, exclusive, enclosed Jewish quarter is arguably older than 1516 and may be traced to the Church's Third Lateran Council.
References in Literature
In terms of English literature, although William Shakespeare's The Merchant of Venice (1596) specifically refers to a Jewish moneylender who almost certainly would have lived in the ghetto, no mention is made of the word. The play does, however, portray the prejudice that existed toward Jews in the sentiments expressed against Shylock, the moneylender, but it is inaccurate in that no reference is made to the fact that Jews were required at that time to wear yellow stars and berets.
The first reference in the English language to ghetto, as mentioned earlier, appeared in the travelogue written by Thomas Coryat in 1611, Coryat's Crudities. The book details the author's travels, including a visit to Venice, and the word ghetto is used to describe the dwelling place for the "whole fraternity of the Jews."
While ghettos persisted for the next two centuries, the phenomenon was only sporadically represented in popular culture and writing. Some scholarship suggests that Western Europe's last ghetto, in Rome, was abolished in 1870. Despite such a claim, it is clear that the practice remained widespread, in Russia and elsewhere around the world. The term ghetto also began to appear with greater frequency in the literature. It appeared in the work of literary critic Edward Dowden in his analysis of Percy Bysshe Shelley's poetry in the late nineteenth century. In two works of the same period, Children of the Ghetto and Dreamers of the Ghetto (1898), Israel Zangwill explores the idea of life in the ghetto.
With the steady rise in discrimination against Jews all over Europe as well as in the Ottoman Empire during the nineteenth century, it became common for many cities to designate Jewish quarters that were often referred to as ghettos. The word came to refer to any area that was densely populated with Jews, even when those places had no strictures that barred Jews from living in the rest of the city among the rest of the population. Eventually, the word lost its Jewish emphasis and simply referred to any densely populated area where a minority group lived. Most often, as in modern-day usage of the word, the rationale for the homogeneity was socioeconomic and cultural rather than legal, thus marking a significant departure from the term's original use in Venice when the law required that Jews be segregated into a ghetto.
The development of the word has resulted in a number of related phrases such as "out of the ghetto" and "ghetto mentality." These suggest that the ghetto is a place from which emancipation is necessary. Although it could be argued that the Jews confined to ghettos sought emancipation of this kind, the factors from which individuals living in modern-day ghettos seek a release are primarily socioeconomic rather than legal. Thus, getting out of the ghetto is a reference to acquiring enough wealth and influence not to have to live within its crowded confines. Similarly, ghetto mentality refers to the feeling of being under pressure or in a state of siege and reacting in a manner that is not otherwise considered rational.
Nonetheless, in the literature and other contexts the word ghetto has been mostly used in its classical Counter-Reformation sense, to refer to compulsory segregation in urban settings.
The crucial step in the evolution of the concept of ghetto to its modern-day meaning occurred during World War II, when the Nazis forced Jews into overcrowded and squalid quarters. Unlike earlier ghettos, these were simply places where Jews were grouped together temporarily on the planned road to total annihilation. With Adolf Hitler's rise to power in the 1930s, the idea of the ghetto reignited with a fury, exhibiting the worst manifestations of forcing a population to live within strict confines. The substandard living conditions introduced in the Nazi ghettos established and reinforced the concept of an archetypical ghetto as a place of severe hardship and misery. Nazi ideology, with its theory of a superior Aryan race, placed the minority Jewish population under direct threat, and ghettos became the means by which this population was segregated and then targeted for the fullest expression of Nazi aggression. German expansion eastward reestablished ghettos all over Europe. It is estimated that the Third Reich's conquests resulted in the creation of over three hundred ghettos in Poland, the Soviet Union, the Baltic States, Czechoslovakia, Romania, and Hungary.
The ghettos of World War II were extremely different from those of the Renaissance period. Although motivated by the same idea of segregation, the Nazi ghettos had a much more sinister purpose: the containment of a population that was soon to be exterminated. Nazi ghettos were demarcated from the rest of the urban landscape by the use of crude wooden fences, high brick walls, and, often, barbed wire.
Life in the Ghetto
Life inside the ghetto has varied tremendously at different points in history and in reaction to the pressures exerted on the community within its confines. In early Venetian times and in the aftermath of the papal bulls, ghettos became a place where Jews could maintain their own affairs and escape the discrimination they suffered in mainstream society. It was also a place where Jewish sociocultural and religious activity thrived, and a feeling of relative security might be experienced. In this era the ghetto had not yet become synonymous with overcrowding and dense overpopulation. As discussed above, when space was at a premium in the Venetian ghetto, Jewish leaders simply renegotiated with the Senate and secured additional areas to enlarge the original ghetto. There are also several accounts by authors and artists of the time, notably Leon Modena, Simone Luzzatto, and Sara Copia Sullam, that depict a society rich in culture and art within the Venetian ghetto.
What is clear is that life inside the ghetto in Venice was in sharp contrast to life in the Nazi ghettos throughout Europe, where existence was directly influenced by outside pressures. A significant factor in the level of Jewish self-expression and creativity during this period was not so much the circumstance that required Jews to live in the ghetto, but rather, "the nature of the outside environment and whether it offered an attractive supplement to traditional Jewish genres of intellectual activity" (Ravid, 1992). Thus, the conditions the Nazi regime imposed on Jews were reflected in the immense overcrowding and suffering of a people forced to live within the confines of a ghetto. In these circumstances daily life was extremely hard, often resulting in despair, as it was compounded by the knowledge that the ghetto was merely an interim stop on the road to annihilation by a regime that was intent on eradicating Jewish identity.
Thus, the meaning of the term ghetto has changed considerably over time. Although its connotations have always been negative, because of the underlying rationale of segregation, these were not necessarily present to the same degree when the word was first coined in Venice of the sixteenth century. The most negative connotation of the word clearly derives from the actions of the Nazis during World War II.
The Ghetto Uprising in Warsaw
Another aspect of the use of the word ghetto can be attributed to a specific incident that occurred during World War II, the ghetto uprising in Warsaw. It captured the public imagination worldwide as a struggle against immense odds. At the outbreak of World War II there were three million Jews in Poland, with as many as four hundred thousand living in Warsaw. The Nazis invaded Poland in September 1939, and by November of the following year they had established the Warsaw ghetto. It was surrounded by an eleven-mile wall, roughly ten to twenty feet high, topped with broken glass and barbed wire. With its original residents displaced elsewhere, some 140,000 Polish Jews were forced into this concentrated area. German soldiers were posted at the ghetto's exits; only those Jews working in war-related industries were allowed to leave and return. Jews from other parts of Poland were gradually moved in, and some estimate that at one time there were as many as half a million people living in the Warsaw ghetto. Nearly 63,000 Jews are estimated to have died from starvation, the cold, and disease during the life of this ghetto.
Conditions within the ghetto regularly resulted in death, and this, coupled with the news in July 1942 that a death camp existed in Treblinka some forty miles away, fueled actions of resistance. By early 1943 the residents of the ghetto began to fight back against their captors. Using a handful of pistols, grenades, and captured weapons, the fighters took on the might of their tormentors, perhaps strengthened by the fatalistic attitude that death in combat was preferable to their meek acceptance of the fate that awaited them at Treblinka and other concentration camps. Drawing the Nazis into a guerilla-style battle, the Jewish fighters achieved some success in skirmishes that mostly took place in narrow alleys and dark apartment passages. The period of resistance lasted a total of eighty-seven days.
The fighting reached a climax on April 19 when columns of approaching German troops, with tanks and armored vehicles, met with fierce resistance. They lost two hundred soldiers—either killed or wounded—and were forced to retreat. By April 23 the fighters issued a public appeal:
Poles, citizens, soldiers of freedom. . . we the slaves of the ghetto convey our heartfelt greetings to you. Every doorstep in the ghetto has become a stronghold and shall remain a fortress until the end. It is our fight for freedom, as well as yours; for our human dignity and national honor as well as yours. . . .
However, the resistance began to crumble as food and ammunition ran out. The Nazis squeezed the ghetto, setting fire to buildings and reducing most of it to rubble as they sought out every last pocket of resistance against their occupying forces. By May the Nazis had regained complete control of the ghetto. Nevertheless, the fierce struggle against impossible odds inspired many other struggles, and in a sense the feeling of shared fraternity that accompanies the use of the word ghetto in modern parlance may be attributed, in part, to it.
Ainsztein, Reuben (1979). The Warsaw Ghetto Revolt. New York: Holocaust Library.
"Ancient Ghetto of Venice." Available from http://www.doge.it/ghetto/indexi.htm.
Grynberg, Michal (2002). Words to Outlive Us: Voices from the Warsaw Ghetto, trans. Philip Boehm. New York: Metropolitan Books.
"Jewish Virtual Library." Available from http://www.usisrael.org/jsource/vjw/Venice.html.
Ravid, Benjamin C. I. (1992). "From Geographical Reality to Historiographical Symbol: The Odyssey of the Word Ghetto." In Essential Papers on Jewish Culture in Renaissance and Baroque Italy, ed. David B. Ruderman. New York: New York University Press.
Stroop, Jürgen (1979). The Jewish Quarter of Warsaw Is No More!, trans. Sybil Milton. New York: Pantheon Books.
University of Bordeaux website. Available from http://wwwwriting.montaigne.u-bordeaux.fr/univ/ghetto.htm.
Uris, Leon (1961). Mila 18. Garden City, N.Y.: Doubleday.
Social scientists have long studied the effects of economic, political, and social inequality on lives, attitudes, and behavior. Central issues in this research include how and why societies tend to treat certain groups negatively, how such groups respond to such conditions, and whether and how society should address the historic and contemporary social problems that result. The history of ghettos provides an exemplar of the effects and implications of differential treatment of minority groups in society.
The term ghetto has been historically used to describe legally sanctioned segregated areas occupied by ethnic minorities. Although some writers contend that the first ghettos were created to segregate Jews during the Roman Empire between the first and fourth century CE, the term is most commonly used to describe segregated Jewish sections in Italy, Germany, and Portugal in the 1200s. The translation of the term ghetto originally referred to the Venice Ghetto in the 1300s and areas of town that were originally iron foundries, or gettos, before being converted to secluded Jewish sections. The term is also translated "gated" to characterize residentially isolated neighborhoods that existed in Venice and parts of northern Italy until as late as the 1600s. Other derivatives of the term refer to a small neighborhood (Italian borghetto) or a "bill of divorce" (Hebrew get). As suggested by these translations, it was illegal for non-Jews to live in ghettos and Jews were prohibited from leaving. To impose these sanctions, the gates of this section of the town were locked at night.
Roman ghettos were created in the mid-1500s via a decree by Pope Paul IV (1476–1559) and lasted until the Papal States were overthrown by Italy in 1870. Roman ghettos were used to separate Jews from Christians, but also enabled the Jewish community to maintain its religious and cultural practices and avoid assimilation. Other Jewish ghettos were located in Prague, Frankfurt, and Mainz. Although legal restrictions were no longer imposed in Europe during the 1800s, many ghettos continued to exist based on cultural or religious dictates. Most European ghettos were destroyed in the nineteenth century following the French Revolution. However, the rise of Adolf Hitler (1889–1945) in Nazi Germany in the twentieth century saw the return of Jewish ghettos in eastern European cities. Other international ghettos include the predominately black area of Soweto in Johannesburg, South Africa; KwaMashu in Durban, South Africa; and ghettos in the United States in South Central Los Angeles, sections of Chicago, and rust-belt cities such as Flint, Michigan.
Ghettos in the United States are generally defined as poor inner-city areas where a disproportionate percentage of ethnic minorities reside. Although African Americans are generally associated with ghettos, Hispanics and whites also live in them. Ghetto neighborhoods are also defined as census tracts where 40 percent or more of residents, regardless of their race or ethnicity, are poor. The latter definition is widely used for comparative purposes in quantitative urban sociological research. Although ghetto residents tend to be ethnic minorities, it is important to note that neighborhoods where a large number of ethnic minorities reside are not necessarily ghettos. For example, prior to deindustrialization, many African Americans were segregated in northern communities such as Chicago’s Bronzeville. Although the area was predominately African American, it was also the place of residence for relatively affluent African American families and businesses. Furthermore, economically stable ethnic enclaves such as Chinatowns and Germantowns exist in many cities across the United States.
The distinguishing factor that generally constitutes a ghetto is the prevalence of poverty. Ghettos are also often distinguished from other racially or ethnically homogeneous communities (for example, a predominately white or black suburban area) because of the inability of many residents to relocate from ghettos—even if they desire to do so. Poverty among many U.S. ghetto residents makes it difficult to out-migrate. The involuntary nature of ghetto areas often reflects constrained residential choices less evident in non-ghetto locales. Thus, as compared to historic ghettos that were formed due to direct or indirect racial or ethnic coercion and isolation, contemporary U.S. ghettos generally reflect class-based formation and the resulting isolation.
U.S. ghettos developed as a result of dramatic postindustrial economic, political, and social changes. Several urban migrations during the early and mid-twentieth century resulted in the exodus of many African Americans to such northern states as Illinois, New York, Michigan, and Pennsylvania in search of employment and to escape segregation and discrimination in the rural South. During the same period, persons of Hispanic descent migrated from Puerto Rico, Mexico, and Central and South America to New York, Miami, and Chicago for similar reasons. Cities provided industrious, less-educated persons with manufacturing jobs to earn a family wage.
After World War II (1939–1945), globalization and deindustrialization resulted in significant international and national economic restructuring. The United States responded to increased international economic competition by spurring technological advances and relocating industrial enterprises abroad and to the suburbs to increase profits. Increased efficiency and fewer manufacturing positions unduly affected residents in northern cities—especially ethnic minorities. From about 1967 to 1987, cities such as New York, Chicago, and Detroit lost more than 50 percent of their manufacturing jobs. By the late 1900s, many persons who had been gainfully employed in northern industrial cities became unemployed or underemployed or were forced to work in service occupations for substantially lower wages and reduced benefits.
The dramatic decline in manufacturing jobs affected a disproportionate percentage of African Americans and Hispanics. The out-migration of manufacturing firms coupled with an exodus of middle-class families and other businesses from cities to suburbs and abroad left many inner cities economically devastated. Economic restructuring coupled with the effects of underserviced infrastructures, inadequate housing to accommodate a growing urban populace, group conflict and competition over limited jobs and space, the inability of many residents to compete for new technology-based jobs, and tensions between the public and private sectors led to the formation and growth of U.S. ghettos. Furthermore, housing discrimination in the form of redlining by lending institutions, discriminatory practices by realtors, and the development of large housing projects resulted in densely populated urban locales of primarily poor ethnic minorities. Economic challenges were exacerbated by the effects of historic and contemporary classism, segregation, and racism. The cumulative effects of these systemic forces contributed to the existence and prevalence of concentrated urban poverty in many U.S. ghettos.
Ghettos were historically developed to physically isolate a group with clearly identifiable physical features and cultural markers. Contemporary U.S. ghettos have had similar effects on many African American and Hispanic residents. Whether the result of legal sanctions or due to societal norms and values, physical isolation in ghettos usually results in social, political, and economic isolation. Such separation also directly or indirectly conveys superior status and privilege on majority group members and, by default, inferior status and privilege on the segregated group.
Although the Venice Ghetto was actually a relatively wealthy section of town where moneylenders and merchants resided, overall, conditions in ghettos were and continue to be negative. Jews could maintain their cultural and religious practices, but a segregated existence meant political and social isolation from the larger society. Because Jews could not purchase land outside the ghetto, population increases resulted in overcrowded conditions and infrastructure problems characterized by narrow streets and tall houses. Jews were allowed to organize and maintain their own political system within the ghetto. However, they often needed official passes to travel outside the ghetto walls.
The Warsaw Ghetto of Nazi Germany housed almost 400,000 Jews and was the largest and possibly most notorious ghetto. These ghettos were walled off, and Jews were shot if they attempted to escape. Other horrific conditions included extreme overcrowding, limited food supplies rationed by the Nazis, poor sanitation, starvation, and disease. Jews who survived these circumstances were forced to contend with the ever-present threat of death or deportation to concentration camps. In 1942 systematic efforts were implemented to deport Jews from ghettos around Europe to eastern ghettos or to concentration camps such as Treblinka in Poland. Historians suggest that various direct and indirect ghetto uprisings broke out, but the majority of residents in the ghettos of Nazi Germany were killed.
Contemporary ghettos are generally characterized by neighborhood and household poverty, social isolation, segregation, discrimination, overcrowding, increased crime, neighborhood disinvestment, and political disempowerment. Ghetto residents are more likely to live in substandard housing, frequent understaffed hospitals and healthcare providers, and have limited access to gainful employment. Businesses such as grocery stores, banks, retailers, and other institutions needed to complete the daily round are also limited and often overpriced or underserviced as compared to their suburban counterparts. Children who reside in ghetto areas tend to attend ill-equipped schools and must often learn at an early age to negotiate potentially crime-ridden neighborhoods. Research also suggests that the life chances of many ghetto residents are constrained largely because their place of residence isolates them from important resources needed to locate gainful employment, establish informational networks, and interact consistently in the larger society. Political disenfranchisement in ghettos is usually a result of isolation by predominately white state-run governments from predominately ethnic minority residents in ghetto spaces. Although studies show that most ghetto residents subscribe to mainstream values and goals, limited opportunities and resources often constrain their chances to realize them.
Urban renewal efforts are underway in many inner-city ghettos—with varied results. In some instances, renewal has resulted in refurbished neighborhoods, increased tax bases, and strengthened infrastructures. Supporters of urban renewal efforts point to the in-migration of young professionals as an important factor in revitalizing ghettos. However, detractors suggest that gentrification benefits persons who in-migrate and are able to use their greater discretionary income to take advantage of depressed housing markets at the expense of existing, poor ethnic minorities, who are often forced out of their homes because they cannot afford to live in the newly renovated, higher-taxed neighborhoods.
Research is inconclusive regarding exactly how to characterize experiences in contemporary ghettos. The prevailing economic, political, and social disenfranchisement does not suggest a positive portrait of life. However, studies attest to the adaptive, resilient nature of many residents that belie the harsh reality of their experiences. A comprehensive discourse on the effects and implications of ghetto life and needed interventions should consider the challenges associated with ghetto living, the strengths of persons who live in ghettos, and the role the larger society should play to improve ghetto conditions.
SEE ALSO Cities; Neighborhoods; Shtetl
Sandra L. Barnes
GHETTO, urban section serving as compulsory residential quarter for Jews. Generally surrounded by a wall shutting it off from the rest of the city, except for one or more gates, the ghetto remained bolted at night. The origin of this term has been the subject of much speculation. It was probably first used to describe a quarter of Venice situated near a foundry (getto, or ghetto) which in 1516 was enclosed by walls and gates and declared to be the only part of the city open to Jewish settlement. Subsequently the term was extended to all Jewish quarters of the same type. Other theories are that the word derives from the Hebrew get, indicating divorce or separation; from the Greek γείτων (neighbor); from the German geheckter [Ort], or fenced place; or from the Italian borghetto (a small section of the town). All can be excluded, except for get, which was sometimes used in Rome to mean a separate section of the city. In any case the institution antedates the word, which is commonly used in several ways. It has come to indicate not only the legally established, coercive ghetto, but also the voluntary gathering of Jews in a secluded quarter, a process known in the Diaspora long before compulsion was exercised. By analogy the word is currently used to describe similar homogeneous quarters of non-Jewish groups, such as immigrant quarters, Black quarters in American cities, native quarters in South African cities, etc.
For historical survey see *Jewish Quarter.
In Muslim Countries
In Muslim countries the Jewish quarter (Arab. ḥāra) in its beginnings never had the character of a ghetto. It was always built on a voluntary basis, and it remained so in later times in the vast Ottoman Empire. Istanbul (Constantinople) was the classic example of a capital in which the Jewish quarters were scattered all over the city. In Shīʿite countries (Persia, Yemen) and in orthodox North Africa (Malikite rite) all non-Muslims were forced to live in separate quarters – for religious reasons (ritual uncleanness). Embassies from Christian countries had to look for their (even temporary) dwellings among the Jews. Christian travelers and pilgrims to the Holy Land always remark that in case there was no Christian hospice in a town, they had to look for hospitality among the Jews. After the regulations compelling the Jews to dwell in separate quarters had been repealed (in the 19th and 20th centuries), and they could freely move out, the majority voluntarily remained in their old quarters. Only after the establishment of the new independent states in North Africa did most of the Jews abandon their old dwellings.
See *Jewish Quarter, in Muslim Countries.
the crystallization of german policy
While ghettos were traditionally permanent places of Jewish residence, in Poland, under the Nazis, the ghettos were viewed as a transitional measure. "I shall determine at which time and with what means the ghetto, and thereby the city of Lodz, will be cleansed of Jews," boasted Hans Biebow, the Nazi official who ran the Lodz ghetto. "In the end … we must burn out this bubonic plague."
A secret memo issued on September 21, 1939, by Reinhard *Heydrich, the chief of the Security Police, to the chiefs of all task forces operating in the conquered Polish territory, established the basic outlines of German policy in the territories.
Heydrich distinguished between the ultimate goal (Endziel), which would require some time to implement, and intermediate goals, which were to be carried out in the short term. Some goals, he noted, could not yet be implemented, whether for technical or for economic reasons. Room was left for innovation.
He wrote: "The instructions and directives below must serve also for the purpose of urging chiefs of the Einsatzgruppen to give practical consideration to the problems involved."
His language was specific: the Endziel, the final goal, was to be distinguished from the term used later, the Endlösung, or Final Solution, a euphemism for the murder of Jewish men, women, and children. The ultimate goal itself was left unarticulated.
The first intermediate goal was concentration. Jews were to be moved from the countryside into the larger cities. Certain areas were to become Judenrein, free of Jews, and smaller communities were to be merged into the larger ones.
Heydrich ordered that in each community a Council of Jewish Elders be established, 24 men appointed from among the local leaders and rabbis, who were to be made fully responsible, "in the literal sense of the word," for implementing future decrees. A census was to be taken, and the councils' leaders were to be held personally responsible for the evacuation of Jews from the countryside. It was unnecessary to indicate what personal responsibility implied; clearly, the lives of individual *Judenrat members were at risk.
Priority was given to the needs of the army and to minimizing economic dislocation, not for the sake of the Jews, but for that of industries essential to German economic interests. Businesses and farms were to be turned over to locals, preferably Germans, and, if essential and no Germans were available, even to Poles.
The Einsatzgruppen were to issue reports: a census of the population and an inventory of resources, industries, and personnel.
It is within this framework that the Jewish Councils were established and that the work of securing the occupied territory began. A second decree dated two months later and signed by Hans *Frank, the head of the General Government, further specified the role of the Jewish Council, which was to have a chairman and a deputy.
"The Jewish Council is obliged to receive through its chairman and his deputy the order of the German official agencies. Its responsibility will be to see that the orders are carried out completely and accurately." Jews were ordered to obey the orders of the Jewish Councils.
In retrospect, but only in retrospect, it can be seen that the ghetto was a holding pen, intended to concentrate Jews and hold them captive until such time as an infrastructure was created that could solve the Jewish problem.
The ghetto originally had two goals. The Germans created a situation in which hard labor, malnutrition, overcrowding, and substandard sanitary conditions contributed to the death of a large number of Jews. One in ten died in Warsaw in 1941, before the deportations, before shots were fired. This policy was at odds with the other use of the ghetto as a source of cheap labor that could be of benefit to the Reich and also to individual commanders. In the end, and often only in the end, even the availability of cheap labor gave way to the "Final Solution."
The lifespan of some ghettos was extended because they provided a large reservoir of cheap labor; but while this consideration might forestall the murder process, it did not prevent it. Thus the commander of Galicia, for example, sent out an order in the fall of 1942 to decrease the number of ghettos from 1,000 to 55, and in July 1943 Himmler decided to transfer the surviving inhabitants of ghettos throughout Ostland to concentration camps. The last ghetto on Polish soil (*Lodz), which had been in existence since April 1940, was liquidated in August 1944.
Special ghettos were established for Jews deported from Romania to Transnistria and resettled in cities or towns and in neighborhoods or on streets that had been occupied by Jews who had been murdered shortly before by the German army. One exception was the ghetto at *Theresienstadt, which was established at the end of 1941 to house Jews from Bohemia and Moravia and later Jews from Germany and other Western countries were deported there as well. The Germans intended Theresienstadt to be a showcase to the world of their mass treatment of the Jews and thus to mask the crime of the "Final Solution." Still Theresienstadt was actually a ghetto – a holding pen for captive Jews – a concentration camp where conditions of imprisonment prevailed, and a transit camp: of the 144,000 Jews sent to Theresienstadt, 88,000 were shipped from there to Auschwitz, while 33,000 died in the ghetto. Of the 15,000 children sent to Theresienstadt, fewer than 100 survived.
There were several crucial differences between ghettoization in Poland and ghettoization in former Soviet territories. In Poland, ghettoization began shortly after the onset of war, before mass killings and before the murderous intentions of the Germans were clear to all. In former Soviet territories, ghettoization occurred only after the Einsatzgruppen murders; Jews there were already certain that German rule would be murderous, even if the full nature of German intentions remained unclear. Some ghettos were situated near forests, which could facilitate escape and a chance, however remote, of survival.
the jewish reaction to the establishment of the ghettos
In Poland, the Jews, who were unaware of the Nazis' intentions, resigned themselves to the establishment of ghettos and hoped that living together in mutual cooperation under self-rule would make it easier for them to overcome the period of repression until their country would be liberated from the Nazi yoke. They gave a name to their strategy of survival: iberleben, to live beyond, beyond German rule until liberation. Within the ghetto, they presumed, they would somehow be safer, since they would no longer interact with non-Jews in quite the same way and would be freed of daily humiliations and dangers. Based on past experience and also on rational calculations of economic self-interest, it seemed to them that by imprisoning Jews in ghettos the Nazis had arrived at the final manifestation of their anti-Jewish policy. If the Jews carried out their orders and proved that they were beneficial to the Nazis through their work, they would be allowed to organize their community life as they wished. In addition, the Jews had practically no opportunity to offer armed opposition that would prevent the Germans from carrying out their plans. The constant changes in the composition of the population (effected by transfers and roundups) and in living quarters made it more difficult to organize opposition; the hermetic isolation from the outside world prevented the acquisition of arms; and conditions in the ghetto (malnutrition, concern for one's family, etc.) weakened the strength of the opposition. On the other hand, the Germans had the manpower and technical equipment to repress any uprising with ease, and the non-Jewish population collaborated with them or at best remained apathetic. Any uprising in the ghettos, even if it could be pulled off, was thus doomed to military failure. Any attempt at resistance was also risky, since the German practice of collective responsibility and disproportionate punishment left the remaining ghetto population at risk. Thus uprisings, when they occurred, were usually last stands undertaken when all hope for collective survival was lost and when the only question was what could be done in the face of impending death.
typology of the ghettos
In most cases, the ghetto was located in one of the poor neighborhoods of a city that had previously housed a crowded Jewish population. Moving large numbers of widely dispersed people into ghettos was a chaotic and unnerving process. In Lodz, where an area already housing 62,000 Jews was designated as the ghetto, an additional 100,000 Jews were crowded into the quarter from other sections of the city. Bus lines had to be rerouted. To avoid the disruption of the city's main transportation lines, two streets were walled off so trolleys could pass through. Polish passengers rode through the center of the Lodz ghetto on streets that Jews could only cross by way of crowded wooden bridges overhead.
In Warsaw, the decree establishing the ghetto was announced on October 12, 1940 – Yom Kippur, the Jewish Day of Atonement. Moving schedules were posted on billboards. Whole neighborhoods were evacuated. While Jews were forced out of Polish residential neighborhoods, Poles were also evicted from the area that would become the ghetto. During the last two weeks of October 1940, according to German figures, 113,000 Poles (Christians) and 140,000 Jews had to be relocated, bringing with them whatever belongings they could pile on a wagon. All abandoned property was confiscated. In every Polish city, the ghettos were overcrowded. Jews were transferred from the other neighborhoods in the city, and in many cases from nearby villages, to housing there, while the non-Jewish inhabitants of the neighborhood were forced to move to another area. These transfers caused great overcrowding from the outset. In Lodz, for example, the average was six people to a room; in Vilna there were even eight to a room during one period. Whenever the overcrowding lessened because of the deportation of Jews to extermination camps, the area of the ghetto was reduced significantly.
At first there were two types of ghettos: open ones, which were marked only by signs as areas of Jewish habitation; and closed ones, which were surrounded by fences, or in some cases even by walls (as in *Warsaw). This difference, however, lost all significance during the period of deportations. Before an open ghetto was destroyed, or "liquidated" as the Germans called it, all access roads were blocked in advance by the German police, whereas in closed ghettos shifts of German police or their aides constantly guarded the fences and walls. A more significant distinction was the fact that the Germans regarded the closed ghettos as large concentration camps, and therefore most of them were liquidated later than the open ghettos. In contrast to these ghettos, which were all in Polish and Russian territory, the ghettos in Transnistria were not predestined for liquidation; neither was the ghetto in Theresienstadt. The Transnistrian ghettos even succeeded in maintaining contact with the outside world and received assistance from committees in Romania. Theresienstadt was, in fact, cut off from the world (except for the transports that came in and went out), but the standard of living was higher there than in the Eastern European ghettos.
For every ghetto, the German authorities appointed a Judenrat, which was usually composed of Jewish leaders acceptable to the community. The Judenrat was not a democratic body, and its power was centered in one person, not always the chairman, who was responsible for its cooperation in matters relating to the ghetto. The leader of the Judenrat was subordinate to the German authorities, who delegated to him much authority with regard to the Jews but treated him disrespectfully and often cruelly. Many Jews appointed to the Judenrat believed that they were placed in their position in order to serve the Jewish people in its time of great need. They faced two masters. To the Germans they represented Jewish needs and to the Jews they represented German authority. The Germans were uninterested in meeting Jewish needs and German authority was eventually lethal for the Jews.
Ghetto life was one of squalor, hunger, disease, and despair. Rooms and apartments were overcrowded, with 10 or 15 people typically living in space previously occupied by four. Daily calorie allotments seldom exceeded 1,100. Without smugglers who brought in food, starvation would have been rampant. The smugglers' motto: "Eat and drink for tomorrow we die," was only too apt.
There were serious public health problems. Epidemic diseases were a threat, typhus the most dreaded. Dead bodies were often left on the street until the burial society came. Beggars were everywhere. Perhaps most unbearable was the uncertainty of life. Ghetto residents never knew what tomorrow would bring.
In the ghetto, life went on. Families adjusted to new realities, living in constant fear of humiliation, labor conscription, and deportation. Survival was a daily challenge, a struggle for the bare necessities of food, warmth, sanitation, shelter, and clothing. Clandestine schools educated the young. Religious services were held even when they were outlawed. Cultural life continued with theater and music, poetry and art offering a temporary respite from squalor.
From the beginning, the Jewish leadership was faced with the impossible task of organizing ghetto life under emergency conditions and under the ceaseless pressure of threats of cruel punishment. Jewish institutions, to the extent that they existed, continued to function, either openly, such as the institutions that fulfilled religious needs, or in secret, such as the various political parties. The major function of the leadership, however, was the provision of sustenance and health and welfare services (including hospitals) and sanitation, and this had to be accomplished without adequate means. Raul *Hilberg likened their task to that of a small, isolated municipal government operating in hostile territory. The authority of the leaders always derived from the Germans. To provide these services, they taxed those who still had some resources and put to work those who had none. They practiced the time-honored traditions of their people, honed by centuries of exile and persecution. Decrees were evaded or circumvented. They tried to outwit the enemy and alleviate the awful conditions of the ghetto, at least temporarily. Some behaved admirably; others became infatuated with their power and imposed it on the powerless, captive population.
Despite what was often their best effort, in the course of time these institutions collapsed in most ghettos. It was even more difficult to establish services that had not existed within the Jewish community before the Holocaust, such as police, prisons, and courts. The authority vested in these institutions was broad within the narrow autonomous framework that existed in the ghettos, but in many instances it could not, of course, be properly exercised under the conditions of the life-and-death struggle imposed on the inhabitants of the ghetto.
liquidation of the ghettos
The lifespan of the Polish ghettos was brief; formed in 1940, most were destroyed beginning in 1942 shortly after the *Wannsee Conference. The destruction of the ghettos was conducted as part of the policy of the "Final Solution," for which purpose the Germans prepared special death camps, what they called extermination camps. When it was decided to liquidate a ghetto, they would call on the Jews to present themselves voluntarily to be transferred to labor camps (sometimes with false promises of improved living conditions), but if deception proved unsuccessful, they would round up the residents and bring them by force to assembly areas, from where they would be transported, usually by train, to their destination. Ghetto leaders faced the ultimate decision. For a time they could save some but only at the sacrifice of others. *Rumkowski in Lodz saved the able-bodied and shipped the children to Chelmno, reasoning that the best chance of survival was if the ghetto was transformed into a work camp, productive for the Wehrmacht. "Survival by work" was his motto. In Warsaw, *Czerniakow tried to save the children; when he could not, he killed himself rather than participate in their deportation. Jewish police were employed to send Jews to the trains. In some ghettos – but not many – the leadership chose suicide rather than cooperation. The great majority of the ghetto inhabitants were killed immediately upon their arrival in the camps; a minority, the young and the able-bodied, women without children, were employed in forced labor and were killed after a short time by one of the regular means of extermination. Only a very small number remained alive, sometimes after having been shunted from camp to camp.
[Michael Berenbaum (2nd ed.)]
G. Reitlinger, Final Solution (1968²), index; R. Hilberg, Destruction of the European Jews (2003³), index; P. Friedman, in: JSOS, 16 (1954), 61–88 (incl. bibl.). ADD. BIBLIOGRAPHY: E. Sterling, Life in the Ghettos during the Holocaust (2005); I. Gutman, The Jews of Warsaw 1939–1943 (1982); A. Tory, Surviving the Holocaust: The Kovno Ghetto Diary (1990); L. Dobroszycki, The Chronicles of the Lodz Ghetto 1941–44 (1984); I. Trunk, Judenrat (1972).
GHETTO: THE NAZI GHETTO
JEWISH LIFE IN THE GHETTOS
The word ghetto originally denoted the traditional Jewish quarter of medieval Christian cities; the term evidently originated in a quarter of this kind that existed in Venice. From the early Middle Ages, Jews tended to live on separate streets or in separate neighborhoods, but they did so voluntarily to maintain their distinct way of life. The first ghettos imposed on Jews appeared in Spain and Portugal in the late fourteenth century.
From the end of the eighteenth century on, and especially after the political changes that the French Revolution brought about, the ghettos that had been established for Jews in Europe began to disappear. The ghetto of Rome was the last to be formally abolished—in 1883, after papal rule ended in Rome (it was thenceforth confined to the Vatican).
Although this type of ghetto existed mainly in Central and Western Europe, separate Jewish quarters also came into being in cities across the Muslim world. In the United States, during the struggle for equal rights by the African American population in the 1950s and 1960s, the term ghetto was widely used to denote the impoverished neighborhoods noted for rampant distress, crime, and violence that these citizens inhabited in major American cities. A diverse alternative culture, noted for its music and art and its sweeping social protest, evolved in the vicinity of the African American ghetto. To this day, the term ghetto is used for a neighborhood inhabited by an ethnic minority that is socially marginalized and suffers from inferior living conditions and fewer opportunities when compared to those of the established population.
The ghettos established by the Nazis for European Jews during World War II were totally different in structure and goals from those described above. The establishment of these ghettos, beginning in Poland shortly after the onset of the German occupation of that country in 1939, was a phase in the overall development of an anti-Jewish policy that aimed to find a comprehensive solution to the "problem" of the Jewish presence in Europe. The Nazi ghetto was not intended to be a permanent solution, a place where Jews would be strictly isolated from the surrounding society. Instead, it was something like a quarantine camp or at times a giant prison, where harsh and restrictive living conditions were imposed. The ghetto provided various German authorities with a reserve of available labor for various purposes and gave them an opportunity, unconstrained by laws and regulations, to oppress the Jewish inhabitants and dispossess them of money, valuables, and other goods according to Nazi officials' needs and caprices.
The first German directive regarding the concentration of Jews in separate urban quarters appeared in the Schnellbrief (express letter) that the head of the Security Police, Reinhard Heydrich, sent to the commanders of the SS and the police special units (Einsatzgruppen) that followed the Wehrmacht into Poland in September 1939. According to the directive, within three or four weeks the Jews in the Polish areas were to be concentrated in special areas of the large cities so that they would be easier to control and eventually deport. Small Jewish communities were to be eradicated and their inhabitants removed to more central towns, preferably close to railroads. Several weeks later, Hans Frank, the governor-general of the General Government, issued a similar directive. Neither of these documents, however, speaks specifically about the establishment of a ghetto, that is, a closed and isolated area where the Jews would be concentrated under strict supervision.
In 1939–1940, it was the declared policy of Nazi Germany to resolve the Jewish issue in territorial ways. This goal, however, quickly proved unrealistic and unworkable. The absence of clear guidelines about what to do with the Jews, coupled with the abandonment of the total deportation policy, convinced the local authorities that they should deal on their own with the presence of Jews within their purviews. Consequently, the Nazi ghettos were established at different times and differed in their ways of life, in the type of official German control, and in the extent of freedom of movement allowed.
The first large ghetto, located in Lodz, was sealed on 1 May 1940, with 162,000 Jews packed into it. The Lodz model was closely studied and adopted by those who established ghettos in other Polish cities. The assumption behind the founding of the Lodz ghetto was that the Jews' continued presence in the city would be short-lived. Therefore, the Germans' main concern was how to exploit fully the Jews' property as the ghetto was being established. As it became increasingly evident that the Jews would not quickly disappear from Lodz, however, an economic structure was built in the ghetto, including a variety of workshops that exploited Jewish labor and funneled the profits into the pockets of the German ghetto administrators, merchants, and others.
The Warsaw ghetto was sealed in November 1940. It was the largest of all the ghettos, its population peaking at around 440,000 in mid-1941. The governor of Warsaw District, Ludwig Fischer, claimed that in the opinion of the German medical service in Warsaw, the Jews were spreading dangerous illnesses and therefore had to be isolated from the surrounding population. Allegations of Jewish involvement in the black market and in the corruption of the morals and culture of Polish society provided additional rationales for a sealed ghetto. The establishment of the ghetto also amounted to an admission by the local German authorities that they would not be able to deport the Jews of Warsaw rapidly. Since the plans for the ghetto did not include mechanisms that would keep the inhabitants fed and gainfully employed, however, the Warsaw ghetto became a focal point of distress, hunger, and severe epidemics. The situation did begin to improve slightly in mid-1941, when the Germans in charge of the ghetto decided to make the ghetto economically viable, to create jobs so that the Jews could support themselves and to increase food supplies. In March 1941, ghettos were established in Lublin and in Kraków, the seat of the governor-general and the administrative capital of the German occupation in Poland. The Kraków ghetto was established during a deportation action that had begun in the spring of 1940 with the aim of banishing some 50,000 Jews, leaving only 5,000 workers in high-demand trades. By economic necessity, however, the ghetto eventually held 18,000. In April 1941, ghettos were established in several other important cities in Poland: Kielce, Radom, and Częstochowa. By the spring and summer of 1942, when the deportations to the death camps began in Poland, hundreds of ghettos had been established across the country, including some in the Jewish communities of small towns.
The Germans' goal in establishing the ghettos remains unclear and various authorities who dealt with the Jewish question in Poland interpreted it in different ways. Obviously, the Nazis were not concerned about the high mortality rate that ghettoization and the living conditions in some of the ghettos caused among the imprisoned Jews. In 1941–1942, more than 112,000 Jews in the two most important ghettos in Poland, those of Warsaw and Lodz—20 percent of the Jewish population living there at the time—died of starvation and illness. In 1941 the deaths of thousands of Jews in the ghettos, especially in Warsaw, forced the leaders of the General Government to choose between allowing the starvation and slow extermination of the Jews to continue or transforming the ghettos to serve the Germans' economic interests. In mid-1941, those who favored the economic rationale won the day. Thus, German policy toward the ghettos in general favored economic considerations as long as a comprehensive territorial solution involving the deportation of the Jews had not been formulated.
Ironically, the Jewish labor force in the ghettos became more necessary than ever in early 1942, after the Final Solution was set in motion. As the war in the east expanded, the German war industry required more and more workers. Hundreds of thousands of Poles were sent to Germany as laborers, as were Soviet prisoners of war, who had survived a winter of catastrophic mortality in German prisoner of war camps. Demand for Jewish workers in the ghettos escalated so rapidly that in June 1942 Ludwig Fischer issued a directive to the effect that every effort must be made not to leave Jews idle. The fundamental goal of employing the ghettoized Jews changed at this time. The instrumental purpose—enabling the Jews to support themselves and absolving the German authorities of this concern—gave way to the needs of companies that urgently required a handy supply of labor. The decision about the Jews' ultimate fate, however, had already been made in Berlin and economic considerations were not central in its adoption. By then, local leaders no longer played a role in making decisions about the Jews.
The last phase in the establishment of ghettos began in 1941 in the newly occupied Soviet territories. Ghettos were established in various towns in Lithuania, Latvia, Byelorussia, and Ukraine. The formation of ghettos in cities in these areas—Vilna (Vilnius), Kovno (Kaunas), Riga, Minsk, Lwów (Lvov/Lviv), and so on—coincided with the mass murder of the Jewish populations there, beginning in the summer of 1941. Often the ghettos served as a mechanism for the selection of some Jews for immediate murder and others for continued survival based on their ability to contribute their labor to the cause of the war. Thus, the ghettos in the occupied Soviet areas were already part of the Final Solution that had been decided upon and that had begun to be implemented in autumn 1941. In some towns, fewer than 20 percent of the pre-Occupation Jewish populations were left in the ghettos. Some of these ghettos resembled huge labor camps in every respect.
Although almost all the ghettos were located in Eastern Europe, the Germans did establish several ghettos for specific purposes elsewhere. The most notable of them was that in Theresienstadt, northwestern Czechoslovakia, to which in 1941–1945 some 140,000 Jews were deported from Germany, the Protectorate of Bohemia and Moravia, and Western European countries. Responsibility for this ghetto belonged to the SS Security Police (the RSHA), which transferred its guarding requirements to the Czech police. Theresienstadt had been established to concentrate selected groups of Jews—the elderly, the famous, or those who had special status in Germany and Western Europe. In this manner, the Nazis intended to disprove rumors about the fate of German Jews who were being deported to the east. The first groups of Jews from Prague reached this ghetto in November 1941, but by January 1942 extermination transports were already setting out from Theresienstadt to Riga, Latvia. Most Jews who were concentrated in Theresienstadt were sent in 1942–1944 to be killed at the Auschwitz and Treblinka death camps in Poland; by late 1944, only 11,000 remained there.
Another ghetto in Central Europe was that of Budapest, established in late November 1944. After approximately 70,000 Jews were led out of this city on a death march toward the Austrian border, members of the Hungarian Nazi Party, the Arrow Cross, which had seized power in Hungary, set up a ghetto in Budapest, where most remaining Jews were concentrated. In December 1944–January 1945, the Hungarian Fascists removed about 20,000 Jews from the ghetto and murdered them along the banks of the Danube.
The Germans left the Jews to their own devices in many respects. Even before the ghettos were established, Jewish councils—Judenräte—were set up in Polish towns. Their function, in addition to obeying the Germans' directives, was to oversee the Jews' lives. This created an impression of Jewish autonomy that was illusory, since the powers of the Judenräte were never entrusted to any Jewish leaders who had been active in the prewar Eastern European Jewish communities.
The Judenräte were composed of public figures who had remained in the Jewish communities after the Occupation began. Once ghettoization had occurred, the Judenräte had to cope with dire problems that they could rarely solve. They dealt with the allocation of apartments and other dwellings in the ghetto, the removal of waste, the distribution of food, the welfare and relief of the indigent and refugees, education, the operation of clinics and hospitals, and burial of the dead. In many ghettos, a Jewish police force was established to maintain public order, control the entrances to the ghetto, and escort groups of workers who set out from the ghetto to workplaces in town. In certain ghettos, the Germans even allowed the Judenräte to manage an independent branch of the postal service.
The Judenräte in the major ghettos, however, invested most of their effort toward creating an economic infrastructure that would provide the inhabitants with jobs. In ghettos such as those of Lodz, Białystok, and Vilnius, a systematic network of workplaces—ghetto workshops and employers outside the ghetto who hired Jewish workers—was built in cooperation with the Judenräte. Ghettos that had such systems were usually more orderly and stable than the others. Although chronic shortages of food, clothing, and other essentials persisted, mass mortality was not in evidence as in the Warsaw ghetto. Many Jews believed that a productive, well-kept, and functioning ghetto was the only instrument that might persuade the Germans to leave them unharmed and might increase their prospects of survival.
Community and cultural life continued in almost all ghettos that had been established in Eastern Europe. Even in small ghettos, Jews maintained their educational, cultural, and religious institutions as best they could, at the initiative of the Judenräte or of public activists and intellectuals. In Warsaw, Lodz, Vilnius, Kaunas, and other towns, drama groups put on Jewish and non-Jewish plays for the ghetto public. Public libraries collected thousands of books from Jewish libraries that had been shut down, including some that the Nazis had torched at the beginning of the Occupation.
Activists in Jewish youth movements were very important and had an impact on the lives of young people in the ghettos. In ghettos in the major cities—Warsaw, Lodz, Kraków, Vilnius, Kaunas, Białystok—these activists were the most dynamic group, secretly maintaining informal education and cultural and welfare endeavors. They organized social activity groups for children and young people, evening literary events, theater troupes, and choirs. The youth movements and underground activists of the former Jewish political parties also published dozens of underground newspapers in the Warsaw ghetto, which were disseminated to other ghettos in occupied Poland. In this way, the underground activists managed to break through the isolation that the Germans had imposed on the Jews in ghettoizing them. They also formed the core that established the resistance organizations in the ghettos in 1941–1942.
In early spring 1942, the Germans began to evacuate the ghettos in Poland as part of a comprehensive extermination scheme known as Operation Reinhardt. The operation started in Lublin District and culminated in the deportation of 350,000 Jews from the Warsaw ghetto to Treblinka for extermination in summer 1942. On 19 July 1942, Heinrich Himmler issued a directive for the final annihilation of the Jews in the General Government by the end of 1942, with the exception of selected groups that would be left behind for labor in several major cities. In another directive, on 21 July 1943, Himmler ordered the deportation of these remaining Jews to concentration camps in the Baltic countries and parts of Byelorussia. The last ghetto to be liquidated was that of Lodz, where the remaining Jews, some 70,000 in number, were sent in August 1944 to the Auschwitz death camp.
The Chronicle of the Łódź Ghetto, 1941–1944. Edited by Lucjan Dobroszycki. Translated by Richard Lourie, Joachim Neugroschel et al. New Haven, Conn., 1984.
Czerniaków, Adam. The Warsaw Diary of Adam Czerniaków: Prelude to Doom. Edited by Raul Hilberg, Stanislaw Staron, and Josef Kermisz. Translated by Stanislaw Staron and the staff of Yad Vashem. New York, 1979.
Kahane, David. Lvov Ghetto Diary. Translated by Jerzy Michalowicz. Amherst, Mass., 1990.
Kruk, Herman. The Last Days of the Jerusalem of Lithuania: Chronicles from the Vilna Ghetto and the Camps, 1939–1944. Edited by Benjamin Harshav. Translated by Barbara Harshav. New Haven, Conn., 2002.
Ringelblum, Emanuel. Ksòvim fun Geto. 2 vols. Tel Aviv, 1985.
Berkley, George E. Hitler's Gift: The Story of Theresienstadt. Boston, 1993.
Browning, Christopher R. "Nazi Ghettoization Policy in Poland, 1939–1941." In his The Path to Genocide: Essays on Launching the Final Solution, 28–56. Cambridge, U.K., 1992.
Gutman, Yisrael. The Jews of Warsaw, 1939–1943: Ghetto, Underground, Revolt. Translated by Ina Friedman. Bloomington, Ind., 1982.
The name of a district in sixteenth-century Venice where Jews were required to live, ghetto came to be the name for any segregated Jewish quarter. The name was applied (1) to compulsorily segregated Jewish residential districts in Europe between 1516 and 1870; (2) to urban areas of first settlement of Jewish immigrants and their distinctive culture after about 1880; and (3) from 1940 to 1944, to rigidly segregated districts in German-occupied European cities where the occupiers imprisoned Jews before methodically murdering them.
As a striking historical example of recurring policies of marginalization and demonization, ghetto was also applied to phenomena of Western history unconnected with Jews. In the nineteenth century, the term came to refer to (1) urban concentrations of distinctive businesses, classes, and ethnic groups. In the twentieth century in the United States, the term was applied to ethnic neighborhoods, particularly to (2) black neighborhoods in northern cities. Other urban areas have been called "the hippie ghetto," "Pakistani ghettos in the (English) midlands," and "the golden ghetto." Before the Enlightenment, mention of the ghetto was meant to arouse revulsion at the inhabitants; afterward, its mention could also be meant to evoke indignation at the infliction of shame and suffering.
Jewish Urban Quarters before the Ghetto
Diaspora Jews in late antiquity and the European Middle Ages lived together voluntarily, for security and communal convenience, in urban neighborhoods that were called Judengasse, in German-speaking countries; giudecca, Judaica, juiverie, carrière, or judería in Romance-speaking countries; and, in Muslim countries, equivalents of Harat-al-Yahud, "the Jewish quarter." Besides these voluntary Jewish enclaves, in which non-Jews also lived, medieval governments occasionally attracted Jews to settle in undeveloped regions by reserving special areas for them. These voluntary Jewish districts were usually walled and gated.
A different form of restricted residence that affected millions of Jews was the Russian Pale of Settlement, covering four hundred thousand square miles between the Baltic and Black seas, defined in 1791 and abolished after the 1917 revolution. Between 1772 and 1795, Russia, which had no Jews, annexed Polish territory with a large Jewish population. It restricted Jewish residence to some of the annexed territory, which Czar Nicholas I (reigned 1825–1855) gave the name "Pale of Settlement." In the course of the nineteenth century, Jews, who were a minority in these territories, were expelled from villages and compelled to live in towns and cities, and were limited to certain occupations. These regulations, which by 1897 applied to nearly five million Jews, became onerous at a time when restrictions on other population groups were relaxed. Pauperization, legal restrictions, and hostility in the pale provoked mass Jewish emigration, which flowed to the ghettos in other countries.
Establishment of Ghettos
To control heresy, the Roman Catholic Church tried at times to separate Jews from Christians. Separation became a widespread policy from 1300 to 1600, when England, France, Spain, and Portugal expelled Jews, and many German and Italian cities enacted strict controls on those who were allowed to remain. Venice first permitted Jews to reside in the city in 1513 and in 1516 required them to settle in the ghetto nuovo, the "new foundry" district, which it encouraged Christians to leave. The city later allowed Jewish settlement in other districts, the ghetto vecchio and the nuovissimo ghetto.
In 1555, as part of Counter-Reformation policy, Pope Paul IV (reigned 1555–1559) restricted Jewish residence in papal territories to segregated quarters, which by 1562 were called "ghettos." Through the eighteenth century they were established in western and central Europe. "Ghetto" conventionally evoked a forbidding image of impoverished Jews who lived locked behind walls from dusk to dawn in crowded, narrow streets, under their own authorities. During the French revolutionary wars, Napoleon Bonaparte (1769–1821) abolished ghettos and granted citizenship to Jews; this became permanent during the nineteenth century. The last ghetto, in Rome, was abolished in 1870.
Ghetto as Metaphor for Slum
The image of the ghetto was applied to a variety of situations. The Oxford English Dictionary records ghetto as referring in 1887 to a neighborhood of book dealers. In 1903, Jack London compared the ghetto to the misery of slums inflicted on workers—only a small percentage of them Jews—by the unrestrained operation of laissez-faire economics:
At one time the nations of Europe confined the undesirable Jews in city ghettos. But today the dominant economic class, by less arbitrary but nonetheless rigorous methods, has confined the undesirable yet necessary workers into ghettos of remarkable meanness and vastness. East London is such a ghetto, where the rich and the powerful do not dwell, and the traveler cometh not, and where two million workers swarm, procreate, and die.
The areas of first settlement by the mass immigration of Russian and Polish Jews to the United States, between 1880 and 1924, were called ghettos. Some earlier settlers considered these immigrants—like those from Italy, Poland, Scandinavia, and Asia—threats to American morality, hygiene, economics, and race. The immigrant "ghetto slums" lasted for a generation or two, until most inhabitants moved away or became invisible by learning English and adopting the manners and clothing of the country.
Large numbers of black Americans in search of economic and social opportunities also arrived in northern cities in waves of internal migration during World War I, World War II, and the 1950s. They often first settled in immigrant neighborhoods, and the terms ghetto and slum came to refer to visible poor black neighborhoods that did not disappear through assimilation. Sociologist Kenneth Clark wrote:
America has contributed to the concept of the ghetto the restriction of persons to a special area and the limiting of their freedom of choice on the basis of skin color. The dark ghetto's invisible walls have been erected by the white society, by those who have power, both to confine those who have no power and to perpetuate their powerlessness.… The objective dimensions of the American urban ghettoes are overcrowded and deteriorated housing, high infant mortality, crime, and disease. The subjective dimensions are resentment, hostility, despair, apathy, self-depreciation, and its ironic companion, compensatory grandiose behavior.
Many social scientists later discarded the ghetto metaphor because it carried misleading expectations that the underclass in the inner city would also disappear automatically.
Between 1939 and 1944, Nazi racial ideology was put into operation in German-occupied Europe. The occupiers separated Jews from other subject peoples and imprisoned them in more than one thousand ghettos, which the Germans did not consistently give that name. The Germans ruled through governing councils that they selected. The occupiers allowed disease to spread widely and imposed both substarvation rations and the death penalty for smuggling food.
Hans Frank, chief of the Generalgouvernement of Poland, summarized the policy in August 1942 when he stated that if the Jews did not die of starvation, other measures would need to be taken. The Germans liquidated all the ghettos, and sent the survivors to extermination camps. Under these conditions, the Jews' attempts to preserve normal communal life and to demonstrate their productivity qualify as resistance, but the desperate armed uprising by the last inhabitants of the ghetto in Warsaw, in April and May 1943, added an unprecedented association to the term ghetto.
See also Anti-Semitism; Ethnicity and Race; Genocide; Resistance; Segregation.
Clark, Kenneth B. Dark Ghetto: Dilemmas of Social Power. New York, Evanston, Ill., and London: Harper and Row, 1965.
Gutman, Yisrael. The Jews of Warsaw, 1939–1943: Ghetto, Underground, Revolt. Bloomington: Indiana University Press, 1982.
London, Jack. The People of the Abyss. London and Sterling, Va.: Pluto Press, 2001. Originally published in 1903.
Ravid, Benjamin C. I. "From Geographical Realia to Historiographical Symbol: The Odyssey of the Word Ghetto." In Essential Papers on Jewish Culture in Renaissance and Baroque Italy, edited by David B. Ruderman, 373–385. New York: New York University Press, 1992.
Slutsky, Yehuda. "Pale of Settlement." In Encyclopedia Judaica, columns 24–28. Jerusalem: Encyclopedia Judaica, 1971.
Ward, David. Poverty, Ethnicity, and the American City, 1840–1925: Changing Conceptions of the Slum and the Ghetto. Cambridge, U.K., and New York: Cambridge University Press, 1989.
Wirth, Louis. The Ghetto. New Brunswick, N.J.: Transaction, 1998. Originally published in 1928.
Arthur M. Lesley
GHETTO. From their earliest days in the Diaspora, Jews chose voluntarily to live close together, reflecting a practice commonly adopted by groups dwelling in foreign lands. Their quarters, often referred to as the Jewish quarter or street, initially were almost never compulsory, and they continued to have contacts on all levels with their Christian neighbors. However, the Catholic church looked askance at such relationships, and in 1179 the Third Lateran Council stipulated that Christians should not dwell together with Jews. This vague policy statement had to be translated into legislation by the secular authorities, and only infrequently in the Middle Ages were laws enacted confining Jews to compulsory, segregated, and enclosed quarters. The few such Jewish quarters then established, such as that of Frankfurt, were never called ghettos, since that term originated in Venice and became associated with the Jews only in the sixteenth century.
THE GHETTO OF VENICE
In 1516, as a compromise between allowing Jews to live anywhere they wished in Venice and expelling them, the Venetian government required them to dwell on the island known as the Ghetto Nuovo (the New Ghetto), which was walled up with only two gates that were locked from sunset to sunrise. Then, when in 1541 visiting Ottoman Jewish merchants complained that they did not have enough room in the ghetto, the government ordered twenty dwellings located across a small canal walled up, joined by a footbridge to the Ghetto Nuovo, and assigned to them. This area was already known as the Ghetto Vecchio (the Old Ghetto), thereby strengthening the association between Jews and the word "ghetto."
Clearly, the word "ghetto" is of Venetian rather than of Jewish origin, as sometimes conjectured. The Ghetto Vecchio had been the original site of the municipal copper foundry, called "ghetto" from the Italian verb gettare, 'to pour or to cast', while the island across from it, on which waste products had been dumped, became known as il terreno del ghetto, 'the terrain of the ghetto', and eventually the Ghetto Nuovo.
Although compulsory, segregated, and enclosed Jewish quarters had existed in a few places prior to 1516, since the term "ghetto" had never been applied to them before 1516, the oft-encountered statement that the first ghetto was established in Venice in 1516 is correct in a technical linguistic sense but very misleading in a wider context, while to apply the term "ghetto" to an area prior to 1516 would be anachronistic. The most precise formulation is that the compulsory segregated and enclosed Jewish quarter received the designation "ghetto" as a result of developments in Venice in 1516.
THE SPREAD OF THE GHETTO
The word "ghetto" did not long remain confined to the city of Venice. In 1555, Pope Paul IV issued his restrictive bull, Cum Nimis Absurdum. Its first paragraph provided that the Jews of the Papal States were to live together on a single street, or should it not suffice, then on as many adjacent ones as necessary, with only one entrance and exit. Accordingly, the Jews of Rome were moved into a new compulsory, segregated, enclosed quarter, which apparently was first called a ghetto seven years later. Influenced by the papal example, local Italian authorities established special compulsory quarters for the Jews in most places in which they were allowed to reside. Following the Venetian nomenclature, these new residential areas were called "ghetto" in the legislation that established them.
In later years, the Venetian origin of the word "ghetto" in connection with the foundry came to be forgotten, as it was used exclusively in its secondary meaning as referring to compulsory, segregated, and enclosed Jewish quarters and then in a looser sense to refer to any area densely populated by Jews, even if they had freedom of residence and lived in the same districts as Christians.
Although the segregated, compulsory, and enclosed ghettos were abolished under the influence of the ideals of the French Revolution and European liberalism (as in Venice, 1797; Frankfurt, 1811; and Rome, where the gates and walls were removed in 1848 although the Jews were basically confined to that area until the city became a part of the Kingdom of Italy in 1870), the word "ghetto" lived on as the general designation for areas densely inhabited by members of minority groups, almost always for socioeconomic reasons rather than for legal ones, as had been the case with the initial Jewish ghetto.
AMBIGUOUS USAGE OF THE WORD "GHETTO"
It must be noted that the varied uses of the word "ghetto" have created a blurring of the Jewish historical experience, especially when employed loosely in phrases such as "the age of the ghetto," "out of the ghetto," and "ghetto mentality." Actually, the word can be used in its original sense of a compulsory, segregated, and enclosed Jewish quarter only in connection with the Jewish experience in Italy and a few places in the Germanic lands, and not at all with that in Poland-Russia. If it is to be used in its original sense in connection with Eastern Europe, then it must be asserted that the age of the ghetto arrived there only after the Nazi invasions of World War II. However, there was a basic difference: unlike ghettos of earlier days, which were designed to provide Jews with clearly defined permanent space in Christian society, twentieth-century ghettos constituted merely temporary stages on the planned road to total liquidation.
Finally, to a great extent because of the negative connotations of the word "ghetto," the nature of Jewish life in the ghetto is often misunderstood. The establishment of ghettos did not lead to the breaking off of Jewish contacts with the outside world on any level. Additionally, from the internal Jewish perspective many evaluations of the ghetto's alleged impact upon the life of the Jews and their mentality require substantial revision. In general, the decisive element determining the nature of Jewish life was not so much whether or not Jews were required to live in a ghetto, but rather the nature of the surrounding environment and whether it constituted an attractive stimulus to Jewish thought and offered a desirable supplement to traditional Jewish genres of intellectual activity. In all places, Jewish life must be examined in the context of the external environment, and developments, especially those subjectively evaluated as undesirable, should not be attributed solely to the alleged impact of the ghetto.
See also Jews and Judaism; Jews, Attitudes toward; Jews, Expulsion of (Spain; Portugal); Venice.
Bonfil, Robert. Jewish Life in Renaissance Italy. Translated by Anthony Oldcorn. Berkeley, 1994.
Calabi, Donatella. "Les quartiers Juifs en Italie entre 15e et 17e siècle. Quelques hypotheses de travail." Annales 52 (1997) 4: 777–797.
Ravid, Benjamin. "From Geographic Realia to Historiographical Symbol: The Odyssey of the Word Ghetto. " In Essential Papers on Jewish Culture in Renaissance and Baroque Italy, edited by David Ruderman, pp. 373–385. New York, 1992.
Benjamin C. I. Ravid
After World War II (1939–1945), millions of African Americans sought to escape poverty in the rural south by moving to northern cities where they hoped to find better paying jobs. But, they encountered housing discrimination that forced them into racially separate neighborhoods known as ghettos. Ghetto populations soared during the 1950s, when the black population of major cities grew quickly. During that timeframe Detroit's black population increased from 16 percent to 29 percent, while Chicago's grew from 14 percent to 23 percent. Boston's increased from five to 10 percent, and the District of Columbia's rose from 35 to 55 percent. At one time during this period, more than 2,200 African Americans moved to Chicago each week. This rapid population shift severely strained housing and urban services and created a set of circumstances that made ghettos, which had first appeared in the early decades of the 1900s during the Great Migration, an entrenched feature of almost every major city in the United States.
One of the most significant factors in the creation of ghettos was the mass movement to the suburbs of middle-class whites. At the same time, expansion of highway construction and the growth of the automobile industry enabled companies to move away from cities to areas where they could operate more cheaply. Thus, just as millions of blacks were moving to cities, jobs there were disappearing, as were tax revenues that could support decent services such as schools and sanitation. Housing in ghettos deteriorated badly, and high unemployment and limited social services combined to create blighted areas where crime rates soared. Yet blacks found it extremely difficult to escape from these areas because they were consistently denied the opportunity to purchase homes in white neighborhoods. Even after passage of the Fair Housing Act in 1968, which prohibited discrimination in the sale, rental, or financing of housing units, most African American families in urban areas had no choice but to live in ghetto neighborhoods. By the late 1960s ghetto residents were extremely frustrated by the slow pace of change as advocated by civil rights leaders. In 1965, the Los Angeles neighborhood of Watts erupted in violence as thousands of African American residents burned stores and looted the area. The riots, which lasted from August 11 to August 16, caused 34 deaths and injured more than 1,000. Devastating riots also broke out in Detroit. These riots traumatized the nation and brought significant public attention to ghetto conditions. Though ghettos were beset by poverty and other problems, however, they also fostered racial pride and provided an important base for black businesses.
See also: Discrimination, Suburbs
Italians first used the word ghetto in the 1500s. At the time, it referred to the enclosed areas of Italian cities where Jews were permitted to live. However, the segregation of Jews in European societies has a much longer history than the word. Throughout the Middle Ages, Jews in Europe had chosen to live in their own communities, although they had many social contacts with their Christian neighbors. In 1179 the Catholic Church announced that Jews and Christians should not live together. However, few cities passed laws to enforce the church's policy, so the announcement had little effect.
The word ghetto had its origins in Venice over 300 years later. Although Venice permitted individual Jews to reside in the city, Venetians disliked having Jews live wherever they wished. In 1516 the city's Senate required all Jews to move to a part of the city called Ghetto Nuovo (the New Ghetto). There they lived in a walled community that was locked from sunset to sunrise. The Venetian government later increased the size of the Jewish community by adding Ghetto Vecchio (the Old Ghetto), an area near Ghetto Nuovo that had previously been a foundry, a place for pouring and casting metals. Both places took their names from the Italian verb gettare, which means "to pour or to cast."
Separate, enclosed quarters for Jews became common in Italy during the Catholic Counter-Reformation, a period when the Catholic Church became hostile toward Jews. In 1555 Pope Paul IV required Jews in the Papal States to reside together in an enclosed quarter with only one entrance and one exit. Cities such as Florence, Siena, Padua, and Mantua followed this example, passing laws to force Jews into segregated districts. The laws used the Venetian term ghetto to describe these places. Soon, ghetto came to mean any segregated Jewish area. In later years, the meaning of the word became still broader. It referred to any area populated by many Jews, even if they chose to live there voluntarily. Today, the term ghetto often refers to any minority community.
Although many European Jews lived in ghettos, they maintained contacts on all levels with the outside world. The nature of the society surrounding the ghetto had a far stronger effect on Jewish life in Renaissance Europe than the fact that Jews lived apart from their neighbors.
Originally a district in Venice reserved for Jewish inhabitants of the city, and a name applied to any neighborhood that, either by law or custom, holds a majority of any single national, ethnic, or religious group. There were Jewish inhabitants of Venice early in its history, with most earning their livings from certain trades permitted to them: moneylending, tailoring, and medicine. After Jews were expelled from Spain in 1492, however, the arrival of several thousand foreign Jews prompted the Venetian Republic to take action restricting their movements in the city. One law allowed them to live in the city for no more than fifteen days every year. In 1516 Venice designated the ghetto as the restricted area where Jews could live. The city also had designated areas of residence for other groups, including German merchants, who were limited to a single building known as the Fondaco dei Tedeschi, and the Turks, in the Fondaco dei Turchi.
The Venetian ghetto was linked to the rest of the city by two small bridges that were patrolled after sundown in order to prevent any of the inhabitants from mingling with Gentiles (non-Jews) in the rest of the city. As the Jewish population increased, and the neighborhood grew dangerously overcrowded, the ghetto was expanded into neighboring quarters. The ghetto came to an end with the Republic of Venice, which was overthrown in 1797 by the armies of Napoléon Bonaparte. The neighborhood has remained a center of the Jewish religion and culture up to the present day.
The idea of a ghetto for Jewish residents spread to other cities in Italy and Europe. In Rome, Pope Paul IV established a small Jewish ghetto of four city blocks in 1555. As in Venice, the neighborhood was surrounded by a wall and not allowed to expand even as its population grew. The pope enforced the requirement that Jews live there by a papal bull (decree), Cum Nimis Absurdum. The ghetto of Rome was opened in 1870 and its walls torn down in 1888.
See Also: Jews; Venice
ghetto (gĕt´ō), originally, a section of a city in which Jews lived; it has come to mean a section of a city where members of any racial group are segregated. In the early Middle Ages the segregation of Jews in separate streets or localities was voluntary. The first compulsory ghettos were in Spain and Portugal at the end of the 14th cent. The ghetto was typically walled, with gates that were closed at a certain hour each night, and all Jews had to be inside the gate at that hour or suffer penalties. The reason generally given for compulsory ghettos was that the faith of Christians would be weakened by the presence of Jews; the idea of Jewish segregation dates from the Lateran Councils of 1179 and 1215. Within the ghetto the inhabitants usually had autonomy, with their own courts of law, their own culture, and their own charitable, recreational, educational, and religious institutions. Economic activities, however, were restricted, and beyond the ghetto walls Jews were required to wear badges of identification. One of the most infamous ghettos was that of Frankfurt, to which Jews were compelled to move by a city ordinance of 1460. Crowded into a narrow section, the ghetto underwent several disastrous fires. The ghetto in Venice was established in 1516 after long negotiations between the city and the Jews. In 1870 the last ghetto in Western Europe, in Rome, was abolished. In Russia the Jewish Pale continued to exist until 1917. After the 18th cent. ghettos were also to be found in some Muslim countries. During World War II the Nazis set up ghettos in many towns in E Europe from which Jews were transported to concentration camps for liquidation; the Warsaw (Poland) ghetto was a prime example. In the United States, African Americans, Chicanos, and immigrant groups have been forced to live in ghettos through economic and social forces rather than being required to do so by law. See also anti-Semitism.
Economics: Cost & Revenue - Problem 3
The average cost is the cost per item of producing a certain number of items, or c(x)/x. So, to find the average cost for a given number of items, plug that number into the cost function and divide by the same number; the result is the cost per item produced. The average cost function can be graphed on a graphing calculator, and its minimum located by supplying a left bound, a right bound, and a guess.
I want to look at a problem that explores average cost just a little bit more. The cost function for your magic broom company is c(x) = 300,000 + 60x - 0.03x² + 0.000009x³. This is in galleons, and it is the total cost of producing x magic brooms. Part A asks us to find the average cost per broom as a function of the number x of brooms produced.
Well, before I get into average cost, here is a graph of the cost function, the function I have here. Average cost, remember, is a(x) = c(x), the total cost, divided by x. Now how would this quantity be represented on the graph of the cost function? Remember, this is y = c(x).
If I pick an x value here, the average cost would be this value, c(x), divided by x. Graphically, that ratio is the slope of the line from the origin to the point (x, c(x)) on the cost curve. Every point on the curve gives a different slope (well, a few might coincide), but the idea is that you can read off the average cost by calculating the slope of that line.
So this raises the question: can we find a place where the average cost is at a minimum? That's what we're going to do in part B. But for now let's come up with a formula for the average cost. We start with the cost function and divide it by x, term by term. So a(x) is going to be 300,000 over x; plus 60x over x, which is 60; minus 0.03x² over x, which is 0.03x; then plus 0.000009x³ over x, which is 0.000009x². That's my average cost function: a(x) = 300,000/x + 60 - 0.03x + 0.000009x².
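Before turning to the calculator, here is a minimal Python sketch of the same idea. The cost function is the one from the problem; the function names and the sample production level of 1,000 brooms are my own choices for illustration.

```python
def cost(x):
    """Total cost, in galleons, of producing x magic brooms."""
    return 300_000 + 60 * x - 0.03 * x**2 + 0.000009 * x**3

def average_cost(x):
    """Average cost per broom, c(x)/x, defined for x > 0."""
    return cost(x) / x

# Example: average cost at a production level of 1,000 brooms
print(average_cost(1000))  # 300 + 60 - 30 + 9 = 339.0 galleons per broom
```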
So in part B I'm asked to graph the average cost function on a graphing calculator, find the x value at which the average cost is a minimum, and find the minimum average cost itself. I'm going to graph this function in TI-SmartView, which emulates a TI-84, find the minimum average cost, and see where it occurs. So let's take a look at the TI-84.
Here we are looking at the TI-84. I've taken the liberty of already entering the function a(x): this is my average cost function, not the cost function. I've set the window with an Xmin of 0 and an Xmax of 6,000 brooms, stepping by thousands of brooms. Then on the y-axis I have the cost in galleons per broom, going from 0 to 500. You'll recall that in the previous exercise we had an average cost of about 355 galleons per broom.
Let's graph this. So this is what the graph of average cost looks like. You can see that there is an absolute minimum here; we just need to find it. Instead of using calculus to find it, I'm going to use the calculator. Press 2nd, CALC to open the Calculate menu; the third item is "minimum", so I'll hit 3. It asks for a Left Bound.
So I'm going to pick a point that's to the left of the minimum. I'm guessing that 2,000 is to the left, so I'm going to type that in: 2,000, enter. That's good. For the Right Bound, I'm going to try 5,000. That tells the calculator where to search for the minimum; it will search between 2,000 and 5,000.
Now it's asking for a guess; I'll just type in 3,000, which is between the two bounds. It finds x = 3,247, the number of brooms where the average cost is a minimum, and 149.87, the minimum average cost in galleons per broom. So the answer to our problem is that we should produce 3,247 brooms to get the minimum average cost, and that minimum average cost will be about 150 galleons per broom.
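As a cross-check on the calculator, here is a small Python sketch of the same minimization. SciPy's bounded scalar minimizer stands in for the TI-84's "minimum" command; the search bounds of 2,000 and 5,000 mirror the ones entered above, and the function name is my own.

```python
from scipy.optimize import minimize_scalar

def average_cost(x):
    """Average cost per broom: a(x) = 300,000/x + 60 - 0.03x + 0.000009x^2."""
    return 300_000 / x + 60 - 0.03 * x + 0.000009 * x**2

# Search between the same left and right bounds used on the calculator.
result = minimize_scalar(average_cost, bounds=(2000, 5000), method="bounded")

print(round(result.x))       # about 3,247 brooms
print(round(result.fun, 2))  # about 149.87 galleons per broom
```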
Asteroids are some of the most ancient objects in the Solar System, relics left over from the time when the planets first started forming and evolving. For this reason, scientists are very interested in them, since they can provide clues as to how this process occurred. Most asteroids orbit the Sun in a broad belt between Mars and Jupiter, but they can be found elsewhere in the Solar System as well. NASA’s OSIRIS-REx spacecraft is now en route to one of these asteroids, called Bennu, which it will study and then bring a sample back to Earth. While on the way there, however, OSIRIS-REx will also be searching for other asteroids, called Trojans. These have regular orbits which place them either just before or just behind a planet, including Earth. The spacecraft will be on the lookout for some of these Trojans near Earth this month as it travels toward Bennu.
NASA has chosen two new missions to explore the Solar System; it was announced today during a media teleconference. The missions are part of NASA’s Discovery Program, and after the competing proposals had been narrowed down to five contenders, the final two winners were announced. Both missions, called Lucy and Psyche, will visit asteroids which have never been seen up close: multiple Trojan asteroids which share Jupiter’s orbit and the unusual metal asteroid 16 Psyche. These missions will study such objects which are relics left over from the early beginnings of the Solar System, providing new clues as to how the planets and other bodies formed. Two other mission proposals to return to Venus did not make the cut, unfortunately.
Sending human astronauts to Mars is a dream shared by many, but there are still challenges to overcome and the question of just how to accomplish it is a subject of intense debate. Some supporters advocate sending a mission directly to Mars, while others think that returning to the Moon first, for potentially beneficial training, is the way to go. Indeed, former astronaut James Lovell, who flew on two trips to the Moon, has also called for a return to the Moon first. NASA itself has stated its desire to send a crewed mission to a nearby asteroid first, instead of the Moon, going a bit farther into space than the Moon as its idea of preparation for the much longer journey to Mars. A major problem has been that NASA has still not set a firm timetable for such a mission; it wants to go to Mars, but the steps to achieving that goal are still unclear.
This coming September, a new NASA spacecraft, OSIRIS-REx, will be heading toward an asteroid to collect samples which will later be brought back to Earth. This is the first time for such a sample return mission to an asteroid by a U.S. spacecraft, but there’s also another unique aspect to this mission – artists, and space enthusiasts in general, are being invited to submit some of their work to be included onboard the spacecraft.
Looking ahead to future planetary missions, NASA has selected five new science investigations for refinement over the next year. Later, one or two of those missions will be chosen to actually be launched, perhaps as early as 2020. The selections are part of NASA’s Discovery Program, which had requested the proposals in November 2014. Initially, 27 proposals had been submitted, from which the five current finalists were chosen. The five proposals would study Venus, near-Earth objects, an unusual metallic asteroid and Trojan asteroids.
Near-Earth asteroids, also known as Near Earth Objects (NEOs), are some of the best studied space rocks in the Solar System, primarily due to the fact that they approach the orbit of Earth, making them potentially dangerous to our home planet. Now, a new study has provided evidence that at least some of them, including dark ones which are more difficult to see, originate from the oddball Euphrosyne family of dark asteroids which are at the outer edge of the main asteroid belt between Mars and Jupiter, but have highly inclined orbits well above the plane or “equator” of the Solar System.
The dwarf planet Ceres, the largest body in the asteroid belt, is releasing water vapour into space, astronomers announced yesterday. The discovery, made by the European Herschel space telescope, is being called the first unambiguous detection of water vapour around any object in the asteroid belt and was published today in the journal Nature.
The Dawn spacecraft left behind the giant asteroid Vesta last September, and is now en route to the even bigger dwarf planet Ceres, but scientists are still busy studying all of the data that was sent back to Earth while it was orbiting Vesta for over a year. And as often happens while exploring these new worlds, they have made a surprising discovery: long, sinuous gullies on the walls of geologically younger craters.
Preamble to the United States Constitution
The Preamble to the United States Constitution is a brief introductory statement of the Constitution's fundamental purposes and guiding principles. It states in general terms the Founding Fathers' intentions regarding the Constitution's meaning and what they hoped it would achieve, and courts have referred to it as reliable evidence of those intentions.
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.
The Preamble was placed in the Constitution during the last days of the Constitutional Convention by the Committee on Style, which wrote its final draft. It was not proposed or discussed on the floor of the convention beforehand. The initial wording of the preamble did not refer to the people of the United States, rather, it referred to people of the various states, which was the norm. In earlier documents, including the 1778 Treaty of Alliance with France, the Articles of Confederation, and the 1783 Treaty of Paris recognizing American independence, the word "people" was not used, and the phrase the United States was followed immediately by a listing of the states, from north to south. The change was made out of necessity, as the Constitution provided that whenever the popularly elected ratifying conventions of nine states gave their approval, it would go into effect for those nine, irrespective of whether any of the remaining states ratified.
Meaning and application
The Preamble serves solely as an introduction, and does not assign powers to the federal government, nor does it provide specific limitations on government action. Due to the Preamble's limited nature, no court has ever used it as a decisive factor in case adjudication, except as regards frivolous litigation.
The courts have shown interest in any clues they can find in the Preamble regarding the Constitution's meaning. Courts have developed several techniques for interpreting the meaning of statutes and these are also used to interpret the Constitution. As a result, the courts have said that interpretive techniques that focus on the exact text of a document should be used in interpreting the meaning of the Constitution. Balanced against these techniques are those that focus more attention on broader efforts to discern the meaning of the document from more than just the wording; the Preamble is also useful for these efforts to identify the "spirit" of the Constitution.
Additionally, when interpreting a legal document, courts are usually interested in understanding the document as its authors did and their motivations for creating it; as a result, the courts have cited the Preamble for evidence of the history, intent and meaning of the Constitution as it was understood by the Founders. Although revolutionary in some ways, the Constitution maintained many common law concepts (such as habeas corpus, trial by jury, and sovereign immunity), and courts deem that the Founders' perceptions of the legal system that the Constitution created (i.e., the interaction between what it changed and what it kept from the British legal system) are uniquely important because of the authority "the People" invested them with to create it. Along with evidence of the understandings of the men who debated and drafted the Constitution at the Constitutional Convention, the courts are also interested in the way that government officials have put into practice the Constitution's provisions, particularly early government officials, although the courts reserve to themselves the final authority to determine the Constitution's meaning. However, this focus on historical understandings of the Constitution is sometimes in tension with the changed circumstances of modern society from the late 18th century society that drafted the Constitution; courts have ruled that the Constitution must be interpreted in light of these changed circumstances. All of these considerations of the political theory behind the Constitution have prompted the Supreme Court to articulate a variety of special rules of construction and principles for interpreting it. For example, the Court's rendering of the purposes behind the Constitution have led it to express a preference for broad interpretations of individual freedoms.
An example of the way courts utilize the Preamble is Ellis v. City of Grand Rapids. Substantively, the case was about the Fifth Amendment, which is understood to require that property acquired via eminent domain must be put to a "public use". In deciding whether the proposed project constituted a "public use", the court pointed to the Preamble's reference to "promot[ing] the general Welfare" as evidence that "[t]he health of the people was in the minds of our forefathers". "[T]he concerted effort for renewal and expansion of hospital and medical care centers, as a part of our nation's system of hospitals, is as a public service and use within the highest meaning of such terms. Surely this is in accord with an objective of the United States Constitution: '* * * promote the general Welfare.'"
On the other hand, courts will not interpret the Preamble to give the government powers that are not articulated elsewhere in the Constitution. United States v. Kinnebrew Motor Co. is an example of this. In that case, the defendants were a car manufacturer and dealership indicted for a criminal violation of the National Industrial Recovery Act. The Congress passed the statute in order to cope with the Great Depression, and one of its provisions purported to give to the President authority to fix "the prices at which new cars may be sold". The dealership, located in Oklahoma City, had sold an automobile to a customer (also from Oklahoma City) for less than the price for new cars fixed pursuant to the Act. Substantively, the case was about whether the transaction in question constituted "interstate commerce" that Congress could regulate pursuant to the Commerce Clause. Although the government argued that the scope of the Commerce Clause included this transaction, it also argued that the Preamble's statement that the Constitution was created to "promote the general Welfare" should be understood to permit Congress to regulate transactions such as the one in this case, particularly in the face of an obvious national emergency like the Great Depression. The court, however, dismissed this argument as erroneous and insisted that the only relevant issue was whether the transaction that prompted the indictment actually constituted "interstate commerce" under the Supreme Court's precedents that interpreted the scope of the Commerce Clause.
Aspects of national sovereignty
The Preamble's reference to the "United States of America" has been interpreted over the years to explain the nature of the governmental entity that the Constitution created (i.e., the federal government). In contemporary international law, the world consists of sovereign states (or "sovereign nations" in modern equivalent). A state is said to be "sovereign," if any of its ruling inhabitants are the supreme authority over it; the concept is distinct from mere land-title or "ownership." While each state was originally recognized as sovereign unto itself, the Supreme Court held that the "United States of America" consists of only one sovereign nation with respect to foreign affairs and international relations; the individual states may not conduct foreign relations. Although the Constitution expressly delegates to the federal government only some of the usual powers of sovereign governments (such as the powers to declare war and make treaties), all such powers inherently belong to the federal government as the country's representative in the international community.
Domestically, the federal government's sovereignty means that it may perform acts such as entering into contracts or accepting bonds, which are typical of governmental entities but not expressly provided for in the Constitution or laws. Similarly, the federal government, as an attribute of sovereignty, has the power to enforce those powers that are granted to it (e.g., the power to "establish Post Offices and Post Roads" includes the power to punish those who interfere with the postal system so established). The Court has recognized the federal government's supreme power over those limited matters entrusted to it. Thus, no state may interfere with the federal government's operations as though its sovereignty is superior to the federal government's (discussed more below); for example, states may not interfere with the federal government's near absolute discretion to sell its own real property, even when that real property is located in one or another state. The federal government exercises its supreme power not as a unitary entity, but instead via the three coordinate branches of the government (legislative, executive, and judicial), each of which has its own prescribed powers and limitations under the Constitution. In addition, the doctrine of separation of powers functions as a limitation on each branch of the federal government's exercise of sovereign power.
One aspect of the American system of government is that, while the rest of the world now views the United States as one country, domestically American constitutional law recognizes a federation of state governments separate from (and not subdivisions of) the federal government, each of which is sovereign over its own affairs. Sometimes, the Supreme Court has even analogized the States to being foreign countries to each other to explain the American system of State sovereignty. However, each state's sovereignty is limited by the U.S. Constitution, which is the supreme law of both the United States as a nation and each state; in the event of a conflict, a valid federal law controls. As a result, although the federal government is (as discussed above) recognized as sovereign and has supreme power over those matters within its control, the American constitutional system also recognizes the concept of "State sovereignty," where certain matters are susceptible to government regulation, but only at the State and not the federal level. For example, although the federal government prosecutes crimes against the United States (such as treason, or interference with the postal system), the general administration of criminal justice is reserved to the States. Notwithstanding sometimes broad statements by the Supreme Court regarding the "supreme" and "exclusive" powers the State and Federal governments exercise, the Supreme Court and State courts have also recognized that much of their power is held and exercised concurrently.
People of the United States
The phrase "People of the United States" has sometimes been understood to mean "citizens." This approach reasons that, if the political community speaking for itself in the Preamble ("We the People") includes only citizens, by negative implication it specifically excludes non-citizens in some fashion. It has also been construed to mean something like "all under the sovereign jurisdiction and authority of the United States." The phrase has been construed as affirming that the national government created by the Constitution derives its sovereignty from the people, (whereas "United Colonies" had identified external monarchical sovereignty) as well as confirming that the government under the Constitution was intended to govern and protect "the people" directly, as one society, instead of governing only the states as political units. The Court has also understood this language to mean that the sovereignty of the government under the U.S. Constitution is superior to that of the States. Stated in negative terms, the Preamble has been interpreted as meaning that the Constitution was not the act of sovereign and independent states.
The popular nature of the Constitution
The Constitution claims to be an act of "We the People." However, because it represents a general social contract, there are limits on the ability of individual citizens to pursue legal claims allegedly arising out of the Constitution. For example, if a law was enacted which violated the Constitution, not just anybody could challenge the statute's constitutionality in court; instead, only an individual who was negatively affected by the unconstitutional statute could bring such a challenge. For example, a person claiming certain benefits that are created by a statute cannot then challenge, on constitutional grounds, the administrative mechanism that awards them. These same principles apply to corporate entities, and can implicate the doctrine of exhaustion of remedies.
In this same vein, courts will not answer hypothetical questions about the constitutionality of a statute. The judiciary does not have the authority to invalidate unconstitutional laws solely because they are unconstitutional, but may declare a law unconstitutional if its operation would injure a person's interests. For example, creditors who lose some measure of what they are owed when a bankrupt’s debts are discharged cannot claim injury, because Congress’ power to enact bankruptcy laws is also in the Constitution and inherent in it is the ability to declare certain debts valueless. Similarly, while a person may not generally challenge as unconstitutional a law that they are not accused of violating, once charged, a person may challenge the law's validity, even if the challenge is unrelated to the circumstances of the crime.
Where the Constitution is legally effective
The Preamble has been used to confirm that the Constitution was made for, and is binding only in, the United States of America. For example, in Casement v. Squier, a serviceman in China during World War II was convicted of murder in the United States Court for China. After being sent to prison in the State of Washington, he filed a writ of habeas corpus with the local federal court, claiming he had been unconstitutionally put on trial without a jury. The court held that, since his trial was conducted by an American court and was, by American standards, basically fair, he was not entitled to the specific constitutional right of trial by jury while overseas.
Since the Preamble declares the Constitution to have been created by the "People of the United States", "there may be places within the jurisdiction of the United States that are no part of the Union." The following examples help demonstrate the meaning of this distinction:
- Geofroy v. Riggs, 133 U.S. 258 (1890): the Supreme Court held that a certain treaty between the United States and France which was applicable in "the States of the Union" was also applicable in Washington, D.C., even though it is not a state or a part of a state.
- De Lima v. Bidwell, 182 U.S. 1 (1901): the Supreme Court ruled that a customs collector could not, under a statute providing for taxes on imported goods, collect taxes on goods coming from Puerto Rico after it had been ceded to the United States from Spain, reasoning that although it was not a State, it was under the jurisdiction of U.S. sovereignty, and thus the goods were not being imported from a foreign country. However, in Downes v. Bidwell, 182 U.S. 244 (1901), the Court held that the Congress could constitutionally enact a statute taxing goods sent from Puerto Rico to ports in the United States differently from other commerce, in spite of the constitutional requirement that "all Duties, Imposts and Excises shall be uniform throughout the United States," on the theory that although Puerto Rico could not be treated as a foreign country, it did not count as part of the "United States" and thus was not guaranteed "uniform" tax treatment by that clause of the Constitution. This was not the only constitutional clause held not to apply in Puerto Rico: later, a lower court went on to hold that goods brought from Puerto Rico into New York before the enactment of the tax statute held constitutional in Downes, could retroactively have the taxes applied to them notwithstanding the Constitution's ban on ex post facto laws, even if at the time they were brought into the United States no tax could be applied to the goods because Puerto Rico was not a foreign country.
- Ochoa v. Hernandez y Morales, 230 U.S. 139 (1913): the Fifth Amendment's requirement that "no person shall . . . be deprived of . . . property, without due process of law" was held, by the Supreme Court, to apply in Puerto Rico, even though it was not a State and thus not "part" of the United States.
To form a more perfect Union
The phrase "to form a more perfect Union" has been construed as referring to the shift to the Constitution from the Articles of Confederation. In this transition, the "Union" was made "more perfect" by the creation of a federal government with enough power to act directly upon citizens, rather than a government with narrowly limited power that could act on citizens (e.g., by imposing taxes) only indirectly through the states. Although the Preamble speaks of perfecting the "Union," and the country is called the "United States of America," the Supreme Court has interpreted the institution created as a government over the people, not an agreement between the States. The phrase has also been interpreted to confirm that state nullification of any federal law, dissolution of the Union, or secession from it, are not contemplated by the Constitution.
- In the hand-written engrossed copy of the Constitution maintained in the National Archives, the British spelling "defence" is used in the preamble (see the National Archives transcription and the Archives' image of the engrossed document; both web pages retrieved October 24, 2009).
- McDonald, Forrest. "Essay on the Preamble". The Heritage Foundation. Retrieved July 13, 2014.
- Schütze, Robert. European Constitutional Law, p. 50 (Cambridge University Press 2012).
- See Jacobson v. Massachusetts, 197 U.S. 11, 22 (1905) ("Although th[e] preamble indicates the general purposes for which the people ordained and established the Constitution, it has never been regarded as the source of any substantive power conferred on the government of the United States, or on any of its departments."); see also United States v. Boyer, 85 F. 425, 430–31 (W.D. Mo. 1898) ("The preamble never can be resorted to, to enlarge the powers confided to the general government, or any of its departments. It cannot confer any power per se. It can never amount, by implication, to an enlargement of any power expressly given. It can never be the legitimate source of any implied power, when otherwise withdrawn from the constitution. Its true office is to expound the nature and extent and application of the powers actually conferred by the constitution, and not substantively to create them." (quoting 1 JOSEPH STORY, COMMENTARIES ON THE CONSTITUTION OF THE UNITED STATES § 462 (1833)) (internal quotation marks omitted)).
- It is difficult to prove a negative, but courts have at times acknowledged this apparent truism. See, e.g., Boyer, 85 F. at 430 ("I venture the opinion that no adjudicated case can be cited which traces to the preamble the power to enact any statute.").
- In Jacobs v. Pataki, 68 F. App'x 222, 224 (2d Cir. 2003), the plaintiff made the bizarre argument that "the 'United States of America' that was granted Article III power in the Constitution is distinct from the 'United States' that currently exercises that power"; the court dismissed this contention with 3 words ("it is not") and cited a comparison of the Preamble's reference to the "United States of America" with Article III's vesting of the "judicial Power of the United States."
- Legal Tender Cases, 79 U.S. (12 Wall.) 457, 531–32 (1871) ("[I]t [cannot] be questioned that, when investigating the nature and extent of the powers, conferred by the Constitution upon Congress, it is indispensable to keep in view the objects for which those powers were granted. This is a universal rule of construction applied alike to statutes, wills, contracts, and constitutions. If the general purpose of the instrument is ascertained, the language of its provisions must be construed with reference to that purpose and so as to subserve it. In no other way can the intent of the framers of the instrument be discovered. And there are more urgent reasons for looking to the ultimate purpose in examining the powers conferred by a constitution than there are in construing a statute, a will, or a contract. We do not expect to find in a constitution minute details. It is necessarily brief and comprehensive. It prescribes outlines, leaving the filling up to be deduced from the outlines."), abrogated on other grounds by Pa. Coal Co. v. Mahon, 260 U.S. 393 (1922), as recognized in Lucas v. S.C. Coastal Council, 505 U.S. 1003 (1992).
- Cf. Badger v. Hoidale, 88 F.2d 208, 211 (8th Cir. 1937) ("Rules applicable to the construction of a statute are equally applicable to the construction of a Constitution." (citing Taylor v. Taylor, 10 Minn. 107 (1865))).
- Examples include the "plain meaning rule," Pollock v. Farmers' Loan & Trust Co., 158 U.S. 601, 619 (1895) ("The words of the Constitution are to be taken in their obvious sense, and to have a reasonable construction."), superseded on other grounds by U.S. CONST. amend. XVI, as recognized in Brushaber v. Union Pac. R.R., 240 U.S. 1 (1916); McPherson v. Blacker, 146 U.S. 1, 27 (1892) ("The framers of the Constitution employed words in their natural sense; and where they are plain and clear, resort to collateral aids to interpretation is unnecessary and cannot be indulged in to narrow or enlarge the text . . . ."), and noscitur a sociis, Virginia v. Tennessee, 148 U.S. 503, 519 (1893) ("It is a familiar rule in the construction of terms to apply to them the meaning naturally attaching to them from their context. Noscitur a sociis is a rule of construction applicable to all written instruments. Where any particular word is obscure or of doubtful meaning, taken by itself, its obscurity or doubt may be removed by reference to associated words. And the meaning of a term may be enlarged or restrained by reference to the object of the whole clause in which it is used.").
- See, e.g., Hooven & Allison Co. v. Evatt, 324 U.S. 652, 663 (1945) ("[I]n determining the meaning and application of [a] constitutional provision, we are concerned with matters of substance, not of form."), overruled on other grounds by Limbach v. Hooven & Allison Co., 466 U.S. 353 (1984); South Carolina v. United States, 199 U.S. 437, 451 (1905) ("[I]t is undoubtedly true that that which is implied is as much a part of the Constitution as that which is expressed."), overruled on other grounds by Garcia v. San Antonio Metro. Transit Auth., 469 U.S. 528 (1985); Ex parte Yarbrough, 110 U.S. 651, 658 (1884) ("[I]n construing the Constitution of the United States, [courts use] the doctrine universally applied to all instruments of writing, that what is implied is as much a part of the instrument as what is expressed. This principle, in its application to the Constitution of the United States, more than to almost any other writing, is a necessity, by reason of the inherent inability to put into words all derivative powers . . . ."); Packet Co. v. Keokuk, 95 U.S. 80, 87 (1877) ("A mere adherence to the letter [of the Constitution], without reference to the spirit and purpose, may [sometimes] mislead.").
- Missouri v. Illinois, 180 U.S. 208, 219 (1901) ("[W]hen called upon to construe and apply a provision of the Constitution of the United States, [courts] must look not merely to its language but to its historical origin, and to those decisions of this court in which its meaning and the scope of its operation have received deliberate consideration.").
- United States v. S.-E. Underwriters Ass'n, 322 U.S. 533, 539 (1944) ("Ordinarily courts do not construe words used in the Constitution so as to give them a meaning more narrow than one which they had in the common parlance of the times in which the Constitution was written."), superseded on other grounds by statute, McCarran-Ferguson Act, ch. 20, 59 Stat. 33 (1945) (codified as amended at 15 U.S.C. §§ 1011–1015 (2006)), as recognized in U.S. Dep't of the Treasury v. Fabe, 508 U.S. 491 (1993); Ex parte Bain, 121 U.S. 1, 12 (1887) ("[I]n the construction of the language of the Constitution . . . , we are to place ourselves as nearly as possible in the condition of the men who framed that instrument."), overruled on other grounds by United States v. Miller, 471 U.S. 130 (1985), and United States v. Cotton, 535 U.S. 625 (2002).
- United States v. Sanges, 144 U.S. 310, 311 (1892) ("[T]he Constitution . . . is to be read in the light of the common law, from which our system of jurisprudence is derived." (citations omitted)); Smith v. Alabama, 124 U.S. 465, 478 (1888) ("The interpretation of the Constitution of the United States is necessarily influenced by the fact that its provisions are framed in the language of the English common law, and are to be read in the light of its history.").
- United States v. Wood, 299 U.S. 123, 142 (1936) ("Whether a clause in the Constitution is to be restricted by a rule of the common law as it existed when the Constitution was adopted depends upon the terms or nature of the particular clause." (citing Cont'l Ill. Nat'l Bank & Trust Co. v. Chi., Rock Island & Pac. Ry. Co., 294 U.S. 648 (1935))); Mattox v. United States, 156 U.S. 237, 243 (1895) ("We are bound to interpret the Constitution in the light of the law as it existed at the time it was adopted, not as reaching out for new guaranties of the rights of the citizen, but as securing to every individual such as he already possessed as a British subject -- such as his ancestors had inherited and defended since the days of Magna Charta.").
- Veazie Bank v. Fenno, 75 U.S. (8 Wall.) 533, 542 (1869) ("We are obliged . . . to resort to historical evidence, and to seek the meaning of the words [in the Constitution] in the use and in the opinion of those whose relations to the government, and means of knowledge, warranted them in speaking with authority.").
- McPherson v. Blacker, 146 U.S. 1, 27 (1892) ("[W]here there is ambiguity or doubt [in the meaning of constitutional language], or where two views may well be entertained, contemporaneous and subsequent practical construction are entitled to the greatest weight."); Murray's Lessee v. Hoboken Land & Improvement Co., 59 U.S. (18 How.) 272, 279–80 (1856) ("[A] legislative construction of the constitution, commencing so early in the government, when the first occasion for [a] manner of proceeding arose, continued throughout its existence, and repeatedly acted on by the judiciary and the executive, is entitled to no inconsiderable weight upon the question whether the proceeding adopted by it was 'due process of law.'" (citations omitted)).
- Fairbank v. United States, 181 U.S. 283, 311 (1901) ("[A] practical construction [of the Constitution] is relied upon only in cases of doubt. . . . Where there was obviously a matter of doubt, we have yielded assent to the construction placed by those having actual charge of the execution of the statute, but where there was no doubt we have steadfastly declined to recognize any force in practical construction. Thus, before any appeal can be made to practical construction, it must appear that the true meaning is doubtful."); see Marbury v. Madison, 5 U.S. (1 Cranch) 137, 177 (1803) ("It is emphatically the province and duty of the judicial department to say what the law is.").
- In re Debs, 158 U.S. 564, 591 (1895) ("Constitutional provisions do not change, but their operation extends to new matters as the modes of business and the habits of life of the people vary with each succeeding generation."), overruled on other grounds by Bloom v. Illinois, 391 U.S. 194 (1968); R.R. Co. v. Peniston, 85 U.S. (18 Wall.) 5, 31 (1873) ("[T]he Federal Constitution must receive a practical construction. Its limitations and its implied prohibitions must not be extended so far as to destroy the necessary powers of the States, or prevent their efficient exercise."); In re Jackson, 13 F. Cas. 194, 196 (C.C.S.D.N.Y. 1877) (No. 7124) ("[I]n construing a grant of power in the constitution, it is to be construed according to the fair and reasonable import of its terms, and its construction is not necessarily to be controlled by a reference to what existed when the constitution was adopted.").
- E.g., Richfield Oil Corp. v. State Bd. of Equalization, 329 U.S. 69, 77, 78 (1946) ("[T]o infer qualifications does not comport with the standards for expounding the Constitution. . . . We cannot, therefore, read the prohibition against 'any' tax on exports as containing an implied qualification."); Fairbank, 181 U.S. at 287 ("The words expressing the various grants [of power] in the Constitution are words of general import, and they are to be construed as such, and as granting to the full extent the powers named."); Shreveport v. Cole, 129 U.S. 36, 43 (1889) ("Constitutions . . . are construed to operate prospectively only, unless, on the face of the instrument or enactment, the contrary intention is manifest beyond reasonable question.")
- Boyd v. United States, 116 U.S. 616, 635 (1886) ("[C]onstitutional provisions for the security of person and property should be liberally construed. A close and literal construction deprives them of half their efficacy, and leads to gradual depreciation of the right, as if it consisted more in sound than in substance. It is the duty of courts to be watchful for the constitutional rights of the citizen, and against any stealthy encroachments thereon."), recognized as abrogated on other grounds in Fisher v. United States, 425 U.S. 391 (1976).
- 257 F. Supp. 564 (W.D. Mich. 1966).
- Id. at 572.
- Id. at 574 (emphasis added).
- 8 F. Supp. 535 (W.D. Okla. 1934).
- Id. at 535.
- U.S. CONST. art. I, § 8, cl. 3. ("The Congress shall have power . . . [t]o regulate commerce . . . among the several states . . . .").
- Kinnebrew Motor Co., 8 F. Supp. at 539 ("Reference has been made in the government's brief to the ‘Welfare Clause’ of the Constitution as if certain powers could be derived by Congress from said clause. It is not necessary to indulge in an extended argument on this question for the reason that there is no such thing as the ‘Welfare Clause’ of the Constitution.").
- Id. at 544 ("The only question which this court pretends to determine in this case is whether or not the sale of automobiles, in a strictly retail business in the vicinity of Oklahoma City, constitutes interstate commerce, and this court, without hesitation, finds that there is no interstate commerce connected with the transactions described in this indictment, and if there is no interstate commerce, Congress has no authority to regulate these transactions.")
- See Shapleigh v. Mier, 299 U.S. 468, 470, 471 (1937) (when certain land passed from Mexico to the United States because of a shift in the Rio Grande's course, "[s]overeignty was thus transferred, but private ownership remained the same"; thus, a decree of a Mexican government official determining title to the land, "if lawful and effective under the Constitution and laws of Mexico, must be recognized as lawful and effective under the laws of the United States, the sovereignty of Mexico at the time of that decree being exclusive of any other")
- Chae Chan Ping v. United States, 130 U.S. 581, 604, 606 (1889) ("[T]he United States, in their relation to foreign countries and their subjects or citizens, are one nation, invested with powers which belong to independent nations, the exercise of which can be invoked for the maintenance of its absolute independence and security throughout its entire territory. The powers to declare war, make treaties, suppress insurrection, repel invasion, regulate foreign commerce, secure republican governments to the states, and admit subjects of other nations to citizenship are all sovereign powers, restricted in their exercise only by the Constitution itself and considerations of public policy and justice which control, more or less, the conduct of all civilized nations. . . . For local interests, the several states of the union exist, but for national purposes, embracing our relations with foreign nations, we are but one people, one nation, one power.").
- United States v. Curtiss-Wright Export Corp., 299 U.S. 304, 318 (1936) ("[T]he investment of the federal government with the powers of external sovereignty did not depend upon the affirmative grants of the Constitution. The powers to declare and wage war, to conclude peace, to make treaties, to maintain diplomatic relations with other sovereignties, if they had never been mentioned in the Constitution, would have vested in the federal government as necessary concomitants of nationality. . . . As a member of the family of nations, the right and power of the United States in that field are equal to the right and power of the other members of the international family. Otherwise, the United States is not completely sovereign.").
- United States v. Bradley, 35 U.S. (10 Pet.) 343, 359 (1836) ("[T]he United States being a body politic, as an incident to its general right of sovereignty, has a capacity to enter into contracts and take bonds in cases within the sphere of its constitutional powers and appropriate to the just exercise of those powers, . . . whenever such contracts or bonds are not prohibited by law, although the making of such contracts or taking such bonds may not have been prescribed by any preexisting legislative act."); United States v. Tingey, 30 U.S. (5 Pet.) 115, 128 (1831) ("[T]he United States has . . . [the] capacity to enter into contracts [or to take a bond in cases not previously provided for by some law]. It is in our opinion an incident to the general right of sovereignty, and the United States being a body politic, may, within the sphere of the constitutional powers confided to it, and through the instrumentality of the proper department to which those powers are confided, enter into contracts not prohibited by law and appropriate to the just exercise of those powers. . . . To adopt a different principle would be to deny the ordinary rights of sovereignty not merely to the general government, but even to the state governments within the proper sphere of their own powers, unless brought into operation by express legislation.")
- U.S. CONST. art. I, § 8, cl. 7
- In re Debs, 158 U.S. 564, 578, 582 (1895) ("While, under the dual system which prevails with us, the powers of government are distributed between the State and the Nation, and while the latter is properly styled a government of enumerated powers, yet within the limits of such enumeration, it has all the attributes of sovereignty, and, in the exercise of those enumerated powers, acts directly upon the citizen, and not through the intermediate agency of the State. . . . The entire strength of the nation may be used to enforce in any part of the land the full and free exercise of all national powers and the security of all rights entrusted by the Constitution to its care. The strong arm of the national government may be put forth to brush away all obstructions to the freedom of interstate commerce or the transportation of the mails. If the emergency arises, the army of the Nation, and all its militia, are at the service of the Nation to compel obedience to its laws.")
- In re Quarles, 158 U.S. 532, 535 (1895) ("The United States are a nation, whose powers of government, legislative, executive and judicial, within the sphere of action confided to it by the Constitution, are supreme and paramount. Every right, created by, arising under or dependent upon the Constitution, may be protected and enforced by such means, and in such manner, as Congress, in the exercise of the correlative duty of protection, or of the legislative powers conferred upon it by the Constitution, may in its discretion deem most eligible and best adapted to attain the object." (citing Logan v. United States, 144 U.S. 263, 293 (1892))); Dobbins v. Comm'rs of Erie Cnty., 41 U.S. (16 Pet.) 435, 447 (1842) ("The government of the United States is supreme within its sphere of action."), overruled on other grounds by Graves v. New York ex rel. O'Keefe, 306 U.S. 466 (1939), and superseded on other grounds by statute, Public Salary Tax Act of 1939, ch. 59, 53 Stat. 574 (codified as amended at 4 U.S.C. § 111 (2006)).
- United States v. Butler, 297 U.S. 1, 68 (1936) ("From the accepted doctrine that the United States is a government of delegated powers, it follows that those not expressly granted, or reasonably to be implied from such as are conferred, are reserved to the states or to the people. To forestall any suggestion to the contrary, the Tenth Amendment was adopted. The same proposition, otherwise stated, is that powers not granted are prohibited. None to regulate agricultural production is given, and therefore legislation by Congress for that purpose is forbidden." (footnote omitted)); Pac. Ins. Co. v. Soule, 74 U.S. (7 Wall.) 433, 444 (1869) ("The national government, though supreme within its own sphere, is one of limited jurisdiction and specific functions. It has no faculties but such as the Constitution has given it, either expressly or incidentally by necessary intendment. Whenever any act done under its authority is challenged, the proper sanction must be found in its charter, or the act is ultra vires and void."); Briscoe v. President of the Bank of Ky., 36 U.S. (11 Pet.) 257, 317 (1837) ("The federal government is one of delegated powers. All powers not delegated to it, or inhibited to the states, are reserved to the states, or to the people.")
- See U.S. CONST. art. IV, § 3, cl. 2; United States v. Bd. of Com'rs, 145 F.2d 329, 330 (10th Cir. 1944) ("Congress is vested with the absolute right to designate the persons to whom real property belonging to the United States shall be transferred, and to prescribe the conditions and mode of the transfer; and a state has no power to interfere with that right or to embarrass the exercise of it. Property owned by the United States is immune from taxation by the state or any of its subdivisions.")
- Dodge v. Woolsey, 59 U.S. (18 How.) 331, 347 (1856) ("The departments of the government are legislative, executive, and judicial. They are co-ordinate in degree to the extent of the powers delegated to each of them. Each, in the exercise of its powers, is independent of the other, but all, rightfully done by either, is binding upon the others. The constitution is supreme over all of them, because the people who ratified it have made it so; consequently, anything which may be done unauthorized by it is unlawful.")
- See Loan Ass'n v. Topeka, 87 U.S. (20 Wall.) 655, 663 (1875) ("The theory of our governments, state and national, is opposed to the deposit of unlimited power anywhere. The executive, the legislative, and the judicial branches of these governments are all of limited and defined powers."); Hepburn v. Griswold, 75 U.S. (8 Wall.) 603, 611 (1870) ("[T]he Constitution is the fundamental law of the United States. By it the people have created a government, defined its powers, prescribed their limits, distributed them among the different departments, and directed in general the manner of their exercise. No department of the government has any other powers than those thus delegated to it by the people. All the legislative power granted by the Constitution belongs to Congress, but it has no legislative power which is not thus granted. And the same observation is equally true in its application to the executive and judicial powers granted respectively to the President and the courts. All these powers differ in kind, but not in source or in limitation. They all arise from the Constitution, and are limited by its terms.")
- Humphrey's Ex'r v. United States, 295 U.S. 602, 629–30 (1935) ("The fundamental necessity of maintaining each of the three general departments of government entirely free from the control or coercive influence, direct or indirect, of either of the others has often been stressed, and is hardly open to serious question. So much is implied in the very fact of the separation of the powers of these departments by the Constitution, and in the rule which recognizes their essential coequality."); e.g., Ainsworth v. Barn Ballroom Co., 157 F.2d 97, 100 (4th Cir. 1946) (judiciary has no power to review a military order barring servicemen from patronizing a certain dance hall due to separation of powers concerns because "the courts may not invade the executive departments to correct alleged mistakes arising out of abuse of discretion[;] . . . to do so would interfere with the performance of governmental functions and vitally affect the interests of the United States")
- Tarble's Case, 80 U.S. (13 Wall.) 397, 406 (1872) ("There are within the territorial limits of each state two governments, restricted in their spheres of action but independent of each other and supreme within their respective spheres. Each has its separate departments, each has its distinct laws, and each has its own tribunals for their enforcement. Neither government can intrude within the jurisdiction, or authorize any interference therein by its judicial officers with the action of the other.")
- Bank of Augusta v. Earle, 38 U.S. (13 Pet.) 519, 590 (1839) ("It has . . . been supposed that the rules of comity between foreign nations do not apply to the states of this Union, that they extend to one another no other rights than those which are given by the Constitution of the United States, and that the courts of the general government are not at liberty to presume . . . that a state has adopted the comity of nations towards the other states as a part of its jurisprudence or that it acknowledges any rights but those which are secured by the Constitution of the United States. The Court thinks otherwise. The intimate union of these states as members of the same great political family, the deep and vital interests which bind them so closely together, should lead us, in the absence of proof to the contrary, to presume a greater degree of comity and friendship and kindness towards one another than we should be authorized to presume between foreign nations. . . . They are sovereign states, and the history of the past and the events which are daily occurring furnish the strongest evidence that they have adopted towards each other the laws of comity in their fullest extent."); Bank of U.S. v. Daniel, 37 U.S. (12 Pet.) 32, 54 (1838) ("The respective states are sovereign within their own limits, and foreign to each other, regarding them as local governments."); Buckner v. Finley, 27 U.S. (2 Pet.) 586, 590 (1829) (" For all national purposes embraced by the federal Constitution, the states and the citizens thereof are one, united under the same sovereign authority and governed by the same laws. In all other respects, the states are necessarily foreign to and independent of each other. Their constitutions and forms of government being, although republican, altogether different, as are their laws and institutions.")
- Angel v. Bullington, 330 U.S. 183, 188 (1947) ("The power of a state to determine the limits of the jurisdiction of its courts and the character of the controversies which shall be heard in them is, of course, subject to the restrictions imposed by the Federal Constitution." (quoting McKnett v. St. Louis & S.F. Ry. Co., 292 U.S. 230, 233 (1934)) (internal quotation marks omitted)); Ableman v. Booth, 62 U.S. (21 How.) 506, 516 (1856) ("[A]lthough the State[s] . . . [are] sovereign within [their] territorial limits to a certain extent, yet that sovereignty is limited and restricted by the Constitution of the United States.")
- United Pub. Workers v. Mitchell, 330 U.S. 75, 95–96 (1947) ("The powers granted by the Constitution to the Federal Government are subtracted from the totality of sovereignty originally in the states and the people. Therefore, when objection is made that the exercise of a federal power infringes upon rights reserved by the Ninth and Tenth Amendments, the inquiry must be directed toward the granted power under which the action of the Union was taken. If granted power is found, necessarily the objection of invasion of those rights, reserved by the Ninth and Tenth Amendments, must fail."); Tarble's Case, 80 U.S. at 406 ("The two governments in each state stand in their respective spheres of action in the same independent relation to each other, except in one particular, that they would if their authority embraced distinct territories. That particular consists in the supremacy of the authority of the United States when any conflict arises between the two governments.").
- recognized as abrogated on other grounds in New Mexico v. Mescalero Apache Tribe, 462 U.S. 324 (1983).
- Screws v. United States, 325 U.S. 91, 109 (1945) ("Our national government is one of delegated powers alone. Under our federal system, the administration of criminal justice rests with the States except as Congress, acting within the scope of those delegated powers, has created offenses against the United States.").
- E.g., Kohl v. United States, 91 U.S. 367, 372 (1876) ("Th[e federal] government is as sovereign within its sphere as the states are within theirs. True, its sphere is limited. Certain subjects only are committed to it; but its power over those subjects is as full and complete as is the power of the states over the subjects to which their sovereignty extends."). Taken very literally, statements like this could be understood to suggest that there is no overlap between the State and Federal governments.
- Ex parte McNiel, 80 U.S. (13 Wall.) 236, 240 (1872) ("In the complex system of polity which prevails in this country, the powers of government may be divided into four classes. Those which belong exclusively to the states. Those which belong exclusively to the national government. Those which may be exercised concurrently and independently by both. Those which may be exercised by the states, but only until Congress shall see fit to act upon the subject. The authority of the state then retires and lies in abeyance until the occasion for its exercise shall recur."); People ex rel. Woll v. Graber, 68 N.E.2d 750, 754 (Ill. 1946) ("The laws of the United States are laws in the several States, and just as binding on the citizens and courts thereof as the State laws are. The United States is not a foreign sovereignty as regards the several States but is a concurrent, and, within its jurisdiction, a paramount authority."); Kersting v. Hargrove, 48 A.2d 309, 310 (N.J. Cir. Ct. 1946) ("The United States government is not a foreign sovereignty as respects the several states but is a concurrent, and within its jurisdiction, a superior sovereignty. Every citizen of New Jersey is subject to two distinct sovereignties; that of New Jersey and that of the United States. The two together form one system and the two jurisdictions are not foreign to each other.").
- See, e.g., Dred Scott v. Sandford, 60 U.S. (19 How.) 393, 410–11 (1857) ("The brief preamble sets forth by whom [the Constitution] was formed, for what purposes, and for whose benefit and protection. It declares that [the Constitution] [was] formed by the people of the United States; that is to say, by those who were members of the different political communities in the several States; and its great object is declared to be to secure the blessings of liberty to themselves and their posterity. It speaks in general terms of the people of the United States, and of citizens of the several States, when it is providing for the exercise of the powers granted or the privileges secured to the citizen. It does not define what description of persons are intended to be included under these terms, or who shall be regarded as a citizen and one of the people. It uses them as terms so well understood, that no further description or definition was necessary. But there are two clauses in the Constitution which point directly and specifically to the negro race as a separate class of persons, and show clearly that they were not regarded as a portion of the people or citizens of the Government then formed." (emphasis added)), superseded by constitutional amendment, U.S. CONST. amend. XIV, § 1, as recognized in Slaughter-House Cases, 83 U.S. (16 Wall.) 36 (1873). But see id. at 581–82 (Curtis, J., dissenting) (arguing that "the Constitution has recognized the general principle of public law, that allegiance and citizenship depend on the place of birth" and that the "necessary conclusion is, that those persons born within the several States, who, by force of their respective Constitutions and laws, are citizens of the State, are thereby citizens of the United States").
- Jacobson v. Massachusetts, 197 U.S. 11, 22 (1905) (using this particular phrasing).
- Cf. League v. De Young, 52 U.S. (11 How.) 184, 203 (1851) ("The Constitution of the United States was made by, and for the protection of, the people of the United States."); Barron v. Mayor of Balt., 32 U.S. (7 Pet.) 243, 247 (1833) ("The constitution was ordained and established by the people of the United States for themselves, for their own government, and not for the government of the individual states. . . . The people of the United States framed such a government for the United States as they supposed best adapted to their situation and best calculated to promote their interests."), superseded on other grounds by constitutional amendment, U.S. CONST. amend. XIV, as recognized in Chi., Burlington & Quincy R.R. v. Chicago, 166 U.S. 226 (1897). While the Supreme Court did not specifically mention the Preamble in these cases, it seems apparent that it was expounding on the implications of what it understood reference to "the People" in the Preamble to mean.
- Chisholm v. Georgia, 2 U.S. (2 Dall.) 419 (1793) (". . . that the State Governments should be bound, and to which the State Constitutions should be made to conform. Every State Constitution is a compact made by and between the citizens of a State to govern themselves in a certain manner; and the Constitution of the United States is likewise a compact made by the people of the United States to govern themselves as to general objects, in a certain manner." (emphasis added)), abrogated by constitutional amendment, U.S. CONST. amend. XI, as recognized in Hollingsworth v. Virginia, 3 U.S. (3 Dall.) 378 (1798), and abrogated by Hans v. Louisiana, 134 U.S. 1, 12 (1890); see also United States v. Cathcart, 25 F. Cas. 344, 348 (C.C.S.D. Ohio 1864) (No. 14,756) ("[The Supreme Court has] den[ied] the assumption that full and unqualified sovereignty still remains in the states or the people of a state, and affirm[ed], on the contrary, that, by express words of the constitution, solemnly ratified by the people of the United States, the national government is supreme within the range of the powers delegated to it; while the states are sovereign only in the sense that they have an indisputable claim to the exercise of all the rights and powers guarantied to them by the constitution of the United States, or which are expressly or by fair implication reserved to them.").
- See White v. Hart, 80 U.S. (13 Wall.) 646, 650 (1872) ("The National Constitution was, as its preamble recites, ordained and established by the people of the United States. It created not a confederacy of States, but a government of individuals."); Martin v. Hunter's Lessee, 14 U.S. (1 Wheat.) 304, 324–25 (1816) ("The constitution of the United States was ordained and established, not by the states in their sovereign capacities, but . . . , as the preamble of the constitution declares, by 'the people of the United States.' . . . The constitution was not, therefore, necessarily carved out of existing state sovereignties, nor a surrender of powers already existing in state institutions . . . ."); cf. M‘Culloch v. Maryland, 17 U.S. (4 Wheat.) 316, 402–03 (1819) (rejecting a construction of the Constitution that would interpret it "not as emanating from the people, but as the act of sovereign and independent states. The powers of the general government . . . are delegated by the states, who alone are truly sovereign; and must be exercised in subordination to the states, who alone possess supreme dominion;" instead, "the [Constitution] was submitted to the people. They acted upon it . . . by assembling in convention. . . . [It] d[id] not, on . . . account [of the ratifying conventions assembling in each state], cease to be the [action] of the people themselves, or become [an action] of the state governments.").
- Ala. State Fed'n of Labor v. McAdory, 325 U.S. 450, 463 (1945) ("Only those to whom a statute applies and who are adversely affected by it can draw in question its constitutional validity in a declaratory judgment proceeding as in any other."); Premier-Pabst Sales Co. v. Grosscup, 298 U.S. 226, 227 (1936) ("One who would strike down a state statute as obnoxious to the Federal Constitution must show that the alleged unconstitutional feature injures him."); Buscaglia v. Fiddler, 157 F.2d 579, 581 (1st Cir. 1946) ("It is a settled principle of law that no court will consider the constitutionality of a statute unless the record before it affords an adequate factual basis for determining whether the challenged statute applies to and adversely affects the one who draws it in question."); Liberty Nat'l Bank v. Collins, 58 N.E.2d 610, 614 (Ill. 1944) ("The rule is universal that no one can raise a question as to the constitutionality of a statute unless he is injuriously affected by the alleged unconstitutional provisions. It is an established rule in this State that one may not complain of the invalidity of a statutory provision which does not affect him. This court will not determine the constitutionality of the provisions of an act which do not affect the parties to the cause under consideration, or where the party urging the invalidity of such provisions is not in any way aggrieved by their operation." (citation omitted)).
- See, e.g., Ison v. W. Vegetable Distribs., 59 P.2d 649, 655 (Ariz. 1936) ("It is the general rule of law that when a party invokes the benefit of a statute, he may not, in one and the same breath, claim a right granted by it and reject the terms upon which the right is granted."); State ex rel. Sorensen v. S. Neb. Power Co., 268 N.W. 284, 285 (Neb. 1936) ("[In this case,] defendants . . . invoked the statute, . . . relied upon and t[ook] advantage of it, and are now estopped to assail the statute as unconstitutional."). It is important not to read these too broadly. For example, in In re Auditor Gen., 266 N.W. 464 (Mich. 1936), certain property had been foreclosed upon for delinquent payment of taxes. A statute changed the terms by which foreclosure sales had to be published and announced in the community. The Michigan Supreme Court held that it was not necessary to question the validity of the taxes whose nonpayment led to the foreclosure, to have standing to question the validity of the procedure by which the foreclosure sale was being conducted.
- E.g., Am. Power & Light Co. v. SEC, 329 U.S. 90, 107 (1946) (a claim that the Public Utility Holding Company Act of 1935 "is void in the absence of an express provision for notice and opportunity for hearing as to security holders regarding proceedings under that section [is groundless]. The short answer is that such a contention can be raised properly only by a security holder who has suffered injury due to lack of notice or opportunity for hearing. No security holder of that type is now before us. The management of American . . . admittedly w[as] notified and participated in the hearings . . . and . . . possess[es] no standing to assert the invalidity of that section from the viewpoint of the security holders' constitutional rights to notice and hearing"); Virginian Ry. Co. v. Sys. Fed'n No. 40, Ry. Employees Dep't, 300 U.S. 515, 558 (1937) (under the Railway Labor Act, a "railroad can complain only of the infringement of its own constitutional immunity, not that of its employees" (citations omitted)).
- E.g., Anniston Mfg. Co. v. Davis, 301 U.S. 337, 353 (1937) ("Constitutional questions are not to be decided hypothetically. When particular facts control the decision they must be shown. Petitioner's contention as to impossibility of proof is premature. . . . For the present purpose it is sufficient to hold, and we do hold, that the petitioner may constitutionally be required to present all the pertinent facts in the prescribed administrative proceeding and may there raise, and ultimately may present for judicial review, any legal question which may arise as the facts are developed." (citation omitted)).
- United Pub. Workers v. Mitchell, 330 U.S. 75, 89–90 (1947) ("The power of courts, and ultimately of this Court, to pass upon the constitutionality of acts of Congress arises only when the interests of litigants require the use of this judicial authority for their protection against actual interference. A hypothetical threat is not enough.").
- Sparks v. Hart Coal Corp., 74 F.2d 697, 699 (6th Cir. 1934) ("It has long been settled that courts have no power per se to review and annul acts of Congress on the ground that they are unconstitutional. That question may be considered only when the justification for some direct injury suffered or threatened, presenting a justiciable issue, is made to rest upon such act."); e.g., Manne v. Comm'r, 155 F.2d 304, 307 (8th Cir. 1946) ("A taxpayer alleging unconstitutionality of an act must show not only that the act is invalid, but that he has sustained some direct injury as the result of its enforcement." (citing Massachusetts v. Mellon, 262 U.S. 447 (1923))).
- In re 620 Church St. Bldg. Corp., 299 U.S. 24, 27 (1936) (“Here the controlling finding is not only that there was no equity in the property above the first mortgage but that petitioners' claims were appraised by the court as having ‘no value.’ There was no value to be protected. This finding . . . [renders] the constitutional argument [that petitioners were deprived of property without due process of law] unavailing as petitioners have not shown injury.”).
- Mauk v. United States, 88 F.2d 557, 559 (9th Cir. 1937) ("Since appellant is not indicted under or accused of violating this provision, he has no interest or standing to question its validity. That question is not before us and will not be considered.").
- Downes v. Bidwell, 182 U.S. 244, 251 (1901) ("The Constitution was created by the people of the United States, as a union of states, to be governed solely by representatives of the states."); In re Ross, 140 U.S. 453, 464 (1891) ("By the constitution a government is ordained and established ‘for the United States of America,’ and not for countries outside of their limits. The guaranties it affords against accusation of capital or infamous crimes, except by indictment or presentment by a grand jury, and for an impartial trial by a jury when thus accused, apply only to citizens and others within the United States, or who are brought there for trial for alleged offenses committed elsewhere, and not to residents or temporary sojourners abroad.").
- 46 F. Supp. 296 (W.D. Wash. 1942), aff'd, 138 F.2d 909 (9th Cir. 1943).
- Id. at 296 ("Upon his arraignment the [trial] court appointed counsel for the petitioner who was without funds and was a member of the armed forces of the United States at Shanghai. The petitioner entered a plea of not guilty and demanded a trial before a jury of Americans, which motion was denied, and he was thereupon tried by the court. The petitioner contends that his constitutional rights were violated by his being denied a jury trial.").
- Id. at 299 ("The petitioner does not claim that he was not afforded a fair trial aside from the denial of his demand for a jury. Inasmuch as unquestionably he obtained a trial more to his liking than he would have obtained in Shanghai in other than an American court sitting in Shanghai, and since the Supreme Court of this country has determined that the right of trial by jury does not obtain in an American court sitting in another country pursuant to treaty, it must be held that the allegations of petitioner's petition do not entitle him to release.").
- Downes, 182 U.S. at 251 (emphases added). Compare, e.g., Dooley v. United States, 182 U.S. 222, 234 (1901) ("[A]fter the ratification of the treaty [with Spain] and the cession of the island to the United States[,] Porto Rico then ceased to be a foreign country . . . ."), and Municipality of Ponce v. Roman Catholic Apostolic Church, 210 U.S. 296, 310 (1908) ("[I]n case of cession to the United States; laws of the ceded country inconsistent with the Constitution and laws of the United States, so far as applicable, would cease to be of obligatory force; but otherwise the municipal laws of the acquired country continue." (quoting Ortega v. Lara, 202 U.S. 339, 342 (1906))), with Downes, 182 U.S. at 287 ("[T]he island of Porto Rico is a territory appurtenant and belonging to the United States, but not a part of the United States . . . .").
- The fact that this discussion happens to talk mainly about Puerto Rico should not be understood to imply that the Supreme Court held that Puerto Rico was some sort of sui generis jurisdiction. For example, in Goetze v. United States, 182 U.S. 221 (1901), the Supreme Court held that this same reasoning (that a place could be under the jurisdiction of the United States, without being "part" of the United States) applied to Hawaii before it was admitted into the Union as a State.
- U.S. CONST. art. I, § 8, cl. 1.
- De Pass v. Bidwell, 124 F. 615 (C.C.S.D.N.Y. 1903).
- See Lane Cnty. v. Oregon, 74 U.S. (7 Wall.) 71, 76 (1869) ("The people, through [the Constitution], established a more perfect union by substituting a national government, acting, with ample power, directly upon the citizens, instead of the Confederate government, which acted with powers, greatly restricted, only upon the States.").
- Legal Tender Cases, 79 U.S. (12 Wall.) 457, 545 (1871) ("The Constitution was intended to frame a government as distinguished from a league or compact, a government supreme in some particulars over States and people."); id. at 554–55 (Bradley, J., concurring) ("The Constitution of the United States established a government, and not a league, compact, or partnership. It was constituted by the people. It is called a government.").
- See Bush v. Orleans Parish Sch. Bd., 188 F. Supp. 916, 922–23 (E.D. La. 1960) ("Interposition is . . . based on the proposition that the United States is a compact of states, any one of which may interpose its sovereignty against the enforcement within its borders of any decision of the Supreme Court or act of Congress, irrespective of the fact that the constitutionality of the act has been established by decision of the Supreme Court. . . . In essence, the doctrine denies the constitutional obligation of the states to respect those decisions of the Supreme Court with which they do not agree. The doctrine may have had some validity under the Articles of Confederation. On their failure, however, ‘in Order to form a more perfect Union,’ the people, not the states, of this country ordained and established the Constitution. Thus the keystone of the interposition thesis, that the United States is a compact of states, was disavowed in the Preamble to the Constitution." (emphasis added) (footnote omitted) (citation omitted)), aff'd mem., 365 U.S. 569 (1961). Although the State of Louisiana in Bush invoked a concept it called "interposition," it was sufficiently similar to the concept of "nullification" that the court used the latter, more familiar term in a fashion that clearly indicated it viewed the concepts as functionally interchangeable. See id. at 923 n.7 ("[E]ven the ‘compact theory’ [of the Constitution] does not justify interposition. Thus, Edward Livingston, . . . though an adherent of th[e 'compact] theory['], strongly denied the right of a state to nullify federal law or the decisions of the federal courts." (emphases added)). Compare Martin, 14 U.S. (1 Wheat.) at 332 ("The confederation was a compact between states; and its structure and powers were wholly unlike those of the national government."), with id. ("The constitution was an act of the people of the United States to supersede the confederation, and not to be ingrafted on it, as a stock through which it was to receive life and nourishment.").
- White v. Hart, 80 U.S. (13 Wall.) 646, 650 (1871) ("[The Constitution] assumed that the government and the Union which it created, and the States which were incorporated into the Union, would be indestructible and perpetual; and as far as human means could accomplish such a work, it intended to make them so.")
- Texas, 74 U.S. (7 Wall.) at 725–26 ("[W]hen the Articles [of Confederation] were found to be inadequate to the exigencies of the country, the Constitution was ordained 'to form a more perfect Union.' It is difficult to convey the idea of indissoluble unity more clearly than by these words. What can be indissoluble if a perpetual Union, made more perfect, is not? . . . The Constitution, in all its provisions, looks to an indestructible Union, composed of indestructible States. When, therefore, Texas became one of the United States, she entered into an indissoluble relation. All the obligations of perpetual union, and all the guaranties of republican government in the Union, attached at once to the State. The act which consummated her admission into the Union was something more than a compact; it was the incorporation of a new member into the political body. And it was final. The union between Texas and the other States was as complete, as perpetual, and as indissoluble as the union between the original States. There was no place for reconsideration, or revocation, except through revolution, or through consent of the States."); United States v. Cathcart, 25 F. Cas. 344, 348 (C.C.S.D. Ohio 1864) (No. 14,756) ("The[ Supreme Court has] repudiate[d] emphatically the mischievous heresy that the union of the states under the constitution is a mere league or compact, from which a state, or any number of states, may withdraw at pleasure, not only without the consent of the other states, but against their will.").
You can find all the previous posts about Vedic Mathematics below:
Introduction to Vedic Mathematics
A Spectacular Illustration of Vedic Mathematics
Multiplication Part 1
Multiplication Part 2
Multiplication Part 3
Multiplication Part 4
Multiplication Part 5
Multiplication Special Case 1
Multiplication Special Case 2
Multiplication Special Case 3
Vertically And Crosswise I
Vertically And Crosswise II
Squaring, Cubing, Etc.
Division By The Nikhilam Method I
Division By The Nikhilam Method II
We have already seen the beginnings of how the process works in the previous two lessons. The key to the process is in the step where we multiplied the sum under each column by the 10's complement of the denominator. To illustrate the method further, we will take a few examples, starting with a simple case.
Let us work out 123/8. As before, our figure starts out looking like the one below:
As always, the first line consists of the denominator followed by its 10's complement (the 10's complement of 8 is 2). The numerator has been divided by a "|" such that there are as many digits to the right of the "|" as there are digits in the denominator. We then put a zero under the first digit of the numerator.
Now add up the digits in that column of the numerator to get a sum of 1 (1 + 0 = 1). Multiply it by the 10's complement to get 2 (1 x 2 = 2). Put that under the second digit of the numerator. The figure now looks as below:
The sum of the digits under the second column is 4. Multiplying this by the 10's complement gives us 8. Put the 8 under the third digit of the numerator, right of the "|". Now we add up the numbers under the columns to get the figure below:
Note that there is no carryover from the right of the "|" to the left of it, so the columns give us an intermediate quotient of 14 and an intermediate remainder of 11. Following the rules on how to deal with a remainder greater than the denominator, we divide the remainder by the denominator, add the new quotient to the original quotient, and retain the new remainder as the final remainder. Our final figure, showing a quotient of 15 and a remainder of 3, looks like this:
It is easy to verify that this is indeed the right answer to the problem.
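For anyone who wants to see the bookkeeping spelled out, below is a minimal Python sketch of the single-digit-denominator case we just worked through. It is only a sketch: the function name nikhilam_divide_single and the variable names are invented for this illustration, and the oversized-remainder step is folded back with divmod rather than by repeating the method.

```python
def nikhilam_divide_single(numerator, divisor):
    """Minimal sketch of the running process for a single-digit denominator."""
    complement = 10 - divisor                  # e.g. 2 is the 10's complement of 8
    digits = [int(d) for d in str(numerator)]

    running, quotient = 0, 0
    for d in digits[:-1]:                      # digits to the left of the "|"
        running = running * complement + d     # the running column sum
        quotient = quotient * 10 + running     # read the column sums with carries

    # The lone column to the right of the "|": last digit plus the final
    # product; no carryover crosses the bar.
    remainder = digits[-1] + running * complement

    # If the remainder is still at least as large as the divisor, fold the
    # excess back into the quotient (the lesson repeats the method instead).
    extra, remainder = divmod(remainder, divisor)
    return quotient + extra, remainder

print(nikhilam_divide_single(123, 8))          # (15, 3), as in the example above
```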
Now, let us work out a larger problem such as 894378/7. We get the figure below:
On the first line, we have 7 and its 10's complement, 3. Then we have the numerator, with one digit behind the "|". We have a zero below the first digit of the numerator. The sum under that column is 8 (8 + 0 = 8), and the product of this sum with the 10's complement is 24. We have followed the usual rules of carryover by putting the 4 under the second digit of the numerator and the 2 under the first digit. Our next sum becomes 33 (24 + 9 = 33); note that we use the whole product, 24, in this sum, even though its tens digit was carried back under the first digit, because that column has already been dealt with (the same rule is spelled out again in the 49857/79 example below). Multiplying that by 3 gives us 99. Once again, we put the first 9 under the third digit of the numerator and the second 9 under the 2nd digit of the numerator to give us the figure below:
Now, we move on to the third digit of the numerator. The sum we have is now 99 + 4 = 103. Multiplying 103 by the 10's complement, we get 309. Now, we put the 9 below the 4th digit of the numerator, the 0 below the 3rd digit and the 3 below the 2nd digit of the numerator. We get the figure below:
Under the 4th digit of the numerator, we now have 309 + 3 = 312. Multiplying 312 by 3 gives us 936. We write it as below following normal carryover rules:
Coming to the last digit of the numerator before the "|", we get 936 + 7 = 943. Multiplying 943 by 3 gives us 2829. This leads to the figure below, with an intermediate quotient of 127363 (adding up the columns to the left of the "|") and an intermediate remainder of 2837 (8 + 2829) to the right of it:
Deriving the final answer from this intermediate answer (2837 divided by 7 gives 405 with a remainder of 2), we get a final quotient of 127363 + 405 = 127768 and a final remainder of 2.
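As a quick standalone check of this walkthrough, the few lines of Python below reproduce the running sums 8, 33, 103, 312 and 943 and the final answer. The variable names are invented for this check; it is not part of the original lesson.

```python
numerator, divisor = 894378, 7
complement = 10 - divisor                      # 3
digits = [int(d) for d in str(numerator)]

running, quotient = 0, 0
for d in digits[:-1]:                          # digits to the left of the "|"
    running = running * complement + d         # 8, 33, 103, 312, 943 in turn
    quotient = quotient * 10 + running         # intermediate quotient: 127363

remainder = digits[-1] + running * complement  # 8 + 2829 = 2837
extra, remainder = divmod(remainder, divisor)  # 2837 = 7 * 405 + 2
print(quotient + extra, remainder)             # 127768 2
print(divmod(numerator, divisor))              # (127768, 2), so the method agrees
```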
To illustrate division by bigger denominators, let us start with 1123/88.
We start out by writing the problem as below:
As always, 88 and its 10's complement, 12, are on the first line. We have written the numerator on the next line, separating the last two digits behind the "|" because the denominator contains 2 digits. Now we write a 0 below the first digit of the numerator. We then add up the numbers under that digit, giving us 1. Now we have to multiply this number by the digits of the 10's complement. The product of 1 with the left digit of the 10's complement is 1 (1 x 1 = 1), and this goes under the second digit of the numerator. The product of 1 with the right digit of the 10's complement is 2 (1 x 2 = 2) and this goes under the 3rd digit of the numerator, to the right of the "|". The figure below reflects this:
Now add up the digits under the second digit of the numerator to get 1 + 1 = 2. We now multiply this sum by the digits of the 10's complement. The product of 2 with the left digit of the 10's complement is 2 (2 x 1 = 2), and this goes under the 3rd digit of the numerator. The product of 2 with the right digit of the 10's complement is 4 (2 x 2 = 4), and this goes under the 4th digit of the numerator. Since we have now dealt with all the digits of the numerator to the left of the "|", we are done. Add up the numbers under the columns to get the intermediate answer as below:
Since the intermediate remainder is less than the denominator, the intermediate answer above is also the final answer, and this can be verified using a calculator.
Let us try a couple more problems to convince ourselves that the method indeed works. We will start with 49857/79.
Note that 4 + 0 = 4, and the product of 4 with 2 and 1 gives rise to the 8 and 4 under the 2nd and 3rd digits of the numerator. Now, 9 + 8 = 17. The product of 17 with 2 and 1 gives rise to 34 under the 3rd digit of the numerator and 17 under the 4th digit. Since the 4th digit is the first digit of the numerator beyond the "|", and there is one more digit of the numerator to its right, we write the 17 as 170, with the addition of a zero to account for this extra digit. We get the figure below:
Since the 3 in the last row, from the 34 obtained as the product of 17 with 2, has never been dealt with in the problem before (and it is placed under the 2nd digit of the numerator, which we have already finished dealing with), we have to include it when we find the sum under the 3rd digit of the numerator. This gives us the sum of 46 (8 + 4 + 34). Multiplying 46 by 2 and 1 gives us 92 and 46. They are written as below (note the zero after the 92 to account for the fact that the 92 goes under the 4th digit of the numerator and there is one other digit of the numerator to the right of the "|"):
We then divide 1193 by 79 to get a final remainder of 8 and a new quotient of 15. Adding the original quotient to the new quotient gives us a final answer as below:
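As a quick cross-check of the larger worked examples (again just a verification sketch using Python's divmod, assuming the figures show the standard quotient-and-remainder answers):

for n, d in [(894378, 7), (1123, 88), (49857, 79)]:
    q, r = divmod(n, d)
    print(n, "/", d, "=", q, "remainder", r)

This prints 894378 / 7 = 127768 remainder 2, 1123 / 88 = 12 remainder 67, and 49857 / 79 = 631 remainder 8.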
Let us now extend this method further by working out a few more problems. Just remember some simple rules and the rest becomes very easy:
- Write the 10's complement with as many digits as the denominator, padding to the left with zeroes as necessary
- Put a zero under the first digit of the numerator
- Add them (you get the first digit of the numerator once again), and multiply the sum by the individual digits of the 10's complement
- The product of the left-most digit of the 10's complement with the sum goes under the next digit of the numerator, while the product of the digit to its right with the sum goes under the next digit to the right and so on
- If some number from the product has not been dealt with when finding the sum of digits under a particular digit of the numerator (because of carryover as in the previous example), then it has to be dealt with when finding the sum under the next digit of the numerator
- There is no carryover from the right of the "|" to the left
- If a number goes under a digit to the right of the "|", pad it with as many zeroes to the right as there are digits in the numerator to its right
- Stop when you have dealt with the sum of digits under the last digit of the numerator to the left of the "|"
- Add up the digits to the right of the "|" with no carryover to the left
- Normal carryover rules apply when the carryover is entirely to the left of or right of the "|"
- If the intermediate remainder is larger than the denominator, perform division of this remainder by the denominator once again (this process is recursive, so remember this rule when you divide the remainder by the denominator!)
- The new quotient is added to the intermediate quotient to get the final quotient
- The new remainder is the final remainder
Let us now apply these rules to a few more problems that look difficult, but actually turn out to be quite easy once we start tackling them.
As you can see from the examples worked out above, this method works very well when the denominator is made up of large digits so that its 10's complement contains small numbers (mostly 0's, 1's etc.). The method becomes more cumbersome when the products are larger because of large numbers in the 10's complement. We will deal with such problems using other methods in future lessons.
This method is valuable precisely because it works well when the denominator is composed of large digits. It is in such division problems that most people face difficulties, because trial-and-error multiplication becomes harder when the denominator consists of large digits. Moreover, such problems also involve more difficult subtractions, especially when the numerator contains small digits. Since the method illustrated here consists of very simple multiplications (mostly single digit by single digit, though occasionally one may need to multiply larger numbers) and additions, it is ideal for such difficult division problems. Even though the method may seem complicated at first glance, it is very easy to master with practice. Good luck, and happy computing!
The Roman Republic was the era of classical Roman civilization beginning with the overthrow of the Roman Kingdom, traditionally dated to 509 BC, and ending in 27 BC with the establishment of the Roman Empire. It was during this period that Rome's control expanded from the city's immediate surroundings to hegemony over the entire Mediterranean world. Roman society under the Republic was a cultural mix of Latin and Greek elements, visible in the Roman Pantheon; its political organisation was influenced by the Greek city states of Magna Graecia, with collective and annual magistracies overseen by a senate. The top magistrates were the two consuls, who had an extensive range of executive, judicial and religious powers. Whilst there were elections each year, the Republic was not a democracy, but an oligarchy, as a small number of large families monopolised the main magistracies. Roman institutions underwent considerable changes throughout the Republic to adapt to the difficulties it faced, such as the creation of promagistracies to rule its conquered provinces, or changes to the composition of the senate.
Unlike the Pax Romana of the Roman Empire, the Republic was in a state of quasi-perpetual war throughout its existence. Its first enemies were its Latin and Etruscan neighbours as well as the Gauls, who sacked the city in 387 BC; the Republic nonetheless demonstrated extreme resilience and always managed to overcome its losses, however catastrophic. After the Gallic Sack, Rome conquered the whole Italian peninsula in a century, which turned the Republic into a major power in the Mediterranean. The Republic's greatest enemy was doubtless Carthage, against which it fought three Punic Wars. The Punic general Hannibal famously invaded Italy by crossing the Alps and inflicted on Rome two devastating defeats at Lake Trasimene and Cannae, but the Republic once again recovered and won the war thanks to Scipio Africanus at the Battle of Zama in 202 BC. With Carthage defeated, Rome became the dominant power of the ancient Mediterranean world and embarked on a long series of difficult conquests, notably defeating Philip V and Perseus of Macedon, Antiochus III of the Seleucid Empire, the Lusitanian Viriathus, the Numidian Jugurtha, the great Pontic king Mithridates VI, the Gaul Vercingetorix, and the Egyptian queen Cleopatra.
At home, the Republic experienced a long streak of social and political crises, which ended in several violent civil wars. At first, the Conflict of the Orders opposed the patricians, the closed oligarchic elite, to the far more numerous plebs, who achieved political equality in several steps during the 4th century BC. Later, the vast conquests of the Republic disrupted its society, as the immense influx of slaves they brought enriched the aristocracy but ruined the peasantry and urban workers. In order to solve this issue, several social reformers, known as the Populares, tried to pass agrarian laws, but the Gracchi brothers, Saturninus, and Clodius Pulcher were all murdered by their opponents, the Optimates, keepers of the traditional aristocratic order. Mass slavery caused three Servile Wars. In this context, the last decades of the Republic were marked by the rise of great generals, who exploited their military conquests and the factional situation in Rome to gain control of the political system.
Marius and Sulla dominated the Republic in turn. These multiple tensions led to a series of civil wars. Despite his victory and appointment as dictator for life, Julius Caesar was murdered in 44 BC. Caesar's heir Octavian and his lieutenant Mark Antony defeated Caesar's assassins Brutus and Cassius in 42 BC, but then turned against each other; the final defeat of Mark Antony and his ally Cleopatra at the Battle of Actium in 31 BC and the Senate's grant of extraordinary powers to Octavian as Augustus in 27 BC – which made him the first Roman emperor – thus ended the Republic. Since the foundation of Rome, its rulers had been monarchs, elected for life by the patrician noblemen who made up the Roman Senate; the last Roman king was Lucius Tarquinius Superbus. In the traditional histories, Tarquin was expelled in 509 BC because his son Sextus Tarquinius had raped the noblewoman Lucretia, who afterwards took her own life. Lucretia's father, her husband Lucius Tarquinius Collatinus, and Tarquin's nephew Lucius Junius Brutus mustered support from the Senate and army and forced Tarquin into exile in Etruria.
The Senate agreed to abolish kingship. Most of the king's former functions were transferred to two consuls, who were elected to office for a term of one year; each consul had the capacity to act as a check on his colleague, if necessary through the same power of veto that the kings had held. If a consul abused his powers in office, he could be prosecuted. Brutus and Collatinus became Republican Rome's first consuls. Despite Collatinus' role in the creation of the Republic, he belonged to the same family as the former king and was forced to abdicate his office and leave Rome; he was replaced as co-consul by Publius Valerius Publicola. Most modern scholarship describes these events as the quasi-mythological detailing of an aristocratic coup within Tarquin's own family, not a popular revolution; they fit a narrative of personal vengeance against a tyrant leading to his overthrow, common among Greek cities and theorised by Aristotle.
Mauretania is the Latin name for an area in the ancient Maghreb. It stretched from central present-day Algeria westwards to the Atlantic, covering northern Morocco and reaching southward to the Atlas Mountains. Its native inhabitants, seminomadic pastoralists of Berber ancestral stock, were known to the Romans as the Mauri and the Masaesyli. Beginning in 27 BC, the kings of Mauretania became Roman vassals until about 44 AD, when the area was annexed to Rome and divided into two provinces: Mauretania Tingitana and Mauretania Caesariensis. In the late 3rd century, another province, Mauretania Sitifensis, was formed out of the eastern part of Caesariensis; when the Vandals arrived in Africa in 429, much of Mauretania became independent. Christianity had spread there in the 4th and 5th centuries but was extinguished when the Arabs conquered the region in the 7th century. Mauretania existed as a tribal kingdom of the Berber Mauri people. Yevgenii Pospelov records a Phoenician naming of the area which became known as Mauretania: the Phoenicians called the country at the extreme western edge of their known world Mauharim, meaning "Western land".
In the early 1st century Strabo recorded Mauri as the native name. This appellation was adopted into Latin; the Mauri, present on the Mediterranean coast of North Africa from at least the 3rd century BC, would bequeath their name to the Moors. The Mediterranean coast of Mauretania had commercial harbours for trade with Carthage from before the 4th century BC, but the interior was controlled by Berber tribes, who had established themselves in the region by the beginning of the Iron Age. King Atlas was a legendary king of Mauretania credited with the invention of the celestial globe; the first known historical king of the Mauri ruled during the Second Punic War of 218-201 BC. The Mauri were in close contact with Numidia. Bocchus I was father-in-law to the redoubtable Numidian king Jugurtha. Mauretania became a client kingdom of the Roman Empire in 33 BC; the Romans installed Juba II of Numidia as their client-king. When Juba died in AD 23, his Roman-educated son Ptolemy of Mauretania succeeded him; the Emperor Caligula had Ptolemy executed in 40.
The Roman Emperor Claudius annexed Mauretania directly as a Roman province in 44, placing it under an imperial governor. The known kings of Mauretania are: In the 1st century AD, Emperor Claudius divided the Roman province of Mauretania into Mauretania Caesariensis and Mauretania Tingitana along the line of the Mulucha River, about 60 km west of modern Oran: Mauretania Tingitana was named after its capital Tingis. Mauretania Caesariensis was named after its capital Caesarea and comprised western and central Algeria. Mauretania gave the empire the equestrian Macrinus, he seized power after the assassination of Caracalla in 217 but was himself defeated and executed by Elagabalus the next year. Emperor Diocletian's Tetrarchy reform further divided the area into three provinces, as the small, easternmost region of Sitifensis was split off from Mauretania Caesariensis; the Notitia Dignitatum mentions themas still existing, two being under the authority of the Vicarius of the diocese of Africa: A Dux et praeses provinciae Mauritaniae et Caesariensis, i.e. a Roman governor of the rank of Vir spectabilis, who held the high military command of dux, as the superior of eight border garrison commanders, each styled Praepositus limitis... followed by Columnatensis, inferioris, Muticitani, Audiensis and Augustensis.
A Praeses in the province of Mauretania Sitifensis. And, under the authority of the Vicarius of the diocese of Hispaniae: A Comes rei militaris of Mauretania Tingitana ranking as vir spectabilis, in charge of the following border garrison commanders: Praefectus alae Herculeae at Tamuco Tribunus cohortis secundae Hispanorum at Duga Tribunus cohortis primae Herculeae at Aulucos Tribunus cohortis primae Ityraeorum at Castrabarensis Another Tribunus cohortis at Sala Tribunus cohortis Pacatianensis at Pacatiana Tribunus cohortis tertiae Asturum at Tabernas Tribunus cohortis Friglensis at the Fortress of Friglas or Frigias, near Lixusand to whom three extraordinary cavalry units were assigned: Equites scutarii seniores Equites sagittarii seniores Equites Cordueni A Praeses of the same province of Tingitana During the crisis of the 3rd century, parts of Mauretania were reconquered by Berber tribes. Direct Roman rule became confined to a few coastal cities by the late 3rd century. Historical sources about inland areas are sparse, but these were controlled by local Berber rulers who, maintained a degree of Roman culture, including the local cities, nominally acknowledged the suzerainty of the Roman Emperors.
The Western kingdom more distant from the Vandal kingdom was the one of Altava, a city located at the borders of Mauretania Tingitana and Caesariensis.... It is clear that the Mauro-Roman kingdom of Altava was inside the Western Latin world, not only because of location but because it adopted the military-religious-sociocultural-administrative organization of the Roman Empire... In an inscription from Altava in western Algeria, one of these rulers, described himself as rex gentium Maurorum et Romanorum. Altava was the capital of another ruler, Garmul or
Marcus Antonius known in English as Mark Antony or Anthony, was a Roman politician and general who played a critical role in the transformation of the Roman Republic from an oligarchy into the autocratic Roman Empire. Antony was a supporter of Julius Caesar, served as one of his generals during the conquest of Gaul and the Civil War. Antony was appointed administrator of Italy while Caesar eliminated political opponents in Greece, North Africa, Spain. After Caesar's death in 44 BC, Antony joined forces with Marcus Aemilius Lepidus, another of Caesar's generals, Octavian, Caesar's great-nephew and adopted son, forming a three-man dictatorship known to historians as the Second Triumvirate; the Triumvirs defeated Caesar's murderers, the Liberatores, at the Battle of Philippi in 42 BC, divided the government of the Republic between themselves. Antony was assigned Rome's eastern provinces, including the client kingdom of Egypt ruled by Cleopatra VII Philopator, was given the command in Rome's war against Parthia.
Relations among the triumvirs were strained. Civil war between Antony and Octavian was averted in 40 BC, when Antony married Octavian's sister, Octavia. Despite this marriage, Antony carried on a love affair with Cleopatra, who bore him three children, further straining Antony's relations with Octavian. Lepidus was expelled from the association in 36 BC, and in 33 BC disagreements between Antony and Octavian caused a split between the remaining Triumvirs. Their ongoing hostility erupted into civil war in 31 BC, as the Roman Senate, at Octavian's direction, declared war on Cleopatra and proclaimed Antony a traitor. That year, Antony was defeated by Octavian's forces at the Battle of Actium. Antony and Cleopatra fled to Egypt, where they committed suicide. With Antony dead, Octavian became the undisputed master of the Roman world. In 27 BC, Octavian was granted the title of Augustus, marking the final stage in the transformation of the Roman Republic into an empire, with himself as the first Roman emperor. A member of the plebeian Antonia gens, Antony was born in Rome on 14 January 83 BC.
His father and namesake was Marcus Antonius Creticus, son of the noted orator by the same name, who was murdered during the Marian Terror of the winter of 87–86 BC. His mother was a distant cousin of Julius Caesar. Antony was an infant at the time of Lucius Cornelius Sulla's march on Rome in 82 BC. According to the Roman orator Marcus Tullius Cicero, Antony's father was incompetent and corrupt, and was only given power because he was incapable of using or abusing it effectively. In 74 BC he was given military command to defeat the pirates of the Mediterranean, but he died in Crete in 71 BC without making any significant progress; the elder Antony's death left Antony and his brothers, Lucius and Gaius, in the care of their mother, who married Publius Cornelius Lentulus Sura, an eminent member of the old Patrician nobility. Lentulus, despite exploiting his political success for financial gain, was in debt due to the extravagance of his lifestyle; he was a major figure in the Second Catilinarian Conspiracy and was summarily executed on the orders of the Consul Cicero in 63 BC for his involvement.
Antony's early life was characterized by a lack of proper parental guidance. According to the historian Plutarch, he spent his teenage years wandering through Rome with his brothers and friends, gambling and becoming involved in scandalous love affairs. Antony's contemporary and enemy Cicero claimed he had a homosexual relationship with Gaius Scribonius Curio. There is little reliable information on his political activity as a young man, although it is known that he was an associate of Publius Clodius Pulcher and his street gang; he may have been involved in the Lupercal cult, as he was referred to as a priest of this order later in life. By age twenty, Antony had amassed an enormous debt. Hoping to escape his creditors, Antony fled to Greece in 58 BC, where he studied philosophy and rhetoric at Athens. In 57 BC, Antony joined the military staff of Aulus Gabinius, the Proconsul of Syria, as chief of the cavalry; this appointment marks the beginning of his military career. As Consul the previous year, Gabinius had consented to the exile of Cicero by Antony's mentor, Publius Clodius Pulcher.
Hyrcanus II, the Roman-supported Hasmonean High Priest of Judea, fled Jerusalem to Gabinius to seek protection against his rival and son-in-law Alexander. Years earlier in 63 BC, the Roman general Pompey had captured him and his father, King Aristobulus II, during his war against the remnant of the Seleucid Empire. Pompey had deposed Aristobulus and installed Hyrcanus as Rome's client ruler over Judea. Antony achieved his first military distinctions after securing important victories at Alexandrium and Machaerus. With the rebellion defeated by 56 BC, Gabinius restored Hyrcanus to his position as High Priest in Judea; the following year, in 55 BC, Gabinius intervened in the political affairs of Ptolemaic Egypt. Pharaoh Ptolemy XII Auletes had been deposed in a rebellion led by his daughter Berenice IV in 58 BC, forcing him to seek asylum in Rome. During Pompey's conquests years earlier, Ptolemy had received the support of Pompey, who named him an ally of Rome. Gabinius' invasion sought to restore Ptolemy to his throne.
This was done against the orders of the Senate but with the approval of Pompey, then Rome's leading politician, and only after the deposed king provided a 10,000-talent bribe. The Greek historian Plutarch records that it was Antony who convinced Gabinius to act. After defeating the frontier forces of the Egyptian kingdom, Gabinius's army proceeded to attack the palace guards, but they surrendered before a battle commenced.
Cleopatra Selene II
Cleopatra Selene II, also known as Cleopatra VIII of Egypt, was a Ptolemaic princess and the only daughter of the Greek Ptolemaic queen Cleopatra VII of Egypt and the Roman triumvir Mark Antony. She was the fraternal twin of the Ptolemaic prince Alexander Helios; her second name, Selene, is ancient Greek for "moon" and refers to the Titaness-goddess of the Moon, the counterpart of her twin brother's second name, Helios, meaning "sun" and referring to the Titan-god of the Sun. Cleopatra was born and educated in Alexandria, Egypt. In 36 BC in the Donations of Antioch and in late 34 BC during the Donations of Alexandria, she was made ruler of Cyrenaica and Libya. After the defeat of Antony and Cleopatra at Actium and their suicides in Egypt in 30 BC, Cleopatra Selene was brought to Rome and placed in the household of Octavian's sister Octavia the Younger. Cleopatra Selene was married to Juba II of Numidia and Mauretania and they produced a son and successor, Ptolemy of Mauretania. Cleopatra Selene had two full brothers, her twin Alexander Helios and the younger Ptolemy Philadelphos.
Her older half-brother, Caesarion, was the son of her mother and her mother's first partner, Julius Caesar. Cleopatra most likely planned for her only daughter to marry her eldest son, Caesarion. Her father had five other children with his previous wives. Her parents, Mark Antony and Cleopatra VII, were defeated by Octavian during a naval battle at Actium, Greece, in 31 BC. In 30 BC, her parents committed suicide as Octavian and his army invaded Egypt. Octavian captured Cleopatra Selene and her brothers and took them from Egypt to Rome, parading them in heavy golden chains in his triumph; the chains were so heavy that the children were unable to walk in them, eliciting unexpected sympathy from many of the Roman onlookers. Octavian gave the siblings to his elder sister Octavia Minor to be raised in her household in Rome. Between 26 and 20 BC, Augustus arranged for Cleopatra to marry King Juba II of Numidia in Rome; the Emperor Augustus gave Cleopatra a huge dowry as a wedding present, and she became an ally of Rome. By then, her brothers, Alexander Helios and Ptolemy Philadelphus, had disappeared from all known historical records and are presumed to have died from illness or assassination.
When Cleopatra married Juba, she was the only surviving member of the Ptolemaic dynasty. Juba and Cleopatra could not return to Numidia as it had been made a Roman province in 46 BC; the couple were sent to an unorganized territory that needed Roman supervision. They renamed their new capital Caesarea, in honor of the Emperor. Cleopatra is said to have exercised great influence on policies. Through her influence, the Mauretanian Kingdom flourished. Mauretania traded well throughout the Mediterranean. The construction and sculptural projects at Caesarea and at another city, Volubilis, display a rich mixture of Ancient Egyptian and Roman architectural styles. The children of Cleopatra and Juba were: Ptolemy of Mauretania, born in 10 BC, and a daughter whose name has not been recorded but who is mentioned in an inscription. It has been suggested that Drusilla of Mauretania was a daughter, but she may have been a granddaughter instead. Drusilla is described as a granddaughter of Antony and Cleopatra, but she may have been a daughter of Ptolemy of Mauretania.
Zenobia of Palmyra, Queen of Syria, claimed descent from Cleopatra. Controversy surrounds Cleopatra's exact date of death. A discovered hoard of Cleopatra's coins has been dated to 17 AD, later than she has traditionally been believed to have died. To explain this strange marital problem, historians have supposed some sort of rift between Cleopatra and Juba, mended after Juba's divorce from Glaphyra. Modern historians dispute the idea that Juba, a Romanized king, would have taken a second wife; the argument goes that if Juba married Glaphyra before 4 AD, his first wife must have been dead. The following epigram by the Greek epigrammatist Crinagoras of Mytilene is considered to be Cleopatra's eulogy: "The moon herself grew dark, rising at sunset, / Covering her suffering in the night, / Because she saw her beautiful namesake, / Breathless, descending to Hades, / With her she had had the beauty of her light in common, / And mingled her own darkness with her death." If this poem is not literary license, astronomical correlation can be used to help pinpoint the date of Cleopatra's death.
Lunar eclipses occurred in 9, 8, 5 and 1 BC and in AD 3, 7, 10, 11 and 14. The event in 5 BC most resembles the description given in the eulogy, but the date of her death is not ascertainable with any certainty. Zahi Hawass, former Director of Egyptian Antiquities, believes Cleopatra died in AD 8. When Cleopatra died, she was placed in the Royal Mausoleum of Mauretania in modern Algeria, built by her and Juba east of Caesarea and still visible. A fragmentary inscription at the site is dedicated to the King and Queen of Mauretania. Their human remains have not been found at the site, either due to tomb raids that occurred at an uncertain time or because the structure was meant to serve as a memorial and not an actual place of burial. Cleopatra is mentioned in the novels
Juba I of Numidia
Juba I of Numidia was a king of Numidia. He was the son of and successor to Hiempsal II. Juba I was the father of Juba II, King of Numidia and Mauretania; father-in-law of Juba II's wives, the Greek Ptolemaic princess Cleopatra Selene II and the Cappadocian princess Glaphyra; and paternal grandfather of King Ptolemy of Mauretania and the princess Drusilla of Mauretania the Elder. In 81 BC Hiempsal had been driven from his throne and was soon restored with the help of Pompey, which made the Numidian royal house allies of Pompey; this alliance was strengthened during a visit by Juba to Rome, when Julius Caesar insulted him by pulling on his beard during a trial in which Caesar was defending his client against Juba's father, and still further in 50 BC, when the tribune Gaius Scribonius Curio proposed that Numidia should be sold privately. In August 49 BC, Caesar sent Curio to take Africa from the Republicans. Curio held Publius Attius Varus, the governor of Africa, in low esteem, and therefore took fewer legions than he could have. In the Battle of the Bagradas the same year, Curio led his army in a bold, uphill attack which swiftly routed Varus's army and wounded Varus.
Encouraged by this success, Curio acted on what proved to be faulty intelligence and attacked what he believed to be a detachment of Juba's army. In fact, the bulk of the king's forces were there and, after an initial success, Curio's forces were ambushed and annihilated by Saburra. Curio died in the fighting. Only a few escaped on their ships; King Juba took several senators captive back to Numidia for display and execution. With the arrival of Caesar in Africa, Juba planned to join Quintus Caecilius Metellus Pius Scipio Nasica, but his kingdom was invaded from the west by Caesar's ally Bocchus II and an Italian adventurer, Publius Sittius, so he marched home to save his country. Scipio knew he could not fight without more troops and sent a desperate message to Juba for assistance. Juba left the command of his kingdom's defence with Saburra and joined Scipio with three legions, around 15,000 light infantry, 1,000 cavalry and 30 elephants for the Battle of Thapsus. However, he camped away from Scipio's main lines.
Seeing the certain defeat of Scipio's army, Juba did not take part in the battle and fled with his 30,000 men. Having fled with the Roman general Marcus Petreius and finding their retreat cut off, they made a suicide pact and engaged in one-on-one combat, the idea being to die an honourable death rather than be captured. Sources vary on the outcome, but it is most likely that Petreius killed Juba and then committed suicide with the assistance of a slave. The genus of the endangered Chilean wine palm, Jubaea, is named for him.
Algeria, officially the People's Democratic Republic of Algeria, is a country in the Maghreb region of North Africa. The capital and most populous city is Algiers, located in the far north of the country on the Mediterranean coast. With an area of 2,381,741 square kilometres, Algeria is the tenth-largest country in the world, the world's largest Arab country, and the largest in Africa. Algeria is bordered to the northeast by Tunisia, to the east by Libya, to the west by Morocco, to the southwest by the Western Saharan territory and Mali, to the southeast by Niger, and to the north by the Mediterranean Sea. The country is a semi-presidential republic consisting of 1,541 communes. It has the highest human development index of all non-island African countries. Ancient Algeria has known many empires and dynasties, including ancient Numidians, Carthaginians, Vandals, Umayyads, Idrisids, Rustamids, Zirids, Almoravids, Spaniards and the French colonial empire. Berbers are the indigenous inhabitants of Algeria. Algeria is a middle power.
It supplies large amounts of natural gas to Europe, and energy exports are the backbone of the economy. According to OPEC, Algeria has the 16th largest oil reserves in the world and the second largest in Africa, while it has the 9th largest reserves of natural gas. Sonatrach, the national oil company, is the largest company in Africa. Algeria has one of the largest defence budgets on the continent. Algeria is a member of the African Union, the Arab League, OPEC and the United Nations, and is a founding member of the Arab Maghreb Union. On 2 April 2019, President Abdelaziz Bouteflika resigned after nearly 20 years in power, following pressure from the country's army after mass protests against Bouteflika's campaign for a fifth term. The country's name derives from the city of Algiers. The city's name in turn derives from the Arabic al-Jazā'ir, a truncated form of the older Jazā'ir Banī Mazghanna, employed by medieval geographers such as al-Idrisi. In the region of Ain Hanech, early remnants of hominid occupation in North Africa were found.
Neanderthal tool makers produced hand axes in the Levalloisian and Mousterian styles similar to those in the Levant. Algeria was the site of the highest state of development of Middle Paleolithic Flake tool techniques. Tools of this era, starting about 30,000 BC, are called Aterian; the earliest blade industries in North Africa are called Iberomaurusian. This industry appears to have spread throughout the coastal regions of the Maghreb between 15,000 and 10,000 BC. Neolithic civilization developed in the Saharan and Mediterranean Maghreb as early as 11,000 BC or as late as between 6000 and 2000 BC; this life, richly depicted in the Tassili n'Ajjer paintings, predominated in Algeria until the classical period. The mixture of peoples of North Africa coalesced into a distinct native population that came to be called Berbers, who are the indigenous peoples of northern Africa. From their principal center of power at Carthage, the Carthaginians expanded and established small settlements along the North African coast.
These settlements served as market towns as well as anchorages. As Carthaginian power grew, its impact on the indigenous population increased dramatically. Berber civilization was at a stage in which agriculture, manufacturing and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. By the early 4th century BC, Berbers formed the single largest element of the Carthaginian army. In the Revolt of the Mercenaries, Berber soldiers rebelled from 241 to 238 BC after being unpaid following the defeat of Carthage in the First Punic War, they succeeded in obtaining control of much of Carthage's North African territory, they minted coins bearing the name Libyan, used in Greek to describe natives of North Africa. The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars.
In 146 BC the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. Two of them were established behind the coastal areas controlled by Carthage. West of Numidia lay Mauretania, which extended across the Moulouya River in modern-day Morocco to the Atlantic Ocean. The high point of Berber civilization, unequaled until the coming of the Almohads and Almoravids more than a millennium later, was reached during the reign of Masinissa in the 2nd century BC. After Masinissa's death in 148 BC, the Berber kingdoms were divided and reunited several times. Masinissa's line survived until 24 AD, when the remaining Berber territory was annexed to the Roman Empire. For several centuries Algeria was ruled by the Romans. Like the rest of No
The Julio-Claudian dynasty was the first Roman imperial dynasty, consisting of the first five emperors—Augustus, Tiberius, Caligula, Claudius and Nero—or the family to which they belonged. They ruled the Roman Empire from its formation under Augustus in 27 BC until AD 68, when the last of the line, Nero, committed suicide. The name "Julio-Claudian dynasty" is a historiographical term derived from the two main branches of the imperial family: the gens Julia and the gens Claudia. Primogeniture is notably absent in the history of the Julio-Claudian dynasty. Neither Augustus, Caligula, nor Nero fathered a natural and legitimate son. Tiberius' own son, Drusus, predeceased him. Only Claudius was outlived by his son, although he opted to promote his adopted son Nero as his successor to the throne. Adoption became a tool that most Julio-Claudian emperors utilized in order to promote their chosen heir to the front of the succession. Augustus—himself an adopted son of his great-uncle, the Roman dictator Julius Caesar—adopted his stepson Tiberius as his son and heir.
Tiberius was, in turn, required to adopt his nephew Germanicus, the father of Caligula and brother of Claudius. Caligula adopted his cousin Tiberius Gemellus shortly before executing him. Claudius adopted his great-nephew and stepson Nero, lacking a natural or adopted son of his own, ended the reign of the Julio-Claudian dynasty with his fall from power and subsequent suicide; the ancient historians who dealt with the Julio-Claudian period—chiefly Suetonius and Tacitus —write in negative terms about their reign. In Tacitus's historiography of the Julio-Claudian emperors, he states: But the successes and reverses of the old Roman people have been recorded by famous historians; the histories of Tiberius, Gaius and Nero, while they were in power, were falsified through terror, after their death were written under the irritation of a recent hatred. Julius and Claudius were two Roman family names. Roman family names were inherited from father to son, but a Roman aristocrat could – either during his life or in his will – adopt an heir if he lacked a natural son.
In accordance with Roman naming conventions, the adopted son would replace his original family name with the name of his adopted family. A famous example of this custom is Julius Caesar's adoption of Gaius Octavius. Augustus, as Caesar's adopted son and heir, discarded the family name of his natural father and renamed himself "Gaius Julius Caesar" after his adoptive father, it was customary for the adopted son to acknowledge his original family by adding an extra name at the end of his new name. As such, Augustus' adopted name would have been "Gaius Julius Caesar Octavianus". However, there is no evidence that he used the name Octavianus. Following Augustus' ascension as the first emperor of the Roman Empire in 27 BC, his family became a de facto royal house, known in historiography as the "Julio-Claudian dynasty". For various reasons, the Julio-Claudians followed in the example of Julius Caesar and Augustus by utilizing adoption as a tool for dynastic succession; the next four emperors were related through a combination of blood relation and adoption.
Tiberius, a Claudian by birth, became Augustus' stepson after the latter's marriage to Livia, who divorced Tiberius' natural father in the process. Tiberius' connection to the Julian side of the Imperial family grew closer when he married Augustus' only daughter, Julia the Elder, he succeeded Augustus as emperor in AD 14 after becoming his stepfather's adopted son and heir. Caligula was born into the Julian and Claudian branches of the Imperial family, thereby making him the first actual "Julio-Claudian" emperor, his father, was the son of Nero Claudius Drusus and Antonia Minor, the son of Livia and the daughter of Octavia Minor respectively. Germanicus was a great-nephew of Augustus on his mother's side and nephew of Tiberius on his father's side, his wife, Agrippina the Elder, was a granddaughter of Augustus. Through Agrippina, Germanicus' children – including Caligula – were Augustus' great-grandchildren; when Augustus adopted Tiberius, the latter was required to adopt his brother's eldest son as well, thus allowing Germanicus' side of the Imperial family to inherit the Julius nomen.
Claudius, the younger brother of Germanicus, was a Claudian on the side of his father, Nero Claudius Drusus, younger brother of Tiberius. However, he was related to the Julian branch of the Imperial family through his mother, Antonia Minor; as a son of Antonia, Claudius was a great-nephew of Augustus. Moreover, he was Augustus' step-grandson due to the fact that his father was a stepson of Augustus. Unlike Tiberius and Germanicus, both of whom were born as Claudians and became adopted Julians, Claudius was not adopted into the Julian family. Upon becoming emperor, however, he added the Julian-affiliated cognomen Caesar to his full name. Nero was a great-great-grandson of Augustus and Livia through Agrippina the Younger; the younger Agrippina was a daughter of Germanicus and Agrippina the Elder, as well as Caligula's sister. Through his mother, Nero was related by blood to the Julian and Claudian branches of the Imperial family. However, he was born into the Domitii Ahenobarbi on his father's side.
Nero became a Claudian in name a
Graphing Rational Functions
Lesson 8 of 12
Objective: SWBAT find equations of asymptotes and graph rational functions.
Start class by reviewing what rational functions are. Give students the two questions below. After discussing with their tables for about three minutes, have a class discussion about these.
1. What is a rational function?
2. What are some functions that are NOT rational functions?
I chose these two questions because I find that students don't usually have a strong grasp of rational functions like they do for quadratic or exponential functions, for example. Part of the reason for this lack of understanding is that many students have trouble coming up with a concrete definition for rational functions. For the last two days we have been working with rational functions without calling them rational functions (besides the second half of yesterday's class), so I want to cement their conceptual understanding of rational functions with precise mathematical language.
The main aspect of rational functions that I want to get out of them is that they involve division of polynomials. The division is vital to these functions and is the reason why rational functions have asymptotes when they are graphed.
The first rational function from the worksheet that we are going to graph is f(x) = x/(x^2-x-2). I want to do this first example as a class; I find that many students will start plugging in random points and I want to establish the structure of how we graph rational functions right away. The ordering of the questions on the worksheet presents the method that I encourage my students to use when graphing a rational function.
We can use our work from the last two days to decide if the rational function is similar to one in the Ultramarathon, Gummy Bear, or Homecoming examples. Students should also be able to identify the asymptotes for this first function. Give them a few minutes to think about letters a) and b) on the worksheet.
After the asymptotes have been added to the graph, we want to think about what points are going to be important to the graph. You can ask students what they think the "critical points" of the graph are going to be. They will likely know that the x-intercepts and the points where the function is undefined are really important. Generalize this to show them that the critical points are where either the numerator or the denominator is equal to zero.
After adding the intercepts and asymptotes to the graph, ask students how they can figure out the rest of the function. Students will probably say that they should use their graphing calculator or plug in random points. Let them know that these are both good strategies, but really we are looking to see what will happen between each critical point. Remind them that we are not going to plug in a lot of points, as that would be too time-consuming. We just need a few to get a general idea of the function. At this point you can show them how to use a sign diagram (the critical points on a number line) to test every interval of the function. Then, they will have enough to sketch the rest of the graph.
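If you want a quick way to generate the sign diagram for this first example, the short Python sketch below (my own illustration, not part of the lesson materials) evaluates f(x) = x/(x^2 - x - 2) at one test point inside each interval determined by the critical points x = -1, 0, and 2:

def f(x):
    return x / (x**2 - x - 2)

critical_points = [-1, 0, 2]    # zeros of the numerator and denominator
test_points = [-2, -0.5, 1, 3]  # one point inside each interval
for x in test_points:
    sign = "+" if f(x) > 0 else "-"
    print("f(" + str(x) + ") is " + sign)

The signs come out -, +, -, +, which tells students where the graph lies below or above the x-axis on each interval.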
Summarize and Extend
In the video below I comment on the process of taking our rational function in the Launch section and summarizing those steps into an algorithm that would allow us to graph any rational function. The structure is really important as it is something that we can use for any of the rational functions we encounter.
After we summarize to create an algorithm, we can test it with the remaining examples on the worksheet. |
Python programming language comes with in-built data types like list, dictionary, set, tuple, etc. Range in python is another in-built python datatype which is mainly used with loops in python. It returns a sequence of numbers specified in the function arguments. In this article, we will learn about the range in python in detail with various examples. Following are the topics covered in this blog:
It is an in-built function in Python which returns a sequence of numbers, by default starting from 0 and incrementing by 1, until it reaches (but does not include) a specified number. The most common use of the range function is to iterate over a sequence of numbers. It is most commonly used in for and while loops.
Following are the range function parameters that we use in python:
range(start, stop, step)
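The start and step arguments are optional, so range can be called with one, two, or three arguments. A small illustration of the three forms (standard Python behaviour):

print(list(range(5)))        # start defaults to 0, step to 1: [0, 1, 2, 3, 4]
print(list(range(2, 5)))     # step defaults to 1: [2, 3, 4]
print(list(range(2, 10, 3))) # start, stop and step all given: [2, 5, 8]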
Below is an example of how we can use range function in a for loop. This program will print the even numbers starting from 2 until 20.
for i in range(2,20,2): print(i)
Output: 2 4 6 8 10 12 14 16 18
We can use range in python with both positive and negative step values. The following program shows how we can get the sequence of numbers in both orders using positive and negative step values.
for i in range(2, 20, 5):
    print(i, end=", ")
for j in range(25, 0, -5):
    print(j, end=", ")
Output: 2, 7, 12, 17, 25, 20, 15, 10, 5
The range function does not support float or non-integer numbers in the function but there are ways to get around this and still get a sequence with floating-point values. The following program shows an approach that we can follow to use float in range.
def frange(start, stop, step):
    i = start
    while i < stop:
        yield i
        i += step

for i in frange(0.6, 1.0, 0.1):
    print(i, end=",")
Output: 0.6, 0.7, 0.8, 0.9 — at least in principle. In practice, floating-point accumulation can introduce small rounding errors (values such as 0.7999999999999999) and can even let an extra value just below 1.0 slip in, so it is safer to round inside the loop, for example with i = round(i + step, 10).
The following program shows how we can reverse a range in python. It will print the first 5 natural numbers in reverse order.
for i in range(5, 0, -1): print(i, end=", ")
Output: 5, 4, 3, 2, 1 (the stop value 0 is not included)
In the program below, there is a concatenation between two range functions.
from itertools import chain

res = chain(range(10), range(10, 15))
for i in res:
    print(i, end=", ")
Output: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
The following program shows how we can access range using indexes.
a = range(0, 10)[3]
b = range(0, 10)[5]
print(a)
print(b)
Output: 3 5
The following program shows how we can simply convert the range to list using type conversion.
a = range(0, 10)
b = list(a)
c = list(range(0, 5))
print(b)
print(c)
Output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] [0, 1, 2, 3, 4]
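A range object also supports a few useful operations directly, without converting it to a list first (standard Python behaviour, shown here for illustration):

r = range(0, 10)
print(len(r))        # 10
print(7 in r)        # True
print(r[2:5])        # slicing a range gives another range: range(2, 5)
print(list(r[2:5]))  # [2, 3, 4]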
This brings us to the end of this article, where we have learned how we can use range in python with several examples, including using range in a for loop, stepping forwards and backwards, reversing and concatenating ranges, indexing a range, and converting a range to a list. I hope you are clear with all that has been shared with you in this tutorial.
If you found this article on “Range In Python” relevant, check out the Edureka Python Certification Training, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.
We are here to help you with every step on your journey and come up with a curriculum that is designed for students and professionals who want to be a Python developer. The course is designed to give you a head start into Python programming and train you for both core and advanced Python concepts along with various Python frameworks like Django.
If you come across any questions, feel free to ask all your questions in the comments section of “Range In Python” and our team will be glad to answer. |
Students should recognize that some types of charts are more appropriate than others, depending on the nature of the data or the message the author is trying to convey.
Teaching Tip (Reading Strategies): Students can read individually, in partners, or as a whole class. The guide is not particularly long, but you'll want all students to have had a chance to look through those pages before the discussion. Let students know ahead of time that you'll be discussing the reading, and ask them to pick one or two key points as they are going through.
Make a table of good vs. bad visualization characteristics. Prompts: Following the principles of good data visualization, which one would we say is better? What makes the good one good and the bad one bad? As students respond, steer the discussion toward generating general characteristics of good and bad visualizations. Make a simple chart that everyone can see, something like this:
- Good: simple, easy to read, a basic graph that makes a simple point, etc.
- Bad: complicated, confusing colors, too much text, etc.
Wrap-up (15 mins): Data Visualization 101 discussion. Remarks: We're going to be making some of our own visualizations of data very soon. To help us do that, we're going to look at some helpful tips for effectively communicating with data visualization. Distribute: Data Visualization 101: How to design charts and graphs - Link. Students should read the first 4 pages of this document. Discuss: What are the key take-aways from this guide? Some key ideas that should come up:
- Choosing the right way to visualize data is essential to communicating your ideas.
- There are stories in data; visualization helps you tell them.
- Before understanding visualizations, you must understand the types of data that can be visualized and their relationships to each other.
- Certain chart types are right for certain situations, depending on the data.
Remarks: The Data Visualization 101 guide is a resource for you (students). The rest of the guide goes into some specifics of different chart types. You should keep this guide at your side as you review data visualizations, and when you develop your own in the future.
Further Discussion Points:
- What else did we learn about data visualization today?
- What are the benefits of visualizing data?
- Can we characterize common mistakes in visualizations to which we gave low ratings?
- Can we characterize common strengths in effective visualizations?
- Not all visualizations were charts; what other types are there?
- As you embark on making your own visualization, what do you want to keep in mind so that you can avoid rookie mistakes?
Assessment Possibilities
Assessment Idea: show students a visualization and have them analyze it, using the table of characteristics of good/bad visualizations to justify their opinion. |
Let me tell you about the resistor color code and how it works. We see a lot of resistors in many electronic circuits. Do you know how to use them and how they work? I think the resistor is a very important electronic device; if you do not have them, your project may not work.
What is a resistor
What does a resistor do
It will resist or limit the current flowing through it. The amount of this opposition is called the resistance.
The more resistance, the less current flows in the circuit. If this is hard to imagine, think of the current as water flowing in a pipe, as shown below.
- The left resistor has high resistance, so only a small current can flow.
- The right resistor, on the other hand, has low resistance inside, so more current can flow through it.
Many ways to use them
Resistors resist the flow of electricity, and we can use them in many ways. For example,
- Using the resistor in a series circuit.
- They are very useful for reducing the current to an LED, which can be damaged by too much current.
- Resistors are often used to divide a voltage into a smaller voltage.
- Resistors are used to increase the time required to charge capacitors and to speed up the discharge of capacitors.
- They are also used to control the gain of amplifiers.
Learn the resistor
When my son started learning basic electronics, I wanted him to learn about resistors first.
Today we will learn about resistors:
- What do they look like? He said they look like a worm.
- Learn to use the meter to measure their value. I have not yet taught him to read the color code, because it is too difficult for him. He uses a digital ohmmeter to measure the resistors instead; it is very easy to read the resistance that way.
Then, he learns the basics of a resistor: its name, symbol, and value in decimal units.
What to expect him to get
- Drawing skills
- Using a digital multimeter to measure resistors (how to use an ohmmeter).
Learn math in Electronics
He does not yet understand the math (decimal points), but it is important in electronics, and he can come to understand it in the future.
Resistor color code
When we look at a resistor, we notice several color bands on it. These colors show its resistance, which we measure in ohms. When we write it, we use the omega symbol, Ω.
1Ω is quite small so resistor values are often given in kΩ and MΩ.
10kΩ = 10,000Ω
100kΩ = 100,000Ω
Why use color codes? Because resistors are small components, it is difficult to print readable values on them, so the values are color-coded instead.
Resistor color code calculator
To begin with, look at the block diagram below. Most resistors today have 4 color bands. When I was a young boy, I used to see three-band resistors in AM receiver radios.
Suppose we have a resistor with the color code brown, black, red, and gold. What is its resistance?
Next, read the color bands from left to right. The first band gives the first digit of the resistance; brown means the first digit is "1".
Then, the second band is the second digit. It is black for “0”.
Now we have the first two digits: 10.
Look at the third color band: it is red, so the multiplier is 100.
So, the resistance is 10 x 100Ω or 1kΩ
The fourth color band gives the tolerance, the possible deviation from the marked resistance. It is gold, so the tolerance is ±5 percent, which is ±50Ω here.
Thus, in real use, its resistance is between 950Ω and 1050Ω.
If you see a resistor with a 10% tolerance band (silver), you can work it out in the same way.
Look at the resistor again. You will notice two groups of color bands. The first group is the tolerance, which is one band only, often gold. The second group of three bands gives the resistance value.
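If you would like the computer to do the lookup for you, here is a small Python sketch of a 4-band decoder (the dictionaries and the decode function are my own illustration, not a standard library):

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "purple": 7, "grey": 8, "white": 9}
MULTIPLIERS = {"silver": 0.01, "gold": 0.1, "black": 1, "brown": 10, "red": 100,
               "orange": 1000, "yellow": 10000, "green": 100000, "blue": 1000000}
TOLERANCES = {"gold": 0.05, "silver": 0.10}

def decode(band1, band2, band3, band4):
    # first two bands give the digits, third the multiplier, fourth the tolerance
    value = (DIGITS[band1] * 10 + DIGITS[band2]) * MULTIPLIERS[band3]
    tolerance = TOLERANCES[band4]
    return value, value * (1 - tolerance), value * (1 + tolerance)

value, low, high = decode("brown", "black", "red", "gold")
print(value, low, high)  # 1000 950.0 1050.0

For the brown-black-red-gold resistor above this prints 1000Ω (1kΩ) with a range of 950Ω to 1050Ω, matching the hand calculation.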
How to read the resistor color code without calculating
You may not understand how to use it yet. The more you use it, the more you will understand!
For example, you have 6 resistors. The last color band of each one is Gold, so the tolerance is 5%. What is each resistance?
1. The first resistor,—Brown, Black, Red, and Gold. First, Brown and Black are 10. Second, look at the third color band is red. We put a dot between the first color and second color band. It is 1.0. Then we put a kΩ as its unit. Now we have 1.0kΩ or 1kΩ.
2. The second resistor,—Yellow, Purple, Red, and Gold. First, yellow and purple are 47. Then, we put the dot between the first and second color. It is 4.7. Next, we put the kΩ. So its resistance is 4.7kΩ.
3. Third resistor,—Orange, Orange, Orange, Gold. First, Orange and Orange are 33. Then, because the third band is orange, we put a kΩ behind the second digit as the unit. So its resistance is 33kΩ.
4. Fourth resistor,—Red, Red, Yellow, Gold. First, Red and Red are 22. Then, we put a 0 (zero) behind the second digit. It is 220. Next, put a kΩ behind the zero as the unit. So its resistance is 220kΩ.
5. Fifth resistor,—Brown, Black, Gold, Gold. First, Brown and Black are 10. When the third color band is the Gold, we put the dot between the first color and second color band. It is 1.0 and the unit is Ω. So, the resistance is 1.0Ω or 1Ω.
6. Sixth resistor,—Green, Black, Silver, and Gold. First, Green and Black are 50. When the third color band is Silver, we put the dot in front of the first digit. It is 0.50 and the unit is Ω. So, the resistance is 0.50Ω or 0.5Ω.
Standard resistor values 5% tolerance
The following are the standard resistor values available in carbon film with 5% tolerance.
I always try to make Electronics Learning Easy.
CBSE Class 9 Science Notes Chapter 3 Atoms And Molecules: Download PDF Here
Atoms and molecules are responsible for forming tiny sand particles, gargantuan black holes and everything in between. The atom is the most fundamental unit of matter, making up everything that we see around us. It is extremely small, typically measuring only about 0.1 to 0.5 nanometers across.
For Chapter Summary On Atoms And Molecules, Watch The Below Video:
Laws of Chemical Combination
- In a chemical reaction, two or more molecules interact to produce new compounds and are called reactants, whereas the newly formed compounds are called products.
- In a chemical reaction, a chemical change must occur, which is generally observed with physical changes like precipitation, heat production, colour change, etc.
Law of conservation of mass
- According to the law of conservation of mass, matter can neither be created nor destroyed in a chemical reaction. It remains conserved.
- Mass of reactants will be equal to the mass of products.
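For example, when 12 g of carbon burns completely in 32 g of oxygen, 44 g of carbon dioxide is formed: the mass of the product (44 g) equals the total mass of the reactants (12 g + 32 g = 44 g).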
Law of constant proportions
- The law of definite proportions states that a pure chemical compound always contains the same elements combined together in a fixed proportion by mass.
- For e.g., If we take water from a river or from an ocean, both have oxygen and hydrogen in the same proportion.
The elements are present in chemical compounds in a fixed mass ratio; this is the law of constant proportions, also known as Proust's law or the law of definite proportions. For instance, the hydrogen and oxygen in pure water are always present in the mass ratio 1:8.
An atom is the defining structure of an element; it cannot be broken down further by any chemical means.
The atomic symbol has three parts:
- The symbol X: the usual element symbol
- The atomic number Z: equal to the number of protons
- The mass number A: equal to the total number of protons and neutrons in the atom.
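As a small illustration of how Z and A relate to the particle counts in a neutral atom, here is a short Python sketch; carbon-12 is used as the example, and the helper name `particle_counts` is just an illustrative choice.

```python
# Relation between atomic number (Z), mass number (A) and particle counts,
# illustrated with carbon-12 (standard values, not taken from the notes).
def particle_counts(Z, A):
    """Return (protons, neutrons, electrons) for a neutral atom."""
    return Z, A - Z, Z

protons, neutrons, electrons = particle_counts(Z=6, A=12)  # carbon-12
print(protons, neutrons, electrons)  # 6 6 6
```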
The atomic radius is the distance between an atom’s nucleus and its outermost electron shell. In practice, it is calculated by measuring the distance between the nuclei of two identical atoms bonded together; half this distance is taken as the atomic radius.
Dalton’s Atomic Theory
According to Dalton’s atomic theory, all substances are made up of atoms, which are indestructible and indivisible building blocks. All the atoms of a given element have the same size and mass, while atoms of different elements differ in size and mass.
Dalton proposed that the concept of atoms could be used to explain the laws of conservation of mass and definite proportions. He proposed that atoms, which he described as “solid, massy, hard, impenetrable, moving particle(s),” are the smallest, indivisible units of matter.
- The matter is made up of indivisible particles known as atoms.
- The properties of all the atoms of a given element are the same, including mass. This can also be stated as all the atoms of an element have identical mass and chemical properties; atoms of different elements have different masses and chemical properties.
- Atoms of different elements combine in fixed ratios to form compounds.
- Atoms are neither created nor destroyed. The formation of new products (compounds) results from the rearrangement of existing atoms (reactants) in a chemical reaction.
- The relative number and kinds of atoms are constant in a given compound.
Atomic mass and atomic mass unit
- Atomic mass is the total mass of the protons, neutrons and electrons in an atom; for a group of atoms, it is the average of their masses.
- The mass of an atom is called its atomic mass.
- By international agreement, atomic mass is commonly expressed in terms of the unified atomic mass unit (amu, or u).
- One atomic mass unit is defined as 1/12 of the mass of a carbon-12 atom in its ground state.
The molecular mass of a substance is defined as the sum of the atomic masses of all the atoms present in one molecule of it.
- Molecular mass is obtained by multiplying the atomic mass of each element by the number of its atoms in the molecule and then adding these contributions for all the elements in the molecule, as the sketch below shows.
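A minimal Python sketch of this rule follows; the small table of rounded atomic masses and the dictionary-based way of writing a formula (e.g. H2O as {"H": 2, "O": 1}) are illustrative choices, not part of the notes.

```python
# Sketch of the rule above: multiply each element's atomic mass by its number
# of atoms in the molecule, then add up. Atomic masses are rounded values (in u).
ATOMIC_MASS = {"H": 1.0, "C": 12.0, "N": 14.0, "O": 16.0, "Na": 23.0, "Cl": 35.5}

def molecular_mass(composition):
    """composition maps element symbol to atom count, e.g. water = {"H": 2, "O": 1}."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

print(molecular_mass({"H": 2, "O": 1}))  # water H2O -> 18.0 u
print(molecular_mass({"C": 1, "O": 2}))  # carbon dioxide CO2 -> 44.0 u
```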
The smallest identifiable unit into which a pure substance may be divided while retaining its composition and chemical properties is a molecule, which is a collection of two or more atoms.
Molecules of elements
A molecule is a collection of two or more chemically bonded atoms, whether they are of the same element or of different elements.
For example, when two hydrogen atoms and one oxygen atom bond together, one water molecule (H2O) is formed.
Molecules of compounds
Compounds can be divided into two categories: salts and molecular compounds. In molecular compounds, covalent bonds hold the atoms together; in salts, ionic bonds hold them together. Every compound involves one of these two types of bonding.
A compound is a kind of molecule in which the atoms that join together are of different elements. O2, for instance, is a molecule but not a compound, because it consists of one oxygen atom bonded to another oxygen atom. NaCl, however, is a compound, since it is made up of two different kinds of atoms chemically bound together.
Mole concept & Avogadro Number
- A mole is the amount of a substance that contains a fixed number of entities (atoms, molecules, ions, etc.). One mole of any substance contains 6.022 × 10²³ particles.
- The mole concept is one of the most convenient ways of expressing the amounts of reactants and products in a reaction.
The value of Avogadro’s number is approximately 6.022 × 10²³. Avogadro’s number tells us the number of particles in 1 mole (or mol) of a substance; these particles may be atoms, molecules, ions or electrons.
A substance is anything that has mass and occupies space. The molar mass (molecular weight) is the mass in grams of one mole of a substance, numerically equal to the sum of the atomic masses of the atoms that make up one molecule. The unit of molar mass is grams per mole (g/mol).
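The following short Python sketch ties mass, molar mass and Avogadro's number together; the 36 g sample of water is an arbitrary illustrative figure, not one from the notes.

```python
# Moles and molecules in a 36 g sample of water (illustrative figure).
AVOGADRO = 6.022e23        # particles per mole
MOLAR_MASS_WATER = 18.0    # g/mol, from 2 x 1.0 (H) + 16.0 (O)

mass_in_grams = 36.0
moles = mass_in_grams / MOLAR_MASS_WATER   # 2.0 mol
molecules = moles * AVOGADRO               # about 1.2 x 10^24 molecules
print(moles, "mol,", molecules, "molecules")
```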
Molecules and Atomicity
A molecule is defined as the smallest unit of a compound that contains the chemical properties of the compound.
- The atomicity of an element is the number of atoms in one molecule of the element.
- For example, hydrogen, nitrogen, oxygen, chlorine, iodine and bromine all have two atoms in each of their molecules, so the atomicity of each of these elements is two.
Structure of atom
- Atom is made of three particles; electron, proton and neutron.
- The centre of the atom is called the nucleus. The nucleus contains almost the entire mass of the atom.
- Electrons in an atom are arranged in shells/orbitals.
Valence electrons are those electrons which are present in the outermost orbit of the atom.
- The capacity of an atom to lose, gain or share valence electrons in order to complete its octet determines the valency of the atom.
Writing Chemical Formulae
- When two or more elements chemically combine in a fixed ratio by mass, the obtained product is known as a compound.
- Compounds are substances consisting of two or more different types of elements in a fixed ratio of its atoms.
- An ion is defined as an atom or molecule which has gained or lost one or more of its valence electrons, giving it a net positive or negative charge.
- A negatively charged particle is called an anion, and a positively charged particle is called a cation.
Ionic compounds: chemical formula
In a chemical formula, each constituent element is identified by its chemical symbol, along with the relative number of atoms of that element. In an empirical formula, these ratios are written starting from a key element, and atom counts for the remaining elements in the compound are assigned in relation to that key element.
- Ionic compounds are chemical compounds in which ions are held together by specialised bonds called ionic bonds.
- An ionic compound always contains equal amounts of positive and negative charge.
- For example: In Calcium chloride, the ionic bond is formed by oppositely charged calcium and chloride ions.
The calcium atom loses 2 electrons and attains the electronic configuration of the nearest noble gas (Ar). By doing so, it acquires a net charge of +2.
The two chlorine atoms gain one electron each, thus acquiring a charge of -1 each, and attain the electronic configuration of the nearest noble gas (Ar).
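The usual shortcut for writing such formulae is the criss-cross method: the magnitude of each ion's charge becomes the subscript of the other ion, and the result is reduced to the simplest whole-number ratio. Here is a hedged Python sketch of that method; the small charge tables cover only a few ions, purely for illustration.

```python
from math import gcd

# Criss-cross method: each ion's charge magnitude becomes the subscript of the
# other ion, and the ratio is reduced to its simplest form.
CATION_CHARGE = {"Na": 1, "Ca": 2, "Al": 3}   # illustrative subset
ANION_CHARGE = {"Cl": 1, "O": 2}

def ionic_formula(cation, anion):
    c, a = CATION_CHARGE[cation], ANION_CHARGE[anion]
    divisor = gcd(c, a)
    n_cation, n_anion = a // divisor, c // divisor
    part = lambda symbol, n: symbol if n == 1 else f"{symbol}{n}"
    return part(cation, n_cation) + part(anion, n_anion)

print(ionic_formula("Ca", "Cl"))  # CaCl2, calcium chloride as above
print(ionic_formula("Al", "O"))   # Al2O3, aluminium oxide
print(ionic_formula("Na", "Cl"))  # NaCl, common salt
```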
Frequently Asked Questions on CBSE Class 9 Science Notes Chapter 3: Atoms and Molecules
What is the difference between atoms and molecules?
An atom is the most basic or smallest unit of a chemical element, whereas a molecule is composed of two or more atoms held together.
Who was Dalton?
John Dalton was an English chemist and physicist, best known for his atomic theory.
What is a mole?
The mole is a standard unit for measuring the amount of any substance.
The Anglo-Saxons were a cultural group that inhabited England in the Early Middle Ages. They traced their origins to settlers who came to Britain from mainland Europe in the 5th century. However, the ethnogenesis of the Anglo-Saxons happened within Britain, and the identity was not merely imported. Anglo-Saxon identity arose from interaction between incoming groups from several Germanic tribes, both amongst themselves, and with indigenous Britons. Many of the natives, over time, adopted Anglo-Saxon culture and language and were assimilated. The Anglo-Saxons established the concept, and the Kingdom, of England, and though the modern English language owes somewhat less than 26% of its words to their language, this includes the vast majority of words used in everyday speech.
Historically, the Anglo-Saxon period denotes the period in Britain between about 450 and 1066, after their initial settlement and up until the Norman Conquest. The early Anglo-Saxon period includes the creation of an English nation, with many of the aspects that survive today, including regional government of shires and hundreds. During this period, Christianity was established, and there was a flowering of literature and language. Charters and law were also established. The term Anglo-Saxon is popularly used for the language that was spoken and written by the Anglo-Saxons in England and southeastern Scotland from at least the mid-5th century until the mid-12th century. In scholarly use, it is more commonly called "Old English".
The history of the Anglo-Saxons is the history of a cultural identity. It developed from divergent groups in association with the people's adoption of Christianity and was integral to the founding of various kingdoms. Threatened by extended Danish Viking invasions and military occupation of eastern England, this identity was re-established; it dominated until after the Norman Conquest. Anglo-Saxon material culture can still be seen in architecture, dress styles, illuminated texts, metalwork and other art. Behind the symbolic nature of these cultural emblems, there are strong elements of tribal and lordship ties. The elite declared themselves kings who developed burhs (fortifications and fortified settlements), and identified their roles and peoples in Biblical terms. Above all, as archaeologist Helena Hamerow has observed, "local and extended kin groups remained...the essential unit of production throughout the Anglo-Saxon period." The effects persist, as a 2015 study found the genetic makeup of British populations today shows divisions of the tribal political units of the early Anglo-Saxon period.
The term Anglo-Saxon began to be used in the 8th century (in Latin and on the continent) to distinguish "Germanic" groups in Britain from those on the continent (Old Saxony and Anglia in Northern Germany).[a] Catherine Hills summarised the views of many modern scholars in her observation that attitudes towards Anglo-Saxons, and hence the interpretation of their culture and history, have been "more contingent on contemporary political and religious theology as on any kind of evidence."
The Old English ethnonym Angul-Seaxan comes from the Latin Angli-Saxones and became the name of the peoples the English monk Bede called Angli around 730 and the British monk Gildas called Saxones around 530. Anglo-Saxon is a term that was rarely used by Anglo-Saxons themselves. It is likely they identified as ængli, Seaxe or, more probably, a local or tribal name such as Mierce, Cantie, Gewisse, Westseaxe, or Norþanhymbre. After the Viking Age, an Anglo-Scandinavian identity developed in the Danelaw.
The term Angli Saxones seems to have first been used in mainland writing of the 8th century; Paul the Deacon uses it to distinguish the English Saxons from the mainland Saxons (Ealdseaxe, literally, 'old Saxons'). The name, therefore, seemed to mean "English" Saxons.
The Christian church seems to have used the word Angli; for example in the story of Pope Gregory I and his remark, "Non Angli sed angeli" (not English but angels). The terms ænglisc (the language) and Angelcynn (the people) were also used by West Saxon King Alfred to refer to the people; in doing so he was following established practice. Bede and Alcuin used gens Anglorum to refer to all the Anglo-Saxons: Bede referred to the people of the pre-Christian period as 'Saxons', but all becoming 'Angles' after accepting Christianity (in accordance with Pope Gregory I's use of the word Anglorum for the entire mission); Alcuin contrasted 'Saxons' with 'Angles', the former referring only to continental Saxons and the latter being associated with Britain. Bede's choice of terminology contrasted with the norm among his contemporaries, both Angle and Saxon, who collectively identified as 'Saxons' and their country as Saxonia. Aethelweard also followed Bede's usage, systematically editing all mentions of the word 'Saxon' to 'English'.
The first use of the term Anglo-Saxon amongst the insular sources is in the titles for Æthelstan around 924: Angelsaxonum Denorumque gloriosissimus rex (most glorious king of the Anglo-Saxons and of the Danes) and rex Angulsexna and Norþhymbra imperator paganorum gubernator Brittanorumque propugnator (king of the Anglo-Saxons and emperor of the Northumbrians, governor of the pagans, and defender of the Britons). At other times he uses the term rex Anglorum (king of the English), which presumably meant both Anglo-Saxons and Danes. Alfred used Anglosaxonum Rex. The term Engla cyningc (King of the English) is used by Æthelred. Cnut the Great, King of Denmark, England, and Norway, in 1021 was the first to refer to the land and not the people with this term: ealles Englalandes cyningc (King of all England). These titles express the sense that the Anglo-Saxons were a Christian people with a king anointed by God.
The indigenous Common Brittonic speakers referred to Anglo-Saxons as Saxones or possibly Saeson (the word Saeson is the modern Welsh word for 'English people'); the equivalent word in Scottish Gaelic is Sasannach and in the Irish language, Sasanach. Catherine Hills suggests that it is no accident "that the English call themselves by the name sanctified by the Church, as that of a people chosen by God, whereas their enemies use the name originally applied to piratical raiders".
Early Anglo-Saxon history (410–660)
The early Anglo-Saxon period covers the history of medieval Britain that starts from the end of Roman rule. It is a period widely known in European history as the Migration Period, also the Völkerwanderung ("migration of peoples" in German). This was a period of intensified human migration in Europe from about 375 to 800.[b] The migrants were Germanic tribes such as the Goths, Vandals, Angles, Saxons, Lombards, Suebi, Frisii, and Franks; they were later pushed westwards by the Huns, Avars, Slavs, Bulgars, and Alans. The migrants to Britain might also have included the Huns and Rugini.
Until AD 400, Roman Britain, the province of Britannia, was an integral, flourishing part of the Western Roman Empire, occasionally disturbed by internal rebellions or barbarian attacks, which were subdued or repelled by the large contingent of imperial troops stationed in the province. By 410, however, the imperial forces had been withdrawn to deal with crises in other parts of the empire, and the Romano-Britons were left to fend for themselves in what is called the post-Roman or "sub-Roman" period of the 5th century.
It is now widely accepted that the Anglo-Saxons were not just transplanted Germanic invaders and settlers from the Continent, but the outcome of insular interactions and changes.
Writing c. 540, Gildas mentions that sometime in the 5th century, a council of leaders in Britain agreed that some land in the east of southern Britain would be given to the Saxons on the basis of a treaty, a foedus, by which the Saxons would defend the Britons against attacks from the Picts and Scoti in exchange for food supplies. The most contemporaneous textual evidence is the Chronica Gallica of 452, which records for the year 441: "The British provinces, which to this time had suffered various defeats and misfortunes, are reduced to Saxon rule." This is an earlier date than that of 451 for the "coming of the Saxons" used by Bede in his Historia ecclesiastica gentis Anglorum, written around 731. It has been argued that Bede misinterpreted his (scanty) sources and that the chronological references in the Historia Britonnum yield a plausible date of around 428.
Gildas recounts how a war broke out between the Saxons and the local population – historian Nick Higham calls it the "War of the Saxon Federates" – which ended shortly after the siege at 'Mons Badonicus'. The Saxons went back to "their eastern home". Gildas calls the peace a "grievous divorce with the barbarians". The price of peace, Higham argues, was a better treaty for the Saxons, giving them the ability to receive tribute from people across the lowlands of Britain. The archaeological evidence agrees with this earlier timescale. In particular, the work of Catherine Hills and Sam Lucy on the evidence of Spong Hill has moved the chronology for the settlement earlier than 450, with a significant number of items now in phases before Bede's date.
This vision of the Anglo-Saxons exercising extensive political and military power at an early date remains contested. The most developed vision of a continuation in sub-Roman Britain, with control over its own political and military destiny for well over a century, is that of Kenneth Dark, who suggests that the sub-Roman elite survived in culture, politics and military power up to c. 570. Bede, however, identifies three phases of settlement: an exploration phase, when mercenaries came to protect the resident population; a migration phase, which was substantial as implied by the statement that Angulus was deserted; and an establishment phase, in which Anglo-Saxons started to control areas, implied in Bede's statement about the origins of the tribes.
Scholars have not reached a consensus on the number of migrants who entered Britain in this period. Härke argues that the figure is around 100,000 to 200,000. Bryan Ward-Perkins also argues for up to 200,000 incomers. Catherine Hills suggests the number is nearer to 20,000. A computer simulation showed that a migration of 250,000 people from mainland Europe could have been accomplished in as little as 38 years. Recent genetic and isotope studies have suggested that the migration, which included both men and women, continued over several centuries, possibly allowing for significantly more new arrivals than has been previously thought. By around 500, communities of Anglo-Saxons were established in southern and eastern Britain.
Härke and Michael Wood estimate that the British population in the area that eventually became Anglo-Saxon England was around one million by the start of the fifth century; however, what happened to the Britons has been debated. The traditional explanation for their archaeological and linguistic invisibility is that the Anglo-Saxons either killed them or drove them to the mountainous fringes of Britain, a view broadly supported by the few available sources from the period. However, there is evidence of continuity in the systems of landscape and local governance, decreasing the likelihood of such a cataclysmic event, at least in parts of England. Thus, scholars have suggested other, less violent explanations by which the culture of the Anglo-Saxons, whose core area of large-scale settlement was likely restricted to what is now southeastern England, East Anglia and Lincolnshire, could have come to be ubiquitous across lowland Britain. Härke has posited a scenario in which the Anglo-Saxons, in expanding westward, outbred the Britons, eventually reaching a point where their descendants made up a larger share of the population of what was to become England. It has also been proposed that the Britons were disproportionately affected by plagues arriving through Roman trade links, which, combined with a large emigration to Armorica, could have substantially decreased their numbers.
Even so, there is general agreement that the kingdoms of Wessex, Mercia and Northumbria housed significant numbers of Britons. Härke states that "it is widely accepted that in the north of England, the native population survived to a greater extent than the south," and that in Bernicia, "a small group of immigrants may have replaced the native British elite and took over the kingdom as a going concern." Evidence for the natives in Wessex, meanwhile, can be seen in the late seventh century laws of King Ine, which gave them fewer rights and a lower status than the Saxons. This might have provided an incentive for Britons in the kingdom to adopt Anglo-Saxon culture. Higham points out that "in circumstances where freedom at law, acceptance with the kindred, access to patronage, and the use and possession of weapons were all exclusive to those who could claim Germanic descent, then speaking Old English without Latin or Brittonic inflection had considerable value."
There is evidence for a British influence on the emerging Anglo-Saxon elite classes. The Wessex royal line was traditionally founded by a man named Cerdic, an undoubtedly Celtic name cognate to Ceretic (the name of two British kings, ultimately derived from *Corotīcos). This may indicate that Cerdic was a native Briton and that his dynasty became anglicised over time. A number of Cerdic's alleged descendants also possessed Celtic names, including the 'Bretwalda' Ceawlin. The last man in this dynasty to have a Brittonic name was King Caedwalla, who died as late as 689. In Mercia, too, several kings bear seemingly Celtic names, most notably Penda. As far east as Lindsey, the Celtic name Caedbaed appears in the list of kings.
Recent genetic studies, based on data collected from skeletons found in Iron Age, Roman and Anglo-Saxon era burials, have concluded that the ancestry of the modern English population contains large contributions from both Anglo-Saxon migrants and Romano-British natives.
Development of an Anglo-Saxon society (560–610)
In the last half of the 6th century, four structures contributed to the development of society; they were the position and freedoms of the ceorl, the smaller tribal areas coalescing into larger kingdoms, the elite developing from warriors to kings, and Irish monasticism developing under Finnian (who had consulted Gildas) and his pupil Columba.
The Anglo-Saxon farms of this period are often falsely supposed to be "peasant farms". However, a ceorl, who was the lowest ranking freeman in early Anglo-Saxon society, was not a peasant but an arms-owning male with the support of a kindred, access to law and the wergild; situated at the apex of an extended household working at least one hide of land. The farmer had freedom and rights over lands, with provision of a rent or duty to an overlord who provided only slight lordly input.[c] Most of this land was common outfield arable land (of an outfield-infield system) that provided individuals with the means to build a basis of kinship and group cultural ties.
The Tribal Hidage lists thirty-five peoples, or tribes, with assessments in hides, which may have originally been defined as the area of land sufficient to maintain one family. The assessments in the Hidage reflect the relative size of the provinces. Although varying in size, all thirty-five peoples of the Tribal Hidage were of the same status, in that they were areas which were ruled by their own elite family (or royal houses), and so were assessed independently for payment of tribute. [d] By the end of the sixth century, larger kingdoms had become established on the south or east coasts. They include the provinces of the Jutes of Hampshire and Wight, the South Saxons, Kent, the East Saxons, East Angles, Lindsey and (north of the Humber) Deira and Bernicia. Several of these kingdoms may have had as their initial focus a territory based on a former Roman civitas.
By the end of the sixth century, the leaders of these communities were styling themselves kings, though it should not be assumed that all of them were Germanic in origin. The Bretwalda concept is taken as evidence of a number of early Anglo-Saxon elite families. What Bede seems to imply in his Bretwalda is the ability of leaders to extract tribute, overawe and/or protect the small regions, which may well have been relatively short-lived in any one instance. Ostensibly "Anglo-Saxon" dynasties variously replaced one another in this role in a discontinuous but influential and potent roll call of warrior elites. Importantly, whatever their origin or whenever they flourished, these dynasties established their claim to lordship through their links to extended kin, and possibly mythical, ties. As Helen Geake points out, "they all just happened to be related back to Woden".
The process from warrior to cyning – Old English for king – is described in Beowulf:
Old English: Oft Scyld Scéfing – sceaþena þréatum
Modern English (as translated by Seamus Heaney): There was Shield Sheafson, scourge of many tribes,
Conversion to Christianity (588–686)
In 565, Columba, a monk from Ireland who studied at the monastic school of Moville under St. Finnian, reached Iona as a self-imposed exile. The influence of the monastery of Iona would grow into what Peter Brown has described as an "unusually extensive spiritual empire," which "stretched from western Scotland deep to the southwest into the heart of Ireland and, to the southeast, it reached down throughout northern Britain, through the influence of its sister monastery Lindisfarne."
In June 597 Columba died. At this time, Augustine landed on the Isle of Thanet and proceeded to King Æthelberht's main town of Canterbury. He had been the prior of a monastery in Rome when Pope Gregory the Great chose him in 595 to lead the Gregorian mission to Britain to Christianise the Kingdom of Kent from their native Anglo-Saxon paganism. Kent was probably chosen because Æthelberht had married a Christian princess, Bertha, daughter of Charibert I the king of Paris, who was expected to exert some influence over her husband. Æthelberht was converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines. Kent was rich, with strong trade ties to the continent, and Æthelberht may have instituted royal control over trade. For the first time following the Anglo-Saxon invasion, coins began circulating in Kent during his reign.
In 635 Aidan, an Irish monk from Iona, chose the Isle of Lindisfarne to establish a monastery which was close to King Oswald's main fortress of Bamburgh. He had been at the monastery in Iona when Oswald asked to be sent a mission to Christianise the Kingdom of Northumbria from their native Anglo-Saxon paganism. Oswald had probably chosen Iona because after his father had been killed he had fled into south-west Scotland and had encountered Christianity, and had returned determined to make Northumbria Christian. Aidan achieved great success in spreading the Christian faith, and since Aidan could not speak English and Oswald had learned Irish during his exile, Oswald acted as Aidan's interpreter when the latter was preaching. Later, Northumberland's patron saint, Saint Cuthbert, was an abbot of the monastery, and then Bishop of Lindisfarne. An anonymous life of Cuthbert written at Lindisfarne is the oldest extant piece of English historical writing, [e] and in his memory a gospel (known as the St Cuthbert Gospel) was placed in his coffin. The decorated leather bookbinding is the oldest intact European binding.
In 664, the Synod of Whitby was convened and established Roman practice as opposed to Irish practice (in style of tonsure and dates of Easter) as the norm in Northumbria, and thus "brought the Northumbrian church into the mainstream of Roman culture." The episcopal seat of Northumbria was transferred from Lindisfarne to York. Wilfrid, chief advocate for the Roman position, later became Bishop of Northumbria, while Colmán and the Ionan supporters, who did not change their practices, withdrew to Iona.
Middle Anglo-Saxon history (660–899)
By 660, the political map of Lowland Britain had developed with smaller territories coalescing into kingdoms, and from this time larger kingdoms started dominating the smaller kingdoms. The development of kingdoms, with a particular king being recognised as an overlord, grew out of an early loose structure that, Higham believes, is linked back to the original foedus. The traditional name for this period is the Heptarchy, which has not been used by scholars since the early 20th century as it gives the impression of a single political structure and does not afford the "opportunity to treat the history of any one kingdom as a whole". Simon Keynes suggests that the 8th and 9th centuries were a period of economic and social flourishing which created stability both below the Thames and above the Humber.
Mercian supremacy (626–821)
Middle-lowland Britain was known as the place of the Mierce, the border or frontier folk, in Latin Mercia. Mercia was a diverse area of tribal groups, as shown by the Tribal Hidage; the peoples were a mixture of Brittonic speaking peoples and "Anglo-Saxon" pioneers and their early leaders had Brittonic names, such as Penda. Although Penda does not appear in Bede's list of great overlords, it would appear from what Bede says elsewhere that he was dominant over the southern kingdoms. At the time of the battle of the river Winwæd, thirty duces regii (royal generals) fought on his behalf. Although there are many gaps in the evidence, it is clear that the seventh-century Mercian kings were formidable rulers who were able to exercise a wide-ranging overlordship from their Midland base.
Mercian military success was the basis of their power; it succeeded against other kings and kingdoms not only by winning set-piece battles, but by ruthlessly ravaging any area foolish enough to withhold tribute. There are a number of casual references scattered throughout Bede's history to this aspect of Mercian military policy. Penda is found ravaging Northumbria as far north as Bamburgh, and only a miraculous intervention from Aidan prevents the complete destruction of the settlement. In 676 Æthelred conducted a similar ravaging in Kent and caused such damage in the Rochester diocese that two successive bishops gave up their position because of lack of funds. In these accounts there is a rare glimpse of the realities of early Anglo-Saxon overlordship and how a widespread overlordship could be established in a relatively short period. By the middle of the 8th century, other kingdoms of southern Britain were also affected by Mercian expansionism. The East Saxons seem to have lost control of London, Middlesex and Hertfordshire to Æthelbald, although the East Saxon homelands do not seem to have been affected, and the East Saxon dynasty continued into the ninth century. The Mercian influence and reputation reached its peak when, in the late 8th century, the most powerful European ruler of the age, the Frankish king Charlemagne, recognised the Mercian King Offa's power and accordingly treated him with respect, even if this could have been just flattery.
Learning and monasticism (660–793)
Michael Drout calls this period the "Golden Age", when learning flourished with a renaissance in classical knowledge. The growth and popularity of monasticism was not an entirely internal development, with influence from the continent shaping Anglo-Saxon monastic life. In 669 Theodore, a Greek-speaking monk originally from Tarsus in Asia Minor, arrived in Britain to become the eighth Archbishop of Canterbury. He was joined the following year by his colleague Hadrian, a Latin-speaking African by origin and former abbot of a monastery in Campania (near Naples). One of their first tasks at Canterbury was the establishment of a school; and according to Bede (writing some sixty years later), they soon "attracted a crowd of students into whose minds they daily poured the streams of wholesome learning". As evidence of their teaching, Bede reports that some of their students, who survived to his own day, were as fluent in Greek and Latin as in their native language. Bede does not mention Aldhelm in this connection; but we know from a letter addressed by Aldhelm to Hadrian that he too must be numbered among their students.
Aldhelm wrote in elaborate and grandiloquent and very difficult Latin, which became the dominant style for centuries. Michael Drout states "Aldhelm wrote Latin hexameters better than anyone before in England (and possibly better than anyone since, or at least up until John Milton). His work showed that scholars in England, at the very edge of Europe, could be as learned and sophisticated as any writers in Europe." During this period, the wealth and power of the monasteries increased as elite families, possibly out of power, turned to monastic life.
Anglo-Saxon monasticism developed the unusual institution of the "double monastery", a house of monks and a house of nuns, living next to each other, sharing a church but never mixing, and living separate lives of celibacy. These double monasteries were presided over by abbesses, who became some of the most powerful and influential women in Europe. Double monasteries which were built on strategic sites near rivers and coasts, accumulated immense wealth and power over multiple generations (their inheritances were not divided) and became centers of art and learning.
While Aldhelm was doing his work in Malmesbury, far from him, up in the North of England, Bede was writing a large quantity of books, gaining a reputation in Europe and showing that the English could write history and theology, and do astronomical computation (for the dates of Easter, among other things).
During the 9th century, Wessex rose in power, from the foundations laid by King Egbert in the first quarter of the century to the achievements of King Alfred the Great in its closing decades. The outlines of the story are told in the Anglo-Saxon Chronicle, though the annals represent a West Saxon point of view. On the day of Egbert's succession to the kingdom of Wessex, in 802, a Mercian ealdorman from the province of the Hwicce had crossed the border at Kempsford, with the intention of mounting a raid into northern Wiltshire; the Mercian force was met by the local ealdorman, "and the people of Wiltshire had the victory". In 829, Egbert went on, the chronicler reports, to conquer "the kingdom of the Mercians and everything south of the Humber". It was at this point that the chronicler chooses to attach Egbert's name to Bede's list of seven overlords, adding that "he was the eighth king who was Bretwalda". Simon Keynes suggests Egbert's foundation of a 'bipartite' kingdom is crucial as it stretched across southern England, and it created a working alliance between the West Saxon dynasty and the rulers of the Mercians. In 860, the eastern and western parts of the southern kingdom were united by agreement between the surviving sons of King Æthelwulf, though the union was not maintained without some opposition from within the dynasty; and in the late 870s King Alfred gained the submission of the Mercians under their ruler Æthelred, who in other circumstances might have been styled a king, but who under the Alfredian regime was regarded as the 'ealdorman' of his people.
The wealth of the monasteries and the success of Anglo-Saxon society attracted the attention of people from mainland Europe, mostly Danes and Norwegians. Because of the plundering raids that followed, the raiders attracted the name Viking – from the Old Norse víkingr meaning an expedition – which soon became used for the raiding activity or piracy reported in western Europe. In 793, Lindisfarne was raided, and while this was not the first raid of its type, it was the most prominent. In 794, Jarrow, the monastery where Bede wrote, was attacked; in 795 Iona was attacked; and in 804 the nunnery at Lyminge, Kent, was granted refuge inside the walls of Canterbury. Sometime around 800, a reeve from Portland in Wessex was killed when he mistook some raiders for ordinary traders.
Viking raids continued until 850, when the Chronicle says: "The heathen for the first time remained over the winter". The fleet does not appear to have stayed long in England, but it started a trend which others subsequently followed. In particular, the army which arrived in 865 remained over many winters, and part of it later settled what became known as the Danelaw. This was the "Great Army", a term used by the Chronicle in England and by Adrevald of Fleury on the Continent. The invaders were able to exploit the feuds between and within the various kingdoms and to appoint puppet kings, such as Ceolwulf in Mercia in 873 and perhaps others in Northumbria in 867 and East Anglia in 870. The third phase was an era of settlement; however, the "Great Army" went wherever it could find the richest pickings, crossing the English Channel when faced with resolute opposition, as in England in 878, or with famine, as on the Continent in 892. By this stage, the Vikings were assuming ever increasing importance as catalysts of social and political change. They constituted the common enemy, making the English more conscious of a national identity which overrode deeper distinctions; they could be perceived as an instrument of divine punishment for the people's sins, raising awareness of a collective Christian identity; and by 'conquering' the kingdoms of the East Angles, the Northumbrians and the Mercians, they created a vacuum in the leadership of the English people.
Danish settlement continued in Mercia in 877 and East Anglia in 879—80 and 896. The rest of the army meanwhile continued to harry and plunder on both sides of the Channel, with new recruits evidently arriving to swell its ranks, for it clearly continued to be a formidable fighting force. At first, Alfred responded by the offer of repeated tribute payments. However, after a decisive victory at Edington in 878, Alfred offered vigorous opposition. He established a chain of fortresses across the south of England, reorganised the army, "so that always half its men were at home, and half out on service, except for those men who were to garrison the burhs", and in 896 ordered a new type of craft to be built which could oppose the Viking longships in shallow coastal waters. When the Vikings returned from the Continent in 892, they found they could no longer roam the country at will, for wherever they went they were opposed by a local army. After four years, the Scandinavians therefore split up, some to settle in Northumbria and East Anglia, the remainder to try their luck again on the Continent.
King Alfred and the rebuilding (878–899)
More important to Alfred than his military and political victories were his religion, his love of learning, and his spread of writing throughout England. Keynes suggests Alfred's work laid the foundations for what really made England unique in all of medieval Europe from around 800 until 1066.
Reflecting on how learning and culture had declined since the previous century, King Alfred wrote:
...So completely had wisdom fallen off in England that there were very few on this side of the Humber who could understand their rituals in English, or indeed could translate a letter from Latin into English; and I believe that there were not many beyond the Humber. There were so few of them that I indeed cannot think of a single one south of the Thames when I became king. (Preface: "Gregory the Great's Pastoral Care")
Alfred knew that literature and learning, both in English and in Latin, were very important, but the state of learning was not good when Alfred came to the throne. Alfred saw kingship as a priestly office, a shepherd for his people. One book that was particularly valuable to him was Gregory the Great's Cura Pastoralis (Pastoral Care). This is a priest's guide on how to care for people. Alfred took this book as his own guide on how to be a good king to his people; hence, a good king to Alfred increases literacy. Alfred translated this book himself and explains in the preface:
...When I had learned it I translated it into English, just as I had understood it, and as I could most meaningfully render it. And I will send one to each bishopric in my kingdom, and in each will be an æstel worth fifty mancuses. And I command in God's name that no man may take the æstel from the book nor the book from the church. It is unknown how long there may be such learned bishops as, thanks to God, are nearly everywhere. (Preface: "Gregory the Great's Pastoral Care")
What is presumed to be one of these "æstel" (the word only appears in this one text) is the gold, rock crystal and enamel Alfred Jewel, discovered in 1693, which is assumed to have been fitted with a small rod and used as a pointer when reading. Alfred provided functional patronage, linked to a social programme of vernacular literacy in England, which was unprecedented.
Therefore it seems better to me, if it seems so to you, that we also translate certain books ...and bring it about ...if we have the peace, that all the youth of free men who now are in England, those who have the means that they may apply themselves to it, be set to learning, while they may not be set to any other use, until the time when they can well read English writings. (Preface: "Gregory the Great's Pastoral Care")
This began a growth in charters, law, theology and learning. Alfred thus laid the foundation for the great accomplishments of the tenth century and did much to make the vernacular more important than Latin in Anglo-Saxon culture.
I desired to live worthily as long as I lived, and to leave after my life, to the men who should come after me, the memory of me in good works. (Preface: "The Consolation of Philosophy by Boethius")
Late Anglo-Saxon history (899–1066)
A framework for the momentous events of the 10th and 11th centuries is provided by the Anglo-Saxon Chronicle. However charters, law-codes and coins supply detailed information on various aspects of royal government, and the surviving works of Anglo-Latin and vernacular literature, as well as the numerous manuscripts written in the 10th century, testify in their different ways to the vitality of ecclesiastical culture. Yet as Keynes suggests "it does not follow that the 10th century is better understood than more sparsely documented periods".
Reform and formation of England (899–978)
During the course of the 10th century, the West Saxon kings extended their power first over Mercia, then into the southern Danelaw, and finally over Northumbria, thereby imposing a semblance of political unity on peoples, who nonetheless would remain conscious of their respective customs and their separate pasts. The prestige, and indeed the pretensions, of the monarchy increased, the institutions of government strengthened, and kings and their agents sought in various ways to establish social order. This process started with Edward the Elder – who with his sister, Æthelflæd, Lady of the Mercians, initially, charters reveal, encouraged people to purchase estates from the Danes, thereby to reassert some degree of English influence in territory which had fallen under Danish control. David Dumville suggests that Edward may have extended this policy by rewarding his supporters with grants of land in the territories newly conquered from the Danes and that any charters issued in respect of such grants have not survived. When Athelflæd died, Mercia was absorbed by Wessex. From that point on there was no contest for the throne, so the house of Wessex became the ruling house of England.
Edward the Elder was succeeded by his son Æthelstan, who Keynes calls the "towering figure in the landscape of the tenth century". His victory over a coalition of his enemies – Constantine, King of the Scots; Owain ap Dyfnwal, King of the Cumbrians; and Olaf Guthfrithson, King of Dublin – at the battle of Brunanburh, celebrated by a poem in the Anglo-Saxon Chronicle, opened the way for him to be hailed as the first king of England. Æthelstan's legislation shows how the king drove his officials to do their respective duties. He was uncompromising in his insistence on respect for the law. However this legislation also reveals the persistent difficulties which confronted the king and his councillors in bringing a troublesome people under some form of control. His claim to be "king of the English" was by no means widely recognised. The situation was complex: the Hiberno-Norse rulers of Dublin still coveted their interests in the Danish kingdom of York; terms had to be made with the Scots, who had the capacity not merely to interfere in Northumbrian affairs, but also to block a line of communication between Dublin and York; and the inhabitants of northern Northumbria were considered a law unto themselves. It was only after twenty years of crucial developments following Æthelstan's death in 939 that a unified kingdom of England began to assume its familiar shape. However, the major political problem for Edmund and Eadred, who succeeded Æthelstan, remained the difficulty of subjugating the north. In 959 Edgar is said to have "succeeded to the kingdom both in Wessex and in Mercia and in Northumbria, and he was then 16 years old" (ASC, version 'B', 'C'), and is called "the Peacemaker". By the early 970s, after a decade of Edgar's 'peace', it may have seemed that the kingdom of England was indeed made whole. In his formal address to the gathering at Winchester the king urged his bishops, abbots and abbesses "to be of one mind as regards monastic usage . . . lest differing ways of observing the customs of one Rule and one country should bring their holy conversation into disrepute".
Athelstan's court had been an intellectual incubator. In that court were two young men named Dunstan and Æthelwold who were made priests, supposedly at the insistence of Athelstan, right at the end of his reign in 939. Between 970 and 973 a council was held, under the aegis of Edgar, where a set of rules were devised that would be applicable throughout England. This put all the monks and nuns in England under one set of detailed customs for the first time. In 973, Edgar received a special second, 'imperial coronation' at Bath, and from this point England was ruled by Edgar under the strong influence of Dunstan, Athelwold, and Oswald, the Bishop of Worcester.
The reign of King Æthelred the Unready witnessed the resumption of Viking raids on England, putting the country and its leadership under strains as severe as they were long sustained. Raids began on a relatively small scale in the 980s but became far more serious in the 990s, and brought the people to their knees in 1009–12, when a large part of the country was devastated by the army of Thorkell the Tall. It remained for Swein Forkbeard, king of Denmark, to conquer the kingdom of England in 1013–14, and (after Æthelred's restoration) for his son Cnut to achieve the same in 1015–16. The tale of these years incorporated in the Anglo-Saxon Chronicle must be read in its own right, and set beside other material which reflects in one way or another on the conduct of government and warfare during Æthelred's reign. It is this evidence which is the basis for Keynes's view that the king lacked the strength, judgement and resolve to give adequate leadership to his people in a time of grave national crisis; who soon found out that he could rely on little but the treachery of his military commanders; and who, throughout his reign, tasted nothing but the ignominy of defeat. The raids exposed tensions and weaknesses which went deep into the fabric of the late Anglo-Saxon state, and it is apparent that events proceeded against a background more complex than the chronicler probably knew. It seems, for example, that the death of Bishop Æthelwold in 984 had precipitated further reaction against certain ecclesiastical interests; that by 993 the king had come to regret the error of his ways, leading to a period when the internal affairs of the kingdom appear to have prospered.
The increasingly difficult times brought on by the Viking attacks are reflected in both Ælfric's and Wulfstan's works, but most notably in Wulfstan's fierce rhetoric in the Sermo Lupi ad Anglos, dated to 1014. Malcolm Godden suggests that ordinary people saw the return of the Vikings as the imminent "expectation of the apocalypse," and this was given voice in Ælfric and Wulfstan writings, which is similar to that of Gildas and Bede. Raids were taken as signs of God punishing his people; Ælfric refers to people adopting the customs of the Danish and exhorts people not to abandon the native customs on behalf of the Danish ones, and then requests a "brother Edward" to try to put an end to a "shameful habit" of drinking and eating in the outhouse, which some of the countrywomen practised at beer parties.
In April 1016, Æthelred died of illness, leaving his son and successor Edmund Ironside to defend the country. The final struggles were complicated by internal dissension, and especially by the treacherous acts of Ealdorman Eadric of Mercia, who opportunistically changed sides to Cnut's party. After the defeat of the English in the Battle of Assandun in October 1016, Edmund and Cnut agreed to divide the kingdom so that Edmund would rule Wessex and Cnut Mercia, but Edmund died soon after his defeat in November 1016, making it possible for Cnut to seize power over all England.
Conquest of England: Danes, Norwegians and Normans (1016–1066)
In the 11th century, there were three conquests: one by Cnut in 1016; the second, an unsuccessful Norwegian attempt that was defeated at the Battle of Stamford Bridge in 1066; and the third conducted by William of Normandy in 1066. The consequences of each conquest changed the Anglo-Saxon culture. Politically and chronologically, the texts of this period are not Anglo-Saxon; linguistically, those written in English (as opposed to Latin or French, the other official written languages of the period) moved away from the late West Saxon standard that is called "Old English". Yet neither are they "Middle English"; moreover, as Treharne explains, for around three-quarters of this period, "there is barely any 'original' writing in English at all". These factors have led to a gap in scholarship, implying a discontinuity on either side of the Norman Conquest; however, this assumption is being challenged.
At first sight, there would seem little to debate. Cnut appeared to have adopted wholeheartedly the traditional role of Anglo-Saxon kingship. However an examination of the laws, homilies, wills, and charters dating from this period suggests that as a result of widespread aristocratic death and the fact that Cnut did not systematically introduce a new landholding class, major and permanent alterations occurred in the Saxon social and political structures. Eric John remarks that for Cnut "the simple difficulty of exercising so wide and so unstable an empire made it necessary to practise a delegation of authority against every tradition of English kingship". The disappearance of the aristocratic families which had traditionally played an active role in the governance of the realm, coupled with Cnut's choice of thegnly advisors, put an end to the balanced relationship between monarchy and aristocracy so carefully forged by the West Saxon Kings.
Edward became king in 1042, and given his upbringing might have been considered a Norman by those who lived across the English Channel. Following Cnut's reforms, excessive power was concentrated in the hands of the rival houses of Leofric of Mercia and Godwine of Wessex. Problems also came for Edward from the resentment caused by the king's introduction of Norman friends. A crisis arose in 1051 when Godwine defied the king's order to punish the men of Dover, who had resisted an attempt by Eustace of Boulogne to quarter his men on them by force. The support of Earl Leofric and Earl Siward enabled Edward to secure the outlawry of Godwine and his sons; and William of Normandy paid Edward a visit during which Edward may have promised William succession to the English throne, although this Norman claim may have been mere propaganda. Godwine and his sons came back the following year with a strong force, and the magnates were not prepared to engage them in civil war but forced the king to make terms. Some unpopular Normans were driven out, including Archbishop Robert, whose archbishopric was given to Stigand; this act supplied an excuse for the Papal support of William's cause.
The fall of England and the Norman Conquest was a multi-generational, multi-family succession problem, caused in great part by Æthelred's incompetence. By the time William of Normandy, sensing an opportunity, landed his invading force in 1066, the elite of Anglo-Saxon England had changed, although much of the culture and society had stayed the same.
Ða com Wyllelm eorl of Normandige into Pefnesea on Sancte Michæles mæsseæfen, sona þæs hi fere wæron, worhton castel æt Hæstingaport. Þis wearð þa Harolde cynge gecydd, he gaderade þa mycelne here, com him togenes æt þære haran apuldran, Wyllelm him com ongean on unwær, ær þis folc gefylced wære. Ac se kyng þeah him swiðe heardlice wið feaht mid þam mannum þe him gelæstan woldon, þær wearð micel wæl geslægen on ægðre healfe. Ðær wearð ofslægen Harold kyng, Leofwine eorl his broðor, Gyrð eorl his broðor, fela godra manna, þa Frencyscan ahton wælstowe geweald.
Then came William, the Earl of Normandy, into Pevensey on the evening of St Michael's mass, and as soon as his men were ready, they built a fortress at Hasting's port. This was told to King Harold, and he gathered then a great army and came towards them at the Hoary Apple Tree, and William came upon him unawares before his folk were ready. But the king nevertheless withstood him very strongly, fighting with those men who would follow him, and there was a great slaughter on either side. Then Harold the King was slain, and Leofwine the Earl, his brother, and Gyrth, and many good men, and the Frenchmen held the place of slaughter.
After the Norman Conquest
Following the Norman conquest, many of the Anglo-Saxon nobility were either exiled or joined the ranks of the peasantry. It has been estimated that only about 8% of the land was under Anglo-Saxon control by 1087. In 1086, only four major Anglo-Saxon landholders still held their lands. However, the survival of Anglo-Saxon heiresses was significantly greater. Many of the next generation of the nobility had English mothers and learnt to speak English at home. Some Anglo-Saxon nobles fled to Scotland, Ireland, and Scandinavia. The Byzantine Empire became a popular destination for many Anglo-Saxon soldiers, as it was in need of mercenaries. The Anglo-Saxons became the predominant element in the elite Varangian Guard, hitherto a largely North Germanic unit, from which the emperor's bodyguard was drawn and continued to serve the empire until the early 15th century. However, the population of England at home remained largely Anglo-Saxon; for them, little changed immediately except that their Anglo-Saxon lord was replaced by a Norman lord.
The chronicler Orderic Vitalis, who was the product of an Anglo-Norman marriage, writes: "And so the English groaned aloud for their lost liberty and plotted ceaselessly to find some way of shaking off a yoke that was so intolerable and unaccustomed". The inhabitants of the North and Scotland never warmed to the Normans following the Harrying of the North (1069–1070), in which William, according to the Anglo-Saxon Chronicle, utterly "ravaged and laid waste that shire".
Many Anglo-Saxon people needed to learn Norman French to communicate with their rulers, but it is clear that among themselves they kept speaking Old English, which meant that England was in an interesting tri-lingual situation: Anglo-Saxon for the common people, Latin for the Church, and Norman French for the administrators, the nobility, and the law courts. In this time, and because of the cultural shock of the Conquest, Anglo-Saxon began to change very rapidly, and by 1200 or so, it was no longer Anglo-Saxon English, but early Middle English. But this language had deep roots in Anglo-Saxon, which was being spoken much later than 1066. Research has shown that a form of Anglo-Saxon was still being spoken, and not merely among uneducated peasants, into the thirteenth century in the West Midlands. This was J.R.R. Tolkien's major scholarly discovery when he studied a group of texts written in early Middle English called the Katherine Group. Tolkien noticed that a subtle distinction preserved in these texts indicated that Old English had continued to be spoken far longer than anyone had supposed.
Old English had been a central mark of the Anglo-Saxon cultural identity. With the passing of time, however, and particularly following the Norman conquest of England, this language changed significantly, and although some people (for example the scribe known as the Tremulous Hand of Worcester) could still read Old English into the thirteenth century, it fell out of use and the texts became useless. The Exeter Book, for example, seems to have been used to press gold leaf and at one point had a pot of fish-based glue sitting on top of it. For Michael Drout this symbolises the end of the Anglo-Saxons.
After 1066, it took more than three centuries for English to replace French as the language of government. The parliament of 1362 opened with a speech in English, and in the early 15th century Henry V became the first monarch since before the 1066 conquest to use English in his written instructions.
Life and society
The larger narrative, seen in the history of Anglo-Saxon England, is the continued mixing and integration of various disparate elements into one Anglo-Saxon people. The outcome of this mixing and integration was a continuous re-interpretation by the Anglo-Saxons of their society and worldview, which Heinrich Härke calls a "complex and ethnically mixed society".
Kingship and kingdoms
The development of Anglo-Saxon kingship is little understood, but the model proposed by Yorke considered the development of kingdoms and the writing down of oral law-codes to be linked to a progression towards leaders providing mund and receiving recognition. These leaders who developed in the sixth century were able to seize the initiative and to establish a position of power for themselves and their successors. Anglo-Saxon leaders, unable to tax and coerce followers, extracted surplus by raiding and collecting food renders and 'prestige goods'. The later sixth century saw the end of a 'prestige goods' economy, as evidenced by the decline of accompanied burial, and the appearance of the first 'princely' graves and high-status settlements. The ship burial in mound one at Sutton Hoo (Suffolk) is the most widely known example of a 'princely' burial, containing lavish metalwork and feasting equipment, and possibly representing the burial place of King Rædwald of East Anglia. These centres of trade and production reflect the increased socio-political stratification and wider territorial authority which allowed seventh-century elites to extract and redistribute surpluses with far greater effectiveness than their sixth-century predecessors would have found possible. Anglo-Saxon society, in short, looked very different in 600 than it did a hundred years earlier.
By 600, the establishment of the first Anglo-Saxon 'emporia' (alternatively 'wics') appears to have been in process. There are only four major archaeologically attested wics in England – London, Ipswich, York, and Hamwic. These were originally interpreted by Hodges as methods of royal control over the import of prestige goods rather than centres of actual trade. Despite archaeological evidence of royal involvement, emporia are now widely understood to represent genuine trade and exchange, alongside a return to urbanism.

Bede's use of the term imperium has been seen as significant in defining the status and powers of the bretwaldas; in fact it is a word Bede used regularly as an alternative to regnum, and scholars believe it simply denoted the collection of tribute. Oswiu's extension of overlordship over the Picts and Scots is expressed in terms of making them tributary. Military overlordship could bring great short-term success and wealth, but the system had its disadvantages. Many of the overlords enjoyed their powers for a relatively short period.[f] Foundations had to be carefully laid to turn a tribute-paying under-kingdom into a permanent acquisition, as with the Bernician absorption of Deira. The smaller kingdoms did not disappear without trace once they were incorporated into larger polities; on the contrary, their territorial integrity was preserved when they became ealdormanries or, depending on size, parts of ealdormanries within their new kingdoms. An example of this tendency for later boundaries to preserve earlier arrangements is Sussex; the county boundary is essentially the same as that of the West Saxon shire and the Anglo-Saxon kingdom.

The Witan, also called the Witenagemot, was the king's council; its essential duty was to advise the king on all matters on which he chose to ask its opinion. It attested his grants of land to churches or laymen, consented to his issue of new laws or new statements of ancient custom, and helped him deal with rebels and persons suspected of disaffection.
Only five Anglo-Saxon kingdoms are known to have survived to 800, and several British kingdoms in the west of the country had disappeared as well. The major kingdoms had grown through absorbing smaller principalities, and the means through which they did this and the character their kingdoms acquired as a result are among the major themes of the Middle Saxon period. Beowulf, for all its heroic content, clearly makes the point that economic and military success were intimately linked. A 'good' king was a generous king who through his wealth won the support which would ensure his supremacy over other kingdoms. King Alfred's digressions in his translation of Boethius' Consolation of Philosophy provided these observations about the resources which every king needed:
In the case of the king, the resources and tools with which to rule are that he have his land fully manned: he must have praying men, fighting men and working men. You know also that without these tools no king may make his ability known. Another aspect of his resources is that he must have the means of support for his tools, the three classes of men. These, then, are their means of support: land to live on, gifts, weapons, food, ale, clothing and whatever else is necessary for each of the three classes of men.
This is the first written appearance of the division of society into the 'three orders'; the 'working men' provided the raw materials to support the other two classes. The advent of Christianity brought with it the introduction of new concepts of land tenure. The role of churchmen was analogous to that of the warriors waging heavenly warfare. However, what Alfred was alluding to was that in order for a king to fulfil his responsibilities towards his people, particularly those concerned with defence, he had the right to make considerable exactions from the landowners and people of his kingdom. The need to endow the church resulted in the permanent alienation of stocks of land which had previously only been granted on a temporary basis and introduced the concept of a new type of hereditary land which could be freely alienated and was free of any family claims.
The nobility under the influence of Alfred became involved with developing the cultural life of their kingdom. As the kingdom became unified, it brought the monastic and spiritual life of the kingdom under one rule and stricter control. However, the Anglo-Saxons believed in 'luck' as a random element in the affairs of man and so would probably have agreed that there is a limit to the extent one can understand why one kingdom failed while another succeeded. They also believed in 'destiny' and interpreted the fate of the kingdom of England through Biblical and Carolingian ideology, drawing parallels between the Israelites, the great European empires and the Anglo-Saxons. The Danish and Norman conquests were thus understood as the manner in which God punished his sinful people, just as the great empires of the past had met their fate.
Religion
Although Christianity dominates the religious history of the Anglo-Saxons, life in the 5th and 6th centuries was dominated by pagan religious beliefs with a Scandinavian-Germanic heritage.
Pagan Anglo-Saxons worshipped at a variety of different sites across their landscape, some of which were apparently specially built temples and others natural geographical features such as sacred trees, hilltops or wells. According to place-name evidence, these sites of worship were known variously as hearg or as wēoh. Most poems from before the Norman Conquest are steeped in pagan symbolism, and their integration into the new faith goes beyond the literary sources. Thus, as Lethbridge reminds us, "to say, 'this is a monument erected in Christian times and therefore the symbolism on it must be Christian,' is an unrealistic approach. The rites of the older faith, now regarded as superstition, are practised all over the country today. It did not mean that people were not Christian; but that they could see a lot of sense in the old beliefs also".
Early Anglo-Saxon society attached great significance to the horse; a horse may have been an acquaintance of the god Woden, and/or they may have been (according to Tacitus) confidants of the gods. Horses were closely associated with gods, especially Odin and Freyr. Horses played a central role in funerary practices as well as in other rituals. Horses were prominent symbols of fertility, and there were many horse fertility cults. The rituals associated with these include horse fights, burials, consumption of horse meat, and horse sacrifice. Hengist and Horsa, the mythical ancestors of the Anglo-Saxons, were associated with horses, and references to horses are found throughout Anglo-Saxon literature. Actual horse burials in England are relatively rare and "may point to influence from the continent". A well-known Anglo-Saxon horse burial (from the sixth/seventh century) is Mound 17 at Sutton Hoo, a few yards from the more famous ship burial in Mound 1. A sixth-century grave near Lakenheath, Suffolk, yielded the body of a man next to that of a complete horse in harness, with a bucket of food by its head.
Bede's story of Cædmon, the cowherd who became the 'Father of English Poetry,' represents the real heart of the conversion of the Anglo-Saxons from paganism to Christianity. Bede writes, "[t]here was in the Monastery of this Abbess (Streonæshalch – now known as Whitby Abbey) a certain brother particularly remarkable for the Grace of God, who was wont to make religious verses, so that whatever was interpreted to him out of scripture, he soon after put the same into poetical expressions of much sweetness and humility in Old English, which was his native language. By his verse the minds of many were often excited to despise the world, and to aspire to heaven." The story of Cædmon illustrates the blending of Christian and Germanic, Latin and oral tradition, monasteries and double monasteries, pre-existing customs and new learning, popular and elite, that characterizes the Conversion period of Anglo-Saxon history and culture. Cædmon does not destroy or ignore traditional Anglo-Saxon poetry. Instead, he converts it into something that helps the Church. Anglo-Saxon England finds ways to synthesize the religion of the Church with the existing "northern" customs and practices. Thus the conversion of the Anglo-Saxons was not just their switching from one practice to another, but making something new out of their old inheritance and their new belief and learning.
Monasticism, and not just the church, was at the centre of Anglo-Saxon Christian life. Western monasticism, as a whole, had been evolving since the time of the Desert Fathers, but in the seventh century, monasticism in England confronted a dilemma that brought into question which tradition represented the truest form of the Christian faith. The two monastic traditions were the Celtic and the Roman, and a decision was made to adopt the Roman tradition. The term monasteria seems to have described all religious congregations other than those of the bishop.
In the 10th century, Dunstan brought Æthelwold to Glastonbury, where the two of them set up a monastery on Benedictine lines. For many years, this was the only monastery in England that strictly followed the Benedictine Rule and observed complete monastic discipline. What Mechthild Gretsch calls an "Aldhelm Seminar" developed at Glastonbury, and the effects of this seminar on the curriculum of learning and study in Anglo-Saxon England were enormous. Royal power was put behind the reforming impulses of Dunstan and Æthelwold, helping them to enforce their reform ideas. This happened first at the Old Minster in Winchester, before the reformers built new foundations and refoundations at Thorney, Peterborough, and Ely, among other places. Benedictine monasticism spread throughout England, and these houses became centres of learning again, run by people trained at Glastonbury, following one rule, with the works of Aldhelm at the centre of their curricula but also influenced by the vernacular efforts of Alfred. From this mixture sprang a great flowering of literary production.
Fighting and warfare
Soldiers throughout the country were summoned for both offensive and defensive war; early armies consisted essentially of household bands, while later on men were recruited on a territorial basis. The mustering of an army, annually at times, occupied an important place in Frankish history, both military and constitutional. The English kingdoms appear to have known no institution similar to this. The earliest reference is Bede's account of the overthrow of the Northumbrian Æthelfrith by Rædwald, overlord of the southern English. Rædwald raised a large army, presumably from among the kings who accepted his overlordship, and "not giving him time to summon and assemble his whole army, Rædwald met him with a much greater force and slew him on the Mercian border on the east bank of the river Idle." Before the Battle of Edington in 878, when the Danes had made a surprise attack on Alfred at Chippenham after Twelfth Night, Alfred retreated to Athelney after Easter and then, seven weeks after Easter, mustered an army at "Egbert's stone". It is not difficult to imagine that Alfred sent out word to the ealdormen to call his men to arms. This may explain the delay, and it is probably no more than coincidence that the army mustered at the beginning of May, a time when there would have been sufficient grass for the horses. There is also information about the mustering of fleets in the eleventh century. From 992 to 1066 fleets were assembled at London, or returned to the city at the end of their service, on several occasions. Where they took up station depended on the quarter from which a threat was expected: Sandwich if invasion was expected from the north, or the Isle of Wight if it was from Normandy.
Once they left home, these armies and fleets had to be supplied with food and clothing for the men as well as forage for the horses. Yet if armies of the seventh and eighth centuries were accompanied by servants and a supply train of lesser free men, Alfred found these arrangements insufficient to defeat the Vikings. One of his reforms was to divide his military resources into thirds. One part manned the burhs and provided the permanent garrisons which would make it impossible for the Danes to overrun Wessex, although they would also take to the field when extra soldiers were needed. The remaining two would take it in turns to serve. They were allocated a fixed term of service and brought the necessary provisions with them. This arrangement did not always function well. On one occasion a division on service went home in the middle of blockading a Danish army on Thorney Island; its provisions were consumed and its term had expired before the king came to relieve them. This method of division and rotation remained in force up to 1066. In 917, when armies from Wessex and Mercia were in the field from early April until November, one division went home and another took over. Again, in 1052 when Edward's fleet was waiting at Sandwich to intercept Godwine's return, the ships returned to London to take on new earls and crews. The importance of supply, vital to military success, was appreciated even if it was taken for granted and features only incidentally in the sources.
Military training and strategy are two important matters on which the sources are typically silent. There are no references in literature or laws to men training, and so it is necessary to fall back on inference. For the noble warrior, his childhood was of first importance in learning both individual military skills and the teamwork essential for success in battle. Perhaps the games the youthful Cuthbert played ('wrestling, jumping, running, and every other exercise') had some military significance. Turning to strategy, for the period before Alfred the evidence gives the impression that Anglo-Saxon armies fought battles frequently. Battle was risky and best avoided unless all the factors were on your side. But if you were in a position so advantageous that you were willing to take the chance, it is likely that your enemy would be in such a weak position that he would avoid battle and pay tribute. Battles put the princes' lives at risk, as is demonstrated by the Northumbrian and Mercian overlordships brought to an end by a defeat in the field. Gillingham has shown how few pitched battles Charlemagne and Richard I chose to fight.
A defensive strategy becomes more apparent in the later part of Alfred's reign. It was built around the possession of fortified places and the close pursuit of the Danes to harass them and impede their preferred occupation of plundering. Alfred and his lieutenants were able to fight the Danes to a standstill by their repeated ability to pursue and closely besiege them in fortified camps throughout the country. The fortification of sites at Witham, Buckingham, Towcester and Colchester persuaded the Danes of the surrounding regions to submit. The key to this warfare was sieges and the control of fortified places. It is clear that the new fortresses had permanent garrisons, and that they were supported by the inhabitants of the existing burhs when danger threatened. This is brought out most clearly in the description of the campaigns of 917 in the Chronicle, but throughout the conquest of the Danelaw by Edward and Æthelflæd it is clear that a sophisticated and coordinated strategy was being applied.
In 973, a single currency was introduced into England in order to bring about political unification, but by concentrating bullion production at many coastal mints, the new rulers of England created an obvious target which attracted a new wave of Viking invasions, which came close to breaking up the kingdom of the English. From 980 onwards, the Anglo-Saxon Chronicle records renewed raiding against England. At first, the raids were probing ventures by small numbers of ships' crews, but soon grew in size and effect, until the only way of dealing with the Vikings appeared to be to pay protection money to buy them off: "And in that year it was determined that tribute should first be paid to the Danish men because of the great terror they were causing along the coast. The first payment was 10,000 pounds." The payment of Danegeld had to be underwritten by a huge balance of payments surplus; this could only be achieved by stimulating exports and cutting imports, itself accomplished through currency devaluation. This affected everyone in the kingdom.
Settlements and working life
Helena Hamerow suggests that the prevailing model of working life and settlement, particularly for the early period, was one of shifting settlement and building tribal kinship. The mid-Saxon period saw diversification, the development of enclosures, the beginning of the toft system, closer management of livestock, the gradual spread of the mould-board plough, 'informally regular plots' and a greater permanence, with further settlement consolidation thereafter foreshadowing post-Norman Conquest villages. The later periods saw a proliferation of service features including barns, mills and latrines, most markedly on high-status sites. Throughout the Anglo-Saxon period, as Hamerow suggests, "local and extended kin groups remained...the essential unit of production". This is very noticeable in the early period. However, the tenth and eleventh centuries saw the rise of the manor and its growing significance in terms of both settlement and the management of land, a development which becomes very evident in the Domesday Book.
The collection of buildings discovered at Yeavering formed part of an Anglo-Saxon royal vill or king's tun. Such a tun consisted of a series of buildings designed to provide short-term accommodation for the king and his household. It is thought that the king would have travelled throughout his land dispensing justice and authority and collecting rents from his various estates. Such visits would be periodic, and it is likely that he would visit each royal villa only once or twice per year. The Latin term villa regia which Bede uses of the site suggests an estate centre as the functional heart of a territory held in the king's demesne. The territory is the land whose surplus production is taken into the centre as food-render to support the king and his retinue on their periodic visits as part of a progress around the kingdom. This territorial model, known as a multiple estate or shire, has been developed in a range of studies. Colm O'Brien, in applying this to Yeavering, proposes a geographical definition of the wider shire of Yeavering and also a geographical definition of the principal estate whose structures Hope-Taylor excavated. One characteristic that the king's tun shared with some other groups of places is that it was a point of public assembly. People came together not only to give the king and his entourage board and lodging, but also to attend upon the king in order to have disputes settled, cases appealed, lands granted, gifts given, appointments made, laws promulgated, policy debated, and ambassadors heard. People also assembled for other reasons, such as to hold fairs and to trade.
The first creations of towns are linked to a system of specialism at individual settlements, which is evidenced in the study of place-names. Sutterton, the "shoe-makers' tun" (in the area of the Danelaw such places are named Sutterby), was so named because local circumstances allowed the growth of a craft recognised by the people of surrounding places. Similarly with Sapperton, the "soap-makers' tun". Boultham, the "meadow with burdock plants", may well have developed a specialism in the production of burrs for wool-carding, since meadows in which burdock merely grew must have been relatively numerous. From places named for their services or location within a single district, of which the most obvious perhaps are the Eastons and Westons, it is possible to move outwards to glimpse component settlements within larger economic units. Some names betray a role within a system of seasonal pasture: Winderton in Warwickshire is the winter tun, and the various Somertons are self-explanatory. Hardwicks are dairy farms and Swinhopes the valleys where pigs were pastured.
Settlement patterns as well as village plans in England fall into two great categories: scattered farms and homesteads in upland and woodland Britain, nucleated villages across a swathe of central England. The chronology of nucleated villages is much debated and not yet clear. Yet there is strong evidence to support the view that nucleation occurred in the tenth century or perhaps the ninth, and was a development parallel to the growth of towns.
Women, children and slaves
Alfred's reference to 'praying men, fighting men and working men' is far from a complete description of his society.
Women in the Anglo-Saxon kingdoms appear to have enjoyed considerable independence, whether as abbesses of the great 'double monasteries' of monks and nuns founded during the seventh and eighth centuries, as major land-holders recorded in Domesday Book (1086), or as ordinary members of society. They could act as principals in legal transactions, were entitled to the same weregild as men of the same class, and were considered 'oath-worthy', with the right to defend themselves on oath against false accusations or claims. Sexual and other offences against them were penalised heavily. There is evidence that even married women could own property independently, and some surviving wills are in the joint names of husband and wife.
Marriage comprised a contract between the woman's family and the prospective bridegroom, who was required to pay a 'bride-price' in advance of the wedding and a 'morning gift' following its consummation. The latter became the woman's personal property, but the former may have been paid to her relatives, at least during the early period. Widows were in a particularly favourable position, with inheritance rights, custody of their children and authority over dependents. However, a degree of vulnerability may be reflected in laws stating that they should not be forced into nunneries or second marriages against their will. The system of primogeniture (inheritance by the first-born male) was not introduced to England until after the Norman Conquest, so Anglo-Saxon siblings – girls as well as boys – were more equal in terms of status.
The age of majority was usually either ten or twelve, when a child could legally take charge of inherited property, or be held responsible for a crime. It was common for children to be fostered, either in other households or in monasteries, perhaps as a means of extending the circle of protection beyond the kin group. Laws also make provision for orphaned children and foundlings.
The traditional distinction in society, amongst free men, was expressed as eorl and ceorl ('earl and churl') though the term 'Earl' took on a more restricted meaning after the Viking period. The noble rank is designated in early centuries as gesiþas ('companions') or þegnas ('thegns'), the latter coming to predominate. After the Norman Conquest the title 'thegn' was equated to the Norman 'baron'. A certain amount of social mobility is implied by regulations detailing the conditions under which a ceorl could become a thegn. Again these would have been subject to local variation, but one text refers to the possession of five hides of land (around 600 acres), a bell and a castle-gate, a seat and a special office in the king's hall. In the context of the control of boroughs, Frank Stenton notes that according to an 11th-century source, "a merchant who had carried out three voyages at his own charge [had also been] regarded as of thegnly status." Loss of status could also occur, as with penal slavery, which could be imposed not only on the perpetrator of a crime but on his wife and family.
A further division in Anglo-Saxon society was between slave and free. Slavery was not as common as in other societies, but appears to have been present throughout the period. Both the freemen and slaves were hierarchically structured, with several classes of freemen and many types of slaves. These varied at different times and in different areas, but the most prominent ranks within free society were the king, the nobleman or thegn, and the ordinary freeman or ceorl. They were differentiated primarily by the value of their weregild or 'man price', which was not only the amount payable in compensation for homicide, but was also used as the basis for other legal formulations such as the value of the oath that they could swear in a court of law. Slaves had no weregild, as offences against them were taken to be offences against their owners, but the earliest laws set out a detailed scale of penalties depending both on the type of slave and the rank of owner. Some slaves may have been members of the native British population conquered by the Anglo-Saxons when they arrived from the continent; others may have been captured in wars between the early kingdoms, or have sold themselves for food in times of famine. However, slavery was not always permanent, and slaves who had gained their freedom would become part of an underclass of freedmen below the rank of ceorl.
Architecture
Early Anglo-Saxon buildings in Britain were generally simple, not using masonry except in foundations but constructed mainly using timber with thatch roofing. Generally preferring not to settle within the old Roman cities, the Anglo-Saxons built small towns near their centres of agriculture, at fords in rivers, or near natural ports. In each town, a main hall stood in the centre, provided with a central hearth.
Only ten of the hundreds of settlement sites that have been excavated in England from this period have revealed masonry domestic structures, and these are confined to a few specific contexts. Timber was the natural building medium of the age: the Anglo-Saxon word for "building" is timber. Unlike in the Carolingian world, late Anglo-Saxon royal halls continued to be of timber in the manner of Yeavering centuries before, even though the king could clearly have mustered the resources to build in stone. Their preference must have been a conscious choice, perhaps an expression of deeply embedded Germanic identity on the part of the Anglo-Saxon royalty.
Even the elite had simple buildings, with a central fire and a hole in the roof to let the smoke escape; the largest homes rarely had more than one floor and one room. Buildings varied widely in size, most were square or rectangular, though some round houses have been found. Frequently these buildings have sunken floors, with a shallow pit over which a plank floor was suspended. The pit may have been used for storage, but more likely was filled with straw for insulation. A variation on the sunken floor design has been found in towns, where the "basement" may be as deep as 9 feet, suggesting a storage or work area below a suspended floor. Another common design was simple post framing, with heavy posts set directly into the ground, supporting the roof. The space between the posts was filled in with wattle and daub, or occasionally, planks. The floors were generally packed earth, though planks were sometimes used. Roofing materials varied, with thatch being the most common, though turf and even wooden shingles were also used.
Stone was sometimes used to build churches. Bede makes it clear that the masonry construction of churches, including his own at Jarrow, was undertaken morem Romanorum, 'in the manner of the Romans,' in explicit contrast to existing traditions of timber construction. Even at Canterbury, Bede believed that St Augustine's first cathedral had been 'repaired' or 'recovered' (recuperavit) from an existing Roman church, when in fact it had been newly constructed from Roman materials. The belief was "the Christian Church was Roman therefore a masonry church was a Roman building".
The building of churches in Anglo-Saxon England essentially began with Augustine of Canterbury in Kent following 597; for this he probably imported workmen from Frankish Gaul. The cathedral and abbey in Canterbury, together with churches in Kent at Minster in Sheppey (c. 664) and Reculver (669), and in Essex at the Chapel of St Peter-on-the-Wall at Bradwell-on-Sea, define the earliest type in southeast England. A simple nave without aisles provided the setting for the main altar; east of this a chancel arch separated the apse for use by the clergy. Flanking the apse and east end of the nave were side chambers serving as sacristies; further porticus might continue along the nave to provide for burials and other purposes. In Northumbria the early development of Christianity was influenced by the Irish mission, important churches being built in timber. Masonry churches became prominent from the late 7th century with the foundations of Wilfrid at Ripon and Hexham, and of Benedict Biscop at Monkwearmouth-Jarrow. These buildings had long naves and small rectangular chancels; porticus sometimes surrounded the naves. Elaborate crypts are a feature of Wilfrid's buildings. The best preserved early Northumbrian church is Escomb Church.
From the mid-8th century to the mid-10th century, several important buildings survive. One group comprises the first known churches utilizing aisles: Brixworth, the most ambitious Anglo-Saxon church to survive largely intact; Wareham St Mary's; Cirencester; and the rebuilding of Canterbury Cathedral. These buildings may be compared with churches in the Carolingian Empire. Other lesser churches may be dated to the late eighth and early ninth centuries on the basis of their elaborate sculptured decoration and have simple naves with side porticus. The tower of Barnack harks back to the West Saxon reconquest in the early 10th century, when decorative features that were to be characteristic of Late Anglo-Saxon architecture were already developed, such as narrow raised bands of stone (pilaster strips) to surround archways and to articulate wall surfaces, as at Barton-upon-Humber and Earls Barton. In plan, however, the churches remained essentially conservative.
From the monastic revival of the second half of the tenth century, only a few documented buildings survive or have been excavated. Examples include the abbeys of Glastonbury; Old Minster, Winchester; Romsey; Cholsey; and Peterborough Cathedral. The majority of churches that have been described as Anglo-Saxon fall into the period between the late 10th century and the early 12th century. During this period, many settlements were first provided with stone churches, but timber also continued to be used; the best wood-framed church to survive is Greensted Church in Essex, no earlier than the 9th century, and no doubt typical of many parish churches. On the continent during the eleventh century, a group of interrelated Romanesque styles developed, associated with the rebuilding of many churches on a grand scale, made possible by a general advance in architectural technology and mason-craft.
The first fully Romanesque church in England was Edward the Confessor's rebuilding of Westminster Abbey (c. 1042–60, now entirely lost to later construction), while the main development of the style only followed the Norman Conquest. However, at Stow Minster the crossing piers of the early 1050s are clearly proto-Romanesque. A more decorative interpretation of Romanesque in lesser churches can be dated only somewhere between the mid and late 11th century, e.g. Hadstock (Essex), Clayton and Sompting (Sussex); this style continued towards the end of the century as at Milborne Port (Somerset). At St Augustine's Abbey in Canterbury (c. 1048–61) Abbot Wulfric aimed to retain the earlier churches while linking them with an octagonal rotunda, but the concept was still essentially Pre-Romanesque. Anglo-Saxon churches of all periods would have been embellished with a range of arts, including wall-paintings, some stained glass, metalwork and statues.
St Peter-on-the-Wall, Essex: A simple nave church of the early style c. 650
Brixworth, Northants: monastery founded c. 690, one of the largest churches to survive relatively intact
Barnack, Peterborough: Lower tower c. 970 – spire is later
Sompting Church, Sussex, with the only Anglo-Saxon Rhenish helm tower to survive, c. 1050
Art
Early Anglo-Saxon art is seen mostly in decorated jewellery, like brooches, buckles, beads and wrist-clasps, some of outstanding quality. Characteristic of the 5th century is the quoit brooch with motifs based on crouching animals, as seen on the silver quoit brooch from Sarre, Kent. While the origins of this style are disputed, it is either an offshoot of provincial Roman, Frankish, or Jutish art. One style flourished from the late 5th century and continued throughout the 6th; found on many square-headed brooches, it is characterised by chip-carved patterns based on animals and masks. A different style, which gradually superseded it, is dominated by serpentine beasts with interlacing bodies.
By the later 6th century, the best works from the south-east are distinguished by greater use of expensive materials, above all gold and garnets, reflecting the growing prosperity of a more organised society which had greater access to imported precious materials, as seen in the buckle from the Taplow burial and the jewellery from Sutton Hoo, c.600 and c.625 respectively. The possible symbolism of the decorative elements like interlace and beast forms that were used in these early works remains unclear. These objects were the products of a society that invested its modest surpluses in personal display, which fostered craftsmen and jewellers of a high standard, and in which the possession of a fine brooch or buckle was a valuable status symbol.
The Staffordshire Hoard is the largest hoard of Anglo-Saxon gold and silver metalwork yet found. Discovered in a field near the village of Hammerwich, it consists of over 3,500 items that are nearly all martial in character and contains no objects specific to female uses. It demonstrates that considerable quantities of high-grade goldsmiths' work were in circulation among the elite during the 7th century. It also shows that the value of such items as currency and their potential roles as tribute or the spoils of war could, in a warrior society, outweigh appreciation of their integrity and artistry.
The Christianization of society revolutionised the visual arts, as well as other aspects of society. Art had to fulfil new functions, and whereas pagan art was abstract, Christianity required images clearly representing subjects. The transition between the pagan and Christian traditions is occasionally apparent in 7th-century works; examples include the Crundale buckle and the Canterbury pendant. In addition to fostering metalworking skills, Christianity stimulated stone sculpture and manuscript illumination. In these, Germanic motifs such as interlace and animal ornament, along with Celtic spiral patterns, are juxtaposed with Christian imagery and Mediterranean decoration, notably vine-scroll. The Ruthwell Cross, Bewcastle Cross and Easby Cross are leading Northumbrian examples of the Anglo-Saxon version of the Celtic high cross, generally with a slimmer shaft.
The jamb of the doorway at Monkwearmouth, carved with a pair of lacertine beasts, probably dates from the 680s; the golden, garnet-adorned pectoral cross of St Cuthbert was presumably made before 687; while his wooden inner coffin (incised with Christ and the Evangelists' symbols, the Virgin and Child, archangels and apostles), the Lindisfarne Gospels, and the Codex Amiatinus all date from c. 700. The fact that these works are all from Northumbria might be held to reflect the particular strength of the church in that kingdom. Works from the south were more restrained in their ornamentation than are those from Northumbria.
Lindisfarne was an important centre of book production, along with Ripon and Monkwearmouth-Jarrow. The Lindisfarne Gospels might be the single most beautiful book produced in the Middle Ages, and the Echternach Gospels and (probably) the Book of Durrow are other products of Lindisfarne. A Latin gospel book, the Lindisfarne Gospels are richly illuminated and decorated in an Insular style that blends Irish and Western Mediterranean elements and incorporates imagery from the Eastern Mediterranean, including Coptic Christianity. The Codex Amiatinus was produced in the north of England at the same time and has been called the finest book in the world. It is certainly one of the largest, weighing 34 kilograms. It is a pandect, which was rare in the Middle Ages, and included all the books of the Bible in one volume. The Codex Amiatinus was produced at Monkwearmouth-Jarrow in 692 under the direction of Abbot Ceolfrith. Bede probably had something to do with it. The production of the Codex shows the riches of the north of England at this time. We have records of the monastery needing a new grant of land to raise 2,000 more cattle to get the calf skins to make the vellum for the manuscript. The Codex Amiatinus was meant to be a gift to the pope, and Ceolfrith was taking it to Rome when he died on the way. The book ended up in Florence, where it still is today; a ninth-century copy of it is in the possession of the pope.
In the 8th century, Anglo-Saxon Christian art flourished with grand decorated manuscripts and sculptures, along with secular works which bear comparable ornament, like the Witham pins and the Coppergate helmet. The flourishing of sculpture in Mercia occurred slightly later than in Northumbria and is dated to the second half of the 8th century. The Book of Cerne is an early 9th century Insular or Anglo-Saxon Latin personal prayer book with Old English components. This manuscript was decorated and embellished with four painted full-page miniatures, major and minor letters, and continuing panels. Further decorated motifs used in these manuscripts, such as hunched, triangular beasts, also appear on objects from the Trewhiddle hoard (buried in the 870s) and on the rings which bear the names of King Æthelwulf and Queen Æthelswith, which are the centre of a small corpus of fine ninth-century metalwork.
There was demonstrable continuity in the south, even though the Danish settlement represented a watershed in England's artistic tradition. Wars and pillaging removed or destroyed much Anglo-Saxon art, while the settlement introduced new Scandinavian craftsmen and patrons. The result was to accentuate the pre-existing distinction between the art of the north and that of the south. In the 10th and 11th centuries, the Viking-dominated areas were characterised by stone sculpture in which the Anglo-Saxon tradition of cross shafts took on new forms, and a distinctive Anglo-Scandinavian monument, the 'hogback' tomb, was produced. The decorative motifs used on these northern carvings (as on items of personal adornment or everyday use) echo Scandinavian styles. The West Saxon hegemony and the monastic reform movement appear to have been the catalysts for the rebirth of art in southern England from the end of the 9th century. Here artists responded primarily to continental art, with foliage supplanting interlace as the preferred decorative motif. Key early works are the Alfred Jewel, which has fleshy leaves engraved on the back plate; and the stole and maniples of Bishop Frithestan of Winchester, which are ornamented with acanthus leaves, alongside figures that bear the stamp of Byzantine art. The surviving evidence points to Winchester and Canterbury as the leading centres of manuscript art in the second half of the 10th century: they developed colourful paintings with lavish foliate borders, and coloured line drawings.
By the early 11th century, these two traditions had fused and had spread to other centres. Although manuscripts dominate the corpus, sufficient architectural sculpture, ivory carving and metalwork survives to show that the same styles were current in secular art and became widespread in the south at parochial level. The wealth of England in the later tenth and eleventh century is clearly reflected in the lavish use of gold in manuscript art as well as for vessels, textiles and statues (now known only from descriptions). Widely admired, southern English art was highly influential in Normandy, France and Flanders from c. 1000. Indeed, keen to possess it or recover its materials, the Normans appropriated it in large quantities in the wake of the Conquest. The Bayeux Tapestry, probably designed by a Canterbury artist for Bishop Odo of Bayeux, is arguably the apex of Anglo-Saxon art. Surveying nearly 600 years of continuous change, three common strands stand out: lavish colour and rich materials; an interplay between abstract ornament and representational subject matter; and a fusion of art styles reflecting English links to other parts of Europe.
Sutton Hoo purse-lid c. 620
Codex Aureus of Canterbury c. 750
Ruthwell Cross c. 750
Trewhiddle style on silver ring c. 775 – c. 850
Language
Old English (Ænglisċ, Anglisċ, Englisċ) is the earliest form of the English language. It was brought to Britain by Anglo-Saxon settlers, and was spoken and written in parts of what are now England and southeastern Scotland until the mid-12th century, by which time it had evolved into Middle English. Old English was a West Germanic language, closely related to Old Frisian and Old Saxon (Old Low German). The language was fully inflected, with five grammatical cases, three grammatical numbers and three grammatical genders. Over time, Old English developed into four major dialects: Northumbrian, spoken north of the Humber; Mercian, spoken in the Midlands; Kentish, spoken in Kent; and West Saxon, spoken across the south and southwest. All of these dialects have direct descendants in modern England. Standard English developed from the Mercian dialect, as it was predominant in London.
It is generally held that Old English received little influence from the Common Brittonic and British Latin spoken in southern Britain prior to the arrival of the Anglo-Saxons, as it took in very few loan words from these languages. Though some scholars have claimed that Brittonic could have exerted an influence on English syntax and grammar, these ideas have not become consensus views, and have been criticized by other historical linguists. Richard Coates has concluded that the strongest candidates for substratal Brittonic features in English are grammatical elements occurring in regional dialects in the north and west of England, such as the Northern Subject Rule.
Old English was more clearly influenced by Old Norse. Scandinavian loan words in English include place names, items of basic vocabulary such as sky, leg and they, and words concerned with particular administrative aspects of the Danelaw (that is, the area of land under Viking control, including the East Midlands and Northumbria south of the Tees). Old Norse was related to Old English, as both originated from Proto-Germanic, and many linguists believe that the loss of inflectional endings in Old English was accelerated by contact with Norse.
Kinship
Local and extended kin groups were a key aspect of Anglo-Saxon culture. Kinship fuelled societal advantages, freedom and the relationship to an elite that allowed the Anglo-Saxons' culture and language to flourish. The ties of loyalty to a lord were to the person of a lord and not to his station; there was no real concept of patriotism or loyalty to a cause. This explains why dynasties waxed and waned so quickly, since a kingdom was only as strong as its leader-king. There was no underlying administration or bureaucracy to maintain any gains beyond the lifetime of a leader. An example of this was the leadership of Rædwald of East Anglia and how the East Anglian primacy did not survive his death. Kings could not make new laws barring exceptional circumstances. Their role instead was to uphold and clarify previous custom and to assure their subjects that they would uphold their ancient privileges, laws, and customs. Although the person of the king as a leader could be exalted, the office of kingship was not in any sense as powerful or as invested with authority as it was to become. One of the tools kings used was to tie themselves closely to the new Christian church, through the practice of having a church leader anoint and crown the king; God and king were then joined in people's minds.
The ties of kinship meant that the relatives of a murdered person were obliged to exact vengeance for his or her death. This led to bloody and extensive feuds. As a way out of this deadly and futile custom, the system of weregilds was instituted. The weregild set a monetary value on each person's life according to their wealth and social status. This value could also be used to set the fine payable if a person was injured or offended against. Robbing a thane called for a higher penalty than robbing a ceorl. On the other hand, a thane who thieved could pay a higher fine than a ceorl who did likewise. Men were willing to die for their lord and to support their comitatus (their warrior band). Evidence of this behaviour (though it may be more a literary ideal than an actual social practice) can be observed in the story, made famous in the Anglo-Saxon Chronicle entry for 755, of Cynewulf and Cyneheard, in which the followers of a defeated king decided to fight to the death rather than be reconciled after the death of their lord.
This emphasis on social standing affected all parts of the Anglo-Saxon world. The courts, for example, did not attempt to discover the facts in a case; instead, in any dispute it was up to each party to get as many people as possible to swear to the rightness of their case, which became known as oath-swearing. The word of a thane counted for that of six ceorls. It was assumed that any person of good character would be able to find enough people to swear to his innocence that his case would prosper.
Anglo-Saxon society was also decidedly patriarchal, but women were in some ways better off than they would be in later times. A woman could own property in her own right. She could and did rule a kingdom if her husband died. She could not be married without her consent, and any personal goods, including lands, that she brought into a marriage remained her own property. If she were injured or abused in her marriage, her relatives were expected to look after her interests.
Law
The most noticeable feature of the Anglo-Saxon legal system is the apparent prevalence of legislation in the form of law codes. The early Anglo-Saxons were organised in various small kingdoms often corresponding to later shires or counties. The kings of these small kingdoms issued written laws, one of the earliest of which is attributed to Æthelberht, king of Kent, c. 560–616. The Anglo-Saxon law codes follow a pattern found in mainland Europe, where other groups of the former Roman Empire encountered government dependent upon written sources of law and hastened to display the claims of their own native traditions by reducing them to writing. These law codes should not be thought of as operating like modern legislation; rather, they were educational and political tools designed to demonstrate standards of good conduct rather than to act as criteria for subsequent legal judgment.
Although not themselves sources of law, Anglo-Saxon charters are a most valuable historical source for tracing the actual legal practices of the various Anglo-Saxon communities. A charter was a written document from a king or other authority confirming a grant either of land or some other valuable right. Their prevalence in the Anglo-Saxon state is a sign of sophistication. They were frequently appealed to and relied upon in litigation. Making grants and confirming those made by others was a major way in which Anglo-Saxon kings demonstrated their authority.
The royal council or witan played a central but limited role in the Anglo-Saxon period. The main feature of the system was its high degree of decentralisation. The interference by the king through his granting of charters and the activity of his witan in litigation are exceptions rather than the rule in Anglo-Saxon times. The most important court in the later Anglo-Saxon period was the shire court. Many shires (such as Kent and Sussex) were in the early days of the Anglo-Saxon settlement the centre of small independent kingdoms. As the kings first of Mercia and then of Wessex slowly extended their authority over the whole of England, they left the shire courts with overall responsibility for the administration of law. The shire met in one or more traditional places, earlier in the open air and then later in a moot or meeting hall. The meeting of the shire court was presided over by an officer, the shire reeve or sheriff, whose appointment came in later Anglo-Saxon times into the hands of the king but had in earlier times been elective. The sheriff was not the judge of the court, merely its president. The judges of the court were all those who had the right and duty of attending the court, the suitors. These were originally all free male inhabitants of the neighbourhood, but over time suit of court became an obligation attached to particular holdings of land. The sessions of a shire court resembled more closely those of a modern local administrative body than a modern court. It could and did act judicially, but this was not its prime function. In the shire court, charters and writs would be read out for all to hear.
Below the level of the shire, each county was divided into areas known as hundreds (or wapentakes in the north of England). These were originally groups of families rather than geographical areas. The hundred court was a smaller version of the shire court, presided over by the hundred bailiff, formerly a sheriff's appointment, but over the years many hundreds fell into the private hands of a local large landowner. Little is known about hundred court business, which was likely a mix of the administrative and judicial, but they remained in some areas an important forum for the settlement of local disputes well into the post-Conquest period.
The Anglo-Saxon system put an emphasis upon compromise and arbitration: litigating parties were enjoined to settle their differences if possible. If they persisted in bringing a case for decision before a shire court, then it could be determined there. The suitors of the court would pronounce a judgment which fixed how the case would be decided: legal problems were considered to be too complex and difficult for mere human decision, and so proof or demonstration of the right would depend upon some irrational, non-human criterion. The normal methods of proof were oath-helping or the ordeal. Oath-helping involved the party undergoing proof swearing to the truth of his claim or denial and having that oath reinforced by five or more others, chosen either by the party or by the court. The number of helpers required and the form of their oath differed from place to place and according to the nature of the dispute. If either the party or any of the helpers failed in the oath, either refusing to take it or sometimes even making an error in the required formula, the proof failed and the case was adjudged to the other side. As "wager of law," it remained a way of determining cases in the common law until its abolition in the 19th century.
The ordeal offered an alternative for those unable or unwilling to swear an oath. The two most common methods were the ordeal by hot iron and by cold water. The former consisted of carrying a red-hot iron for five paces: the wound was immediately bound up, and if on unbinding it was found to be festering, the case was lost. In the ordeal by water, the victim, usually an accused person, was cast bound into water: if he sank he was innocent, if he floated he was guilty. Although the ordeals became, for perhaps understandable reasons, associated with trials in criminal matters, they were in essence tests of the truth of a claim or denial of a party, appropriate for trying any legal issue. The allocation of a mode of proof and who should bear it was the substance of the shire court's judgment.
Literature
Old English literary works include genres such as epic poetry, hagiography, sermons, Bible translations, legal works, chronicles, riddles and others. In all there are about 400 surviving manuscripts from the period, a significant corpus of both popular interest and specialist research. The manuscripts use a modified Roman alphabet, but Anglo-Saxon runes or futhorc are used in under 200 inscriptions on objects, sometimes mixed with Roman letters.
This literature is remarkable for being in the vernacular (Old English) in the early medieval period: almost all other written literature in Western Europe was in Latin at this time, but because of Alfred's programme of vernacular literacy, the oral traditions of Anglo-Saxon England ended up being converted into writing and preserved. Much of this preservation can be attributed to the monks of the tenth century, who made – at the very least – the copies of most of the literary manuscripts that still exist. Manuscripts were not common items. They were expensive and hard to make. First, cows or sheep had to be slaughtered and their skins tanned. The leather was then scraped, stretched, and cut into sheets, which were sewn into books. Then inks had to be made from oak galls and other ingredients, and the books had to be hand written by monks using quill pens. Every manuscript is slightly different from another, even if they are copies of each other, because every scribe had different handwriting and made different errors. Individual scribes can sometimes be identified from their handwriting, and different styles of hand were used in specific scriptoria (centres of manuscript production), so the location of the manuscript production can often be identified.
There are four great poetic codices of Old English poetry (a codex is a book in modern format, as opposed to a scroll): the Junius Manuscript, the Vercelli Book, the Exeter Book, and the Nowell Codex or Beowulf Manuscript; most of the well-known lyric poems such as The Wanderer, The Seafarer, Deor and The Ruin are found in the Exeter Book, while the Vercelli Book has the Dream of the Rood, some of which is also carved on the Ruthwell Cross. The Franks Casket also has carved riddles, a popular form with the Anglo-Saxons. Old English secular poetry is mostly characterized by a somewhat gloomy and introspective cast of mind, and by the grim determination found in The Battle of Maldon, recounting an action against the Vikings in 991. This is from a book that was lost in the Cotton Library fire of 1731, but it had been transcribed previously.
Rather than being organised around rhyme, the poetic line in Anglo-Saxon is organised around alliteration, the repetition of stressed sounds; any repeated stressed sound, vowel or consonant, could be used. Anglo-Saxon lines are made up of two half-lines (in old-fashioned scholarship, these are called hemistichs) divided by a breath-pause or caesura. There must be at least one of the alliterating sounds on each side of the caesura.
hreran mid hondum hrimcealde sæ
The line above illustrates the principle: note that there is a natural pause after 'hondum' and that the first stressed syllable after that pause begins with the same sound as a stressed syllable from the first half-line (the first half-line is called the a-verse and the second the b-verse).
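For readers who find the rule easier to see stated operationally, the following is a minimal sketch (not part of the original text) of the alliteration check just described. The stressed-word lists, the treatment of every initial vowel as a single alliterating class, and the crude notion of an "initial sound" are all simplifying assumptions; real Old English scansion involves many further rules that this toy ignores.

```python
# Toy illustration of the alliterative rule described above (not a scholarly
# scansion tool). Assumes each half-line is given as its stressed words only.

def initial_sound(word: str) -> str:
    """Return a rough 'initial sound' class for a word."""
    word = word.lower()
    if word[0] in "aeiouyæ":  # simplification: treat all initial vowels as one class
        return "VOWEL"
    return word[0]

def line_alliterates(a_verse: list[str], b_verse: list[str]) -> bool:
    """True if some stressed word in the b-verse shares an initial sound with a
    stressed word in the a-verse, i.e. the alliteration crosses the caesura."""
    return bool({initial_sound(w) for w in a_verse} & {initial_sound(w) for w in b_verse})

# The line from The Wanderer quoted above:
#   a-verse: 'hreran mid hondum'   b-verse: 'hrimcealde sæ'
print(line_alliterates(["hreran", "hondum"], ["hrimcealde", "sæ"]))  # True: shared 'h'
```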
There is very strong evidence that Anglo-Saxon poetry has deep roots in oral tradition, but in keeping with the cultural practices seen elsewhere in Anglo-Saxon culture, there was a blending between tradition and new learning. Thus while all Old English poetry has common features, three strands can be identified: religious poetry, which includes poems about specifically Christian topics, such as the cross and the saints; heroic or epic poetry, such as Beowulf, which is about heroes, warfare, monsters, and the Germanic past; and poetry about "smaller" topics, including introspective poems (the so-called elegies), "wisdom" poems (which communicate both traditional and Christian wisdom), and riddles. For a long time all Anglo-Saxon poetry was divided into three groups: Cædmonian (the biblical paraphrase poems), heroic, and "Cynewulfian," named after Cynewulf, one of the few named Anglo-Saxon poets. The most famous work from this period is the epic poem Beowulf, which has achieved national epic status in Britain.
There are about 30,000 surviving lines of Old English poetry and about ten times that much prose, and the majority of both is religious. The prose was influential and obviously very important to the Anglo-Saxons, and it mattered more than the poetry to those who came after them. Homilies are sermons, lessons to be given on moral and doctrinal matters, and the two most prolific and respected writers of Anglo-Saxon prose, Ælfric and Wulfstan, were both homilists. Almost all surviving poetry is found in only one manuscript copy, but there are several versions of some prose works, especially the Anglo-Saxon Chronicle, which was apparently promulgated to monasteries by the royal court. Anglo-Saxon clergy also continued to write in Latin, the language of Bede's works, monastic chronicles, and theological writing, although Bede's biographer records that he was familiar with Old English poetry and gives a five-line lyric which he either wrote or liked to quote – the sense is unclear.
Symbolism was an essential element in Anglo-Saxon culture. Julian D. Richards suggests that in societies with strong oral traditions, material culture is used to store and pass on information, standing in place of literature. This symbolism is less logical than literature and more difficult to read. Anglo-Saxons used symbolism to communicate as well as to aid their thinking about the world, and they used symbols to differentiate between groups and people, and to mark status and role in society.
The visual riddles and ambiguities of early Anglo-Saxon animal art, for example, have been seen as emphasising the protective roles of animals on dress accessories, weapons, armour and horse equipment, and as evoking pre-Christian mythological themes. However, Howard Williams and Ruth Nugent have suggested that the purpose behind the many artefact categories bearing animals or eyes—from pots to combs, buckets to weaponry—was to make artefacts 'see' by impressing and punching circular and lentoid shapes onto them. This symbolism of making the object 'see' appears to be more than decoration.
Conventional interpretations of the symbolism of grave goods revolved around religion (equipment for the hereafter), legal concepts (inalienable possessions) and social structure (status display, ostentatious destruction of wealth). A multiplicity of messages and a variability of meanings characterised the deposition of objects in Anglo-Saxon graves. In Early Anglo-Saxon cemeteries, 47% of male adults and 9% of all juveniles were buried with weapons. The proportion of adult weapon burials is much too high to suggest that they all represent a social elite. The usual assumption is that these are 'warrior burials', and this term is used throughout the archaeological and historical literature. However, a systematic comparison of burials with and without weapons, using archaeological and skeletal data, suggests that this assumption is much too simplistic and even misleading. The Anglo-Saxon weapon burial rite involved a complex ritual symbolism: it was multi-dimensional, displaying ethnic affiliation, descent, wealth, élite status, and age groups. This symbolism continued until c. 700, when it ceased to have the power it had held before. Heinrich Härke suggests this change was the result of the changing structure of society, especially changes in ethnicity and assimilation, implying the lowering of ethnic boundaries in the Anglo-Saxon settlement areas of England towards a common culture.
The word bead comes from the Anglo-Saxon words bidden (to pray) and bede (prayer). The vast majority of early Anglo-Saxon female graves contain beads, which are often found in large numbers in the area of the neck and chest. Beads are sometimes found in male burials, with large beads often associated with prestigious weapons. A variety of materials other than glass were available for Anglo-Saxon beads, including amber, rock crystal, amethyst, bone, shells, coral and even metal. These beads are usually considered to have a social or ritual function. Anglo-Saxon glass beads show a wide variety of bead manufacturing techniques, sizes, shapes, colours and decorations. Various studies have been carried out investigating the distribution and chronological change of bead types. The crystal beads which appear on bead strings in the pagan Anglo-Saxon period seem to have gone through various changes in meaning in the Christian period, which Gale Owen-Crocker suggests was linked to symbolism of the Virgin Mary, and hence to intercession. John Hines has suggested that the over 2,000 different types of beads found at Lakenheath show that beads symbolised identity, roles, status and micro-cultures within the tribal landscape of the early Anglo-Saxon world.
Symbolism continued to have a hold on the minds of Anglo-Saxon people into the Christian eras. The interiors of churches would have glowed with colour, and the walls of halls were painted with decorative scenes from the imagination, telling stories of monsters and heroes like those in the poem Beowulf. Although little is left of the wall paintings, evidence of their pictorial art is found in Bibles and Psalters, in illuminated manuscripts. The poem The Dream of the Rood is an example of how the symbolism of trees was fused with Christian symbolism. Richard North suggests that the sacrifice of the tree was in accordance with pagan virtues and that "the image of Christ's death was constructed in this poem with reference to an Anglian ideology of the world tree". North suggests that the author of The Dream of the Rood "uses the language of the myth of Ingui in order to present the Passion to his newly Christianized countrymen as a story from their native tradition". Furthermore, the tree's triumph over death is celebrated by adorning the cross with gold and jewels.
The most distinctive feature of coinage of the first half of the 8th century is its portrayal of animals, to an extent found in no other European coinage of the Early Middle Ages. Some animals, such as lions or peacocks, would have been known in England only through descriptions in texts or through images in manuscripts or on portable objects. The animals were not merely illustrated out of an interest in the natural world. Each was imbued with meanings and acted as a symbol which would have been understood at the time.
The food eaten by Anglo-Saxons was long presumed to differ between elites and commoners. However, a 2022 study by the University of Cambridge found that Anglo-Saxon elites and royalty ate a primarily vegetarian diet based on cereal grains, just as peasants did. The finding came after bioarchaeologist Sam Leggett analysed chemical dietary signatures from the bones of 2,023 people buried in England between the 5th and 11th centuries and cross-referenced the analysis with markers of social status. Rather than elites regularly holding banquets with huge quantities of meat, the researchers concluded that such meat-rich meals were occasional grand feasts hosted by peasants for their rulers.
Anglo-Saxon is still used as a term for the original Old English-derived vocabulary within the modern English language, in contrast to vocabulary derived from Old Norse and French.
Throughout the history of Anglo-Saxon studies, different narratives of the people have been used to justify contemporary ideologies. In the early Middle Ages, the views of Geoffrey of Monmouth produced a personally inspired (and largely fictitious) history that was not challenged for some 500 years. In the Reformation, Christians looking to establish an independent English church reinterpreted Anglo-Saxon Christianity. In the 19th century, the term Anglo-Saxon was broadly used in philology, and is sometimes so used at present, though the term 'Old English' is more commonly used. During the Victorian era, writers such as Robert Knox, James Anthony Froude, Charles Kingsley and Edward A. Freeman used the term Anglo-Saxon to justify colonialist imperialism, claiming that Anglo-Saxon heritage was superior to that of colonised peoples, which justified efforts to "civilise" them. Similar racist ideas were advocated in the 19th-century United States by Samuel George Morton and George Fitzhugh. The historian Catherine Hills contends that these views have influenced how versions of early English history are embedded in the sub-conscious of certain people and are "re-emerging in school textbooks and television programmes and still very congenial to some strands of political thinking."
The term Anglo-Saxon is sometimes used to refer to peoples descended from or associated in some way with the English ethnic group, but there is no universal definition for the term. In contemporary Anglophone cultures outside Britain, "Anglo-Saxon" may be contrasted with "Celtic" as a socioeconomic identifier, invoking or reinforcing historical prejudices against non-English British and Irish immigrants. "White Anglo-Saxon Protestant" (WASP) is a term especially popular in the United States that refers chiefly to long-established wealthy families with mostly English ancestors. As such, WASP is not a historical label or a precise ethnological term but rather a reference to contemporary family-based political, financial and cultural power, e.g. the Boston Brahmins.
The term Anglo-Saxon is becoming increasingly controversial among some scholars, especially those in America, because of its modern politicised nature and its adoption by the far right. In 2019, the International Society of Anglo-Saxonists changed its name to the International Society for the Study of Early Medieval England in recognition of this controversy.
Outside Anglophone countries, the term Anglo-Saxon and its direct translations are used to refer to the Anglophone peoples and societies of Britain, the United States, and other countries such as Australia, Canada and New Zealand – areas which are sometimes referred to as the Anglosphere. The term Anglo-Saxon can be used in a variety of contexts, often to identify the English-speaking world's distinctive language, culture, technology, wealth, markets, economy, and legal systems. Variations include the German "Angelsachsen", French "Anglo-Saxon", Spanish "anglosajón", Portuguese "Anglo-saxão", Russian "англосаксы", Polish "anglosaksoński", Italian "anglosassone", Catalan "anglosaxó" and Japanese "Angurosakuson".
- Anglo-Saxon dress
- Anglo-Saxon military organization
- Burial in Anglo-Saxon England
- Coinage in Anglo-Saxon England
- States in Medieval Britain
- Timeline of Anglo-Saxon settlement in Britain
- ^ Throughout this article Anglo-Saxon is used for Saxons, Angles, Jutes, or Frisians unless it is specific to a point being made; "Anglo-Saxon" is used when specifically the culture is meant rather than any ethnicity. However, all these terms are used interchangeably by scholars.
- ^ The delimiting dates vary; often cited are 410, date of the Sack of Rome by Alaric I; and 751, the accession of Pippin the Short and the establishment of the Carolingian dynasty.
- ^ There is much evidence for loosely managed and shifting cultivation and no evidence of "top down" structured landscape planning.
- ^ Confirmation of this interpretation may come from Bede's account of the battle of the river Winwæd of 655, where it is said that Penda of Mercia, overlord of all the southern kingdoms, was able to call upon thirty contingents, each led by duces regii – royal commanders.
- ^ From its reference to "Aldfrith, who now reigns peacefully" it must date to between 685 and 704.
- ^ Oswiu of Northumbria (642–70) only won authority over the southern kingdoms after he defeated Penda at the battle of the Winwæd in 655 and must have lost it again soon after Wulfhere regained control in Mercia in 658.
- ^ Example from the Wanderer
- ^ Williams, Joseph M. (1986). Origins of the English Language: A Social and Linguistic History. ISBN 978-0-02-934470-5.
- ^ a b Higham, Nicholas J., and Martin J. Ryan. The Anglo-Saxon World. Yale University Press, 2013.
- ^ Higham, Nicholas J., and Martin J. Ryan. The Anglo-Saxon World. Yale University Press, 2013. p. 7
- ^ Richard M. Hogg, ed. The Cambridge History of the English Language: Vol 1: the Beginnings to 1066 (1992)
- ^ Higham, Nicholas J., and Martin J. Ryan. The Anglo-Saxon World. Yale University Press, 2013. pp. 7–19
- ^ Hamerow, Helena. Rural Settlements and Society in Anglo-Saxon England. Oxford University Press, 2012. p166
- ^ Sarah Knapton (18 March 2015). "Britons still live in Anglo-Saxon tribal kingdoms, Oxford University finds". Daily Telegraph. Archived from the original on 2022-01-10. Retrieved 19 March 2015.
- ^ Higham & Ryan 2013:7"The Anglo-Saxon World"
- ^ Hills, Catherine. Origins of the English. Duckworth Pub, 2003. p. 21
- ^ Richter, Michael. "Bede's Angli: Angles or English?." Peritia 3.1 (1984): 99–114.
- ^ Two Lives of Gildas by a Monk of Ruys, and Caradoc of Llancarfan. Translated by Williams, Hugh. Felinfach: Llanerch. 1990 . p. 32. ISBN 0947992456. Retrieved 6 September 2020.
- ^ Holman, Katherine (2007). The Northern Conquest: Vikings in Britain and Ireland. Signal Books. p. 187. ISBN 9781904955344.
- ^ McKitterick, Rosamond. "Paul the Deacon and the Franks." Early Medieval Europe 8.3 (1999): 319–339.
- ^ Hills, Catherine. Origins of the English. Duckworth Pub, 2003: 14
- ^ The complete sentence was Non Angli, sed angeli, si forent Christiani. "They are not Angles, but angels, if they were Christian", see p. 117 of Zuckermann, Ghil'ad (2003), Language Contact and Lexical Enrichment in Israeli Hebrew. Palgrave Macmillan. ISBN 9781403917232 / ISBN 9781403938695
- ^ Timofeeva, Olga. "Of ledenum bocum to engliscum gereorde." Communities of Practice in the History of English 235 (2013): 201.
- ^ Nicholas Brooks (2003). "English Identity from Bede to the Millenium". The Haskins Society Journal. 14: 35–50.
- ^ "The Acts and Monuments Online". www.johnfoxe.org. Archived from the original on 2017-01-03. Retrieved 2017-01-02.
- ^ Gates, Jay Paul. "Ealles Englalandes Cyningc: Cnut's Territorial Kingship and Wulfstan's Paronomastic Play."
- ^ Sawyer, Peter H. 1978. From Roman Britain to Norman England. New York: St. Martin's Press: 167
- ^ Ellis, Steven G. A View of the Irish Language: Language and History in Ireland from the Middle Ages to the Present.
- ^ Hills, Catherine. Origins of the English. Duckworth Pub, 2003: 15
- ^ "Definition of "Völkerwanderung" – Collins English Dictionary".
- ^ John Hines, Karen Høilund Nielsen, Frank Siegmund, The Pace of Change: Studies in Early-Medieval Chronology, Oxbow Books, 1999, p. 93, ISBN 978-1-900188-78-4
- ^ Bury, J. B., The Invasion of Europe by the Barbarians, Norton Library, 1967.
- ^ Campbell, James (1986). Essays in Anglo-Saxon history. London: Hambledon Press. ISBN 0-907628-32-X. OCLC 458534293.
- ^ P. Salway, Roman Britain (Oxford, Oxford University Press, 1981), pp. 295–311, 318, 322, 349, 356, 380, 401–405
- ^ In the abstract for: Härke, Heinrich. "Anglo-Saxon Immigration and Ethnogenesis." Medieval Archaeology 55.1 (2011): 1–28.
- ^ Jones & Casey 1988:367–98 "The Gallic Chronicle Restored: a Chronology for the Anglo-Saxon Invasions and the End of Roman Britain"
- ^ "EBK: Adventus Saxonum Part 2". www.earlybritishkingdoms.com. Archived from the original on 2017-04-02. Retrieved 2017-01-02.
- ^ Higham, Nicholas (1995). An English Empire: Bede and the Early Anglo-Saxon Kings. Manchester University Press. p. 2. ISBN 9780719044243.
- ^ Hills, C.; Lucy, S. (2013). Spong Hill IX: Chronology and Synthesis. Cambridge: McDonald Institute for Archaeological Research. ISBN 978-1-902937-62-5.
- ^ Dark, K., Civitas to Kingdom: British Political Continuity 300–80 (London, Leicester University Press, 1994)
- ^ Higham, Nick. "From sub-Roman Britain to Anglo-Saxon England: Debating the Insular Dark Ages." History Compass 2.1 (2004).
- ^ Brugmann, B. I. R. T. E. "Migration and endogenous change." The Oxford Handbook of Anglo-Saxon Archaeology (2011): 30–45.
- ^ a b c d e Härke, Heinrich. "Anglo-Saxon Immigration and Ethnogenesis." Medieval Archaeology 55.1 (2011): 1–28.
- ^ Bryan Ward-Perkins, "Why did the Anglo-Saxons not become more British?" in The English Historical Review, Oxford University Press (2000)
- ^ Hills 2003:11–20 Origins of the English
- ^ Schiffels, S. and Sayer, D., "Investigating Anglo-Saxon migration history with ancient and modern DNA," 2017, in H. H. Meller, F. Daim, J. Krause and R. Risch (eds), Migration and Integration from Prehistory to the Middle Ages. Tagungen des Landesmuseums für Vorgeschichte Halle, Saale
- ^ Hughes, Susan S., Millard, Andrew R., Chenery, Carolyn A., Nowell, Geoff and Pearson, D. Graham (2018). 'Isotopic analysis of burials from the early Anglo-Saxon cemetery at Eastbourne, Sussex, U.K.', Journal of Archaeological Science: Reports, 19, pp. 513–525.
- ^ Brooks, Nicholas. "The formation of the Mercian Kingdom." The Origins of Anglo-Saxon Kingdoms (1989): 159–170.
- ^ Wood, Michael (25 May 2012). "Viewpoint: The time Britain slid into chaos". BBC News.
- ^ Coates, Richard. "Invisible Britons: The view from linguistics. Paper circulated in connection with the conference Britons and Saxons, 14–16 April. University of Sussex Linguistics and English Language Department." (2004)
- ^ Hamerow, H. Early Medieval Settlements: The Archaeology of Rural Communities in North-West Europe, 400–900. Oxford: Oxford University Press.
- ^ a b Dark, Ken R. (2003). "Large-scale population movements into and from Britain south of Hadrian's Wall in the fourth to sixth centuries AD" (PDF). Archived (PDF) from the original on 2022-10-09.
- ^ Toby F. Martin, The Cruciform Brooch and Anglo-Saxon England, Boydell and Brewer Press (2015), pp. 174–178
- ^ Kortlandt, Frederik (2018). "Relative Chronology" (PDF). Archived (PDF) from the original on 2022-10-09.
- ^ a b Coates, Richard. "Celtic whispers: revisiting the problems of the relation between Brittonic and Old English".
- ^ Jean Merkale, King of the Celts: Arthurian Legends and Celtic Tradition (1994), pp. 97–98
- ^ Nicholas Ostler, Ad Infinitum: A Biography of Latin (2009: Bloomsbury Publishing), p. 141
- ^ Jim Storr, King Arthur's Wars: The Anglo-Saxon Conquest of England (2016), p. 114
- ^ Jean Manco, The Origins of the Anglo-Saxons (2018: Thames & Hudson), pp. 131–139
- ^ Martin Grimmer, "Britons in Early Wessex: The Evidence of the Law Code of Ine," in Britons in Anglo-Saxon England, ed. Nick Higham (2007: Boydell and Brewer)
- ^ Koch, J.T., (2006) Celtic Culture: A Historical Encyclopedia, ABC-CLIO, ISBN 1-85109-440-7, pp. 392–393.
- ^ Myres, J.N.L. (1989) The English Settlements. Oxford University Press, pp. 146–147
- ^ Ward-Perkins, B., "Why did the Anglo-Saxons not become more British?" The English Historical Review 115.462 (June 2000): p. 513.
- ^ Yorke, B. (1990), Kings and Kingdoms of Early Anglo-Saxon England, London: Seaby, ISBN 1-85264-027-8 pp. 138–139
- ^ Celtic culture: a historical encyclopedia, ABC-CLIO, 2006, ISBN 1851094407, 9781851094400, p. 60
- ^ Mike Ashley, The Mammoth Book of British Kings and Queens (2012: Little, Brown Book Group)
- ^ Schiffels, Stephan; Haak, Wolfgang; Paajanen, Pirita; Llamas, Bastien; Popescu, Elizabeth; Loe, Louise; Clarke, Rachel; Lyons, Alice; Mortimer, Richard; Sayer, Duncan; Tyler-Smith, Chris; Cooper, Alan; Durbin, Richard (January 19, 2016). "Iron Age and Anglo-Saxon genomes from East England reveal British migration history". Nature Communications. 7 (1): 10408. Bibcode:2016NatCo...710408S. doi:10.1038/ncomms10408. PMC 4735688. PMID 26783965 – via www.nature.com.
- ^ Martiniano, Rui; Caffell, Anwen; Holst, Malin; Hunter-Mann, Kurt; Montgomery, Janet; Müldner, Gundula; McLaughlin, Russell L.; Teasdale, Matthew D.; van Rheenen, Wouter; Veldink, Jan H.; van den Berg, Leonard H.; Hardiman, Orla; Carroll, Maureen; Roskams, Steve; Oxley, John; Morgan, Colleen; Thomas, Mark G.; Barnes, Ian; McDonnell, Christine; Collins, Matthew J.; Bradley, Daniel G. (January 19, 2016). "Genomic signals of migration and continuity in Britain before the Anglo-Saxons". Nature Communications. 7 (1): 10326. Bibcode:2016NatCo...710326M. doi:10.1038/ncomms10326. PMC 4735653. PMID 26783717. S2CID 13817552.
- ^ Ross P. Byrne, Rui Martiniano, Lara M. Cassidy, Matthew Carrigan, Garrett Hellenthal, Orla Hardiman, Daniel G. Bradley, Russell L. McLaughlin, "Insular Celtic population structure and genomic footprints of migration," PLOS Genetics (January 2018)
- ^ Higham, Nicholas J. An English Empire: Bede, the Britons, and the Early Anglo-Saxon Kings. Vol 2 p.244
- ^ Oosthuizen, Susan. Tradition and Transformation in Anglo-Saxon England: Archaeology, Common Rights and Landscape. Bloomsbury Academic, 2013.
- ^ Hodges, R 1982: Dark Age Economics: The Origins of Towns and Trade A.D. 600–1000. London
- ^ a b Yorke, Barbara. Kings and Kingdoms of Early Anglo-Saxon England. Routledge, 2002.
- ^ Campbell, J 1979: Bede's Reges and Principes. Jarrow Lecture (Campbell 1986, 85–98)
- ^ Yorke, Barbara. "Kings and Kingship," A Companion to the Early Middle Ages (2009): 76.
- ^ Gerrard, James. The Ruin of Roman Britain: An Archaeological Perspective. Cambridge University Press, 2013.
- ^ Bede, Historia Ecclesiastica, II, 5.
- ^ Britain AD: King Arthur's Britain, Programme 2 – Three part Channel 4 series. 2004
- ^ Heaney, Seamus, trans. Beowulf (2000).
- ^ Brown, Peter. The Rise of Western Christendom, 2nd edition. Oxford and Malden: Blackwell Publishing, 2003. p328
- ^ Bede, Book III, chapters 3 and 5.
- ^ Stenton 1987, p. 88.
- ^ Campbell 1982, pp. 80–81.
- ^ Colgrave, Earliest Life of Gregory the Great, p. 9.
- ^ Higham, Nicholas J. The English conquest: Gildas and Britain in the fifth century. Vol. 1. Manchester University Press, 1994.
- ^ a b Keynes, Simon. "England, 700–900." The New Cambridge Medieval History 2 (1995): 18–42.
- ^ Yorke, Barbara. Kings and kingdoms of early Anglo-Saxon England. Routledge, 2002: p101
- ^ Yorke, Barbara. Kings and kingdoms of early Anglo-Saxon England. Routledge, 2002: p103
- ^ Scharer, Anton. "The writing of history at King Alfred's court." Early Medieval Europe 5.2 (1996): 177–206.
- ^ Yorke, Barbara. Kings and kingdoms of early Anglo-Saxon England, 2002. p. 101.
- ^ Yorke, B A E 1985: 'The kingdom of the East Saxons.' Anglo-Saxon England 14, 1–36
- ^ Ryan, Martin J. "The Mercian Supremacies." The Anglo-Saxon World (2013): 179.
- ^ Drout, Michael DC. Imitating fathers: tradition, inheritance, and the reproduction of culture in Anglo-Saxon England. Diss. Loyola University of Chicago, 1997.
- ^ Lendinara, Patrizia. "The world of Anglo-Saxon learning." The Cambridge Companion to Old English Literature (1991): 264–281.
- ^ Bede; Plummer, Charles (1896). Historiam ecclesiastica gentis Anglorum: Historiam abbatum; Epistolam ad Ecgberctum; una cum Historia abbatum auctore anonymo. Oxford, United Kingdom: e Typographeo Clarendoniano.
- ^ Lapidge, Michael. "The school of Theodore and Hadrian." Anglo-Saxon England 15.1 (1986): 45–72.
- ^ Drout, M. Anglo-Saxon World (Audio Lectures) Audible.com
- ^ Dobney, Keith, et al. Farmers, monks and aristocrats: the environmental archaeology of an Anglo-Saxon Estate Centre at Flixborough, North Lincolnshire, UK. Oxbow Books, 2007.
- ^ Godfrey, John. "The Double Monastery in Early English History." Ampleforth Journal 79 (1974): 19–32.
- ^ Dumville, David N., Simon Keynes, and Susan Irvine, eds. The Anglo-Saxon chronicle: a collaborative edition. MS E. Vol. 7. Ds Brewer, 2004.
- ^ Swanton, Michael (1996). The Anglo-Saxon Chronicle. New York: Routledge. ISBN 0-415-92129-5.
- ^ a b c d e f Whitelock, Dorothy, ed. The Anglo-Saxon Chronicle. Eyre and Spottiswoode, 1965.
- ^ Bede, Saint. The Ecclesiastical History of the English People: The Greater Chronicle; Bede's Letter to Egbert. Oxford University Press, 1994.
- ^ Keynes, Simon. "Mercia and Wessex in the ninth century." Mercia. An Anglo-Saxon Kingdom in Europe, ed. Michelle P. Brown/Carol Ann Farr (London 2001) (2001): 310–328.
- ^ Sawyer, Peter Hayes, ed. Illustrated history of the Vikings. Oxford University Press, 2001
- ^ Coupland, Simon. "The Vikings in Francia and Anglo-Saxon England to 911." The New Cambridge Medieval History 2 (1995): 190–201.
- ^ Anglo-Saxon Chronicle s.a. 893
- ^ Keynes, Simon, and Michael Lapidge. Alfred the Great. New York: Penguin, 1984.
- ^ a b c d Keynes, Simon, and Michael Lapidge. Alfred the Great. New York: Penguin, 1984.
- ^ Frantzen, Allen J. King Alfred. Woodbridge, CT: Twayne Publishers, 1986
- ^ Yorke, Barbara. Wessex in the Early Middle Ages. London: Pinter Publishers Ltd., 1995.
- ^ Keynes, Simon. "England, 900–1016." New Cambridge Medieval History 3 (1999): 456–484.
- ^ a b c Keynes, Simon. "Edward, King of the Anglo-Saxons." Edward the Elder: 899–924 (2001): 40–66.
- ^ Dumville, David N. Wessex and England from Alfred to Edgar: six essays on political, cultural, and ecclesiastical revival. Boydell Press, 1992.
- ^ Keynes, Simon. King Athelstan's books. University Press, 1985.
- ^ Hare, Kent G. "Athelstan of England: Christian king and hero." The Heroic Age 7 (2004).
- ^ Keynes, Simon. "Edgar, King of the English 959–975 New Interpretations." (2008).
- ^ a b Dumville, David N. "Between Alfred the Great and Edgar the Peacemaker: Æthelstan, First King of England." Wessex and England from Alfred to Edgar (1992): 141–171.
- ^ Regularis concordia Anglicae nationis, ed. T. Symons (CCM 7/3), Siegburg (1984), p.2 (revised edition of Regularis concordia Anglicae nationis monachorum sanctimonialiumque: The Monastic Agreement of the Monks and Nuns of the English Nation, ed. with English trans. T. Symons, London (1953))
- ^ a b Gretsch, Mechthild. "Myth, Rulership, Church and Charters: Essays in Honour of Nicholas Brooks." The English Historical Review 124.510 (2009): 1136–1138.
- ^ ASC, pp. 230–251
- ^ See, e.g., EHD, no. 10 (the poem on the battle of Maldon), nos. 42–6 (law-codes), nos. 117–29 (charters, etc.), nos.230–1 (letters), and no. 240 (Archbishop Wulfstan's Sermo ad Anglos).
- ^ White, Stephen D. "Timothy Reuter, ed., The New Cambridge Medieval History, 3: c. 900–c. 1024. Cambridge, Eng.: Cambridge University Press, 1999. Pp. xxv." Speculum 77.1 (2002): pp. 455–485.
- ^ Dorothy Whitelock, ed. Sermo Lupi ad Anglos, 2. ed., Methuen's Old English Library B. Prose selections (London: Methuen, 1952).
- ^ Malcolm Godden, "Apocalypse and Invasion in Late Anglo-Saxon England," in From Anglo-Saxon to Early Middle English: Studies Presented to E. G. Stanley, ed. Malcolm Godden, Douglas Gray, and Terry Hoad (Oxford: Clarendon Press, 1994).
- ^ Mary Clayton, "An Edition of Ælfric's Letter to Brother Edward," in Early Medieval English Texts and Interpretations: Studies Presented to Donald G. Scragg, ed. Elaine Treharne and Susan Rosser (Tempe, Arizona: Arizona Center for Medieval and Renaissance Studies, 2002), 280–283.
- ^ Keynes, S. The Diplomas of King Æthelred "the Unready", 226–228.
- ^ Treharne, Elaine. Living Through Conquest: The Politics of Early English, 1020–1220. Oxford University Press, 2012.
- ^ Robin Fleming Kings and lords in Conquest England. Vol. 15. Cambridge University Press, 2004.
- ^ Mack, Katharin. "Changing thegns: Cnut's conquest and the English aristocracy." Albion: A Quarterly Journal Concerned with British Studies (1984): 375–387.
- ^ Eric John, Orbis Britanniae (Leicester, 1966), p. 61.
- ^ a b Maddicott, J. R. (2004). "Edward the Confessor's Return to England in 1041". English Historical Review (Oxford University Press) CXIX (482): 650–666.
- ^ Swanton, Michael (1996). The Anglo-Saxon Chronicle. New York: Routledge. ISBN 0-415-92129-5
- ^ Bartlett, Robert (2000). J.M.Roberts (ed.). England Under the Norman and Angevin Kings 1075–1225. London: OUP. ISBN 978-0-19-925101-8., p.1
- ^ Wood, Michael (2005). In Search of the Dark Ages. London: BBC. ISBN 978-0-563-52276-8. pp. 248–249
- ^ Higham, Nicholas J., and Martin J. Ryan. The Anglo-Saxon World. Yale University Press, 2013. pp. 409–410
- ^ From Norman Conquest to Magna Carta: England, 1066–1215, pp.13,14, Christopher Daniell, 2003, ISBN 0-415-22216-8
- ^ Slaves and warriors in medieval Britain and Ireland, 800–1200, p.385, David R. Wyatt, 2009, ISBN 978-90-04-17533-4
- ^ Western travellers to Constantinople: the West and Byzantium, 962–1204, pp. 140,141, Krijna Nelly Ciggaar, 1996, ISBN 90-04-10637-5
- ^ "Byzantine Armies AD 1118–1461", p.23, Ian Heath, Osprey Publishing, 1995, ISBN 978-1-85532-347-6
- ^ "The Norman conquest: England after William the Conqueror", p.98, Hugh M. Thomas, 2008, ISBN 978-0-7425-3840-5
- ^ Chibnall, Marjorie (translator), The Ecclesiastical History of Orderic Vitalis, 6 volumes (Oxford, 1968–1980) (Oxford Medieval Texts), ISBN 0-19-820220-2.
- ^ Anglo-Saxon Chronicle 'D' s.a. 1069
- ^ Jack, George B. "Negative adverbs in early Middle English." (1978): 295–309.
- ^ a b Drout, Michael DC, ed. JRR Tolkien Encyclopedia: Scholarship and critical assessment. Routledge, 2006.
- ^ De Caluwé-Dor, Juliette. "The chronology of the Scandinavian loan-verbs in the Katherine Group." (1979): 680–685.
- ^ Drout, M. The Modern Scholar: The Anglo-Saxon World [Unabridged] [Audible Audio Edition]
- ^ British Library. The Language of Government. Accessed 4 January 2023.
- ^ a b Härke, Heinrich. "Changing symbols in a changing society. The Anglo-Saxon weapon burial rite in the seventh century." The Age of Sutton Hoo. The Seventh Century in North-Western Europe, ed. Martin OH Carver (Woodbridge 1992) (1992): 149–165.
- ^ Yorke, Barbara. Kings and Kingdoms of Early Anglo-Saxon England.
- ^ Hough. "An Ald Reht": Essays on Anglo-Saxon Law. p. 117.
- ^ Hamerow, Helena. "The earliest Anglo-Saxon kingdoms' in The New Cambridge Medieval History, I, c. 500-c. 700. ed. Paul Fouracre." (2005): 265.
- ^ Scull, C. (1997),'Urban centres in Pre-Viking England?', in Hines (1997), pp. 269–98
- ^ Hodges (1982). Dark Age Economics: the origins of towns and trade A.D. 600-1000. London.
- ^ Richards, Naylor; Holas-Clark. "Anglo-Saxon Landscape and Economy: using portable antiquities to study Anglo-Saxon and Viking Age England". Internet Archaeology.
- ^ Fanning, Steven. "Bede, Imperium, and the bretwaldas." Speculum 66.01 (1991): 1–26.
- ^ Wood, Mark. "Bernician Transitions: Place-names and Archaeology." Early medieval Northumbria: kingdoms and communities, AD (2011): 450–1100.
- ^ Leslie, Kim, and Brian Short. An historical atlas of Sussex. History Press, 1999.
- ^ Campbell, J 1979: Bede's Reges and Principes. Jarrow Lecture
- ^ Irvine, Susan, Susan Elizabeth Irvine, and Malcolm Godden, eds. The Old English Boethius: with verse prologues and epilogues associated with King Alfred. Vol. 19. Harvard University Press, 2012.
- ^ Abels, Richard P. Alfred the Great: War, Kingship and Culture in Anglo-Saxon England. Routledge, 2013.
- ^ Higham, N.J. "From Tribal Chieftains to Christian Kings." The Anglo-Saxon World (2013): 126.
- ^ Woodman, David. "Edgar, King of the English 959–975. New Interpretations–Edited by Donald Scragg." Early Medieval Europe 19.1 (2011): 118–120.
- ^ Chaney, William A. (1970). The Cult of Kingship in Anglo-Saxon England: The Transition from Paganism to Christianity. Manchester: Manchester University Press.
- ^ Lethbridge, T. C. Gogmagog: The Buried Gods (London, 1957), p. 136.
- ^ Jennbert, Kristina (2006). The Horse and its role in Icelandic burial practices, mythology, and society. pp. 130–133.
- ^ Sikora, Maeve. "Diversity in Viking Age Horse Burial: A Comparative Study of Norway, Iceland, Scotland and Ireland". The Journal of Irish Archaeology. 13 (2004): 87–109.
- ^ Their names mean, literally, "Stallion" and "Horse"
- ^ Owen-Crocker, Gale R. (2000). The Four Funerals in Beowulf: And the Structure of the Poem. Manchester UP. p. 71. ISBN 978-0-7190-5497-6. Retrieved 25 June 2012.
- ^ a b Jupp, Peter C.; Gittings, Clare (1999). Death in England: An Illustrated History. Manchester UP. pp. 67, 72. ISBN 978-0-7190-5811-0. Retrieved 26 June 2012.
- ^ Carver, M. O. H. (1998). Sutton Hoo: Burial Ground of Kings?. U of Pennsylvania P. p. 167. ISBN 978-0-8122-3455-8. Retrieved 25 June 2012.
- ^ Frantzen, Allen J., and I. I. John Hines, eds. Cædmon's Hymn and Material Culture in the World of Bede: Six Essays. West Virginia University Press, 2007.
- ^ Keynes, Simon. "The 'Dunstan B'charters." Anglo-Saxon England 23 (1994): 165–193.
- ^ HE. Bede, Ecclesiastical History of the English People, quoted from the ed. by B. Colgrave and R.A.B. Mynors (Oxford, 1969). ii.12
- ^ ASC, Anglo-Saxon Chronicle in Whitelock 878, Asser c. 55
- ^ a b Hollister, C.W. 1962: Anglo-Saxon Military Institutions (Oxford)
- ^ ASC, Anglo-Saxon Chronicle in Whitelock 893; also Asser c. 100 for the Organisation of the royal household
- ^ Brooks, N.P. 1971: The Development of Military Obligations in Eighth- and Ninth-Century England, in Clemoes, P. and Hughes, K. (ed.), England Before the Conquest (Cambridge) pp. 69–84.
- ^ Webb, J.F. and Farmer, D.H. 1965: The Age of Bede (Harmondsworth)., pp. 43–4
- ^ Gillingham, J. 1984: Richard I and the Science of War in the Middle Ages, in J. Holt and J. Gillingham (eds.), War and Government in the Middle Ages (Woodbridge).
- ^ ASC, Anglo-Saxon Chronicle in Whitelock 1979 912, 914, 917
- ^ Campbell, J. 1981: The Anglo-Saxons (Oxford).
- ^ Richards, Julian D. (2013-06-01). Viking Age England (Kindle Locations 418–422). The History Press. Kindle Edition.
- ^ a b Hamerow, Helena. Rural Settlements and Society in Anglo-Saxon England. Oxford University Press, 2012.
- ^ O'Brien C (2002) The Early Medieval Shires of Yeavering, Bamburgh and Breamish. Archaeologia Aeliana 5th Series, 30, 53–73.
- ^ a b Sawyer, Peter. The Wealth of Anglo-Saxon England. Oxford University Press, 2013.
- ^ Higham, Nicholas J., and Martin J. Ryan, eds. Place-names, Language and the Anglo-Saxon Landscape. Vol. 10. Boydell Press, 2011.
- ^ Pickles, Thomas. "The Landscape Archaeology of Anglo-Saxon England, ed. Nicholas J. Higham and Martin J. Ryan." The English Historical Review 127.528 (2012): 1184–1186.
- ^ Hamerow, Helena, David A. Hinton, and Sally Crawford, eds. The Oxford Handbook of Anglo-Saxon Archaeology. OUP Oxford, 2011.
- ^ Klinck, A. L., 'Anglo-Saxon women and the law', Journal of Medieval History 8 (1982), 107–21.
- ^ Rivers, T. J., 'Widows' rights in Anglo-Saxon law', American Journal of Legal History 19 (1975), 208–15.
- ^ Fell, C., Women in Anglo-Saxon England (Oxford, 1984).
- ^ Leges Henrici Primi
- ^ Stenton 1987, p. 530.
- ^ Anglo-Saxon Dictionary edited by Joseph Bosworth, T. Northcote Toller and Alistair Campbell (1972), Oxford University Press, ISBN 0-19-863101-4.
- ^ Stenton, F. M. "The Thriving of the Anglo-Saxon Ceorl." Preparatory to Anglo-Saxon England (1970): 383–93.
- ^ "Early Medieval Architecture". English Heritage. Retrieved 26 January 2021.
- ^ "When did the Anglo-Saxons come to Britain?". BBC Bitesize. Retrieved 26 January 2021.
- ^ York and London both offer examples of this trend.
- ^ Turner, H. L. (1970), Town Defences in England and Wales: An Architectural and Documentary Study A. D. 900–1500 (London: John Baker)
- ^ Higham, R. and Barker, P. (1992), Timber Castles (London: B. T. Batsford):193
- ^ a b Wilkinson, David John, and Alan McWhirr. Cirencester Anglo-Saxon Church and Medieval Abbey: Excavations Directed by JS Wacher (1964), AD McWhirr (1965) and PDC Brown (1965–6). Cotswold Archaeological Trust, 1998.
- ^ Whitehead, Matthew Alexander, and J. D. Whitehead. The Saxon Church, Escomb. 1979.
- ^ Conant, Kenneth John. Carolingian and Romanesque architecture, 800 to 1200. Vol. 13. Yale University Press, 1993.
- ^ Suzuki, Seiichi. The Quoit Brooch Style and Anglo-Saxon Settlement: A Casting and Recasting of Cultural Identity Symbols. Boydell & Brewer, 2000.
- ^ a b Adams, Noël. "Rethinking the Sutton Hoo Shoulder Clasps and Armour." Intelligible Beauty: Recent Research on Byzantine Jewellery. London: British Museum Research Publications 178 (2010): 87–116.
- ^ a b Richards, Julian D. "Anglo-Saxon symbolism." The Age of Sutton Hoo: The Seventh Century in North-West Europe (1992): 139.
- ^ Alexander, Caroline (November 2011). "Magical Mystery Treasure". National Geographic. 220 (5): 44. Archived from the original on 2016-12-25. Retrieved 2014-02-20.
- ^ "The Find". Staffordshire Hoard. Archived from the original on 2011-07-03. Retrieved 14 June 2011.
- ^ Leahy & Bland 2009, p. 9.
- ^ Mills, Allan A. "The Canterbury Pendant: A Saxon Seasonal-Hour Altitude Dial." PI Drinkwater:'Comments upon the Canterbury Pendant', and AJ Turner:'The Canterbury Dial', Bull BSS 95.2 (1995): 95.
- ^ Leslie Webster, Janet Backhouse, and Marion Archibald. The Making of England: Anglo-Saxon Art and Culture, AD 600–900. Univ of Toronto Pr, 1991.
- ^ Brown, Katherine L., and Robin JH Clark. "The Lindisfarne Gospels and two other 8th century Anglo-Saxon/Insular manuscripts: pigment identification by Raman microscopy." Journal of Raman Spectroscopy 35.1 (2004): 4–12.
- ^ Bruce-Mitford, Rupert Leo Scott. The art of the Codex Amiatinus. Parish of Jarrow, 1967.
- ^ Gameson, Richard. "THE COST OF THE CODEX-AMIATINUS." Notes and Queries 39.1 (1992): 2–9.
- ^ Meyvaert, Paul. "Bede, Cassiodorus, and the Codex Amiatinus." Speculum 71.04 (1996): 827–883.
- ^ Chazelle, Celia. "Ceolfrid's gift to St Peter: the first quire of the Codex Amiatinus and the evidence of its Roman destination." Early Medieval Europe 12.2 (2003): 129–157.
- ^ Thomas, Gabor. "Overview: Craft Production and Technology." The Oxford Handbook of Anglo-Saxon Archaeology (2011): 405.
- ^ Brown 1996, pp. 70, 73.
- ^ Reynolds, Andrew, and Webster, Leslie. "Early Medieval Art and Archaeology in the Northern World." (2013).
- ^ O'Sullivan, Deirdre. "Normanising the North: The Evidence of Anglo-Saxon and Anglo-Scandinavian Sculpture." Medieval Archaeology 55.1 (2011): 163–191.
- ^ Janet Backhouse, Derek Howard Turner, and Leslie Webster, eds. The Golden Age of Anglo-Saxon Art, 966–1066. British Museum Publications Limited, 1984.
- ^ Grape, Wolfgang. The Bayeux tapestry: monument to a Norman triumph. Prestel Pub, 1994.
- ^ Kemola, Juhani. 2000 "The Origins of the Northern Subject Rule – A Case of Early contact?"
- ^ The Celtic Roots of English, ed. by Markku Filppula, Juhani Klemola and Heli Pitkänen, Studies in Languages, 37 (Joensuu: University of Joensuu, Faculty of Humanities, 2002).
- ^ Hildegard L. C. Von Tristram (ed.), The Celtic Englishes, Anglistische Forschungen 247, 286, 324, 3 vols (Heidelberg: Winter, 1997–2003).
- ^ Peter Schrijver, Language Contact and the Origins of the Germanic Languages, Routledge Studies in Linguistics, 13 (New York: Routledge, 2014), pp. 12–93.
- ^ Minkova, Donka (2009), Reviewed Work(s): A History of the English Language by Elly van Gelderen; A History of the English Language by Richard Hogg and David Denison; The Oxford History of English by Lynda Mugglestone
- ^ John Insley, "Britons and Anglo-Saxons," in Kulturelle Integration und Personnenamen in Mittelalter, De Gruyter (2018)
- ^ Robert McColl Millar, "English in the 'transition period': the sources of contact-induced change," in Contact: The Interaction of Closely-Related Linguistic Varieties and the History of English, Edinburgh University Press (2016)
- ^ Richard Coates, Reviewed Work: English and Celtic in Contact (2010)
- ^ Scott Shay (30 January 2008). The history of English: a linguistic introduction. Wardja Press. p. 86. ISBN 978-0-615-16817-3. Retrieved 29 January 2012.
- ^ Barber, Charles (2009). The English Language: A Historical Introduction. Cambridge University Press. p. 137. ISBN 978-0-521-67001-2.
- ^ Robert McColl Millar, "English in the 'transition period': the sources of contact-induced change," in Contact: The Interaction of Closely-Related Linguistic Varieties and the History of English (2016)
- ^ Schendl, Herbert (2012), Middle English: Language Contact
- ^ Hamerow, Helena. Rural Settlements and Society in Anglo-Saxon England. Oxford University Press, 2012.p166
- ^ Fisher, Genevieve. "Kingdom and community in early Anglo-Saxon eastern England." Regional approaches to mortuary analysis. Springer US, 1995. 147–166.
- ^ Lynch, Joseph H. Christianizing kinship: ritual sponsorship in Anglo-Saxon England. Cornell University Press, 1998
- ^ Hough, C. "Wergild." (1999): 469–470.
- ^ Harrison, Mark. Anglo-Saxon Thegn AD 449–1066. Vol. 5. Osprey Publishing, 1993
- ^ Fell, Christine E., Cecily Clark, and Elizabeth Williams. Women in Anglo-Saxon England. Blackwell, 1987
- ^ Simpson, A.W.B. 'The Laws of Ethelbert' in Arnold et al. (1981) 3.
- ^ Baker, J.H. An Introduction to English Legal History. (London: Butterworths, 1990) 3rd edition, ISBN 0-406-53101-3, Chapters 1–2.
- ^ Milsom, S.F.C. Historical Foundations of the Common Law. (London: Butterworths, 1981) 2nd edition, ISBN 0-406-62503-4 (limp), 1–23.
- ^ Robertson, Agnes Jane, ed. Anglo-Saxon Charters. Vol. 1. Cambridge University Press, 2009.
- ^ Milsom, S.F.C. Historical Foundations of the Common Law. (London: Butterworths, 1981) 2nd edition, ISBN 0-406-62503-4 (limp), 1–23
- ^ Pollock, F. and Maitland, F.M. A History of English Law. Two volumes. (Cambridge: Cambridge University Press, 1898 reprinted 1968) 2nd edition, ISBN 0-521-07061-9 and ISBN 0-521-09515-8, Volume I, Chapter 1.
- ^ Reynolds, Andrew. "Judicial culture and social complexity: a general model from Anglo-Saxon England." World Archaeology ahead-of-print (2014): 1–15.
- ^ a b Hyams, P. 'Trial by ordeal: the key to proof in the early common law' in Arnold, M.S. et al.. (eds) On the Laws and Customs of England: Essays in honor of S.E. Thorne. (Harvard: Harvard University Press, 1981) ISBN 0-8078-1434-2, p. 90.
- ^ Leeson, Peter T. "Ordeals." Journal of Law and Economics 55.3 (2012): 691–714.
- ^ Higham, Nicholas, and Martin J. Ryan. The Anglo-Saxon World. Yale University Press, 2013.
- ^ Karkov, Catherine E. The Art of Anglo-Saxon England. Vol. 1. Boydell Press, 2011.
- ^ Fulk, R. D., and Christopher M. Cain. "Making Old English New: Anglo-Saxonism and the Cultural Work of Old English Literature." (2013).
- ^ Godden, Malcolm, and Michael Lapidge, eds. The Cambridge Companion to Old English Literature. Cambridge University Press, 1991; there is also the Paris Psalter (not the Byzantine illuminated manuscript of the same name), a metrical version of most of the Psalms, described by its most recent specialist as "a pedestrian and unimaginative piece of poetic translation. It is rarely read by students of Old English, and most Anglo-Saxonists make only passing reference to it. There is scarcely any literary criticism written on the text, although some work has been done on its vocabulary and metre", "Poetic language and the Paris Psalter: the decay of the Old English tradition", by M. S. Griffith, Anglo-Saxon England, Volume 20, December 1991, pp 167–186, doi:10.1017/S0263675100001800
- ^ "Anglo-Saxons.net".
- ^ Bradley, S.A.J. Anglo-Saxon Poetry. New York: Everyman Paperbacks, 1995.
- ^ Alexander, Michael. The Earliest English Poems. 3rd rev. ed. New York: Penguin Classics, 1992.
- ^ Anglo Saxon Poetry. Hachette UK, 2012.
- ^ Sweet, Henry. An Anglo-Saxon reader in prose and verse: with grammar, metre, notes and glossary. At the Clarendon Press, 1908.
- ^ Nugent, Ruth, and Howard Williams. "Sighted surfaces. Ocular Agency in early Anglo-Saxon cremation burials." Encountering images: materialities, perceptions, relations. Stockholm studies in archaeology 57 (2012): 187–208.
- ^ Härke, Heinrich. "Grave goods in early medieval burials: messages and meanings." Mortality ahead-of-print (2014): 1–21.
- ^ Pader, E.J. 1982. Symbolism, social relations and the interpretation of mortuary remains. Oxford. (B.A.R. S 130)
- ^ Guido and Welch. Indirect evidence for glass bead manufacture in early Anglo-Saxon England. In Price 2000 115–120.
- ^ Guido, M. & M. Welch 1999. The glass beads of Anglo-Saxon England c. AD 400–700: a preliminary visual classification of the more definitive and diagnostic types. Rochester: Reports of the Research Committee of the Society of Antiqaries of London 56.
- ^ Brugmann, B. 2004. Glass beads from Anglo-Saxon graves: a study of the provenance and chronology of glass beads from early Anglo-Saxon graves, based on visual examination. Oxford: Oxbow
- ^ Owen-Crocker, Gale R. Dress in Anglo-Saxon England. Boydell Press, 2004.
- ^ John Hines (1998) The Anglo-Saxon Cemetery at Edix Hill (Barrington A), Cambridgeshire. Council for British Archaeology.
- ^ a b North, Richard. Heathen Gods in Old English Literature. Cambridge University Press, 1997, p. 273
- ^ Gannon, Anna. The iconography of early Anglo-Saxon coinage: sixth to eighth centuries. Oxford University Press, 2003.
- ^ "Cambridge University study finds Anglo-Saxon kings were mostly vegetarian". BBC News. 2022-04-22. Retrieved 2022-05-12.
- ^ "Anglo-Saxon kings 'were mostly vegetarian', before the Vikings new study claims". The Independent. 2022-04-21. Retrieved 2022-05-12.
- ^ Rule of Darkness: British Literature and Imperialism, 1830–1914 by Patrick Brantlinger. Cornell University Press, 1990
- ^ Race and Empire in British Politics by Paul B. Rich. CUP Archive, 1990
- ^ Race and Manifest Destiny: The Origins of American Racial Anglo-Saxonism by Reginald Horsman. Harvard University Press, 1981. (pgs. 126, 173, 273)
- ^ Hills, Catherine. Origins of the English. Duckworth Pub, 2003.p35
- ^ "ISSEME | News".
- Oppenheimer, Stephen. The Origins of the British (2006). Constable and Robinson, London. ISBN 1-84529-158-1
- Hamerow, Helena; Hinton, David A.; Crawford, Sally, eds. (2011), The Oxford Handbook of Anglo-Saxon Archaeology., Oxford: OUP, ISBN 978-0-19-921214-9
- Higham, Nicholas J.; Ryan, Martin J. (2013), The Anglo-Saxon World, Yale University Press, ISBN 978-0-300-12534-4
- Hills, Catherine (2003), Origins of the English, London: Duckworth, ISBN 0-7156-3191-8
- Koch, John T. (2006), Celtic Culture: A Historical Encyclopedia, Santa Barbara and Oxford: ABC-CLIO, ISBN 1-85109-440-7
- Stenton, Sir Frank M. (1987) [first published 1943], Anglo-Saxon England, The Oxford History of England, vol. II (3rd ed.), OUP, ISBN 0-19-821716-1
- Clark, David, and Nicholas Perkins, eds. Anglo-Saxon Culture and the Modern Imagination (2010)
- F.M. Stenton, Anglo-Saxon England, 3rd edition, (Oxford: University Press, 1971)
- J. Campbell et al., The Anglo-Saxons, (London: Penguin, 1991)
- Campbell, James, ed. (1982). The Anglo-Saxons. London: Penguin. ISBN 978-0-140-14395-9.
- E. James, Britain in the First Millennium, (London: Arnold, 2001)
- M. Lapidge et al., The Blackwell Encyclopaedia of Anglo-Saxon England, (Oxford: Blackwell, 1999)
- Donald Henson, The Origins of the Anglo-Saxons, (Anglo-Saxon Books, 2006)
- Bazelmans, Jos (2009), "The early-medieval use of ethnic names from classical antiquity: The case of the Frisians", in Derks, Ton; Roymans, Nico (eds.), Ethnic Constructs in Antiquity: The Role of Power and Tradition, Amsterdam: Amsterdam University, pp. 321–337, ISBN 978-90-8964-078-9, archived from the original on 2017-08-30, retrieved 2017-05-31
- Brown, Michelle P.; Farr, Carol A., eds. (2001), Mercia: An Anglo-Saxon Kingdom in Europe, Leicester: Leicester University Press, ISBN 0-8264-7765-8
- Brown, Michelle, The Lindisfarne Gospels and the Early Medieval World (2010)
- Charles-Edwards, Thomas, ed. (2003), After Rome, Oxford: Oxford University Press, ISBN 978-0-19-924982-4
- Dodwell, C. R., Anglo-Saxon Art, A New Perspective, 1982, Manchester UP, ISBN 0-7190-0926-X
- Dornier, Ann, ed. (1977), Mercian Studies, Leicester: Leicester University Press, ISBN 0-7185-1148-4
- Elton, Charles Isaac (1882), "Origins of English History", Nature, London: Bernard Quaritch, 25 (648): 501, Bibcode:1882Natur..25..501T, doi:10.1038/025501a0, S2CID 4097604
- Frere, Sheppard Sunderland (1987), Britannia: A History of Roman Britain (3rd, revised ed.), London: Routledge & Kegan Paul, ISBN 0-7102-1215-1
- Giles, John Allen, ed. (1841), "The Works of Gildas", The Works of Gildas and Nennius, London: James Bohn
- Giles, John Allen, ed. (1843a), "Ecclesiastical History, Books I, II and III", The Miscellaneous Works of Venerable Bede, vol. II, London: Whittaker and Co. (published 1843)
- Giles, John Allen, ed. (1843b), "Ecclesiastical History, Books IV and V", The Miscellaneous Works of Venerable Bede, vol. III, London: Whittaker and Co. (published 1843)
- Härke, Heinrich (2003), "Population replacement or acculturation? An archaeological perspective on population and migration in post-Roman Britain.", Celtic-Englishes, Carl Winter Verlag, III (Winter): 13–28, retrieved 18 January 2014
- Haywood, John (1999), Dark Age Naval Power: Frankish & Anglo-Saxon Seafaring Activity (revised ed.), Frithgarth: Anglo-Saxon Books, ISBN 1-898281-43-2
- Higham, Nicholas (1992), Rome, Britain and the Anglo-Saxons, London: B. A. Seaby, ISBN 1-85264-022-7
- Higham, Nicholas (1993), The Kingdom of Northumbria AD 350–1100, Phoenix Mill: Alan Sutton Publishing, ISBN 0-86299-730-5
- Jones, Barri; Mattingly, David (1990), An Atlas of Roman Britain, Cambridge: Blackwell Publishers (published 2007), ISBN 978-1-84217-067-0
- Jones, Michael E.; Casey, John (1988), "The Gallic Chronicle Restored: a Chronology for the Anglo-Saxon Invasions and the End of Roman Britain", Britannia, The Society for the Promotion of Roman Studies, XIX (November): 367–98, doi:10.2307/526206, JSTOR 526206, S2CID 163877146, archived from the original on 13 March 2020, retrieved 6 January 2014
- Karkov, Catherine E., The Art of Anglo-Saxon England, 2011, Boydell Press, ISBN 1-84383-628-9, ISBN 978-1-84383-628-5
- Kirby, D. P. (2000), The Earliest English Kings (Revised ed.), London: Routledge, ISBN 0-415-24211-8
- Laing, Lloyd; Laing, Jennifer (1990), Celtic Britain and Ireland, c. 200–800, New York: St. Martin's Press, ISBN 0-312-04767-3
- Leahy, Kevin; Bland, Roger (2009), The Staffordshire Hoard, British Museum Press, ISBN 978-0-7141-2328-8
- McGrail, Seàn, ed. (1988), Maritime Celts, Frisians and Saxons, London: Council for British Archaeology (published 1990), pp. 1–16, ISBN 0-906780-93-4
- Mattingly, David (2006), An Imperial Possession: Britain in the Roman Empire, London: Penguin Books (published 2007), ISBN 978-0-14-014822-0
- Pryor, Francis (2004), Britain AD, London: Harper Perennial (published 2005), ISBN 0-00-718187-6
- Russo, Daniel G. (1998), Town Origins and Development in Early England, c. 400–950 A.D., Greenwood Publishing Group, ISBN 978-0-313-30079-0
- Snyder, Christopher A. (1998), An Age of Tyrants: Britain and the Britons A.D. 400–600, University Park: Pennsylvania State University Press, ISBN 0-271-01780-5
- Snyder, Christopher A. (2003), The Britons, Malden: Blackwell Publishing (published 2005), ISBN 978-0-631-22260-6
- Webster, Leslie, Anglo-Saxon Art, 2012, British Museum Press, ISBN 978-0-7141-2809-2
- Wickham, Chris (2005), Framing the Early Middle Ages: Europe and the Mediterranean, 400–800, Oxford: Oxford University Press (published 2006), ISBN 978-0-19-921296-5
- Wickham, Chris (2009), "Kings Without States: Britain and Ireland, 400–800", The Inheritance of Rome: Illuminating the Dark Ages, 400–1000, London: Penguin Books (published 2010), pp. 150–169, ISBN 978-0-14-311742-1
- Wilson, David M.; Anglo-Saxon: Art From The Seventh Century To The Norman Conquest, Thames and Hudson (US edn. Overlook Press), 1984.
- Wood, Ian (1984), "The end of Roman Britain: Continental evidence and parallels", in Lapidge, M. (ed.), Gildas: New Approaches, Woodbridge: Boydell, p. 19
- Wood, Ian (1988), "The Channel from the 4th to the 7th centuries AD", in McGrail, Seàn (ed.), Maritime Celts, Frisians and Saxons, London: Council for British Archaeology (published 1990), pp. 93–99, ISBN 0-906780-93-4
- Yorke, Barbara (1990), Kings and Kingdoms of Early Anglo-Saxon England, B. A. Seaby, ISBN 0-415-16639-X
- Yorke, Barbara (1995), Wessex in the Early Middle Ages, London: Leicester University Press, ISBN 0-7185-1856-X
- Yorke, Barbara (2006), Robbins, Keith (ed.), The Conversion of Britain: Religion, Politics and Society in Britain c.600–800, Harlow: Pearson Education Limited, ISBN 978-0-582-77292-2
- Zaluckyj, Sarah, ed. (2001), Mercia: The Anglo-Saxon Kingdom of Central England, Little Logaston: Logaston, ISBN 1-873827-62-8
- Photos of over 600 items found in the Anglo-Saxon Hoard in Staffordshire Sept. 2009
- Anglo-Saxon gold hoard September 2009: largest ever hoard officially declared treasure
- Huge Anglo-Saxon gold hoard found, BBC News, with photos.
- Fides Angliarum Regum: the faith of the English kings
- Anglo-Saxon Origins: The Reality of the Myth by Malcolm Todd
- An Anglo-Saxon Dictionary
- Simon Keynes' bibliography of Anglo-Saxon topics
Table of Contents :
Top Suggestions 2 X 2 Multiplication Worksheet :
2 X 2 Multiplication Worksheet Fixed point representation allows us to use fractional numbers on low cost integer hardware this article will discuss several multiplication examples using the fixed point representation to read 2 if we multiply 23 x 1 4 what does the closure property of multiplication tell us create your account to access this entire worksheet when you take this short quiz you ll find a series of Have students work in partners to practice multiplication facts example tommy puts down an 7 and a 2 bob puts down a 2 and a 3 the math problem is 72 x 23 whoever solves it first wins a point.
2 X 2 Multiplication Worksheet Define the commutative property of multiplication the product is the same regardless or what number to the power of 1 2 equals a number b what is the square root of 16 4 c what is the square They extend this to three digit numbers eg 600 247 3 200 can be derived from 2 x 3 6 later pupils multiply two and three digit numbers by a one digit number using the formal written layout Subtraction and multiplication it s obvious why those basic skills are needed but students taking standardized tests on bubble sheets to solve algebraic or calculus equations may have had a.
2 X 2 Multiplication Worksheet A fact family is 3 numbers that are connected through multiplication and division for instance 2 4 and 8 are a fact family because 2 x 4 8 4 x 2 8 8 247 4 2 and 8 247 2 4 step 2 explain the A full online curriculum for ages 2 to 7 it offers very early learning it s an entertaining but solid app that offers visual multiplication similar to what my children bring home on worksheets Csd is an elegant method to implement digital multipliers in a more efficient way this article reviews the basics of csd and its implementation generally digital signal processing algorithms.
One part of that rule says that multiplication creating a worksheet and using a formula such as the one above you will be alerted to problems if you check the calculation on your calculator if It s easy though for her to evoke math anxieties in listeners often with little more than an elementary school worksheet 5 x 10 5 x 8 another might halve one factor and double the other Similarly as 2 goes into 10 evenly in this chapter pupils use their knowledge of multiplication to recognise and use factor pairs building on their knowledge of the commutative property of.
- 2 Digit Multiplication Worksheets (2 digit x 2 digit and 2 digit x 1 digit): multiplication practice worksheets aimed at 3rd and 4th grade.
- 2 Digit by 2 Digit Multiplication Worksheets (printable): the top worksheets in this category cover grade 4 multiplication work, long multiplication (2 digit by 2 digit and 2 digit by 1 digit), and 3 digit by 2 digit multiplication.
- Multiplication Worksheets, 2-3 Digits (Thoughtco): math worksheets organized by grade, including 2 x 1 multiplication and times-table worksheets, plus 1 x 2 and 2 x 3 digit multiplication worksheets.
- Multiplication Worksheets, Multiple Digit (vertical format): may be configured for 2, 3, or 4 digit multiplicands multiplied by 1, 2, or 3 digit multipliers, with 12 to 25 problems per worksheet.
- 2 by 2 Digit Multiplication Worksheet (Ncalculators): each exercise contains a 2 digit multiplicand and a 2 digit multiplier; a matching answer key is generated with every new worksheet.
- Multiply by 2, Horizontal Questions, Full Page: a printable basic multiplication worksheet for practicing multiplying by 2, with questions that change on each visit.
- Grade 2 Multiplication Worksheets (K5, free printable): emphasize early multiplication skills, in particular recall of the 2, 5, and 10 times tables, multiplying by whole tens, and solving missing-factor problems.
- 2 by 2 Digit Multiplication Worksheet (grade 4): printable and downloadable practice multiplying a 2 digit multiplicand by a 2 digit multiplier, with answers.
- Multiplying 2 Digit by 2 Digit Numbers with Comma Separated Thousands (Math-Drills): a long multiplication worksheet that may be printed, downloaded, or saved for classroom or home use.
- Multiplying 2 Digit by 2 Digit Numbers (A) (Math-Drills): a long multiplication worksheet that may be printed, downloaded, or saved for classroom or home use.
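All of these resources boil down to the same task: pick pairs of factors and keep an answer key. Here is a minimal sketch of such a generator in Python; the function name, problem count, and layout are illustrative only and are not taken from any of the sites above.

```python
import random

def make_worksheet(n_problems=12, digits=2, seed=None):
    """Generate multi-digit multiplication practice problems as (a, b) pairs."""
    rng = random.Random(seed)
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1   # e.g. 10..99 for 2 digits
    return [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(n_problems)]

if __name__ == "__main__":
    problems = make_worksheet(seed=42)
    for i, (a, b) in enumerate(problems, 1):
        print(f"{i:2d}.  {a} x {b} = ______")
    print("\nAnswer key:")
    for i, (a, b) in enumerate(problems, 1):
        print(f"{i:2d}.  {a} x {b} = {a * b}")
```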
Regarding the process of expanding the money supply, in theory, the government could expand the money supply as much as it wants by simply creating more money. However, there are at least two limits on this process.
One limit is inflation. If the government simply creates too much money, inflation will occur. Inflation eats away at the value of the newly created money and thereby reduces the real expansion of the money supply.
A second limit is the value of the money multiplier. For example, the government can try to create more money by lowering interest rates so that more people will borrow. But if people are not willing to borrow (because, for example, businesses are afraid to expand), the multiplier effect for any increase by the government will be reduced.
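To make the multiplier limit concrete, here is a minimal sketch of the textbook deposit-expansion model. It assumes a simple fractional-reserve setup in which every loan is fully re-deposited; the reserve ratio and deposit figures are illustrative, not real policy numbers.

```python
def deposit_expansion(initial_deposit, reserve_ratio, rounds=50):
    """Simulate repeated re-lending of deposits under a reserve requirement."""
    total_money = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_money += deposit
        deposit *= (1 - reserve_ratio)   # the lent-out fraction is re-deposited
    return total_money

initial = 1000.0
ratio = 0.10
simulated = deposit_expansion(initial, ratio)
theoretical = initial / ratio            # simple money multiplier: 1 / reserve ratio
print(f"simulated expansion : {simulated:,.2f}")
print(f"theoretical maximum : {theoretical:,.2f}")
# If people or banks hold cash instead of re-lending, the effective multiplier
# falls below 1 / reserve_ratio, which is exactly the limit described above.
```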
These factors limit the money supply's ability to grow.
The concept of expanding the money supply involves some different activities. One way that the government can increase the money supply is by printing more money. This, however, usually has negative consequences for the economy because it will lead to significant inflation. When more money is being printed, people will have access to that money. People with the ability to spend more money will create an increased demand for products. Unless the supply of products also increases, the result will be inflation.
Another way the government can increase the money supply is to lower interest rates. This is usually done to stimulate the economy. When interest rates drop, people may be more willing to take loans to create new businesses or to expand existing businesses since the cost of borrowing will decrease. However, the key to this working is that people need to actually take the loans to create the jobs that will help the economy grow. If interest rates drop and investment doesn’t increase, the economy will likely not grow.
Listen and Learn Science/Chemical Reactions
- 1 Chemical Reactions.
- 1.1 Chemical Reactions.
- 1.2 Alloys.
- 1.3 Solutions.
- 1.4 Valence.
- 1.5 Noble Gases.
- 1.6 Valence of elements.
- 1.7 Bonding.
- 1.8 Nomenclature of a chemical reaction.
- 1.9 Balancing a chemical reaction.
- 1.10 Types of chemical reactions.
- 1.11 Redox.
- 1.12 Applications of chemical reactions.
- 1.13 Energy from Chemicals.
- 1.14 Extraction of minerals.
- 1.15 Manufacture of Chemical compounds.
- 1.16 Pharmaceutical drugs.
- 1.17 Biochemistry.
- 1.18 Human Genome.
A chemical reaction, is a process that leads, to the transformation, of one set of substances, to another.
In a chemical reaction, elements and compounds react, with each other, resulting in the formation, of other elements, and compounds.
This reaction involves chemical bonding.
This means, the elements get attached, to each other, at the atomic level. In nature, elements tend to have, an affinity to bond, with other elements.
So, we will find most of the elements, in nature as compounds. There is an exception to this. Noble gases do not combine, with other elements.
Other elements have varying degrees, of affinity to combine, with other elements.
This degree of affinity is related, to the atomic structure, of the element. Specifically, it is related, to the number of electrons, in the outer most shell, of the atom.
This gives a valence to an element.
An alloy is a mixture, of two or more, elements.
An alloy is not a compound of the elements.
There is no chemical bonding of the elements, in an alloy.
Steel is an example, of an alloy of iron and Carbon.
Bronze is an alloy of copper and tin.
Alloys can be very useful materials with different properties.
For example steel is much stronger, than iron.
Bronze is much stronger than copper or tin.
When sugar is dissolved in milk, there is no chemical reaction.
A solution of milk, and sugar, is formed.
If we add water to this, again, there is no chemical reaction.
We just have a solution, of diluted milk and sugar.
The tendency of an element to combine with another element, is called the valence of the element.
The Valence of hydrogen is different from, the Valence of Carbon,
which is different from, the Valence of Oxygen.
So, is the case, with other elements.
Interestingly, the Valence of an element,
correlates to the way, the electrons are distributed in the shells, surrounding the nucleus.
The first shell, called the K shell, can hold 2 electrons.
The second shell, called the L shell, can hold 8 electrons.
The third shell, called the M shell, can hold 18 electrons.
If the outer most shell is occupied to full capacity,
the atom does not have a tendency to combine with other atoms.
Also, if the number of electrons in the outer most shell is equal to 8, the atom does not have a tendency to combine with other atoms.
In all other cases, the atom has a tendency to combine with other atoms.
Noble gases do not react with other elements. They are neutral, and inert.
Helium has an, atomic number of 2. It has two electrons, in the first, or K shell. The K shell is full. So, helium does not combine, with other elements. We can also say, the Valence of helium is, 0.
Neon has an, atomic number of 10. It has a total, of 10 electrons. The K shell has 2 electrons. The L shell has 8 electrons. Since the outer most shell, has 8 electrons, neon does not have a tendency, to combine with other elements. We can also say, the Valence of neon is, 0. Other examples of inert gases are, Argon, and Krypton.
Valence of elements.
Hydrogen, has an atomic number, of 1. It has one electron, in the K shell. The K shell has a capacity of 2, so there is scope, for hydrogen to accommodate, one more electron. When there is such a capacity, atoms tend to share, an electron from another atom. In this case, it has a capacity to share, one electron. The Valence, or combining power, of hydrogen is 1.
Carbon, has an atomic number, of 6. It has two electrons, in the K shell. It has 4 electrons, in the L shell. The capacity of the L shell, is 8. So, Carbon has the capacity, to share four more electrons. The Valence of Carbon, is 4.
Oxygen, has an atomic number, of 8. It has 2 electrons, in the K shell. It has 6 electrons, in the L shell. The capacity of the L shell, is 8. So, Oxygen has the capacity, to share two more electrons. The Valence of Oxygen, is 2.
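The shell-filling rule used in these examples can be written as a short calculation. The sketch below assumes the simplified K, L, M capacities of 2, 8, and 18 used in this chapter, so it only covers light elements; it illustrates the rule in the text, not a general chemistry routine.

```python
SHELL_CAPACITIES = [2, 8, 18]   # simplified K, L, M shells, as in the text

def outer_shell(atomic_number):
    """Return (electrons in the outermost shell, capacity of that shell)."""
    remaining = atomic_number
    for capacity in SHELL_CAPACITIES:
        if remaining <= capacity:
            return remaining, capacity
        remaining -= capacity
    raise ValueError("element too heavy for this simplified model")

def simple_valence(atomic_number):
    """Valence under the rule above: 0 if the outer shell is full or holds 8."""
    electrons, capacity = outer_shell(atomic_number)
    if electrons == capacity or electrons == 8:
        return 0
    return min(electrons, capacity - electrons)   # electrons to share or accept

for name, z in [("hydrogen", 1), ("helium", 2), ("carbon", 6),
                ("nitrogen", 7), ("oxygen", 8), ("neon", 10)]:
    print(f"{name:8s} (Z={z:2d}): valence {simple_valence(z)}")
```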
A chemical reaction, involves chemical bonding. Bonding is a way, elements share the electrons, in the outer most shell. Hydrogen combines with Oxygen, in a chemical reaction, to form water, or H 2 O. Two atoms of hydrogen, combine with one atom of Oxygen, to form H 2 O. Each of the hydrogen atoms, shares one electron, with the Oxygen atom. After the sharing, hydrogen feels fulfilled, in the K shell. Now the K shell, has one of its own electron, and one shared electron. The Oxygen atom gets, two shared electron, from two hydrogen atoms. The Oxygen atom, feels fulfilled, in the L shell. It has six of its own electrons, and two shared electrons, making a total of 8 electrons, in the L shell. We can say Oxygen, has one bond, with each of the two, hydrogen atoms. Each bond is represented, with a dash. So, H dash, O dash, H, Is the bonded formula, for water, or H 2 O. This bond of sharing, one electron is called, a covalent bond.
Carbon combines with Oxygen, to form Carbon dioxide. One atom of Carbon combines, with two atoms of Oxygen, to form C O 2. One Carbon atom shares, two electrons, with each Oxygen atom. Now Carbon has four, of its own electrons, and four shared electrons, in the L shell. So the Carbon atom gets fulfilled, with the 8 virtual electrons, in the L shell. The Oxygen atom has six, of its own electrons, and two shared electrons, in the L shell. So the Oxygen atom gets fulfilled, with the 8 virtual electrons, in the L shell. This sharing, of two electrons, is called a double bond. This is represented, with two parallel dashes, just like a, equal to, sign. So, O double bond Carbon, double bond O, is C O 2.
Nitrogen combines, with hydrogen, to form, ammonia, or N H 3. Nitrogen, has an atomic number, of 7. It has 2 electrons, in the K shell. It has 5 electrons, in the L shell. It has an affinity, for 3 more electrons, in the L shell. 3 hydrogen atoms share, 1 electron each, with a nitrogen atom. This gives a virtual, 8 electrons, to nitrogen, in the L shell. 5 of its own electrons, and 3 shared electrons. This combination of Nitrogen, and hydrogen, results in a stable compound, N H 3 or ammonia. Since each hydrogen shares, one electron with nitrogen, ammonia has, three separate single bonds, one from nitrogen, to each hydrogen atom. It is represented, with three dashes, N with a dash, to each H. A true triple bond, where the same two atoms share three pairs of electrons, occurs in molecules such as nitrogen gas, N 2, not in ammonia.
Chemical reactions, are all about sharing of electrons, and bonding. In a chemical reaction, a bond might be formed, existing bonds might be broken, and some other, new bonds may be formed. This concept is true, for all chemical reactions, however big or complex, the chemical reaction is.
This also gives us the idea, that chemical reactions, are not random. They occur, due to some scientific reason, for elements and compounds, to react with one another. Chemical reactions, result in a rearrangement, of chemical bonds.
Nomenclature of a chemical reaction.
We can state that, Carbon combines with Oxygen, to form Carbon dioxide. This is a qualitative way, of stating a chemical reaction. We can also state it, like an equation. C + O 2, right arrow, C O 2. This is an example, of a chemical equation. Carbon and Oxygen, are called, the reactants. The right arrow, can be interpreted, as "results in". In this module, we will read the right arrow as "results in". Carbon dioxide is the product. We can show, an up arrow, after C O 2, to signify that, it is a resultant gas. If heat is required for the reaction, we can say so, on top of the right arrow. If energy is released, in the reaction, we can say, C O 2 + energy. If the reaction results in a precipitate, we can represent, it with a down arrow.
Catalysts are substances, which facilitate a chemical reaction. Catalysts are not consumed, in the reaction. They only help the reaction, to happen, or to happen faster. If a catalyst is required, for a reaction, we can specify that also, in the chemical equation.
A chemical equation, is therefore a concise, and convenient way, of representing, a chemical reaction, completely.
Balancing a chemical reaction.
When a chemical reaction takes place, no matter can be lost. No new matter can be created. This means, all the elements, that are present in the reactants, should be present, in the products. Magnesium reacts with Oxygen, to produce magnesium oxide. If we say M g + O 2, right arrow, M g O. This will not be fully correct. The reactant O 2, has 2 Oxygen atoms. M g O, has only one, Oxygen atom. The other, Oxygen atom cannot disappear, so we have to balance the equation. So, 2M g + O 2, results in, 2 M g O. Reactants have 2 magnesium atoms, and 2 Oxygen atoms. Product has 2 magnesium atoms, and 2 Oxygen atoms. The chemical reaction, is balanced. Some examples, of balanced chemical reactions. Hydrogen reacts with Oxygen, resulting in water, or H 2 O. 2H 2 + O 2, results in, 2 H 2 O. Aluminium reacts with chlorine, to form aluminium chloride. 2A l + 3, C l 2, results in, 2A l, C l 3. We note that, simple arithmetic of multiplying, the number of reactants and products, to preserve the number of atoms, of the elements, balances the equation. 2 aluminium atoms, + 3 chlorine molecules, results in, 2 aluminium chloride molecules. Zinc reacts with hydrochloric acid, resulting in zinc chloride, and hydrogen. Z n + 2 H C l, results in, Z n C l 2 + H 2.
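The bookkeeping in these examples can be checked mechanically by counting atoms on each side. Here is a minimal sketch, with two of the reactions above written out by hand as element-count dictionaries; no formula parsing is attempted, and the data layout is illustrative only.

```python
from collections import Counter

def total_atoms(side):
    """Sum atom counts over (coefficient, {element: count}) terms on one side."""
    totals = Counter()
    for coeff, formula in side:
        for element, count in formula.items():
            totals[element] += coeff * count
    return totals

# 2 Mg + O2 -> 2 MgO
reactants = [(2, {"Mg": 1}), (1, {"O": 2})]
products = [(2, {"Mg": 1, "O": 1})]
print(total_atoms(reactants) == total_atoms(products))   # True: balanced

# 2 Al + 3 Cl2 -> 2 AlCl3
reactants = [(2, {"Al": 1}), (3, {"Cl": 2})]
products = [(2, {"Al": 1, "Cl": 3})]
print(total_atoms(reactants) == total_atoms(products))   # True: balanced
```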
Types of chemical reactions.
There is some common type, of chemical reactions. It is worth knowing, some common types, of chemical reactions.
Sulphur combines with Oxygen, to form sulphur dioxide. S + O 2, results in, S O 2. Nitrogen combines with hydrogen, to form ammonia. N 2 + 3H 2, results in, 2 N H 3. Ammonia combines with hydrochloric acid, to form ammonium chloride. N H 3 + H C l, results in, N H 4 C l. All these are examples, of 2 substances combining, to form, a single substance. These reactions are called, chemical combination reactions.
Mercuric oxide decomposes, into mercury, and Oxygen. 2H g O, results in, 2H g + O 2. In these examples, one substance decomposes, into two or more, substances.
Potassium permanganate has the formula, K M n O 4. It decomposes into potassium manganate, manganese dioxide, and Oxygen. 2K M n O 4, results in, K 2 M n O 4, + M n O 2, + O 2.
These reactions are called, chemical decomposition reactions.
Copper sulphate, has the formula, C u S O 4. Iron sulphate or ferrous sulphate, has the formula, F e S O 4. Copper sulphate reacts with iron, to form ferrous sulphate, and copper is precipitated. C u S O 4 + F e, results in, F e S O 4 + C u. Iron has replaced copper, in this reaction.
Chlorine reacts with potassium iodide, to form potassium chloride, and iodine is precipitated. C l 2 + 2K I, results in, 2K C l, plus I 2. Chlorine has replaced iodine, in this reaction.
The formula, for sulphuric acid, is H 2 S O 4. Zinc reacts with sulphuric acid, to form Zinc sulphate, and hydrogen is released. Z n + H 2 S O 4, results in, Z n S O 4 + H 2. Zinc has replaced hydrogen, in this reaction. In all these examples, one substance replaces, another substance, in a compound. Normally, the more reactive substance, replaces the less, reactive substance. That is, iron is more reactive, than copper. Chlorine is more reactive, than iodine. Zinc is more reactive, than hydrogen. All these reactions are examples, of chemical displacement reactions.
Chemical double displacement.
The formula, for magnesium sulphate, is M g S O 4. The group S O 4, is referred to, as sulphate. The formula for sodium Carbonate, is N a 2, C O 3. The group C O 3, is referred to, as Carbonate. Magnesium sulphate reacts, with sodium Carbonate, to form magnesium Carbonate, and sodium sulphate. M g S O 4 + N a 2, C O 3, results in, M g C O 3 + N a 2, S O 4. In this reaction, magnesium and sodium, have replaced each other.
The formula, for calcium chloride, is C a C l 2. The formula for sodium Carbonate, is N a 2, C O 3. Calcium chloride reacts, with sodium Carbonate, resulting in calcium Carbonate, plus sodium chloride. C a C l 2 + N a 2, C O 3, results in, C a C O 3 + 2N a C l. In these examples, the two elements replace each other, in the compounds. This is called, double displacement. These reactions are examples, of chemical double displacement reactions.
When Oxygen is added, to a substance, it is said to be, oxidised. When Oxygen is removed, from a substance, it is said to be, reduced. Oxidation and reduction are very common, chemical reactions. These reactions are called, redox reactions. R e d stands, for reduction. O x stands, for oxidation.
Magnesium reacts with Oxygen, to form magnesium oxide. 2 M g + O 2, results in, 2 M g O.
The formula, for iron oxide, is F e 2 O 3. Iron oxide reacts with Carbon, to form iron, and Carbon dioxide is released. 2 F e 2 O 3 + 3C, results in, 4 F e + 3C O 2. We say that iron oxide, is reduced to iron, by removing Oxygen. We call this, a reduction reaction.
Traditionally, addition of Oxygen, to a substance, was called oxidation. Removal of Oxygen, was called reduction. The definition has since, expanded. Oxidation is now considered, as loss of electrons. Reduction is now considered, as gain of electrons. For example, 2N a + C l 2, results in, 2 N a C l. Sodium is considered, as being oxidised. Chlorine is considered, as being reduced. Oxidation and reduction reactions, or redox reactions, are important because, many chemical reactions are redox reactions.
Applications of chemical reactions.
Chemical reactions are involved, in many energy producing applications. Chemical reactions are used, in industry, to produce substances that we want. These substances are produced, from other substances, which already exist. In a chemical process, a series of chemical reactions, may be involved. That is, we may use, multiple steps to get, the required substances.
Energy from Chemicals.
One of the most common, chemical reactions is combustion. When a substance ignites, it burns, by combining with Oxygen. This produces heat energy. Heat energy is very useful, to human beings. Possibly, this was the first man made, chemical reaction. This happened, when we discovered fire. Fire was able to burn, dried wood, to produce heat. Fire was used by man, for cooking. Fire was also used to keep him warm, in cold climates.
Today we use, coal in thermal power plants. When Coal is burnt, with Oxygen, it produces heat. This is a chemical reaction. This heat is used, to produce steam. The steam drives turbines, which produce electricity. Coal is a fossil fuel. It was produced naturally, millions of years ago, from organic matter. Coal is today used as a major source, of chemical energy, in thermal power plants.
Most vehicles use, combustion engines. These combustion engines, use petrol, or diesel, as fuel. When petrol undergoes combustion, in an engine, it produces heat. This is a chemical reaction. The gases rapidly expand. This energy is converted, to mechanical energy, to propel the vehicle. Most forms of transportation we use, rely on combustion engines. Car engines, truck engines, jet engines, etc, are all combustion engines. Petrol and diesel are the most widely used fuels, for transportation. Petrol and diesel are extracted, from naturally occurring, crude oil. Crude oil, is a fossil fuel. It was produced naturally, millions of years ago, from organic matter. The availability of fossil fuels, is limited. Fossil fuels cannot, be regenerated. In the future, we have to look for alternatives, to fossil fuels, to meet our energy and transportation needs.
Extraction of minerals.
Many of the minerals that we need, are present in the natural state, as compounds. These naturally occurring compounds are called, ores. We mine the ores, from the earth. We use a chemical process, to extract the mineral, we want, from the ore. For example, Iron is present naturally, in iron ore, as Iron oxide or F e 2, O 3. This naturally present Iron ore, is called Hematite. We can mine this ore. This is taken to a steel plant. In the steel plant furnace, Carbon is combined with Oxygen, to produce Carbon monoxide. 2C + O 2, results in, 2 C O. In the next step, at high temperature, Iron oxide reacts, with Carbon monoxide, to produce iron, and Carbon dioxide. F e 2, O 3 + 3 C O, results in, 2 F e + 3 C O 2. In this way, molten Iron is extracted, from the furnace. This is a simple example, of a multiple step chemical process. Useful Iron is extracted, from an otherwise unusable compound, Iron oxide or F e 2, O 3.
Copper is present, as copper sulphide, C u 2 S, in copper ore. This copper ore is called, Chalcocite. With a suitable chemical process, copper can be extracted, from this ore.
Aluminium is present as aluminium oxide, as A l 2 O 3. This is called bauxite. With a suitable chemical process, aluminium can be extracted, from this ore.
Manufacture of Chemical compounds.
Chemical compounds have unique properties. The properties of a compound are usually very different, from the properties of the elements, that constitute it. For example, common salt is sodium chloride. The properties of sodium chloride, are very different from sodium, or chlorine. This is one of the reasons, that chemistry becomes, very interesting. We can discover and manufacture, many compounds, which are useful to us. We already use, many such artificially manufactured compounds, in our daily life. Some examples are soap, toothpaste, ink, plastics, glue, cosmetics, etc. Chemicals are used in the manufacture of fertilisers and pesticides. Fertilisers and pesticides are widely used in agriculture. There are many such chemical compounds, that we use, and which are used, in industry. Many more chemical compounds, useful to humans, are being discovered.
Most allopathic medicines we take, are artificially manufactured, chemical compounds. These drugs have a wide variety, of uses. They are used to cure, many illnesses that we could get. The pharmaceutical industry, basically, uses a number of chemical reactions, to produce these drugs. Scientists are still discovering, new and better drugs.
The process of life, involves a series, of Bio chemical reactions. These bio chemical reactions are happening silently, every minute in our lives. All living organisms are involved, in some kind of bio chemical reactions. Let us take a simple example, All plants absorb sunlight. Sunlight is a form of energy. Plants also breathe in Carbon dioxide, and take in water, through their roots. Carbon dioxide combines with water, with sunlight, resulting in glucose, and Oxygen. 6 C O 2 + 6 H 2 O + sunlight, results in, C 6, H 12, O 6 + 6 O 2. So plants breathe in Carbon dioxide, and breathe out Oxygen. They produce glucose, which is a source of food, and energy.
Animals and human beings, breathe in Oxygen. In the cells of human beings, energy is being released. A typical example of this is, Glucose + Oxygen, results in, Carbon dioxide + water + energy. C 6, H 12, O 6 + 6 O 2, results in, 6 C O 2 + 6 H 2 O + energy. The released energy is used, for our day to day living. We need energy at a rate of about, 80 watts, just to live. This is supplied, by the food we eat. Glucose is just one example, of food. Carbohydrates, proteins, and fats, are other, essential foods that we eat. Carbohydrates, proteins, and fats, are also organic chemical compounds. These compounds are a way to store energy, required for life. Our body digests these chemical compounds, through a series, of bio chemical reactions, to produce energy.
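The figure of about 80 watts can be related to everyday food-energy units with a simple unit conversion. The 80 watt baseline is the text's own figure; the conversion factors below are the standard ones (1 watt = 1 joule per second, 1 food Calorie = 4184 joules).

```python
power_watts = 80.0                        # baseline metabolic power from the text
seconds_per_day = 24 * 60 * 60
joules_per_day = power_watts * seconds_per_day
kcal_per_day = joules_per_day / 4184.0    # 1 kcal (food Calorie) = 4184 J
print(f"{joules_per_day / 1e6:.1f} MJ per day  ~  {kcal_per_day:.0f} kcal per day")
# Roughly 6.9 MJ, or about 1,650 kcal, per day just to keep the body running.
```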
Our body is built, from organic chemical compounds. Most of these are proteins. There are many types of proteins. Proteins are chains, of amino acids. The body synthesises these proteins. They are used for building the body, for replacement, and for maintaining the body. These bio chemical reactions are happening, silently, all the time.
Where does the human body get the knowledge, to build the organs of the body. The organs could be the heart, the brain, the kidney, the liver, muscles, bones etc. All these organs, are built up through, a series of bio chemical reactions. The genome is the encyclopaedia, of the knowledge of life. The human genome comprises, 23 chromosomes. These are like, 23 chapters, of an encyclopaedia. Each chromosome, is one very long molecule, of D N A. D N A, is deoxyribonucleic acid. The D N A molecule, is two strands, coiled around each other, in the form of a double helix. It is like a pair of snakes, coiled around each other. Along each chromosome, are stretches of D N A, called genes. The human genome has, roughly 20 to 25 thousand genes. Each gene is like a chapter, in the encyclopaedia. The genes contain the knowledge, to build the organs of life. The genes themselves are written, in only four basic, organic compounds. These four basic compounds are called, G, A, C and T. Long chains of different combinations, of G A C T molecules, make the D N A. All these chains put together, in 23 chromosomes, form the genome. This genome is the encyclopaedia, of the knowledge of life. Each of the trillions of cells, in the human body, has the genome, in its nucleus. The D N A strand is, only a few nanometers wide. It teaches the chemicals in the body, to build, organise, maintain, and run, the whole of life. It is amazing, that the knowledge of life, can be contained, in a chemical compound. This makes Bio-chemistry, a mysteriously interesting, branch of science. We are just beginning, to discover, the mystery that is life. Our level of knowledge is still, in the first chapter, of this discovery. Future chapters are waiting, for future scientists, to write them.
Scientists have identified more than 52,000 meteorites—space rocks that have crashed into Earth. Of these rocks, fewer than 100 have been traced to the planet Mars, one of our closest neighbors in the solar system.
Scientists didn't know these strange space rocks came from Mars until NASA's Viking spacecraft successfully landed on the Red Planet in the 1970s. The Viking space probes measured chemicals in the Martian atmosphere and surface. Scientists realized many meteorites found on Earth contained the same, precise concentrations of rocks, minerals, and even trapped gases. These rocks could only have come from Mars.
Martian meteorites were formed as asteroids and other space rocks crashed into the Martian surface millions of years ago. These collisions formed craters and sent tons of dust and rocks into the Martian atmosphere. Some collisions were so violent that debris was forced out of Mars' gravitational field altogether in a process called spallation.
Ejected Martian rocks drifted in space for millions of years before being pulled into Earth's gravitational field. Astronomers know this because they measure the cosmic ray exposure (CRE) of Martian meteorites. In outer space, asteroids and other space rocks are exposed to high-energy nuclear particles called cosmic rays. Cosmic rays turn some elements found in space rocks into unstable isotopes of those elements, which decay at very predictable rates. This radioactive decay allows scientists to accurately estimate the amount of time space rocks spent between their residences in the atmospheres of Mars and Earth.
Major collisions that create meteorites are much more rare now than they were in the early solar system. But even today, Martian meteorites are still blazing into Earth's atmosphere as "shooting stars"—and dozens, and maybe thousands, of meteorites are probably unidentified or undiscovered in the barren deserts of Antarctica and the Sahara, where most meteorites are found.
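The cosmic-ray exposure idea rests on ordinary exponential decay of the unstable isotopes produced in space. The sketch below shows only that decay arithmetic; the half-life and the measured fraction are made-up numbers for illustration, and real CRE dating uses specific isotope systems and production rates not given here.

```python
import math

def decay_time(fraction_remaining, half_life_years):
    """Years needed for an unstable isotope to decay to the given fraction."""
    decay_constant = math.log(2) / half_life_years
    return -math.log(fraction_remaining) / decay_constant

# Hypothetical numbers, for illustration only.
half_life = 1.0e6      # assumed half-life of one million years
remaining = 0.25       # assume a quarter of the original isotope is left
print(f"elapsed time: {decay_time(remaining, half_life):,.0f} years")   # ~2 million
```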
Astronomers and geologists have classified Martian meteorites into four major categories: shergottites, nakhlites, chassignites, and an oddball "other" or "unclassified" group.
- Shergottites are named after the village where the first sample was found, Sherghati, Bihar, India. Shergottites are mafic rocks, meaning they are rich in magnesium and iron. They are the most abundant type of Martian meteorite, and also among the youngest. Scientists think shergottites may have formed as recently as 165 million years ago.
- Nakhlites are named after the suburb where the first sample was found, Nakhla, Alexandria, Egypt. Nakhlites are volcanic rocks rich in the silicate mineral augite. They also provide evidence of abundant liquid water on Mars at some point in the planet's history. Nakhlites are about 1.3 billion years old.
- Chassignites are named after the town where the first sample was found, Chassigny, Haute Marne, France. Very few chassignites have been identified, but all are rich in the mineral olivine. The olivine chemical structure leads scientists to think chassignites formed in the Martian mantle, not its crust. Chassignites are about 1.4 billion years old.
- Rare, ungrouped "other" Martian meteorites have unusual characteristics. The most famous Martian meteorite in the world, ALH 84001, for instance, is classified as an "orthopyroxenite" for its unique crystal silicate structure. ALH 84001 is less famous for its geology, however, than its possible biology—it may preserve fossils of primitive bacteria, providing evidence for life on Mars more than 4 billion years ago.
Vocabulary (term, part of speech, definition):
- accurately (adverb): exactly or perfectly.
- Antarctic Desert (noun): dry, barren rocks covered by an ice sheet that makes up most of the continent of Antarctica.
- asteroid (noun): irregularly shaped planetary body, ranging from 6 meters (20 feet) to 933 kilometers (580 miles) in diameter, orbiting the sun between Mars and Jupiter.
- astronomer (noun): person who studies space and the universe beyond Earth's atmosphere.
- atmosphere (noun): layers of gases surrounding a planet or other celestial body.
- bacteria (plural noun): (singular: bacterium) single-celled organisms found in every ecosystem on Earth.
- biology (noun): study of living things.
- chassignite (noun): Martian meteorite composed largely of the mineral olivine. Also called olivine achondrite.
- concentration (noun): measure of the amount of a substance or grouping in a specific place.
- cosmic ray (noun): radiation originating in outer space and consisting mostly of high-energy atomic nuclei.
- crater (noun): bowl-shaped depression formed by a volcanic eruption or impact of a meteorite.
- crust (noun): rocky outermost layer of Earth or other planet.
- crystal (noun): type of mineral that is clear and, when viewed under a microscope, has a repeating pattern of atoms and molecules.
- debris (noun): remains of something broken or destroyed; waste, or garbage.
- desert (noun): area of land that receives no more than 25 centimeters (10 inches) of precipitation a year.
- dust (noun): tiny, dry particles of material solid enough for wind to carry.
- eject (verb): to get rid of or throw out.
- element (noun): chemical that cannot be separated into simpler substances.
- estimate (verb): to guess based on knowledge of the situation or object.
- gas (noun): state of matter with no fixed shape that will fill any container uniformly. Gas molecules are in constant, random motion.
- gravitational field (noun): influence that a massive object extends around itself, in which another massive object would experience an attractive force.
- mafic (adjective): having to do with igneous rocks that contain large amounts of iron and magnesium.
- mantle (noun): middle layer of the Earth, made of mostly solid rock.
- measure (verb): to determine the numeric value of something, often in comparison with something else, such as a determined standard value.
- meteorite (noun): type of rock that has crashed into Earth from outside the atmosphere.
- mineral (noun): inorganic material that has a characteristic chemical composition and specific crystal structure.
- nakhlite (noun): Martian meteorite composed largely of the silicate mineral augite.
- olivine (noun): type of silicate mineral.
- orbit (noun): path of one object around a more massive object.
- outer space (noun): space beyond Earth's atmosphere.
- particle (noun): small piece of material.
- planet (noun): large, spherical celestial body that regularly rotates around a star.
- precise (adjective): regular or able to be forecasted.
- primitive (adjective): simple or crude.
- radioactive decay (noun): transformation of an unstable atomic nucleus into a lighter one, in which radiation is released in the form of alpha particles, beta particles, gamma rays, and other particles. Also called radioactivity.
- residence (noun): home or place where a person lives.
- rock (noun): natural substance composed of solid mineral matter.
- Sahara Desert (noun): world's largest desert, in north Africa.
- shergottite (noun): Martian meteorite composed mostly of mafic and ultramafic rocks.
- shooting star (noun): rocky debris from space that enters Earth's atmosphere. Also called a meteor.
- silicate (noun): most common group of minerals, all of which include the elements silicon (Si) and oxygen (O).
- solar system (noun): the sun and the planets, asteroids, comets, and other bodies that orbit around it.
- spacecraft (noun): vehicle designed for travel outside Earth's atmosphere.
- space probe (noun): set of scientific instruments and tools launched from Earth to study the atmosphere and composition of space and other planets, moons, or celestial bodies.
- spallation (noun): process in which fragments of material (spall) are ejected from a larger body due to impact or stress.
- unstable isotope (noun): atom with an unbalanced number of neutrons in its nucleus (isotope) that is radioactive, or decays by emitting particles from its nucleus. Also called a radionuclide.
- volcanic (adjective): having to do with volcanoes.
In mathematics, a complex number is a number of the form
a + bi,
where a and b are real numbers, and i is the imaginary unit, with the property i^2 = −1. The real number a is called the real part of the complex number, and the real number b is the imaginary part. Real numbers may be considered to be complex numbers with an imaginary part of zero; that is, the real number a is equivalent to the complex number a + 0i.
For example, 3 + 2i is a complex number, with real part 3 and imaginary part 2. If z = a + bi, the real part (a) is denoted Re(z), or ℜ(z), and the imaginary part (b) is denoted Im(z), or ℑ(z).
Complex numbers can be added, subtracted, multiplied, and divided like real numbers and have other elegant properties. For example, real numbers alone do not provide a solution for every polynomial algebraic equation with real coefficients, while complex numbers do (this is the fundamental theorem of algebra).
- 1 Equality
- 2 Notation and operations
- 3 The field of complex numbers
- 4 The complex plane
- 5 Absolute value, conjugation and distance
- 6 Complex fractions
- 7 Matrix representation of complex numbers
Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal. That is, a + bi = c + di if and only if a = c and b = d.
Notation and operations
The set of all complex numbers is usually denoted by C, or in blackboard bold by ℂ (Unicode U+2102). The real numbers, R, may be regarded as "lying in" C by considering every real number as a complex number: a = a + 0i.
Complex numbers are added, subtracted, and multiplied by formally applying the associative, commutative and distributive laws of algebra, together with the equation i^2 = −1:
(a + bi) + (c + di) = (a + c) + (b + d)i
(a + bi) − (c + di) = (a − c) + (b − d)i
(a + bi)(c + di) = (ac − bd) + (bc + ad)i
Division of complex numbers can also be defined (see below). Thus, the set of complex numbers forms a field which, in contrast to the real numbers, is algebraically closed.
In mathematics, the adjective "complex" means that the field of complex numbers is the underlying number field considered, for example complex analysis, complex matrix, complex polynomial and complex Lie algebra.
The field of complex numbers
Formally, the complex numbers can be defined as ordered pairs of real numbers (a, b) together with the operations:
(a, b) + (c, d) = (a + c, b + d)
(a, b) · (c, d) = (ac − bd, bc + ad)
In C, we have:
- additive identity ("zero"): (0, 0)
- multiplicative identity ("one"): (1, 0)
- additive inverse of (a,b): (−a, −b)
- multiplicative inverse (reciprocal) of non-zero (a, b): (a / (a^2 + b^2), −b / (a^2 + b^2))
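These ordered-pair rules translate directly into code. The sketch below works on plain pairs rather than Python's built-in complex type, so that each rule above appears literally; the helper names are illustrative.

```python
def add(z, w):
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, b * c + a * d)

def inv(z):
    a, b = z
    n = a * a + b * b              # squared modulus; must be non-zero
    return (a / n, -b / n)

i = (0.0, 1.0)
print(mul(i, i))                   # (-1.0, 0.0): i^2 = -1
z = (3.0, 2.0)                     # the pair form of 3 + 2i
print(mul(z, inv(z)))              # approximately (1.0, 0.0): z * z^(-1) = 1
```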
Since a complex number a + bi is uniquely specified by an ordered pair (a, b) of real numbers, the complex numbers are in one-to-one correspondence with points on a plane, called the complex plane.
The complex plane
A complex number z can be viewed as a point or a position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram . The point and hence the complex number z can be specified by Cartesian (rectangular) coordinates. The Cartesian coordinates of the complex number are the real part x = Re(z) and the imaginary part y = Im(z). The representation of a complex number by its Cartesian coordinates is called the Cartesian form or rectangular form or algebraic form of that complex number.
Alternatively, the complex number z can be specified by polar coordinates. The polar coordinates are r = |z| ≥ 0, called the absolute value or modulus, and φ = arg(z), called the argument of z. For r = 0 any value of φ describes the same number. To get a unique representation, a conventional choice is to set arg(0) = 0. For r > 0 the argument φ is unique modulo 2π; that is, if any two values of the complex argument differ by an exact integer multiple of 2π, they are considered equivalent. To get a unique representation, a conventional choice is to limit φ to the interval (-π,π], i.e. −π < φ ≤ π. The representation of a complex number by its polar coordinates is called the polar form of the complex number.
Conversion from the polar form to the Cartesian form
x = r cos φ
y = r sin φ
Conversion from the Cartesian form to the polar form
r = √(x^2 + y^2), φ = atan2(y, x)
Expressing φ through the ordinary one-argument arctangent of y/x requires rather laborious case differentiations, which is why many programming languages provide a two-argument variant of the arctangent function (usually called atan2) that handles the signs of x and y directly. A formula that uses the arccos function also requires fewer case differentiations: for r ≠ 0, φ = arccos(x / r) if y ≥ 0, and φ = −arccos(x / r) if y < 0.
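The conversions above map directly onto standard library calls. A minimal sketch using Python's math module; math.atan2 is the two-argument arctangent just mentioned, and cmath.polar would perform the Cartesian-to-polar step in a single call.

```python
import math

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, phi), with phi in [-pi, pi]."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)
    return r, phi

def to_cartesian(r, phi):
    """Polar (r, phi) -> Cartesian (x, y)."""
    return r * math.cos(phi), r * math.sin(phi)

r, phi = to_polar(3.0, 2.0)
print(r, phi)                    # modulus and argument of 3 + 2i
print(to_cartesian(r, phi))      # back to approximately (3.0, 2.0)
```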
Notation of the polar form
The notation of the polar form as
z = r (cos φ + i sin φ)
is called trigonometric form. The notation cis φ is sometimes used as an abbreviation for cos φ + i sin φ. Using Euler's formula it can also be written as
z = r e^(iφ),
which is called exponential form.
Multiplication, division, exponentiation, and root extraction in the polar form
Multiplication, division, exponentiation, and root extraction are much easier in the polar form than in the Cartesian form.
Using the sum and difference identities for sine and cosine, it is possible to obtain that
r1 (cos φ1 + i sin φ1) · r2 (cos φ2 + i sin φ2) = r1 r2 (cos(φ1 + φ2) + i sin(φ1 + φ2))
and
r1 (cos φ1 + i sin φ1) / (r2 (cos φ2 + i sin φ2)) = (r1 / r2) (cos(φ1 − φ2) + i sin(φ1 − φ2)).
Exponentiation with integer exponents follows; according to de Moivre's formula,
(cos φ + i sin φ)^n = cos(nφ) + i sin(nφ),
so that z^n = r^n (cos(nφ) + i sin(nφ)).
Exponentiation with arbitrary complex exponents is discussed in the article on exponentiation.
The addition of two complex numbers is just the addition of two vectors, and multiplication by a fixed complex number can be seen as a simultaneous rotation and stretching.
Multiplication by i corresponds to a counter-clockwise rotation by 90° (π/2 radians). The geometric content of the equation i^2 = −1 is that a sequence of two 90 degree rotations results in a 180 degree (π radians) rotation. Even the fact (−1) · (−1) = +1 from arithmetic can be understood geometrically as the combination of two 180 degree turns.
All the roots of any number, real or complex, may be found with a simple algorithm. The nth roots of z = r (cos φ + i sin φ) are given by
z_k = r^(1/n) (cos((φ + 2πk) / n) + i sin((φ + 2πk) / n))
for k = 0, 1, 2, …, n − 1, where r^(1/n) represents the principal (real, non-negative) nth root of r.
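The root formula is easy to check numerically. Here is a small sketch that computes all nth roots of a complex number and verifies them by raising each back to the nth power, using Python's cmath module for the polar decomposition; the function name is illustrative.

```python
import cmath

def nth_roots(z, n):
    """Return the n distinct nth roots of a non-zero complex number z."""
    r, phi = cmath.polar(z)
    root_r = r ** (1.0 / n)        # principal real nth root of the modulus
    return [cmath.rect(root_r, (phi + 2 * cmath.pi * k) / n) for k in range(n)]

for w in nth_roots(-8, 3):          # the three cube roots of -8
    print(w, "->", w ** 3)          # each w**3 is approximately -8
```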
Absolute value, conjugation and distance
The absolute value (or modulus or magnitude) of a complex number z = r e^(iφ) is defined as |z| = r. Algebraically, if z = a + bi, then |z| = √(a^2 + b^2).
One can check readily that the absolute value has three important properties:
- |z| ≥ 0, with |z| = 0 if and only if z = 0
- |z + w| ≤ |z| + |w| (triangle inequality)
- |z · w| = |z| · |w|
for all complex numbers z and w. It then follows, for example, that |1| = 1 and |z / w| = |z| / |w|. By defining the distance function d(z, w) = |z − w| we turn the set of complex numbers into a metric space and we can therefore talk about limits and continuity.
The complex conjugate of the complex number z = a + bi is defined to be a − bi, written as z̄ or z*. Geometrically, z̄ is the "reflection" of z about the real axis. The following can be checked:
- z̄ = z if and only if z is real
- 1 / z = z̄ / |z|^2 if z is non-zero.
The latter formula is the method of choice to compute the inverse of a complex number if it is given in rectangular coordinates.
That conjugation commutes with all the algebraic operations (and with many common functions) is rooted in the ambiguity in choice of i (−1 has two square roots). It is important to note, however, that the map z ↦ z̄ is not complex-differentiable.
We can divide a complex number (a + bi) by another complex number (c + di) ≠ 0 in two ways. The first way has already been implied: to convert both complex numbers into exponential form, from which their quotient is easily derived. The second way is to express the division as a fraction, then to multiply both numerator and denominator by the complex conjugate of the denominator. The new denominator is a real number.
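Both division routes are easy to show side by side. The short sketch below uses plain floats for the conjugate method so the multiply-by-the-conjugate step is visible, and uses Python's cmath only to cross-check the result via the exponential (polar) form; the function name is illustrative.

```python
import cmath

def divide_via_conjugate(a, b, c, d):
    """(a + bi) / (c + di) by multiplying top and bottom by (c - di)."""
    denom = c * c + d * d          # real: (c + di)(c - di) = c^2 + d^2
    real = (a * c + b * d) / denom
    imag = (b * c - a * d) / denom
    return real, imag

print(divide_via_conjugate(3, 2, 1, -1))    # (3 + 2i) / (1 - i) = 0.5 + 2.5i

# Cross-check with the exponential (polar) route.
z, w = complex(3, 2), complex(1, -1)
rz, pz = cmath.polar(z)
rw, pw = cmath.polar(w)
print(cmath.rect(rz / rw, pz - pw))         # approximately the same value
```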
Matrix representation of complex numbers
While usually not useful, alternative representations of the complex field can give some insight into its nature. One particularly elegant representation interprets each complex number as a 2×2 matrix with real entries which stretches and rotates the points of the plane. Every such matrix has the form
( a  −b )
( b   a )
where a and b are real numbers. The sum and product of two such matrices is again of this form. Every non-zero matrix of this form is invertible, and its inverse is again of this form. Therefore, the matrices of this form are a field. In fact, this is exactly the field of complex numbers. Every such matrix can be written as the combination a·I + b·J, which suggests that we should identify the real number 1 with the identity matrix
I = ( 1  0 )
    ( 0  1 )
and the imaginary unit i with
J = ( 0  −1 )
    ( 1   0 ),
a counter-clockwise rotation by 90 degrees. Note that the square of this latter matrix is indeed equal to the 2×2 matrix that represents −1.
The square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix.
If the matrix is viewed as a transformation of the plane, then the transformation rotates points through an angle equal to the argument of the complex number and scales by a factor equal to the complex number's absolute value. The conjugate of the complex number z corresponds to the transformation which rotates through the same angle as z but in the opposite direction, and scales in the same manner as z; this can be represented by the transpose of the matrix corresponding to z.
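The matrix picture can be verified numerically: mapping a + bi to the 2×2 matrix above turns complex multiplication into matrix multiplication and the determinant into |z|^2. A small sketch without any matrix library; the helper names are illustrative.

```python
def as_matrix(a, b):
    """Represent a + bi as the 2x2 matrix [[a, -b], [b, a]]."""
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    """Multiply two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z = complex(3, 2)
w = complex(1, -1)
M = mat_mul(as_matrix(z.real, z.imag), as_matrix(w.real, w.imag))
print(M[0][0], M[1][0])            # real and imaginary parts of z * w (5 and -1)
print(z * w)                        # (5-1j), for comparison

Mz = as_matrix(z.real, z.imag)
det = Mz[0][0] * Mz[1][1] - Mz[0][1] * Mz[1][0]
print(det, abs(z) ** 2)             # determinant equals |z|^2 (both 13, up to rounding)
```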
If the matrix elements are themselves complex numbers, the resulting algebra is that of the quaternions. In other words, this matrix representation is one way of expressing the Cayley-Dickson construction of algebras. |
Antarctic meteorite, any of a large group of meteorites that have been collected in Antarctica, first by Japanese expeditions and subsequently by U.S. and European teams since the discovery of meteorite concentrations there in 1969. Although meteorites fall more or less uniformly over Earth’s surface, many that fall in Antarctica are frozen into its ice sheets, which slowly flow from the centre of the continent toward its edges. In some places, patches of ice become stranded behind mountain peaks and are forced to flow upward. These stagnant patches are eroded by strong winds, thereby exposing and concentrating meteorites on the ice surface. Such areas, called blue ice for their colour, have over just a few decades provided more than 35,000 individual meteorites ranging in size from thumbnail to basketball. Although many meteorites are paired (parts of the same original fall), the Antarctic collection still represents several thousand new samples, which is comparable to the total number of catalogued meteorites that were collected elsewhere over the past several centuries.
Because large concentrations of Antarctic meteorites occur within small areas, the traditional geographic naming system used for meteorites is not applicable. Rather, they are identified by an abbreviated name of some local landmark plus a number that identifies the year of recovery and the specific sample. For example, the meteorite ALHA81005 was found in the Allan Hills region in 1981 and is the fifth sample recovered.
Antarctic meteorites have provided additional specimens of poorly represented meteorite types and of a few types that were previously unknown. Meteorites from the Moon were first recognized in Antarctica, and most lunar and many Martian meteorites have been collected there. Antarctic meteorites have spent times on Earth that range from a few thousand to about a million years. They thus provide insight into the kinds and abundances of meteorites that fell to Earth before recorded history.