Heavy ion collisions at CERN should be able to produce the shortest light pulses ever created. This was demonstrated by computer simulations at the Vienna University of Technology. The pulses are so short that they cannot even be measured by today's technological equipment. Now a method has been proposed to create the world's most precise stopwatch for the world's shortest light pulses, using a detector which is going to be installed at CERN in 2018.

Small, Short and Hot

Phenomena taking place on very short time scales are often investigated using ultrashort laser pulses. Today, pulse durations of the order of attoseconds (billionths of a billionth of a second, 10^-18 seconds) can be created. But these records could soon be broken: "Atomic nuclei in particle colliders like the LHC at CERN or RHIC can create light pulses which are a million times shorter still", says Andreas Ipp from TU Vienna.

In the ALICE experiment at CERN, lead nuclei are collided at almost the speed of light. The debris of the scattered nuclei, together with new particles created by the energy of the impact, forms a quark-gluon plasma, a state of matter which is so hot that even protons and neutrons melt. Their building blocks, quarks and gluons, can move independently without being bound to each other. This quark-gluon plasma exists for only several yoctoseconds (10^-24 seconds).

Ideas From Astronomy

The quark-gluon plasma created in a particle collider can emit light pulses which carry valuable information about the plasma. However, conventional measurement techniques are much too slow to resolve flashes on a yoctosecond timescale. "That's why we make use of the Hanbury Brown-Twiss effect, an idea which was originally developed for astronomical measurements", says Andreas Ipp. In a Hanbury Brown-Twiss experiment, correlations between two different light detectors are studied. That way, the diameter of a star can be calculated very precisely. "Instead of studying spatial distances, the effect can just as well be used for measuring time intervals", says Ipp. The calculations he did together with Peter Somkuti show that the yoctosecond pulses of the quark-gluon plasma could be resolved by a Hanbury Brown-Twiss experiment. "It would be hard to do, but it would definitely be achievable", says Ipp. The experiment would not require any additional expensive detectors; it could be done with the "forward calorimeter", which is supposed to go online at CERN in 2018. That way, the ALICE experiment could become the world's most accurate stopwatch.

The Enigmas of the Plasma

There are still many open questions in quark-gluon plasma physics. It has an extraordinarily low viscosity, lower than that of any liquid we know. Even if it starts out in a state of extreme disequilibrium, it reaches thermal equilibrium extremely fast. Studying the light pulses from the quark-gluon plasma could yield valuable new information to better understand this state of matter.

In the future, the light pulses could perhaps even be used for nuclear research. "Experiments using two light pulses are often used in quantum physics", says Andreas Ipp. "The first pulse changes the state of the object under investigation, and a second pulse is used shortly after that to measure the change." With yoctosecond light pulses, this well-established approach could be applied in areas which up until now have been completely inaccessible to this kind of research.
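To make the timing idea concrete, here is a minimal numerical sketch of a time-domain two-detector correlation (our illustration, not the TU Vienna calculation): photon arrival times from a pulse of unknown duration are recorded at two detectors, and the spread of the pairwise arrival-time differences recovers the pulse duration. All numbers are invented for illustration.

```python
import numpy as np

# Toy two-detector timing correlation: photons from a Gaussian pulse of
# (pretend-unknown) duration SIGMA hit detectors A and B independently.
rng = np.random.default_rng(0)
SIGMA = 2e-24            # assumed pulse duration: 2 yoctoseconds
N = 100_000              # detected photon pairs

t_a = rng.normal(0.0, SIGMA, N)   # arrival times at detector A
t_b = rng.normal(0.0, SIGMA, N)   # arrival times at detector B
dt = t_a - t_b                    # pairwise arrival-time differences

# For a Gaussian pulse the differences spread with width sqrt(2)*SIGMA,
# so the pulse duration is recovered as std(dt)/sqrt(2).
sigma_est = dt.std() / np.sqrt(2)
print(f"true duration {SIGMA:.2e} s, estimated {sigma_est:.2e} s")
```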
Contact: Dr. Andreas Ipp, Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstr. 8-10, 1040 Vienna, T: +43 1 58801 13635
Florian Aigner | Source: EurekAlert! | Further information: www.tuwien.ac.at
Grade Level: 5 | Interest Level: N/A | Reading Level: N/A

This book provides the tools necessary to capture the wonder and fun of mathematics while helping teachers and parents instruct the Common Core Mathematics Standards in a manageable way. This book focuses on and connects to the Standards for Mathematical Content and Standards for Mathematical Practice, including: making sense of problems and persevering in solving them, modeling with mathematics, and using appropriate tools strategically. Featuring: a chart to monitor progress toward learning goal success; pre- & post-assessments for every Common Core Standards domain; a problem set for every Common Core Standard; authentic challenge projects with real-world and technology integration; a detailed answer key. 80 pages.

Table of Contents

5.OA.A.1. Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.
5.OA.A.2. Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them.
5.OA.B.3. Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and graph the ordered pairs on a coordinate plane.

Understand the place value system
5.NBT.A.1. Recognize that in a multi-digit number, a digit in one place represents 10 times as much as it represents in the place to its right and 1/10 of what it represents in the place to its left.
5.NBT.A.2. Explain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10.
5.NBT.A.3. Read, write, and compare decimals to thousandths. a. Read and write decimals to thousandths using base-ten numerals, number names, and expanded form. b. Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
5.NBT.A.4. Use place value understanding to round decimals to any place.
5.NBT.B.5. Fluently multiply multi-digit whole numbers using the standard algorithm.
5.NBT.B.6. Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
5.NBT.B.7. Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.

Use equivalent fractions as a strategy to add and subtract fractions
5.NF.A.1. Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators.
5.NF.A.2. Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers.
5.NF.B.3. Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers.
5.NF.B.4. Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction. a. Interpret the product (a/b) × q as a parts of a partition of q into b equal parts; equivalently, as the result of a sequence of operations a × q ÷ b. b. Find the area of a rectangle with fractional side lengths by tiling it with unit squares of the appropriate unit fraction side lengths, and show that the area is the same as would be found by multiplying the side lengths. Multiply fractional side lengths to find areas of rectangles, and represent fraction products as rectangular areas.
5.NF.B.5. Interpret multiplication as scaling (resizing), by: a. Comparing the size of a product to the size of one factor on the basis of the size of the other factor, without performing the indicated multiplication. b. Explaining why multiplying a given number by a fraction greater than 1 results in a product greater than the given number (recognizing multiplication by whole numbers greater than 1 as a familiar case); explaining why multiplying a given number by a fraction less than 1 results in a product smaller than the given number; and relating the principle of fraction equivalence a/b = (n × a)/(n × b) to the effect of multiplying a/b by 1.
5.NF.B.6. Solve real world problems involving multiplication of fractions and mixed numbers.
5.NF.B.7. Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions. a. Interpret division of a unit fraction by a non-zero whole number, and compute such quotients. Interpret division of a whole number by a unit fraction, and compute such quotients. b. Solve real world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions.

Convert like measurement units within a given measurement system
5.MD.A.1. Convert among different-sized standard measurement units within a given measurement system, and use these conversions in solving multi-step, real world problems.
5.MD.B.2. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented in line plots.
5.MD.C.3. Recognize volume as an attribute of solid figures and understand concepts of volume measurement. a. A cube with side length 1 unit, called a "unit cube," is said to have "one cubic unit" of volume, and can be used to measure volume. b. A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.
5.MD.C.4. Measure volumes by counting unit cubes, using cubic cm, cubic in, cubic ft, and improvised units.
5.MD.C.5. Relate volume to the operations of multiplication and addition and solve real world and mathematical problems involving volume. a. Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication.
b. Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems. c. Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real world problems.

Graph points on the coordinate plane to solve real-world and mathematical problems
5.G.A.1. Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given point in the plane located by using an ordered pair of numbers, called its coordinates. Understand that the first number indicates how far to travel from the origin in the direction of one axis, and the second number indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond.
5.G.A.2. Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of the situation.

Classify two-dimensional figures into categories based on their properties
5.G.B.3. Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles.
5.G.B.4. Classify two-dimensional figures in a hierarchy based on properties.

Authentic Challenge Projects
Project #1: "Basketball Scout"
Project #2: "Yikes! 500 People Are Coming to Dinner!"
Project #3: "How Much Water Flows Through a Creek?"

- Stock: In Stock
- Model: REM GP228
- ISBN: 9781930820289
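As a quick illustration of the volume standards above (our example, not taken from the book), an L-shaped solid can be split into two non-overlapping right rectangular prisms and its volume found by adding the parts, as in 5.MD.C.5c:

```latex
% Illustrative example for 5.MD.C.5 (invented, not from the book):
% an L-shaped solid split into two non-overlapping rectangular prisms.
\[
V = V_1 + V_2 = (4 \times 3 \times 2) + (2 \times 3 \times 5)
              = 24 + 30 = 54 \ \text{cubic units}
\]
```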
Some of the biggest cosmic mysteries are shrouded in proverbial darkness. For instance, most of the Universe's mass is unaccounted for: all of the stuff we can directly observe in the Universe with our telescopes amounts to only 4.6 percent of its mass content. The rest is invisible and of a completely unknown nature. Scientists have been trying to figure out the nature of this "missing" cosmic mass for nearly a century, utilising various ground-, aerial-, and space-based instruments, like the Alpha Magnetic Spectrometer, or AMS-02, a particle physics detector which is mounted on the exterior of the International Space Station. Even though a recently published study based on new AMS data showcases the solid science being conducted on the orbiting laboratory, it nevertheless presents no firm evidence for the existence of dark matter yet.

"Dark matter" was introduced as a term by astronomers in the early 1930s to account for the strange motions they were observing inside other galaxies. While studying how stars and interstellar gas move within galaxies and galaxy clusters, astronomers soon realised that the velocities they were observing were too high for the structures of galaxies to remain stable. If all the matter being observed and accounted for was all there was, then galaxies should have flown apart, their contents hurled into the intergalactic void. Either there was something flawed with the current understanding of how gravity worked on cosmic scales, or there was something else whose gravity was holding galaxies together, something that was evading observations. Thus the term "dark matter" was born.

According to some theoretical models, the particles that constitute dark matter are their own anti-particles, meaning that if they met and collided they would annihilate, producing a stream of secondary particles including positrons (the anti-matter counterparts of electrons) in the process. If this phenomenon is indeed taking place in the Universe, these byproducts of dark matter annihilation could be detected through the study of cosmic rays, the constant flow of high-energy radiation that permeates the Universe, coming from every direction of the sky.

Even though cosmic rays have been the object of extensive study ever since they were first discovered in the early 20th century, their exact origin still remains a mystery. They are mostly composed of hydrogen and helium nuclei, and they are believed to originate from high-energy astrophysical phenomena such as supernova explosions, colliding black holes, rapidly spinning neutron stars, and gamma-ray bursts. Yet a tiny fraction of cosmic rays (about 0.01 percent) consists of anti-matter particles like positrons. Theoretical models predict that the annihilation of dark matter particles would, among other things, cause an excess of positrons with a specific spectrum that would be distinguishable from that of positrons originating from well-known astrophysical sources like pulsars and supernovae. Determining the ratio of positrons to the combined flux of positrons and electrons in high-energy cosmic rays could allow researchers to determine their exact source and whether that involves well-known astrophysical processes or the long-sought and elusive dark matter. The measurement of high-energy cosmic rays and the hunt for dark matter are among the key objectives of the Alpha Magnetic Spectrometer.
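To make the measured quantity concrete: the positron fraction is simply the positron count divided by the combined electron-plus-positron count in each energy bin. A minimal sketch with invented counts (not AMS data):

```python
import numpy as np

# Minimal sketch of the "positron fraction" described above:
# fraction = N(e+) / (N(e+) + N(e-)) per energy bin.
# Bin edges and counts below are invented for illustration.
bin_edges_gev = np.array([0.5, 1, 10, 100, 500])
n_positrons = np.array([1200, 9000, 4000, 600])
n_electrons = np.array([14000, 90000, 30000, 3000])

fraction = n_positrons / (n_positrons + n_electrons)
for lo, hi, f in zip(bin_edges_gev[:-1], bin_edges_gev[1:], fraction):
    print(f"{lo:6.1f}-{hi:6.1f} GeV: positron fraction = {f:.3f}")
```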
Launched on STS-134, the space shuttle's penultimate mission, on 16 May 2011, AMS-02 was installed three days later on the S3 truss of the orbiting laboratory, from where it has been operating constantly ever since, recording more than 1,000 cosmic ray hits per second. At the heart of AMS-02 lies a large magnet which creates a strong, uniform magnetic field that bends the paths of incoming charged cosmic ray particles, diverting them according to the nature of each particle. A series of five different detectors measures the particles' properties, determining their energy, velocity, charge, and coordinates inside the magnetic field, allowing the AMS to tell the difference between protons, electrons, and positrons.

The construction and operation of the state-of-the-art, 7.5-tonne orbiting particle physics detector represents a major technological achievement and is the result of an international collaboration between 60 institutes from 16 different countries in Europe, North America, and Asia. Funded by the U.S. Department of Energy, the project is managed by the Alpha Magnetic Spectrometer Project Office at NASA's Johnson Space Center in Houston. The roots of the project trace back to the early 1990s when, following the cancellation of the Superconducting Super Collider in 1993, Dr. Sam Ting, Nobel Laureate and principal investigator of the Alpha Magnetic Spectrometer project, proposed the construction of a simplified particle physics detector to be flown in space in order to advance research in the field of particle physics. The proposal was finally approved, leading to the launch of a prototype version, called AMS-01, onboard the Space Shuttle Discovery on STS-91 between 2 and 12 June 1998. Even though it came up empty in its search for antihelium cosmic ray particles in the energy range between 1 and 140 GeV, it nevertheless served as a proof of concept, being the first physics experiment of its kind to ever fly and successfully operate in space, and it allowed researchers to put an upper limit on the antihelium-to-helium flux ratio predicted by theory.

Building on the foundation laid by this pioneering work, AMS-02 has, since it was first activated in May 2011, recorded more than 54 billion cosmic ray hits, providing the project's scientists at CERN in Geneva, Switzerland, with data of unprecedented precision at a rate of approximately 1 GB per second. This huge amount of data allowed the AMS science team to announce its first results in March 2013 during a seminar at CERN; the results were also published in a study that appeared in Physical Review Letters in April 2013. Following an analysis of approximately 25 billion cosmic ray hits recorded by the AMS during its first 18 months of operations, scientists reported at the time that they had identified 6.8 million electron-positron hits in the energy range between 0.5 and 350 GeV (for comparison, the rest mass of the proton is 0.938 GeV), with an excess of approximately 400,000 positrons. By studying the energy spectrum of the positrons, scientists noticed that their ratio to the combined flux of observed positrons and electrons decreased at energies between 0.5 and 10 GeV, only to rise again between 10 and 250 GeV, with the rate of increase slowing toward the upper end of that range, confirming similar earlier observations by the Fermi and PAMELA space observatories.
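The charge separation mentioned above rests on elementary magnetostatics: a particle of momentum p and charge q in a field B follows a circle of radius r = p/(|q|B), with the sign of q fixing the bend direction. A back-of-envelope sketch (the field value is our assumption, not the AMS-02 specification):

```python
# Back-of-envelope for how a spectrometer magnet separates charges:
# radius of curvature r = p / (|q| B); the sign of q sets the bend direction.
# The 0.15 T field is an illustrative number, not the AMS-02 specification.
E_GEV = 10.0                 # particle energy (ultrarelativistic, E ~ pc)
B_TESLA = 0.15               # assumed field strength
E_CHARGE = 1.602176634e-19   # elementary charge, C
C_LIGHT = 299_792_458.0      # speed of light, m/s

p_si = E_GEV * 1e9 * E_CHARGE / C_LIGHT   # momentum in kg*m/s
r = p_si / (E_CHARGE * B_TESLA)           # bend radius in metres
print(f"A {E_GEV:.0f} GeV electron or positron bends with r = {r:.1f} m")
# Electrons and positrons of the same energy trace arcs of the same radius
# but opposite curvature; over a ~1 m lever arm the sagitta L**2/(8*r) is
# a fraction of a millimetre, small but measurable.
```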
According to theory, above certain energies dark matter would annihilate without producing any secondary particles, resulting in a steep decline in the observed positron flux. Even though these first results from the AMS did not show such a decline, they were consistent with some theoretical predictions for a certain dark matter particle called the neutralino, and they were consequently hyped as evidence for the latter's existence. Yet these results could only be described as ambiguous at best. If the recorded positron excess were due to dark matter, it should show a sudden drop beyond 250 GeV, and the positron flux should display a certain directionality toward the center of the Milky Way, around which most of the galaxy's dark matter is believed to reside. The AMS data instead showed that the detected positrons were coming from every direction of the sky. All this indicated that the positron excess could well be due to more mundane astrophysical sources. Clearly, more data were required before a more definite statement on the matter could be made with any certainty.

Following up on their initial findings, the AMS science team presented their latest results in a new study that appeared on 18 September in Physical Review Letters. In the study, the researchers report their analysis of 41 billion cosmic ray hits, out of the total of 54 billion that the space-based particle physics detector has recorded during more than 40 months of operations, of which 10 million were identified as electron-positron hits. This new wealth of data also allowed researchers to extend the upper end of the observed energy range from 350 to 500 GeV, giving them more insight into the particle physics processes of cosmic rays at ever-higher energies. The new AMS results confirmed the previous ones, showing a distinct increase of the positron fraction beginning around 8 GeV up to a maximum around 275 GeV, beyond which the fraction stops rising. According to the AMS science team, this pattern is representative of a new astrophysical phenomenon that could potentially be the elusive annihilation of dark matter. "This means there's something new there," commented Dr. Ting for Symmetry magazine. "It's totally unexpected." Furthermore, the new results represent the first-ever detailed measurement of the positron fraction maximum at high energies, where the fraction ceases to increase. "Scientists have been measuring this ratio since 1964," says Jim Siegrist, associate director of the U.S. Department of Energy's Office of High-Energy Physics, which sponsored the AMS project. "This is the first time anyone has observed this turning point."

Nevertheless, the AMS still has not recorded the sudden drop-off at higher energies that is predicted by many theoretical models of dark matter. Even though the positron fraction peaks around 275 GeV, it then appears to level off without any subsequent drop-off being evident. There is nothing to preclude that from happening at even higher energies; yet the evidence gathered by the AMS so far in favor of dark matter could be seen as inconclusive at best, rendering any claims for its existence premature. The focus of the researchers now is to collect more data on the proton-antiproton ratio and the positron fraction in the TeV energy range, which would help to indicate whether the predicted positron drop-off indeed happens at higher energies and whether it is tied to dark matter at all.
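To illustrate what a dark-matter-style drop-off would look like against a simple levelling-off, here are two invented positron-fraction shapes that agree below about 275 GeV (a toy comparison of ours, not an AMS fit):

```python
import numpy as np

# Toy illustration (invented numbers, not an AMS fit): two hypothetical
# positron-fraction shapes that agree up to ~275 GeV, one levelling off
# (pulsar-like) and one with the sharp cutoff a dark-matter annihilation
# model would predict.
E = np.logspace(1, 3, 9)                        # 10 GeV .. 1 TeV
rise = 0.05 + 0.10 * (1 - np.exp(-E / 120.0))   # common rising part
level_off = rise                                # no cutoff: stays flat
cutoff = rise * np.exp(-np.clip(E - 275.0, 0, None) / 60.0)  # drop past 275 GeV

for e, a, b in zip(E, level_off, cutoff):
    print(f"{e:7.1f} GeV  level-off: {a:.3f}  DM-style cutoff: {b:.3f}")
```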
"The new AMS results show unambiguously that a new source of positrons is active in the galaxy," said Dr. Paolo Zuccon, an assistant professor of physics at the Massachusetts Institute of Technology, in a press release. "We do not know yet if these positrons are coming from dark matter collisions, or from astrophysical sources such as pulsars. But measurements are underway by AMS that may discriminate between the two hypotheses."

Despite their ambiguity regarding the potential existence of dark matter, the AMS results have nevertheless showcased the detector's superior ability to conduct first-rate research on high-energy cosmic rays, a key area of study in astrophysics. "The AMS results announced today are tremendously provocative, and will drive scientists around the world to continue pursuing one of the biggest mysteries in the cosmos: dark matter," commented Dr. Ellen Stofan, NASA's chief scientist in Washington, D.C., following the publication of the project's science team on 18 September. "The clear and definitive data from AMS represent the caliber of scientific discovery enabled by our unique laboratory in space, the International Space Station. Today we are one step closer to answering the fundamental questions about how our universe works, and we look forward to many more exciting twists in this developing story."

Video Credit: NASA
Amino acids are the building blocks of proteins. There are 20 amino acids that most commonly occur in proteins. Based on the functional group making up their side chain, or R group, amino acids are classified as acidic, basic, or neutral. The physical and chemical properties of the R group determine the unique characteristics of each amino acid.

Acidic amino acids have acidic R groups. Their electrically charged R groups make these molecules highly soluble in water. (GLUTAMIC ACID) Acidic R groups contain a carboxylic acid functional group, -COOH.

Basic amino acids have basic R groups. Their electrically charged R groups make these molecules highly soluble in water. (ARGININE, LYSINE) Basic R groups contain an amino (not amide) functional group, -NH2, which attracts a proton to form -NH3+.

Neutral (neither acidic nor basic) amino acids can be further classified as nonpolar or polar. The neutral nonpolar amino acids have R groups that contain no charged atoms; most of these amino acids are not water soluble. The neutral polar amino acids have R groups with a dipole moment; the partial charges in their R groups make these molecules generally water soluble. Neutral polar R groups are neither acidic nor basic, but they contain a highly electronegative atom such as oxygen, nitrogen, or sulfur. (ASPARAGINE, CYSTEINE, GLUTAMINE, SERINE, THREONINE, TYROSINE) Neutral nonpolar R groups contain mostly carbon and hydrogen (alkyl groups). They may also contain nitrogen or sulfur, but the effect of those atoms is diminished by the size of the alkyl portion. (ALANINE, PHENYLALANINE, METHIONINE, PROLINE, VALINE, TRYPTOPHAN)

It is important for you to understand how amino acids are classified, rather than just looking up the answers to this tutorial in your book. The hints provided here will teach you how to figure out the classifications without looking them up. That way you won't have to memorize them when you are tested on this material.

Protein structure is conceptually divided into four levels, from most basic to higher order: Primary structure describes the order of amino acids in the peptide chain. Secondary structure describes the basic three-dimensional structures, α-helices and β-sheets. Tertiary structure describes how the secondary structures come together to form an individual globular protein. Quaternary structure results from individual proteins coming together to form multi-subunit protein complexes.

PRIMARY: the sequence of amino acids in the protein.
SECONDARY: the alpha helices and beta sheets formed by hydrogen bonding between backbone atoms located near each other in the protein.
TERTIARY: the protein folded into a 3D shape stabilized by interactions between side-chain R groups.
QUATERNARY: the result of 2+ protein subunits assembling to form a larger, biologically active protein complex.

Most proteins are folded into a complex globular shape. Each protein molecule consists of one or more chains of amino acid monomers. The amino acids are linked by peptide bonds, so a protein polymer is often called a polypeptide. Because they are so complicated, proteins are usually described in terms of four levels of structure. Each protein has a unique primary structure: a particular number and sequence of amino acids making up the polypeptide chain. Twenty different amino acids are used to build proteins. Theoretically, the various amino acids could be linked in almost any sequence, forming an almost infinite variety of different proteins.
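The classification scheme described above can be captured in a small lookup table; here is a minimal sketch covering only the amino acids this tutorial names:

```python
# Classification lookup assembled from the tutorial's own groupings
# (only the amino acids the text names; the rest are omitted here).
CLASSIFICATION = {
    "acidic": ["glutamic acid"],
    "basic": ["arginine", "lysine"],
    "neutral polar": ["asparagine", "cysteine", "glutamine",
                      "serine", "threonine", "tyrosine"],
    "neutral nonpolar": ["alanine", "phenylalanine", "methionine",
                         "proline", "valine", "tryptophan"],
}

def classify(amino_acid: str) -> str:
    """Return the side-chain class for an amino acid named in the text."""
    name = amino_acid.lower()
    for category, members in CLASSIFICATION.items():
        if name in members:
            return category
    raise KeyError(f"{amino_acid!r} is not covered by this tutorial's list")

print(classify("lysine"))   # -> basic
print(classify("serine"))   # -> neutral polar
```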
Secondary structure results from hydrogen bonding between atoms along the polypeptide backbone. Oxygen and nitrogen atoms along the backbone are highly electronegative, giving them partial negative charges and leaving nearby hydrogen atoms with partial positive charges. These negatively and positively charged atoms are attracted to one another at regular intervals along the chain, causing parts of the protein to twist or fold back upon itself. In most proteins, parts of the polypeptide chain are coiled or folded, forming twists and corrugations. This is secondary structure. The turns and folds of secondary structure contribute to the protein's overall shape. One kind of secondary structure is the alpha helix, where the chain twists. Another is the pleated sheet, where the chain folds back on itself or where two regions of the chain lie parallel to one another.

Superimposed on primary and secondary structure is tertiary structure: irregular loops and folds that give the protein its overall three-dimensional shape. Some proteins consist of two or more polypeptide chains. The fourth level of protein structure, quaternary structure, results from the combination of two or more polypeptide subunits.

Proteins are the most complicated molecules known. A cell contains thousands of kinds of proteins, which carry out a variety of functions. In most cases, a protein's function depends on its complex three-dimensional structure; note how function depends on protein shape and changes in shape.

STRUCTURAL PROTEINS have many functions. Like tent poles and ropes, they shape cells and anchor cell parts. They may serve as tracks along which cell parts can move. They bind cells together, making organized units such as muscles, ligaments, and the tendons that bind muscles to bones. The silk of spiders and the hair of mammals are also structural proteins.

SIGNAL PROTEINS include hormonal proteins that help coordinate an organism's activities by acting as signals between cells. For example, insulin, a hormonal protein secreted by the pancreas, signals an animal's cells to take in and use sugar. The hormone receptor is also a protein.

RECEPTOR MOLECULES bind to signal molecules and can then emit second messengers which trigger changes inside a cell. Receptors are thus important links in the system of communication among cells. Some signal molecules, such as hormones, are also proteins.

TRANSPORT PROTEINS carry molecules from place to place. Some allow certain solute molecules to enter the cell. Hemoglobin is the transport protein that carries oxygen in the blood.

SENSORY PROTEINS detect environmental changes such as light, and respond by emitting or producing signals that call for a response.

GENE REGULATORY PROTEINS bind to DNA in particular locations and control whether or not certain genes will be read. This allows cells to become specialized for different functions and respond to changes in their surroundings.
An ENZYME is a protein that changes the rate of a chemical reaction without itself being changed into a different molecule in the process. Enzymes promote and regulate virtually all chemical reactions in cells. The immune system makes defensive proteins called antibodies that bind to invaders (such as viruses) and mark the foreign objects for destruction.

DNA is built from two twisted polymer strands. Cells make nucleic acid polymers by linking together four kinds of monomers called nucleotides. Each nucleotide consists of a sugar (deoxyribose in DNA), a phosphate group, and a nitrogen-containing base, abbreviated G, A, C, or T. Like letters in a sentence, the sequence of nucleotides in a nucleic acid carries information. The DNA of every organism has a unique nucleotide sequence, and a DNA molecule may be millions of nucleotides in length. DNA normally consists of two strands of nucleotides that twist around one another, forming the famous double helix. The strands are held together by hydrogen bonds between pairs of nitrogenous bases. The base A always pairs with T, and C always pairs with G.

RNA looks a lot like DNA, except it is typically single-stranded, contains a different sugar (called ribose), and has the base uracil (U) instead of thymine (T). RNA is copied from part of a DNA molecule, so it is shorter than DNA: dozens to thousands of nucleotides.

Question: If a DNA double helix is 100 nucleotide pairs long and contains 25 adenine bases, how many guanine bases does it contain? Answer: 75. (100 nucleotide pairs are a total of 200 nucleotides. Because of base pairing, if there are 25 adenine there must also be 25 thymine. This leaves 200 − 50 = 150 nucleotides to be divided evenly between guanine and cytosine.)

A NUCLEOTIDE is the building block of a nucleic acid, consisting of a five-carbon sugar covalently bonded to a nitrogenous base and a phosphate group. The nucleic acids DNA and RNA are made from chains of nucleotides. Nucleotides consist of three components: a five-carbon sugar (either ribose or deoxyribose), a nitrogenous base attached to the sugar's 1' carbon, and a phosphate group attached to the sugar's 5' carbon. Deoxyribose has one less O atom than ribose.

Identify three possible components of a DNA nucleotide: deoxyribose, a phosphate group, thymine.
DNA only: deoxyribose, thymine. BOTH: adenine, cytosine, phosphate, guanine. RNA only: ribose, uracil.

DNA is used for storage of genetic information. The presence of deoxyribose as the sugar in DNA makes the molecule more stable and less susceptible to hydrolysis. The 2'-oxygen on the ribose found in RNA makes RNA much more susceptible to breakdown. It is important that mRNA be easily broken down, to ensure that the correct levels of protein are maintained in the cell. DNA, or deoxyribonucleic acid, contains the genetic information that is used by all living things to produce the biomolecules essential for life. DNA is a double helix, with two strands. The two strands are held together by hydrogen bonds between complementary nitrogenous bases. The two strands are always complementary, ensuring that the DNA can be replicated accurately. The two complementary DNA strands always run in opposite directions: one runs from 5' to 3', and the other runs from 3' to 5'.
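The base-pairing arithmetic from the worked question above generalizes directly; a minimal sketch:

```python
def guanine_count(n_base_pairs: int, n_adenine: int) -> int:
    """Count guanines in a double helix using A-T and G-C pairing,
    as in the worked question above (100 pairs, 25 adenines -> 75)."""
    total_nucleotides = 2 * n_base_pairs
    # Every adenine pairs with exactly one thymine.
    at_nucleotides = 2 * n_adenine
    # The remainder is split evenly between guanine and cytosine.
    return (total_nucleotides - at_nucleotides) // 2

assert guanine_count(100, 25) == 75
print(guanine_count(100, 25))  # -> 75
```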
INVERSE TRIGONOMETRIC FUNCTIONS

THE ANGLES in calculus will be in radian measure. Thus if we are given a radian angle, then we can evaluate a function of it. Inversely, if we are given that the value of the sine function is ½, then the challenge is to name the radian angle x such that sin x = ½. "The sine of what angle is equal to ½?" We write: x = arcsin ½, "the angle whose sine is ½."

y = arcsin x is called the inverse of the function y = sin x. arcsin x is the angle whose sine is the number x. Strictly, arcsin x is the arc whose sine is x, because in the unit circle the length of the arc is the radian measure. (Topic 15.)

Now there are many angles whose sine is ½. It will be any angle whose corresponding acute angle is π/6. Therefore we must restrict the range of y = arcsin x -- the values of that angle -- so that it will in fact be a function, that is, single-valued. How will we do that? We will restrict them to those angles that have the smallest absolute value. They are called the principal values of y = arcsin x.

arcsin ½ = π/6.

The first quadrant angle π/6 is the angle with the smallest absolute value whose sine is ½.

Example 1. Evaluate arcsin (−½).

Solution. Angles whose sines are negative fall in the 3rd and 4th quadrants. The angle of smallest absolute value falls in the 4th quadrant, between 0 and −π/2. The angle whose sine is −x is simply the negative of the angle whose sine is x:

arcsin (−½) = −arcsin (½) = −π/6.

The range, then, of the function y = arcsin x will be angles that fall in the 1st and 4th quadrants, between −π/2 and π/2. Angles whose sines are positive will be 1st quadrant angles. Angles whose sines are negative will fall in the 4th quadrant. To restrict the range of arcsin x is equivalent to restricting the domain of sin x to those same values. This will be the case with all the restricted ranges that follow.

sin−1x. The inverse sine. Another notation for arcsin x is sin−1x. Read: "the inverse sine of x." The −1 here is not an exponent. (See Topic 19 of Precalculus.)

Problem 1. Evaluate the following in radians.
a) arcsin 0 = 0. (Topic 15.)
b) arcsin 1 = π/2. (Topic 15.)
c) sin−1 (−1) = −π/2. (Topic 15.)

Corresponding to each trigonometric function, there is its inverse function. In each one, we are given the value x of the trigonometric function, and we are to name the radian angle that has that value. In each case, we must restrict the range so that the function will be single-valued.

The range of y = arctan x

Like y = arcsin x, y = arctan x has its smallest absolute values in the 1st and 4th quadrants. Note that y -- the angle whose tangent is x -- must be greater than −π/2 and less than π/2, for at those quadrantal angles the tangent does not exist. Angles whose tangents are positive will be 1st quadrant angles. Angles whose tangents are negative will fall in the 4th quadrant. That is exactly the same as with arcsin (−x): the angle whose tangent is −x is simply the negative of the angle whose tangent is x.

Problem 2. Evaluate the following.

The range of y = arccos x

Example 2. Evaluate arccos ½.

Solution. arccos ½ = π/3, the 1st quadrant angle whose cosine is ½.

Problem 3. Why is this not true? arccos (−½) = −π/3.

−π/3 is a 4th quadrant angle, and in the 4th quadrant the cosine is positive. An angle whose cosine is negative will fall in the 2nd quadrant, where it will have its smallest absolute value. (Topic 15.) In other words, the angle whose cosine is −x is the supplement of the angle whose cosine is x:

arccos (−x) = π − arccos x.

Example 3. Evaluate arccos (−½).

Solution. We have seen: arccos ½ = π/3.
Therefore, arccos (−½) is the supplement of π/3 -- the angle θ that we must add to π/3 to equal π: π/3 + θ = π. Now, π/3 is one-third of π. Therefore, its supplement θ will be two-thirds of π: 2π/3.

arccos (−½) = 2π/3.

The range, then, of y = arccos x will be from 0 to π. An angle whose cosine is positive will be a 1st quadrant angle; an angle whose cosine is negative will fall in the 2nd quadrant. It will be the supplement of the 1st quadrant angle.

Problem 4. Evaluate the following.

The inverse relations

If we put f(x) = sin x and g(x) = arcsin x, then according to the definition of inverse functions: f(g(x)) = x and g(f(x)) = x. That is, sin (arcsin x) = x, and arcsin (sin x) = x. arcsin x = θ if and only if x = sin θ.

We have taken the inverse function -- the sine -- of both sides of the equation on the left. We have extracted the argument x. This is in general the case.

a) arctan t = β if and only if t = tan β.
b) arcsec u = α if and only if u = sec α.
c) arccos 1 = 0 if and only if 1 = cos 0.

This principle enables us to solve many trigonometric equations.

Example 4. Solve for x. Solution. By taking the inverse function -- the sine -- of both sides, we can free the argument x − 1, and we write the solution immediately.

Problem 6. Solve for x: tan (x + 2) = 1.

Problem 7. Solve for x: cos x² = −1. Solution. x² = arccos (−1) = π, so x = ±√π.

Problem 8. Solve for x.

The range of y = arcsec x

In calculus, sin−1x, tan−1x, and cos−1x are the most important inverse trigonometric functions. Nevertheless, here are the ranges that make the rest single-valued. If x is positive, then the value of the inverse function is always a first quadrant angle, or 0. If x is negative, the value of the inverse will fall in the quadrant in which the direct function is negative. Thus if x is negative, arcsec x will fall in the 2nd quadrant, because that is where sec x is negative. The only inverse function below in which x may be 0 is arccot x: arccot 0 = π/2. Again, we restrict the values of y to those angles that have the smallest absolute value.

If y = arcsec x, then the product sec y tan y is never negative. For, if y = arcsec x, then the angle y falls either in the first or second quadrant. When angle y falls in the first quadrant, then both sec y and tan y are positive, so their product is positive. When angle y falls in the second quadrant, sec y and tan y are both negative, so that again their product is positive. If y = 0, then tan y = 0, hence the product sec y tan y is 0. Therefore, that product is never negative. (This theorem is referenced in the proof of the derivative of y = arcsec x.)
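The principal values worked out above can be checked numerically with Python's math module, whose asin and acos return exactly these principal values:

```python
import math

# Quick numerical check of the principal values worked out above.
print(math.asin(0.5), math.pi / 6)        # arcsin 1/2    = pi/6
print(math.asin(-0.5), -math.pi / 6)      # arcsin(-1/2)  = -pi/6
print(math.acos(-0.5), 2 * math.pi / 3)   # arccos(-1/2)  = 2*pi/3

# The identity arccos(-x) = pi - arccos(x), checked for a few values:
for x in (0.1, 0.5, 0.9):
    assert math.isclose(math.acos(-x), math.pi - math.acos(x))

# Problem 7: cos(x^2) = -1  =>  x = +/- sqrt(pi)
x = math.sqrt(math.pi)
assert math.isclose(math.cos(x * x), -1.0)
```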
Def. a "large white puffy cloud" is called a cumulus cloud. Cumulus clouds look white because the water droplets reflect and scatter the sunlight without absorbing other colors. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth." Theoretical clouds[edit | edit source] Def. a "visible mass of - water droplets suspended in the air ... - steam ... - smoke ... - a group or swarm" is called a cloud. Reds[edit | edit source] "[T]he extended red emission (ERE) [is] observed in many dusty astronomical environments, in particular, the diffuse interstellar medium of the Galaxy. ... silicon nanoparticles provide the best match to the spectrum and the efficiency requirement of the ERE." "The broad, 60 < FWHM < 100 nm, featureless luminescence band known as extended red emission (ERE) is seen in such diverse dusty astrophysical environments as reflection nebulae17, planetary nebulae3, HII regions (Orion)12, a Nova11, Galactic cirrus14, a dark nebula7, Galaxies8,6 and the diffuse interstellar medium (ISM)4. The band is confined between 540-950 nm, but the wavelength of peak emission varies from environment to environment, even within a given object. ... the wavelength of peak emission is longer and the efficiency of the luminescence is lower, the harder and denser the illuminating radiation field is13. These general characteristics of ERE constrain the photoluminescence (PL) band and efficiency for laboratory analysis of dust analog materials." In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening — similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line-of-sight. "The Danish 1.54-metre telescope located at ESO’s La Silla Observatory in Chile has captured a striking image of NGC 6559, an object that showcases the anarchy that reigns when stars form inside an interstellar cloud. This region of sky includes glowing red clouds of mostly hydrogen gas, blue regions where starlight is being reflected from tiny particles of dust and also dark regions where the dust is thick and opaque." "The blue section of the photo — representing a "reflection nebula" — shows light from the newly formed stars in the cosmic nursery being reflected in all directions by the particles of dust made of iron, carbon, silicon and other elements in the interstellar cloud." Infrareds[edit | edit source] "The glowing Trifid Nebula [in the image at right] is revealed in an infrared view from NASA's Spitzer Space Telescope. The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius." "The false-color Spitzer image reveals a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in a visible-light picture, bright regions of star-forming activity are seen in the Spitzer picture. 
Infrareds

"The glowing Trifid Nebula is revealed in an infrared view from NASA's Spitzer Space Telescope. The Trifid Nebula is a giant star-forming cloud of gas and dust located 5,400 light-years away in the constellation Sagittarius." "The false-color Spitzer image reveals a different side of the Trifid Nebula. Where dark lanes of dust are visible trisecting the nebula in a visible-light picture, bright regions of star-forming activity are seen in the Spitzer picture. All together, Spitzer uncovered 30 massive embryonic stars and 120 smaller newborn stars throughout the Trifid Nebula, in both its dark lanes and luminous clouds. These stars are visible in the Spitzer image, mainly as yellow or red spots. Embryonic stars are developing stars about to burst into existence."

"Ten of the 30 massive embryos discovered by Spitzer were found in four dark cores, or stellar "incubators," where stars are born. Astronomers using data from the Institute of Radioastronomy millimeter telescope in Spain had previously identified these cores but thought they were not quite ripe for stars. Spitzer's highly sensitive infrared eyes were able to penetrate all four cores to reveal rapidly growing embryos."

"Astronomers can actually count the individual embryos tucked inside the cores by looking closely at this Spitzer image taken by its infrared array camera (IRAC). This instrument has the highest spatial resolution of Spitzer's imaging cameras. The embryos are thought to have been triggered by a massive "type O" star, which can be seen as a white spot at the center of the nebula. Type O stars are the most massive stars, ending their brief lives in explosive supernovas. The small newborn stars probably arose at the same time as the O star, and from the same original cloud of gas and dust." "This Spitzer mosaic image uses data from IRAC showing light of 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8.0 microns (red)."

Interstellar dust can be studied by infrared spectrometry, in part because the dust is an astronomical infrared source and other infrared sources lie behind the diffuse clouds of dust. The monochromatic flux density radiated by a greybody at frequency ν through solid angle Ω is given by F_ν = B_ν(T) Q Ω, where B_ν(T) is the Planck function for a blackbody at temperature T and Q is the emissivity. For a uniform medium of optical depth τ_ν, radiative transfer means that the radiation will be reduced by a factor e^(−τ_ν), so that Q = 1 − e^(−τ_ν). The optical depth is often approximated by the ratio of the emitting frequency ν to the frequency ν₀ at which τ = 1, raised to an exponent β: τ_ν = (ν/ν₀)^β. For cold dust clouds in the interstellar medium, β is approximately two. Therefore Q becomes Q = 1 − e^(−(ν/ν₀)^β).
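A short numerical sketch of the greybody formula above; the temperature, solid angle, and turnover frequency ν₀ are invented for illustration:

```python
import numpy as np

# Sketch of the greybody emissivity discussed above:
#   Q = 1 - exp(-(nu/nu0)**beta),  F_nu = B_nu(T) * Q * Omega
# nu0 (the frequency where tau = 1), T, and Omega are illustrative values.
H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K
C = 299_792_458.0     # speed of light, m/s

def planck(nu, T):
    """Blackbody spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def greybody_flux(nu, T, nu0, beta=2.0, omega=1e-9):
    """Greybody flux density with tau_nu = (nu/nu0)**beta."""
    Q = 1.0 - np.exp(-((nu / nu0) ** beta))
    return planck(nu, T) * Q * omega

# 15 K dust observed at 350 GHz (~0.86 mm), nu0 chosen arbitrarily at 3 THz:
print(greybody_flux(350e9, T=15.0, nu0=3e12))
```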
"HF is the dominant reservoir of fluorine wherever the interstellar H2/atomic H ratio exceeds ~ 1; the unusual behavior of fluorine is explained by its unique thermochemistry, F being the only atom in the periodic table that can react exothermically with H2 to form a hydride." The observations "toward W49N and W51 [occurred] on 2010 March 22 ... The observations were carried out at three different local oscillator (LO) tunings in order to securely identify the HF line toward both sight lines. The dual beam switch mode (DBS) was used with a reference position located 3' on either side of the source position along an East-West axis. We centered the telescope beam at α =19h10m13.2s, δ = 09°06'12.0" for W49N and α = 19h23m43.9s, δ = 14°30'30.5" for W51 (J2000.0). The total on-source integration time amounts to 222s on each source using the Wide Band Spectrometer (WBS) that offers a spectral resolution of 1.1 MHz (~0.3 km s-1 at 1232 GHz)." "[T]he first detection of chloronium, H2Cl+, in the interstellar medium, [occurred on March 1 and March 23, 2010,] using the HIFI instrument aboard the Herschel Space Observatory. The 212 − 101 lines of ortho-H235Cl+ and ortho-H237Cl+ are detected in absorption towards NGC 6334I, and the 111 − 000 transition of para-H235Cl+ is detected in absorption towards NGC 6334I and Sgr B2(S)." "The [microwave] detection of interstellar formaldehyde provides important information about the chemical physics of our galaxy. We now know that polyatomic molecules containing at least two atoms other than hydrogen can form in the interstellar medium." "H2CO is the first organic polyatomic molecule ever detected in the interstellar medium". Radios[edit | edit source] "Over the past 30 years, radioastronomy has revealed a rich variety of molecular species in the interstellar medium of our galaxy and even others." “[R]adio astronomy ... has resulted in the detection of over a hundred interstellar species, including radicals and ions, and organic (i.e. carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that it is used to map out molecular regions. The radio observation of perhaps greatest human interest is the claim of interstellar glycine, the simplest amino acid, but with considerable accompanying controversy. One of the reasons why this detection [is] controversial is that although radio (and some other methods like rotational spectroscopy) are good for the identification of simple species with large dipole moments, they are less sensitive to more complex molecules, even something relatively small like amino acids. Solar coronal clouds[edit | edit source] A coronal cloud is a cloud, or cloud-like, natural astronomical entity, composed of plasmas and usually associated with a star or other astronomical object where the temperature is such that X-rays are emitted. While small coronal clouds are above the photosphere of many different visual spectral type stars, others occupy parts of the interstellar medium (ISM), extending sometimes millions of kilometers into space, or thousands of light-years, depending on the size of the associated object such as a galaxy. "Coronal clouds, type IIIg, form in space above a spot area and rain streamers upon it." 
"This energy [1032 to 1033 ergs] appears in the form of electromagnetic radiation over the entire spectrum from γ-rays to radio burst, in fast electrons and nuclei up to relativistic energies, in the creation of a hot coronal cloud, and in large-scale mass motions including the ejections of material from the Sun." "Coronal clouds are irregular objects suspended in the corona with matter streaming out of them into nearby active regions." Venus[edit | edit source] In visual astronomy almost no variation or detail can be seen in the clouds. The surface is obscured by a thick blanket of clouds. Venus is shrouded by an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. It has thick clouds of sulfur dioxide. There are lower and middle cloud layers. The thick clouds consisting mainly of sulfur dioxide and sulfuric acid droplets. These clouds reflect and scatter about 90% of the sunlight that falls on them back into space, and prevent visual observation of the Venusian surface. The permanent cloud cover means that although Venus is closer than Earth to the Sun, the Venusian surface is not as well lit. Strong 300 km/h winds at the cloud tops circle the planet about every four to five earth days. Venusian winds move at up to 60 times the speed of the planet's rotation, while Earth's fastest winds are only 10% to 20% rotation speed. Earth[edit | edit source] The image on the left shows two meteors, the clouds passing over land and the rain falling towards the ground from the clouds above as the water droplets either lose their static charge or reach too large a size to be held aloft either by the natural electric field of the Earth or by air currents, respectively. The water droplets are moving somewhat horizontally and also vertically. Nephology[edit | edit source] Def. the "branch of meteorology that studies clouds" is called nephology. |Forms and levels||Stratiform (Polar mesospheric clouds) (Very high level) |Polar stratospheric clouds| |Cirrostratus clouds||Cirrus clouds||Cirrocumulus clouds| |(Mid-level)||Altostratus clouds||Altocumulus clouds| |(Low-level)||Stratus clouds||Stratocumulus clouds||Cumulus humilis| |Multi-level/vertical||Nimbostratus clouds||Cumulus mediocris| |Towering vertical||Cumulus congestus||Cumulonimbus clouds| Noctilucent clouds[edit | edit source] Noctilucent clouds may occasionally take on more of a red or orange hue. They are not common or widespread enough to have a significant effect on climate. An increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change. Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds. Convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of small amount of water vapour to the point of saturation which tends to produce the coldest temperatures in the entire atmosphere just below the mesopause resulting in the best environment for the formation of polar mesospheric clouds. Smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud. Sightings are rare more than 45 degrees south of the north pole or north of the south pole. "The mesopause occurs, by definition, at the top of the mesosphere and at the bottom of the thermosphere. Noctilucent clouds appear always in the vicinity of the mesopause." 
Ionospheres

From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region. "The Es layer (sporadic E-layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, rarely up to 225 MHz."

"The total time for transport of metal ions from the equatorial E region to the higher latitudes (within ±30° magnetic latitude) of the F region must not exceed about 12 hours if the entire "circulation" process is to occur during the time the fountain effect is operative. This requirement seems unnecessary in that the "reverse fountain effect" which occurs when the daytime eastward E field reverses to the west is weaker than the daytime fountain (WOODMAN et al., 1977), thus leading to an apparent daily net positive flux of metal ions into the equatorial F region from the equatorial E region. Some evidence for this "pulsed" source of metal ions is found in the observed "clouds" of Mg+ reported by MENDE et al. (1985) and possibly by KUMAR and HANSON (1980)."

During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. These are known as Polar Cap Absorption (PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region.

"Dust quite probably plays a major role in noctilucent cloud formation (TURCO et al., 1982) and possibly modifies D region ion chemistry (e.g. PARTHASARATHY, 1976)." "Dust has long been considered important to the formation of noctilucent clouds at high latitudes. TURCO et al. (1982) extensively treats the problem of noctilucent cloud formation including effects of ion attachment to dust or ice particles. PARTHASARATHY (1976) has considered dust a direct "sink" for D region ionization." "[N]octilucent clouds are not an aspect of low- and mid-latitude D region aeronomy."

Mars

At right is a Hubble Space Telescope image of a dust storm on Mars. The picture was snapped on October 28, 2005. The regional dust storm on Mars had "been growing and evolving over the past few weeks. The dust storm, which is nearly in the middle of the planet in this Hubble view, is about 930 miles (1500 km) long measured diagonally, which is about the size of the states of Texas, Oklahoma, and New Mexico combined. No wonder amateur astronomers with even modest-sized telescopes have been able to keep an eye on this storm. The smallest resolvable features in the image (small craters and wind streaks) are the size of a large city, about 12 miles (20 km) across. The occurrence of the dust storm is in close proximity to the NASA Mars Exploration Rover Opportunity's landing site in Sinus Meridiani. Dust in the atmosphere could block some of the sunlight needed to keep the rover operating at full power. ... The large regional dust storm appears as the brighter, redder cloudy region in the middle of the planet's disk. This storm has been churning in the planet's equatorial regions for several weeks now, and it is likely responsible for the reddish, dusty haze and other dust clouds seen across this hemisphere of the planet in views from Hubble, ground based telescopes, and the NASA and ESA spacecraft studying Mars from orbit. Bluish water-ice clouds can also be seen along the limbs and in the north (winter) polar region at the top of the image."
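The linear scales quoted for the Hubble image of Mars can be related to angular scales with the small-angle formula, linear size ≈ distance × angle (in radians). The sketch below assumes an Earth-Mars distance of roughly 7 × 10^7 km, appropriate for the late-October 2005 close approach (an assumed value, not stated in the text): the ~1500 km storm then subtends a few arcseconds, while a ~20 km feature corresponds to roughly 0.06 arcsec, of the order of Hubble's diffraction-limited resolution at visible wavelengths.

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206265 arcsec per radian

# Assumed Earth-Mars distance near the October 2005 close approach
distance_km = 7.0e7

def angular_size_arcsec(linear_km, dist_km):
    """Small-angle approximation: angle = linear size / distance."""
    return linear_km / dist_km * ARCSEC_PER_RAD

print(f"1500 km dust storm:    {angular_size_arcsec(1500.0, distance_km):.1f} arcsec")
print(f"20 km surface feature: {angular_size_arcsec(20.0, distance_km):.3f} arcsec")
```

The output (about 4.4 arcsec and 0.06 arcsec) explains both why modest amateur telescopes could follow the storm and why ~20 km is the smallest feature resolvable in the Hubble view.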
Saturn

The upper clouds are composed of ammonia crystals. In 1990, the Hubble Space Telescope imaged an enormous white cloud near Saturn's equator that was not present during the Voyager encounters, and in 1994 another, smaller storm was observed. The 1990 storm was an example of a Great White Spot, a unique but short-lived phenomenon that occurs once every Saturnian year, roughly every 30 Earth years, around the time of the northern hemisphere's summer solstice. Previous Great White Spots were observed in 1876, 1903, 1933 and 1960, with the 1933 storm being the most famous. If the periodicity is maintained, another storm will occur in about 2020. Wind speeds on Saturn can reach 1,800 km/h (1,100 mph); Voyager data indicate peak easterly winds of 500 m/s (1800 km/h). Infrared imaging has shown that Saturn's south pole has a warm polar vortex, the only known example of such a phenomenon in the Solar System. Whereas temperatures on Saturn are normally −185 °C, temperatures in the vortex often reach as high as −122 °C, believed to be the warmest spot on Saturn.

Uranus

Uranus has a complex, layered cloud structure, with methane thought to make up the uppermost layer of clouds. With a large telescope of 25 cm or wider, cloud patterns may be visible. When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet. Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north of the collar. In the 1990s, the number of observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available. Most were found in the northern hemisphere as it started to become visible. An early explanation (that bright clouds are easier to identify in the dark part of the planet, whereas in the southern hemisphere the bright collar masks them) was shown to be incorrect: the actual number of features has indeed increased considerably. Nevertheless there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter, and they appear to lie at a higher altitude. The lifetime of clouds spans several orders of magnitude: some small clouds live for hours, while at least one southern cloud may have persisted since the Voyager flyby. Recent observations also discovered that cloud features on Uranus have a lot in common with those on Neptune. For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature, dubbed the Uranus Dark Spot, was imaged. The speculation is that Uranus is becoming more Neptune-like during its equinoctial season. On August 23, 2006, researchers at the Space Science Institute (Boulder, CO) and the University of Wisconsin observed a dark spot on Uranus's surface, giving astronomers more insight into the planet's atmospheric activity. The wind speeds on Uranus can reach 250 meters per second (900 km/h, 560 mph). The tracking of numerous cloud features allowed determination of the zonal winds blowing in the upper troposphere of Uranus. At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −100 to −50 m/s. Wind speeds increase with distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located. Closer to the poles, the winds shift to a prograde direction, flowing with the planet's rotation.
Wind speeds continue to increase, reaching maxima at ±60° latitude before falling to zero at the poles. Wind speeds at −40° latitude range from 150 to 200 m/s. Since the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure. In contrast, in the northern hemisphere maximum speeds as high as 240 m/s are observed near +50 degrees of latitude. ... Observations included record-breaking wind speeds of 229 m/s (824 km/h) and a persistent thunderstorm referred to as "Fourth of July fireworks".

Neptune

At the time of the 1989 Voyager 2 flyby, the planet's southern hemisphere possessed a Great Dark Spot. In 1989, the Great Dark Spot, an anti-cyclonic storm system spanning 13,000 × 6,600 km, was discovered by NASA's Voyager 2 spacecraft. Some five years later, on 2 November 1994, the Hubble Space Telescope did not see the Great Dark Spot on the planet. Instead, a new storm similar to the Great Dark Spot was found in the planet's northern hemisphere. The Scooter is another storm, a white cloud group farther south than the Great Dark Spot. Its nickname arose because, when first detected in the months before the 1989 Voyager 2 encounter, it moved faster than the Great Dark Spot. Subsequent images revealed even faster clouds. The Small Dark Spot is a southern cyclonic storm, the second-most-intense storm observed during the 1989 encounter. It initially was completely dark, but as Voyager 2 approached the planet, a bright core developed that can be seen in most of the highest-resolution images. The persistence of companion clouds shows that some former dark spots may continue to exist as cyclones even though they are no longer visible as a dark feature. Dark spots may dissipate when they migrate too close to the equator or possibly through some other unknown mechanism. The upper-level clouds occur at pressures below one bar, where the temperature is suitable for methane to condense. High-altitude clouds on Neptune have been observed casting shadows on the opaque cloud deck below. There are also high-altitude cloud bands that wrap around the planet at constant latitude. These circumferential bands have widths of 50–150 km and lie about 50–110 km above the cloud deck. Because of seasonal changes, the cloud bands in the southern hemisphere of Neptune have been observed to increase in size and albedo. This trend was first seen in 1980 and is expected to last until about 2020. The long orbital period of Neptune results in seasons lasting forty years. Neptune has the strongest sustained winds of any planet in the Solar System, with recorded wind speeds as high as 2,100 kilometres per hour (1,300 mph). On Neptune winds reach speeds of almost 600 m/s, nearly attaining supersonic flow. More typically, by tracking the motion of persistent clouds, wind speeds have been shown to vary from 20 m/s in the easterly direction to 325 m/s westward. At the cloud tops, the prevailing winds range in speed from 400 m/s along the equator to 250 m/s at the poles. Most of the winds on Neptune move in a direction opposite the planet's rotation. The general pattern of winds shows prograde rotation at high latitudes and retrograde rotation at lower latitudes. The difference in flow direction is believed to be a "skin effect" and not due to any deeper atmospheric processes. At 70° S latitude, a high-speed jet travels at a speed of 300 m/s.
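The wind speeds in the Uranus and Neptune sections above are quoted in a mix of m/s, km/h and mph. A few one-line conversions (a minimal sketch using only the exact factors of 3.6 km/h per m/s and 1.609344 km per mile) confirm that the different figures describe the same speeds.

```python
KMH_PER_MS = 3.6          # exact conversion, m/s to km/h
KM_PER_MILE = 1.609344    # exact conversion, miles to km

def ms_to_kmh(v_ms):
    return v_ms * KMH_PER_MS

def kmh_to_mph(v_kmh):
    return v_kmh / KM_PER_MILE

# Uranus: 250 m/s is quoted above as 900 km/h and 560 mph
print(ms_to_kmh(250.0), kmh_to_mph(ms_to_kmh(250.0)))   # 900.0  ~559

# Uranus Keck observations: 229 m/s is quoted as 824 km/h
print(ms_to_kmh(229.0))                                  # ~824

# Neptune: 2,100 km/h is quoted as "almost 600 m/s" and 1,300 mph
print(2100.0 / KMH_PER_MS, kmh_to_mph(2100.0))           # ~583  ~1305
```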
Comets

Due to a need for accurate oscillator strengths and cross sections in studies of diffuse interstellar clouds and cometary atmospheres, emission lines in cometary spectra are being studied.

Interstellar clouds

Def. an increase in the hydrogen density (nH) of the interstellar medium from ~0.01 H cm^-3 to ≳0.1 H cm^-3 is called an interstellar cloud.

The cyanide radical CN (often called cyanogen) is used to measure the temperature of interstellar gas clouds. "Carbon monoxide is the second most abundant molecule, after H2, in interstellar clouds. In diffuse clouds, the amount of CO is mainly derived from measurements of absorption at UV wavelengths."

Hot ionized mediums

"Of interest is the hot ionized medium (HIM) consisting of a coronal cloud ejection from star surfaces at 10^6-10^7 K which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble."

HI clouds

Def. an interstellar cloud composed primarily of neutral atomic hydrogen is called an HI cloud, H I cloud, or HI region.

"Although there is a possibility that we are seeing the edge of a larger feature, we may be seeing a cloud of higher density superposed on a slowly varying background. If one assumes that to be the case, one finds that the H I cloud has a column density of 10^20 atoms cm^-2 at maximum (assuming an arbitrary kinetic temperature of 50 K and a half-width of 2 km s^-1). Although one cannot determine the distance to the absorbing cloud, one can estimate a reasonable upper limit. The quasar 3C 147 [in the image on the right] lies at galactic latitude 10°; the assumption of a hydrogen layer extending 100 pc above the plane leads to a maximum probable distance of 600 pc. The linear diameter of the cloud (if the angular diameter is taken to be 0.1") is then at most 3 × 10^-4 pc, or 70 AU! The neutral hydrogen density is 10^5 atoms cm^-3; the mass, 3 × 10^-7 M⊙."

The neutral atomic hydrogen "gas is distributed very differently from how it was in the past, with much less in the galaxies' outer suburbs than billions of years ago." "This means that it's much harder for galaxies to pull the gas in and form new stars. It's why stars are forming 20 times more slowly now than in the past." "Even though there's more atomic hydrogen than we thought, it's not a big enough percentage to solve the Dark Matter problem. If what we are missing had the weight of a large kangaroo, what we have found would have the weight of a small echidna."

SIMBAD contains some 6,010 entries of the astronomical object type 'HI' (H I region). These regions are non-luminous, save for emission of the 21-cm (1,420 MHz) spectral line. Mapping H I emissions with a radio telescope is a technique used for determining the structure of spiral galaxies. The degree of ionization in an H I region is very small, at around 10^-4 (i.e. one particle in 10,000).
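The numbers quoted above for the very small absorbing H I cloud follow directly from its angular size and the assumed distance. The sketch below reproduces them: at 600 pc, an angular diameter of 0.1 arcsec corresponds to a linear diameter of about 60 AU (~3 × 10^-4 pc), consistent with the ~70 AU maximum quoted, and dividing the 10^20 cm^-2 column density by that path length gives a volume density of order 10^5 atoms cm^-3; the quoted mass then follows from multiplying the density by the cloud volume. The final line also checks that the 1,420 MHz hydrogen line indeed has a wavelength of about 21 cm.

```python
# Rounded physical constants
AU_CM = 1.496e13          # 1 astronomical unit in cm
PC_CM = 3.086e18          # 1 parsec in cm
ARCSEC_PER_RAD = 206265.0
C_CM_S = 2.998e10         # speed of light, cm/s

distance_pc = 600.0       # upper-limit distance used in the quoted text
theta_arcsec = 0.1        # assumed angular diameter
N_H = 1.0e20              # H I column density, atoms per cm^2

# Linear diameter from the small-angle formula
diameter_cm = distance_pc * PC_CM * (theta_arcsec / ARCSEC_PER_RAD)
print(f"Diameter: {diameter_cm / AU_CM:.0f} AU  ({diameter_cm / PC_CM:.1e} pc)")

# Volume density if the column is accumulated over one cloud diameter
n_H = N_H / diameter_cm
print(f"Density:  {n_H:.1e} atoms cm^-3")

# Wavelength of the 1,420 MHz hydrogen line
print(f"21-cm line wavelength: {C_CM_S / 1.420e9:.1f} cm")
```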
The temperature of an H I region is about 100 K, and it is usually considered isothermal, except near an expanding H II region. For hydrogen, complete ionization "obviously reduces its cross section to zero, but ... the net effect of partial ionization of hydrogen on calculated absorption depends on whether or not observations of hydrogen [are] used to estimate the total gas. ... [A]t least 20 % of interstellar hydrogen at high galactic latitudes seems to be ionized".

HI shells

"The Southern Galactic Plane Survey (SGPS; see the 2002 Annual Report), which combines 21-cm HI observations from Parkes and the Compact Array, is now complete. The SGPS provides a wonderful resource for understanding populations such as magnetars in the context of their environment. Examination of SGPS data around the position of the well-known magnetar 1E 1048.1-5937 reveals a striking cavity in HI, designated GSH 288.3-0.5-28, that is almost centred on the position of the neutron star. The SGPS data imply that GSH 288.3-0.5-28 is at a distance of approximately 2.7 kpc, and is expanding at a velocity of approximately 7.5 kilometres per second into gas of density ~17 atoms cm^-3."

"Shells like GSH 288.3-0.5-28 are common, and represent wind-blown bubbles powered by massive stars expanding into the interstellar medium. The size and expansion speed of GSH 288.3-0.5-28 then imply that the bubble is several million years old, and has been blown by a wind of mechanical luminosity ~4 × 10^34 ergs per second, corresponding to a single star of initial mass 30 to 40 solar masses."

"Usually in such cases, the central star is obvious, in the form of a bright O star, supergiant or WR star at the shell's centre. However, even though this field lies in the rich Carina OB1 region, there are no known stars of the appropriate position, distance or luminosity to argue for an association with GSH 288.3-0.5-28. This raises the intriguing possibility that GSH 288.3-0.5-28 was blown by the massive star whose collapse formed 1E 1048.1-5937. The central location of the magnetar within the HI shell suggests that the supernova occurred quite recently. The corresponding blast waves would impact the walls of the HI shell approximately 3000 years after core collapse, producing significant X-ray and radio emission. The lack of such emission requires the neutron star to be very young, consistent with the small ages expected for active magnetars. A common distance of around three kpc is suggested by the properties of both objects."

HII clouds

In the upper image on the right, the reddish region is a giant HII cloud.

Def. an interstellar cloud in which the primary constituent is monatomic hydrogen undergoing ionization and emission is called an HII cloud.

"The nebula [in the second image down on the right] is mostly composed of hydrogen gas, which is ionised by the ultraviolet radiation emitted by the hot stars, leading to the nebula's alternative title as an HII region. This picture shows only part of the nebula, where dark dust clouds are strikingly silhouetted against the glowing gas." "NGC 2174 lies about 6400 light-years away in the constellation of Orion (The Hunter)." "This picture was created from images from the Wide Field Planetary Camera 2 on Hubble. Images through four different filters were combined to make the view shown here. Images through a filter isolating the glow from ionised oxygen (F502N) were coloured blue and images through a filter showing glowing hydrogen (F656N) are green. Glowing ionised sulphur (F673N) and the view through a near-infrared filter (F814W) are both coloured red. The total exposure times per filter were 2600 s, 2600 s, 2600 s and 1000 s respectively and the field of view is about 1.8 arcminutes across."

"The Maryland-Green Bank hydrogen-line survey maps reveal this feature [the emission nebula surrounding NGC 2175] as part of a large neutral hydrogen cloud in the galactic plane that is situated at the edge of the association Gem I. It is most unlikely that such a large neutral hydrogen cloud would be connected with the emission nebula surrounding NGC 2175. Indeed, in a medium with a mean density of hydrogen atoms of 20 cm^-3, the Strömgren radius of an HII region around an O6-type star would be more than 16 pc. However, if a distance of 2 kpc is accepted, the linear radius of the full extent of the continuum source is less than 10 pc. Thus the ionized nebula is density bounded rather than ionization bounded, its small size implying that it is not part of a large neutral hydrogen cloud which would be ionized by radiation from the O6-type star."
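The Strömgren radius invoked above follows from balancing ionizations against recombinations in a uniform medium: R_S = [3Q / (4π n² α_B)]^(1/3), where Q is the star's ionizing photon rate, n the hydrogen density, and α_B the case-B recombination coefficient. The sketch below evaluates this for the quoted n = 20 cm^-3 using an assumed Q ≈ 10^49 photons s^-1 for an O6-type star and α_B ≈ 2.6 × 10^-13 cm^3 s^-1 at 10^4 K (both values are assumptions here, not taken from the quoted paper). The result is of order 10 pc, the same order as the >16 pc quoted above; the exact figure depends mainly on the adopted ionizing photon rate, for which published calibrations vary.

```python
import math

PC_CM = 3.086e18   # 1 parsec in cm

def stromgren_radius_pc(Q, n, alpha_B=2.6e-13):
    """Stromgren radius in pc for ionizing rate Q [photons/s] and density n [cm^-3]."""
    r_cm = (3.0 * Q / (4.0 * math.pi * n**2 * alpha_B)) ** (1.0 / 3.0)
    return r_cm / PC_CM

# Assumed ionizing photon rate for an O6-type star; calibrations in the
# literature span roughly (1-3) x 10^49 photons per second.
Q_O6 = 1.0e49
n = 20.0   # mean hydrogen density from the quoted text, cm^-3

print(f"R_S ~ {stromgren_radius_pc(Q_O6, n):.0f} pc")
```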
Molecular clouds

Def. a "large and relatively dense cloud of cold gas and dust in interstellar space from which new stars are formed" is called a molecular cloud.

The image on the right is a composite of visible (B 440 nm and V 557 nm) and near-infrared (768 nm) images of the dark cloud (absorption cloud) Barnard 68. Barnard 68 is around 500 light-years away in the constellation Ophiuchus. "At these wavelengths, the small cloud is completely opaque because of the obscuring effect of dust particles in its interior." "It was obtained with the 8.2-m VLT ANTU telescope and the multimode FORS1 instrument in March 1999."

In the image at right is a molecular cloud of gas and dust that is being reduced. "Likely, within a few million years, the intense light from bright stars will have boiled it away completely. The cloud has broken off of part of the Carina Nebula, a star forming region about 8000 light years away. Newly formed stars are visible nearby, their images reddened by blue light being preferentially scattered by the pervasive dust. This image spans about two light years and was taken by the orbiting Hubble Space Telescope in 1999."

A molecular cloud, sometimes called a stellar nursery if star formation is occurring within, is a type of interstellar cloud whose density and size permit the formation of molecules, most commonly molecular hydrogen (H2). Molecular hydrogen is difficult to detect by infrared and radio observations, so the molecule most often used to determine the presence of H2 is CO (carbon monoxide). The ratio between CO luminosity and H2 mass is thought to be constant, although there are reasons to doubt this assumption in observations of some other galaxies. Such clouds make up < 1% of the ISM, have temperatures of 10-20 K and high densities of 10^2 - 10^6 atoms/cm^3. These clouds are astronomical radio and infrared sources with radio and infrared molecular emission and absorption lines.

Globules

Def. a small, isolated round dark cloud is called a globule.

"By comparing the properties of globules with and without star formation one can study the processes that lead to star formation in molecular clouds." The "Thumbprint Nebula (TPN) in the Chamaeleon III region" is "a globule without any signs of star formation".
The "globule DC 303.8-14.2 (Hartley et al. 1986) [is] located in the eastern part of the Chamaeleon II dark cloud complex" and is "a star forming globule".

Cometary globules

Def. "a dense dust cloud with a faint luminous tail" is called a cometary globule. The image on the right shows a flower-like cometary globule.

Circumstellar clouds

Def. an interstellar-like cloud apparently surrounding or in orbit around a star is called a circumstellar cloud.

"VY Canis Majoris [a red hypergiant star is] an irregular pulsating variable [that] lies about 5,000 light-years away in the constellation Canis Major." "Although VY Can is about half a million times as luminous as the Sun, much of its visible light is absorbed by a large, asymmetric cloud of dust particles that has been ejected from the star in various outbursts over the past 1,000 years or so. The infrared emission from this dust cloud makes VY Can one of the brightest objects in the sky at wavelengths of 5–20 microns." "In 2007, a team of astronomers using the 10-meter radio dish on Mount Graham, in Arizona, found that VY Can's extended circumstellar cloud is a prolific molecule-making factory. Among the radio emissions identified were those of hydrogen cyanide (HCN), silicon monoxide (SiO), sodium chloride (NaCl) and a molecule, phosphorus nitride (PN), in which a phosphorus atom and a nitrogen atom are bound together. Phosphorus-bearing molecules are of particular interest to astrobiologists because phosphorus is relatively rare in the universe, yet it is a key ingredient in molecules that are central to life as we know it, including the nucleic acids DNA and RNA and the energy-storage molecule ATP." "Material ejected by the star is visible in this 2004 image [on the top right] captured by the Hubble Space Telescope's Advanced Camera for Surveys, using polarizing filters." For comparison, the second image down on the right was captured in visible light.

High-velocity clouds

Def. any cloud having a velocity "inconsistent with simple Galactic rotation models that generally fit the stars and gas in the Milky Way disk" is called a high-velocity cloud.

"The leading edge of this cloud [shown in the image on the right] is already interacting with gas from our Galaxy." "The cloud, called Smith's Cloud, after the astronomer who discovered it in 1963, contains enough hydrogen to make a million stars like the Sun. Eleven thousand light-years long and 2,500 light-years wide, it is only 8,000 light-years from our Galaxy's disk. It is careening toward our Galaxy at more than 150 miles per second, aimed to strike the Milky Way's disk at an angle of about 45 degrees." "This is most likely a gas cloud left over from the formation of the Milky Way or gas stripped from a neighbor galaxy. When it hits, it could set off a tremendous burst of star formation. Many of those stars will be very massive, rushing through their lives quickly and exploding as supernovae. Over a few million years, it'll look like a celestial New Year's celebration, with huge firecrackers going off in that region of the Galaxy." "If you could see this cloud with your eyes, it would be a very impressive sight in the night sky. From tip to tail it would cover almost as much sky as the Orion constellation. But as far as we know it is made entirely of gas -- no one has found a single star in it." "Its shape, somewhat similar to that of a comet, indicates that it's already hitting gas in our Galaxy's outskirts.
It is also feeling a tidal force from the gravity of the Milky Way and may be in the process of being torn apart. Our Galaxy will get a rain of gas from this cloud, then in about 20 to 40 million years, the cloud's core will smash into the Milky Way's plane."

Giant molecular clouds

A vast assemblage of molecular gas with a mass of approximately 10^3–10^7 times the mass of the Sun is called a giant molecular cloud (GMC). GMCs are ≈15–600 light-years in diameter (5–200 parsecs). Whereas the average density in the solar vicinity is one particle per cubic centimetre, the average density of a GMC is 10^2–10^3 particles per cubic centimetre. Although the Sun is much denser than a GMC, the volume of a GMC is so great that it contains much more mass than the Sun. The substructure of a GMC is a complex pattern of filaments, sheets, bubbles, and irregular clumps. The densest parts of the filaments and clumps are called "molecular cores", whilst the densest molecular cores are, unsurprisingly, called "dense molecular cores" and have densities in excess of 10^4–10^6 particles per cubic centimetre. Observationally, molecular cores are traced with carbon monoxide and dense cores are traced with ammonia. The concentration of dust within molecular cores is normally sufficient to block light from background stars so that they appear in silhouette as dark nebulae. GMCs are so large that "local" ones can cover a significant fraction of a constellation; thus they are often referred to by the name of that constellation, e.g. the Orion Molecular Cloud (OMC) or the Taurus Molecular Cloud (TMC). These local GMCs are arrayed in a ring in the neighborhood of the Sun coinciding with the Gould Belt. The most massive collection of molecular clouds in the galaxy forms an asymmetrical ring around the galactic center at a radius of 120 parsecs; the largest component of this ring is the Sagittarius B2 complex. The Sagittarius region is chemically rich and is often used as an exemplar by astronomers searching for new molecules in interstellar space.

Nebulas

"The Horsehead Nebula, a part of the optical nebula IC434 and also known as Barnard 33, was first recorded in 1888 on a photographic plate taken at the Harvard College Observatory. Its coincidental appearance as the profile of a horse's head and neck has led to its becoming one of the most familiar astronomical objects. It is, in fact, an extremely dense cloud projecting in front of the ionized gas that provides the pink glow so nicely revealed in this picture. We know this not only because the underside of the 'neck' is especially dark, but because it actually casts a shadow on the field to its east (below the 'muzzle')."

Dark nebulas

"The 1_11 → 1_10 rotational transition of formaldehyde (H2CO) [occurs] in absorption in the direction of four dark nebulae. The radiation ... being absorbed appears to be the isotropic microwave background". One of the dark nebulae sampled, per SIMBAD, is TGU H1211 P5.

Supernova remnants

"The supernova SN1987A in the Large Magellanic Cloud (LMC) was discovered on February 23, 1987, and its progenitor is a blue supergiant (Sk -69 202) with a luminosity of 2–5 × 10^38 erg/s. The 847 keV and 1238 keV gamma-ray lines from ^56Co decay have been detected."

At right is a Hubble Space Telescope image of the Ghost Head Nebula. "This nebula is one of a chain of star-forming regions lying south of the 30 Doradus nebula in the Large Magellanic Cloud.
The red and blue light comes from regions of hydrogen gas heated by nearby stars. The green light comes from glowing oxygen, illuminated by the energy of a stellar wind. The white center shows a core of hot, massive stars."

On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15 – 60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, USA. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source.

"The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is ~1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1, the Crab Nebula."

Large Magellanic Clouds

For coronal cloud observations of the Large Magellanic Cloud, "[b]ackground spectra were obtained from observations of the Lockman Hole."

Outflow clouds

Def. an interstellar-like or intergalactic-like cloud appearing to outflow from a quasar is called an outflow cloud. The image on the right labels three quasars that have outflow clouds associated with them. The other objects labeled are nearby stars.

Satellites

The Submillimeter Wave Astronomy Satellite (SWAS) [is in] low Earth orbit ... to make targeted observations of giant molecular clouds and dark cloud cores. The focus of SWAS is five spectral lines: water (H2O), isotopic water (H2^18O), isotopic carbon monoxide (^13CO), molecular oxygen (O2), and neutral carbon (C I).

Spectroscopy

By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet, etc.) able to detect only certain types of species, depending on the chemical properties of the molecules. Interstellar formaldehyde was the first polyatomic organic molecule detected in the interstellar medium.

Spacecraft

The Cosmic Ray System (CRS) determines the origin and acceleration process, life history, and dynamic contribution of interstellar cosmic rays, the nucleosynthesis of elements in cosmic-ray sources, the behavior of cosmic rays in the interplanetary medium, and the trapped planetary energetic-particle environment. Measurements from the spacecraft revealed a steady rise since May in collisions with high-energy particles (above 70 MeV), which are believed to be cosmic rays emanating from supernova explosions far beyond the Solar System, with a sharp increase in these collisions in late August. At the same time, in late August, there was a dramatic drop in collisions with low-energy particles, which are thought to originate from the Sun.
"It's important for us to be aware of what kinds of objects are present beyond our solar system, since we are now beginning to think about potential interstellar space missions, such as Breakthrough Starshot." At "least two interstellar clouds [have been discovered] along Voyager 2's path, and one or two interstellar clouds along Voyager 1's path. They were also able to measure the density of electrons in the clouds along Voyager 2's path, and found that one had a greater electron density than the other." "We think the difference in electron density perhaps indicates a difference in composition of overall density of the clouds." A "broad range of elements [were detected]] in the interstellar medium, such as electrically charged ions of magnesium, iron, carbon and manganese [and] neutrally charged oxygen, nitrogen and hydrogen." See also[edit | edit source] References[edit | edit source] - cumulus. San Francisco, California: Wikimedia Foundation, Inc. February 8, 2013. Retrieved 2013-02-17. - Baffled Scientists Say Less Sunlight Reaching Earth. LiveScience. 2006-01-24. Retrieved 2011-08-19. - "cloud". San Francisco, California: Wikimedia Foundation, Inc. February 13, 2013. Retrieved 2013-02-18. - SnoopY (20 December 2005). "nebula". San Francisco, California: Wikimedia Foundation, Inc. Retrieved 19 June 2019. - Jyril (11 August 2005). "nebula". San Francisco, California: Wikimedia Foundation, Inc. Retrieved 19 June 2019. - 220.127.116.11 (14 July 2005). "nebula". San Francisco, California: Wikimedia Foundation, Inc. Retrieved 19 June 2019. - Pumpie (27 February 2004). "nebula". San Francisco, California: Wikimedia Foundation, Inc. Retrieved 19 June 2019. - Adolf N. Witt, Karl D. Gordon and Douglas G. Furton (July 1, 1998). "Silicon Nanoparticles: Source of Extended Red Emission?". The Astrophysical Journal Letters 501 (1): L111-5. doi:10.1086/311453. http://iopscience.iop.org/1538-4357/501/1/L111. Retrieved 2013-07-30. - T. L. Smith and A. N. Witt (December 1999). "The Photoluminescence Efficiency of Extended Red Emission as a Constraint for Interstellar Dust". Bulletin of the American Astronomical Society 31: 1479. http://adsabs.harvard.edu/abs/1999AAS...195.7406S. Retrieved 2013-08-02. - See Binney and Merrifeld (1998), Carroll and Ostlie (1996), Kutner (2003) for applications in astronomy. - eso1320a (May 2, 2013). The star formation region NGC 6559. La Silla Observatory, Chile: European Southern Observatory. Retrieved 2013-05-02. - Miriam Kramer (May 2, 2013). Dusty Star-Spawning Space Cloud Glows In Amazing Photo. La Silla, Chile: Yahoo! News. Retrieved 2013-05-02. - J. Rho (January 12, 2005). Spitzer/IRAC View of the Trifid Nebula. Pasadena, California USA: NASA/JPL/Caltech. Retrieved 2014-03-06. - Duley, W. W. & Williams, D. A. (July 1981). "The infrared spectrum of interstellar dust - Surface functional groups on carbon". Royal Astronomical Society, Monthly Notices 196 (7): 269-74. - P. Sonnentrucker, D. A. Neufeld, T. G. Phillips, M. Gerin, D. C. Lis, M. De Luca, J. R. Goicoechea, J. H. Black, T. A. Bell, F. Boulanger, J. Cernicharo, A. Coutens, E. Dartois, M . Kaźmierczak, P. Encrenaz, E. Falgarone, T. R. Geballe, T. Giesen, B. Godard, P. F. Goldsmith, C. Gry, H. Gupta, P. Hennebelle, E. Herbst, P. Hily-Blant, C. Joblin, R. Kołos, J. Krełowski, J. Martín-Pintado, K. M. Menten, R. Monje, B. Mookerjea, J. Pearson, M. Perault, C. M. Persson, R. Plume, M. Salez, S. Schlemmer, M. Schmidt, J. Stutzki, D.Teyssier, C. Vastel, S. Yu, E. Caux, R. Güsten, W. A. Hatch, T. Klein, I. Mehdi, P. 
Morris and J. S. Ward (October 1, 2010). "Detection of hydrogen fluoride absorption in diffuse molecular clouds with Herschel/HIFI: a ubiquitous tracer of molecular gas". Astronomy & Astrophysics 521: 5. doi:10.1051/0004-6361/201015082. http://arxiv.org/pdf/1007.2148.pdf. Retrieved 2013-01-17. - D. C. Lis, J. C. Pearson, D. A. Neufeld, P. Schilke, H. S. P. Müller,H. Gupta, T. A. Bell, C. Comito, T. G. Phillips, E. A. Bergin, C. Ceccarelli, P. F. Goldsmith, G. A. Blake, A. Bacmann, A. Baudry, M. Benedettini, A. Benz, J. Black, A. Boogert, S. Bottinelli, S. Cabrit, P. Caselli, A. Castets, E. Caux, J. Cernicharo, C. Codella, A. Coutens, N. Crimier, N. R. Crockett, F. Daniel, K. Demyk, C. Dominic, M.-L. Dubernet, M. Emprechtinger, P. Encrenaz, E. Falgarone, A. Fuente, M. Gerin, T. F. Giesen, J. R. Goicoechea, F. Helmich, P. Hennebelle, Th. Henning, E. Herbst, P. Hily-Blant, Å. Hjalmarson, D. Hollenbach, T. Jack, C. Joblin, D. Johnstone, C. Kahane, M. Kama, M. Kaufman, A. Klotz, W. D. Langer, B. Larsson, J. Le Bourlot, B. Lefloch, F. Le Petit, D. Li, R. Liseau, S. D. Lord, A. Lorenzani, S. Maret, P. G. Martin, G. J. Melnick, K. M. Menten, P. Morris, J. A. Murphy, Z. Nagy, B. Nisini, V. Ossenkopf, S. Pacheco, L. Pagani, B. Parise, M. Pérault, R. Plume, S.-L. Qin, E. Roueff, M. Salez, A. Sandqvist, P. Saraceno, S. Schlemmer, K. Schuster, R. Snell, J. Stutzki, A. Tielens, N. Trappe, F. F. S. van der Tak, M. H. D. van der Wiel, E. van Dishoeck, C. Vastel, S. Viti, V. Wakelam, A. Walters, S. Wang, F. Wyrowski, H. W. Yorke, S. Yu, J. Zmuidzinas, Y. Delorme, J.-P. Desbat, R. Güsten, J.-M. Krieg, and B. Delforge (October 1, 2010). "Herschel/HIFI discovery of interstellar chloronium (H2Cl+)". Astronomy & Astrophysics 521: 5. doi:10.1051/0004-6361/201014959. http://arxiv.org/pdf/1007.1461.pdf. Retrieved 2013-01-18. - Lewis E. Snyder, David Buhl, B. Zuckerman, Patrick Palmer (March 1969). "Microwave detection of interstellar formaldehyde". Physical Review Letters 22 (13): 679-81. doi:10.1103/PhysRevLett.22.679. http://link.aps.org/doi/10.1103/PhysRevLett.22.679. Retrieved 2011-12-17. - F. H. Shu (1982). The Physical Universe. Mill Valley, California: University Science Books. ISBN 0-935702-05-9. - Cox, A. N., ed. (2000). Allen's Astrophysical Quantities. New York: Springer-Verlag. p. 124. ISBN 0-387-98746-0. - Dudley Herschbach (March-May 1999). "Chemical physics: Molecular clouds, clusters, and corrals". Reviews of Modern Physics 71 (2): S411-S418. doi:10.1103/RevModPhys.71.S411. http://link.aps.org/doi/10.1103/RevModPhys.71.S411. Retrieved 2011-12-17. - Kuan YJ, Charnley SB, Huang HC, et al. (2003). "Interstellar glycine". The Astrophysical Journal 593 (2): 848–867. doi:10.1086/375637. - Snyder LE, Lovas FJ, Hollis JM, et al. (2005). "A rigorous attempt to verify interstellar glycine". The Astrophysical Journal 619 (2): 914–30. doi:10.1086/426677. - Edison Pettit (July 1943). "The Properties of Solar Prominences as Related to Type". Astrophysical Journal 98 (7): 6-19. doi:10.1086/144539. - R. P. Lin and H. S. Hudson (September-October 1976). "Non-thermal processes in large solar flares". Solar Physics 50 (10): 153-78. doi:10.1007/BF00206199. http://adsabs.harvard.edu/full/1976SoPh...50..153L. Retrieved 2013-07-07. - E. Tandberg-Hanssen (1977). A. Bruzek and C. J. Durrant (ed.). Prominences, In: Illustrated Glossary for Solar and Solar-Terrestrial Physics. Dordrecht-Holland: D. Reidel Publishing Company. pp. 97–109. doi:10.1007/978-94-010-1245-4_10. ISBN 978-94-010-1247-8. Retrieved 2013-07-10. 
- Krasnopolsky, V. A.; Parshev, V. A. (1981). "Chemical composition of the atmosphere of Venus". Nature 292 (5824): 610–613. doi:10.1038/292610a0. - Vladimir A. Krasnopolsky (2006). "Chemical composition of Venus atmosphere and clouds: Some unsolved problems". Planetary and Space Science 54 (13–14): 1352–1359. doi:10.1016/j.pss.2006.04.019. - W. B., Rossow; A. D., del Genio; T., Eichler (1990). "Cloud-tracked winds from Pioneer Venus OCPP images". Journal of the Atmospheric Sciences 47 (17): 2053–2084. doi:10.1175/1520-0469(1990)047<2053:CTWFVO>2.0.CO;2. ISSN 1520-0469. http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281990%29047%3C2053%3ACTWFVO%3E2.0.CO%3B2. - Normile, Dennis (7 May 2010). "Mission to probe Venus's curious winds and test solar sail for propulsion". Science 328 (5979): 677. doi:10.1126/science.328.5979.677-a. PMID 20448159. - "Weather Terms". National Weather Service. Retrieved 21 June 2013. - Widsith (17 June 2006). nephology. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 5 February 2019. - WikiPedant (22 August 2008). noctilucent. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 6 February 2019. - Eean (28 November 2004). noctilucent. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 6 February 2019. - DerekWinters (20 September 2015). noctilucent. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 6 February 2019. - SemperBlotto (6 July 2007). noctilucent. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 6 February 2019. - World Meteorological Organization, ed. (2017). "Upper atmospheric clouds, International Cloud Atlas". Retrieved 31 July 2017. - Turco, R. P.; Toon, O. B.; Whitten, R. C.; Keesee, R. G.; Hollenbach, D. (1982). "Noctilucent clouds: Simulation studies of their genesis, properties and global influences". Planetary and Space Science 30 (11): 1147–1181. doi:10.1016/0032-0633(82)90126-X. - Project Possum, ed. (2017). "About Noctiluent Clouds". Retrieved 6 April 2018. - Michael Gadsden; Pekka Parviainen (September 2006). Observing Noctilucent Clouds (PDF). International Association of Geomagnetism & Aeronomy. p. 9. Retrieved 31 January 2011. - Fox, Karen C. (2013). "NASA Sounding Rocket Observes the Seeds of Noctilucent Clouds". Retrieved 1 October 2013. - Michael Gadsden and Wilfried Schröder (1989). Noctilucent Clouds, In: Noctilucent Clouds. 18. Berlin: Springer. pp. 1-12. doi:10.1007/978-3-642-48626-5_1. ISBN 978-3-642-48628-9. https://link.springer.com/chapter/10.1007/978-3-642-48626-5_1. Retrieved 7 February 2019. - Yenne, Bill (1985). The Encyclopedia of US Spacecraft. Exeter Books (A Bison Book), New York. ISBN 978-0-671-07580-4. p. 12 AEROS - Reddi (7 February 2004). Ionosphere. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 7 February 2019. - J. D. Mathews (1988). Some aspects of metallic ion chemistry and dynamics in the mesosphere and thermosphere (PDF). NASA. pp. 228–254. Retrieved 7 February 2019. - Rose, D.C.; Ziauddin, Syed (June 1962). "The polar cap absorption effect". Space Science Reviews 1 (1): 115. doi:10.1007/BF00174638. - Jim Bell, Mike Wolff, and Keith Noll (November 3, 2005). Mars Kicks Up the Dust as it Makes Closest Approach to Earth. HubbleSite NewsCenter. Retrieved 2013-02-24.CS1 maint: multiple names: authors list (link) - Pérez-Hoyos, S.; Sánchez-Laveg, A.; French, R. G.; J. F., Rojas (2005). "Saturn's cloud structure and temporal evolution from ten years of Hubble Space Telescope images (1994–2003)". Icarus 176 (1): 155–174. 
doi:10.1016/j.icarus.2005.01.014. - Patrick Moore, ed., 1993 Yearbook of Astronomy, (London: W.W. Norton & Company, 1992), Mark Kidger, "The 1990 Great White Spot of Saturn", pp. 176–215. - Hamilton, Calvin J. (1997). Voyager Saturn Science Summary. Solarviews. Retrieved 2007-07-05. - Warm Polar Vortex on Saturn. Merrillville Community Planetarium. 2007. Archived from the original on 2011-10-05. Retrieved 2007-07-25. Unknown parameter - Godfrey, D. A. (1988). "A hexagonal feature around Saturn's North Pole". Icarus 76 (2): 335. doi:10.1016/0019-1035(88)90075-9. - Sanchez-Lavega, A.; Lecacheux, J.; Colas, F.; Laques, P. (1993). "Ground-based observations of Saturn's north polar SPOT and hexagon". Science 260 (5106): 329. doi:10.1126/science.260.5106.329. PMID 17838249. - Jonathan I. Lunine (1993). "The Atmospheres of Uranus and Neptune". Annual Review of Astronomy and Astrophysics 31: 217–63. doi:10.1146/annurev.aa.31.090193.001245. - Nowak, Gary T. (2006). Uranus: the Threshold Planet of 2006. Retrieved June 14, 2007. - Smith, B. A.; Soderblom, L. A.; Beebe, A.; Bliss, D.; Boyce, J. M.; Brahic, A.; Briggs, G. A.; Brown, R. H. et al (4 July 1986). "Voyager 2 in the Uranian System: Imaging Science Results". Science 233 (4759): 43–64. Bibcode 1986Sci...233...43S. doi:10.1126/science.233.4759.43. PMID 17812889 - Emily Lakdawalla (2004). No Longer Boring: 'Fireworks' and Other Surprises at Uranus Spotted Through Adaptive Optics. Archived from the original on May 25, 2006. Retrieved June 13, 2007. - Sromovsky, L. A.; Fry, P. M. (December 2005). "Dynamics of cloud features on Uranus". Icarus 179 (2): 459–484. Bibcode 2005Icar..179..459S. doi:10.1016/j.icarus.2005.07.022. - Karkoschka, Erich (May 2001). "Uranus' Apparent Seasonal Variability in 25 HST Filters". Icarus 151 (1): 84–92. Bibcode 2001Icar..151...84K. doi:10.1006/icar.2001.6599. - Hammel, H. B.; de Pater, I.; Gibbard, S. G.; Lockwood, G. W.; Rages, K. (May 2005). "New cloud activity on Uranus in 2004: First detection of a southern feature at 2.2 µm". Icarus 175 (1): 284–288. Bibcode 2005Icar..175..284H. doi:10.1016/j.icarus.2004.11.016. - L. Sromovsky, Fry, P., Hammel, H., Rages, K. Hubble Discovers a Dark Cloud in the Atmosphere of Uranus (PDF). physorg.com. Retrieved August 22, 2007.CS1 maint: multiple names: authors list (link) - H.B. Hammel and G.W. Lockwood (2007). "Long-term atmospheric variability on Uranus and Neptune". Icarus 186: 291–301. doi:10.1016/j.icarus.2006.08.027. - Devitt, Terry (2004). Keck zooms in on the weird weather of Uranus. University of Wisconsin-Madison. Retrieved December 24, 2006. - Rages, K. A.; Hammel, H. B.; Friedson, A. J. (11 September 2004). "Evidence for temporal change at Uranus' south pole". Icarus 172 (2): 548–554. Bibcode 2004Icar..172..548R. doi:10.1016/j.icarus.2004.07.009 - Hammel, H. B.; de Pater, I.; Gibbard, S. G.; Lockwood, G. W.; Rages, K. (June 2005). "Uranus in 2003: Zonal winds, banded structure, and discrete features" (PDF). Icarus 175 (2): 534–545. Bibcode 2005Icar..175..534H. doi:10.1016/j.icarus.2004.11.012 - Hanel, R.; Conrath, B.; Flasar, F. M.; Kunde, V.; Maguire, W.; Pearl, J.; Pirraglia, J.; Samuelson, R. et al (4 July 1986). "Infrared Observations of the Uranian System". Science 233 (4759): 70–74. Bibcode 1986Sci...233...70H. doi:10.1126/science.233.4759.70. PMID 17812891. - Hammel, H. B.; Rages, K.; Lockwood, G. W.; Karkoschka, E.; de Pater, I. (October 2001). "New Measurements of the Winds of Uranus". Icarus 153 (2): 229–235. Bibcode 2001Icar..153..229H. 
doi:10.1006/icar.2001.6689. - Lavoie, Sue (8 January 1998). PIA01142: Neptune Scooter. NASA. Retrieved 26 March 2006. - Lavoie, Sue (16 February 2000). PIA02245: Neptune's blue-green atmosphere. NASA JPL. Retrieved 28 February 2008. - Hammel, H. B.; Lockwood, G. W.; Mills, J. R.; Barnet, C. D. (1995). "Hubble Space Telescope Imaging of Neptune's Cloud Structure in 1994". Science 268 (5218): 1740–1742. doi:10.1126/science.268.5218.1740. PMID 17834994. - Burgess (1991):64–70. - Lavoie, Sue (29 January 1996). PIA00064: Neptune's Dark Spot (D2) at High Resolution. NASA JPL. Retrieved 28 February 2008. - Sromovsky, L. A.; Fry, P. M.; Dowling, T. E.; Baines, K. H. (2000). "The unusual dynamics of new dark spots on Neptune". Bulletin of the American Astronomical Society 32: 1005. - Max, C. E.; Macintosh, B. A.; Gibbard, S. G.; Gavel, D. T.; Roe, H. G.; de Pater, I.; Andrea M. Ghez; Acton, D. S.; Lai, O.; Stomski, P.; Wizinowich, P. L. (2003). "Cloud Structures on Neptune Observed with Keck Telescope Adaptive Optics". The Astronomical Journal, 125 (1): 364–375. doi:10.1086/344943. - Ray Villard and Terry Devitt (15 May 2003). Brighter Neptune Suggests A Planetary Change Of Seasons. Hubble News Center. Retrieved 26 February 2008. - Suomi, V. E.; Limaye, S. S.; Johnson, D. R. (1991). "High Winds of Neptune: A possible mechanism". Science 251 (4996): 929–932. doi:10.1126/science.251.4996.929. PMID 17847386. - Hammel, H. B.; Beebe, R. F.; De Jong, E. M.; Hansen, C. J.; Howell, C. D.; Ingersoll, A. P.; Johnson, T. V.; Limaye, S. S.; Magalhaes, J. A.; Pollack, J. B.; Sromovsky, L. A.; Suomi, V. E.; Swift, C. E. (1989). "Neptune's wind speeds obtained by tracking clouds in Voyager 2 images". Science 245 (4924): 1367–1369. doi:10.1126/science.245.4924.1367. PMID 17798743. - Elkins-Tanton, Linda T. (2006). Uranus, Neptune, Pluto, and the Outer Solar System. New York: Chelsea House. ISBN 978-0-8160-5197-7. - S.R. Federman, David L. Lambert (May 2002). [www.sciencedirect.com/science/article/pii/S0368204802000178 "The need for accurate oscillator strengths and cross sections in studies of diffuse interstellar clouds and cometary atmospheres"]. Journal of Electron Spectroscopy and Related Phenomena 123 (2-3): 161-71. www.sciencedirect.com/science/article/pii/S0368204802000178. Retrieved 2013-01-20. - Alfred Vidal-Madjar, Claudine Laurent, and Paul Bruston (15 July 1978). "Is the solar system entering a nearby interstellar cloud". The Astrophysical Journal 223 (07): 589-600. doi:10.1086/156294. http://adsabs.harvard.edu/abs/1978ApJ...223..589V. Retrieved 2015-09-30. - Roth, K. C.; Meyer, D. M.; Hawkins, I. (1993). "Interstellar Cyanogen and the Temperature of the Cosmic Microwave Background Radiation". The Astrophysical Journal 413 (2): L67–L71. doi:10.1086/186961. http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1993ApJ...413L..67R&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf. - Marshallsumter (April 15, 2013). X-ray astronomy. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 2013-05-11. - N. H. Dieter and W. J. Welch and J. D. Romney (1 June 1976). "A very small interstellar neutral hydrogen cloud observed with VLBI techniques". The Astrophysical Journal 206 (06): L113-5. doi:10.1086/182145. http://adsabs.harvard.edu/abs/1976ApJ...206L.113D. Retrieved 2015-10-05. - Anne's Astronomy News (31 May 2012). There’s More Star-Stuff Out There But It’s Not Dark Matter. .com: BeforeItsNews. Retrieved 2015-10-05. - Robert Braun (31 May 2012). 
There’s More Star-Stuff Out There But It’s Not Dark Matter. .com: BeforeItsNews. Retrieved 2015-10-05. - L. Spitzer, M. P. Savedoff (1950). "The Temperature of Interstellar Matter. III". The Astrophysical Journal 111: 593. doi:10.1086/145303. - Savedoff MP, Greene J (November 1955). "Expanding H II region". Astrophysical Journal 122 (11): 477–87. doi:10.1086/146109. - Robert Morrison and Dan McCammon (July 1983). "Interstellar photoelectric absorption cross sections, 0.03-10 keV". The Astrophysical Journal 270 (7): 119-22. - B. M. Gaensler (2004). A wind bubble around a magnetar. Australia Telescope National Facility. Retrieved 2015-10-06. - potw1106a (7 February 2011). Fiery young stars wreak havoc in stellar nursery. Baltimore, Maryland: Space Telescope. Retrieved 2015-10-06. - H. M. Tovmassian and E. T. Shahbazian (June 1973). "Hydrogen Content of Young Stellar Clusters II.* Clusters NGC 2175, 2264, and 2362". Australian Journal of Physics 26 (6): 837-42. doi:10.1071/PH730837. http://www.publish.csiro.au/?act=view_file&file_id=PH730837.pdf. Retrieved 2015-10-06. - SemperBlotto (20 April 2006). molecular cloud. San Francisco, California: Wikimedia Foundation, Inc. Retrieved 2015-09-30. - eso0102 (10 January 2001). How to Become a Star. European Southern Observatory. Retrieved 2015-09-30. - Robert Nemiroff (MTU) & Jerry Bonnell (USRA) (June 30, 2003). Disappearing Clouds in Carina. Goddard Space Flight Center, Greenbelt, Maryland, USA: NASA. Retrieved 2012-09-05. - Craig Kulesa. Overview: Molecular Astrophysics and Star Formation. Retrieved September 7, 2005. - K. Lehtinen (January 1997). "Spectroscopic evidence of mass infall towards an embedded infrared source in the globule DC 303.8-14.2". Astronomy and Astrophysics 317 (01): L5-9. http://adsabs.harvard.edu/full/1997A%26A...317L...5L. Retrieved 2015-09-30. - P. W. J. L. Brand, T. G. Hawarden, A. J. Longmore, P. M. Williams and J. A. R. Caldwell (1983). "Cometary Globule 1". Monthly Notices of the Royal Astronomical Society 203 (1): 215-22. doi:10.1093/mnras/203.1.215. http://mnras.oxfordjournals.org/content/203/1/215.short. Retrieved 2015-09-30. - David Darling (2007). VY Canis Majoris. Encyclopedia of Science. Retrieved 7 October 2015. - Hugo van Woerden, Ulrich J. Schwarz, Reynier F. Peletier, Bart P. Wakker and Peter M. W. Kalberla (8 July 1999). "A confirmed location in the Galactic halo for the high-velocity cloud 'chain A'". Nature 400 (6740): 138-41. http://www.nature.com/nature/journal/v400/n6740/abs/400138a0.html. Retrieved 2015-10-03. - Felix J. Lockman (11 January 2008). Massive Gas Cloud Speeding Toward Collision With Milky Way. National Radio Astronomy Observatory (NRAO). Retrieved 2015-10-03. - Dave Finley (11 January 2008). Massive Gas Cloud Speeding Toward Collision With Milky Way. National Radio Astronomy Observatory (NRAO). Retrieved 2015-10-03. - See, e.g., Table 1 and the Appendix of Murray, N. (2011). "Star Formation Efficiencies and Lifetimes of Giant Molecular Clouds in the Milky Way". The Astrophysical Journal 729 (2): 133. doi:10.1088/0004-637X/729/2/133. - J. P. Williams, L. Blitz, C. F. McKee (2000). The Structure and Evolution of Molecular Clouds: from Clumps to Cores to the IMF, In: Protostars and Planets IV. Tucson: University of Arizona Press. p. 97.CS1 maint: multiple names: authors list (link) - Di Francesco, J.; et al. (2006). An Observational Perspective of Low-Mass Dense Cores I: Internal Physical and Chemical Properties, In: Protostars and Planets V. Explicit use of et al. in: - Grenier (2004). 
The Gould Belt, star formation, and the local interstellar medium, In: The Young Universe. - Sagittarius B2 and its Line of Sight - N. A. Sharp (28 December 1994). The Horsehead Nebula. Kitt Peak, Arizona USA: National Optical Astronomy Observatory (NOAO). Retrieved 2015-09-25. - Patrick Palmer, B. Zuckerman, David Buhl, and Lewis E. Snyder (June 1969). "Formaldehyde Absorption in Dark Nebulae". The Astrophysical Journal 156 (6): L147-50. doi:10.1086/180368. - Figueiredo N, Villela T, Jayanthi UB, Wuensche CA, Neri JACF, Cesta RC (1990). "Gamma-ray observations of SN1987A". Rev Mex Astron Astrofis. 21: 459–62. - News Release Number: STScI-2001-34 (December 19, 2001). Wallpaper: The Ghost-Head Nebula (NGC 2080). NASA and the Hubble Space Telescope. Retrieved 2012-07-21. - S. A. Drake. A Brief History of High-Energy Astronomy: 1960–1964. - F. A. Harrison, Steven Boggs, Aleksey E. Bolotnikov, Finn E. Christensen, Walter R. Cook III, William W. Craig, Charles J. Hailey, Mario A. Jimenez-Garate, Peter H. Mao (2000). Joachim E. Truemper, Bernd Aschenbach. ed. [proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=900102 "Development of the High-Energy Focusing Telescope (HEFT) balloon experiment"]. Proc SPIE. X-Ray Optics, Instruments, and Missions III 4012: 693. doi:10.1117/12.391608. proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=900102. - James F. Steiner, Rubens C. Reis, Andrew C. Fabian, Ronald A. Remillard, Jeffrey E. McClintock, Lijun Gou, Ryan Cooke, Laura W. Brenneman, Jeremy S. Sanders (December 11, 2012). "A broad iron line in LMC X‐1". Monthly Notices of the Royal Astronomical Society 427 (3): 2552-61. doi:10.1111/j.1365-2966.2012.22128.x. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2966.2012.22128.x/full. Retrieved 2013-07-10. - Julia Zachary (9 January 2017). How New Hubble Telescope Views Could Aid Interstellar Travel. Space.com. Retrieved 2017-01-11. - Charles Q. Choi (9 January 2017). How New Hubble Telescope Views Could Aid Interstellar Travel. Space.com. Retrieved 2017-01-11.
History of Virginia

The history of Virginia begins with documentation by the first Spanish explorers to reach the area in the 1500s, when it was occupied chiefly by Algonquian, Iroquoian, and Siouan peoples. After a failed English attempt to settle Virginia in the 1580s by Sir Walter Raleigh, permanent European settlement began in Virginia with Jamestown in 1607. The colony was a commercial venture sponsored by London businessmen, who sent individual men to Virginia to look for gold. They did not send families. There was no gold, and the colonists could barely feed themselves. The colony nearly failed until tobacco emerged as a profitable export. It was grown on plantations, using primarily indentured servants for the intensive hand labor involved. After 1662, the colony turned black slavery into a hereditary racial caste. By 1750, the primary cultivators of the cash crop were West African slaves. While the plantations thrived because of the high demand for tobacco, most white settlers raised their families on subsistence farms. Warfare with the Virginia Indian nations had been a factor in the 17th century; after 1700 there was continued conflict with natives east of the Alleghenies, especially in the French and Indian War (1754-1763), when the tribes were allied with the French. The westernmost counties, including Wise and Washington, only became safe with the death of Bob Benge in 1794.

The Virginia Colony became the wealthiest and most populated British colony in North America, with an elected General Assembly. The colony was dominated by rich planters who were also in control of the established Anglican Church. Baptist and Methodist preachers brought the Great Awakening, welcoming black members and leading to many evangelical and racially integrated churches. Virginia planters had a major role in gaining independence and in the development of the democratic-republican ideals of the United States. They were important in the Declaration of Independence, in writing the Constitution at the Constitutional Convention (where they preserved protection for the slave trade), and in establishing the Bill of Rights. The state of Kentucky separated from Virginia in 1792. Four of the first five presidents were Virginians: George Washington, the "Father of his country"; and after 1800, "The Virginia Dynasty" of presidents for 24 years: Thomas Jefferson, James Madison, and James Monroe.

During the first half of the 19th century, tobacco prices declined and tobacco lands lost much of their fertility. Planters adopted mixed farming, with an emphasis on wheat and livestock, which required less labor. The Constitutions of 1830 and 1850 expanded suffrage but did not equalize white male apportionment statewide. The population grew slowly from 700,000 in 1790, to 1 million in 1830, to 1.2 million in 1860.

Virginia was the largest state joining the Confederate States of America in 1861, and it became the major theater of war in the American Civil War. Unionists in western Virginia created the separate state of West Virginia. Virginia's economy was devastated in the war and disrupted in Reconstruction, when it was administered as Military District Number One. The first signs of recovery were seen in tobacco cultivation and the related cigarette industry, followed by coal mining and increasing industrialization. In 1883 conservative white Democrats regained power in the state government, ending Reconstruction and implementing Jim Crow laws.
The 1902 Constitution limited the number of white voters below 19th-century levels and effectively disfranchised blacks until federal civil rights legislation of the mid-1960s. From the 1920s to the 1960s, the state was dominated by the Byrd Organization, a Democratic party machine anchored in the rural counties, but its hold was broken by its failed Massive Resistance to school integration. After World War II, the state's economy thrived, with a new industrial and urban base. A statewide community college system was developed. The first U.S. African-American governor since Reconstruction was Virginia's Douglas Wilder in 1990. Since the late 20th century, the contemporary economy has become more diversified in high-tech industries and defense-related businesses. Virginia's changing demography makes for closely divided voting in national elections, but it is still generally conservative in state politics.

- 1 Prehistory
- 2 Early European exploration
- 3 Royal colony
- 4 Religion
- 5 American Revolution
- 6 Early Republic and antebellum periods
- 7 Civil War
- 8 Reconstruction
- 9 Gilded Age
- 10 Progressive Era
- 11 Interwar
- 12 WWII and Modern era
- 13 Contemporary commonwealth
- 14 Virginia history on stamps
- 15 See also
- 16 References
- 17 External links

For thousands of years before the arrival of the English, various societies of indigenous peoples inhabited the portion of the New World later designated by the English as "Virginia". Archaeological and historical research by anthropologist Helen C. Rountree and others has established 3,000 years of settlement in much of the Tidewater. Even so, a historical marker dedicated in 2015 states that recent archaeological work at Pocahontas Island has revealed prehistoric habitation dating to about 6500 BCE. At the end of the 16th century, native inhabitants of what is now Virginia belonged to three major groups, classified by modern anthropologists chiefly on the basis of language families. The largest group, the Algonquian, numbered over 10,000 and occupied most of the coastal area up to the fall line. Groups to the interior were the Iroquoian (numbering 2,500) and the Siouan. Tribes included the Algonquian Chesepian, Chickahominy, Doeg, Mattaponi, Nansemond, Pamunkey, Pohick, Powhatan, and Rappahannock; the Siouan Monacan and Saponi; and the Iroquoian-speaking Cherokee, Meherrin, Nottoway, and Tuscarora. When the first English settlers arrived at Jamestown in 1607, Algonquian tribes controlled most of Virginia east of the fall line. Nearly all were united in what has been historically called the Powhatan Confederacy. Rountree has noted that "empire" more accurately describes their political structure. In the late 16th and early 17th centuries, a chief named Wahunsunacock created this powerful empire by conquering or affiliating with approximately thirty tribes whose territories covered much of what is now eastern Virginia. Known as the Powhatan, or paramount chief, he called this area Tenakomakah ("densely inhabited Land"). The empire was advantageous to some tribes, who were periodically threatened by other groups, such as the Monacan.

Early European exploration

After their discovery of the New World in the 15th century, European states began trying to establish New World colonies. England, the Dutch Republic, France, Portugal, and Spain were the most active.
In 1540, a party led by two Spaniards, Juan de Villalobos and Francisco de Silvera, sent by Hernando de Soto, entered what is now Lee County in search of gold. In the spring of 1567, Hernando Moyano de Morales, a sergeant of Spanish explorer Juan Pardo, led a group of soldiers northward from Fort San Juan in Joara, a native town in what is now western North Carolina, to attack and destroy the Chisca village of Maniatique near present-day Saltville. The attack near Saltville was the first recorded battle in Virginia history. Another Spanish party, captained by Antonio Velázquez in the caravel Santa Catalina, explored the lower Chesapeake Bay region of Virginia in mid-1561 under the orders of Ángel de Villafañe. During this voyage, two Kiskiack or Paspahegh youths, including Don Luis, were taken back to Spain. In 1566, an expedition sent from Spanish Florida by Pedro Menéndez de Avilés reached the Delmarva Peninsula. The expedition consisted of two Dominican friars, thirty soldiers and Don Luis, in a failed effort to set up a Spanish colony in the Chesapeake, believing it to be an opening to the fabled Northwest Passage. In 1570, Spanish Jesuits established the Ajacán Mission on the lower peninsula. However, in 1571 it was destroyed by Don Luis and a party of his indigenous allies. In August 1572, Pedro Menéndez de Avilés arrived from St. Augustine with thirty soldiers and sailors to take revenge for the massacre of the Jesuits, and hanged approximately 20 natives. In 1573, the governor of Spanish Florida, Pedro Menéndez de Márquez, conducted further exploration of the Chesapeake. In the 1580s, captain Vicente González led several voyages into the Chesapeake in search of English settlements in the area. In 1609, Spanish Florida governor Pedro de Ibarra sent Francisco Fernández de Écija from St. Augustine to survey the activities of the Jamestown colonists, but Spain never again attempted a colony after the failure of the Ajacán Mission.

The Roanoke Colony was the first English colony in the New World. It was founded at Roanoke Island in what was then Virginia, now part of Dare County, North Carolina. Between 1584 and 1587, there were two major groups of settlers sponsored by Sir Walter Raleigh who attempted to establish a permanent settlement at Roanoke Island, and each failed. The final group disappeared completely after supplies from England were delayed three years by a war with Spain. Because they disappeared, they were called "The Lost Colony." The name Virginia came from information gathered by the Raleigh-sponsored English explorations along what is now the North Carolina coast. Philip Amadas and Arthur Barlowe reported that a regional "king" named Wingina ruled a land of Wingandacoa. Queen Elizabeth modified the name to "Virginia", perhaps in part noting her status as the "Virgin Queen." Though the word is Latinate, it stands as the oldest English-language place-name in the United States. On the second voyage, Raleigh discovered that, while the chief of the Secotans was indeed called Wingina, the expression wingandacoa, heard by the English upon arrival, actually meant "You wear good clothes" in Carolina Algonquian, and was not the native name of the country, as previously misunderstood.

Virginia Company of London

After the death of Queen Elizabeth I, in 1603 King James I assumed the throne of England.
After years of war, England was strapped for funds, so he granted responsibility for England's New World colonization to the Virginia Company, which became incorporated as a joint stock company by a proprietary charter drawn up in 1606. There were two competing branches of the Virginia Company, and each hoped to establish a colony in Virginia in order to exploit gold (which the region did not actually have), to establish a base of support for English privateering against Spanish ships, and to spread Protestantism to the New World in competition with Spain's spread of Catholicism. Within the Virginia Company, the Plymouth Company branch was assigned a northern portion of the area known as Virginia, and the London Company the area to the south. In December 1606, the London Company dispatched a group of 104 colonists in three ships: the Susan Constant, Godspeed, and Discovery, under the command of Captain Christopher Newport. After a long, rough voyage of 144 days, the colonists finally arrived in Virginia on April 26, 1607 at the entrance to the Chesapeake Bay. At Cape Henry, they went ashore, erected a cross, and did a small amount of exploring, an event which came to be called the "First Landing." Under orders from London to seek a more inland location safe from Spanish raids, they explored the Hampton Roads area and sailed up the newly christened James River to the fall line at what would later become the cities of Richmond and Manchester. After weeks of exploration, the colonists selected a location and founded Jamestown on May 14, 1607. It was named in honor of King James I (as was the river). However, while the location at Jamestown Island was favorable for defense against foreign ships, the low and marshy terrain was harsh and inhospitable for a settlement. It lacked drinking water, access to game for hunting, and much space for farming. Although it seemed favorable that the site was not inhabited by Native Americans, within a short time the colonists were attacked by members of the local Paspahegh tribe.

The colonists arrived ill-prepared to become self-sufficient. They had planned on trading with the Native Americans for food, were dependent upon periodic supplies from England, and had planned to spend some of their time seeking gold. Leaving the Discovery behind for their use, Captain Newport returned to England with the Susan Constant and the Godspeed, and came back twice during 1608 with the First Supply and Second Supply missions. Trading and relations with the Native Americans were tenuous at best, and many of the colonists died from disease, starvation, and conflicts with the natives. After several failed leaders, Captain John Smith took charge of the settlement, and many credit him with sustaining the colony during its first years, as he had some success in trading for food and leading the discouraged colonists. After Smith's return to England in August 1609, there was a long delay in the scheduled arrival of supplies. During the winter of 1609/10 and continuing into the spring and early summer, no more ships arrived. The colonists faced what became known as the "starving time". When the new governor, Sir Thomas Gates, finally arrived at Jamestown on May 23, 1610, along with other survivors of the wreck of the Sea Venture (a wreck that resulted in Bermuda being added to the territory of Virginia), he discovered that over 80% of the 500 colonists had died; many of the survivors were sick.
Back in England, the Virginia Company was reorganized under its Second Charter, ratified on May 23, 1609, which gave most leadership authority over the colony to the governor, the newly appointed Thomas West, 3rd Baron De La Warr. In June 1610, he arrived with 150 men and ample supplies. De La Warr began the First Anglo-Powhatan War against the natives. Under his leadership, Samuel Argall kidnapped Pocahontas, daughter of the Powhatan chief, and held her at Henricus. The economy of the Colony was another problem. Gold had never been found, and efforts to introduce profitable industries in the colony had all failed until John Rolfe introduced his two foreign types of tobacco: Orinoco and Sweet Scented. These produced a better crop than the local variety, and with the first shipment to England in 1612, the customers enjoyed the flavor, making tobacco a cash crop that established Virginia's economic viability. The First Anglo-Powhatan War ended when Rolfe married Pocahontas in 1614. George Yeardley took over as Governor of Virginia in 1619. He ended one-man rule and created a representative system of government with the General Assembly, the first elected legislative assembly in the New World. Also in 1619, the Virginia Company sent 90 single women as potential wives for the male colonists to help populate the settlement. That same year the colony acquired a group of "twenty and odd" Angolans, brought by two English privateers. They were probably the first Africans in the colony. They, along with many European indentured servants, helped to expand the growing tobacco industry, which was already the colony's primary product. Although these Africans were treated as indentured servants, this marked the beginning of America's history of slavery. Major importation of enslaved Africans by European slave traders did not take place until much later in the century.

In some areas, individual rather than communal land ownership or leaseholds were established, providing families with motivation to increase production, improve standards of living, and gain wealth. Perhaps nowhere was this more progressive than at Sir Thomas Dale's ill-fated Henricus, a westerly-lying development located along the south bank of the James River, where natives were also to be provided an education at the Colony's first college. About 6 miles (9.7 km) south of the falls at present-day Richmond, in Henrico Cittie, the Falling Creek Ironworks was established near the mouth of Falling Creek, using local ore deposits to make iron. It was the first ironworks in North America. Virginians were intensely individualistic at this point, which weakened the small new communities. According to Breen (1979), their horizon was limited by the present or near future. They believed that the environment could and should be forced to yield quick financial returns. Thus everyone looked out for himself at the expense of cooperative ventures. Farms were scattered, and few villages or towns were formed. This extreme individualism led to the failure of the settlers to provide for their own defense against the Indians, resulting in two massacres.

Conflict with natives

English settlers soon came into conflict with the natives. Despite some successful interaction, issues of ownership and control of land and other resources, and trust between the peoples, became areas of conflict. Virginia has drought conditions an average of every three years. The colonists did not understand that the natives were ill-prepared to feed them during hard times.
In the years after 1612, the colonists cleared land to farm export tobacco, their crucial cash crop. As tobacco exhausted the soil, the settlers continually needed to clear more land for replacement. This reduced the wooded land which Native Americans depended on for hunting to supplement their food crops. As more colonists arrived, they wanted more land. The tribes tried to fight the encroachment by the colonists. Major conflicts took place in the Indian massacre of 1622 and the Second Anglo-Powhatan War, both under the leadership of the late Chief Powhatan's younger brother, Chief Opechancanough. By the mid-17th century, the Powhatan and allied tribes were in serious decline in population, due in large part to epidemics of newly introduced infectious diseases, such as smallpox and measles, to which they had no natural immunity. The European colonists had expanded their territory so that they controlled virtually all the land east of the fall line on the James River. Fifty years earlier, this territory had been the empire of the mighty Powhatan Confederacy. Surviving members of many tribes assimilated into the general population of the colony. Some retained small communities with more traditional identity and heritage. In the 21st century, the Pamunkey and Mattaponi are the only two tribes to maintain reservations originally assigned under the English. As of 2010, the state had recognized eleven Virginia Indian tribes. Others have renewed interest in seeking state and Federal recognition since the celebration of the 400th anniversary of Jamestown in 2007. State celebrations gave Native American tribes prominent formal roles to showcase their contributions to the state.

While the developments of 1619 and continued growth in the several following years were seen as favorable by the English, many aspects, especially the continued need for more land to grow tobacco, were the source of increasing concern to the Native Americans most affected, the Powhatan. By this time, the remaining Powhatan Empire was led by Chief Opechancanough, chief of the Pamunkey and brother of Chief Powhatan. He had earned a reputation as a fierce warrior under his brother's chiefdom. Soon, he gave up on hopes of diplomacy and resolved to eradicate the English colonists. On March 22, 1622, the Powhatan killed about 400 colonists in the Indian Massacre of 1622. With coordinated attacks, they struck almost all the English settlements along the James River, on both shores, from Newport News Point on the east at Hampton Roads all the way west upriver to Falling Creek, a few miles above Henricus and John Rolfe's plantation, Varina Farms. At Jamestown, a warning by an Indian boy named Chanco to his employer, Richard Pace, helped reduce total deaths. Pace secured his plantation and rowed across the river during the night to alert Jamestown, which allowed colonists some defensive preparation. They had no time to warn outposts, which suffered deaths and the taking of captives at almost every location. Several entire communities were essentially wiped out, including Henricus and Wolstenholme Towne at Martin's Hundred. At the Falling Creek Ironworks, which had been seen as promising for the Colony, two women and three children were among the 27 killed, leaving only two colonists alive. The facilities were destroyed. Despite the losses, two-thirds of the colonists survived; after withdrawing to Jamestown, many returned to the outlying plantations, although some were abandoned.
The English carried out reprisals against the Powhatan, and there were skirmishes and attacks for about a year before the colonists and Powhatan struck a truce. The colonists invited the chiefs and warriors to Jamestown, where they proposed a toast of liquor. Dr. John Potts and some of the Jamestown leadership had poisoned the natives' share of the liquor, which killed about 200 men. Colonists killed another 50 Indians by hand. The period between the coup of 1622 and another Powhatan attack on English colonists along the James River (see Jamestown) in 1644 marked a turning point in the relations between the Powhatan and the English. In the early period, each side believed it was operating from a position of power; by the Treaty of 1646, the colonists had gained the balance of power and had established control between the York and Blackwater Rivers. In 1624, the Virginia Company's charter was revoked and the colony was transferred to royal authority as a crown colony, but the elected representatives in Jamestown continued to exercise a fair amount of power. Under royal authority, the colony began to expand to the north and west with additional settlements. In 1634, a new system of local government was created in the Virginia Colony by order of the King of England. Eight shires were designated, each with its own local officers; these shires were renamed as counties only a few years later.

Governor Berkeley and the English Civil War

The first significant attempts at exploring the Trans-Allegheny region occurred under the administration of Governor William Berkeley. Efforts to explore farther into Virginia were hampered in 1644 when about 500 colonists were killed in another Indian massacre led, once again, by Opechancanough. Berkeley is credited with efforts to develop other sources of income for the colony besides tobacco, such as the cultivation of mulberry trees for silkworms and other crops at his large Green Spring Plantation. The colonists defined the 1644 coup as an "uprising". Chief Opechancanough expected the outcome would reflect what he considered the morally correct position: that the colonists were violating their pledges to the Powhatan. During the 1644 event, Chief Opechancanough was captured. While imprisoned, he was murdered by one of his guards. After the death of Opechancanough, and following the repeated colonial attacks in 1644 and 1645, the remaining Powhatan tribes had little alternative but to accede to the demands of the settlers. Most Virginia colonists were loyal to the crown (Charles I) during the English Civil War, but in 1652, Oliver Cromwell sent a force to remove and replace Gov. Berkeley with Governor Richard Bennett, who was loyal to the Commonwealth of England. This governor was a moderate Puritan who allowed the local legislature to exercise most controlling authority, and he spent much of his time directing affairs in the neighboring Maryland Colony. Bennett was followed by two more "Cromwellian" governors, Edward Digges and Samuel Matthews, although in fact none of these three men were technically appointees; they were selected by the House of Burgesses, which was really in control of the colony during these years. Many royalists fled to Virginia after their defeat in the English Civil War. Some intermarried with existing plantation families to establish influential families in Virginia such as the Washingtons, Randolphs, Carters and Lees. However, most 17th-century immigrants were indentured servants, merchants or artisans.
After the Restoration, in recognition of Virginia's loyalty to the crown, King Charles II of England bestowed on Virginia the nickname "The Old Dominion", which it still bears today. Governor Berkeley, who remained popular after his first administration, returned to the governorship at the end of Commonwealth rule. However, Berkeley's second administration was characterized by many problems. Disease, hurricanes, Indian hostilities, and economic difficulties all plagued Virginia at this time. Berkeley established autocratic authority over the colony. To protect this power, he refused to hold new legislative elections for 14 years, preserving a House of Burgesses that supported him. He only agreed to new elections when rebellion became a serious threat. Berkeley finally did face a rebellion in 1676. Indians had begun attacking encroaching settlers as they expanded to the north and west. Serious fighting broke out when settlers responded to violence with a counter-attack against the wrong tribe, which further extended the violence. Berkeley did not assist the settlers in their fight. Many settlers and historians believe Berkeley's refusal to fight the Indians stemmed from his investments in the fur trade. Large-scale fighting would have cut off the Indian suppliers Berkeley's investment relied on. Nathaniel Bacon organized his own militia of settlers who retaliated against the Indians. Bacon became very popular as the primary opponent of Berkeley, not only on the issue of Indians, but on other issues as well. Berkeley condemned Bacon as a rebel, but pardoned him after Bacon won a seat in the House of Burgesses and accepted it peacefully. After a lack of reform, Bacon rebelled outright, captured Jamestown, and took control of the colony for several months. The incident became known as Bacon's Rebellion. Berkeley returned to power with the help of the English militia. Bacon burned Jamestown before abandoning it and continued his rebellion, but died of disease. Berkeley severely crushed the remaining rebels. In response to Berkeley's harsh repression of the rebels, the English government removed him from office. After the burning of Jamestown, the capital was temporarily moved to Middle Plantation, located on the high ground of the Virginia Peninsula equidistant from the James and York Rivers.

Building of Williamsburg

Local leaders had long desired a school of higher education, for the sons of planters and for educating the Indians. An earlier attempt to establish a permanent university at Henricus failed after the Indian Massacre of 1622 wiped out the entire settlement. Finally, seven decades later, with encouragement from the Colony's House of Burgesses and other prominent individuals, Reverend Dr. James Blair, the colony's top religious leader, prepared a plan. Blair went to England and in 1693 obtained a charter from the Protestant monarchs King William III and Queen Mary II of England, who had deposed the Catholic James II of England in 1688 during the Glorious Revolution. The college was named the College of William and Mary in honor of the two monarchs. The rebuilt statehouse in Jamestown burned again in 1698. After that fire, at the suggestion of college students, the colonial capital was permanently moved to nearby Middle Plantation again, and the town was renamed Williamsburg in honor of the king. Plans were made to construct a capitol building and lay out the new city according to the survey of Theodorick Bland.
As the English increasingly used tobacco products, tobacco in the American colonies became a significant economic force, especially in the tidewater region surrounding the Chesapeake Bay. Vast plantations were built along the rivers of Virginia, and social and economic systems developed to grow and distribute this cash crop. Some elements of this system included the importation and employment of slaves to grow crops. Planters would then fill large hogsheads with tobacco and convey them to inspection warehouses. In 1730, the Virginia House of Burgesses standardized and improved the quality of exported tobacco by passing the Tobacco Inspection Act of 1730, which required inspectors to grade tobacco at 40 specified locations. In terms of the white population, the top five percent or so were planters who possessed growing wealth and increasing political power and social prestige. They controlled the local Anglican church, choosing ministers, handling church property, and disbursing local charity. They sought elected and appointed offices. About 60 percent of white Virginians were part of a broad middle class that owned substantial farms. By the second generation, death rates from malaria and other local diseases had declined so much that a stable family structure was possible. The bottom third owned no land and verged on poverty. Many were recent arrivals, or recently released from indentured servitude. Social stratification was most severe in the Northern Neck, where the Fairfax family had been given a proprietorship. In some districts, 70 percent of the land was owned by a handful of families, and three-fourths of the whites had no land at all. In the frontier districts, large numbers of Irish and German Protestants had settled, often moving down from Pennsylvania. Tobacco was not important there; farmers focused on hemp, grain, cattle, and horses. Entrepreneurs had begun to mine and smelt the local iron ores.

Sports occupied a great deal of attention at every social level, starting at the top. In England hunting was sharply restricted to landowners and enforced by armed gamekeepers. In America, game was more than plentiful. Everyone, including servants and slaves, could and did hunt. Poor men with a good rifle aim won praise; rich gentlemen who were off target won ridicule. In 1691 Sir Francis Nicholson, the governor, organized competitions for the "better sort of Virginians onely who are Batchelors," and he offered prizes "to be shot for, wrastled, played at backswords, & Run for by Horse and foott." Horse racing was the main event. The typical farmer did not own a horse in the first place, and racing was a matter for gentlemen only, but ordinary farmers were spectators and gamblers. Selected slaves often became skilled horse trainers. Horse racing was especially important for knitting the gentry together. The race was a major public event designed to demonstrate to the world the superior social status of the gentry through expensive breeding, training, boasting, and gambling, and especially through winning the races themselves. Historian Timothy Breen explains that horse racing and high-stakes gambling were essential to maintaining the status of the gentry. When they publicly bet a large sum on their favorite horse, it told the world that competitiveness, individualism, and materialism were the core elements of gentry values.
Historian Edmund Morgan (1975) argues that Virginians in the 1650s, and for the next two centuries, turned to slavery and a racial divide as an alternative to class conflict. "Racism made it possible for white Virginians to develop a devotion to the equality that English republicans had declared to be the soul of liberty." That is, white men became politically much more equal than was possible without a population of low-status slaves. By 1700 the population reached 70,000 and continued to grow rapidly from a high birth rate, low death rate, importation of slaves from the Caribbean, and immigration from Britain and Germany, as well as from Pennsylvania. The climate was mild, and the farm lands were cheap and fertile.

Early to mid-1700s: Westward expansion

In 1716, Governor Alexander Spotswood led the Knights of the Golden Horseshoe Expedition, reaching the top ridge of the Blue Ridge Mountains at Swift Run Gap (elevation 2,365 feet (721 m)). Spotswood promoted Germanna, a settlement of German immigrants brought over for the purpose of iron production, in modern-day Orange County. By the 1730s, the Three Notch'd Road extended from the vicinity of the fall line of the James River at the future site of Richmond westerly to the Shenandoah Valley, crossing the Blue Ridge Mountains at Jarmans Gap. Around this time, Governor William Gooch promoted settlement of the Virginia backcountry as a means to insulate the Virginia colony from Native American and New France settlements in the Ohio Country. In response, a wide variety of settlers traveled southward on the Indian trail later known as the Great Wagon Road along the Shenandoah Valley from Pennsylvania. Many, including German Palatines and Scotch-Irish immigrants, settled along former Indian camps. According to Encyclopedia Virginia, "By 1735 there were as many as 160 families in the backcountry region, and within ten years nearly 10,000 Europeans lived in the Shenandoah Valley." As colonial settlement moved into the piedmont area from the Tidewater/Chesapeake area, there was some uncertainty as to the exact tax boundaries of Virginia land versus the land patent quit-rent rights held by Thomas Fairfax, 6th Lord Fairfax of Cameron in the Northern Neck Proprietary. When Robert "King" Carter died in 1732, Lord Fairfax read about his vast wealth in The Gentleman's Magazine and decided to settle the matter himself by coming to Virginia. Lord Fairfax travelled to Virginia for the first time between 1735 and 1737 to inspect and protect his lands. He employed a young George Washington (Washington's first employment) to survey his lands lying west of the Blue Ridge. Once this legal battle was ironed out, Frederick County, Virginia was founded in 1743, and the "Frederick Town" settlement there received the fourth city charter in Virginia in February 1752; it is now known as Winchester. In the late 1740s and the second half of the 18th century, the British angled for control of the Ohio Country. Virginians Thomas Lee and brothers Lawrence and Augustine Washington organized the Ohio Company to represent the prospecting and trading interests of Virginian investors. In 1749, the British Crown, via the colonial government of Virginia, granted the Ohio Company a great deal of this territory on the condition that it be settled by British colonists. Governor Robert Dinwiddie of Virginia was an investor in the Ohio Company, which stood to lose money if the French held their claim.
To counter the French military presence in Ohio, in October 1753 Dinwiddie ordered the 21-year-old Major George Washington (whose brother was another Ohio Company investor) of the Virginia Regiment to warn the French to leave Virginia territory. Ultimately, many Virginians were caught up in the resulting French and Indian War, which lasted from 1754 to 1763. At the completion of the war, the Royal Proclamation of 1763 forbade all British settlement past a line drawn along the Appalachian Mountains, with the land west of the Proclamation Line known as the Indian Reserve. British colonists and land speculators objected to the proclamation boundary, since the British government had already assigned land grants to them. Many settlements already existed beyond the proclamation line, some of which had been temporarily evacuated during Pontiac's War, and many already-granted land claims had yet to be settled. For example, George Washington and his Virginia soldiers had been granted lands past the boundary. Prominent American colonials joined with the land speculators in Britain to lobby the government to move the line further west. Their efforts were successful, and the boundary line was adjusted in a series of treaties with the Native Americans. In 1768 the Treaty of Fort Stanwix and the Treaty of Hard Labour, followed in 1770 by the Treaty of Lochaber, opened much of what is now Kentucky and West Virginia to British settlement within the Virginia Colony. However, the Northwest Territories north of the Ohio continued to be occupied by native tribes until US forces drove them out in the early decades of the 1800s.

- Further information: Episcopal Diocese of Virginia: History

The Church of England was legally established in the colony in 1619, and the Bishop of London sent in 22 Anglican clergymen by 1624. In practice, establishment meant that local taxes were funneled through the local parish to handle the needs of local government, such as roads and poor relief, in addition to the salary of the minister. There never was a bishop in colonial Virginia, and in practice the local vestry, consisting of gentry laymen, controlled the parish. By the 1740s, the Anglicans had about 70 parish priests around the colony. The stress on personal piety opened the way for the First Great Awakening in the mid-18th century, which pulled people away from the formal rituals of the established church. Especially in the back country, most families had no religious affiliation whatsoever, and their low moral standards were shocking to proper Englishmen. The Baptists, Methodists, Presbyterians and other evangelicals directly challenged these lax moral standards and refused to tolerate them in their ranks. Baptists, German Lutherans and Presbyterians funded their own ministers and favored disestablishment of the Anglican church. The spellbinding preacher Samuel Davies led the Presbyterians and converted hundreds of slaves. By the 1760s Baptists were drawing Virginians, especially poor white farmers, into a new, much more democratic religion. Slaves were welcome at the services and many became Baptists at this time. Methodist missionaries were also active in the late colonial period. Methodists encouraged an end to slavery, and welcomed free blacks and slaves into active roles in the congregations.
The Baptists and Presbyterians were subject to many legal constraints and faced growing persecution; between 1768 and 1774, about half of the Baptist ministers in Virginia were jailed for preaching, despite England's Act of Toleration of 1689, which guaranteed freedom of worship for Protestants. At the start of the Revolution, the Anglican Patriots realized that they needed dissenter support for effective wartime mobilization, so they met most of the dissenters' demands in return for their support of the war effort. Historians have debated the implications of the religious rivalries for the American Revolution. The struggle for religious toleration was played out during the American Revolution, as the Baptists, in alliance with Thomas Jefferson and James Madison, worked successfully to disestablish the Anglican church. After the American victory in the war, the Anglican establishment sought to reintroduce state support for religion. This effort failed when non-Anglicans gave their support to Jefferson's "Bill for Establishing Religious Freedom", which eventually became law in 1786 as the Virginia Statute for Religious Freedom. With freedom of religion the new watchword, the Church of England was disestablished in Virginia. It was rebuilt as the Episcopal Church in the United States, with no connection to Britain.

Revolutionary sentiments first began appearing in Virginia shortly after the French and Indian War ended in 1763. The Virginia legislature had passed the Two-Penny Act to stop clerical salaries from inflating. King George III vetoed the measure, and clergy sued for back salaries. Patrick Henry first came to prominence by arguing in the Parson's Cause case against the veto, which he declared tyrannical. The British government had accumulated a great deal of debt through spending on its wars. To help pay off this debt, Parliament passed the Sugar Act in 1764 and the Stamp Act in 1765. The General Assembly opposed the passage of the Sugar Act on the grounds of no taxation without representation, and in turn passed the "Virginia Resolves" opposing the tax. Governor Francis Fauquier responded by dismissing the Assembly. The Northampton County court overturned the Stamp Act on February 8, 1766. Various political groups, including the Sons of Liberty, met and issued protests against the act. Most notably, Richard Bland published a pamphlet entitled An Enquiry into the Rights of The British Colonies, setting forth the principle that Virginia was a part of the British Empire, not the Kingdom of Great Britain, so it owed allegiance only to the Crown, not to Parliament. The Stamp Act was repealed, but additional taxation from the Revenue Act and the 1769 attempt to transport Bostonian rioters to London for trial incited more protest from Virginia. The Assembly met to consider resolutions condemning the transport of the rioters, but Governor Botetourt, while sympathetic, dissolved the legislature. The Burgesses reconvened in Raleigh Tavern and made an agreement to ban British imports. Britain gave up the attempt to extradite the prisoners and lifted all taxes except the tax on tea in 1770. In 1773, because of a renewed attempt to extradite Americans to Britain, Richard Henry Lee, Thomas Jefferson, Patrick Henry, George Mason, and others in the legislature created a committee of correspondence to deal with problems with Britain. This committee would serve as the foundation for Virginia's role in the American Revolution.
After the House of Burgesses expressed solidarity with the actions in Massachusetts, the Governor, Lord Dunmore, again dissolved the legislature. The first Virginia Convention was held August 1–6 to respond to the growing crisis. The convention approved a boycott of British goods and elected delegates to the Continental Congress. On April 20, 1775, Dunmore ordered the gunpowder removed from the Williamsburg Magazine to a British ship. Patrick Henry led a group of Virginia militia from Hanover in response to Dunmore's order. Carter Braxton negotiated a resolution to the Gunpowder Incident by transferring royal funds as payment for the powder. The incident exacerbated Dunmore's declining popularity. He fled the Governor's Palace to a British ship at Yorktown. On November 7, Dunmore issued a proclamation declaring that Virginia was in a state of rebellion. By this time, George Washington had been appointed head of the American forces by the Continental Congress, and Virginia was under the political leadership of a Committee of Safety formed by the Third Virginia Convention in the governor's absence. On December 9, 1775, Virginia militia moved on the governor's forces at the Battle of Great Bridge, winning a victory in the small action there. Dunmore responded by bombarding Norfolk with his ships on January 1, 1776. After the Battle of Great Bridge, little military conflict took place on Virginia soil for the first part of the American Revolutionary War. Nevertheless, Virginia sent forces to help in the fighting to the north and south, as well as on the frontier in the northwest. The Fifth Virginia Convention met on May 6 and declared Virginia a free and independent state on May 15, 1776. The convention instructed its delegates to introduce a resolution for independence at the Continental Congress. Richard Henry Lee introduced the measure on June 7. While the Congress debated, the Virginia Convention adopted George Mason's Bill of Rights (June 12) and a constitution (June 29) which established an independent commonwealth. Congress approved Lee's proposal on July 2 and approved Jefferson's Declaration of Independence on July 4. The constitution of the Fifth Virginia Convention created a system of government for the state that would last for 54 years, converting the House of Burgesses into a bicameral legislature with both a House of Delegates and a Senate. Patrick Henry served as the first Governor of the Commonwealth (1776–1779).

War returns to Virginia

The British briefly brought the war back to coastal Virginia in May 1779. Fearing the vulnerability of Williamsburg, Governor Thomas Jefferson moved the capital farther inland to Richmond in 1780. However, in December, Benedict Arnold, who had betrayed the Revolution and become a general for the British, attacked Richmond and burned part of the city before the Virginia Militia drove his army out of the city. Arnold moved his base of operations to Portsmouth and was later joined by troops under General William Phillips. Phillips led an expedition that destroyed military and economic targets, against ineffectual militia resistance. The state's defenses, led by General Baron von Steuben, put up resistance in the April 1781 Battle of Blandford, but were forced to retreat. The French General Lafayette and his forces arrived to help defend Virginia, and though outnumbered, engaged British forces under General Charles Cornwallis in a series of skirmishes to help reduce their effectiveness.
Cornwallis dispatched two smaller missions under Colonel John Graves Simcoe and Colonel Banastre Tarleton to march on Charlottesville and capture Gov. Jefferson and the legislature, though the plan was foiled when Jack Jouett rode to warn the Virginia government. Cornwallis moved down the Virginia Peninsula towards the Chesapeake Bay, where Clinton planned to extract part of the army to reinforce New York City. After surprising American forces at the Battle of Green Spring on July 6, 1781, Cornwallis received orders to move his troops to the port town of Yorktown and begin construction of fortifications and a naval yard; when this was discovered, American forces surrounded the town. Gen. Washington and his French ally Rochambeau moved their forces from New York to Virginia. The defeat of the Royal Navy by Admiral de Grasse at the Battle of the Virginia Capes ensured French dominance of the waters around Yorktown, thereby preventing Cornwallis from receiving troops or supplies and removing the possibility of evacuation. Following the two-week siege of Yorktown, Cornwallis decided to surrender. Papers for surrender were officially signed on October 19. As a result of the defeat, the king lost control of Parliament, and the new British government offered peace in April 1782. The Treaty of Paris of 1783 officially ended the war.

Early Republic and antebellum periods

Victory in the Revolution brought peace and prosperity to the new state, as export markets in Europe reopened for its tobacco. While the old local elites were content with the status quo, younger veterans of the war had developed a national identity. Led by George Washington and James Madison, Virginia played a major role in the Constitutional Convention of 1787 in Philadelphia. Madison proposed the Virginia Plan, which would give representation in Congress according to total population, including a proportion of slaves. Virginia was the most populous state, and it was allowed to count all of its white residents and three-fifths of the enslaved African Americans for its congressional representation and its electoral vote. (Only white men who owned a certain amount of property could vote.) Ratification was bitterly contested; the pro-Constitution forces prevailed only after promising to add a Bill of Rights. The Virginia Ratifying Convention approved the Constitution by a vote of 89–79 on June 25, 1788, making it the tenth state to enter the Union. Madison played a central role in the new Congress, while Washington was the unanimous choice as first president. He was followed by the Virginia Dynasty, including Thomas Jefferson, Madison, and James Monroe, giving the state four of the first five presidents.

Slavery and freedmen in Antebellum Virginia

The Revolution meant change and sometimes political freedom for enslaved African Americans, too. Tens of thousands of slaves from southern states, particularly Georgia and South Carolina, escaped to British lines and freedom during the war. Thousands left with the British for resettlement in their colonies of Nova Scotia and Jamaica; others went to England; others disappeared into rural and frontier areas or the North. Inspired by the Revolution and evangelical preachers, numerous slaveholders in the Chesapeake region manumitted some or all of their slaves, during their lifetimes or by will.
From 1,800 persons in 1782, the total population of free blacks in Virginia increased to 12,766 (4.3 percent of blacks) in 1790, and to 30,570 in 1810; free blacks thus grew from less than one percent of the total black population in Virginia to 7.2 percent by 1810, even as the overall population increased. One planter, Robert Carter III, freed more than 450 slaves in his lifetime, more than any other planter. George Washington freed all of his slaves at his death. Many free blacks migrated from rural areas to towns such as Petersburg, Richmond, and Charlottesville for jobs and community; others migrated with their families to the frontier, where social strictures were more relaxed. Among the oldest black Baptist congregations in the nation were two founded near Petersburg before the Revolution. Each congregation moved into the city and built churches by the early 19th century. Slave rebellions broke out twice in Virginia: Gabriel's Rebellion in 1800, and Nat Turner's Rebellion in 1831. White reaction was swift and harsh, and militias killed many innocent free blacks and black slaves as well as those directly involved in the rebellions. After the second rebellion, the legislature passed laws restricting the rights of free people of color: they were excluded from bearing arms, serving in the militia, gaining education, and assembling in groups. As bearing arms and serving in the militia were considered obligations of free citizens, free blacks came under severe constraints after Nat Turner's rebellion.

As the new nation of the United States of America experienced growing pains and began to speak of Manifest Destiny, Virginia, too, found its role in the young republic to be changing and challenging. For one, the vast lands of the Virginia Colony were subdivided into other US states and territories. In 1784 Virginia relinquished its claims to Illinois County, Virginia, except for the Virginia Military District (Southern Indiana). In 1775, Daniel Boone blazed a trail for the Transylvania Company from Fort Chiswell in Virginia through the Cumberland Gap into central Kentucky. This Wilderness Road became the principal route used by settlers for more than fifty years to reach Kentucky from the East. The fledgling US government rewarded veterans of the Revolutionary War with plots of land along the Ohio River in the Northwest Territory. In 1792, three western counties split off to form Kentucky. A second influence: the lands seemed to be more fertile in the west. Virginia's heavy farming of tobacco for 200 years had depleted its soils. The 1803 Louisiana Purchase only accelerated the westward movement of Virginians out of their native state. Many of the Virginians whose grandparents had created the Virginia Establishment began to emigrate and settle westward. Famous Virginian-born Americans affected not only the destiny of the state of Virginia, but the rapidly developing American Old West. Virginians Meriwether Lewis and William Clark led the famous 1804–1806 expedition to explore the Missouri River and possible connections to the Pacific Ocean. Notable names such as Stephen F. Austin, Edwin Waller, Haden Harrison Edwards, and Dr. John Shackelford were famous Texan pioneers from Virginia. Even the eventual Civil War general Robert E. Lee distinguished himself as a military leader in Texas during the 1846–48 Mexican–American War. Historians estimate that one million Virginians left the commonwealth between the Revolution and the Civil War.
With this exodus, Virginia experienced a decline in both population and political influence. Prominent Virginians formed the Virginia Historical and Philosophical Society to preserve the legacy and memory of its past. At the same time, with Virginians settling so much of the west, they brought their cultural habits with them. Today, many cultural features of the American South can be attributed to Virginians who migrated west.

Cultural divide between Tidewater planters and Western Virginia farmers

As the western reaches of Virginia were developed in the first half of the 19th century, the vast differences in the agricultural basis, culture, and transportation needs of the area became a major issue for the Virginia General Assembly. In the older, eastern portion, slavery contributed to the economy. While planters were moving away from labor-intensive tobacco to mixed crops, they still held numerous slaves, and leasing them out or selling them was also part of their economic prospects. Slavery had become an economic institution upon which planters depended. Watersheds in most of this area eventually drained to the Atlantic Ocean. In the western reaches, families farmed smaller homesteads, mostly without enslaved or hired labor. Settlers were expanding the exploitation of resources: mining of minerals and harvesting of timber. The land drained into the Ohio River Valley, and trade followed the rivers. Representation in the state legislature was heavily skewed in favor of the more populous eastern areas and the historic planter elite. This was compounded by the partial allowance for slaves when counting population; as neither slaves nor women had the vote, this gave more power to white men. The legislature's efforts to mediate the disparities ended without meaningful resolution, although the state held a constitutional convention on representation issues. Thus, at the outset of the American Civil War, Virginia was caught not only in a national crisis, but in a long-standing controversy within its own boundaries. While other border states had similar regional differences, Virginia had a long history of east-west tensions which finally came to a head; it was the only state to divide into two separate states during the war.

Infrastructure and Industrial Revolution

After the Revolution, various infrastructure projects began to be developed, including the Dismal Swamp Canal, the James River and Kanawha Canal, and various turnpikes. Virginia was home to the first of all Federal infrastructure projects under the new Constitution, the Cape Henry Light of 1792, located at the mouth of the Chesapeake Bay. Following the War of 1812, several Federal national defense projects were undertaken in Virginia. Drydock Number One was constructed in Portsmouth in 1827. Across the James River, Fort Monroe was built to defend Hampton Roads; it was completed in 1834. In the 1830s, railroads began to be built in Virginia. In 1831, the Chesterfield Railroad began hauling coal from the mines in Midlothian to docks at Manchester (near Richmond), powered by gravity and draft animals. The first railroad in Virginia to be powered by locomotives was the Richmond, Fredericksburg and Potomac Railroad, chartered in 1834, with the intent to connect with steamboat lines at Aquia Landing running to Washington, D.C.
Soon after, others (with equally descriptive names) followed: the Richmond and Petersburg Railroad and Louisa Railroad in 1836, the Richmond and Danville Railroad in 1847, the Orange and Alexandria Railroad in 1848, and the Richmond and York River Railroad. In 1849, the Virginia Board of Public Works established the Blue Ridge Railroad. Under engineer Claudius Crozet, the railroad successfully crossed the Blue Ridge Mountains via the Blue Ridge Tunnel at Afton Mountain. Petersburg became a manufacturing center, as well as a city where free black artisans and craftsmen could make a living. In 1860 half its population was black, and of that, one-third were free blacks, the largest such population in the state. With extensive iron deposits, especially in the western counties, Virginia was a pioneer in the iron industry. The first ironworks in the New World was established at Falling Creek in 1619, though it was destroyed in 1622. There eventually grew to be 80 ironworks, charcoal furnaces and forges, employing 7,000 hands at any one time, about 70 percent of them slaves. Ironmasters hired slaves from local slave owners because they were cheaper than white workers, easier to control, and could not switch to a better employer. But the work ethic was weak, because the wages went to the owner, not to the workers, who were forced to work hard, were poorly fed and clothed, and were separated from their families. Virginia's industry increasingly fell behind Pennsylvania, New Jersey and Ohio, which relied on free labor. Bradford (1959) recounts the many complaints about slave laborers and argues the over-reliance upon slaves contributed to the failure of the ironmasters to adopt improved methods of production for fear the slaves would sabotage them. Most of the blacks were unskilled manual laborers, although Lewis (1977) reports that some were in skilled positions.

Virginia at first refused to join the Confederacy, but did so after President Lincoln on April 15 called for troops from all states; that meant Federal troops crossing Virginia on the way south to subdue South Carolina. On April 17, 1861 the convention voted to secede, and voters ratified the decision on May 23. Immediately the Union army moved into northern Virginia and captured Alexandria without a fight, and controlled it for the remainder of the war. The Wheeling area had opposed secession and remained strong for the Union. Because of its strategic significance, the Confederacy relocated its capital to Richmond. Richmond was at the end of a long supply line and, as the highly symbolic capital of the Confederacy, became the main target of round after round of invasion attempts. A major center of iron production during the Civil War was located in Richmond at Tredegar Iron Works, which produced most of the artillery for the war. The city was the site of numerous army hospitals. Libby Prison for captured Union officers gained an infamous reputation for its overcrowded and harsh conditions, with a high death rate. Richmond's main defenses were trenches built surrounding it down towards the nearby city of Petersburg. Saltville was a primary source of Confederate salt (critical for food preservation) during the war, leading to the two Battles of Saltville. The first major battle of the Civil War occurred on July 21, 1861. Union forces attempted to take control of the railroad junction at Manassas, but the Confederate Army reached it first and won the First Battle of Manassas (known as "Bull Run" in Northern naming convention).
Both sides mobilized for war; the year 1861 went on without another major fight. Men from all economic and social levels, both slaveholders and nonslaveholders, as well as former Unionists, enlisted in great numbers on both sides. Areas, especially in the west and along the border, that sent few men to the Confederacy were characterized by few slaves, poor economies, and a history of regional antagonism to the Tidewater.

West Virginia breaks away

The western counties could not tolerate the Confederacy. Breaking away, they first formed the Union state of Virginia (recognized by Washington); it is called the Restored Government of Virginia and was based in Alexandria, across the river from Washington. The Restored government did little except give its permission for Congress to form the new state of West Virginia in 1862. From May to August 1861, a series of Unionist conventions met in Wheeling; the Second Wheeling Convention constituted itself as a legislative body called the Restored Government of Virginia. It declared that Virginia was still in the Union but that the state offices were vacant, and it elected a new governor, Francis H. Pierpont; this body gained formal recognition by the Lincoln administration on July 4. On August 20 the Wheeling body passed an ordinance for the creation of a new state; it was put to public vote on October 24. The vote was in favor of a new state, West Virginia, which was distinct from the Pierpont government; the Pierpont government persisted until the end of the war. Congress and Lincoln approved, and, after providing for gradual emancipation of slaves in the new state constitution, West Virginia became the 35th state on June 20, 1863. In effect there were now three states: the Confederate Virginia, the Union Restored Virginia, and West Virginia. The state and national governments in Richmond did not recognize the new state, and Confederates did not vote there. The Confederate government in Richmond sent in Robert E. Lee, but Lee found little local support and was defeated by Union forces from Ohio. Union victories in 1861 drove the Confederate forces out of the Monongahela and Kanawha valleys, and throughout the remainder of the war the Union held the region west of the Alleghenies and controlled the Baltimore and Ohio Railroad in the north. The new state was not subject to Reconstruction.

Later war years

For the remainder of the war, many major battles were fought across Virginia, including the Seven Days Battles, the Battle of Fredericksburg, the Battle of Chancellorsville, and the Battle of Brandy Station. Over the course of the war, despite occasional tactical victories and spectacular counter-stroke raids, Confederate control of many regions of Virginia was gradually lost to the Federal advance. By October 1862 the northern 9th and 10th Congressional districts along the Potomac were under Union control. The Eastern Shore, the Northern, Middle and Lower Peninsulas, and the 2nd congressional district surrounding Norfolk west to Suffolk were permanently Union-occupied by May. Other regions, such as the Piedmont and Shenandoah Valley, regularly changed hands through numerous campaigns. In 1864, the Union Army planned to attack Richmond by a direct overland approach through the Overland Campaign and the Battle of the Wilderness, culminating in the Siege of Petersburg, which lasted from the summer of 1864 to April 1865. By November 6, 1864, Confederate forces controlled only four of Virginia's 16 congressional districts, in the region of Richmond-Petersburg and their Southside counties.
In April 1865, Richmond was burned by the retreating Confederate army; Lincoln walked the city streets to cheering crowds of newly freed blacks. The Confederate government fled south, pausing in Danville for a few days. The end came when Lee surrendered to Ulysses Grant at Appomattox on April 9, 1865. Virginia had been devastated by the war, with its infrastructure (such as railroads) in ruins, many plantations burned out, and large numbers of refugees without jobs, food, or supplies beyond rations provided by the Union Army, especially its Freedmen's Bureau. Historian Mary Farmer-Kaiser reports that white landowners complained to the Bureau that freedwomen's unwillingness to work in the fields was evidence of laziness, and asked the Bureau to force them to sign labor contracts. In response, many Bureau officials "readily condemned the withdrawal of freedwomen from the work force as well as the 'hen pecked' husbands who allowed it." While the Bureau did not force freedwomen to work, it did force freedmen to work or be arrested as vagrants. Furthermore, agents urged poor unmarried mothers to give their older children up as apprentices to work for white masters. Farmer-Kaiser concludes that "Freedwomen found both an ally and an enemy in the bureau." There were three phases in Virginia's Reconstruction era: wartime, presidential, and congressional. Immediately after the war President Andrew Johnson recognized the Francis Harrison Pierpont government as legitimate and restored local government. The Virginia legislature passed Black Codes that severely restricted the Freedmen's mobility and rights; they had only limited rights and were not considered citizens, nor could they vote. The state ratified the 13th Amendment to abolish slavery and revoked the 1861 ordinance of secession. Johnson was satisfied that Reconstruction was complete. Republicans in Congress, however, refused to seat the newly elected state delegation; the Radicals wanted better evidence that slavery and similar forms of serfdom had been abolished, and that the freedmen had been given the rights of citizens. They were also concerned that Virginia leaders had not renounced Confederate nationalism. After winning large majorities in the 1866 national election, the Radical Republicans gained power in Congress. They put Virginia (and nine other ex-Confederate states) under military rule. Virginia was administered as the "First Military District" in 1867–69 under General John Schofield. Meanwhile, the Freedmen became politically active by joining the pro-Republican Union League, holding conventions, and demanding universal male suffrage and equal treatment under the law, as well as the disfranchisement of ex-Confederates and the seizure of their plantations. McDonough finds that Schofield was criticized by conservative whites for supporting the Radical cause on one hand and attacked by Radicals for thinking black suffrage premature on the other; he concludes that Schofield "performed admirably" by following a middle course between the extremes. Increasingly, a deep split opened in the Republican ranks. The moderate element had national support and called itself the "True Republicans." The more radical element set out to disfranchise whites—barring a man from office, for example, if he had served as a private in the Confederate army or had sold food to the Confederate government—and also pushed for land reform. About 20,000 former Confederates were denied the right to vote in the 1867 election.
In 1867 the radical James Hunnicutt (1814–1880), a white preacher, editor, and Scalawag (a white Southerner supporting Reconstruction), mobilized the black Republican vote by calling for the confiscation of all plantations and turning the land over to Freedmen and poor whites. The "True Republicans" (the moderates), led by former Whigs, businessmen, and planters, while supportive of black suffrage, drew the line at property confiscation. A compromise was reached calling for confiscation only if the planters tried to intimidate black voters. Hunnicutt's coalition took control of the Republican Party and began to demand the permanent disfranchisement of all whites who had supported the Confederacy. The Virginia Republican Party became permanently split, and many moderate Republicans switched to the opposition "Conservatives." The Radicals won the 1867 election for delegates to a constitutional convention. The 1868 constitutional convention included 33 white Conservatives and 72 Radicals (of whom 24 were Blacks, 23 Scalawags, and 21 Carpetbaggers). The resulting charter, called the "Underwood Constitution" after the convention's presiding officer, reformed the tax system and created a system of free public schools for the first time in Virginia. After heated debates over disfranchising Confederates, the convention approved a constitution that excluded ex-Confederates from holding office but allowed them to vote in state and federal elections. Under pressure from national Republicans to be more moderate, General Schofield continued to administer the state through the Army. He appointed a personal friend, Henry H. Wells, as provisional governor. Wells was a Carpetbagger and a former Union general. Schofield and Wells fought and defeated Hunnicutt and the Scalawag Republicans, stripping Hunnicutt's newspaper of its contracts for state printing orders. The national government ordered elections in 1869 that included a vote on the new Underwood constitution, a separate vote on its two disfranchisement clauses that would have permanently stripped the vote from most former rebels, and a separate vote for state officials. The Army enrolled the Freedmen (ex-slaves) as voters but would not allow some 20,000 prominent whites to vote or hold office. The Republicans nominated Wells for governor, as Hunnicutt and most Scalawags went over to the opposition. The leader of the moderate Republicans, calling themselves "True Republicans," was William Mahone (1826–1895), a railroad president and former Confederate general. He assembled a coalition of white Scalawag Republicans, some blacks, and the ex-Democrats who made up the Conservative Party. Mahone argued that whites had to accept the results of the war, including civil rights and the vote for Freedmen. Mahone convinced the Conservative Party to drop its own candidate and endorse Gilbert C. Walker, Mahone's candidate for governor. In return, Mahone's people endorsed Conservatives for the legislative races. Mahone's plan worked, as the voters in 1869 elected Walker and defeated the proposed disfranchisement of ex-Confederates. When the new legislature ratified the 14th and 15th Amendments to the U.S. Constitution, Congress seated its delegation, and Virginia Reconstruction came to an end in January 1870. The Radical Republicans had been ousted in a non-violent election. Virginia was the only Southern state that never elected a civilian government run on Radical Republican principles.
Suffering from widespread destruction and difficulties in adapting to free labor, white Virginians generally came to share the postwar bitterness typical of Southern attitudes. Historian Richard Lowe argues that the obstacles faced by the Radical Republican movement made their cause hopeless:

- even more damaging to Republicans' prospects than their poverty, their inexperience in state politics, their isolation from potential allies, and their identification with the hated North was the perverse and powerful racism that ran so powerfully through the white community. The great majority of the Old Dominion's white citizens could not take seriously a political party composed primarily of former slaves.

Railroad and industrial growth

In addition to those that were rebuilt, new railroads developed after the Civil War. In 1868, under railroad baron Collis P. Huntington, the Virginia Central Railroad was merged and transformed into the Chesapeake and Ohio Railroad. In 1870, several railroads were merged to form the Atlantic, Mississippi and Ohio Railroad, later renamed the Norfolk & Western. In 1880, the towpath of the now-defunct James River & Kanawha Canal was transformed into the Richmond and Allegheny Railroad, which within a decade would merge into the Chesapeake & Ohio. Others would include the Southern Railway, the Seaboard Air Line, and the Atlantic Coast Line; still others would eventually reach into Virginia, including the Baltimore & Ohio and the Pennsylvania Railroad. The rebuilt Richmond, Fredericksburg, and Potomac Railroad eventually was linked to Washington, D.C. In the 1880s, the Pocahontas Coalfield opened up in far southwest Virginia, with others to follow, in turn providing more demand for railroad transportation. In 1909, the Virginian Railway opened, built for the express purpose of hauling coal from the mountains of West Virginia to the ports at Hampton Roads. The growth of railroads resulted in the creation of new towns and the rapid growth of others, including Clifton Forge, Roanoke, Crewe, and Victoria. The railroad boom was not without incident: the Wreck of the Old 97 occurred just north of Danville, Virginia, in 1903, later immortalized by a popular ballad. With the invention of the cigarette rolling machine, and the great increase in smoking in the early 20th century, cigarettes and other tobacco products became a major industry in Richmond and Petersburg. Tobacco magnates such as Lewis Ginter funded a number of public institutions.

Readjustment, public education, segregation

A division among Virginia politicians occurred in the 1870s, when those who supported a reduction of Virginia's pre-war debt ("Readjusters") opposed those who felt Virginia should repay its entire debt plus interest ("Funders"). Virginia's pre-war debt had been incurred primarily for infrastructure improvements overseen by the Virginia Board of Public Works, much of which had been destroyed during the war or lay within the new State of West Virginia. After his unsuccessful bid for the Democratic nomination for governor in 1877, former Confederate general and railroad executive William Mahone became the leader of the "Readjusters", forming a coalition of conservative Democrats and white and black Republicans. The so-called Readjusters aspired "to break the power of wealth and established privilege" and to promote public education. The party promised to "readjust" the state debt in order to protect funding for the newly established public schools and to allocate a fair share of the debt to the new State of West Virginia.
Its proposal to repeal the poll tax and increase funding for schools and other public facilities attracted biracial and cross-party support. The Readjuster Party was successful in electing its candidate, William E. Cameron, as governor, and he served from 1882 to 1886. Mahone served in the U.S. Senate from 1881 to 1887, as did fellow Readjuster Harrison H. Riddleberger from 1883 to 1889. The Readjusters' effective control of Virginia politics lasted until 1883, when they lost majority control in the state legislature, followed by the election of Democrat Fitzhugh Lee as governor in 1885. The Virginia legislature replaced both Mahone and Riddleberger in the U.S. Senate with Democrats. In 1888 the exception to Readjuster and Democratic control was John Mercer Langston, who was elected to Congress from the Petersburg area on the Republican ticket. He was the first black elected to Congress from the state, and the last for nearly a century. He served one term. A talented and vigorous politician, he was an Oberlin College graduate. He had long been active in the abolitionist cause in Ohio before the Civil War, had been president of the National Equal Rights League from 1864 to 1868, and had created and headed the law department at Howard University, also serving as acting president of that institution. When elected, he was president of what became Virginia State University. While the Readjuster Party faded, the goal of public education remained strong, with institutions established for the education of schoolteachers. In 1884, the state acquired a bankrupt women's college at Farmville and opened it as a normal school. The growth of public education led to the need for additional teachers. In 1908, two additional normal schools were established, one at Fredericksburg and one at Harrisonburg, and in 1910, one at Radford. After the Readjuster Party disappeared, Virginia Democrats rapidly passed legislation and constitutional amendments that effectively disfranchised African Americans and many poor whites through the use of poll taxes and literacy tests. They created white, one-party rule under the Democratic Party for the next 80 years. White state legislators passed statutes that restored white supremacy through the imposition of Jim Crow segregation. In 1902 Virginia adopted a new constitution that sharply reduced voter registration. The Progressive Era after 1900 brought numerous reforms designed to modernize the state, increase efficiency, apply scientific methods, promote education, and eliminate waste and corruption. A key leader was Governor Claude Swanson (1906–10), a Democrat who left machine politics behind to win office using the new primary law. Swanson's coalition of reformers in the legislature built schools and highways, raised teacher salaries and standards, promoted the state's public health programs, and increased funding for prisons. Swanson fought against child labor, lowered railroad rates, and raised corporate taxes, while systematizing state services and introducing modern management techniques. The state funded a growing network of roads, with much of the work done by black convicts in chain gangs. After Swanson moved to the U.S. Senate in 1910, he promoted Progressivism at the national level as a supporter of President Woodrow Wilson, who had been born in Virginia and was considered a native son. Swanson, as a power on naval affairs, promoted the Norfolk Navy Yard and the Newport News Shipbuilding and Dry Dock Company.
Swanson's statewide organization evolved into the "Byrd Organization." The State Corporation Commission (SCC) was formed as part of the 1902 Constitution, over the opposition of the railroads, to regulate railroad policies and rates. The SCC was independent of parties, courts, and big business, and was designed to protect the public interest. It became an effective agency, which especially pleased local merchants by keeping rates low. Virginia has a long history of agricultural reformers, and the Progressive Era stimulated their efforts. Rural areas suffered persistent problems, such as declining populations, widespread illiteracy, poor farming techniques, and debilitating diseases among both farm animals and farm families. Reformers emphasized the need to upgrade the quality of elementary education. With federal help, they set up a county agent system (today the Virginia Cooperative Extension) that taught farmers the latest scientific methods for dealing with tobacco and other crops, and taught farm housewives how to maximize their efficiency in the kitchen and nursery. Some upper-class women, typified by Lila Meade Valentine of Richmond, promoted numerous Progressive reforms, including kindergartens, teacher education, visiting nurse programs, and vocational education for both races. Middle-class white women were especially active in the Prohibition movement. The woman suffrage movement became entangled in racial issues—whites were reluctant to allow black women the vote—and was unable to broaden its base beyond middle-class whites. Virginia women got the vote in 1920, the result of a national constitutional amendment. In higher education, the key leader was Edwin A. Alderman, president of the University of Virginia, 1904–31. His goal was the transformation of the Southern university into a force for state service, intellectual leadership, and educational utility. Alderman successfully professionalized and modernized the state's system of higher education. He promoted international standards of scholarship and a statewide network of extension services. Joined by other college presidents, he promoted the Virginia Education Commission, created in 1910. Alderman's crusade encountered some resistance from traditionalists, and it never challenged the Jim Crow system of segregated schooling. While the progressives were modernizers, there was also a surge of interest in Virginia traditions and heritage, especially among the aristocratic First Families of Virginia (FFV). The Association for the Preservation of Virginia Antiquities (APVA), founded in Williamsburg in 1889, emphasized patriotism in the name of Virginia's 18th-century Founding Fathers. In 1907, the Jamestown Exposition was held near Norfolk to celebrate the tricentennial of the arrival of the first English colonists and the founding of Jamestown. Attended by numerous federal dignitaries, and serving as the launch point for the Great White Fleet, the Jamestown Exposition also spurred interest in the military potential of the area. The site of the exposition would later become, in 1917, the location of the Norfolk Naval Station. The proximity to Washington, D.C., the moderate climate, and the strategic location of a large harbor at the center of the Atlantic seaboard made Virginia a key location during World War I for new military installations.
These included Fort Story, the Army Signal Corps station at Langley, the Quantico Marine Base in Prince William County, Fort Belvoir in Fairfax County, Fort Lee near Petersburg, and Fort Eustis in Warwick County (now Newport News). At the same time, heavy shipping traffic made the area a target for U-boats, and a number of merchant vessels were attacked or sunk off the Virginia coast. Temperance became an issue in the early 20th century. In 1916, a statewide referendum approved the prohibition of alcohol; it was repealed in 1933. After 1930, tourism began to grow with the development of Colonial Williamsburg. Shenandoah National Park was assembled from newly acquired land, as were the Blue Ridge Parkway and Skyline Drive. The Civilian Conservation Corps played a major role in developing that national park, as well as Pocahontas State Park. By 1940 new highway bridges crossed the lower Potomac, Rappahannock, York, and James Rivers, bringing to an end the long-distance steamboat service which had long served as primary transportation throughout the Chesapeake Bay area. Ferryboats remain today in only a few places. Blacks comprised a third of the population but lost nearly all their political power. The electorate was so small that from 1905 to 1948 government employees and officeholders cast a third of the votes in state elections. This small, controllable electorate facilitated the formation of a powerful statewide political machine by Harry Byrd (1887–1966), which dominated from the 1920s to the 1960s. Most of the blacks who remained politically active supported the Byrd organization, which in turn protected their right to vote, making Virginia's race relations the most harmonious in the South before the 1950s, according to V.O. Key. Not until Federal civil rights legislation was passed in 1964 and 1965 did African Americans recover the power to vote and the protection of other basic constitutional civil rights.

WWII and Modern era

The economic stimulus of the Second World War brought full employment for workers, high wages, and high profits for farmers. It brought in many thousands of soldiers and sailors for training. Virginia sent 300,000 men and 4,000 women to the services. The buildup for the war greatly increased the state's naval and industrial economic base, as did the growth of federal government jobs in Northern Virginia and adjacent Washington, DC. The Pentagon was built in Arlington as the largest office building in the world. Additional installations were added: in 1941, Fort A.P. Hill and Fort Pickett opened, and Fort Lee was reactivated. The Newport News shipyard expanded its labor force from 17,000 to 70,000 in 1943, while the Radford Arsenal had 22,000 workers making explosives. Turnover was very high—in one three-month period the Newport News shipyard hired 8,400 new workers as 8,300 others quit.

Cold War and Space Age

In addition to general postwar growth, the Cold War resulted in further growth in both Northern Virginia and Hampton Roads. With the Pentagon already established in Arlington, the newly formed Central Intelligence Agency located its headquarters farther out at Langley (unrelated to the Air Force base of the same name). In the early 1960s, the new Dulles International Airport was built, straddling the Fairfax County–Loudoun County border. Other sites in Northern Virginia included the listening station at Vint Hill. Due to the presence of the U.S.
Atlantic Fleet in Norfolk, the Allied Command Atlantic of NATO was headquartered there in 1952, where it remained for the duration of the Cold War. Later in the 1950s, and across the river, Newport News Shipbuilding would begin construction of the USS Enterprise—the world's first nuclear-powered aircraft carrier—and the subsequent atomic carrier fleet. Virginia also witnessed American efforts in the Space Race. When the National Advisory Committee for Aeronautics was transformed into the National Aeronautics and Space Administration in 1958, the resulting Space Task Group was headquartered at the laboratories of the Langley Research Center. From there, it would initiate Project Mercury, and it would remain the headquarters of the U.S. manned spaceflight program until its transfer to Houston in 1962. On the Eastern Shore, near Chincoteague, Wallops Flight Facility served as a rocket launch site, including the launch of Little Joe 2 on December 4, 1959, which sent a rhesus monkey, Sam, into suborbital spaceflight. Langley later oversaw the Viking program to Mars. The new U.S. Interstate highway system begun in the 1950s and the new Hampton Roads Bridge-Tunnel in 1958 helped transform Virginia Beach from a tiny resort town into one of the state's largest cities by 1963 and spurred the growth of the Hampton Roads region, linked by the Hampton Roads Beltway. In the western portion of the state, the completion of north–south Interstate 81 brought better access and new businesses to dozens of counties over a distance of 300 miles (480 km), as well as facilitating travel by students at the many Shenandoah-area colleges and universities. The creation of Smith Mountain Lake, Lake Anna, Claytor Lake, Lake Gaston, and Buggs Island Lake, by damming rivers, attracted many retirees and vacationers to those rural areas. As the century drew to a close, Virginia tobacco growing gradually declined due to health concerns, although not as steeply as in Southern Maryland. A state community college system brought affordable higher education within commuting distance of most Virginians, including those in remote, underserved localities. Other new institutions were founded, most notably George Mason University and Liberty University. Localities such as Danville and Martinsville suffered greatly as their manufacturing industries closed.

Massive resistance and Civil Rights

The state government orchestrated systematic resistance to federal court orders requiring the end of segregation. The state legislature enacted a package of laws, known as the Stanley Plan, to try to evade racial integration in public schools. Prince Edward County even closed all its public schools in an attempt to avoid racial integration, but relented in the face of U.S. Supreme Court rulings. The first black students attended the University of Virginia School of Law in 1950, and Virginia Tech in 1953. In 2008, various actions of the Civil Rights Movement were commemorated by the Virginia Civil Rights Memorial in Richmond. By the 1980s, Northern Virginia and the Hampton Roads region had achieved the greatest growth and prosperity, chiefly because of employment related to Federal government agencies and defense, as well as the growth of the technology sector in Northern Virginia. Shipping through the Port of Hampton Roads began an expansion that continued into the early 21st century as new container facilities were opened. Coal piers in Newport News and Norfolk had recorded major gains in export shipments by August 2008.
The recent expansion of government programs in the areas near Washington has profoundly affected the economy of Northern Virginia, whose population has experienced large growth and great ethnic and cultural diversification, exemplified by communities such as Tysons Corner, Reston, and dense, urban Arlington. The subsequent growth of defense projects has also generated a local information technology industry. In recent years, intolerably heavy commuter traffic and the urgent need for both road and rail transportation improvements have been major issues in Northern Virginia. The Hampton Roads region has also experienced much growth, as have the western suburbs of Richmond in both Henrico and Chesterfield Counties. Virginia served as a major center for information technology during the early days of the Internet and network communication. Internet and other communications companies clustered in the Dulles Corridor. By 1993, the Washington area had the largest amount of Internet backbone capacity and the highest concentration of Internet service providers. In 2000, more than half of all Internet traffic flowed along the Dulles Toll Road, and by 2016, 70% of the world's Internet traffic flowed through Loudoun County. Bill von Meister founded two Virginia companies that played major roles in the commercialization of the Internet: the McLean, Virginia–based The Source, and Control Video Corporation, forerunner of America Online. While short-lived, The Source was one of the first online service providers, alongside CompuServe. On hand for the launch of The Source, Isaac Asimov remarked, "This is the beginning of the information age." The Source helped pave the way for future online service providers, including another Virginia company founded by von Meister, America Online (AOL). AOL became the largest provider of Internet access during the dial-up era. AOL maintained a Virginia headquarters until the then-struggling company moved in 2007. In 2006 former Governor of Virginia Mark Warner gave a speech and interview in the massively multiplayer online game Second Life, becoming the first U.S. politician to campaign in a video game. In 2007 Virginia passed the nation's first spaceflight act by a vote of 99–0 in the House of Delegates. The Northern Virginia company Space Adventures is currently the only company in the world offering space tourism. In 2008 Virginia became the first state to pass legislation on Internet safety, with mandatory educational courses for 11- to 16-year-olds. In 2013, by a slim margin in the governor's race, Virginia broke a long-standing streak of electing a governor from the party opposed to the president in the White House; for the first time in more than thirty years, the governor and the president would be from the same party.

Virginia history on stamps

Stamps depicting Virginia events and landmarks include the Jamestown founding, Mount Vernon, and Stratford Hall.

- Colonial South and the Chesapeake - Colony of Virginia - Constitution of Virginia - Former counties, cities, and towns of Virginia - History of Richmond, Virginia, the current state capital - History of the East Coast of the United States - History of the Southern United States - History of Virginia on stamps - Newspapers in Virginia in the 18th century, List of - Timeline of Virginia - Virginia Conventions
- Charles H. Ambler and Festus P. Summers, West Virginia, the Mountain State (1958), pp. 48-52, 55 - "Archaeological evidence also indicates that Native Americans occupied the area as early as 6500 BC." "State Historical Highway Marker 'Pocahontas Island' To Be Dedicated in Petersburg", Petersburg, VA Official Website, Posted on: June 16, 2015, archived article accessed February 25, 2016
"State Historical Highway Marker 'Pocahontas Island' To Be Dedicated in Petersburg", Petersburg, VA Official Website, Posted on: June 16, 2015, archived article accessed February 25, 2016 - Brown, Hutch (Summer 2000). "Wildland Burning by American Indians in Virginia". Fire Management Today. Washington, DC: U.S. Department of Agriculture, Forest Service. 60 (3): 32. An engraving after John White watercolor. Sparsely wooded field in background suggests the region's savanna. - Virginia Indian Tribes, University of Richmond Archived March 9, 2005, at the Wayback Machine. - c.f. Anishinaabe language: danakamigaa: "activity-grounds", i.e. "land of much events [for the People" - Berrier Jr., Ralph (September 20, 2009). "The slaughter at Saltville". The Roanoke Times. Retrieved October 9, 2011.[dead link] - "Virginia Memory: Virginia Chronology". Library of Virginia. Retrieved October 9, 2011. - James O. Glanville (2004). Conquistadors at Saltville in 1567?: A Review of the Archeological and Documentary Evidence. Smithfield Review. - "A" New Andalucia and a Way to the Orient: The American Southeast During the Sixteenth Century. LSU Press. 1 October 2004. pp. 182–184. ISBN 978-0-8071-3028-5. Retrieved 30 March 2013. - Stephen Adams (2001), The best and worst country in the world: perspectives on the early Virginia landscape, University of Virginia Press, p. 61, ISBN 978-0-8139-2038-2 - Charles M. Hudson; Carmen Chaves Tesser (1994). The Forgotten Centuries: Indians and Europeans in the American South, 1521-1704. University of Georgia Press. p. 359. ISBN 978-0-8203-1654-3. - Jerald T. Milanich (February 10, 2006). Laboring in the Fields of the Lord: Spanish Missions And Southeastern Indians. University Press of Florida. p. 92. ISBN 978-0-8130-2966-5. Retrieved June 30, 2012. - Seth Mallios (August 28, 2006). The Deadly Politics of Giving: Exchange And Violence at Ajacan, Roanoke, And Jamestown. University of Alabama Press. pp. 39–43. ISBN 978-0-8173-5336-0. Retrieved June 30, 2012. - Price, 11 - Thomas C. Parramore; Peter C. Stewart; Tommy L. Bogger (April 1, 2000). Norfolk: The First Four Centuries. University of Virginia Press. p. 12. ISBN 978-0-8139-1988-1. Retrieved March 18, 2012. - MR Peter C Mancall (2007). The Atlantic World and Virginia, 1550-1624. UNC Press Books. pp. 517, 522. ISBN 978-0-8078-3159-5. Retrieved 17 February 2013. - Three names from the Roanoke Colony are still in use, all based on Native American names. Stewart, George (1945). Names on the Land: A Historical Account of Place-Naming in the United States. New York: Random House. p. 22. ISBN 1-59017-273-6. - Raleigh, History of the World: "For when some of my people asked the name of that country, one of the savages answered 'Win-gan-da-coa', which is as much as to say, 'You wear good clothes.' - T. H. Breen, "Looking Out for Number One: Conflicting Cultural Values in Early Seventeenth-Century Virginia," South Atlantic Quarterly, Summer 1979, Vol. 78 Issue 3, pp. 342–360 - J. Frederick Fausz, "The 'Barbarous Massacre' Reconsidered: The Powhatan Uprising of 1622 and the Historians," Explorations in Ethnic Studies, vol 1 (Jan. 1978), 16–36 - Gleach p. 199 - John Esten Cooke, Virginia: A History of the People (1883) p. 205. - Heinemann, Ronald L., et al., Old Dominion, New Commonwealth: a history of Virginia 1607-2007, U. Virginia Press 2007 ISBN 978-0-8139-2609-4, p.44-45 - Wilcomb E. Washburn, The Governor and the Rebel: A History of Bacon's Rebellion in Virginia (1957) - Albert H. Tillson (1991). 
Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740-1789. UP of Kentucky. p. 20ff. - Alan Taylor, American Colonies: The Settling of North America (2002) p 157. - John E. Selby, The Revolution in Virginia, 1775-1783 (1988) p 24-25. - Quoted in Nancy L. Struna, "The Formalizing of Sport and the Formation of an Elite: The Chesapeake Gentry, 1650-1720s." Journal of Sport History 13#3 (1986) p 219. online - Struna, The Formalizing of Sport and the Formation of an Elite pp 212-16. - Timothy H. Breen, "Horses and gentlemen: The cultural significance of gambling among the gentry of Virginia." William and Mary Quarterly (1977) 34#2 pp: 239-257. online - Edmund Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) p 386 - Heinemann, Old Dominion, New Commonwealth (2007) 83–90 - Gene Wilhelm, Jr., "Folk Culture History of the Blue Ridge Mountains" Appalachian Journal (1975) 2#3 in JSTOR - Delma R. Carpenter, "The Route Followed by Governor Spotswood in 1716 across the Blue Ridge Mountains." Virginia Magazine of History and Biography (1965): 405-412. in JSTOR - Rob Sherwood, "Germanna's Treasure Trove of History: A Journey of Discovery." Inquiry 13.1 (2008): 45-55. online - "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). Virginiadot.org. Retrieved 2015-04-16. - "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). 3chopt.com. Retrieved 2015-04-16. - Encyclopedia Virginia article: "Backcountry Frontier of Colonial Virginia" online - Encyclopedia Virginia article: "Backcountry Frontier of Colonial Virginia" http://www.encyclopediavirginia.org/Backcountry_Frontier_of_Colonial_Virginia#start_entry - http://www.virginiaplaces.org/settleland/fairfaxgrant.html Once colonial settlement moved upstream of the Fall Line into the Piedmont, the dispute over the inland edge of the Northern Neck grant became an issue. Settlers seeking clear title had to know whether to file paperwork and pay fees to the colonial government in Williamsburg or the land office of the Fairfax family. If the colony could extinguish the Northern Neck grant somehow, revenues would flow to Williamsburg rather than to Leeds Castle." - http://www.historichampshire.org/research/searching1.htm "in mid-March, 1735, Lord Fairfax arrived in Virginia on board the Glasgow on his first inspection trip to America. The trip lasted over two years during which time Fairfax reasserted his claim to the Proprietary and made arrangements for the survey of the boundaries." - http://www.mountvernon.org/digital-encyclopedia/article/lord-fairfax/ "in 1748 hired, among others, the sixteen-year old Washington to survey the Northern Neck." - George Washington's elder half brother Lawrence Washington (1718-1752) was married to Anne (1728-1761) a daughter of Col. William Fairfax of Belvoir—a land agent and cousin of Lord Thomas Fairfax. Anne's brother, George William Fairfax, was married to Sally Fairfax (nee Cary). - Historical Statement Relative to the Town of Winchester the Virginia -- House of Burgesses granted the fourth city charter in Virginia to 'Winchester' as Frederick Town was renamed. - MacCorkle, William Alexander. "The historical and other relations of Pittsburgh and the Virginias". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013. - Andrew Arnold Lambing; et al. "Allegheny County: its early history and subsequent development: from the earliest period till 1790". Historic Pittsburgh Text Collection. 
University of Pittsburgh. Retrieved 12 September 2013. - "Addresses delivered at the celebration of the one hundred and fiftieth anniversary of the Battle of Bushy Run, August 5th and 6th, 1913". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013. - O'Meara, p. 48 - Anderson (2000), pp. 42–43 - Royal Proclamation I - Gordon S. Wood, The American Revolution, A History. New York, Modern Library, 2002 ISBN 0-8129-7041-1, p.22 - Edward L. Bond and Joan R. Gundersen, The Episcopal Church in Virginia, 1607–2007 (2007) - Rountree p. 161–162, 168–170, 175 - Edward L. Bond, "Anglican theology and devotion in James Blair's Virginia, 1685–1743," Virginia Magazine of History and Biography, (1996) 104#3 pp 313–40 - Charles Woodmason, The Carolina Backcountry on the Eve of the Revolution: The Journal and Other Writings of Charles Woodmason, Anglican Itinerant ed. by Richard J. Hooker (1969) - David Brion Davis (1986). Slavery in the Colonial Chesapeake. Colonial Williamsburg. p. 28. - Cynthia Lynn Lyerly (1998). Methodism and the Southern Mind, 1770-1810. Oxford UP. p. 119ff. - John A. Ragosta, "Fighting for Freedom: Virginia Dissenters' Struggle for Religious Liberty during the American Revolution," Virginia Magazine of History and Biography, (2008) 116#3 pp. 226–261 - Rhys Isaac, "Evangelical Revolt: The Nature of the Baptists' Challenge to the Traditional Order in Virginia, 1765 To 1775," William and Mary Quarterly (1974) 31#3 pp 345–368 in JSTOR - Pauline Maier, Ratification: The People Debate the Constitution, 1787–1788 (2010) pp. 235–319 - Peter Kolchin, American Slavery: 1619–1877, New York: Hill and Wang, 1994, p. 73 - Kolchin, American Slavery, p. 81 - Andrew Levy, The First Emancipator: The Forgotten Story of Robert Carter, the Founding Father who freed his slaves, New York: Random House, 2005 (ISBN 0-375-50865-1) - Scott Nesbit, "Scales Intimate and Sprawling: Slavery, Emancipation, and the Geography of Marriage in Virginia", Southern Spaces, July 19, 2011. http://southernspaces.org/2011/scales-intimate-and-sprawling-slavery-emancipation-and-geography-marriage-virginia. - Albert J. Raboteau, Slave Religion: The 'Invisible Institution' in the Antebellum South, New York: Oxford University Press, 2004, p. 137, accessed December 27, 2008 - "Soil exhaustion in the Tidewater became chronic, and the Piedmont was "worn out, washed and gullied." Conditions were better in the Valley of Virginia, where wheat rather than tobacco was dominant, but even there people saw a brighter future outside Virginia." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners - "In all, perhaps one million Virginians left the commonwealth between the Revolution and the Civil War." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners - "Virginia fell from first to seventh place in population, and its number of congressmen dropped from twenty-three to eleven." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners - http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners"Although this mass exodus of Virginians caused the state to slip into a secondary role both politically and economically, these westward-bound settlers spread their culture, laws, political ideas, and labor system across America." 
- "Washington Iron Furnace National Register Nomination" (PDF). Virginia Department of Historic Resources. Retrieved March 23, 2011. - S. Sydney Bradford, "The Negro Ironworker in Ante Bellum Virginia," Journal of Southern History, May 1959, Vol. 25 Issue 2, pp. 194–206; Ronald L. Lewis, "The Use and Extent of Slave Labor in the Virginia Iron Industry: The Antebellum Era," West Virginia History, Jan 1977, Vol. 38 Issue 2, pp. 141–156 - For a comparison of Virginia and New Jersey see John Bezis-Selfa, "A Tale of Two Ironworks: Slavery, Free Labor, Work, and Resistance in the Early Republic," William & Mary Quarterly, Oct 1999, Vol. 56 Issue 4, pp. 677–700 - see "Libby Prison", Encyclopedia Virginia, accessed 21 April 2012 - Aaron Sheehan-Dean, "Everyman's War: Confederate Enlistment in Civil War Virginia," Civil War History, March 2004, Vol. 50 Issue 1, pp. 5–26 - The U.S Constitution requires permission of the old state for a new state to form. David R. Zimring, "'Secession in Favor of the Constitution': How West Virginia Justified Separate Statehood during the Civil War," West Virginia History, (2009) 3#2 pp. 23–51 - Richard O. Curry, A House Divided, Statehood Politics & the Copperhead Movement in West Virginia, (1964), pp. 141–147. - Curry, A House Divided, pg. 73. - Curry, A House Divided, pgs. 141–152. - Charles H. Ambler and Festus P. Summers, West Virginia: The Mountain State ch 15–20 - Otis K. Rice, West Virginia: A History (1985) ch 12–14 - Kenneth C. Martis, The Historical Atlas of the Congresses of the Confederate States of America 1861-1865 (1994) p. 43-53. - The main scholarly histories are Hamilton James Eckenrode, The Political History of Virginia during the Reconstruction (1904); Richard Lowe, Republicans and Reconstruction in Virginia, 1856–70 (1991); and Jack P. Maddex, Jr., The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970). See also Heinemann et al., New Commonwealth (2007) ch. 11 - Mary Farmer-Kaiser, Freedwomen and the Freedmen's Bureau: Race, Gender, and Public Policy in the Age of Emancipation, (Fordham U.P., 2010), quotes pp. 51, 13 - Richard Lowe, "Another Look at Reconstruction in Virginia," Civil War History, March 1986, Vol. 32 Issue 1, pp. 56–76 - James L. McDonough, "John Schofield as Military Director of Reconstruction in Virginia.," Civil War History, Sept 1969, Vol. 15#3, pp. 237–256 - Heinemann, et al. Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007) p 248. - Eric Foner, Politics and Ideology in the Age of the Civil War (1980) p 146 - James E. Bond, No Easy Walk to Freedom: Reconstruction and the Ratification of the Fourteenth Amendment (Praeger, 1997) p. 156. - Eckenrode, The Political History of Virginia during the Reconstruction, ch 5 - The Carpetbaggers were Northern whites who had moved to Virginia after the war. Heinemann et al., New Commonwealth (2007) p. 248 - Note: In order to gain public education, black delegates had to accept segregation in the schools. - Eckenrode, The Political History of Virginia during the Reconstruction, ch 6 - Eckenrode, The Political History of Virginia during the Reconstruction, ch 7 - Walker had 119,535 votes and Wells 101,204. The new Underwood Constitution was approved overwhelmingly, but the disfranchisement clauses were rejected by 3:2 ratios. The new legislature was controlled by the Conservative Party, which soon absorbed the "True Republicans". Eckenrode, The Political History of Virginia during the Reconstruction, p. 
411 - Ku Klux Klan chapters were formed in Virginia in the early years after the war, but they played a negligible role in state politics and soon vanished. Heinemann et al., New Commonwealth (2007) p. 249 - Nelson M. Blake, William Mahone of Virginia: Soldier and Political Insurgent (1935) - Richard Lowe, Republicans and Reconstruction in Virginia, 1856-70 (1991) p 119 - Henry C. Ferrell, Claude A. Swanson of Virginia: a political biography (1985) - George Harrison Gilliam, "Making Virginia Progressive," Virginia Magazine of History and Biography, 1999, Vol. 107 Issue 2, pp. 189–222 - Lex Renda, "The Advent of Agricultural Progressivism in Virginia," Virginia Magazine of History and Biography, 1988, Vol. 96 Issue 1, pp. 55–82 - Lloyd C. Taylor, Jr. "Lila Meade Valentine: The FFV as Reformer," Virginia Magazine of History and Biography, 1962, Vol. 70 Issue 4, pp. 471–487 - Sara Hunter Graham, "Woman Suffrage In Virginia: The Equal Suffrage League and Pressure-Group Politics, 1909–1920," Virginia Magazine of History and Biography, 1993, Vol. 101 Issue 2, pp. 227–250 - Michael Dennis, "Reforming the 'academical village,'" Virginia Magazine of History and Biography, 1997, Vol. 105 Issue 1, pp. 53–86 - James M. Lindgren, "Virginia Needs Living Heroes": Historic Preservation in the Progressive Era," Public Historian, Jan 1991, Vol. 13 Issue 1, pp. 9–24 - "U-Boat Sinks Schooner Without Any Warning". New York Times. August 17, 1918. Retrieved July 28, 2011. - "RAIDING U-BOAT SINKS 2 NEUTRALS OFF VIRGINIA COAST". New York Times. June 17, 1918. Retrieved July 28, 2011. - Arlington Connection, Michael Lee Pope, October 14–20, 2009, Alcohol as Budget Savior, page 3 - Morgan Kousser, The Shaping of Southern Politics (1974) p 181; Wallenstein, Cradle of America (2007) p 283–4 - V.O. Key, Jr., Southern Politics (1949) p 32 - Joe Freitus, Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014) - Charles Johnson, "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR - "A Brief History of U.S. Fleet Forces Command". U.S. Fleet Forces Command, USN. Retrieved March 17, 2011. - "Langley's Role in Project Mercury". NASA Langley Research Center. Retrieved March 20, 2011. - "Giant Leaps Began With "Little Joe"". NASA Langley Research Center. Retrieved March 20, 2011. - "Viking: Trialblazer For All Mars Research". NASA Langley Research Center. Retrieved March 20, 2011. - Benjamin Muse, Virginia's Massive Resistance (1961) - Wallenstein, Peter (Fall 1997). "Not Fast, But First: The Desegregation of Virginia Tech". VT Magazine. Virginia Tech. Retrieved 2008-04-12. External link in - Donnelly, Sally B. "D.C. Dotcom." Time August 8, 2000. http://www.time.com/time/magazine/article/0,9171,52073-2,00.html - Freed, Benjamin (14 September 2016). "70 Percent of the World's Web Traffic Flows Through Loudoun County". Washingtonian. - LIFE: Mark Warner becomes first U.S. politician to campaign in a video game - Virginia leads the way - Virginia First State to Require Internet Safety Lessons - "Notable dates in Virginia history". Virginia Historical Society. - Benjamin Vincent (1910), "Virginia", Haydn's Dictionary of Dates (25th ed.), London: Ward, Lock & Co. – via Hathi Trust - Dabney, Virginius. Virginia: The New Dominion (1971) - Heinemann, Ronald L., John G. Kolp, Anthony S. Parent Jr., and William G. Shade, Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007). ISBN 978-0-8139-2609-4. 
- Kierner, Cynthia A., and Sandra Gioia Treadway. Virginia Women: Their Lives and Times, vol. 1. (University of Georgia Press, 2015) x, 378 pp - Morse, J. (1797). "Virginia". The American Gazetteer. Boston, Massachusetts: At the presses of S. Hall, and Thomas & Andrews. - Rubin, Louis D. Virginia: A Bicentennial History. States and the Nation Series. (1977), popular - Salmon, Emily J., and Edward D.C. Campbell, Jr., eds. The Hornbook of Virginia History: A Ready-Reference Guide to the Old Dominion's People, Places, and Past, 4th edition. (1994) - Wallenstein, Peter. Cradle of America: Four Centuries of Virginia History (2007). ISBN 978-0-7006-1507-0. - WPA. Virginia: A Guide to the Old Dominion (1940) famous guide to every locality; strong on society, economy and culture online edition - Younger, Edward, and James Tice Moore, eds. The Governors of Virginia, 1860–1978 (1982) - Tarter, Brent, "Making History in Virginia," Virginia Magazine of History and Biography Volume: 115. Issue: 1. 2007. pp. 3+. online edition

Prehistoric and Colonial

- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online - Appelbaum, Robert, and John Wood Sweet, eds. Envisioning an English Empire: Jamestown and the Making of the North Atlantic World (U of Pennsylvania Press, 2011) - Billings, Warren M., John E. Selby, and Thad W. Tate. Colonial Virginia: A History (1986) - Bond, Edward L. Damned Souls in the Tobacco Colony: Religion in Seventeenth-Century Virginia (2000) - Breen, T. H. Puritans and Adventurers: Change and Persistence in Early America (1980). 4 chapters on colonial social history online edition - Breen, T. H. Tobacco Culture: The Mentality of the Great Tidewater Planters on the Eve of Revolution (1985) - Breen, T. H., and Stephen D. Innes. "Myne Owne Ground": Race and Freedom on Virginia's Eastern Shore, 1640–1676 (1980) - Brown, Kathleen M. Good Wives, Nasty Wenches, and Anxious Patriarchs: Gender, Race, and Power in Colonial Virginia (1996) excerpt and text search - Byrd, William. The Secret Diary of William Byrd of Westover, 1709–1712 (1941) ed. by Louis B. Wright and Marion Tinling online edition; famous primary source; very candid about his private life - Bruce, Philip Alexander. Institutional History of Virginia in the Seventeenth Century: An Inquiry into the Religious, Moral, Educational, Legal, Military, and Political Condition of the People, Based on Original and Contemporaneous Records (1910) online edition - Coombs, John C., "The Phases of Conversion: A New Chronology for the Rise of Slavery in Early Virginia," William and Mary Quarterly, 68 (July 2011), 332–60. - Davis, Richard Beale. Intellectual Life in the Colonial South, 1585-1763 (3 vols., 1978), detailed coverage of Virginia - Freeman, Douglas Southall. George Washington: A Biography, vols. 1–7 (1948). Pulitzer Prize. vol 1 online - Gleach, Frederic W. Powhatan's World and Colonial Virginia: A Conflict of Cultures (1997). - Isaac, Rhys. Landon Carter's Uneasy Kingdom: Revolution and Rebellion on a Virginia Plantation (2004) - Isaac, Rhys. The Transformation of Virginia, 1740–1790 (1982, 1999) Pulitzer Prize winner, dealing with religion and morality online review - Kolp, John Gilman. Gentlemen and Freeholders: Electoral Politics in Colonial Virginia (Johns Hopkins U.P. 1998) - Menard, Russell R. "The Tobacco Industry in the Chesapeake Colonies, 1617–1730: An Interpretation." Research in Economic History 1980 5: 109–177. ISSN 0363-3268; the standard scholarly study - Mook, Maurice A.
"The Aboriginal Population of Tidewater Virginia." American Anthropologist (1944) 46#2 pp: 193-208. online - Morgan, Edmund S. Virginians at Home: Family Life in the Eighteenth Century (1952). online edition - Morgan, Edmund S. "Slavery and Freedom: The American Paradox." Journal of American History 1972 59(1): 5–29 in JSTOR - Morgan, Edmund S. American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) online edition highly influential study - Nelson, John A Blessed Company: Parishes, Parsons, and Parishioners in Anglican Virginia, 1690–1776 (2001) - Price, David A. Love and Hate in Jamestown: John Smith, Pocahontas, and the Start of a New Nation (2005) - Rasmussen, William M.S. and Robert S. Tilton. Old Virginia: The Pursuit of a Pastoral Ideal (2003) - Roeber, A. G. Faithful Magistrates and Republican Lawyers: Creators of Virginia Legal Culture, 1680–1810 (1981) - Rountree, Helen C. Pocahontas, Powhatan, Opechancanough: Three Indian Lives Changed by Jamestown (University of Virginia press, 2005), early Virginia history from an Indian perspective by a scholar - Rutman, Darrett B., and Anita H. Rutman. A Place in Time: Middlesex County, Virginia, 1650–1750 (1984), new social history - Sheehan, Bernard. Savagism and civility: Indians and Englishmen in colonial Virginia (Cambridge UP, 1980.) - Wertenbaker, Thomas J. The Shaping of Colonial Virginia, comprising Patrician and Plebeian in Virginia (1910) full text online; Virginia under the Stuarts (1914) full text online; and The Planters of Colonial Virginia (1922) full text online; well written but outdated - Wright, Louis B. The First Gentlemen of Virginia: Intellectual Qualities of the Early Colonial Ruling Class (1964) 1776 to 1850 - Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America (2004) - Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online - Beeman, Richard R. The Old Dominion and the New Nation, 1788–1801 (1972) - Dill, Alonzo Thomas. "Sectional Conflict in Colonial Virginia," Virginia Magazine of History and Biography 87 (1979): 300–315. - Lebsock, Suzanne D. A Share of Honor: Virginia Women, 1600–1945 (1984) - Link, William A. Roots of Secession: Slavery and Politics in Antebellum Virginia (2007) excerpt and text search - Majewski, John D. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War (2006) excerpt and text search - Risjord, Norman K. Chesapeake Politics, 1781–1800 (1978). in-depth coverage of Virginia, Maryland and North Carolina online edition - Selby, John E. The Revolution in Virginia, 1775–1783 (1988) - Shade, William G. Democratizing the Old Dominion: Virginia and the Second Party System 1824–1861 (1996) - Taylor, Alan. The Internal Enemy: Slavery and War in Virginia, 1772-1832 (2014). 624 pp online review - Tillson, Jr. Albert H. Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740–1789 (1991), - Varon; Elizabeth R. We Mean to Be Counted: White Women and Politics in Antebellum Virginia (1998) - Virginia State Dept. of Education. The Road to Independence: Virginia 1763–1783 online edition; 80pp; with student projects 1850 to 1870 - Blair, William. Virginia's Private War: Feeding Body and Soul in the Confederacy, 1861–1865 (1998) online edition - Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989) - Eckenrode, Hamilton James. 
The political history of Virginia during the Reconstruction, (1904) online edition - Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999) - Lankford, Nelson. Richmond Burning: The Last Days of the Confederate Capital (2002) - Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984) - Lowe, Richard. Republicans and Reconstruction in Virginia, 1856–70 (1991) - Maddex, Jr., Jack P. The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970). - Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War (2000) - Noe, Kenneth W. Southwest Virginia's Railroad: Modernization and the Sectional Crisis (1994) - Robertson, James I. Civil War Virginia: Battleground for a Nation (1993) 197 pages; excerpt and text search - Shanks, Henry T. The Secession Movement in Virginia, 1847–1861 (1934) online edition - Sheehan-Dean, Aaron Charles. Why Confederates fought: family and nation in Civil War Virginia (2007) 291 pages excerpt and text search - Simpson, Craig M. A Good Southerner: The Life of Henry A. Wise of Virginia (1985), wide-ranging political history - Wallenstein, Peter, and Bertram Wyatt-Brown, eds. Virginia's Civil War (2008) excerpt and text search - Wills, Brian Steel. The war hits home: the Civil War in southeastern Virginia (2001) 345 pages; excerpt and text search - Brundage, W. Fitzhugh. Lynching in the New South: Georgia and Virginia, 1880–1930 (1993) - Buni, Andrew. The Negro in Virginia Politics, 1902–1965 (1967) - Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989) - Ferrell, Henry C., Jr. Claude A. Swanson of Virginia: A Political Biography (1985) early 20th century - Freitus, Joe. Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014) online review - Gilliam, George H. "Making Virginia Progressive: Courts and Parties, Railroads and Regulators, 1890–1910." Virginia Magazine of History and Biography 107 (Spring 1999): 189–222. - Heinemann, Ronald L. Depression and the New Deal in Virginia: The Enduring Dominion (1983) - Heinemann, Ronald L. Harry Byrd of Virginia (1996) - Heinemann, Ronald L. "Virginia in the Twentieth Century: Recent Interpretations." Virginia Magazine of History and Biography 94 (April 1986): 131–60. - Hunter, Robert F. "Virginia and the New Deal," in John Braeman et al. eds. The New Deal: Volume Two – the State and Local Levels (1975) pp. 103–36 - Johnson, Charles. "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR - Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999) - Key, V. O., Jr. Southern Politics in State and Nation (1949), important chapter on Virginia in the 1940s - Lassiter, Matthew D., and Andrew B. Lewis, eds. The Moderates' Dilemma: Massive Resistance to School Desegregation in Virginia (1998) - Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984) - Link, William A. A Hard Country and a Lonely Place: Schooling, Society, and Reform in Rural Virginia, 1870–1920 (1986) - Martin-Perdue, Nancy J., and Charles L. Perdue Jr., eds. Talk about Trouble: A New Deal Portrait of Virginians in the Great Depression (1996) - Moger, Allen W. Virginia: Bourbonism to Byrd, 1870–1925 (1968) - Muse, Benjamin. Virginia's Massive Resistance (1961) - Pulley, Raymond H. 
Old Virginia Restored: An Interpretation of the Progressive Impulse, 1870–1930 (1968) - Shiftlett, Crandall. Patronage and Poverty in the Tobacco South: Louisa County, Virginia, 1860–1900 (1982), new social history - Smith, J. Douglas. Managing White Supremacy: Race, Politics, and Citizenship in Jim Crow Virginia (2002) - Sweeney, James R. "Rum, Romanism, and Virginia Democrats: The Party Leaders and the Campaign of 1928" Virginia Magazine of History and Biography 90 (October 1982): 403–31. - Wilkinson, J. Harvie, III. Harry Byrd and the Changing Face of Virginia Politics, 1945–1966 (1968) - Wynes, Charles E. Race Relations in Virginia, 1870–1902 (1961) Environment, geography, locales - Adams, Stephen. The Best and Worst Country in the World: Perspectives on the Early Virginia Landscape (2002) excerpt and text search - Gottmann, Jean. Virginia at mid-century (1955), by a leading geographer - Gottmann, Jean. Virginia in Our Century (1969) - Kirby, Jack Temple. "Virginia'S Environmental History: A Prospectus," Virginia Magazine of History and Biography, 1991, Vol. 99 Issue 4, pp. 449–488 - *Parramore, Thomas C., with Peter C. Stewart and Tommy L. Bogger. Norfolk: The First Four Centuries (1994) - Terwilliger, Karen. Virginia's Endangered Species (2001), esp. ch 1 - Sawyer, Roy T. America's Wetland: An Environmental and Cultural History of Tidewater Virginia and North Carolina (University of Virginia Press; 2010) 248 pages; traces the human impact on the ecosystem of the Tidewater region. - Jefferson, Thomas. Notes on the State of Virginia - Duke, Maurice, and Daniel P. Jordan, eds. A Richmond Reader, 1733–1983 (1983) - Eisenberg, Ralph. Virginia Votes, 1924–1968 (1971), all statistics - Encyclopedia Virginia - Virginia Historical Society short history of state, with teacher guide - Virginia Memory, digital collections and online classroom of the Library of Virginia - How Counties Got Started in Virginia - Union or Secession: Virginians Decide - Virginia and the Civil War - Civil War timeline - Boston Public Library, Map Center. Maps of Virginia, various dates.
Sean D. Pitman M.D. © December, 2006

Most scientists today believe that various places on this planet, such as Greenland, the Antarctic, and many other places, have some very old ice. The ice in these areas appears to be layered in a very distinctive annual pattern. In fact, this pattern is both visually and chemically recognizable and extends downward some 4,000 to 5,000 meters. What happens is that as the snow from a previous year is buried under a new layer of snow, it is compacted over time with the weight of each additional layer of snow above it. This compacted snow is called the “firn” layer. After several meters, these layers of snowy firn turn into layers of solid ice (note that 30cm of compacted snow compresses further into about 10cm of ice). These layers are much thinner on the Antarctic ice cap as compared to the Greenland ice cap since Antarctica averages only 5cm of "water equivalent" per year while Greenland averages over 50cm of water equivalent.1,2 Since these layers get even thinner as they are buried under more and more snow and ice, due to compression and lateral flow (see diagram), the thinner layers of the Antarctic ice cap become much harder to count than those of the Greenland ice cap at an equivalent depth. So, scientists feel that the most accurate historical information comes from Greenland, although much older ice comes from other, drier places. Still, the ice cores drilled in the Greenland ice cap, such as the American Greenland Ice Sheet Project (GISP2) and the European Greenland Ice Core Project (GRIP), are felt to be very old indeed - upwards of 160,000 years old.

(Back to Top)

The Visual Method

But how, exactly, are these layers counted? Obviously, at the surface the layers are easy to count visually – and in Greenland the layers are fairly easily distinguished at depths as great as 1,500 to 2,000m (see picture). Even here, though, there might be a few problems. How does one distinguish between a yearly layer and a sub-yearly layer of ice? For instance, it is not only possible but also likely for various large snowstorms and/or snowdrifts to lay down several distinct layers within a single year. As one group of ice core researchers cautions:

“Fundamentally, in counting any annual marker, we must ask whether it is absolutely unequivocal, or whether nonannual events could mimic or obscure a year. For the visible strata (and, we believe, for any other annual indicator at accumulation rates representative of central Greenland), it is almost certain that variability exists at the subseasonal or storm level, at the annual level, and for various longer periodicities (2-year, sunspot, etc.). We certainly must entertain the possibility of misidentifying the deposit of a large storm or a snow dune as an entire year or missing a weak indication of a summer and thus picking a 2-year interval as 1 year.” 7

Good examples of this phenomenon can be found in areas of very high precipitation, such as the more coastal regions of Greenland. It was in this area, 17 miles off the east coast of Greenland, that Bob Cardin and other members of his squadron had to ditch their six P-38s and two B-17s when they ran out of gas in 1942 - the height of WWII. Many years later, in 1981, several members of this original squadron decided to see if they could recover their aircraft. They flew back to the spot in Greenland where they thought they would find their planes buried under a few feet of snow. To their surprise, there was nothing there. Not even metal detectors found anything.
After many years of searching, with better detection equipment, they finally found the airplanes in 1988, three miles from their original location and under approximately 260 feet of ice! They went on to actually recover one of them (“Glacier Girl” – a P-38), which was eventually restored to her former glory.20 What is most interesting about this story, at least for the purposes of this discussion, is the depth at which the planes were found (as well as the speed at which the glacier moved). It took only 46 years to bury the planes in over 260 feet (~80 meters) of ice and move them some 3 miles from their original location. This translates into a little over 5 ½ feet (~1.7 meters) of ice, or around 17 feet (~5 meters) of compacted snow, per year, and about 100 meters of movement per year. In a telephone interview, Bob Cardin was asked how many layers of ice were above the recovered airplane. He responded by saying, “Oh, there were many hundreds of layers of ice above the airplane.” When told that each layer was supposed to represent one year of time, Bob said, “That is impossible! Each of those layers is a different warm spell – warm, cold, warm, cold, warm, cold.” 21 Also, the planes did not sink in the ice over time as some have suggested. Their density was less than that of the ice or snow, since they were not filled with snow but remained hollow. They were in fact buried by the annual snowfall over the course of almost 50 years.

Now obviously, this example does not reflect the actual climate of central Greenland or of central Antarctica. As a coastal region, it is exposed to a great many more storms and other sub-annual events that produce the 17 feet of snow per year. However, even now, large snowstorms also drift over central Greenland. And, in the fairly recent warm Hipsithermal period (~4 degrees warmer than today), the precipitation over central Greenland, and even Antarctica, was most likely much greater than it is today. So, how do scientists distinguish between annual layers and sub-annual layers? Visual methods, by themselves, seem rather limited – especially as the ice layers get thinner and thinner as one progresses down the column of ice.

(Back to Top)

Oxygen and Other Isotopes

Well, there are many other methods that scientists use to help them identify annual layers. One such method is based on the oxygen isotope variation between 16O and 18O (and 17O) as it relates to changes in temperature. For instance, water (H2O) with the heavier 18O isotope evaporates less rapidly and condenses more readily than water molecules that incorporate the lighter 16O isotope. Since 18O requires more energy (warmer weather) to be evaporated and transported in the atmosphere, more 18O is deposited in the ice sheets in the summer than in the winter. Obviously then, the changing ratios of these oxygen isotopes would clearly distinguish the annual cycles of summer and winter as well as longer periods of warm and cold (such as the ice age) – right? Not quite. One major drawback with this method is that these oxygen isotopes do not stay put. They diffuse over time. This is especially true in the “firn layer” of compacted snow before it turns into ice. So, from the earliest formation of these ice layers, the ratios of oxygen isotopes, as well as other isotopes, are altered by gravitational diffusion and so cannot be used as reliable markers of annual layers as one moves down the ice core column.
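To see in rough quantitative terms why this matters, consider a simple smoothing calculation (my own illustration, not taken from any of the cited studies). A seasonal isotope cycle recorded in a layer of thickness L, once smoothed by diffusion with an effective diffusion length sigma, keeps only a fraction exp(-2*pi^2*sigma^2/L^2) of its original amplitude. The diffusion length and layer thicknesses below are assumed round numbers chosen only to show the trend:

    # Illustrative sketch: how much of a seasonal isotope cycle survives Gaussian
    # smoothing by diffusion.  All numbers are assumptions for demonstration only,
    # not measurements from any particular ice core.
    import math

    def surviving_amplitude(layer_thickness_m, sigma_m):
        # Fourier damping factor for a sine wave of wavelength L smoothed by a
        # Gaussian of standard deviation sigma
        return math.exp(-2 * math.pi**2 * sigma_m**2 / layer_thickness_m**2)

    sigma = 0.08  # assumed effective diffusion length (meters of ice)
    for L in (0.50, 0.20, 0.05, 0.01):  # annual layer thickness, near-surface to deep ice
        print(f"layer {L*100:4.0f} cm -> {surviving_amplitude(L, sigma)*100:6.2f} % of the seasonal signal survives")

With numbers like these, thick near-surface layers keep most of their summer/winter contrast, while thin, deeply buried layers lose essentially all of it - which is the point being made about isotope stratigraphy in deep ice.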
One of the evidences given for the reality of this phenomenon is the significant oxygen isotope enrichment (versus present-day atmospheric oxygen ratios) found in 2,000-year-old ice from Camp Century, Greenland.3 Interestingly enough, this property of isotope diffusion has long been recognized as a problem. Consider the following comment made by Fred Hall back in 1989:

“The accumulating firn [ice-snow granules] acts like a giant columnar sieve through which the gravitational enrichment can be maintained by molecular diffusion. At a given borehole, the time between the fresh fall of new snow and its conversion to nascent ice is roughly the height of the firn layers in [meters] divided by the annual accumulation of new ice in meters per year. This results in conversion times of centuries for firn layers just inside the Arctic and Antarctic circles, and millennia for those well inside [the] same. Which is to say--during these long spans of time, a continuing gas-filtering process is going on, eliminating any possibility of using the presence of such gases to count annual layers over thousands of years.” 4

Lorius et al., in a 1985 Nature article, agreed, commenting that, “Further detailed isotope studies showed that seasonal delta 18O variations are rapidly smoothed by diffusion indicating that reliable dating cannot be obtained from isotope stratigraphy”.29 Jaworowski (work discussed further below in the "Biased Data" section) also notes the following:

The short-term peaks of d18O in the ice sheets have been ascribed to annual summer/winter layering of snow formed at higher and lower air temperatures. These peaks have been used for dating the glacier ice, assuming that the sample increments of ice cores represent the original mean isotopic composition of precipitation, and that the increments are in a steady-state closed system. Experimental evidence, however, suggests that this assumption is not valid, because of dramatic metamorphosis of snow and ice in the ice sheets as a result of changing temperature and pressure. At very cold Antarctic sites, the temperature gradients were found to reach 500°C/m, because of subsurface absorption of Sun radiation. Radiational subsurface melting is common in Antarctica at locations with summer temperatures below -20°C, leading to formation of ponds of liquid water, at a depth of about 1 m below the surface. Other mechanisms are responsible for the existence of liquid water deep in the cold Antarctic ice, which leads to the presence of vast sub-sheet lakes of liquid water, covering an area of about 8,000 square kilometers in inland eastern Antarctica and near Vostok Station, at near basal temperatures of -4 to -26.2°C. The sub-surface recrystallization, sublimation, and formation of liquid water and vapor disturb the original isotopic composition of snow and ice. . . Important isotopic changes were found experimentally in firn (partially compacted granular snow that forms the glacier surface) exposed to even 10 times lower thermal gradients. Such changes, which may occur several times a year, reflecting sunny and overcast periods, would lead to false age estimates of ice. It is not possible to synchronize the events in the Northern and Southern Hemispheres, such as, for example, CO2 concentrations in Antarctic and Greenland ice. This is, in part, the result of ascribing short-term stable isotope peaks of hydrogen and oxygen to annual summer/winter layering of ice, and using them for dating. . .
In the air from firn and ice at Summit, Greenland, deposited during the past ~200 years, the CO2 concentration ranged from 243.3 ppmv to 641.4 ppmv. Such a wide range reflects artifacts caused by sampling or natural processes in the ice sheet, rather than the variations of CO2 concentration in the atmosphere. A similar or greater range was observed in other studies of greenhouse gases in polar ice.50

(Back to Top)

Contaminated and Biased Data

According to Prof. Zbigniew Jaworowski, Chairman of the Scientific Council of the Central Laboratory for Radiological Protection in Warsaw, Poland, the ice core data is not only contaminated by procedural problems, it is also manipulated in order to fit popular theories of the day. Jaworowski first argues that ice cores do not fulfill the essential criteria of a closed system. For example, there is liquid water in ice, which can dramatically change the chemical composition of the air bubbles trapped between ice crystals.

"Even the coldest Antarctic ice (down to -73°C) contains liquid water. More than 20 physicochemical processes, mostly related to the presence of liquid water, contribute to the alteration of the original chemical composition of the air inclusions in polar ice. . . Even the composition of air from near-surface snow in Antarctica is different from that of the atmosphere; the surface snow air was found to be depleted in CO2 by 20 to 50 percent . . ."50

Beyond this, there is the problem of fractionation of gases, which, as a "result of various solubilities in water (CH4 is 2.8 times more soluble than N2 in water at 0°C; N2O, 55 times; and CO2, 73 times), starts from the formation of snowflakes, which are covered with a film of supercooled liquid."50

"[Another] one of these processes is formation of gas hydrates or clathrates. In the highly compressed deep ice all air bubbles disappear, as under the influence of pressure the gases change into the solid clathrates, which are tiny crystals formed by interaction of gas with water molecules. Drilling decompresses cores excavated from deep ice, and contaminates them with the drilling fluid filling the borehole. Decompression leads to dense horizontal cracking of cores [see illustration], by a well known sheeting process. After decompression of the ice cores, the solid clathrates decompose into a gas form, exploding in the process as if they were microscopic grenades. In the bubble-free ice the explosions form new gas cavities and new cracks. Through these cracks, and cracks formed by sheeting, a part of gas escapes first into the drilling liquid which fills the borehole, and then at the surface to the atmospheric air. Particular gases, CO2, O2 and N2 trapped in the deep cold ice start to form clathrates, and leave the air bubbles, at different pressures and depth. At the ice temperature of –15°C dissociation pressure for N2 is about 100 bars, for O2 75 bars, and for CO2 5 bars. Formation of CO2 clathrates starts in the ice sheets at about 200 meter depth, and that of O2 and N2 at 600 to 1000 meters. This leads to depletion of CO2 in the gas trapped in the ice sheets. This is why the records of CO2 concentration in the gas inclusions from deep polar ice show the values lower than in the contemporary atmosphere, even for the epochs when the global surface temperature was higher than now."50

No study has yet demonstrated that the content of greenhouse trace gases in old ice, or even in the interstitial air from recent snow, represents the atmospheric composition.
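To illustrate the solubility argument above with a toy calculation of my own (not from Jaworowski or the article), suppose the air in a bubble equilibrates with a small volume of liquid water. Using the relative solubilities quoted above (CH4 about 2.8 times, N2O about 55 times, and CO2 about 73 times that of N2), and assuming a round-number solubility for N2 and an assumed water-to-gas volume ratio, the more soluble gases are preferentially stripped from the gas phase:

    # Toy equilibrium-partitioning sketch (illustrative assumptions only).
    # Fraction remaining in the gas phase = 1 / (1 + L * Vw/Vg), where L is an
    # Ostwald-type solubility (volume of gas dissolved per volume of water).
    L_N2 = 0.02                 # assumed solubility of N2 near 0 C
    solubility = {"N2": L_N2, "CH4": 2.8 * L_N2, "N2O": 55 * L_N2, "CO2": 73 * L_N2}
    water_to_gas_volume = 0.5   # assumed ratio of liquid-water volume to bubble volume

    for gas, L in solubility.items():
        remaining = 1.0 / (1.0 + L * water_to_gas_volume)
        print(f"{gas:>3}: {remaining*100:5.1f} % remains in the gas phase")

Even this crude model leaves the CO2/N2 ratio of the remaining bubble air at little more than half its starting value, which is the kind of depletion being described.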
The ice core data from various polar sites are not consistent with each other, and there is a discrepancy between these data and geological climatic evidence. One such example is the discrepancy between the classic Antarctic Byrd and the Vostok ice cores, where an important decrease in the CO2 content in the air bubbles occurred at the same depth of about 500 meters, but at which the ice ages differ by about 16,000 years. In the approximately 14,000-year-old part of the Byrd core, a drop in the CO2 concentration of 50 ppmv was observed, but in similarly old ice from the Vostok core, an increase of 60 ppmv was found. In about 6,000-year-old ice from Camp Century, Greenland, the CO2 concentration in air bubbles was 420 ppmv, but was 270 ppmv in similarly old ice from Byrd, Antarctica . . . One can also note that the CO2 concentration in the air bubbles decreases with the depth of the ice for the entire period between the years 1891 and 1661, not because of any changes in the atmosphere, but along the increasing pressure gradient, which is probably the result of clathrate formation, and the fact that the solubility of CO2 increases with depth.

If this isn't already bad enough, Jaworowski proceeds to argue that the data, as contaminated as it is, has been manipulated to fit popular theories of the day. Until 1985, the published CO2 readings from the air bubbles in the pre-industrial ice ranged from 160 to about 700 ppmv, and occasionally even up to 2,450 ppmv. After 1985, high readings disappeared from the publications!50

Another problem is the notion that lead levels in ice cores correlate with the increased use of lead by various more and more modern civilizations, such as the Greeks and Romans, and then during European and American industrialization. A potential problem with this notion is Jaworowski's claim to have "demonstrated that in pre-industrial period the total flux of lead into the global atmosphere was higher than in the 20th century, that the atmospheric content of lead is dominated by natural sources, and that the lead level in humans in Medieval Ages was 10 to 100 times higher than in the 20th century."50

Beyond this potential problem, there is also the problem of heavy metal contamination of the ice cores during the drilling process. Numerous studies on radial distribution of metals in the cores reveal an excessive contamination of their internal parts by metals present in the drilling fluid. In these parts of cores from the deep Antarctic ice, concentrations of zinc and lead were higher by a factor of tens or hundreds of thousands than in the contemporary snow at the surface of the ice sheet. This demonstrates that the ice cores are not a closed system; the heavy metals from the drilling fluid penetrate into the cores via micro- and macro-cracks during the drilling and the transportation of the cores to the surface.50

Professor Jaworowski summarizes with a most interesting statement:

It is astonishing how credulously the scientific community and the public have accepted the clearly flawed interpretations of glacier studies as evidence of anthropogenic increase of greenhouse gases in the atmosphere. Future historians can use this case as a warning about how politics can negatively influence science.50

While this statement is most certainly a scathing rebuke of the scientific community as it stands, I would argue that Jaworowski doesn't go far enough.
He doesn't consider that the problems he so carefully points out as the basis for his own doubts concerning the basis of global warming may also pose significant problems for the validity of using ice cores for reliably assuming the passage of vast spans of time supposedly recorded in the layers of large ice sheets.

(Back to Top)

So, it seems as though isotope ratios are severely limited, if not entirely worthless, as yearly markers for ice core dating beyond a very short period of time. However, there are several other dating methods, such as the correlation of impurities in the layers of ice to known historical events – such as known volcanic eruptions. After a volcano erupts, the ash and other elements from the eruption fall out and are washed out of the atmosphere by precipitation. This fallout leaves “tephra” (microscopic shards of glass from the ash fallout – see picture), sulfuric acid, and other chemicals in the snow and subsequent ice from that year. Sometimes the tephra fallout can be specifically matched via physical and chemical analysis to a known historical eruption.

This analysis begins when electrical conductivity measurements (ECM) are made along the entire length of the ice core. Increases in electrical conductivity indicate the presence of increased acid content. When a volcano erupts, it spews out a great deal of sulfur-rich gases. These are converted in the atmosphere to sulfuric acid aerosols, which end up in the layers of ice and increase the ECM readings. The higher the acidity, the better the conduction. Sections of ice from a region with an acidic spike are then melted and filtered through a capillary-pore membrane filter. An automated scanning electron microscope (SEM), equipped for x-ray microanalysis, is used to determine the size, shape and elemental composition of hundreds of particles on the filter. Cluster analysis, using a multivariate statistical routine that measures the elemental compositions of sodium, magnesium, aluminum, silicon, potassium, calcium, titanium and iron, is done to identify the volcanic “signature” of the tephra particles in the sample. Representative tephra particles are re-located for photomicrography and more detailed chemical analysis. Then tephra is collected from near the volcanic eruption that may have produced the fallout in the core and is ground into a fine powder, dispersed in liquid, and filtered through a capillary-pore membrane. Then automated SEM and chemical analysis is used on this known tephra sample to find its chemical signature and compare it with the unknown sample found in the ice core - to see if there is a match.22

Tephra from several well-known historical volcanoes has been analyzed in this way. For example, Crater Lake in Oregon was once a much larger mountain (Mt. Mazama) before it blew up as a volcano. In the mid-1960s scientists dated this massive explosion, with the use of radiocarbon dating methods, at between 6,500 and 7,000 years before present (BP). Then, in 1979, Scientific American published an article about a pair of sagebrush bark sandals that were found just under the Mazama tephra at Fort Rock Cave. These sandals were carbon-14 dated to around 9,000 years BP. Even though this date was several thousand years older than expected, the article went on to say that the bulk of the evidence still put the most likely eruption date of Mt. Mazama at around 7,000 years BP.23,24
Later, a “direct count” of the layers in the ice core obtained from Camp Century, Greenland put the date of the Mazama tephra at 6,400±110 years BP.23,25 Then, at the 16th INQUA conference held June 2003, in Reno, Nevada (attended by over 1,000 scientists studying the Quaternary period), Kevin M. Scott noted in an abstract that the Mazama Park eruptive period had been “newly dated at 5,600-5,900 14C yrs BP.” Scott went on to note that this new date “includes collapses and eruptions previously dated throughout a range of 4,300 to 6,700 14C yrs BP.” 26 At this point it should also be noted that the carbon-14 dating method is being calibrated by the Greenland ice cores, so it is circular to argue that the Greenland ice core dates have been validated by carbon-14 analysis.26

Another famous volcano, the Mediterranean volcano Thera, was so large that it effectively destroyed the Minoan (Santorini) civilization. This is thought to have happened in the year 1628 B.C., since tree rings from that region showed a significant disruption matching that date. Of course, such an anomaly was looked for in the ice cores. As predicted, layers in the "Dye 3" Greenland ice core showed such a major eruption in 1645 B.C., plus or minus 20 years. This match was used to confirm or calibrate the ice core data as recently as 2003. Interestingly enough though, the scientists did not have the budget at the time to do a systematic search throughout the whole ice core for such large anomalies that would also match a Thera-sized eruption. Now that such detailed searches have been done, many such sulfuric acid peaks have been found at numerous dates within the 18th, 17th, 16th, 15th, and 14th centuries B.C. 35 Beyond this, tephra analyzed from the "1620s" ice core layers did not match the volcanic material from the Thera volcano. The investigators concluded:

"Although we cannot completely rule out the possibility that two nearly coincident eruptions, including the Santorini eruption, are responsible for the 1623 BC signal in the GISP ice core, these results very much suggest that the Santorini eruption is not responsible for this signal. We believe that another eruption led not only to the 1623 BC ice core signal but also, by correlation, to the tree-ring signals at 1628/1627 BC." 36

Then, as recently as March of 2004, Pearce et al. published a paper declaring that another volcano, the Aniakchak Volcano in Alaska, was the true source of the tephra found in the GRIP ice core at the "1645 ± 4 BC layer." These researchers went on to say that, "The age of the Minoan eruption of Santorini, however, remains unresolved." 37 So, here we have a clearly erroneous match between a volcanic eruption and both tree rings and ice core signals. What is most curious, however, is that many scientists still declare that ice cores are solidly confirmed by such means.

Beyond this, as flexible as the dating here seems to be, the Mt. Mazama and Thera eruptions are still about the oldest eruptions that can be identified in the Greenland ice cores. There are two reasons for this. One reason is that below 10,000 layers or so in the ice core the ice becomes too alkaline to reliably identify the acid spikes associated with volcanic eruptions.5 Another reason is that the great majority of volcanic eruptions throughout history were not able to get very much tephra into the Greenland ice sheet. So, the great majority of volcanic signals are detected via their acid signal alone. This presents a problem.
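Before looking at the eruption chronologies, a back-of-envelope probability sketch (my own, not a calculation from the sources cited) shows why an acid spike near an assumed eruption date is weak evidence by itself. If detectable acid spikes arrive at some average rate, modeled here as a Poisson process, the chance that at least one of them lands by coincidence within a +/- 20 year window (the stated uncertainty of the "Thera" signal above) is already high for quite modest spike rates; both the rates and the window are assumed values for illustration:

    # Probability that at least one unrelated acid spike falls inside the dating
    # window of a presumed eruption, assuming spikes occur as a Poisson process.
    # Spike rates and window width are illustrative assumptions.
    import math

    def chance_of_coincidence(spikes_per_year, window_years):
        expected = spikes_per_year * 2 * window_years   # expected spikes in +/- window
        return 1.0 - math.exp(-expected)                # P(at least one)

    for spikes_per_year in (0.05, 0.1, 0.2):
        p = chance_of_coincidence(spikes_per_year, window_years=20)
        print(f"{spikes_per_year:4.2f} detectable spikes/yr -> {p*100:5.1f} % chance of a purely coincidental 'match'")

At even one detectable spike per decade, a coincidental "hit" inside such a window is all but guaranteed.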
A review of four eruption chronologies constructed since 1970 illustrates this problem quite nicely. In 1970, Lamb published an eruption chronology for the years 1500 to 1969. The work recorded 380 known historical eruptions. Ten years later, Hirschboek published a revised eruption chronology that recorded 4,796 eruptions for the same period – a very significant increase from Lamb’s figure. One year later, in 1981, Simkin et al. raised the figure to 7,664 eruptions and Newhall et al. increased the number further a year later to 7,713. It is also interesting to note that Simkin et al. recorded 3,018 eruptions between 1900 and 1969, but only 11 eruptions were recorded from between 1 and 100 AD. So obviously, as one goes back through recent history, the number of known volcanic eruptions drops off dramatically, though they were most certainly still occurring – just without documentation. Based on current rates of volcanic activity, an expected eruption rate for the past several thousand years comes to around 30,000 eruptions per 1,000 years.25

With such a high rate of volcanic activity, including many rather large volcanoes, how are scientists so certain that a given acid spike on ECM is so clearly representative of any particular volcano – especially when the volcanic eruption in question happened more than one or two thousand years ago? The odds that at least one volcanic signal will be found in an ice core within a very small “range of error” around any supposed historical eruption are extremely good - even for large volcanoes. Really, is this so far from a self-fulfilling prophecy? How then can the claim be made that historical eruptions validate the dating of ice cores to any significant degree?

“The desire to link such phenomena [volcanic eruptions] and the stretching of the dating frameworks involved is an attractive but questionable practice. All such attempts to link (and hence infer associations between) historic eruptions and environmental phenomena and human "impacts", rely on the accurate and precise association in time of the two events. . . A more general investigation of eruption chronologies constructed since 1970 suggest that such associations are frequently unreliable when based on eruption data gathered earlier than the twentieth century.” 25

(Back to Top)

So, if volcanic markers are generally unreliable and completely useless beyond a few thousand years, how are scientists so sure that their ice core dating methods are meaningful? Well, one of the most popular methods used to distinguish annual layers is one that measures the fluctuations in ice core dust. Dust is alkaline and shows up as a low ECM reading. During the dry northern summer, dust particles from Arctic Canada and the coastal regions of Greenland are carried by wind currents and are deposited on the Greenland ice sheet. During the winter, this area is not so dusty, so less dust is deposited during the winter as compared to the summer. This annual fluctuation of dust is thought to be the most reliable of all the methods for the marking of the annual cycle - especially as the layers start to get thinner and thinner as one moves down the column of ice.27 And, it certainly would be one of the most reliable methods if it were not for one little problem known as “post-depositional particle migration”.

Zdanowicz et al., from the University of New Hampshire, did real-time studies of modern atmospheric dust deposition in the 1990s on the Penny Ice Cap, Baffin Island, Arctic Canada.
Their findings are most interesting indeed: “After the snow deposition on polar ice sheets, not all the chemical species preserve the original concentration values in the ice. In order to obtain reliable past-environmental information by firn and ice cores, it is important to understand how post-depositional effects can alter the chemical composition of the ice. These effects can happen both in the most superficial layers and in the deep ice. In the snow surface, post-depositional effects are mainly due to re-emission in the atmosphere and we show here that chloride, nitrate, methane-sulphonic acid (MSA) and H2O2 [hydrogen peroxide] are greatly affected by this process; moreover, we show how the mean annual snow accumulation rate influences the re-emission extent. In the deep ice, post-depositional effects are mainly due to movement of acidic species and it is interesting to note the behavior of some substances (e.g. chloride and nitrate) in acidic (high concentrations of volcanic acid gases) and alkaline (high dust content) ice layers . . . We failed to identify any consistent relationship between dust concentration or size distribution, and ionic chemistry or snowpack stratigraphy.” 28 This study goes on to reveal that each yearly cycle is marked not by one distinct annual dust concentration as is normally assumed when counting ice core layers, but by two distinct dust concentration peaks – one in late winter-spring and another one in the late summer-fall. So, each year is initially marked by “two seasonal maxima of dust deposition.” By itself, this finding cuts in half those ice core dates that assume that each year is marked by only one distinct deposition of dust. This would still be a salvageable problem if the dust actually stayed put once it was deposited in the snow. But, it does not stay put – it moves! “While some dust peaks are found to be associated with ice layers or Na [sodium] enhancements, others are not. Similarly, variations of the NMD [number mean diameter – a parameter for quantifying relative changes in particle size] and beta cannot be systematically correlated to stratigraphic features of the snowpack. This lack of consistency indicates that microparticles are remobilized by meltwater in such a way that seasonal (and stratigraphic) differences are obscured.” 28 This remobilization of the microparticles of dust in the snow was found to affect both fine and coarse particles in an uneven way. The resulting “dust profiles” displayed “considerable structure and variability with multiple well-defined peaks” for any given yearly deposit of snow. The authors hypothesized that this variability was most likely caused by a combination of factors to include “variations of snow accumulation or summer melt and numerous ice layers acting as physical obstacles against particle migration in the snow.” The authors suggest that this migration of dust and other elements limits the resolution of these methods to “multiannual to decadal averages”.28 Another interesting thing about the dust found in the layers of ice is that those layers representing the last “ice age” contain a whole lot of dust – up to 100 times more dust than is deposited on average today.19 The question is, how does one explain a hundred times as much Ice Age dust in the Greenland icecap with gradualistic, wet conditions? 
There simply are no unique dust sources on Earth to account for 100 times more dust during the 100,000 years of the Ice Age, particularly when this Ice Age was thought to be associated with a large amount of precipitation/rain – which would only cleanse the atmosphere more effectively. How can high levels of precipitation be associated with an extremely dusty atmosphere for such a long period of time? Isn’t this a contradiction from a uniformitarian perspective? Perhaps a more recent catastrophic model has greater explanatory value?

Other dating methods, such as 14C, 36Cl and other radiometric methods, are subject to this same problem of post-depositional diffusion as well as contamination – especially when the summer melt sends water percolating through the tens and hundreds of layers found in the snowy firn before the snow turns to ice. Then, even after the snow turns to ice, diffusion is still a big problem for these molecules. They simply do not stay put. More recent publications by Rempel et al., in Nature (May, 2001),32 also quoted by J.W. Wettlaufer (University of Washington) in a paper entitled, "Premelting and anomalous diffusion in ancient ice",31 suggest that chemicals that have been trapped in ancient glacial or polar ice can move substantial distances within the ice (up to 50cm, even in deeper ice where layers get as thin as 3 or 4 millimeters). Such mobility is felt by these scientists to be "large enough to offset the resolution at which the core was examined and alter the interpretation of the ice-core record."

What happens is that, "Substances that are climate signatures - from sea salt to sulfuric acid - travel through the frozen mass along microscopic channels of liquid water between individual ice crystals, away from the ice on which they were deposited. The movement becomes more pronounced over time as the flow of ice carries the substances deeper within the ice sheet, where it is warmer and there is more liquid water between ice crystals. . . The Vostok core from Antarctica, which goes back 450,000 years, contains even greater displacement [as compared to the Greenland ice cores] because of the greater depth." That means that past analyses of historic climate changes gleaned from ice core samples might not be all that accurate. Wettlaufer specifically notes that, "The point of the paper is to suggest that the ice core community go back and redo the chemistry."31,32

Of course, these scientists do not think that such problems are significant enough to destroy the usefulness of ice cores as a fairly reliable means of determining historical climate changes. But, it does make one start to wonder how much confidence one can actually have in the popular interpretations of what ancient ice really means.

(Back to Top)

To add to the problems inherent in ice core dating is the significant amount of evidence that the world was a much warmer place just a few thousand years ago. These higher temperatures of the “Middle Holocene” began to fade about 4,000 years ago and the ice sheet of the Arctic Basin began to reappear about 3,000 years ago. But, during this warm period what was the environment like? It seems that in the fairly recent past the vegetation zones were much closer to the poles than they are today. The remains of some plant species can be found as far as 1,000km farther north than they are found today. Forests once extended right up to the Barents Coast and the White Sea. The European tundra zones were non-existent.
In northern Asia, peat-moss was discovered on Novaya Zemlya. And, this was no short-term aberration in the weather. This warming trend seems to have lasted for quite a while. Consider the following comments from Borisov, a long-time meteorology and climatology professor at Leningrad State University:

“During the last 18,000 years, the warming was particularly appreciable during the Middle Holocene. This covered the time period of 9,000 to 2,500 years ago and culminated about 6,000 to 4,000 years ago, i.e., when the first pyramids were already being built in Egypt . . . The most perturbing questions of the stage under consideration are: Was the Arctic Basin iceless during the culmination of the optimum?”8

Professor Borisov asks a very interesting question. What would happen to the ice sheets during several thousand years of a “hypsithermal” warming if it really was some 5°C warmer than it is today? If the ice sheets covering much of North America and Europe melted away, what happened to the ice covering Greenland and the Antarctic? Consider what would happen if the entire Arctic Ocean went without ice for most of the year owing to a warmer and therefore longer spring, summer, and fall. Certainly there would be more snowfall, but this would not be enough to prevent the warm rainfall from removing the snow cover and the ice itself from Greenland’s ice sheet. A marine climate would create a more temperate environment because water vapor over the Arctic region would act as a greenhouse gas, holding the day’s heat within the atmosphere.

Borisov goes on to point out that a 1°C increase in average global temperature results in a more dramatic increase in temperature at the poles and extreme latitudes than it does at the equator and more tropical zones. For example, between the years 1890 and 1940, there was a 1 to 2 degree increase in the average global temperature. During this same time the mean annual temperature in the Arctic basin rose 7°C. This change was reflected more in warmer winters than in warmer summers. For instance, the December temperature rose almost 17°C while the summer temperature changed hardly at all. Likewise, the average winter temperature for Spitsbergen and Greenland rose between 6 to 13°C during this time. 8 Along these same lines, an interesting article published in the journal Nature 30 years ago by R. L. Newson showed that, without the Arctic ice cap, the winter temperatures of Canada and Siberia would rise 20° to 50°F while over the Arctic Ocean the temperature would increase by a dramatic 35° to 70°F! 11 M. Warshaw and R. Rapp published similar results in the Journal of Applied Meteorology - using a different circulation model.12

Of course, the real question here is, would a 5°C increase in average global temperature melt the ice sheets of Greenland or even Antarctica? Borisov argued that this idea is not all that far-fetched. He notes that measurements carried out on Greenland’s northeastern glaciers as far back as the early 1950s showed that they were losing ice far faster than it was being formed. 8 This northeastern glacier was in fact in “ablation” as a result of just a 1°C rise in average global temperature. Remember that this melting is happening even though this increase in global temperature is still much cooler relative to the Middle Holocene heat wave - which supposedly lasted several thousand years.
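As a rough consistency check (my own back-of-envelope, using round numbers in the same spirit as the figures quoted elsewhere in this article), it is worth asking how long an ice sheet roughly 3,000 meters thick could survive if net loss - ablation minus accumulation - were sustained at various rates during a warm period lasting several thousand years:

    # Survival time of an ice sheet under a sustained net loss rate.
    # Thickness and loss rates are assumed round numbers for illustration.
    ice_thickness_m = 3000        # roughly the average thickness of the Greenland sheet
    for net_loss_m_per_year in (0.5, 1.5, 5.0, 10.0):
        years = ice_thickness_m / net_loss_m_per_year
        print(f"net loss {net_loss_m_per_year:4.1f} m/yr -> ice gone in about {years:6.0f} years")

Even at half a meter of net loss per year, the sheet would be gone in about 6,000 years - comparable to the duration usually assigned to the Middle Holocene warm period.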
Since that time, research done by Carl Boggild of the Geological Survey of Denmark and Greenland (GEUS), involving data from a network of 10 automatic monitoring stations, showed that large portions of the Greenland ice sheet are melting up to 10 times faster than earlier research had indicated. In 2000, research indicated that the Greenland ice was melting at a conservative estimate of just over 50 cubic kilometers of ice per year. Greenland covers 840,000 square miles with about 85% of that area covered by ice up to 2 miles thick. Do the math. With an exponentially increasing melt rate, Greenland will be green within a surprisingly, even shockingly, short period of time if the melt continues like it has. Local towns are beginning to sink because of the melting permafrost. Even potatoes are starting to grow in Greenland. This has never happened before in the memory of those who live there now.

In April of 2000, Lars Smedsrud and Tore Furevik wrote in an article in Cicerone magazine, published by the Norwegian Climate Research Centre (CICERO), that, "If the melting of the ice, both in thickness and surface area, does not slow, then it is an established fact that the arctic ice will disappear during this century." This is based on the fact that the Arctic ice has thinned by some 40% between the years 1980 and 2000.

This past summer (2006), explorers Lonnie Dupre and Eric Larsen made a very dangerous and most interesting trek to the North Pole. As they approached the Pole they found open water, a lot of icy slush, and ice so thin it wouldn't support their weight. "We expected to see the ice get better, get flatter, as we got closer to the pole. But the ice was busted up," Dupre said. "As we got closer to the pole, we had to paddle our canoes more and more."51 Walt Meier, a researcher at the U.S. National Snow and Ice Data Center in Boulder, Colorado, commented on these interesting findings, noting that the melting of the Arctic ice cap in summer is progressing more rapidly than satellite images alone have shown. Given recent data such as this, climate researchers at the U.S. Naval Postgraduate School in California predict the complete absence of summer ice on the Arctic Ocean by 2030 or sooner.51 That prediction is dramatically different than what scientists were predicting just a few years ago - that the ice would still be there by the end of the century. Consider how a complete loss of Arctic ice would affect the temperature of surrounding regions - like Greenland. Could Greenland long retain its ice without the Arctic polar ice?

If this is not convincing enough, consider that since the year 2000, glaciers around the world have continued melting at greater and greater rates - exponentially greater rates. Alaska's glaciers are receding at twice the rate previously thought, according to a new study published in the July 19, 2002 issue of Science. Around the globe, sea level is about 6 inches higher than it was just 100 years ago, and the rate of rise is increasing quite dramatically. Leading glaciologist Dr. Mark Meier remarked in February of 2002 that the accepted estimates of sea level rise were underestimated, due to the rapid retreat of mountain glaciers.44 The year before, at the American Association for the Advancement of Science (AAAS) meeting in San Francisco on February 25, 2001, Professor Lonnie Thompson, from Ohio State University's Department of Geological Sciences, presented a paper entitled, "Disappearing Glaciers - Evidence of a Rapidly Changing Earth." Dr.
Thompson has completed 37 expeditions since 1978 to collect and study perhaps the world's largest archive of glacial ice cored from the Himalayas, Mount Kilimanjaro in Africa, the Andes in South America, the Antarctic and Greenland. Prof. Thompson reported to AAAS that at least one-third of the massive ice field on top of Tanzania's Mount Kilimanjaro has melted in only the past twelve years. Further, since the first mapping of the mountain's ice in 1912, the ice field has shrunk by 82%. By 2015, there will be no more "snows of Kilimanjaro."

In Peru, the Quelccaya ice cap in the Southern Andes Mountains is at least 20% smaller than it was in 1963. One of the main glaciers there, Qori Kalis, has been melting at the astonishing rate of 1.3 feet per day. Back in 1963, the glacier covered 56 square kilometers. By 2000, it was down to less than 44 square kilometers and now there is a new ten-acre lake. Its melt rate has been increasing exponentially, and at its current rate it will be entirely gone between 2010 and 2015, the same time that Kilimanjaro dries. The exponential nature of this worldwide melt is dramatically illustrated by aerial photographs taken of various glaciers. A series of photographs of the Qori Kalis glacier in Peru are available from 1963. Between 1963 and 1978 the rate of melt was 4.9 meters per year. Between 1978 and 1983 it was 8 meters per year. This increased to 14 meters per year by 1993 and to 30 meters per year by 1995, to 49 meters per year by 1998 and to a shocking 155 meters per year by 2000. By 2001 it was up to about 200 meters per year. That's almost 2 feet per day. Dr. Thompson exclaimed, "You can literally sit there and watch it retreat."

Then, in 2001, NASA scientists published a major study, based on satellite and aircraft observations, showing that large portions of the Greenland ice sheet, especially around its margins, were thinning at a rate of roughly 1 meter per year. Other scientists, such as Carl Boggild and his team, have recorded thinning Greenland ice sheets at rates as fast as 10 or even 12 meters per year. It is quite a shock to scientists to realize that the data from satellite images shows that various Greenland glaciers are thinning and retreating in an exponential manner - by an "astounding" 150 meters in thickness in just the last 15 years.43

In both 2002 and 2003, the Northern Hemisphere registered record low ocean ice cover. NASA's satellite data show the Arctic region warmed more during the 1990s than during the 1980s, with Arctic sea ice now melting by up to 15 percent per decade. Satellite images show the ice cap covering the Northern pole has been shrinking by 10 percent per decade over the past 25 years.45 On the opposite end of the globe, sea ice floating near Antarctica has shrunk by some 20 percent since 1950. One of the world's largest icebergs, named B-15, which measured nearly 10,000 square kilometers (4,000 square miles), or half the size of New Jersey, calved off the Ross Ice Shelf in March 2000. The Larsen Ice Shelf has largely disintegrated within the last decade, shrinking to 40 percent of its previously stable size.45 Then, in 2002, the Larsen B ice shelf collapsed. Almost immediately after, researchers observed that nearby glaciers started flowing a whole lot faster - up to 8 times faster!
This marked increase in glacial flow also resulted in dramatic drops in glacial elevations, lowering them by as much as 38 meters (124 feet) in just 6 months.48 Scientists monitoring a glacier in Greenland, the Kangerdlugssuaq glacier, have found that it is moving into the sea 3 times faster than just 10 years ago. Measurements taken in 1988 and in 1996 show the glacier was moving at a rate of between 3.1 and 3.7 miles per year. The latest measurements, taken in the summer of 2005, showed that it is now moving at 8.7 miles a year. Satellite measurements of the Kangerdlugssuaq glacier show that, as well as moving more rapidly, the glacier's boundary is shrinking dramatically. Kangerdlugssuaq is about 1,000 meters (3,280ft) thick, about 4.5 miles wide, extends for more than 20 miles into the ice sheet and drains about 4 per cent of the ice from the Greenland ice sheet. The realization of the rapid melting of such a massive glacier, which was fairly stable until quite recently, came as quite a shock to the scientific community. Professor Hamilton expressed this general surprise in the following comment:

"This is a dramatic discovery. There is concern that the acceleration of this and similar glaciers and the associated discharge of ice is not described in current ice-sheet models of the effects of climate change. These new results suggest the loss of ice from the Greenland ice sheet, unless balanced by an equivalent increase in snowfall, could be larger and faster than previously estimated. As the warming trend migrates north, glaciers at higher latitudes in Greenland might also respond in the same way as Kangerdlugssuaq glacier. In turn, that could have serious implications for the rate of sea-level rise."46

The exponential increase in glacial speed is now thought to be due to increased surface melting. The liquid water formed on the surface during summer melts collects into large lakes. The water pressure generated by these surface lakes forces water down through the icy layers all the way to the underlying bedrock. It then spreads out, lifting the glacier off the bedrock on a lubricating film of liquid water. Obviously, with such lubrication, the glacier can then flow at a much faster rate - exponentially faster. This increase in speed also makes for a thinner glacier since the glacier becomes more stretched out.46 For example, the giant Jakobshavn glacier - at four miles wide and 1,000 feet thick the biggest on the landmass of Greenland - is now moving towards the sea at a rate of 113 feet a year; the "normal" annual speed of a glacier is just one foot. Until now, scientists believed the ice cap would take 1,000 years to melt entirely, but Ian Howat, who is working with Professor Tulaczyk, says the new developments could "easily" cut this time "in half". 49

It seems to me that even this new estimate might be just a bit generous. It seems that no one predicted this. No one thought it possible and scientists are quite shocked by these facts. The amazingly fast rate of glacial retreat simply goes against all the prevailing models of glacial development and change - which generally involve many thousands of years - even tens or hundreds of thousands of years and sometimes millions of years. Who would have thought that such changes could happen in mere decades? Beyond this, there are many other evidences of a much warmer climate in Greenland and the Arctic basin in the fairly recent past.
For example, when Greenland’s seas were 10 meters higher than they are today (during the last hipsithermal), warm-water mollusks could be found there that today live over 500 to 750 miles farther south. Also, the remains of land vertebrates, such as various reptiles, are found in Denmark and Scandinavia, when they live only in Mediterranean areas today.13

“Additional evidence is given by...peats and relics in Greenland--the northern limits may have been displaced northward through several degrees of latitude...and [by] other plants in Novaya Zemlya, and by peat and ripe fruit stones [fruit pits]...in Spitsbergen that no longer ripen in these northern lands. Various plants were more generally distributed in Ellesmere [Island and] birch grew more widely in Iceland....” 13

The point is that these types of plants and these types of large trees should never be able to grow on islands north of the Arctic Circle. Back in 1962 Ivan T. Sanderson noted that, “Pieces of large tree trunks of the types [found] . . . do not and cannot live at those latitudes today for purely biological reasons. The same goes for huge areas of Siberia.”14 Also, as previously noted, fruit does not ripen during short autumns at these high latitudes. Therefore, the spring and summer seasons had to be much longer for any seeds from these temperate trees to germinate and grow. Likewise, the peats that have been found on Greenland require temperate, humid climates to form. Peat formation requires climates that allow for the partial decomposition of vegetable remains under conditions of deficient drainage.13 Also, peat formations require at least 40 inches of rainfall a year and a mean temperature above 32°F. 15

In addition, there were temperate forests on the Seward Peninsula, in Alaska, and the Tuktoyaktuk Peninsula, in Canada’s frigid Inuvik Region, facing the Beaufort Sea and the Arctic Ocean, and at Dubawnt Lake, in Canada’s frozen Keewatin Region, west of the Hudson Bay.16 And yet, somehow, it is believed that Greenland’s icecap survived several thousand years in such a recently temperate climate, but how? What we have are temperate forests and warm waters near and within the Arctic Circle and Ocean all across the northern boundary from Siberia to Norway and from Alaska to the Hudson Bay. These temperate conditions existed for thousands of years both east and west of Greenland and at all the Greenland latitudes around the world - and these conditions had not yet ended by the time the Egyptians were building their pyramids! This, of course, would explain why mammoths and other large animals were able to live, during this period, throughout these northerly regions.

(Back to Top)

Mammoths are especially interesting since millions of them recently lived (within the last 10-20 thousand years according to mainstream science) well within the Arctic Circle. Although popularly portrayed as living in cold, barren environments and occasionally dying in local events, such as mudslides or entrapment in soft riverbanks, the evidence may actually paint a very different picture if studied from a different perspective.
The well-preserved "mummified" remains of many mammoths are still being found, all jumbled together with those of many other types of warmer-weather animals such as the horse, lion, tiger, leopard, bear, antelope, camel, reindeer, giant beaver, musk sheep, musk ox, donkey, ibex, badger, fox, wolverine, voles, squirrels, bison, rabbit and lynx, as well as a host of temperate plants, within the Arctic Circle - along the same latitudes as Greenland all around the globe.39 The problem with the popular belief that millions of mammoths lived in very northerly regions around the entire globe, with estimates of up to 5 million living along a 600 mile stretch of Siberian coastline alone,39 is that these mammoths were still living in these regions within the past 10,000 to 20,000 years. Carbon-14 dating of Siberian mammoths has returned dates as recent as 9,670 ± 40 years before present (BP).41

So, why is this a problem? Contrary to popular imagination, these creatures were not surrounded by the extremely cold, harsh environments that exist in these northerly regions today. Rather, they lived in rather lush steppe-type conditions, with evidence of large fruit-bearing trees, abundant grasslands, and the very large numbers and types of grazing animals already mentioned, only to be quickly and collectively annihilated over huge areas by rapid weather changes. Clearly, the present is far, far different than even the relatively recent past must have been.

Sound too far-fetched? Consider that the last meal of the famous Berezovka mammoth (see picture), found north of the Arctic Circle, consisted of "twenty-four pounds of undigested vegetation" 39 including over 40 types of plants, many no longer found in such northerly regions.43 The enormous quantity of food it takes to feed an elephant of this size (~300kg per day) is, by itself, very good evidence for a much different climate in these regions than exists today.39 Consider the following comment by Zazula et al., published in the June 2003 issue of Nature:

"This vegetation [Beringia: Includes an area between Siberia and Alaska as well as the Yukon Territory of Canada] was unlike that found in modern Arctic tundra, which can sustain relatively few mammals, but was instead a productive ecosystem of dry grassland that resembled extant subarctic steppe communities . . . Abundant sage (Artemisia frigida) leaves, flowers from Artemisia sp., and seeds of bluegrass (Poa), wild-rye grass (Elymus), sedge (Carex) and rushes (Juncus/Luzula) . . . Seeds of cinquefoil (Potentilla), goosefoot (Chenopodium), buttercup (Ranunculus), mustard (Draba), poppy (Papaver), fairy-candelabra (Androsace septentrionalis), chickweed (Cerastium) and campion (Silene) are indicative of diverse forbs growing on dry, open, disturbed ground, possibly among predominantly arid steppe vegetation. Such an assemblage has no modern analogue in Arctic tundra. Local habitat diversity is indicated by sedge and moss peat from deposits that were formed in low-lying wet areas . . . [This region] must have been covered with vegetation even during the coldest part of the most recent ice age (some 24,000 years ago) because it supported large populations of woolly mammoth, horses, bison and other mammals during a time of extensive Northern Hemisphere glaciation." 42

Now, does it really make sense for this region to be so warm, all year round, while the same latitudes on other parts of the globe were covered with extensive glaciers?
Siberia, Alaska and Northern Europe and parts of northwestern Canada were all toasty warm while much of the remaining North American continent and Greenland were covered with huge glaciers? Really? Beyond this, consider that mammoths lacked erector muscles that enable an animal's fur to be "fluffed-up", creating insulating air pockets. They also lacked oil glands to protect against wetness and increased heat loss in extremely cold and damp environments. Animals currently living in Arctic regions have both oil glands and erector muscles. Of course, the mammoth did have a certain number of cold-weather adaptations compared to its living cousins, the elephants, such as smaller ears, trunk and tail, fine woolly under-fur and long outer "protective" hair, and a thick layer of insulating fat,39 but these would by no means be enough to survive in the extremes of cold, ice and snow found in these same regions today - not to mention the lack of an adequate food supply yet again. It seems very much as Sir Henry Howorth concluded back in the late 19th century:

"The instances of the soft parts of the great pachyderms being preserved are not mere local and sporadic ones, but they form a long chain of examples along the whole length of Siberia, from the Urals to the land of the Chukchis [the Bering Strait], so that we have to do here with a condition of things which prevails, and with meteorological conditions that extend over a continent. When we find such a series ranging so widely preserved in the same perfect way, and all evidencing a sudden change of climate from a comparatively temperate one to one of great rigour, we cannot help concluding that they all bear witness to a common event. We cannot postulate a separate climate cataclysm for each individual case and each individual locality, but we are forced to the conclusion that the now permanently frozen zone in Asia became frozen at the same time from the same cause."40

Actually, northern portions of Asia, Europe, and North America contain the remains of extinct species of the elephant [mammoth] and rhinoceros, together with those of horses, oxen, deer, and other large quadrupeds.39 Even though the evidence speaks against the "instant" catastrophic freeze event that some have suggested,39 the weather change was still a real and relatively sudden change to a much colder and much harsher environment compared to the relatively temperate and abundant conditions that once existed in these northerly regions around much of the globe. Is it not then at least reasonable to hypothesize that Greenland also had such a temperate climate in the recent past, losing its icecap completely and growing lush vegetation? If not, how was the Greenland ice sheet able to be so resistant to the temperate climate surrounding it on all sides for hundreds, much less thousands, of years?

(Back to Top)

A Recently Green Greenland?

Interestingly enough, crushed plant parts have been found in the ice sheets of northeastern Greenland – from a dike ridge of a glacier. This silty plant material was said to give off a powerful odor, like that of decaying organic matter.17 This material was examined for fossils by Esa Hyyppa of the Geological Survey of Finland, who noted the following:

“The silt examined contained two whole leaves, several leaf fragments and two fruits of Dryas octopetala; [also] a small, partly decayed leaf of a shrub species not definitely determinable . . . and an abundance of much decayed, small fragments of plant tissues, mostly leaf veins and root hairs . . .” 17
" 17 It is most Interesting that scientists think that this plant material must have originated from some superficial deposit in a distant valley floor of Greenland and that this material was squeezed up from the base of the ice. Some scientists have even suggested that, “The modern aspect of the flora precludes a preglacial time of origin for it.” 17 Note also that the northeastern corner of Greenland is actually its coldest region. It has a “continental climate that is remote from the influence of the sea.” 18 The ocean dramatically affects climate. That is why regions like the north central portions of the United States have such long, cold winters when compared to equal latitudes along the eastern seaboard. Northeastern Greenland, therefore, would have the coldest climate of the entire island. Also, consider that just this past July of 2004, plant material consisting of probable grass or pine needles and bark was discovered at the bottom of the Greenland ice sheet under about 10,400 feet of ice. Although thought to be several million years old, Dorthe Dahl-Jensen, a professor at the University of Copenhagen's Niels Bohr Institute and NGRIP project leader noted that the such plant material found under about 10,400 feet of ice indicates the Greenland Ice Sheet "formed very fast."38 Beyond the obvious fact that such types of organic material suggest an extremely rapid climactic change and burial by ice, the question is, Why hasn't such organic material been stripped completely off Greenland by now by the flowing ice sheets? For instance, we know how fast these ice sheets move - up to 100 meters per year in central regions and up to 10 miles per year for several of Greenland's major glaciers. Given several hundred thousand to over a few million years of such scrubbing by moving ice sheets, how could significant amounts of such organic material remain on the surface of Greenland? Consider again that the hipsithermal period is thought to have lasted about 5,500 years. If, during this time, ice were lost at a conservative 1.5 meters per year, the total loss would be over 8,000 meters of ice. This is more than double the average depth of Greenland’s ice sheet (~3,000 meters). And, this is being very conservative. Large portions of Greenland's ice sheet are melting at up to 10 meters per year with just a 1° increase in average global temperature. A 4° or 5°F rise in global temperature would have melted Greenland’s and Antarctica’s ice sheets at a far greater rate - especially when one considers what has happened to the worlds glaciers in just the past 100 years with only a 1° rise in average global temperature. In just the last 100 years Glacier National Park has gone from having over 150 glaciers to just 35 today. And, those that remain have already lost over 90% of the volume that they had 100 years ago. "For instance, the Qori Kalis Glacier in Peru is shrinking at a rate of 200 meters per year, 40 times as fast as in 1978 when the rate was only 5 meters per year. It's just one of the hundreds of glaciers that are vanishing. Ice is clear disappearing from the Arctic Ocean and Greenland at an astounding rate that is in fact increasing exponentially. More than a hundred species of animals have been spotted moving to cooler regions, and spring starts sooner for more than 200 others. . . In some scenarios, the ice on Greenland eventually melts, causing sea levels to rise 18 feet. Melt just the West Antarctic ice sheet as well, and sea levels jump another 18 feet." 
The speed of glacial demise is only recently being appreciated by scientists, who are "stunned" to realize that glaciers all around the world - like those of Mt. Kilimanjaro, the Himalayas just beneath Mt. Everest, the high Andes, the Swiss Alps, and even Iceland - will be completely gone within just 30 years.33 Of course, this begs the question of how the ice sheets on Greenland and elsewhere, which are currently melting much faster than they are forming with just a 1° rise in global temperature, could have survived for several thousand years when temperatures were 4 or 5 degrees warmer than today during the very recent Hipsithermal period.

First-glance intuition is often very helpful in coming up with a good hypothesis to explain a given phenomenon, such as the hundreds of thousands of layers of ice found in places like Greenland and Antarctica. It seems downright intuitive that each layer found in these ice sheets should represent an annual cycle. After all, this seems to fit the uniformitarian paradigm so well. However, a closer inspection of the data seems to favor a much more recent and catastrophic model of ice sheet formation. Violent weather disturbances with large storms, a sudden cold snap, and high precipitation rates could very reasonably give rise to all the layers, dust bands, isotope variations, etc. that we find in the various ice sheets today.

References

Meese, D.A., Gow, A.J., Alley, R.B., Zielinski, G.A., Grootes, P.M., Ram, K., Taylor, K.C., Mayewski, P.A. and Bolzan, J.F., "The Greenland Ice Sheet Project 2 depth-age scale: Methods and results", Journal of Geophysical Research.
Craig H., Horibe Y., Sowers T., "Gravitational Separation of Gases and Isotopes in Polar Ice Caps", Science, 242(4885), 1675-1678, Dec. 23, 1988.
Hall, Fred, "Ice Cores Not That Simple", AEON II: 1, 1989: 199.
Grootes, P.M. and Stuiver, M., "Oxygen 18/16 variability in Greenland snow and ice with 10^-3 to 10^5 year time resolution", Journal of Geophysical Research.
Alley, R.B. et al., "Visual-stratigraphic dating of the GISP2 ice core: Basis, reproducibility, and application", Journal of Geophysical Research.
Borisov P., Can Man Change the Climate?, trans. V. Levinson (Moscow, U.S.S.R.), 1973.
"Santorini Volcano Ash, Traced Afar, Gives a Date of 1623 BC," The New York Times [New York] (June 7, 1994): C8.
Britannica, Macropaedia, 19 vols., "Etna (Mount)," (Chicago, Illinois, 1982), Vol. 6, p. 1017.
R. L. Newson, "Response of a General Circulation Model of the Atmosphere to Removal of the Arctic Icecap," Nature (1973): 39-40.
M. Warshaw and R. R. Rapp, "An Experiment on the Sensitivity of a Global Circulation Model," Journal of Applied Meteorology 12 (1973).
B., The Quaternary Era, London, England, 1957, Vol. II, p. 1494.
Sanderson, The Dynasty of ABU, New York, 1962, p. 80.
Brooks C. E. P., Climate Through the Ages, 2nd ed., New York, 1970, p. 297.
Pielou E. C., After the Ice Age, Chicago, Illinois, 1992, p. 279.
Boyd, Louise A., The Coast of Northeast Greenland, American Geographical Society Special Publication No. 30, New York, 1948: p. 132.
"Glaciology (1): The Balance Sheet or the Mass Balance," Venture to the Arctic, ed. R. A. Hamilton, Baltimore, Maryland, 1958, p. 175 and Table I.
Hammer et al., "Continuous Impurity Analysis Along the Dye 3 Deep Core," American Geophysical Union Monograph 33 (1985): 90.
Laurence R. Kittleman, "Tephra," Scientific American, p. 171, New York, December 1979.
Zdanowicz CM, Zielinski GA, Wake CP, "Characteristics of modern atmospheric dust deposition in snow on the Penny Ice Cap, Baffin Island, Arctic Canada", Climate Change Research Center, Institute for the Study of Earth, Oceans and Space, University of New Hampshire, Tellus, 50B, 506-520, 1998. (http://www.ccrc.sr.unh.edu/~cpw/Zdano98/Z98_paper.html)
Lorius C., Jouzel J., Ritz C., Merlivat L., Barkov N. I., Korotkevitch Y. S. and Kotlyakov V. M., "A 150,000-year climatic record from Antarctic ice", Nature, 316, 1985, 591-596.
Barbara Stenni, Valerie Masson-Delmotte, Sigfus Johnsen, Jean Jouzel, Antonio Longinelli, Eric Monnin, Regine Röthlisberger, Enrico Selmo, "An Oceanic Cold Reversal During the Last Deglaciation", Nature 280:644, 1979.
Wettlaufer, J.W., "Premelting and anomalous diffusion in ancient ice", FOCUS session, March 16, 2001.
Rempel, A., Wettlaufer, J., Waddington E., Worster, G., "Chemicals in ancient ice move, affecting ice cores", Nature, May 31, 2001. (http://unisci.com/stories/20012/0531012.htm) (http://www.washington.edu/newsroom/news/2001archive/05-01archive/k053001.html)
The Olympian, "National Park's Famous Glaciers Rapidly Disappearing", Sunday, November 24, 2002. (http://www.theolympian.com/home/news/20021124/northwest/14207.shtml)
John Carey, "Global Warming - Special Report", BusinessWeek, August 16, 2004, pp. 60-69. ( http://www.businessweek.com )
Zielinski et al., "Record of Volcanism Since 7000 B.C. from the GISP2 Greenland Ice Core and Implications for the Volcano-Climate System", Science, Vol. 264, pp. 948-951, 13 May 1994.
Zielinski and Germani, "New Ice-Core Evidence Challenges the 1620s BC Age for the Santorini (Minoan) Eruption", Journal of Archaeological Science 25 (1998), pp. 279-289.
"Identification of Aniakchak (Alaska) tephra in Greenland ice core challenges the 1645 BC date for Minoan eruption of Santorini", Geochem. Geophys. Geosyst., 5, Q03005, doi:10.1029/2003GC000672, March 2004. ( http://www.agu.org/pubs/crossref/2004/2003GC000672.shtml )
Jim Scott, "Greenland ice core project yields probable ancient plant remains", University of Colorado Press Release, 13 August 2004. ( http://www.eurekalert.org/pub_releases/2004-08/uoca-gic081304.php )
Michael J. Oard, "The extinction of the woolly mammoth: was it a quick freeze?" ( http://www.answersingenesis.org/Home/Area/Magazines/tj/docs/tj14_3-mo_mammoth.pdf )
Henry H. Howorth, The Mammoth and the Flood (London: Samson Low, Marston, Searle, and Rivington, 1887), p. 96.
Mol, Y. Coppens, A.N. Tikhonov, L.D. Agenbroad, R.D.E. Macphee, C. Flemming, A. Greenwood, B. Buigues, C. De Marliave, B. van Geel, G.B.A. van Reenen, J.P. Pals, D.C. Fisher, D. Fox, "The Jarkov Mammoth: 20,000-Year-Old Carcass of a Siberian Woolly Mammoth Mammuthus primigenius (Blumenbach, 1799)", The World of Elephants - International Congress, Rome 2001. ( http://www.cq.rm.cnr.it/elephants2001/pdf/305_309.pdf )
Grant D. Zazula, Duane G. Froese, Charles E. Schweger, Rolf W. Mathewes, Alwynne B. Beaudoin, Alice M. Telka, C. Richard Harington, John A. Westgate, "Palaeobotany: Ice-age steppe vegetation in east Beringia", Nature 423, 603 (05 June 2003). ( http://www.sfu.ca/~qgrc/zazula_2003b.pdf )
Shukman, David, "Greenland Ice-Melt 'Speeding Up'", BBC News, UK Edition, 28 July 2004. ( http://news.bbc.co.uk/1/hi/world/europe/3922579.stm )
Gary Braasch, "Glaciers and Glacial Warming", Receding Glaciers, 2005. ( http://www.worldviewofglobalwarming.org/pages/glaciers.html )
Jerome Bernard, "Polar Ice Cap Melting at Alarming Rate", COOLSCIENCE, Oct. 24, 2003. ( http://cooltech.iafrica.com/science/280851.htm )
Steve Connor, "Melting Greenland Glacier May Hasten Rise in Sea Level", Independent - Common Dreams News Center, July 25, 2005. ( http://www.commondreams.org/headlines05/0725-02.htm )
"Animation of Eastern Alp Glacial Retreat", Institut für Fernerkundung und Photogrammetrie, Technische Universität Graz, last accessed September 2005. ( Play Video )
Lynn Jenner, "Glaciers Surge When Ice Shelf Breaks Up", National Aeronautics and Space Administration (NASA), September 21, 2004. ( Link )
Geoffrey Lean, "The Big Thaw", Znet, accessed 2/06. ( Link )
Zbigniew Jaworowski, "Another Global Warming Fraud Exposed: Ice Core Data Show No Carbon Dioxide Increase", 21st Century, Spring 1997 ( Link ), and in a statement written for a hearing before the US Senate Committee on Commerce, Science, and Transportation, "Climate Change: Incorrect information on pre-industrial CO2", March 19, 2004. ( Link )
Don Behm, "Into the spotlight: Leno, scientists alike want to hear explorer's findings", Journal Sentinel, July 21, 2006. ( Link )
Instruments & Techniques for Space Weather Measurements

How do scientists measure space weather? Let's take a look!

Scientists watch the Sun with special telescopes. Some of the telescopes are on Earth, while others are on satellites. Some of the telescopes are for normal, visible light, but others are for different kinds of electromagnetic radiation. Some telescopes watch infrared (IR), ultraviolet (UV), or even X-ray radiation coming from the Sun. Solar astronomers use a coronagraph to view the Sun's atmosphere. They use spectroscopes to detect the different kinds of elements in the Sun. A new technique called "helioseismology" even lets scientists "see" inside the Sun!

The Sun gives off light, but it also shoots out radiation. When radiation particles from the Sun get to Earth, radiation detectors on satellites and on Earth measure their types and energy levels. When radiation from the Sun hits Earth's atmosphere, the radiation can make the atmosphere "glow". The auroras, or Northern and Southern Lights, are an example of this. We can study such "glows" and take pictures of them from Earth or from space.

Some regions of Earth's atmosphere are electrically charged. The electrically charged regions are called the ionosphere. Space weather affects the ionosphere. Scientists study the ionosphere by bouncing radio waves off of it.

Magnetic fields are an important part of space weather. As space weather changes, the strengths and directions of magnetic fields change. Scientists use instruments called magnetometers to measure magnetic fields. There are magnetometers at many places on Earth. There are also magnetometers on satellites around Earth and even on spacecraft circling other planets or exploring different parts of our Solar System.
To build a strong network and defend it, you need to understand the devices that comprise it.

What are network devices?

Network devices, or networking hardware, are physical devices that are required for communication and interaction between hardware on a computer network.

Types of network devices

Here is the common network device list:
- Hub
- Switch
- Router
- Bridge
- Gateway
- Modem
- Repeater
- Access Point

Hubs connect multiple computer networking devices together. A hub also acts as a repeater in that it amplifies signals that deteriorate after traveling long distances over connecting cables. A hub is the simplest in the family of network connecting devices because it connects LAN components with identical protocols. A hub can be used with both digital and analog data, provided its settings have been configured to prepare for the formatting of the incoming data. For example, if the incoming data is in digital format, the hub must pass it on as packets; however, if the incoming data is analog, then the hub passes it on in signal form. Hubs do not perform packet filtering or addressing functions; they just send data packets to all connected devices. Hubs operate at the Physical layer of the Open Systems Interconnection (OSI) model. There are two types of hubs: simple and multiple port.

Switches generally have a more intelligent role than hubs. A switch is a multiport device that improves network efficiency. The switch maintains limited routing information about nodes in the internal network, and it allows connections to systems like hubs or routers. Strands of LANs are usually connected using switches. Generally, switches can read the hardware addresses of incoming packets to transmit them to the appropriate destination. Using switches improves network efficiency over hubs or routers because of the virtual circuit capability. Switches also improve network security because the virtual circuits are more difficult to examine with network monitors. You can think of a switch as a device that has some of the best capabilities of routers and hubs combined. A switch can work at either the Data Link layer or the Network layer of the OSI model. A multilayer switch is one that can operate at both layers, which means that it can operate as both a switch and a router. A multilayer switch is a high-performance device that supports the same routing protocols as routers. Switches can be subject to distributed denial-of-service (DDoS) attacks; flood guards are used to prevent malicious traffic from bringing the switch to a halt. Switch port security is important, so be sure to secure your switches: disable all unused ports and use DHCP snooping, ARP inspection and MAC address filtering.

Routers help transmit packets to their destinations by charting a path through the sea of interconnected networking devices using different network topologies. Routers are intelligent devices, and they store information about the networks they're connected to. Most routers can be configured to operate as packet-filtering firewalls and use access control lists (ACLs). Routers, in conjunction with a channel service unit/data service unit (CSU/DSU), are also used to translate from LAN framing to WAN framing. This is needed because LANs and WANs use different network protocols. Such routers are known as border routers. They serve as the outside connection of a LAN to a WAN, and they operate at the border of your network. Routers are also used to divide internal networks into two or more subnetworks.
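To make the switch behavior described above concrete (learning which port each hardware address lives on, then forwarding frames only where they need to go), here is a minimal sketch. It is an illustration only, not any vendor's implementation; the port numbers and MAC addresses are invented.

```python
# Minimal sketch of a "learning switch": record which port each source MAC was
# seen on, then forward a frame only to the port where the destination MAC was
# learned, or flood it to all other ports if the destination is still unknown.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3, 4]
        self.mac_table = {}         # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port                 # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]              # forward out the one known port
        return [p for p in self.ports if p != in_port]    # unknown destination: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame("aa:aa", "bb:bb", in_port=1))  # destination unknown -> flood: [2, 3, 4]
print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))  # "aa:aa" already learned -> [1]
```

A hub, by contrast, has no such table and simply repeats every frame out of every port, which is exactly why it offers no filtering or addressing functions.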
Routers can also be connected internally to other routers, creating zones that operate independently. Routers establish communication by maintaining tables about destinations and local connections. A router contains information about the systems connected to it and where to send requests if the destination isn't known. Routers usually communicate routing and other information using one of three standard protocols: Routing Information Protocol (RIP), Border Gateway Protocol (BGP) or Open Shortest Path First (OSPF). Routers are your first line of defense, and they must be configured to pass only traffic that is authorized by network administrators. The routes themselves can be configured as static or dynamic. If they are static, they can only be configured manually and stay that way until changed. If they are dynamic, they learn of other routers around them and use information about those routers to build their routing tables.

Routers are general-purpose devices that interconnect two or more heterogeneous networks. They are usually dedicated, special-purpose computers, with separate input and output network interfaces for each connected network. Because routers and gateways are the backbone of large computer networks like the internet, they have special features that give them the flexibility and the ability to cope with varying network addressing schemes and frame sizes through segmentation of big packets into smaller sizes that fit the new network components. Each router interface has its own Address Resolution Protocol (ARP) module, its own LAN address (network card address) and its own Internet Protocol (IP) address. The router, with the help of a routing table, has knowledge of routes a packet could take from its source to its destination. The routing table, like in the bridge and switch, grows dynamically. Upon receipt of a packet, the router removes the packet headers and trailers and analyzes the IP header by determining the source and destination addresses and data type, and noting the arrival time. It also updates the routing table with new addresses not already in the table. The IP header and arrival time information is entered in the routing table. Routers normally work at the Network layer of the OSI model. (A short sketch of such a routing-table lookup appears below, after the discussion of bridges.)

Bridges are used to connect two or more hosts or network segments together. The basic role of bridges in network architecture is storing and forwarding frames between the different segments that the bridge connects. They use hardware Media Access Control (MAC) addresses for transferring frames. By looking at the MAC address of the devices connected to each segment, bridges can forward the data or block it from crossing. Bridges can also be used to connect two physical LANs into a larger logical LAN. Bridges work only at the Physical and Data Link layers of the OSI model. Bridges are used to divide larger networks into smaller sections by sitting between two physical network segments and managing the flow of data between the two. Bridges are like hubs in many respects, including the fact that they connect LAN components with identical protocols. However, bridges filter incoming data packets, known as frames, for addresses before they are forwarded. As it filters the data packets, the bridge makes no modifications to the format or content of the incoming data. The bridge filters and forwards frames on the network with the help of a dynamic bridge table.
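Returning to the routing tables described above: a router forwards a packet using the most specific route that matches the destination address (longest-prefix match). The sketch below uses Python's standard ipaddress module; the prefixes and next-hop addresses are invented for illustration, and real routers perform this lookup in optimized software or hardware rather than a Python loop.

```python
# Minimal sketch of a routing-table lookup using longest-prefix match.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "10.255.0.1",   # coarse internal route
    ipaddress.ip_network("10.1.2.0/24"): "10.1.2.254",   # more specific subnet
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.1",    # default route (e.g. a border router)
}

def next_hop(destination):
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)     # the longest (most specific) prefix wins
    return routing_table[best]

print(next_hop("10.1.2.7"))     # 10.1.2.254 (most specific match)
print(next_hop("10.9.9.9"))     # 10.255.0.1
print(next_hop("203.0.113.5"))  # 192.0.2.1 (falls through to the default route)
```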
The bridge table, which is initially empty, maintains the LAN addresses for each computer in the LAN and the addresses of each bridge interface that connects the LAN to other LANs. Bridges, like hubs, can be either simple or multiple port. Bridges have mostly fallen out of favor in recent years and have been replaced by switches, which offer more functionality. In fact, switches are sometimes referred to as "multiport bridges" because of how they operate.

Gateways normally work at the Transport and Session layers of the OSI model. At the Transport layer and above, there are numerous protocols and standards from different vendors; gateways are used to deal with them. Gateways provide translation between networking technologies such as Open System Interconnection (OSI) and Transmission Control Protocol/Internet Protocol (TCP/IP). Because of this, gateways connect two or more autonomous networks, each with its own routing algorithms, protocols, topology, domain name service, and network administration procedures and policies. Gateways perform all of the functions of routers and more. In fact, a router with added translation functionality is a gateway. The function that does the translation between different network technologies is called a protocol converter.

Modems (modulators-demodulators) are used to transmit digital signals over analog telephone lines. Thus, digital signals are converted by the modem into analog signals of different frequencies and transmitted to a modem at the receiving location. The receiving modem performs the reverse transformation and provides a digital output to a device connected to a modem, usually a computer. The digital data is usually transferred to or from the modem over a serial line through an industry standard interface, RS-232. Many telephone companies offer DSL services, and many cable operators use modems as end terminals for identification and recognition of home and personal users. Modems work on both the Physical and Data Link layers.

A repeater is an electronic device that amplifies the signal it receives. You can think of a repeater as a device which receives a signal and retransmits it at a higher level or higher power so that the signal can cover longer distances, more than 100 meters for standard LAN cables. Repeaters work on the Physical layer.

While an access point (AP) can technically involve either a wired or wireless connection, it commonly means a wireless device. An AP works at the second OSI layer, the Data Link layer, and it can operate either as a bridge connecting a standard wired network to wireless devices or as a router passing data transmissions from one access point to another. Wireless access points (WAPs) consist of a transmitter and receiver (transceiver) device used to create a wireless LAN (WLAN). Access points typically are separate network devices with a built-in antenna, transmitter and adapter. APs use the wireless infrastructure network mode to provide a connection point between WLANs and a wired Ethernet LAN. They also have several ports, giving you a way to expand the network to support additional clients. Depending on the size of the network, one or more APs might be required to provide full coverage. Additional APs are used to allow access to more wireless clients and to expand the range of the wireless network. Each AP is limited by its transmission range — the distance a client can be from an AP and still obtain a usable signal and data process speed.
The actual distance depends on the wireless standard, the obstructions and the environmental conditions between the client and the AP. Higher-end APs have high-powered antennas, enabling them to extend how far the wireless signal can travel. APs might also provide many ports that can be used to increase the network's size, firewall capabilities and Dynamic Host Configuration Protocol (DHCP) service. Therefore, we get APs that are a switch, DHCP server, router and firewall.

To connect to a wireless AP, you need a service set identifier (SSID) name. 802.11 wireless networks use the SSID to identify all systems belonging to the same network, and client stations must be configured with the SSID to be authenticated to the AP. The AP might broadcast the SSID, allowing all wireless clients in the area to see the AP's SSID. However, for security reasons, APs can be configured not to broadcast the SSID, which means that an administrator needs to give client systems the SSID instead of allowing it to be discovered automatically. Wireless devices ship with default SSIDs, security settings, channels, passwords and usernames. For security reasons, it is strongly recommended that you change these default settings as soon as possible, because many internet sites list the default settings used by manufacturers.

Access points can be fat or thin. Fat APs, sometimes still referred to as autonomous APs, need to be manually configured with network and security settings; then they are essentially left alone to serve clients until they can no longer function. Thin APs allow remote configuration using a controller. Since thin APs do not need to be manually configured, they can be easily reconfigured and monitored. Access points can also be controller-based or stand-alone.

Having a solid understanding of the types of network devices available can help you design and build a network that is secure and serves your organization well. However, to ensure the ongoing security and availability of your network, you should carefully monitor your network devices and activity around them, so you can quickly spot hardware issues, configuration issues and attacks.

(A note from the article's comments: most consumer or home networking setups use "all-in-one" devices, in which the modem, switch, router, firewall and wireless AP are combined into one or two boxes.)
How do you do PEMDAS questions?

PEMDAS is an acronym and stands for parentheses, exponents, multiply, divide, add, and subtract.
- Step 1: Identify Parentheses.
- Step 2: Solve Parentheses.
- Step 3: Rewrite Equation.
- Step 4: Identify Exponents.
- Step 5: Solve Exponents.
- Step 6: Rewrite Equation.
- Step 7: Identify Multiplication Problems.

What does PEMDAS mean?

PEMDAS is an acronym used to describe the order of operations to be followed while solving expressions having multiple operations. PEMDAS stands for P - Parentheses, E - Exponents, M - Multiplication, D - Division, A - Addition, and S - Subtraction.

What is GEMDAS math?

GEMDAS is the rule that can be used to simplify or evaluate complicated numerical expressions with more than one binary operation. A very simple way to remember the GEMDAS rule: G -> Grouping (parentheses), E -> Exponents, M -> Multiplication, D -> Division, A -> Addition, S -> Subtraction.

What math is 9th grade?

9th grade math usually focuses on Algebra I, but can include other advanced mathematics such as Geometry, Algebra II, Pre-Calculus or Trigonometry. This is the year when students formalize and extend their understanding and application of quadratic and exponential functions as well as other advanced mathematical concepts.

How to do PEMDAS correctly?

First, solve the parentheses. In this case, we have an exponent in the parentheses.

How to solve PEMDAS?

First, perform the operation inside the parentheses or grouping symbol.

Is PEMDAS or BODMAS correct?

PEMDAS and BODMAS are the same thing. Both refer to the mnemonic for the logical order of operations for mathematical expressions.

Why do people use PEMDAS?

PEMDAS is an acronym used to remind people of the order of operations. This means that you don't just solve math problems from left to right; rather, you solve them in a predetermined order that's given to you via the acronym PEMDAS.
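To make the order of operations concrete, here is one short worked example. The expression is invented for illustration; Python happens to apply the same precedence rules, so it can be used to check the hand calculation.

```python
# Worked PEMDAS example: evaluate 3 + 6 * (5 + 4) / 3 - 7
# Parentheses:                    (5 + 4) = 9        ->  3 + 6 * 9 / 3 - 7
# Multiply/Divide, left to right: 6 * 9 = 54; 54 / 3 = 18  ->  3 + 18 - 7
# Add/Subtract, left to right:    3 + 18 = 21; 21 - 7 = 14
expr = 3 + 6 * (5 + 4) / 3 - 7
print(expr)  # 14.0 - Python follows the same order of operations
```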
Classification and external resources: ICD-10 A40 – A41

Sepsis is a life-threatening condition that arises when the body's response to infection injures its own tissues and organs. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. There may also be symptoms related to a specific infection, such as a cough with pneumonia, or painful urination with a kidney infection. In the very young, old, and people with a weakened immune system, there may be no symptoms of a specific infection and the body temperature may be low or normal rather than high.

Severe sepsis is sepsis causing poor organ function or insufficient blood flow. Insufficient blood flow may be evident by low blood pressure, high blood lactate, or low urine output. Septic shock is low blood pressure due to sepsis that does not improve after reasonable amounts of intravenous fluids are given.

Sepsis is caused by an immune response triggered by an infection. The infection is most commonly bacterial, but it can be from fungi, viruses, or parasites. Common locations for the primary infection include lungs, brain, urinary tract, skin, and abdominal organs. Risk factors include young or old age, a weakened immune system from conditions such as cancer or diabetes, and major trauma or burns.

Diagnosis was based on meeting at least two systemic inflammatory response syndrome (SIRS) criteria due to a presumed infection. In 2016 screening by SIRS was replaced with qSOFA which is two of the following three: increased breathing rate, change in level of consciousness, and low blood pressure. Blood cultures are recommended preferably before antibiotics are started; however, infection of the blood is not required for the diagnosis. Medical imaging should be done to look for the possible location of infection. Other potential causes of similar signs and symptoms include anaphylaxis, adrenal insufficiency, low blood volume, heart failure, and pulmonary embolism among others.

Sepsis is usually treated with intravenous fluids and antibiotics. Antibiotics are typically given as soon as possible. This is often done in an intensive care unit. If fluid replacement is not enough to maintain blood pressure, medications that raise blood pressure can be used. Mechanical ventilation and dialysis may be needed to support the function of the lungs and kidneys, respectively. To guide treatment, a central venous catheter and an arterial catheter may be placed for access to the bloodstream. Other measurements such as cardiac output and superior vena cava oxygen saturation may be used. People with sepsis need preventive measures for deep vein thrombosis, stress ulcers and pressure ulcers, unless other conditions prevent such interventions. Some might benefit from tight control of blood sugar levels with insulin. The use of corticosteroids is controversial. Activated drotrecogin alfa, originally marketed for severe sepsis, has not been found to be helpful and was withdrawn from sale in 2011.

Disease severity partly determines the outcome with the risk of death from sepsis being as high as 30%, severe sepsis as high as 50%, and septic shock as high as 80%. The number of cases worldwide is unknown as there is little data from the developing world. Estimates suggest sepsis affects millions of people a year. In the developed world about 0.2 to 3 per 1000 people get sepsis yearly or about a million cases per year in the United States. Rates of disease have been increasing.
Sepsis is more common among males than females. The condition has been described at least since the time of Hippocrates. The terms septicemia and blood poisoning referred to the microorganisms or their toxins in the blood and are no longer commonly used.

Signs and symptoms

In addition to symptoms related to the provoking cause, sepsis is frequently associated with either fever or low body temperature, rapid breathing, elevated heart rate, confusion, and edema. Early signs are a fast heart rate, decreased urination, and high blood sugar. Signs of established sepsis include confusion, metabolic acidosis (which may be accompanied by faster breathing leading to a respiratory alkalosis), low blood pressure due to decreased systemic vascular resistance, higher cardiac output, and dysfunctions of blood coagulation (where clotting can lead to organ failure).

The most common primary sources of infection resulting in sepsis are the lungs, the abdomen, and the urinary tract. Typically, 50% of all sepsis cases start as an infection in the lungs. No definitive source is found in one third to one half of cases. Infections leading to sepsis are usually bacterial but can be fungal or viral. While gram-negative bacteria were previously the most common cause of sepsis, in the last decade gram-positive bacteria, most commonly staphylococci, are thought to cause more than 50% of cases of sepsis. Other commonly implicated bacteria include Streptococcus pyogenes, Escherichia coli, Pseudomonas aeruginosa, and Klebsiella species. Fungal sepsis accounts for approximately 5% of severe sepsis and septic shock cases; the most common cause of fungal sepsis is infection by Candida species of yeast.

Within the first three hours of suspected sepsis, diagnostic studies should include WBCs, measuring serum lactate and obtaining appropriate cultures before starting antibiotics, so long as this does not delay their use by more than 45 minutes. To identify the causative organism(s), at least two sets of blood cultures using bottles with media for aerobic and anaerobic organisms should be obtained, with at least one drawn through the skin and one drawn through each vascular access device (such as an IV catheter) in place more than 48 hours. However, bacteria are present in the blood in only about 30% of cases. Another possible method of detection is by polymerase chain reaction. If other sources of infection are suspected, cultures of these sources, such as urine, cerebrospinal fluid, wounds, or respiratory secretions, should also be obtained, as long as this does not delay the use of antibiotics.

Within six hours, if blood pressure remains low despite initial fluid resuscitation of 30 ml/kg, or if initial lactate is ≥ 4 mmol/L (36 mg/dL), central venous pressure and central venous oxygen saturation should be measured. Lactate should be re-measured if the initial lactate was elevated. Within twelve hours, it is essential to diagnose or exclude any source of infection that would require emergent source control, such as necrotizing soft tissue infection, infection causing inflammation of the abdominal cavity lining, infection of the bile duct, or intestinal infarction.
A pierced internal organ (free air on abdominal x-ray or CT scan), an abnormal chest x-ray consistent with pneumonia (with focal opacification), or petechiae, purpura, or purpura fulminans can be evidence of infection. If the SIRS criteria are negative it is very unlikely the person has sepsis; if they are positive there is just a moderate probability that the person has sepsis.

SIRS criteria (two or more required):
- Temperature: <36 °C (96.8 °F) or >38 °C (100.4 °F)
- Heart rate: >90/min
- Respiratory rate: >20/min or PaCO2 < 32 mmHg (4.3 kPa)
- WBC: <4x10^9/L (<4,000/mm³), >12x10^9/L (>12,000/mm³), or >10% bands

There are different levels of sepsis: sepsis, severe sepsis, and septic shock. In 2016 screening by systemic inflammatory response syndrome (SIRS) was replaced with qSOFA, which is two of the following three: increased breathing rate, change in level of consciousness, and low blood pressure.
- SIRS is the presence of two or more of the following: abnormal body temperature, heart rate, respiratory rate or blood gas, and white blood cell count.
- Sepsis is defined as SIRS in response to an infectious process.
- Severe sepsis is defined as sepsis with sepsis-induced organ dysfunction or tissue hypoperfusion (manifesting as hypotension, elevated lactate, or decreased urine output).
- Septic shock is severe sepsis plus persistently low blood pressure despite the administration of intravenous fluids.

Examples of end-organ dysfunction include the following:
- Lungs: acute respiratory distress syndrome (ARDS) (PaO2/FiO2 < 300)
- Brain: encephalopathy symptoms including agitation, confusion, coma; causes may include ischemia, hemorrhage, formation of blood clots in small blood vessels, microabscesses, multifocal necrotizing leukoencephalopathy
- Liver: disruption of protein synthetic function manifests acutely as progressive disruption of blood clotting due to an inability to synthesize clotting factors; disruption of metabolic functions leads to impaired bilirubin metabolism, resulting in elevated unconjugated serum bilirubin levels
- Kidney: low urine output or no urine output, electrolyte abnormalities, or volume overload
- Heart: systolic and diastolic heart failure, likely due to chemical signals that depress myocyte function, and cellular damage, which may manifest as a troponin leak (although not necessarily ischemic in nature)

More specific definitions of end-organ dysfunction exist for SIRS in pediatrics.
- Cardiovascular dysfunction (after fluid resuscitation with at least 40 ml/kg of crystalloid):
  - hypotension with blood pressure < 5th percentile for age or systolic blood pressure < 2 standard deviations below normal for age, OR
  - vasopressor requirement, OR
  - two of the following criteria:
- Respiratory dysfunction (in the absence of cyanotic heart disease or known chronic lung disease):
  - the ratio of the arterial partial pressure of oxygen to the fraction of oxygen in the gases inspired (PaO2/FiO2) < 300 (the definition of acute lung injury), OR
  - arterial partial pressure of carbon dioxide (PaCO2) > 65 torr (20 mmHg) over baseline PaCO2 (evidence of hypercapnic respiratory failure), OR
  - supplemental oxygen requirement of greater than FiO2 0.5 to maintain oxygen saturation ≥ 92%
- Neurologic dysfunction
- Hematologic dysfunction
- Kidney dysfunction
- Liver dysfunction (only applicable to infants > 1 month)

Consensus definitions, however, continue to evolve, with the latest expanding the list of signs and symptoms of sepsis to reflect clinical bedside experience.
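As a rough illustration of the screening logic above, here is a minimal sketch that counts SIRS and qSOFA criteria. The SIRS cut-offs follow the criteria listed above; the numeric qSOFA cut-offs used here (respiratory rate of 22/min or more, systolic blood pressure of 100 mmHg or less, altered mentation) are commonly cited values that are not stated in the text, so treat them as assumptions. This is purely illustrative and not a clinical tool.

```python
# Minimal sketch of SIRS and qSOFA criterion counting (illustrative only).

def sirs_count(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_10e9_per_l, bands_pct):
    criteria = [
        temp_c < 36 or temp_c > 38,
        heart_rate > 90,
        resp_rate > 20 or paco2_mmhg < 32,
        wbc_10e9_per_l < 4 or wbc_10e9_per_l > 12 or bands_pct > 10,
    ]
    return sum(criteria)   # SIRS is present when 2 or more criteria are met

def qsofa_count(resp_rate, systolic_bp, altered_mentation):
    # Assumed cut-offs: RR >= 22/min, systolic BP <= 100 mmHg, altered mentation.
    criteria = [resp_rate >= 22, systolic_bp <= 100, altered_mentation]
    return sum(criteria)   # a score of 2 or more flags possible sepsis

# Hypothetical patient: fever, fast heart rate, fast breathing, normal WBC.
print(sirs_count(38.6, 112, 24, 38, 9.0, 2))          # 3 -> meets SIRS
print(qsofa_count(24, 92, altered_mentation=False))   # 2 -> positive qSOFA screen
```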
A 2013 review concluded that moderate-quality evidence exists to support use of the procalcitonin level as a method to distinguish sepsis from non-infectious causes of SIRS. The same review found the test's sensitivity to be 77% and the specificity to be 79%. The authors suggested procalcitonin may serve as a helpful diagnostic marker for sepsis, but cautioned that its level alone cannot definitively make the diagnosis. A 2012 systematic review found that soluble urokinase-type plasminogen activator receptor (SuPAR) is a nonspecific marker of inflammation and does not accurately diagnose sepsis. However, this same review concluded that SuPAR has prognostic value, as higher SuPAR levels are associated with an increased rate of death in those with sepsis.

The differential diagnosis for sepsis is broad and requires examining, in order to exclude, the noninfectious conditions that can cause the systemic signs of SIRS: alcohol withdrawal, acute pancreatitis, burns, pulmonary embolus, thyrotoxicosis, anaphylaxis, adrenal insufficiency, and neurogenic shock. In common clinical usage, neonatal sepsis refers to a bacterial bloodstream infection in the first month of life, such as meningitis, pneumonia, pyelonephritis, or gastroenteritis, but neonatal sepsis can also be due to infection with fungi, viruses, or parasites. Criteria with regard to hemodynamic compromise or respiratory failure are not useful because they present too late for intervention.

Sepsis is caused by a combination of factors related to the particular invading pathogen(s) and to the status of the host's immune system. The early phase of sepsis, characterized by excessive inflammation (which can sometimes result in a cytokine storm), can be followed by a prolonged period of decreased functioning of the immune system. Either of these phases can prove fatal. Bacterial virulence factors, such as glycocalyx and various adhesins, allow colonization, immune evasion, and establishment of disease in the host. Sepsis caused by gram-negative bacteria is thought to be largely due to the host's response to the lipid A component of lipopolysaccharide, also called endotoxin. Sepsis caused by gram-positive bacteria can result from an immunological response to cell wall lipoteichoic acid. Bacterial exotoxins that act as superantigens can also cause sepsis. Superantigens simultaneously bind major histocompatibility complex and T-cell receptors in the absence of antigen presentation. This forced receptor interaction induces the production of pro-inflammatory chemical signals (cytokines) by T-cells.

There are a number of microbial factors which can cause the typical septic inflammatory cascade. An invading pathogen is recognized by its pathogen-associated molecular patterns (PAMPs). Examples of PAMPs include lipopolysaccharides and flagellin in gram-negative bacteria, muramyl dipeptide in the peptidoglycan of the gram-positive bacterial cell wall, and CpG bacterial DNA. These PAMPs are recognized by the innate immune system's pattern recognition receptors (PRRs), which can be membrane-bound or cytosolic. There are four families of PRRs: the toll-like receptors, the C-type lectin receptors, the NOD-like receptors and the RIG-I-like receptors. The association of a PAMP and a PRR will invariably cause a series of intracellular signalling cascades. Consequently, transcription factors like nuclear factor-kappa B and activator protein-1 will up-regulate the expression of pro-inflammatory and anti-inflammatory cytokines.
Cytokines such as tumor necrosis factor, interleukin 1, and interleukin 6 can activate procoagulation factors in the cells lining blood vessels, leading to endothelial damage. The damaged endothelial surface inhibits anticoagulant properties as well as increases antifibrinolysis, which can lead to intravascular clotting, the formation of blood clots in small blood vessels, and multiple organ failure. A systemic inflammatory response syndrome can also occur in patients without the presence of infection, for example in those with burns, polytrauma, or the initial state in pancreatitis and chemical pneumonitis. The low blood pressure seen in those with sepsis is the result of various processes including excessive production of chemicals that dilate blood vessels such as nitric oxide, a deficiency of chemicals that constrict blood vessels such as vasopressin, and activation of ATP-sensitive potassium channels. In those with severe sepsis and septic shock, this sequence of events leads to a type of circulatory shock known as distributive shock. Early recognition and focused management can improve the outcomes in sepsis. Current professional recommendations include a number of actions ("bundles") to be taken as soon as possible after diagnosis. Within the first three hours someone with sepsis should have received antibiotics, and intravenous fluids if there is evidence of either low blood pressure or other evidence for inadequate blood supply to organs (as evidenced by a raised level of lactate); blood cultures should also be obtained within this time period. After six hours the blood pressure should be adequate, close monitoring of blood pressure and blood supply to organs should be in place, and the lactate should be measured again if it was initially raised. A related bundle, the "sepsis six", is in widespread use in the United Kingdom; this requires the administration of antibiotics within an hour of recognition, blood cultures, lactate and hemoglobin determination, urine output monitoring, high-flow oxygen, and intravenous fluids. Apart from the timely administration of fluids and antibiotics, the management of sepsis also involves surgical drainage of infected fluid collections, and appropriate support for organ dysfunction. This may include hemodialysis in kidney failure, mechanical ventilation in lung dysfunction, transfusion of blood products, and drug and fluid therapy for circulatory failure. Ensuring adequate nutrition—preferably by enteral feeding, but if necessary by parenteral nutrition—is important during prolonged illness. In those with high blood sugar levels, insulin to bring it down to 7.8-10 mmol/L (140–180 mg/dL) is recommended with lower levels potentially worsening outcomes. Medication to prevent deep vein thrombosis and gastric ulcers may also be used. In severe sepsis and septic shock, broad-spectrum antibiotics (usually two or a β-lactam antibiotic with broad coverage) are recommended. Some recommend they be given within 1 hour of making the diagnosis stating that for every hour delay in the administration of antibiotics, there is an associated 6% rise in mortality. Others did not find a benefit with early administration. Two sets of blood cultures should be obtained before starting antibiotics if this can be done without delaying the administration of antibiotics. Several factors determine the most appropriate choice for the initial antibiotic regimen. 
These factors include local patterns of bacterial sensitivity to antibiotics, whether the infection is thought to be a hospital- or community-acquired infection, and which organ systems are thought to be infected. Antibiotic regimens should be reassessed daily and narrowed if appropriate. Treatment duration is typically 7–10 days, with the type of antibiotic used directed by the results of cultures.

Intravenous fluids are titrated (measured and adjusted) in response to heart rate, blood pressure, and urine output; restoring large fluid deficits can require 6 to 10 liters of crystalloids in adults. In children, an initial amount of 20 mL/kg is reasonable in shock. In cases of severe sepsis and septic shock where a central venous catheter is used to measure blood pressures dynamically, fluids should be administered until the central venous pressure (CVP) reaches 8–12 mmHg. Once these goals are met, the central venous oxygen saturation (ScvO2), i.e., the oxygen saturation of venous blood as it returns to the heart as measured at the vena cava, is optimized. If the ScvO2 is less than 70%, blood may be given to reach a hemoglobin of 10 g/dL and then inotropes are added until the ScvO2 is optimized. In those with acute respiratory distress syndrome (ARDS) and sufficient tissue blood flow, more fluids should be given carefully.

Crystalloid solutions are recommended initially. Crystalloid solutions and albumin are better than other fluids (such as hydroxyethyl starch) in terms of risk of death. Starches also carry an increased risk of acute kidney injury and of the need for blood transfusion. Various colloid solutions (such as modified gelatin) carry no advantage over crystalloid. Albumin also appears to be of no benefit over crystalloids. Packed red blood cells are recommended to keep the hemoglobin levels between 70 and 90 g/L. A 2014 trial, however, found no difference between a target hemoglobin of 70 or 90 g/L.

If the person has been sufficiently fluid resuscitated but the mean arterial pressure is not greater than 65 mmHg, vasopressors are recommended. Norepinephrine (noradrenaline) is recommended as the initial choice. If a single vasopressor is not enough to raise the blood pressure, epinephrine (adrenaline) or vasopressin may be added. Dopamine is typically not recommended. Dobutamine may be used if heart function is poor or blood flow is insufficient despite sufficient fluid volumes and blood pressure. Etomidate is often not recommended as a medication to help with intubation in this situation due to concerns it may lead to poor adrenal function and an increased risk of death. The small amount of evidence there is, however, has not found a change in the risk of death with etomidate.

The use of steroids in sepsis is controversial. Studies do not give a clear picture as to whether and when glucocorticoids should be used. The 2012 Surviving Sepsis Campaign recommends against their use in those with septic shock if intravenous fluids and vasopressors stabilize the person's cardiovascular function, while a 2015 Cochrane review found low-quality evidence of benefit. During critical illness, a state of adrenal insufficiency and tissue resistance to corticosteroids may occur. This has been termed critical illness–related corticosteroid insufficiency. Treatment with corticosteroids might be most beneficial in those with septic shock and early severe ARDS, whereas its role in others, such as those with pancreatitis or severe pneumonia, is unclear.
However, the exact way of determining corticosteroid insufficiency remains problematic. It should be suspected in those responding poorly to resuscitation with fluids and vasopressors. ACTH stimulation testing is not recommended to confirm the diagnosis. The method of stopping glucocorticoid drugs is variable, and it is unclear whether they should be slowly decreased or simply abruptly stopped.

Early goal directed therapy

Early goal directed therapy (EGDT) is an approach to the management of severe sepsis during the initial 6 hours after diagnosis. It is a step-wise approach, with the physiologic goal of optimizing cardiac preload, afterload, and contractility. It includes giving early antibiotics. It involves monitoring of hemodynamic parameters and specific interventions to achieve key resuscitation targets, which include maintaining a central venous pressure between 8-12 mmHg, a mean arterial pressure of between 65-90 mmHg, a central venous oxygen saturation (ScvO2) greater than 70% and a urine output of greater than 0.5 ml/kg/hour. The goal is to optimize oxygen delivery to tissues and achieve a balance between systemic oxygen delivery and demand. An appropriate decrease in serum lactate may be equivalent to ScvO2 and easier to obtain.

In the original trial, early goal directed therapy was found to reduce mortality from 46.5% to 30.5% in those with sepsis, and the Surviving Sepsis Campaign has been recommending its use. However, three more recent large randomized controlled trials (ProCESS, ARISE, and ProMISe) did not demonstrate a 90-day mortality benefit of early goal directed therapy versus standard therapy in severe sepsis. It is likely that some parts of EGDT are more important than others. Following these trials, the use of EGDT is still considered reasonable.

Neonatal sepsis can be difficult to diagnose as newborns may be asymptomatic. If a newborn shows signs and symptoms suggestive of sepsis, antibiotics are immediately started and are either changed to target a specific organism identified by diagnostic testing or discontinued after an infectious cause for the symptoms has been ruled out.

Monoclonal and polyclonal preparations of intravenous immunoglobulin (IVIG) do not lower the rate of death in newborns and adults with sepsis. Evidence for the use of IgM-enriched polyclonal preparations of IVIG is inconsistent. A 2012 Cochrane review concluded that N-acetylcysteine does not reduce mortality in those with SIRS or sepsis and may even be harmful. Recombinant activated protein C (drotrecogin alfa) was originally introduced for severe sepsis (as identified by a high APACHE II score), where it was thought to confer a survival benefit. However, subsequent studies showed that it increased adverse events, bleeding risk in particular, and did not decrease mortality. It was removed from sale in 2011. Another medication known as eritoran also has not shown benefit.

Approximately 20–35% of people with severe sepsis and 30–70% of people with septic shock die. Lactate is a useful method of determining prognosis: those who have a level greater than 4 mmol/L have a mortality of 40%, while those with a level of less than 2 mmol/L have a mortality of less than 15%. There are a number of prognostic stratification systems, such as APACHE II and Mortality in Emergency Department Sepsis. APACHE II factors in the person's age, underlying condition, and various physiologic variables to yield estimates of the risk of dying of severe sepsis.
Of the individual covariates, the severity of underlying disease most strongly influences the risk of death. Septic shock is also a strong predictor of short- and long-term mortality. Case-fatality rates are similar for culture-positive and culture-negative severe sepsis. The Mortality in Emergency Department Sepsis (MEDS) score is simpler and useful in the emergency department environment. Some people may experience severe long-term cognitive decline following an episode of severe sepsis, but the absence of baseline neuropsychological data in most sepsis patients makes the incidence of this difficult to quantify or to study. Sepsis causes millions of deaths globally each year and is the most common cause of death in people who have been hospitalized. The worldwide incidence of sepsis is estimated to be 18 million cases per year. In the United States sepsis affects approximately 3 in 1,000 people, and severe sepsis contributes to more than 200,000 deaths per year. Sepsis occurs in 1-2% of all hospitalizations and accounts for as much as 25% of ICU bed utilization. Due to it rarely being reported as a primary diagnosis (often being a complication of cancer or other illness), the incidence, mortality, and morbidity rates of sepsis are likely underestimated. A study by the Agency for Healthcare Research and Quality (AHRQ) of selected States found that there were approximately 651 hospital stays per 100,000 population with a sepsis diagnosis in 2010. It is the second-leading cause of death in non-coronary intensive care unit (ICU) patients and the tenth-most-common cause of death overall (the first being heart disease). Children under 12 months of age and elderly people have the highest incidence of severe sepsis. Among U.S. patients who had multiple sepsis hospital admissions in 2010, those who were discharged to a skilled nursing facility or long term care following the initial hospitalization were more likely to be readmitted than those discharged to another form of care. A study of 18 U.S. States found that, amongst Medicare patients in 2011, septicemia was the second most common principal reason for readmission within 30 days. Several medical conditions increase a person's susceptibility to infection and developing sepsis. Common sepsis risk factors include age (especially the very young and old); conditions that weaken the immune system such as cancer, diabetes, or the absence of a spleen; and major trauma and burns. The term "σήψις" (sepsis) was introduced by Hippocrates in the fourth century BC, and it meant the process of decay or decomposition of organic matter. In the eleventh century, Avicenna used the term "blood rot" for diseases linked to severe purulent process. Though severe systemic toxicity had already been observed, it was only in the 19th century that the specific term – sepsis – was used for this condition. By the end of the 19th century, it was widely believed that microbes produced substances that could injure the mammalian host and that soluble toxins released during infection caused the fever and shock that were commonplace during severe infections. Pfeiffer coined the term endotoxin at the beginning of the 20th century to denote the pyrogenic principle associated with Vibrio cholerae. It was soon realised that endotoxins were expressed by most and perhaps all gram-negative bacteria. The lipopolysaccharide character of enteric endotoxins was elucidated in 1944 by Shear. The molecular character of this material was determined by Luderitz et al. in 1973. 
It was discovered in 1965 that a strain of C3H/HeJ mice was immune to endotoxin-induced shock. The genetic locus for this effect was dubbed Lps. These mice were also found to be hypersusceptible to infection by gram-negative bacteria. These observations were finally linked in 1998 by the discovery of the toll-like receptor 4 gene (TLR4). Genetic mapping work, performed over a period of five years, showed that TLR4 was the sole candidate locus within the Lps critical region; this strongly implied that a mutation within TLR4 must account for the lipopolysaccharide resistance phenotype. The defect in the TLR4 gene that led to the endotoxin-resistant phenotype was discovered to be due to a mutation in the cytoplasmic domain of the receptor.

Society and culture

Sepsis was the most expensive condition treated in U.S. hospital stays in 2011, at an aggregate cost of $20.3 billion for nearly 1.1 million hospitalizations. Costs for sepsis hospital stays more than quadrupled since 1997, with an 11.5 percent annual increase. By payer, it was the most costly condition billed to Medicare, the second-most costly billed to Medicaid and the uninsured, and the fourth-most costly billed to private insurance.

A large international collaboration entitled the "Surviving Sepsis Campaign" was established in 2002 to educate people about sepsis and to improve patient outcomes with sepsis. The Campaign has published an evidence-based review of management strategies for severe sepsis, with the aim of publishing a complete set of guidelines in subsequent years. Sepsis Alliance is a charitable organization run by a team of dedicated laypeople and healthcare professionals who share a strong commitment to battling sepsis. The organization was created to raise sepsis awareness among both the general public and healthcare professionals.
"The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3).". JAMA. 315 (8): 801–10. doi:10.1001/jama.2016.0287. PMID 26903338. - Patel, GP; Balk, RA (January 15, 2012). "Systemic steroids in severe sepsis and septic shock". American Journal of Respiratory and Critical Care Medicine. 185 (2): 133–9. doi:10.1164/rccm.201011-1897CI. PMID 21680949. - Martí-Carvajal, AJ; Solà, I; Gluud, C; Lathyris, D; Cardona, AF (12 December 2012). "Human recombinant protein C for severe sepsis and septic shock in adult and paediatric patients.". The Cochrane database of systematic reviews. 12: CD004388. doi:10.1002/14651858.CD004388.pub6. PMID 23235609. - Jawad, I; Lukšić, I; Rafnsson, SB (June 2012). "Assessing available information on the burden of sepsis: Global estimates of incidence, prevalence and mortality" (PDF). Journal of Global Health. 2 (1): 010404. doi:10.7189/jogh.02.010404 (inactive 2015-02-02). PMC . PMID 23198133. - Martin, GS (June 2012). "Sepsis, severe sepsis and septic shock: Changes in incidence, pathogens and outcomes". Expert Review of Anti-infective Therapy. 10 (6): 701–6. doi:10.1586/eri.12.50. PMC . PMID 22734959. - Angus, DC; van der Poll, T (August 29, 2013). "Severe sepsis and septic shock". The New England Journal of Medicine. 369 (9): 840–51. doi:10.1056/NEJMra1208623. PMID 23984731. Lay summary (August 30, 2013). - Bone, R; Balk, R; Cerra, F; Dellinger, R; et al. (1992). "Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine" (PDF). Chest. 101 (6): 1644–55. doi:10.1378/chest.101.6.1644. PMID 1303622. - SCCM/ESICM/ACCP/ATS/SIS; Levy, MM; Fink, MP; Marshall, JC; et al. (April 2003). "2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference" (PDF). Critical Care Medicine. 31 (4): 1250–6. doi:10.1097/01.CCM.0000050454.01978.3B. PMID 12682500 – via European Society of Intensive Care Medicine (ESICM). - Felner, Kevin; Smith, Robert L. (2012). "Ch. 138: Sepsis". In McKean, Sylvia; Ross, John J.; Dressler, Daniel D.; Brotman, Daniel J.; et al. Principles and Practice of Hospital Medicine. New York: McGraw-Hill. pp. 1099–109. ISBN 0071603891. - MedlinePlus Encyclopedia Sepsis‹See TfD›. Retrieved November 29, 2014. - Munford, Robert S.; Suffredini, Anthony F. (2014). "Ch. 75: Sepsis, Severe Sepsis and Septic Shock". In Bennett, John E.; Dolin, Raphael; Blaser, Martin J. Mandell, Douglas, and Bennett's Principles and Practice of Infectious Diseases (8th ed.). Philadelphia: Elsevier Health Sciences. pp. 914–34. ISBN 9780323263733. - Bloch, KC (2010). "Ch. 4: Infectious Diseases". In McPhee, Stephen J.; Hammer, Gary D. Pathophysiology of Disease (6th ed.). New York: McGraw-Hill. Retrieved January 10, 2013 – via AccessMedicine. (subscription required (. )) - Ramachandran, G (January 2014). "Gram-positive and gram-negative bacterial toxins in sepsis: A brief review". Virulence. 5 (1): 213–8. doi:10.4161/viru.27024. PMC . PMID 24193365. - Delaloye, J; Calandra, T (January 2014). "Invasive candidiasis as a cause of sepsis in the critically ill patient". Virulence. 5 (1): 161–9. doi:10.4161/viru.26187. PMC . PMID 24157707. - Wacker, C; Prkno, A; Brunkhorst, FM; Schlattmann, P (May 2013). "Procalcitonin as a diagnostic marker for sepsis: A systematic review and meta-analysis". The Lancet Infectious Diseases. 13 (5): 426–35. doi:10.1016/S1473-3099(12)70323-7. PMID 23375419. 
- "American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference: Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis" (PDF). Critical Care Medicine. 20 (6): 864–74. 1992. doi:10.1097/00003246-199206000-00025. PMID 1597042. - Soong, J; Soni, N (June 2012). "Sepsis: Recognition and treatment". Clinical Medicine. 12 (3): 276–80. doi:10.7861/clinmedicine.12-3-276. PMID 22783783. - Abraham, E; Singer, M (2007). "Mechanisms of sepsis-induced organ dysfunction" (PDF). Critical Care Medicine. 35 (10): 2408–16. doi:10.1097/01.CCM.0000282072.56245.91. PMID 17948334 – via South African Society of Surgeons in Training (SASSIT). - Ranieri, VM; Rubenfeld, GD; Thompson, BT; Ferguson, ND; et al. (June 2012). "Acute respiratory distress syndrome: The Berlin definition". JAMA. 307 (23): 2526–33. doi:10.1001/jama.2012.5669. PMID 22797452. - "Meet the new ARDS: Expert panel announces new definition, severity classes". PulmCCM. Matthew Hoffman. - International Consensus Conference on Pediatric Sepsis; Goldstein, B; Giroir, B; Randolph, A (2005). "International Pediatric Sepsis Consensus Conference: Definitions for sepsis and organ dysfunction in pediatrics". Pediatric Critical Care Medicine. 6 (1): 2–8. doi:10.1097/01.PCC.0000149131.72248.E6. PMID 15636651. - Backes, Y; van der Sluijs, KF; Mackie, DP; Tacke, F; Koch, A; Tenhunen, JJ; Schultz, MJ (September 2012). "Usefulness of suPAR as a biological marker in patients with systemic inflammation or infection: a systematic review". Intensive Care Medicine. 38 (9): 1418–28. doi:10.1007/s00134-012-2613-1. PMC . PMID 22706919. - Mayr, FB; Yende, S; Angus, DC (January 2014). "Epidemiology of severe sepsis". Virulence. 5 (1): 4–11. doi:10.4161/viru.27372. PMC . PMID 24335434. - Satar, M; Ozlu, F (September 2012). "Neonatal sepsis: A continuing disease burden" (PDF). The Turkish Journal of Pediatrics. 54 (5): 449–57. PMID 23427506. - Ely, E. Wesley; Goyette, Richert E. (2005). "Ch. 46: Sepsis with Acute Organ Dysfunction". In Hall, Jesse B.; Schmidt, Gregory A.; Wood, Lawrence D.H. Principles of Critical Care (3rd ed.). New York: McGraw-Hill Medical. ISBN 0071416404 – via AccessMedicine. (subscription required (. )) - Shukla, P; Rao, GM; Pandey, G; Sharma, S; et al. (September 5, 2014). "Therapeutic interventions in sepsis: Current and anticipated pharmacological agents". British Journal of Pharmacology. 171 (22): 5011–31. doi:10.1111/bph.12829. PMID 24977655. - Park, BS; Lee, JO (December 2013). "Recognition of lipopolysaccharide pattern by TLR4 complexes". Experimental & Molecular Medicine. 45 (12): e66. doi:10.1038/emm.2013.97. PMC . PMID 24310172. - Cross, AS (January 2014). "Anti-endotoxin vaccines: Back to the future". Virulence. 5 (1): 219–25. doi:10.4161/viru.25965. PMC . PMID 23974910. - Fournier, B; Philpott, DJ (July 2005). "Recognition of Staphylococcus aureus by the innate immune system". Clinical Microbiology Reviews. 18 (3): 521–40. doi:10.1128/CMR.18.3.521-540.2005. PMC . PMID 16020688. - Leentjens, J; Kox, M; van der Hoeven, JG; Netea, MG; et al. (June 15, 2013). "Immunotherapy for the adjunctive treatment of sepsis: From immunosuppression to immunostimulation. Time for a paradigm change?". American Journal of Respiratory and Critical Care Medicine. 187 (12): 1287–93. doi:10.1164/rccm.201301-0036CP. PMID 23590272. - Antonopoulou, A; Giamarellos-Bourboulis, EJ (January 2011). "Immunomodulation in sepsis: State of the art and future perspective". Immunotherapy. 
3 (1): 117–28. doi:10.2217/imt.10.82. PMID 21174562. - Nimah, M; Brilli, RJ (2003). "Coagulation dysfunction in sepsis and multiple organ system failure" (PDF). Critical Care Clinics. 19 (3): 441–58. doi:10.1016/s0749-0704(03)00008-3. PMID 12848314 – via South African Society of Surgeons in Training (SASSIT). - Marik, PE (June 2014). "Iatrogenic salt water drowning and the hazards of a high central venous pressure". Annals of Intensive Care. 2014 (4): 21. doi:10.1186/s13613-014-0021-0. PMC . PMID 25110606. - Marik, PE (June 2014). "Early management of severe sepsis: concepts and controversies". Chest. 145 (6): 1407–18. doi:10.1378/chest.13-2104. PMID 24889440. - Daniels, R. (11 March 2011). "Surviving the first hours in sepsis: getting the basics right (an intensivist's perspective)". Journal of Antimicrobial Chemotherapy. 66 (Supplement 2): ii11–ii23. doi:10.1093/jac/dkq515. PMID 21398303. - Scottish Intercollegiate Guidelines Network (SIGN) (May 2014). Guideline 139: care of deteriorating patients. Edinburgh: SIGN. ISBN 978-1-909103-26-9. - Hirasawa, H; Oda, S; Nakamura, M (September 7, 2009). "Blood glucose control in patients with severe sepsis and septic shock". World Journal of Gastroenterology. 15 (33): 4132–6. doi:10.3748/wjg.15.4132. PMC . PMID 19725146. - Sterling, SA; Miller, WR; Pryor, J; Puskarich, MA; Jones, AE (26 June 2015). "The Impact of Timing of Antibiotics on Outcomes in Severe Sepsis and Septic Shock: A Systematic Review and Meta-Analysis.". Critical Care Medicine. 43: 1907–15. doi:10.1097/CCM.0000000000001142. PMID 26121073. - Sabatine, [edited by] Marc S. (2014). Pocket medicine (Fifth edition. ed.). [S.l.]: Aspen Publishers, Inc. ISBN 1451193785. - Dellinger, RP; Levy, MM; Carlet, JM; Bion, J; et al. (January 2008). "Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock: 2008". Intensive Care Medicine. 34 (1): 17–60. doi:10.1007/s00134-007-0934-2. PMC . PMID 18058085. - de Caen, AR; Berg, MD; Chameides, L; Gooden, CK; Hickey, RW; Scott, HF; Sutton, RM; Tijssen, JA; Topjian, A; van der Jagt, ÉW; Schexnayder, SM; Samson, RA (3 November 2015). "Part 12: Pediatric Advanced Life Support: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care.". Circulation. 132 (18 Suppl 2): S526–42. doi:10.1161/cir.0000000000000266. PMID 26473000. - Fluids in Sepsis and Septic Shock Group; Rochwerg, B; Alhazzani, W; Sindi, A; et al. (September 2014). "Fluid resuscitation in sepsis: A systematic review and network meta-analysis". Annals of Internal Medicine. 161 (5): 347–55. doi:10.7326/M14-0178. PMID 25047428. - Perel, P; Roberts, I; Ker, K (2013). "Colloids versus crystalloids for fluid resuscitation in critically ill patients". Cochrane Database of Systematic Reviews. 2 (2): CD000567. doi:10.1002/14651858.CD000567.pub6. PMID 23450531. - Zarychanski, R; Abou-Setta, AM; Turgeon, AF; Houston, BL; et al. (February 2013). "Association of hydroxyethyl starch administration with mortality and acute kidney injury in critically ill patients requiring volume resuscitation: A systematic review and meta-analysis". JAMA. 309 (7): 678–88. doi:10.1001/jama.2013.430. PMID 23423413. - Haase, N; Perner, A; Hennings, LI; Siegemund, M; et al. (2013). "Hydroxyethyl starch 130/0.38-0.45 versus crystalloid or albumin in patients with sepsis: Systematic review with meta-analysis and trial sequential analysis". BMJ. 346: f839. doi:10.1136/bmj.f839. PMC . PMID 23418281. 
- Serpa Neto, A; Veelo, DP; Peireira, VG; de Assunção, MS; et al. (February 2014). "Fluid resuscitation with hydroxyethyl starches in patients with sepsis is associated with an increased incidence of acute kidney injury and use of renal replacement therapy: A systematic review and meta-analysis of the literature". Journal of Critical Care. 29 (1): 185.e1–7. doi:10.1016/j.jcrc.2013.09.031. PMID 24262273. - Patel, A; Laffan, MA; Waheed, U; Brett, SJ (July 22, 2014). "Randomised trials of human albumin for adults with sepsis: A systematic review and meta-analysis with trial sequential analysis of all-cause mortality". BMJ. 349: g4561. doi:10.1136/bmj.g4561. PMID 25099709. - TRISS Trial Group; Scandinavian Critical Care Trials Group; Holst, LB; Haase, N; et al. (October 9, 2014). "Lower versus higher hemoglobin threshold for transfusion in septic shock". The New England Journal of Medicine. 371 (15): 1381–91. doi:10.1056/NEJMoa1406617. PMID 25270275. - Cherfan, AJ; Arabi, YM; Al-Dorzi, HM; Kenny, LP (May 2012). "Advantages and disadvantages of etomidate use for intubation of patients with sepsis". Pharmacotherapy. 32 (5): 475–82. doi:10.1002/j.1875-9114.2012.01027.x. PMID 22488264. - Chan, CM; Mitchell, AL; Shorr, AF (November 2012). "Etomidate is associated with mortality and adrenal insufficiency in sepsis: A meta-analysis". Critical Care Medicine. 40 (11): 2945–53. doi:10.1097/CCM.0b013e31825fec26. PMID 22971586. - Gu, WJ; Wang, F; Tang, L; Liu, JC (September 25, 2014). "Single-dose etomidate does not increase mortality in patients with sepsis: A systematic review and meta-analysis of randomized controlled trials and observational studies". Chest. 147 (2): 335. doi:10.1378/chest.14-1012. PMID 25255427. - Volbeda M, Wetterslev J, Gluud C, Zijlstra JG, van der Horst IC, Keus F (July 2015). "Glucocorticosteroids for sepsis: systematic review with meta-analysis and trial sequential analysis". Intensive Care Med. 41 (7): 1220–34. doi:10.1007/s00134-015-3899-6. PMID 26100123. - Annane, D; Bellissant, E; Bollaert, PE; Briegel, J; Keh, D; Kupfer, Y (4 December 2015). "Corticosteroids for treating sepsis.". The Cochrane database of systematic reviews. 12: CD002243. doi:10.1002/14651858.CD002243.pub3. PMID 26633262. - American College of Critical Care Medicine; Marik, PE; Pastores, SM; Annane, D; et al. (2008). "Recommendations for the diagnosis and management of corticosteroid insufficiency in critically ill adult patients: Consensus statements from an international task force by the American College of Critical Care Medicine" (PDF). Critical Care Medicine. 36 (6): 1937–49. doi:10.1097/CCM.0b013e31817603ba. PMID 18496365 – via University of Chicago. - Early Goal-Directed Therapy Collaborative Group; Rivers, E; Nguyen, B; Havstad, S; et al. (2001). "Early goal-directed therapy in the treatment of severe sepsis and septic shock". The New England Journal of Medicine. 345 (19): 1368–77. doi:10.1056/NEJMoa010307. PMID 11794169. - Fuller, BM; Dellinger, RP (June 2012). "Lactate as a hemodynamic marker in the critically ill.". Current opinion in critical care. 18 (3): 267–72. doi:10.1097/MCC.0b013e3283532b8a. PMC . PMID 22517402. - Dell'anna, AM; Taccone, FS (19 June 2015). "Early-goal directed therapy for septic shock: is it the end?". Minerva anestesiologica. 81: 1138–43. PMID 26091011. - Rusconi, AM; Bossi, I; Lampard, JG; Szava-Kovats, M; Bellone, A; Lang, E (16 May 2015). 
"Early goal-directed therapy vs usual care in the treatment of severe sepsis and septic shock: a systematic review and meta-analysis.". Internal and emergency medicine. 10: 731–43. doi:10.1007/s11739-015-1248-y. PMID 25982917. - Shane, AL; Stoll, BJ (January 2014). "Neonatal sepsis: progress towards improved outcomes". Journal of Infection. 68 (Supplement 1): S24–32. doi:10.1016/j.jinf.2013.09.011. PMID 24140138. - Camacho-Gonzalez, A; Spearman, PW; Stoll, BJ (April 2013). "Neonatal infectious diseases: evaluation of neonatal sepsis". Pediatric Clinics of North America. 60 (2): 367–89. doi:10.1016/j.pcl.2012.12.003. PMID 23481106. - Alejandria, MM; Lansang, MA; Dans, LF; Mantaring, JB 3rd (September 2013). "Intravenous immunoglobulin for treating sepsis, severe sepsis and septic shock". Cochrane Database of Systematic Reviews. 9 (CD001090): CD001090. doi:10.1002/14651858.CD001090.pub2. PMID 24043371. - Szakmany, T; Hauser, B; Radermacher, P (September 2012). "N-acetylcysteine for sepsis and systemic inflammatory response in adults". Cochrane Database of Systematic Reviews. 9 (CD006616): CD006616. doi:10.1002/14651858.CD006616.pub2. PMID 22972094. - Fink, MP; Warren, HS (October 2014). "Strategies to improve drug development for sepsis.". Nature reviews. Drug discovery. 13 (10): 741–58. doi:10.1038/nrd4368. PMID 25190187. - Russel, JA (October 2008). "The current management of septic shock". Minerva Medica. 99 (5): 431–58. PMID 18971911. - Best Evidence in Emergency Medicine Investigator, Group; Carpenter, CR; Keim, SM; Upadhye, S; et al. (October 2009). "Risk stratification of the potentially septic patient in the emergency department: The mortality in the emergency department sepsis (MEDS) score". The Journal of Emergency Medicine. 37 (3): 319–27. doi:10.1016/j.jemermed.2009.03.016. PMID 19427752. - Jackson, JC; Hopkins, RO; Miller, RR; Gordon, SM; et al. (November 2009). "Acute respiratory distress syndrome, sepsis, and cognitive decline: A review and case study". Southern Medical Journal. 102 (11): 1150–7. doi:10.1097/SMJ.0b013e3181b6a592. PMC . PMID 19864995. - Lyle, NH; Pena, OM; Boyd, JH; Hancock, RE (September 2014). "Barriers to the effective treatment of sepsis: antimicrobial agents, sepsis definitions, and host-directed therapies". Annals of the New York Academy of Sciences. 1323 (2014): 101–14. doi:10.1111/nyas.12444. PMID 24797961. - Munford, Robert S. (2011). "Ch. 271: Severe Sepsis and Septic Shock". In Longo, Dan L.; Fauci, Anthony S.; Kasper, Dennis L.; Hauser, Stephen L.; et al. Harrison's Principles of Internal Medicine (18th ed.). New York: McGraw-Hill. pp. 2223–231. ISBN 9780071748896. - Sutton, JP; Friedman, B (September 2013). "Trends in Septicemia Hospitalizations and Readmissions in Selected HCUP States, 2005 and 2010". Healthcare Cost and Utilization Project. Rockville, MD: Agency for Healthcare Research and Quality. PMID 24228290. - Martin, GS; Mannino, DM; Eaton, S; Moss, M (2003). "The epidemiology of sepsis in the United States from 1979 through 2000". The New England Journal of Medicine. 348 (16): 1546–54. doi:10.1056/NEJMoa022139. PMID 12700374. - Hines, AL; Barrett, ML; Jiang, HJ; Steiner, CA (April 2014). "Conditions with the Largest Number of Adult Hospital Readmissions by Payer, 2011.". Healthcare Cost and Utilization Project. Rockville, MD: Agency for Healthcare Research and Quality. PMID 24901179. - Koh, GC; Peacock, SJ; van der Poll, T; Wiersinga, WJ (April 2012). "The impact of diabetes on the pathogenesis of sepsis". 
European Journal of Clinical Microbiology & Infectious Diseases. 31 (4): 379–88. doi:10.1007/s10096-011-1337-4. PMC . PMID 21805196. - Rubin, LG; Schaffner, W (July 2014). "Clinical practice. Care of the asplenic patient". The New England Journal of Medicine. 371 (4): 349–56. doi:10.1056/NEJMcp1314291. PMID 25054718. - Vincent, Jean-Louis (2008). "Ch. 1: Definition of Sepsis and Non-infectious SIRS". In Cavaillon, Jean-Marc; Adrie, Christophe. Sepsis and Non-infectious Systemic Inflammation: From Biology to Critical Care. John Wiley & Sons. p. 3. ISBN 9783527319350. - Marshall, JC (July 2013). "Sepsis: Rethinking the approach to clinical research". Journal of Leukocyte Biology. 94 (1): 471–82. doi:10.1189/jlb.0607380. PMID 18171697. - Shear, MJ (1944). "Chemical treatment of tumors, IX: Reactions of mice with primary subcutaneous tumors to injection of a hemorrhage-producing bacterial polysaccharide". Journal of the National Cancer Institute. 4 (5): 461–76. doi:10.1093/jnci/4.5.461 (inactive 2015-02-02). - Luderitz, O; Galanos, C; Lehmann, V; Nurminen, M; et al. (1973). "Lipid A: Chemical structure and biologic activity". The Journal of Infectious Diseases. 128: 29. doi:10.1093/infdis/128.Supplement_1.S17. JSTOR 30106029. - Heppner, G; Weiss, DW (1965). "High susceptibility of strain A mice to endotoxin and endotoxin-red blood cell mixtures". Journal of Bacteriology. 90 (3): 696–703. PMC . PMID 16562068. - O'Brien, AD; Rosenstreich, DL; Scher, I; Campbell, GH; et al. (1980). "Genetic control of susceptibility to Salmonella typhimurium in mice: Role of the LPS gene". Journal of Immunology. 124 (1): 20–4. PMID 6985638. - Poltorak, A; Smirnova, I; He, X; Liu, M-Y; et al. (1998). "Genetic and physical mapping of the Lps locus: Identification of the toll-4 receptor as a candidate gene in the critical region". Blood Cells, Molecules and Diseases. 24 (3): 340–55. doi:10.1006/bcmd.1998.0201. PMID 10087992. - Poltorak, A; He, X; Smirnova, I; Liu, MY; et al. (1998). "Defective LPS signaling in C3H/HeJ and C57BL/10ScCr mice: Mutations in Tlr4 gene". Science. 282 (5396): 2085–8. Bibcode:1998Sci...282.2085P. doi:10.1126/science.282.5396.2085. PMID 9851930. - Torio, CM; Andrews, RM (August 2013). "National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2011". Healthcare Cost and Utilization Project. Rockville, MD: Agency for Healthcare Research and Quality. PMID 24199255. - Pfuntner, A; Wier, LM; Steiner, C (December 2013). "Costs for Hospital Stays in the United States, 2011". Healthcare Cost and Utilization Project. Rockville, MD: Agency for Healthcare Research and Quality. PMID 24455786. - "History". Surviving Sepsis Campaign. Society of Critical Care Medicine. Retrieved February 24, 2014. - "About Us - About the Sepsis Alliance". www.sepsis.org. Retrieved 8 October 2015. Media related to Sepsis at Wikimedia Commons
Whale is the common name for a widely distributed and diverse group of fully aquatic marine mammals. They are an informal grouping within the infraorder Cetacea, excluding dolphins and porpoises, so to zoologists the grouping is paraphyletic. The whales comprise the extant families Cetotheriidae (whose only living member is the pygmy right whale), Balaenopteridae (the rorquals), Balaenidae (right whales), Eschrichtiidae (the gray whale), Monodontidae (belugas and narwhals), Physeteridae (the sperm whale), Kogiidae (the dwarf and pygmy sperm whales), and Ziphiidae (the beaked whales). There are 40 extant species of whales. The two parvorders of whales, Mysticeti and Odontoceti, are thought to have split apart around 34 million years ago. Whales, dolphins and porpoises belong to the order Cetartiodactyla with the even-toed ungulates, and their closest living relatives are the hippopotamuses, having diverged about 40 million years ago.

Whales range in size from the 2.6 metres (8.5 ft) and 135 kilograms (298 lb) dwarf sperm whale to the 34 metres (112 ft) and 190 metric tons (210 short tons) blue whale, which is the largest creature on earth. Several species exhibit sexual dimorphism, in that the females are larger than the males. They have streamlined bodies and two limbs that are modified into flippers. Though not as flexible or agile as seals, whales can travel at up to 20 knots. Balaenopterids use their throat pleats to expand the mouth to take in gulps of water. Balaenids have heads that can make up 40% of their body mass, which lets them take in large amounts of water. Odontocetes have conical teeth adapted for catching fish or squid. Mysticetes have a well-developed sense of "smell", whereas odontocetes have well-developed hearing that is adapted for both air and water; it is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water.

Although whales are widespread, most species prefer the colder waters of the Northern and Southern Hemispheres and migrate to the equator to give birth. Odontocetes feed largely on fish and squid; a few, like the sperm whale, feed on large invertebrates such as giant squid. Gray whales are specialized for feeding on bottom-dwelling molluscs. Male whales typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months, and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time. Whales produce a variety of vocalizations, notably the songs of the humpback whale.

Once relentlessly hunted for their products, whales are now protected by international law. The North Atlantic right whale nearly became extinct in the twentieth century, with a population low of 450, and the North Pacific gray whale population is ranked Critically Endangered by the IUCN. Besides whaling, whales also face threats from bycatch and marine pollution. The meat, blubber and baleen of whales have traditionally been used by indigenous peoples of the Arctic. Whales have been depicted in various cultures worldwide, notably by the Inuit and the coastal peoples of Vietnam and Ghana, who sometimes hold whale funerals.
Whales occasionally feature in literature and film, as in the great white whale of Herman Melville's Moby Dick. Small whales, such as belugas, are sometimes kept in captivity and trained to perform tricks, but breeding success has been poor and the animals often die within a few months of capture. Whale watching has become a form of tourism around the world.

Taxonomy and evolution

The whales are part of the largely terrestrial mammalian clade Laurasiatheria. Whales do not form a clade or order; the infraorder Cetacea includes dolphins and porpoises, which are not considered whales. Cetaceans are divided into two parvorders:

- The largest parvorder, Mysticeti (baleen whales), is characterized by the presence of baleen, a sieve-like structure in the upper jaw made of keratin, which is used to filter plankton and other small organisms from the water.
- Odontocetes (toothed whales) are characterized by bearing sharp teeth for hunting, as opposed to their counterparts' baleen.

Cetaceans and artiodactyls are now classified under the order Cetartiodactyla, often still referred to as Artiodactyla, which includes both whales and hippopotamuses. The hippopotamus and pygmy hippopotamus are the whale's closest terrestrial living relatives.

Mysticetes are also known as baleen whales. They have a pair of blowholes side by side and lack teeth, which renders them incapable of catching larger prey; instead they have baleen plates, sieve-like structures in the upper jaw made of keratin, which they use to filter plankton and other food from the water. This forces them to follow krill or plankton migrations. Some whales, such as the humpback, reside in the polar regions, where they feed on a reliable source of schooling fish and krill. These animals rely on their well-developed flippers and tail fin to propel themselves through the water; they swim by moving their fore-flippers and tail fin up and down. Whale ribs loosely articulate with their thoracic vertebrae at the proximal end but do not form a rigid rib cage. This adaptation allows their chest to compress during deep dives as the pressure increases with depth. Mysticetes consist of four families: rorquals (balaenopterids), cetotheriids, right whales (balaenids), and gray whales (eschrichtiids). The main differences between the mysticete families lie in their feeding adaptations and subsequent behaviour.

Balaenopterids are the rorquals. These animals, along with the cetotheriids, rely on their throat pleats to gulp large amounts of water while feeding. The throat pleats extend from the mouth to the navel and allow the mouth to expand to a large volume for more efficient capture of the small animals they feed on. Balaenopterids consist of two genera and eight species.

Balaenids are the right whales. These animals have very large heads, which can make up as much as 40% of their body mass, and much of the head is the mouth. This allows them to take in large amounts of water into their mouths, letting them feed more effectively.

Eschrichtiids have one living member: the gray whale. They are bottom feeders, mainly eating crustaceans and benthic invertebrates. They feed by turning on their sides and taking in water mixed with sediment, which is then expelled through the baleen, leaving their prey trapped inside. This is an efficient method of hunting, in which the whale has no major competitors.
Odontocetes are known as toothed whales; they have teeth and only one blowhole. They rely on their well-developed sonar to find their way in the water. Toothed whales send out ultrasonic clicks using the melon; the sound waves travel through the water and, upon striking an object, bounce back to the whale. The returning vibrations are received through fatty tissues in the jaw, rerouted to the ear-bone, and passed on to the brain, where they are interpreted. All toothed whales are opportunistic, meaning they will eat anything they can fit in their throat, because they are unable to chew. These animals rely on their well-developed flippers and tail fin to propel themselves through the water; they swim by moving their fore-flippers and tail fin up and down. Whale ribs loosely articulate with their thoracic vertebrae at the proximal end, but they do not form a rigid rib cage. This adaptation allows the chest to compress during deep dives rather than resist the force of water pressure. Excluding dolphins and porpoises, odontocetes consist of four families: belugas and narwhals (monodontids), sperm whales (physeterids), dwarf and pygmy sperm whales (kogiids), and beaked whales (ziphiids).

Six species of dolphin, sometimes referred to as "blackfish", are commonly misconceived as whales: the killer whale, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whale, all of which are classified under the family Delphinidae (oceanic dolphins).

The differences between families of odontocetes include size, feeding adaptations and distribution. Monodontids consist of two species: the beluga and the narwhal. They both reside in the frigid Arctic and both have large amounts of blubber. Belugas, being white, hunt in large pods near the surface and around pack ice, their coloration acting as camouflage. Narwhals, being black, hunt in large pods in the aphotic zone, but their underbellies remain white so that they stay camouflaged when something is looking directly up or down at them. Neither has a dorsal fin, which helps them avoid collisions with pack ice.

Physeterids and kogiids are the sperm whales. Sperm whales include both the largest and the smallest odontocetes, and they spend a large portion of their lives hunting squid. P. macrocephalus spends most of its life in search of squid in the depths; these animals do not require any light at all: in fact, blind sperm whales have been caught in perfect health. The behaviour of kogiids remains largely unknown, but, due to their small lungs, they are thought to hunt in the photic zone.

Ziphiids consist of 22 species of beaked whale. These vary in size, coloration and distribution, but they all share a similar hunting style: they feed using a suction technique, aided by a pair of grooves on the underside of the head, not unlike the throat pleats of the rorquals.

Whales are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago. Primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic 5–10 million years later. Archaeoceti is a parvorder comprising these ancient whales, the predecessors of modern whales, stretching back to their first ancestors, which spent their lives near (and rarely in) the water.
The archaeocetes range from nearly fully terrestrial, to semi-aquatic, to fully aquatic forms; what defines an archaeocete is the presence of anatomical features exclusive to cetaceans, alongside primitive features not found in modern cetaceans, such as visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. Major anatomical changes include the hearing set-up that channels vibrations from the jaw to the earbone, which occurred with Ambulocetus 49 million years ago; a streamlined body and the growth of flukes on the tail, which occurred around 43 million years ago with Protocetus; the migration of the nasal openings toward the top of the cranium and the modification of the forelimbs into flippers, which occurred with Basilosaurus 35 million years ago; and the shrinking and eventual disappearance of the hind limbs, which took place with the first odontocetes and mysticetes 34 million years ago. Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 million years ago. Around 40 million years ago, a common ancestor of the two branched off into cetaceans and anthracotheres; nearly all anthracotheres went extinct in the Pleistocene, two-and-a-half million years ago, eventually leaving only one surviving lineage: the hippo.

Whales have torpedo-shaped bodies with non-flexible necks, limbs modified into flippers, no external ear flaps, a large tail fin, and flat heads (with the exception of monodontids and ziphiids). Whale skulls have small eye orbits, long snouts (with the exception of monodontids and ziphiids) and eyes placed on the sides of the head. Whales range in size from the 2.6 metres (8.5 ft) and 135 kilograms (298 lb) dwarf sperm whale to the 34 metres (112 ft) and 190 metric tons (210 short tons) blue whale. Overall, they tend to dwarf other cetartiodactyls; the blue whale is the largest creature on earth. Several species have female-biased sexual dimorphism, with the females being larger than the males. One exception is the sperm whale, in which the males are larger than the females.

Odontocetes, such as the sperm whale, possess teeth with cementum cells overlying dentine cells. Unlike human teeth, which are composed mostly of enamel on the portion of the tooth outside of the gum, whale teeth have cementum outside the gum. Only in larger whales, where the cementum is worn away on the tip of the tooth, does enamel show. Mysticetes have large plates of whalebone (baleen), made of keratin, instead of teeth. Mysticetes have two blowholes, whereas odontocetes have only one.

Breathing involves expelling stale air from the blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs; a humpback whale's lungs can hold about 5,000 litres of air. Spout shapes differ among species, which facilitates identification.

All whales have a thick layer of blubber. In species that live near the poles, the blubber can be as thick as 11 inches (28 cm). The blubber aids buoyancy (which is helpful for a 100-ton whale), offers some protection because predators would have a hard time getting through a thick layer of fat, and provides energy for fasting when migrating to the equator; its primary purpose, however, is insulation from the harsh climate. It can constitute as much as 50% of a whale's body weight. Calves are born with only a thin layer of blubber, but some species compensate for this with a thick lanugo.
Whales have a two- to three-chambered stomach that is similar in structure to that of terrestrial carnivores. Mysticetes contain a proventriculus as an extension of the oesophagus; this contains stones that grind up food. They also have fundic and pyloric chambers.

Whales have two flippers on the front, and a tail fin. These flippers contain four digits. Although whales do not possess fully developed hind limbs, some, such as the sperm whale and bowhead whale, possess discrete rudimentary appendages, which may contain feet and digits. Whales are fast swimmers in comparison to seals, which typically cruise at 5–15 kn, or 9–28 kilometres per hour (5.6–17.4 mph); the fin whale, in comparison, can travel at speeds of up to 47 kilometres per hour (29 mph) and the sperm whale can reach speeds of 35 kilometres per hour (22 mph). The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility; whales are unable to turn their heads. When swimming, whales rely on their tail fin to propel them through the water. Flipper movement is continuous. Whales swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Some species leap out of the water, which may allow them to travel faster. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin.

Whales are adapted for diving to great depths. In addition to their streamlined bodies, they can slow their heart rate to conserve oxygen; blood is rerouted from tissue tolerant of water pressure to the heart and brain, among other organs; haemoglobin and myoglobin store oxygen in body tissue; and they have twice the concentration of myoglobin as haemoglobin. Before going on long dives, many whales exhibit a behaviour known as sounding; they stay close to the surface for a series of short, shallow dives while building their oxygen reserves, and then make a sounding dive.

The whale ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In whales, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, whales receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The whale ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater.

Odontocetes send out high-frequency clicks from an organ known as the melon. The melon consists of fat, and the skull of any such creature will have a large depression to accommodate it. Melon size varies between species: the bigger the melon, the more dependent the species is on it. A beaked whale, for example, has a small bulge sitting on top of its skull, whereas a sperm whale's head is filled up mainly with the melon.

The whale eye is relatively small for the animal's size, yet whales do retain a good degree of eyesight. The eyes of a whale are placed on the sides of its head, so its vision consists of two fields, rather than a binocular view like humans have.
When belugas surface, their lens and cornea correct the nearsightedness that results from the refraction of light; their eyes contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than cone cells. Whales do, however, lack short-wavelength-sensitive visual pigments in their cone cells, indicating a more limited capacity for colour vision than most mammals. Most whales have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum; these adaptations allow large amounts of light to pass through the eye and, therefore, give a very clear image of the surrounding area. In water, a whale can see around 10.7 metres (35 ft) ahead of itself, but its range is smaller above water. Whales also have glands on the eyelids and outer corneal layer that act as protection for the cornea.

The olfactory lobes are absent in toothed whales, suggesting that they have no sense of smell. Some whales, such as the bowhead whale, possess a vomeronasal organ, which means that they can "sniff out" krill.

Whales are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. However, some toothed whales have preferences between different kinds of fish, indicating that some sense of taste remains. The presence of the Jacobson's organ indicates that whales can smell food once it is inside their mouth, which might be similar to the sensation of taste.

Whale vocalization is likely to serve several purposes. Some species, such as the humpback whale, communicate using melodic sounds, known as whale song. These sounds may be extremely loud, depending on the species. Sperm whales have only been heard making clicks, while toothed whales use sonar that may generate up to 20,000 watts of sound (+73 dBm or +43 dBW) and be heard for many miles. Captive whales have occasionally been known to mimic human speech. Scientists have suggested this indicates a strong desire on behalf of the whales to communicate with humans; whales have a very different vocal mechanism, so imitating human speech likely takes considerable effort.

Whales emit two distinct kinds of acoustic signals, which are called whistles and clicks:

- Clicks are quick broadband burst pulses, used for sonar, although some lower-frequency broadband vocalizations may serve a non-echolocative purpose such as communication; for example, the pulsed calls of belugas. Pulses in a click train are emitted at intervals of ~35–50 milliseconds, and in general these inter-click intervals are slightly greater than the round-trip time of sound to the target (a rough range calculation is sketched below).
- Whistles are narrow-band frequency-modulated (FM) signals, used for communicative purposes, such as contact calls.

Whales are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species of whale is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgement, and theory of mind. Whale spindle neurons are found in areas of the brain that are homologous to where they are found in humans, suggesting that they perform a similar function.
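The relationship noted in the clicks bullet above (inter-click intervals slightly exceeding the round-trip travel time of sound) can be illustrated with a little arithmetic. The sketch below is illustrative only and not taken from the article: it assumes a nominal speed of sound in seawater of roughly 1,500 m/s and treats one inter-click interval as an upper bound on the two-way travel time to the target.

```python
# Illustrative sketch (assumed figures): if a whale waits for each echo before
# clicking again, one inter-click interval bounds the two-way travel time of
# sound to the target, so range <= speed_of_sound * interval / 2.

SPEED_OF_SOUND_SEAWATER_M_S = 1500.0  # nominal value; varies with temperature, depth, salinity


def max_target_range_m(inter_click_interval_s: float) -> float:
    """Upper-bound target range implied by a single inter-click interval."""
    return SPEED_OF_SOUND_SEAWATER_M_S * inter_click_interval_s / 2.0


for interval_ms in (35, 50):  # the interval range quoted in the text
    print(f"{interval_ms} ms inter-click interval -> target at most ~{max_target_range_m(interval_ms / 1000.0):.0f} m away")
```

For the 35–50 ms intervals quoted above, this works out to targets no more than roughly 26–38 metres away, consistent with the idea that each click is sent only after the previous echo has returned.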
Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalisation quotient that can be used as another indication of animal intelligence (a rough worked example is sketched below). Sperm whales have the largest brain mass of any animal on earth, averaging 8,000 cubic centimetres (490 cu in) and 7.8 kilograms (17 lb) in mature males, compared with the average human brain, which averages 1,450 cubic centimetres (88 cu in) in mature males. The brain-to-body mass ratio in some odontocetes, such as belugas and narwhals, is second only to that of humans. In some whales, however, it is less than half that of humans: 0.9% versus 2.1%. This comparison seems more favourable if the large amount of blubber that some whales require for insulation is omitted.

Small whales are known to engage in complex play behaviour, which includes such things as producing stable underwater toroidal air-core vortex rings, or "bubble rings". There are two main methods of bubble ring production: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring, or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. They also appear to enjoy biting the vortex rings, so that they burst into many separate bubbles and then rise quickly to the surface. Whales are also known to produce bubble nets for the purpose of foraging.

Larger whales are also thought, to some degree, to engage in play. The southern right whale, for example, elevates its tail fluke above the water, remaining in the same position for a considerable amount of time. This is known as "sailing". It appears to be a form of play and is most commonly seen off the coasts of Argentina and South Africa. Humpback whales, among others, are also known to display this behaviour.

Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness. The most widely used test for self-awareness in animals is the mirror test, in which a temporary dye is placed on an animal's body and the animal is then presented with a mirror; researchers then see whether the animal shows signs of self-recognition. Some disagree with these findings, arguing that the results of these tests are open to human interpretation and susceptible to the Clever Hans effect. The test is much less definitive than when used for primates, because primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behaviour less certain. Sceptics argue that behaviours said to identify self-awareness resemble existing social behaviours, and so researchers could be mistaking social responses to another individual for self-awareness. The researchers counter-argue that the behaviours shown are evidence of self-awareness, as they are very different from normal responses to another individual. Whereas apes can merely touch the mark on themselves with their fingers, cetaceans show less definitive behaviour of self-awareness; they can only twist and turn themselves to observe the mark.
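As a rough illustration of the allometric brain-scaling idea above, the following sketch computes an encephalisation quotient using Jerison's commonly cited formulation (brain mass divided by 0.12 times body mass raised to the two-thirds power, masses in grams). The human body and brain masses (about 65 kg and 1.4 kg) and the sperm-whale body mass (about 40 tonnes) are assumed, illustrative figures rather than values from the article; the sperm-whale brain mass echoes the 7.8 kg average quoted above.

```python
# Minimal, illustrative sketch of an encephalisation quotient (EQ).
# Formula (Jerison's convention, masses in grams): EQ = brain_g / (0.12 * body_g ** (2 / 3))
# The body masses used below are assumptions for illustration only.


def encephalisation_quotient(brain_g: float, body_g: float) -> float:
    """Observed brain mass divided by the brain mass expected from 2/3-power body scaling."""
    return brain_g / (0.12 * body_g ** (2.0 / 3.0))


human_eq = encephalisation_quotient(brain_g=1_400, body_g=65_000)            # ~65 kg person (assumed)
sperm_whale_eq = encephalisation_quotient(brain_g=7_800, body_g=40_000_000)  # ~40 t sperm whale (assumed)

print(f"human EQ       ~ {human_eq:.1f}")        # roughly 7
print(f"sperm whale EQ ~ {sperm_whale_eq:.1f}")  # roughly 0.6
```

The resulting values, roughly 7 for humans and roughly 0.6 for the sperm whale, fit the picture in the text: whales have enormous brains in absolute terms but comparatively modest brain-to-body ratios.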
Whales are fully aquatic creatures, which means that their birth and courtship behaviours are very different from those of terrestrial and semi-aquatic creatures. Since they are unable to go onto land to calve, they deliver the baby with the fetus positioned for tail-first delivery. This prevents the baby from drowning during or upon delivery. To feed the newborn, whales, being aquatic, must squirt the milk into the mouth of the calf. Being mammals, they have mammary glands used for nursing calves; calves are weaned at about 11 months of age. This milk contains high amounts of fat, which hastens the development of blubber; it contains so much fat that it has the consistency of toothpaste. Females deliver a single calf, with gestation lasting about a year, dependency lasting one to two years, and maturity arriving around seven to ten years, all varying between the species. This mode of reproduction produces few offspring but increases the survival probability of each one. Females, referred to as "cows", carry the responsibility of childcare, as males, referred to as "bulls", play no part in raising calves.

Most mysticetes reside at the poles, so, to keep the calf from dying of the cold, mothers migrate to calving/mating grounds in warmer waters. They then stay there for a matter of months until the calf has developed enough blubber to survive the bitter temperatures of the poles. Until then, the calves feed on the mother's fatty milk. Migration times are basically uniform between the species of the suborder, but the males of some species will not undertake the migration; one example is the North Atlantic right whale. Most will travel from the Arctic or Antarctic into the tropics to mate, calve, and raise their young during the winter and spring; they migrate back to the poles in the warmer summer months so the calf can continue growing while the mother can continue eating, as mothers fast in the breeding grounds. One exception is the southern right whale, which migrates to Patagonia and western New Zealand to calve; both are well outside the tropics.

Unlike most animals, whales are conscious breathers. All mammals sleep, but whales cannot afford to become unconscious for long because they may drown. While knowledge of sleep in wild cetaceans is limited, toothed cetaceans in captivity have been recorded sleeping with one side of their brain at a time, so that they may swim, breathe consciously, and avoid both predators and social contact during their period of rest. A 2008 study found that sperm whales sleep in vertical postures just under the surface in passive, shallow 'drift-dives', generally during the day, during which they do not respond to passing vessels unless they are in contact, leading to the suggestion that whales possibly sleep during such dives.

Foraging and predation

All whales are carnivorous and predatory. Odontocetes, as a whole, mostly feed on fish and cephalopods, followed by crustaceans and bivalves. All species are generalist and opportunistic feeders. Mysticetes, as a whole, mostly feed on krill and plankton, followed by crustaceans and other invertebrates. A few are specialists.
Examples include the blue whale, which eats almost exclusively krill; the minke whale, which eats mainly schooling fish; the sperm whale, which specializes in squid; and the gray whale, which feeds on bottom-dwelling invertebrates. The elaborate baleen plates of filter-feeding mysticetes act as a sieve, allowing them to expel water before they swallow their planktonic food. Usually whales hunt solitarily, but they do sometimes hunt cooperatively in small groups. The former behaviour is typical when hunting non-schooling fish, slow-moving or immobile invertebrates, or endothermic prey. When large amounts of prey are available, whales such as certain mysticetes hunt cooperatively in small groups. Some cetaceans may forage with other kinds of animals, such as other species of whales or certain species of pinnipeds.

Large whales, such as mysticetes, are not usually subject to predation, but smaller whales, such as monodontids or ziphiids, are. These species are preyed on by the killer whale, or orca. To subdue and kill whales, orcas continuously ram them with their heads; this can sometimes kill bowhead whales or severely injure them. Other times they corral narwhals or belugas before striking. They are typically hunted by groups of 10 or fewer orcas, but they are seldom attacked by an individual. Calves are more commonly taken by orcas, but adults can be targeted as well. These small whales are also targeted by terrestrial predators. The polar bear is well adapted for hunting Arctic whales and calves. Bears are known to use sit-and-wait tactics as well as active stalking and pursuit of prey on ice or in water. Whales lessen the chance of predation by gathering in groups. This, however, means less room around the breathing hole as the ice slowly closes the gap. When out at sea, whales dive out of the reach of surface-hunting orcas. Polar bear attacks on belugas and narwhals are usually successful in winter, but rarely inflict any damage in summer.

A 2010 study considered whales to be a positive influence on the productivity of ocean fisheries, in what has been termed a "whale pump". Whales carry nutrients such as nitrogen from the depths back to the surface. This functions as an upward biological pump, reversing an earlier presumption that whales accelerate the loss of nutrients to the bottom. This nitrogen input in the Gulf of Maine is "more than the input of all rivers combined" emptying into the gulf, some 23,000 metric tons (25,000 short tons) each year. Whales defecate at the ocean's surface; their excrement is important for fisheries because it is rich in iron and nitrogen. Whale faeces are liquid and, instead of sinking, stay at the surface, where phytoplankton feed off them.

Upon death, whale carcasses fall to the deep ocean and provide a substantial habitat for marine life. Evidence of whale falls in present-day and fossil records shows that deep-sea whale falls support a rich assemblage of creatures, with a global diversity of 407 species, comparable to other neritic biodiversity hotspots, such as cold seeps and hydrothermal vents. Deterioration of whale carcasses happens through a series of three stages. Initially, mobile organisms such as sharks and hagfish scavenge the soft tissues at a rapid rate over a period of months, and as long as two years. This is followed by the colonization of the bones and surrounding sediments (which contain organic matter) by enrichment opportunists, such as crustaceans and polychaetes, over a period of years.
Finally, sulfophilic bacteria reduce the bones, releasing hydrogen sulphide and enabling the growth of chemoautotrophic organisms, which in turn support other organisms such as mussels, clams, limpets, and sea snails. This stage may last for decades and supports a rich assemblage of species, averaging 185 species per site.

Interaction with humans

Whaling by humans has existed since the Stone Age. Ancient whalers used harpoons to spear the bigger animals from boats out at sea. People from Norway started hunting whales around 2000 B.C., and people from Japan began hunting whales in the Pacific at least as early. Aboriginal groups typically hunted whales for their meat and blubber; they used baleen for baskets or roofing and made tools and masks out of bones. The Inuit hunted whales in the Arctic Ocean. The Basques started whaling as early as the 11th century, sailing as far as Newfoundland in the 16th century in search of right whales. Whalers of the 18th and 19th centuries hunted whales mainly for their oil, which was used as lamp fuel and a lubricant; for baleen or whalebone, which was used for items such as corsets and skirt hoops; and for ambergris, which was used as a fixative for perfumes. The most successful whaling nations at this time were the Netherlands, Japan, and the United States.

Commercial whaling was historically important as an industry throughout the 17th, 18th and 19th centuries. Whaling was at that time a sizeable European industry, with ships from Britain, France, Spain, Denmark, the Netherlands and Germany sometimes collaborating to hunt whales in the Arctic and sometimes competing, to the point of war. By the early 1790s, whalers, chiefly Americans and Australians, focused their efforts in the South Pacific, where they mainly hunted sperm whales and right whales, with catches of up to 39,000 right whales by Americans alone. By 1853, U.S. profits reached US$11,000,000 (UK£6.5m), equivalent to US$348,000,000 (UK£230m) today, the most profitable year for the American whaling industry. Commonly exploited species included North Atlantic right whales, sperm whales (mainly hunted by Americans), bowhead whales (mainly hunted by the Dutch), common minke whales, blue whales, and gray whales. The scale of whale harvesting decreased substantially after 1982, when the International Whaling Commission (IWC) placed a moratorium setting a catch limit for each country; aboriginal groups were excluded until 2004. Current whaling nations are Norway, Iceland, and Japan, despite their membership in the IWC, as well as the aboriginal communities of Siberia, Alaska, and northern Canada.

Subsistence hunters typically use whale products for themselves and depend on them for survival. National and international authorities have given special treatment to aboriginal hunters, since their methods of hunting are seen as less destructive and wasteful. This distinction is being questioned, as these aboriginal groups are using more modern weaponry and mechanized transport to hunt with, and are selling whale products in the marketplace. Some anthropologists argue that the term "subsistence" should also apply to these cash-based exchanges as long as they take place within local production and consumption. The IWC, established in 1946, limits the annual whale catch; since then, yearly profits for these "subsistence" hunters have been close to US$31 million (UK£20m). Whales can also be threatened by humans more indirectly.
They are unintentionally caught in fishing nets by commercial fisheries as bycatch and accidentally swallow fishing hooks. Gillnetting and seine netting are significant causes of mortality in whales and other marine mammals. Species commonly entangled include beaked whales. Whales are also affected by marine pollution. High levels of organic chemicals accumulate in these animals because they are high in the food chain; the chemicals are stored in their large reserves of blubber, more so in toothed whales, which sit higher up the food chain than baleen whales. Lactating mothers can pass the toxins on to their young. These pollutants can cause gastrointestinal cancers and greater vulnerability to infectious diseases. Whales can also be poisoned by swallowing litter, such as plastic bags. Environmentalists speculate that advanced naval sonar endangers some whales. Some scientists suggest that sonar may trigger whale beachings, and they point to signs that such whales have experienced decompression sickness.

The IWC moratorium, adopted in response to the steep decline in whale populations, did not apply to aboriginal groups until 2004. As of 2015, aboriginal communities are allowed to take, each year, 280 bowhead whales off Alaska and two from the western coast of Greenland, 620 gray whales off Washington state, three common minke whales off the eastern coast of Greenland and 178 off its western coast, 10 fin whales from the west coast of Greenland, nine humpback whales from the west coast of Greenland and 20 off St. Vincent and the Grenadines.

Several species that were commercially exploited have rebounded in numbers; for example, gray whales may be as numerous as they were prior to harvesting, although the North Atlantic population is functionally extinct. Conversely, the North Atlantic right whale has been extirpated from much of its former range, which stretched across the North Atlantic; it now remains only in small fragments along the coasts of Canada and Greenland, and it is considered functionally extinct along the European coastline. The IWC has designated two whale sanctuaries: the Southern Ocean Whale Sanctuary and the Indian Ocean Whale Sanctuary. The Southern Ocean Whale Sanctuary spans 30,560,860 square kilometres (11,799,610 sq mi) and envelops Antarctica. The Indian Ocean Whale Sanctuary takes up all of the Indian Ocean south to 55°S. The IWC is a voluntary organization with no enforcement mechanism; any nation may leave as it wishes, and the IWC cannot enforce any rule it makes.

As of 2013, the International Union for Conservation of Nature (IUCN) recognized 86 cetacean species, 40 of which are considered whales. Six are considered at risk: they are ranked "Critically Endangered" (the North Atlantic right whale), "Endangered" (blue whale, fin whale, North Pacific right whale, and sei whale), and "Vulnerable" (sperm whale). Twenty-one species have a "Data Deficient" ranking. Species that live in polar habitats are vulnerable to the effects of recent and ongoing climate change, particularly changes in the timing of pack-ice formation and melting.

An estimated 13 million people went whale watching globally in 2008, in all oceans except the Arctic. Rules and codes of conduct have been created to minimize harassment of the whales. Iceland, Japan and Norway have both whaling and whale-watching industries.
Whale watching lobbyists are concerned that the most inquisitive whales, which approach boats closely and provide much of the entertainment on whale-watching trips, will be the first to be taken if whaling is resumed in the same areas. Whale watching generated US$2.1 billion (UK£1.4 billion) per annum in tourism revenue worldwide, employing around 13,000 workers. In contrast, the whaling industry, with the moratorium in place, generates US$31 million (UK£20 million) per year. The size and rapid growth of the whale-watching industry have led to complex and continuing debates with the whaling industry about the best use of whales as a natural resource.

In myth, literature and art
Because whales are marine creatures that live in the depths or near the poles, humans knew very little about them for most of history; many feared or revered them. The Norse and various Arctic peoples revered the whale, which played an important part in their lives. In Inuit creation myths, when 'Big Raven', a deity in human form, found a stranded whale, he was told by the Great Spirit where to find special mushrooms that would give him the strength to drag the whale back to the sea and thus, return order to the world. In an Icelandic legend, a man threw a stone at a fin whale and hit the blowhole, causing the whale to burst. The man was told not to go to sea for twenty years, but during the nineteenth year he went fishing and a whale came and killed him. Whales played a major part in shaping the art forms of many coastal civilizations, such as the Norse, with some works dating to the Stone Age. Petroglyphs on a cliff face at Bangudae, South Korea, show 300 depictions of various animals, a third of which are whales. Some are detailed enough to show throat pleats, typical of rorquals. These petroglyphs suggest that the people who made them, around 7,000 to 3,500 B.C.E., depended heavily on whales. In Vietnam and Ghana, among other places, whales are regarded as divine. They are so respected in these cultures that people occasionally hold funerals for beached whales, a practice that in Vietnam traces back to the country's ancient sea-based Austro-Asiatic culture. The god of the seas, according to Chinese folklore, was a large whale with human limbs. Whales have also played a role in sacred texts such as the Bible. It mentions whales in Genesis 1:21, Job 7:12, and Ezekiel 32:2. The "leviathan" described at length in Job 41:1-34 is generally understood to refer to a whale. The "sea monsters" in Lamentations 4:3 have been taken by some to refer to marine mammals, in particular whales, although most modern versions use the word "jackals" instead. The story of Jonah being swallowed by a great fish is told both in the Qur'an and in the Bible. A medieval column capital sculpture depicting this was made in the 12th century in the abbey church in Mozac, France. The Old Testament contains the Book of Jonah, and in the New Testament Jesus mentions this story in Matthew 12:40. Alessandro Farnese (in 1585) and François, Duke of Anjou (in 1582), were each greeted on their ceremonial entry into the port city of Antwerp by floats including "Neptune and the Whale", an indication of the city's dependence on the sea for its wealth. Whales continue to be prevalent in modern literature. For example, Herman Melville's Moby Dick features a "great white whale" as the main antagonist for Ahab, who is eventually killed by it.
The whale is an albino sperm whale, considered by Melville to be the largest type of whale, and is partly based on the historically attested bull whale Mocha Dick. Rudyard Kipling's Just So Stories includes the story of "How the Whale Got His Throat". Niki Caro's film Whale Rider has a Māori girl ride a whale on her journey to prove herself a suitable heir to the chieftainship. Walt Disney's film Pinocchio features a giant whale named Monstro as the final antagonist. Alan Hovhaness' orchestral work And God Created Great Whales incorporates the recorded sounds of humpback and bowhead whales. Léo Ferré's song "Il n'y a plus rien" is an example of biomusic that begins and ends with recorded whale songs mixed with a symphonic orchestra and his voice.

Belugas were the first whales to be kept in captivity. Other species were too rare, too shy, or too big. The first beluga was shown at Barnum's Museum in New York City in 1861. For most of the 20th century, Canada was the predominant source of wild belugas. They were taken from the St. Lawrence River estuary until the late 1960s, after which they were predominantly taken from the Churchill River estuary until capture was banned in 1992. Russia has become the largest supplier since captures were banned in Canada. Belugas are caught in the Amur River delta and along Russia's eastern coast, and are then either transported domestically to aquariums or dolphinariums in Moscow, St. Petersburg, and Sochi, or exported to other countries, such as Canada. Most captive belugas are caught in the wild, since captive-breeding programs are not very successful. As of 2006, 30 belugas were in Canada and 28 in the United States, and 42 deaths in captivity had been reported up to that time. A single specimen can reportedly fetch up to US$100,000 (UK£64,160) on the market. The beluga's popularity is due to its unique colour and its facial expressions. The latter is possible because, while most cetacean "smiles" are fixed, the extra movement afforded by the beluga's unfused cervical vertebrae allows a greater range of apparent expression. Between 1960 and 1992, the United States Navy carried out a program that included the study of marine mammals' abilities with sonar, with the objective of improving the detection of underwater objects. Dolphins were the first animals used in the program; belugas were used in large numbers from 1975 on. The program also trained the animals to carry equipment and materials to divers working underwater, to hold cameras in their mouths to locate lost objects, to survey ships and submarines, and to carry out underwater monitoring. A similar program was used by the Russian Navy during the Cold War, in which belugas were also trained for antimining operations in the Arctic. Aquariums have tried housing other species of whales in captivity. The success of belugas turned attention to maintaining their relative, the narwhal, in captivity. However, in repeated attempts in the 1960s and 1970s, all narwhals kept in captivity died within months. A breeding pair of pygmy right whales were retained in an enclosed area (with nets); they were eventually released in South Africa. Gigi, a gray whale calf, was kept at SeaWorld San Diego. Gigi was an orphaned calf that beached itself and was transported two miles to SeaWorld. The 680-kilogram (1,500 lb) calf was a popular attraction and behaved normally, despite being separated from its mother.
A year later, the 8,164.7-kilogram (18,000 lb) whale had grown too big to keep in captivity and was released; it was the first of two baleen whales to be kept in captivity, the other being another gray whale calf named JJ.

Over the last few hundred years of human history, sailors and whalers have reported seeing whales they could not identify. Some of the best-known of these purported whales are Giglioli's whale, the rhinoceros dolphin, Trunko and the Alula whale. Giglioli's whale is a purported species of baleen whale observed by Enrico Hillyer Giglioli. The rhinoceros dolphin (Delphinus rhinoceros or Cetodipteros rhinoceros) is a cryptid dolphin-like animal said, much like Giglioli's whale, to have two dorsal fins, one of them on the head (hence the name "rhinoceros dolphin"); it was allegedly sighted off the coasts of the Sandwich Islands and New South Wales by Jean René Constant Quoy and Joseph Gaimard. Trunko is the nickname for a whale-like creature reportedly sighted in Margate, South Africa in 1924. The high-finned sperm whale, or Physeter tursio, is a supposed variant or relative of the known sperm whale, Physeter macrocephalus, said to live in the seas around the Shetland Islands, the Southern Ocean, and Nova Scotia. The Alula whale, or the Alula Killer, is a cryptid that resembles a sepia brown killer whale with a well-rounded forehead and white, star-like scars on the body. The dorsal fin, supposedly 60 centimetres (24 in) high, is prominent and often protrudes well above the surface of the water.

- Klinowska, Margaret; Cooke, Justin (1991). Dolphins, Porpoises, and Whales of the World: the IUCN Red Data Book (PDF). Columbia University Press, NY: IUCN Publications. ISBN 978-2-88032-936-5. - "Scientists find missing link between the whale and its closest relative, the hippo". Phys.org. 25 January 2005. Retrieved 6 May 2010. - Gatesy, J. (1997). "More DNA support for a Cetacea/Hippopotamidae clade: the blood-clotting protein gene gamma-fibrinogen" (PDF). Molecular Biology and Evolution 14 (5): 537–543. doi:10.1093/oxfordjournals.molbev.a025790. PMID 9159931. - Johnson, James H.; Wolman, Allen A. (1984). "The Humpback Whale" (PDF). Marine Fisheries Review 46 (4): 30–37. - Cozzi, Bruno; Mazzario, Sandro; Podestà, Michela; Zotti, Alessandro (2009). "Diving Adaptations of the Cetacean Skeleton" (PDF). Open Zoology Journal 2 (1): 34–42. doi:10.2174/1874336600902010024. - Goldbogen, Jeremy A. (2010). "The Ultimate Mouthful: Lunge Feeding in Rorqual Whales". American Scientist 98 (2): 124–131. doi:10.1511/2010.83.124. - Froias, Gustin (2012). "Balaenidae". New Bedford Whaling Museum. Retrieved 29 August 2015. - Jefferson, T.A.; Leatherwood, S.; Webber, M.A. "Gray whale (Family Eschrichtiidae)". Marine Species Identification Portal. Retrieved 29 August 2015. - Thomas, Jeanette A.; Kastelein, Ronald A. (1990). Sensory Abilities of Cetaceans: Laboratory and Field Evidence 196. New York: Springer Science & Business Media. doi:10.1007/978-1-4899-0858-2. ISBN 978-1-4899-0860-5. - Leatherwood, S.; Prematunga, W.P.; Girton, P.; McBrearty, D.; Ilangakoon, A.; McDonald, D (1991). Records of 'blackfish' (killer, false killer, pilot, pygmy killer, and melon-headed whales) in the Indian Ocean Sanctuary, 1772-1986 in Cetaceans and cetacean research in the Indian Ocean Sanctuary. UNEP Marine Mammal Technical Report. pp. 33–65. ASIN B00KX9I8Y8. - Jefferson, T.A.; Leatherwood, S.; Webber, M.A. "Narwhal and White Whale (Family Monodontidae)". Marine Species Identification Portal.
Retrieved 29 August 2015. - Jefferson, T.A.; Leatherwood, S.; Webber, M.A. "Sperm Whale (Family Physeteridae)". Marine Species Identification Portal. Retrieved 29 August 2015. - Jefferson, T.A.; Leatherwood, S.; Webber, M.A. "Beaked Whales (Family Ziphiidae)". Marine Species Identification Portal. Retrieved 29 August 2015. - "Going Aquatic: Cetacean Evolution". PBS Nature. 21 March 2012. Retrieved 29 August 2015. - Houben, A. J. P.; Bijl, P. K.; Pross, J.; Bohaty, S. M.; Passchier, S.; Stickley, C. E.; Rohl, U.; Sugisaki, S.; Tauxe, L.; van de Flierdt, T.; Olney, M.; Sangiorgi, F.; Sluijs, A.; Escutia, C.; Brinkhuis, H. (2013). "Reorganization of Southern Ocean Plankton Ecosystem at the Onset of Antarctic Glaciation". Science 340 (6130): 341–344. Bibcode:2013Sci...340..341H. doi:10.1126/science.1223646. PMID 23599491. - Steeman, M. E.; Hebsgaard, M. B.; Fordyce, R. E.; Ho, S. Y. W.; Rabosky, D. L.; Nielsen, R.; Rahbek, C.; Glenner, H.; Sorensen, M. V.; Willerslev, E. (2009). "Radiation of Extant Cetaceans Driven by Restructuring of the Oceans". Systematic Biology 58 (6): 573–585. doi:10.1093/sysbio/syp060. PMC 2777972. PMID 20525610. - Northeastern Ohio Universities Colleges of Medicine and Pharmacy. "Whales Descended From Tiny Deer-like Ancestors". ScienceDaily. Retrieved 21 December 2007. - Dawkins, Richard (2004). The Ancestor's Tale, A Pilgrimage to the Dawn of Life. Houghton Mifflin. ISBN 0-618-00583-8. - "Introduction to Cetacea: Archaeocetes: The Oldest Whales". University of Berkeley. Retrieved 25 July 2015. - Thewissen, J. G. M.; Cooper, L. N.; Clementz, M. T.; Bajpai, S.; Tiwari, B. N. (2007). "Whales originated from aquatic artiodactyls in the Eocene epoch of India" (PDF). Nature 450 (7173): 1190–1194. Bibcode:2007Natur.450.1190T. doi:10.1038/nature06343. PMID 18097400. - Fahlke, Julia M.; Gingerich, Philip D.; Welsh, Robert C.; Wood, Aaron R. (2011). "Cranial asymmetry in Eocene archaeocete whales and the evolution of directional hearing in water". Proceedings of the National Academy of Sciences 108 (35): 14545–14548. doi:10.1073/pnas.1108927108. PMID 21873217. - "More DNA Support for a Cetacea/Hippopotamidae Clade: The Blood-Clotting Protein Gene y-Fibrinogen". BBC News. 8 May 2002. Retrieved 20 August 2006. - "New Dawn". Walking with Prehistoric Beasts. 2002. Discovery Channel. - Rose, Kenneth D. (2001). "The Ancestry of Whales" (PDF). Science 239: 2216–2217. - Bebej, R. M.; ul-Haq, M.; Zalmout, I. S.; Gingerich, P. D. (June 2012). "Morphology and Function of the Vertebral Column in Remingtonocetus domandaensis (Mammalia, cetacea) from the Middle Eocene Domanda Formation of Pakistan". Journal of Mammalian Evolution 19 (2): 77–104. doi:10.1007/S10914-011-9184-8. - Reidenberg, Joy S. (2007). "Anatomical adaptations of aquatic mammals". The Anatomical Record 290 (6): 507–513. doi:10.1002/ar.20541. - Gatesy, John (3 February 1997). "Whales' closest relative" (PDF). Molecular Biology and Evolution 14 (5): 537–543. Retrieved 29 August 2015. - "The evolution of whales". University of Berkeley. Retrieved 29 August 2015. - Boisserie, Jean-Renaud; Lihoreau, Fabrice; Brunet, Michel (2005). "The position of Hippopotamidae within Cetartiodactyla". Proceedings of the National Academy of Sciences 102 (5): 1537–1541. doi:10.1073/pnas.0409518102. PMID 15677331. - Ralls, Katherine; Mesnick, Sarah. "Sexual Dimorphism". Encyclopedia of Marine Mammals (PDF) (2nd ed.). San Diego: Academic Press. pp. 1005–1011. ISBN 978-0-08-091993-5. - "Baleen". NOAA Fisheries. 
United States Department of Commerce. Retrieved 29 August 2015. - Scholander, Per Fredrik (1940). "Experimental investigations on the respiratory function in diving mammals and birds". Hvalraadets Skrifter 22: 1–131. - Stevens, C. Edward; Hume, Ian D. (1995). Comparative Physiology of the Vertebrate Digestive System. Cambridge University Press. p. 317. ISBN 978-0-521-44418-7. - Norena, S. R.; Williams, T. M. (2000). "Body size and skeletal muscle myoglobin of cetaceans: adaptations for maximizing dive duration". Comparative Biochemistry and Physiology A-molecular & Integrative Physiology 126 (2): 181–191. doi:10.1016/S1095-6433(00)00182-3. PMID 10936758. - Cranford, T.W.; Krysl, P.; Hildebrand, J.A. (2008). "Acoustic pathways revealed: simulated sound transmission and reception in Cuvier's beaked whale (Ziphius cavirostris)". Bioinspiration & Biomimetics 3: 016001. doi:10.1088/1748-3182/3/1/016001. PMID 18364560. - Nummela, Sirpa; Thewissen, J.G.M; Bajpai, Sunil; Hussain, Taseer; Kumar, Kishor (2007). "Sound transmission in archaic and modern whales: Anatomical adaptations for underwater hearing". The Anatomical Record 290 (6): 716–733. doi:10.1002/ar.20528. PMID 17516434. - Thewissen, J. G. M.; Perrin, William R.; Wirsig, Bernd (2002). "Hearing". Encyclopedia of Marine Mammals. San Diego: Academic Press. pp. 570–572. ISBN 978-0-12-551340-1. - Ketten, Darlene R. (1992). "The Marine Mammal Ear: Specializations for Aquatic Audition and Echolocation". In Webster, Douglas B.; Fay, Richard R.; Popper, Arthur N. The Evolutionary Biology of Hearing (PDF). Springer–Verlag. pp. 717–750. doi:10.1007/978-1-4612-2784-7_44. ISBN 978-1-4612-7668-5. - Mass, Alla M.; Supin, Alexander, Y. A. (21 May 2007). "Adaptive features of aquatic mammals' eyes". Anatomical Record 290 (6): 701–715. doi:10.1002/ar.20529. - "dBm dBW Watts Conversion Table - Radio-Electronics.Com". Retrieved 29 August 2015. - Collins, Nick (22 October 2012). "Whale learns to mimic human speech". The Daily Telegraph. Retrieved 22 October 2012. - Janet Mann; Richard C. Connor; Peter L. Tyack; et al., eds. (2000). Cetacean Societies: Field Studies of Dolphins and Whales. University of Chicago. p. 9. ISBN 0-226-50341-0. Retrieved 30 August 2015. - Siebert, Charles (8 July 2009). "Watching Whales Watching Us". New York Times Magazine. Retrieved 29 August 2015. - Watson, K.K.; Jones, T. K.; Allman, J. M. (2006). "Dendritic architecture of the Von Economo neurons". Neuroscience 141 (3): 1107–1112. doi:10.1016/j.neuroscience.2006.04.084. PMID 16797136. - Hof, Patrick R.; Van Der Gucht, Estel (2007). "Structure of the cerebral cortex of the humpback whale, Megaptera novaeangliae (Cetacea, Mysticeti, Balaenopteridae)". The Anatomical Record 290 (1): 1–31. doi:10.1002/ar.20407. PMID 17441195. - "Sperm Whales brain size". NOAA Fisheries – Office of Protected Resources. Retrieved 9 August 2015. - Fields, R. Douglas. "Are whales smarter than we are?". Scientific American. Retrieved 9 August 2015. - Wiley, David; et al. (2011). "Underwater components of humpback whale bubble-net feeding behaviour". Behaviour 148 (5): 575–602. doi:10.1163/000579511X570893. - Leighton, Tim; Finfer, Dan; Grover, Ed; White, Paul (2007). "An acoustical hypothesis for the spiral bubble nets of humpback whales, and the implications for whale feeding" (PDF). Acoustics Bulletin 32 (1): 17–21. - Charles Q. Choi (30 October 2006). "Elephant Self-Awareness Mirrors Humans". Live Science. Retrieved 29 August 2015. - Derr, Mark. "Mirror test". New York Times. Retrieved 3 August 2015. 
- "Milk". Modern Marvels. Season 14. 2008-01-07. The History Channel. - Johnson, James H.; Wolman, Allen A. "The Humpback Whale, Megaptera novaeangliae" (PDF). Marine Fisheries Review. Retrieved 29 August 2015. - Zerbini, Alexandre N.; et al. (11 May 2006). "Satellite-monitored movements of humpback whales Megaptera novaeangliae in the Southwest Atlantic Ocean" (PDF). Marine Ecology Progress Series 313: 295–304. - Sekiguchi, Yuske; Arai, Kazutoshi; Kohshima, Shiro (21 June 2006). "Sleep behaviour". Nature 441. doi:10.1038/nature04898. - Miller, P. J. O.; Aoki, K.; Rendell, L. E.; Amano, M. (2008). "Stereotypical resting behavior of the sperm whale". Current Biology 18 (1): R21–R23. doi:10.1016/j.cub.2007.11.003. PMID 18177706. - NOAA Fisheries. "Gray Whale - Office of Protected Resources". noaa.gov. Retrieved 29 August 2015. - Nemoto, T.; Okiyama, M.; Iwasaki, N.; Kikuchi, T. "Squid as Predators on Krill (Euphausia superba) and Prey for Sperm Whales in the Southern Ocean". In Dietrich Sahrhage. Antarctic Ocean and Resources Variability. Springer Berlin Heidelberg. pp. 292–296. doi:10.1007/978-3-642-73724-4_25. ISBN 978-3-642-73726-8. - Lydersen, Christian; Weslawski, Jan Marcin; Øritsland, Nils Are (1991). "Stomach content analysis of minke whales Balaenoptera acutorostrata from the Lofoten and Vesterålen areas, Norway". Ecography 1 (3): 219–222. Retrieved 29 August 2015. - "Mysticetes hunt in groups". Defenders of Wildlife. Retrieved July 24, 2015. - Riedman, M. (1991). The Pinnipeds: Seals, Sea Lions, and Walruses. University of California Press. p. 168. ISBN 0-520-06498-4. - Morrel, Virginia (30 January 2012). "Killer Whale Menu Finally Revealed". Science AAAS. Retrieved 29 August 2015. - Smith, Thomas G.; Sjare, Becky (1990). "Predation of Belugas and Narwhals by Polar Bears in Nearshore Areas of the Canadian High Arctic" (PDF). Arctic 43 (2): 99–102. Retrieved 29 August 2015. - Roman, J.; McCarthy, J. J. (October 2010). "The Whale Pump: Marine Mammals Enhance Primary Productivity in a Coastal Basin". PLoS ONE 5 (10): e13255. doi:10.1371/journal.pone.0013255. - "Whale poop pumps up ocean health". ScienceDaily. 12 October 2010. Retrieved 18 November 2011. - Roman, J.; McCarthy, J. J. (2010). Roopnarine, Peter, ed. "The Whale Pump: Marine Mammals Enhance Primary Productivity in a Coastal Basin". PLoS ONE 5 (10): e13255. doi:10.1371/journal.pone.0013255. Retrieved 23 September 2015. - "Whale poo important for ocean ecosystems". Australian Geographic. 26 May 2014. Retrieved 18 November 2014. - Roman, Joe; Estes, James A.; Morissette, Lyne; Smith, Craig; Costa, Daniel; McCarthy, James; Nation, J.B.; Nicol, Stephen; Pershing, Andrew; Smetacek, Victor (2014). "Whales as marine ecosystem engineers". Frontiers in Ecology and the Environment 12 (7): 377–385. doi:10.1890/130220. - Smith, Craig R.; Baco, Amy R. (2003). "Ecology of Whale Falls at the Deep-Sea Floor" (PDF). Oceanography and Marine Biology: an Annual Review 41: 311–354. - Fujiwara, Yoshihiro; et al. (16 February 2007). "Three-year investigations into sperm whale-fall ecosystems in Japan". Marine Ecology 28 (1): 219–230. - "Rock art hints at whaling origins". Newsgroup: BBC. 20 April 2004. Retrieved 2 September 2015. Stone Age people may have started hunting whales as early as 6,000 BC, new evidence from South Korea suggests. - Marrero, Meghan E.; Thornton, Stuart (1 November 2011). "Big Fish: A Brief History of Whaling". National Geographic. Retrieved 2 September 2015. - Ford, Catherine (July 2015). 
"A Savage History: Whaling in the South Pacific and Southern Oceans". The Monthly: Australian politics, societies, and cultures. - Basque whaling in Labrador in the 16th century. 1994. pp. 260–286. - "Whale products". New Bedford Whaling Museum. Retrieved 29 August 2015. - Stonehouse, Bernard (5 October 2007). "British Arctic whaling: an overview". University of Hull. Retrieved 4 September 2015. - Tonnessen, J.N.; Johnsen, A.O (1982). The History of Modern Whaling. C. Hurst. ISBN 0-905838-23-8. - "Timeline: The History of Whaling in America". PBS. - "Commercial Whaling: Good Whale Hunting". The Economist. 4 March 2012. Retrieved 1 September 2015. - "Which countries are still whaling". International Fund for Animal Welfare. Retrieved 29 August 2015. - "Aboriginal Subsistence whaling". IWC. Retrieved 29 August 2015. - Morseth, C. Michele (1997). "Twentieth-Century Changes in Beluga Whale Hunting and Butchering by the Kaηiġmiut of Buckland, Alaska". Arctic 50 (3): 241. - NOAA Fisheries – Office of Protected Resources. "The Tuna-Dolphin Issue". noaa.gov. Retrieved 29 August 2015. - Metcalfe, C. (23 February 2012). "Persistent organic pollutants in the marine food chain". United Nations University. Retrieved 16 August 2013. - Tsai, Wen-Chu. "Whales and trash-bags". Taipei Times. Retrieved 5 August 2015. - Rommel, S. A.; et al. (2006). "Elements of beaked whale anatomy and diving physiology and some hypothetical causes of sonar-related stranding" (PDF). Journal of Cetacean Resource Management 7 (3): 189–209. Retrieved 29 August 2015. - Schrope, Mark. (2003). "Whale deaths caused by US Navy's sonar". Nature 415 (6868): 106. doi:10.1038/415106a. - Kirby, Alex (8 October 2003). "Sonar may cause Whale deaths". BBC News. Retrieved 14 September 2006. - Piantadosi, C. A.; Thalmann, E. D. (2004). "Pathology: whales, sonar and decompression sickness". Nature 428 (6894): 716–718. doi:10.1038/nature02527a. PMID 15085881. - unknown. "Key Documents". International Whaling Commission. Retrieved 29 August 2015. - "Catch limits". International Whaling Commission. Retrieved 6 August 2015. - International Whaling Commission. "Catch limits and Catches taken". International Whaling Commission. - "North Atlantic Right Whale (Eubalaena glacialis) Source Document for the Critical Habitat Designation: A review of information pertaining to the definition of "critical habitat"" (PDF). NOAA Fisheries. July 2014. Retrieved 23 September 2015. - MacKenzie, Debora (4 June 1994). "Whales win southern sanctuary". New Scientist. Retrieved 12 September 2015. - "Whale Sanctuaries". International Whaling Commission. Retrieved 4 September 2015. - Mead, J.G.; Brownell, R. L., Jr. (2005). "Order Cetacea". Mammal Species of the World: A Taxonomic and Geographic Reference. Johns Hopkins University Press. pp. 723–743. ISBN 978-0-8018-8221-0. - Laidre, K. L.; Stirling, I.; Lowry, L. F.; Wiig, Ø.; Heide-Jørgensen, M. P.; Ferguson, S.H. (2008). "Quantifying the sensitivity of Arctic marine mammals to climate-induced habitat change" (PDF). Ecological Applications 18 (2 Suppl.): S97–S125. doi:10.1890/06-0546.1. PMID 18494365. Retrieved 29 August 2015. - O'Connor, Simon (2009). "Whale Watching Worldwide" (PDF). International Fund for Animal Welfare. pp. 23–24. Retrieved 26 December 2014. - National Oceanic and Atmospheric Administration, NOAA (January 2004). "Marine Wildlife Viewing Guidelines" (PDF). Retrieved 6 August 2010. - Björgvinsson, Ásbjörn; Lugmayr, Helmut; Camm, Martin; Skaptason, Jón (2002). Whale watching in Iceland. 
ISBN 9979-761-55-5. - O'Connor S.; Campbell R.; Cortez H.; Knowles T. (2009). "Whale Watching Worldwide: tourism numbers, expenditures and expanding economic benefits" (PDF). International Fund for Animal Welfare. Retrieved 29 August 2015. - "CWA travels to The Petroglyphs of Bangudae". Current World Archaeology (63). 24 January 2014. Retrieved 31 August 2015. - Cressey, Jason (1998). "Making a Splash in the Pacific Ocean: Dolphin and Whale Myths and Legends of Oceania" (PDF). Rapa Nui Journal 12: 75–84. Retrieved 5 August 2015. - "Thousand gather for whale's funeral in Vietnam". The Independent. Associated Press. 23 February 2010. Retrieved 15 April 2011. - "Whale funeral draws 1000 mourners in Vietnam". Sydney Morning Herald. AFP. 14 April 2003. Retrieved 15 April 2011. - Viegas, Jennifer (23 February 2010). "Thousands Mourn Dead Whale in Vietnam". Discovery News. Retrieved 15 April 2011. - "Funeral for a Whale held at Apam". Ghana News Agency. GhanaWeb. 10 August 2005. Retrieved 15 April 2011. - Lamentations 4:3. Retrieved 29 August 2015. - Quran 37:139–148 - "Jonah 1-4 New International Version". Bible Gateway. Retrieved 30 December 2013. - Mack, John (2013). The Sea: a cultural history. Reaktion Books. pp. 205–206. ISBN 978-1-78023-184-6. - Hovhannes, Alan (1970). "And God Created Great Whales". Retrieved 10 October 2007. - "The Whales, New York Tribune, August 9, 1861". New York Tribune. 9 August 1861. Retrieved 5 December 2011. - "Beluga Whales in Captivity: Hunted, Poisoned, Unprotected" (PDF). Special Report on Captivity 2006. Canadian Marine Environment Protection Society. 2006. Retrieved 26 December 2014. - "Beluga (Delphinapterus leucas) Facts – Distribution – In the Zoo". World Association of Zoos and Aquariums. Retrieved 5 December 2011. - Bonner, Nigel. Whales. Facts on File. pp. 17, 23–24. ISBN 0-7137-0887-5. - "Navy Whales". PBS. Retrieved 29 August 2015. - Beland, Pierre (1996). Beluga: A Farewell to Whales (1 ed.). The Lyons Press. p. 224. ISBN 1-55821-398-8. - Eberhart, George M. (2002). Mysterious Creatures: A Guide to Cryptozoology (PDF). ABC-CLIO, Inc. ISBN 1-57607-283-5. Retrieved 29 August 2015. - Shuker, Karl P N (1996). The Unexplained. Carlton. p. 95. ISBN 1-85868-186-3.
About This Chapter
The Fundamentals of Sociolinguistics - Chapter Summary
This self-paced chapter simplifies the process of studying the fundamentals of sociolinguistics. Enjoy access to a variety of engaging lessons that can improve your knowledge of concepts that include linguistic diversity, pragmatics, code switching in the classroom, and social and linguistic variation. Once you've completed this chapter, you will be ready to: - Explain how social factors can impact how we learn a second language - List stages of acquisition as individuals develop the ability to speak English as a second language - Differentiate between immersion, bilingual education and multicultural education - Detail ways children attain pragmatic knowledge about language - Describe differences between informal and formal language in language acquisition - Discuss how children with dialectal differences use and develop English - Share ways teachers instruct ESOL students about variations in the English language - List ways teachers can model oral and written communication skills in the classroom Feel free to tailor your review of this chapter to your personal schedule and study needs. Access the lessons anytime using your computer or mobile device, navigate them in any order, and visit them as often as you'd like. Check your knowledge of the fundamentals of sociolinguistics by taking short quizzes and a practice exam. If you have questions about specific lesson topics, feel free to submit them to our experts using the Dashboard. 1. Linguistic Diversity: Definition & Overview Language can be considered a particularly human invention. People need language to communicate with one another in order to survive. This lesson explores why there are so many languages and how these languages may be compared. 2. Understanding Sociolinguistics: Social and Linguistic Variation This lesson will seek to explain the study of sociolinguistics and the concept of ethnography. It will highlight the variations that region, class, relationship, and gender cause in language. 3. Sociolinguistic Concepts & Second Language Acquisition This lesson looks at the connection between social factors and how we learn a second language. You'll also learn several key concepts from the field of sociolinguistics, including register, dialect, and style. 4. English as a Second Language in the Classroom: Acquisition & Development Learning a second language can be very difficult and can put students behind on their regular education. In this lesson, explore the stages of language acquisition and discover tricks to help students develop English as a second language. 5. Bilingual Education, Immersion & Multicultural Education Educators use many approaches for second-language instruction. The approaches vary based on the individual needs of the learner, focusing on his or her current language abilities, background, and cultural experiences. This lesson will differentiate between the different types of second-language instruction, including immersion, bilingual education, and multicultural education. 6. Code Switching in the Classroom In this lesson, we will examine code switching: what it is, how we use it, and how we can practice using it in the classroom in order to reach out to students and speak their language. 7. What is Pragmatics? - Definition & Examples You use pragmatics on an everyday basis, but do you know how? Watch this video lesson to not only learn the meaning of pragmatics but also how you use it every day. 8.
How Children Acquire Pragmatic Knowledge about Language After watching this video lesson, you will know how children learn to speak. See how children first associate words with objects, progress to sentences and abstract thoughts, and eventually learn the rules of grammar. 9. Informal vs. Formal Language in Language Acquisition English language learners learn formally, through instruction, or informally. Which is best? This lesson identifies the differences between the two and discusses the benefits of each to help you understand best practices for language development. 10. How Children With Dialectal Differences Develop & Use English In this lesson, you'll learn about the different dialects of American English and how children of differing backgrounds develop and use English according to the rules of their dialects. 11. Teaching ESOL Students About English Language Variation When you teach English to speakers of other languages it is often necessary to refer not only to formal expressions but also to slang, jargon, and other forms of English your students are likely to come across in daily life. This lesson is about how to approach those language variations. 12. Modeling Oral & Written Communication Skills in the Classroom In this lesson, we'll discuss relevant techniques for modeling appropriate oral and written communication skills in the classroom. We'll also explore what modeling means and how to use this approach to enhance student performance and participation.
Olfactory receptors (ORs), also known as odorant receptors, are expressed in the cell membranes of olfactory receptor neurons and are responsible for the detection of odorants (i.e., compounds that have an odor) which give rise to the sense of smell. Activated olfactory receptors trigger nerve impulses which transmit information about odor to the brain. These receptors are members of the class A rhodopsin-like family of G protein-coupled receptors (GPCRs). The olfactory receptors form a multigene family consisting of around 800 genes in humans and 1400 genes in mice. In vertebrates, the olfactory receptors are located in both the cilia and synapses of the olfactory sensory neurons and in the epithelium of the human airway. In insects, olfactory receptors are located on the antennae and other chemosensory organs. Sperm cells also express odor receptors, which are thought to be involved in chemotaxis to find the egg cell. Rather than binding specific ligands, olfactory receptors display affinity for a range of odor molecules, and conversely a single odorant molecule may bind to a number of olfactory receptors with varying affinities, which depend on physico-chemical properties of the molecules, such as their molecular volumes. Once the odorant has bound to the odor receptor, the receptor undergoes structural changes and binds and activates the olfactory-type G protein on the inside of the olfactory receptor neuron. The G protein (G(olf) and/or G(s)) in turn activates the lyase adenylate cyclase, which converts ATP into cyclic AMP (cAMP). The cAMP opens cyclic nucleotide-gated ion channels which allow calcium and sodium ions to enter the cell, depolarizing the olfactory receptor neuron and beginning an action potential which carries the information to the brain. The primary sequences of thousands of olfactory receptors are known from the genomes of more than a dozen organisms: they are seven-helix transmembrane proteins, but there are (as of May 2016) no known structures of any OR. Their sequences exhibit typical class A GPCR motifs, useful for building their structures with molecular modeling. Golebiowski, Ma and Matsunami showed that the mechanism of ligand recognition, although similar to that of other non-olfactory class A GPCRs, involves residues specific to olfactory receptors, notably in the sixth helix. There is a highly conserved sequence in roughly three quarters of all ORs that is a tripodal metal ion binding site, and Suslick has proposed that the ORs are in fact metalloproteins (most likely with zinc, copper and possibly manganese ions) that serve as a Lewis acid site for the binding of many odorant molecules. Crabtree, in 1978, had previously suggested that Cu(I) is "the most likely candidate for a metallo-receptor site in olfaction" for strong-smelling volatiles which are also good metal-coordinating ligands, such as thiols. Zhuang, Matsunami and Block, in 2012, confirmed the Crabtree/Suslick proposal for the specific case of a mouse OR, MOR244-3, showing that copper is essential for the detection of certain thiols and other sulfur-containing compounds. By using a chemical that binds to copper in the mouse nose, so that copper was not available to the receptors, the authors showed that the mice could not detect the thiols. However, these authors also found that MOR244-3 lacks the specific metal ion binding site suggested by Suslick, instead showing a different motif in the EC2 domain.
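The transduction chain described above (odorant binding, G(olf) activation, cAMP production, channel opening, depolarization) can be caricatured in a few lines of code. The sketch below is purely illustrative: the binding isotherm, the Hill function, and every numeric parameter are invented for the example and are not taken from the article.

```python
# Illustrative toy model only -- not a physiological simulation. All parameter
# values (Hill coefficients, thresholds, scaling constants) are invented; the
# real cascade involves G(olf), adenylate cyclase, CNG channels and further
# amplification steps not captured here.

def bound_fraction(concentration, kd):
    """Fraction of receptors occupied, simple one-site binding isotherm."""
    return concentration / (concentration + kd)

def camp_level(occupancy, gain=10.0):
    """cAMP produced by adenylate cyclase, assumed proportional to occupancy."""
    return gain * occupancy

def open_channel_fraction(camp, k_half=2.0, hill=2.0):
    """Cyclic nucleotide-gated channels opened by cAMP (Hill-type activation)."""
    return camp**hill / (camp**hill + k_half**hill)

def fires_action_potential(odorant_concentration, kd, threshold=0.5):
    """Crude go/no-go: enough open channels -> depolarization -> spike."""
    occupancy = bound_fraction(odorant_concentration, kd)
    p_open = open_channel_fraction(camp_level(occupancy))
    return p_open > threshold

# A high-affinity receptor (low Kd) responds at a concentration where a
# low-affinity receptor for the same odorant stays silent.
print(fires_action_potential(1.0, kd=0.5))   # True
print(fires_action_potential(1.0, kd=50.0))  # False
```

The point of the toy model is only that the same odorant concentration produces different outputs across receptors of different affinity, which is what gives each odorant a receptor-activation pattern rather than a single dedicated detector.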
In a recent but highly controversial interpretation, it has also been speculated that olfactory receptors might actually sense various vibrational energy levels of a molecule, rather than structural motifs, via quantum coherence mechanisms. As evidence, it has been shown that flies can differentiate between two odor molecules which differ only in hydrogen isotope (a substitution that drastically changes the vibrational energy levels of the molecule). Not only could the flies distinguish between the deuterated and non-deuterated forms of an odorant, they could generalise the property of "deuteratedness" to other novel molecules. In addition, they generalised the learned avoidance behaviour to molecules which were not deuterated but did share a significant vibration stretch with the deuterated molecules, a fact which the differential physics of deuteration (below) has difficulty in accounting for. It should be noted, however, that deuteration changes the heats of adsorption and the boiling and freezing points of molecules (boiling points: 100.0 °C for H2O vs. 101.42 °C for D2O; melting points: 0.0 °C for H2O, 3.82 °C for D2O), the dissociation constant (9.71 × 10⁻¹⁵ for H2O vs. 1.95 × 10⁻¹⁵ for D2O, cf. heavy water) and the strength of hydrogen bonding. Such isotope effects are exceedingly common, and so it is well known that deuterium substitution will indeed change the binding constants of molecules to protein receptors. It has been claimed that human olfactory receptors are capable of distinguishing between deuterated and undeuterated isotopomers of cyclopentadecanone by vibrational energy level sensing. However, this claim has been challenged by another report showing that the human musk-recognizing receptor OR5AN1, which robustly responds to cyclopentadecanone and muscone, fails to distinguish isotopomers of these compounds in vitro. Furthermore, the mouse (methylthio)methanethiol-recognizing receptor, MOR244-3, as well as other selected human and mouse olfactory receptors, responded similarly to normal, deuterated, and carbon-13 isotopomers of their respective ligands, paralleling results found with the musk receptor OR5AN1. Hence it was concluded that the proposed vibration theory does not apply to the human musk receptor OR5AN1, the mouse thiol receptor MOR244-3, or the other olfactory receptors examined. In addition, the proposed electron transfer mechanism for sensing the vibrational frequencies of odorants could easily be suppressed by quantum effects of non-odorant molecular vibrational modes. Hence multiple lines of evidence argue against the vibration theory of smell. This latter study was criticized because it used "cells in a dish rather than within whole organisms" and because "expressing an olfactory receptor in human embryonic kidney cells doesn't adequately reconstitute the complex nature of olfaction...". In response, the authors of the second study state: "Embryonic kidney cells are not identical to the cells in the nose ... but if you are looking at receptors, it's the best system in the world." There are a large number of different odor receptors, with as many as 1,000 in the mammalian genome, representing approximately 3% of the genes in the genome. However, not all of these potential odor receptor genes are expressed and functional. According to an analysis of data derived from the Human Genome Project, humans have approximately 400 functional genes coding for olfactory receptors, and the remaining 600 candidates are pseudogenes.
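The size of the vibrational shift that deuteration produces can be estimated with the ordinary harmonic-oscillator relation, in which a bond's stretching frequency scales as the inverse square root of the reduced mass while the force constant is essentially unchanged by isotope substitution. The sketch below is a back-of-the-envelope illustration; the ~2900 cm⁻¹ figure for a typical C-H stretch is a standard textbook value, not a number from the article.

```python
# Rough estimate of how much deuteration shifts a bond's vibrational frequency,
# using nu ~ sqrt(k/mu) with the force constant k assumed unchanged.
from math import sqrt

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_ch = reduced_mass(12.0, 1.0)   # C-H, atomic mass units
mu_cd = reduced_mass(12.0, 2.0)   # C-D

shift = sqrt(mu_ch / mu_cd)       # ratio nu(C-D) / nu(C-H)
print(f"frequency ratio C-D / C-H ~ {shift:.2f}")                      # ~0.73
print(f"a ~2900 cm^-1 C-H stretch drops to ~{2900 * shift:.0f} cm^-1")  # ~2100 cm^-1
```

A shift of this magnitude is what the fly experiments exploit, and it is also why purely thermodynamic isotope effects (binding constants, volatility) are hard to disentangle from any putative vibrational sensing.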
The reason for the large number of different odor receptors is to provide a system for discriminating between as many different odors as possible. Even so, each odor receptor does not detect a single odor. Rather, each individual odor receptor is broadly tuned to be activated by a number of similar odorant structures. Analogous to the immune system, the diversity that exists within the olfactory receptor family allows molecules that have never been encountered before to be characterized. However, unlike the immune system, which generates diversity through in-situ recombination, every single olfactory receptor is translated from a specific gene; hence the large portion of the genome devoted to encoding OR genes. Furthermore, most odors activate more than one type of odor receptor. Since the number of combinations and permutations of olfactory receptors is very large, the olfactory receptor system is capable of detecting and distinguishing between a very large number of odorant molecules. Deorphanization of odor receptors can be completed using electrophysiological and imaging techniques to analyze the response profiles of single sensory neurons to odor repertoires. Such data open the way to the deciphering of the combinatorial code of the perception of smells. Such diversity of OR expression maximizes the capacity of olfaction. Both monoallelic OR expression in a single neuron and maximal diversity of OR expression in the neuron population are essential for the specificity and sensitivity of olfactory sensing. Thus, olfactory receptor activation is a dual-objective design problem. Using mathematical modeling and computer simulations, Tian et al. proposed an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop, and an enhancer competition step. This model not only recapitulates monoallelic OR expression but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression. A nomenclature system has been devised for the olfactory receptor family and is the basis for the official Human Genome Organisation (HUGO) symbols for the genes that encode these receptors. The names of individual olfactory receptor family members are in the format "ORnXm", where n is the number of the family, X is the subfamily letter, and m is the number of the individual member within that subfamily. For example, OR1A1 is the first isoform of subfamily A of olfactory receptor family 1. Members belonging to the same subfamily of olfactory receptors (>60% sequence identity) are likely to recognize structurally similar odorant molecules. Two major classes of olfactory receptors have been identified in humans: class I (fish-like receptors) and class II (tetrapod-specific receptors). The olfactory receptor gene family in vertebrates has been shown to evolve through genomic events such as gene duplication or gene conversion. Evidence of a role for tandem duplication is provided by the fact that many olfactory receptor genes belonging to the same phylogenetic clade are located in the same gene cluster. To this point, the organization of OR genomic clusters is well conserved between humans and mice, even though the functional OR count is vastly different between these two species. Such birth-and-death evolution has brought together segments from several OR genes to generate and degenerate odorant binding site configurations, creating new functional OR genes as well as pseudogenes. Compared to many other mammals, primates have a relatively small number of functional OR genes.
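A rough order-of-magnitude calculation makes the combinatorial argument above concrete. The sketch below only counts which receptor types are activated, ignoring response strength and overlap between receptors, and takes the figure of roughly 400 functional human OR genes from the text.

```python
# Order-of-magnitude illustration of combinatorial odor coding: treat an odor
# as "the set of receptor types it activates" and count the possible sets.
from math import comb

functional_receptors = 400  # approximate number of functional human OR genes

for k in (1, 2, 3, 4, 5):
    print(f"odors coded by exactly {k} receptor types: {comb(functional_receptors, k):,}")
# k = 3 alone already gives about 10.6 million distinct activation patterns,
# vastly more than the number of receptor types themselves.
```

This is only a counting argument, not a model of perception, but it shows why a few hundred broadly tuned receptors can in principle discriminate an enormous space of odorants.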
For instance, since divergence from their most recent common ancestor (MRCA), mice have gained a total of 623 new OR genes and lost 285, whereas humans have gained only 83 genes but lost 428. Mice have a total of 1035 protein-coding OR genes; humans have 387. The vision priority hypothesis states that the evolution of color vision in primates may have decreased primate reliance on olfaction, which explains the relaxation of selective pressure that accounts for the accumulation of olfactory receptor pseudogenes in primates. However, recent evidence has rendered the vision priority hypothesis obsolete, because it was based on misleading data and assumptions. The hypothesis assumed that functional OR genes can be correlated with the olfactory capability of a given animal. In this view, a decrease in the fraction of functional OR genes would cause a reduction in the sense of smell; species with a higher pseudogene count would also have a decreased olfactory ability. This assumption is flawed. Dogs, which are reputed to have a good sense of smell, do not have the largest number of functional OR genes. Additionally, pseudogenes may be functional; 67% of human OR pseudogenes are expressed in the main olfactory epithelium, where they possibly have regulatory roles in gene expression. More importantly, the vision priority hypothesis assumed a drastic loss of functional OR genes at the branch leading to the Old World monkeys (OWMs), but this conclusion was biased by low-resolution data from only 100 OR genes. High-resolution studies instead agree that primates have lost OR genes in every branch from the MRCA to humans, indicating that the degeneration of OR gene repertoires in primates cannot simply be explained by changing capabilities in vision. It has been shown that negative selection is still relaxed in modern human olfactory receptors, suggesting that no plateau of minimal function has yet been reached in modern humans and therefore that olfactory capability might still be decreasing. This is considered to provide a first clue to future human genetic evolution. In 2004, Linda B. Buck and Richard Axel won the Nobel Prize in Physiology or Medicine for their work on olfactory receptors. In 2006, it was shown that another class of odorant receptors – known as trace amine-associated receptors (TAARs) – exists for detecting volatile amines. Except for TAAR1, all functional TAARs in humans are expressed in the olfactory epithelium. The limited functional expression of olfactory receptors in heterologous systems, however, has greatly hampered attempts to deorphanize them (analyze the response profiles of single sensory neurons). This was first accomplished with the genetically engineered receptor OR-I7, which was used to characterize the "odor space" of a population of native aldehyde receptors.
The frontispiece of Sir Henry Billingsley's first English version of Euclid's Elements, 1570
Author: Euclid, and translators
Language: Ancient Greek, translations
Subject: Euclidean geometry, elementary number theory
Date: c. 300 BC
Pages: 13 books, or more in translation with scholia

Euclid's Elements (Ancient Greek: Στοιχεῖα Stoicheia) is a mathematical and geometric treatise consisting of 13 books written by the ancient Greek mathematician Euclid in Alexandria c. 300 BC. It is a collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. The thirteen books cover Euclidean geometry and the ancient Greek version of elementary number theory. The work also includes an algebraic system that has become known as geometric algebra, which is powerful enough to solve many algebraic problems, including the problem of finding the square root of a number. With the exception of Autolycus' On the Moving Sphere, the Elements is one of the oldest extant Greek mathematical treatises, and it is the oldest extant axiomatic deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science. According to Proclus, the term "element" was used to describe a theorem that is all-pervading and helps furnish proofs of many other theorems. In Greek, the word for 'element' is the same as the word for 'letter'. This suggests that theorems in the Elements should be seen as standing in the same relation to geometry as letters to language. Later commentators give a slightly different meaning to the term 'element', emphasizing how the propositions progress in small steps and build on previous propositions in a well-defined order. Euclid's Elements has been referred to as the most successful and influential textbook ever written. First set in type in Venice in 1482, it is one of the very earliest mathematical works to be printed after the invention of the printing press, and it was estimated by Carl Benjamin Boyer to be second only to the Bible in the number of editions published, with the total reaching well over one thousand. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through other school textbooks, did it cease to be considered something all educated people had read.

Basis in earlier work
Scholars believe that the Elements is largely a collection of theorems proven by other mathematicians, supplemented by some original work. Proclus (412 – 485 AD), a Greek mathematician who lived around seven centuries after Euclid, wrote in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors". Pythagoras (c. 570 – c. 495 BCE) was probably the source for most of books I and II, Hippocrates of Chios (c. 470 – c. 410 BCE, not the better known Hippocrates of Kos) for book III, and Eudoxus of Cnidus (c. 408 – c. 355 BC) for book V, while books IV, VI, XI, and XII probably came from other Pythagorean or Athenian mathematicians. The Elements may have been based on an earlier textbook by Hippocrates of Chios, who also may have originated the use of letters to refer to figures.
Transmission of the text
In the fourth century AD, Theon of Alexandria produced an edition of Euclid which was so widely used that it became the only surviving source until François Peyrard's 1808 discovery at the Vatican of a manuscript not derived from Theon's. This manuscript, the Heiberg manuscript, is from a Byzantine workshop c. 900 and is the basis of modern editions. Papyrus Oxyrhynchus 29 is a tiny fragment of an even older manuscript, but only contains the statement of one proposition. Although known to, for instance, Cicero, there is no extant record of the text having been translated into Latin prior to Boethius in the fifth or sixth century. The Arabs received the Elements from the Byzantines in approximately 760; this version was translated into Arabic under Harun al Rashid c. 800. The Byzantine scholar Arethas commissioned the copying of one of the extant Greek manuscripts of Euclid in the late ninth century. Although known in Byzantium, the Elements was lost to Western Europe until c. 1120, when the English monk Adelard of Bath translated it into Latin from an Arabic translation. The first printed edition appeared in 1482 (based on Campanus of Novara's 1260 edition), and since then it has been translated into many languages and published in about a thousand different editions. Theon's Greek edition was recovered in 1533. In 1570, John Dee provided a widely respected "Mathematical Preface", along with copious notes and supplementary material, to the first English edition by Henry Billingsley. Copies of the Greek text still exist, some of which can be found in the Vatican Library and the Bodleian Library in Oxford. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been made about the contents of the original text (copies of which are no longer available). Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses were conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text. Also of importance are the scholia, or annotations to the text. These additions, which were often distinguished from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or further study. The Elements is still considered a masterpiece in the application of logic to mathematics. In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work. Mathematicians and philosophers, such as Bertrand Russell, Alfred North Whitehead, and Baruch Spinoza, have attempted to create their own foundational "Elements" for their respective disciplines, by adopting the axiomatized deductive structures that Euclid's work introduced. The austere beauty of Euclidean geometry has been seen by many in western culture as a glimpse of an otherworldly system of perfection and certainty.
Abraham Lincoln kept a copy of Euclid in his saddlebag, and studied it late at night by lamplight; he related that he said to himself, "You never can make a lawyer if you do not understand what demonstrate means; and I left my situation in Springfield, went home to my father's house, and stayed there till I could give any proposition in the six books of Euclid at sight". Edna St. Vincent Millay wrote in her sonnet Euclid Alone Has Looked on Beauty Bare, "O blinding hour, O holy, terrible day, When first the shaft into his vision shone Of light anatomized!". Einstein recalled a copy of the Elements and a magnetic compass as two gifts that had a great influence on him as a boy, referring to the Elements as the "holy little geometry book". The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about 2,000 years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remain the cornerstone of mathematics.

Outline of Elements
Contents of the books
Books 1 through 4 deal with plane geometry: - Book 1 contains Euclid's 10 axioms (5 named postulates, including the parallel postulate, and 5 named axioms) and the basic propositions of geometry: the pons asinorum (proposition 5), the Pythagorean theorem (proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area). - Book 2 is commonly called the "book of geometric algebra" because most of the propositions can be seen as geometric interpretations of algebraic identities, such as a(b + c + ...) = ab + ac + ... or (2a + b)² + b² = 2(a² + (a + b)²). It also contains a method of finding the square root of a given number. - Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point, Thales' theorem. - Book 4 constructs the incircle and circumcircle of a triangle, and constructs regular polygons with 4, 5, 6, and 15 sides. - Book 5 is a treatise on proportions of magnitudes. Proposition 25 has as a special case the inequality of arithmetic and geometric means. - Book 6 applies proportions to geometry: similar figures. - Book 7 deals strictly with elementary number theory: divisibility, prime numbers, Euclid's algorithm for finding the greatest common divisor (a modern rendering is sketched in the short example below), least common multiple. Propositions 30 and 32 together are essentially equivalent to the fundamental theorem of arithmetic stating that every positive integer can be written as a product of primes in an essentially unique way, though Euclid would have had trouble stating it in this modern form as he did not use the product of more than 3 numbers. - Book 8 deals with proportions in number theory and geometric sequences. - Book 9 applies the results of the preceding two books and gives the infinitude of prime numbers (proposition 20), the sum of a geometric series (proposition 35), and the construction of even perfect numbers (proposition 36). - Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration.
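Book 7's procedure for the greatest common divisor, mentioned in the outline above, translates almost directly into modern code. The following is a minimal sketch of that idea, not a rendering of Euclid's own presentation (he phrased the procedure geometrically, in terms of repeatedly taking away the smaller magnitude from the larger, rather than computing remainders).

```python
# Euclid's algorithm (Elements, Book 7, Propositions 1-2) in modern form:
# repeatedly replace the pair (a, b) by (b, a mod b) until the remainder is 0;
# the last nonzero value is the greatest common divisor.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21, since 1071 = 2*462 + 147, 462 = 3*147 + 21, 147 = 7*21
```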
Books 11 through to 13 deal with spatial geometry: - Book 11 generalizes the results of Books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds. - Book 12 studies volumes of cones, pyramids, and cylinders in detail, and shows for example that the volume of a cone is a third of the volume of the corresponding cylinder. It concludes by showing the volume of a sphere is proportional to the cube of its radius by approximating it by a union of many pyramids. - Book 13 constructs the five regular Platonic solids inscribed in a sphere, calculates the ratio of their edges to the radius of the sphere, and proves that there are no further regular solids. Euclid's method and style of presentation As was common in ancient mathematical texts, when a proposition needed proof in several different cases, Euclid often proved only one of them (often the most difficult), leaving the others to the reader. Later editors such as Theon often interpolated their own proofs of these cases. Euclid's presentation was limited by the mathematical ideas and notations in common currency in his era, and this causes the treatment to seem awkward to the modern reader in some places. For example, there was no notion of an angle greater than two right angles, the number 1 was sometimes treated separately from other positive integers, and as multiplication was treated geometrically he did not use the product of more than 3 different numbers. The geometrical treatment of number theory may have been because the alternative would have been the extremely awkward Alexandrian system of numerals. The presentation of each result is given in a stylized form, which, although not invented by Euclid, is recognized as typically classical. It has six different parts: First is the enunciation which states the result in general terms (i.e. the statement of the proposition). Then the setting-out, which gives the figure and denotes particular geometrical objects by letters. Next comes the definition or specification which restates the enunciation in terms of the particular figure. Then the construction or machinery follows. It is here that the original figure is extended to forward the proof. Then, the proof itself follows. Finally, the conclusion connects the proof to the enunciation by stating the specific conclusions drawn in the proof, in the general terms of the enunciation. No indication is given of the method of reasoning that led to the result, although the Data does provide instruction about how to approach the types of problems encountered in the first four books of the Elements. Some scholars have tried to find fault in Euclid's use of figures in his proofs, accusing him of writing proofs that depended on the specific figures drawn rather than the general underlying logic, especially concerning Proposition II of Book I. However, Euclid's original proof of this proposition is general, valid, and does not depend on the figure used as an example to illustrate one given configuration. Euclid's list of axioms in the Elements was not exhaustive, but represented the principles that were the most important. His proofs often invoke axiomatic notions which were not originally presented in his list of axioms. Later editors have interpolated Euclid's implicit axiomatic assumptions in the list of formal axioms. For example, in the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points. 
Later, in the fourth construction, he used superposition (moving the triangles on top of each other) to prove that if two sides and their angles are equal then they are congruent; during these considerations he uses some properties of superposition, but these properties are not described explicitly in the treatise. If superposition is to be considered a valid method of geometric proof, all of geometry would be full of such proofs. For example, propositions I.1 – I.3 can be proved trivially by using superposition. Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."

It was not uncommon in ancient times to attribute to celebrated authors works that were not written by them. It is by these means that the apocryphal books XIV and XV of the Elements were sometimes included in the collection. The spurious Book XIV was probably written by Hypsicles on the basis of a treatise by Apollonius. The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, that ratio being the ratio of the edge of the cube to the edge of the icosahedron. The spurious Book XV was probably written, at least in part, by Isidore of Miletus. This book covers topics such as counting the number of edges and solid angles in the regular solids, and finding the measure of dihedral angles of faces that meet at an edge.

Printed editions and translations of the Elements include:
- 1460s, Regiomontanus (incomplete)
- 1482, Erhard Ratdolt (Venice), first printed edition
- 1533, editio princeps by Simon Grynäus
- 1557, by Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (only propositions, no full proofs, includes original Greek and the Latin translation)
- 1572, Commandinus Latin edition
- 1574, Christoph Clavius
- 1505, Bartolomeo Zamberti (Latin)
- 1543, Niccolò Tartaglia (Italian)
- 1557, Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (Greek to Latin)
- 1558, Johann Scheubel (German)
- 1562, Jacob Kündig (German)
- 1562, Wilhelm Holtzmann (German)
- 1564–1566, Pierre Forcadel de Béziers (French)
- 1570, Henry Billingsley (English)
- 1575, Commandinus (Italian)
- 1576, Rodrigo de Zamorano (Spanish)
- 1594, Typografia Medicea (edition of the Arabic translation of Nasir al-Din al-Tusi)
- 1604, Jean Errard de Bar-le-Duc (French)
- 1606, Jan Pieterszoon Dou (Dutch)
- 1607, Matteo Ricci, Xu Guangqi (Chinese)
- 1613, Pietro Cataldi (Italian)
- 1615, Denis Henrion (French)
- 1617, Frans van Schooten (Dutch)
- 1637, L. Carduchi (Spanish)
- 1639, Pierre Hérigone (French)
- 1651, Heinrich Hoffmann (German)
- 1651, Thomas Rudd (English)
- 1660, Isaac Barrow (English)
- 1661, John Leeke and Geo. Serle (English)
- 1663, Domenico Magni (Italian from Latin)
- 1672, Claude François Milliet Dechales (French)
- 1680, Vitale Giordano (Italian)
- 1685, William Halifax (English)
- 1689, Jacob Knesa (Spanish)
- 1690, Vincenzo Viviani (Italian)
- 1694, Ant. Ernst Burkh v. Pirckenstein (German)
- 1695, C. J. Vooght (Dutch)
- 1697, Samuel Reyher (German)
- 1702, Hendrik Coets (Dutch)
- 1705, Edmund Scarburgh (English)
- 1708, John Keill (English)
- 1714, Chr. Schessler (German)
- 1714, W. Whiston (English)
- 1720s Jagannatha Samrat (Sanskrit, based on the Arabic translation of Nasir al-Din al-Tusi)
- 1731, Guido Grandi (abbreviation to Italian)
- 1738, Ivan Satarov (Russian from French)
- 1744, Mårten Strömer (Swedish)
- 1749, Dechales (Italian)
- 1745, Ernest Gottlieb Ziegenbalg (Danish)
- 1752, Leonardo Ximenes (Italian)
- 1756, Robert Simson (English)
- 1763, Pubo Steenstra (Dutch)
- 1768, Angelo Brunelli (Portuguese)
- 1773, 1781, J. F. Lorenz (German)
- 1780, Baruch Schick of Shklov (Hebrew)
- 1781, 1788 James Williamson (English)
- 1781, William Austin (English)
- 1789, Pr. Suvoroff and Yos. Nikitin (Russian from Greek)
- 1795, John Playfair (English)
- 1803, H.C. Linderup (Danish)
- 1804, F. Peyrard (French)
- 1807, Józef Czech (Polish based on Greek, Latin and English editions)
- 1807, J. K. F. Hauff (German)
- 1818, Vincenzo Flauti (Italian)
- 1820, Benjamin of Lesbos (Modern Greek)
- 1826, George Phillips (English)
- 1828, Joh. Josh and Ign. Hoffmann (German)
- 1828, Dionysius Lardner (English)
- 1833, E. S. Unger (German)
- 1833, Thomas Perronet Thompson (English)
- 1836, H. Falk (Swedish)
- 1844, 1845, 1859 P. R. Bråkenhjelm (Swedish)
- 1850, F. A. A. Lundgren (Swedish)
- 1850, H. A. Witt and M. E. Areskong (Swedish)
- 1862, Isaac Todhunter (English)
- 1865, Sámuel Brassai (Hungarian)
- 1873, Masakuni Yamada (Japanese)
- 1880, Vachtchenko-Zakhartchenko (Russian)
- 1901, Max Simon (German)
- 1908, Thomas Little Heath (English)
- 1939, R. Catesby Taliaferro (English)

Currently in print
- Euclid's Elements – All thirteen books in one volume, Based on Heath's translation, Green Lion Press ISBN 1-888009-18-7.
- The Elements: Books I-XIII-Complete and Unabridged, (2006) Translated by Sir Thomas Heath, Barnes & Noble ISBN 0-7607-6312-7.
- The Thirteen Books of Euclid's Elements, translation and commentaries by Heath, Thomas L. (1956) in three volumes. Dover Publications. ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3)

Notes
- Heath (1956) (vol. 1), p. 372
- Heath (1956) (vol. 1), p. 409
- Boyer (1991). "Euclid of Alexandria". p. 101. With the exception of the Sphere of Autolycus, surviving work by Euclid are the oldest Greek mathematical treatises extant; yet of what Euclid wrote more than half has been lost.
- Heath (1956) (vol. 1), p. 114
- Encyclopedia of Ancient Greece (2006) by Nigel Guy Wilson, page 278. Published by Routledge Taylor and Francis Group. Quote: "Euclid's Elements subsequently became the basis of all mathematical education, not only in the Roman and Byzantine periods, but right down to the mid-20th century, and it could be argued that it is the most successful textbook ever written."
- Boyer (1991). "Euclid of Alexandria". p. 100. As teachers at the school he called a band of leading scholars, among whom was the author of the most fabulously successful mathematics textbook ever written – the Elements (Stoichia) of Euclid.
- Boyer (1991). "Euclid of Alexandria". p. 119. The Elements of Euclid not only was the earliest major Greek mathematical work to come down to us, but also the most influential textbook of all times. [...] The first printed versions of the Elements appeared at Venice in 1482, one of the very earliest of mathematical books to be set in type; it has been estimated that since then at least a thousand editions have been published.
Perhaps no book other than the Bible can boast so many editions, and certainly no mathematical work has had an influence comparable with that of Euclid's Elements.
- The Historical Roots of Elementary Mathematics by Lucas Nicolaas Hendrik Bunt, Phillip S. Jones, Jack D. Bedient (1988), page 142. Dover Publications. Quote: "the Elements became known to Western Europe via the Arabs and the Moors. There the Elements became the foundation of mathematical education. More than 1000 editions of the Elements are known. In all probability it is, next to the Bible, the most widely spread book in the civilization of the Western world."
- From the introduction by Amit Hagar to Euclid and His Modern Rivals by Lewis Carroll (2009, Barnes & Noble) pg. xxviii: Geometry emerged as an indispensable part of the standard education of the English gentleman in the eighteenth century; by the Victorian period it was also becoming an important part of the education of artisans, children at Board Schools, colonial subjects and, to a rather lesser degree, women. ... The standard textbook for this purpose was none other than Euclid's The Elements.
- Russell, Bertrand. A History of Western Philosophy. p. 212.
- W.W. Rouse Ball, A Short Account of the History of Mathematics, 4th ed., 1908, p. 54
- Ball, p. 38
- The Earliest Surviving Manuscript Closest to Euclid's Original Text (Circa 850); an image of one page
- L.D. Reynolds and Nigel G. Wilson, Scribes and Scholars 2nd. ed. (Oxford, 1974) p. 57
- One older work claims Adelard disguised himself as a Muslim student in order to obtain a copy in Muslim Córdoba (Rouse Ball, p. 165). However, more recent biographical work has turned up no clear documentation that Adelard ever went to Muslim-ruled Spain, although he spent time in Norman-ruled Sicily and Crusader-ruled Antioch, both of which had Arabic-speaking populations. Charles Burnett, Adelard of Bath: Conversations with his Nephew (Cambridge, 1999); Charles Burnett, Adelard of Bath (University of London, 1987).
- Busard, H.L.L. (2005). "Introduction to the Text". Campanus of Novara and Euclid's Elements I. Stuttgart: Franz Steiner Verlag. ISBN 978-3-515-08645-5.
- Henry Ketcham, The Life of Abraham Lincoln, at Project Gutenberg, https://www.gutenberg.org/ebooks/6811
- Dudley Herschbach, "Einstein as a Student," Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA, page 3, web: HarvardChem-Einstein-PDF: noting that Max Talmud visited on Thursdays for six years.
- Ball, p. 55
- Ball, pp. 58, 127
- Heath (1963), p. 216
- Ball, p. 54
- Godfried Toussaint, "A new look at Euclid's second proposition," The Mathematical Intelligencer, Vol. 15, No. 3, 1993, pp. 12–23.
- Heath (1956) (vol. 1), p. 62
- Heath (1956) (vol. 1), p. 242
- Heath (1956) (vol. 1), p. 249
- Ball (1960) p. 55.
- Boyer (1991). "Euclid of Alexandria". pp. 118–119. In ancient times it was not uncommon to attribute to a celebrated author works that were not by him; thus, some versions of Euclid's Elements include a fourteenth and even a fifteenth book, both shown by later scholars to be apocryphal. The so-called Book XIV continues Euclid's comparison of the regular solids inscribed in a sphere, the chief results being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being that of the edge of the cube to the edge of the icosahedron, that is, .
It is thought that this book may have been composed by Hypsicles on the basis of a treatise (now lost) by Apollonius comparing the dodecahedron and icosahedron. [...] The spurious Book XV, which is inferior, is thought to have been (at least in part) the work of Isidore of Miletus (fl. ca. A.D. 532), architect of the cathedral of Holy Wisdom (Hagia Sophia) at Constantinople. This book also deals with the regular solids, counting the number of edges and solid angles in the solids, and finding the measures of the dihedral angles of faces meeting at an edge.
- Alexanderson & Greenwalt 2012, pg. 163
- K. V. Sarma (1997), Helaine Selin, ed., Encyclopaedia of the history of science, technology, and medicine in non-western cultures, Springer, pp. 460–461, ISBN 978-0-7923-4066-9
- JNUL Digitized Book Repository
- Alexanderson, Gerald L.; Greenwalt, William S. (2012), "About the cover: Billingsley's Euclid in English", Bulletin (New Series) of the American Mathematical Society 49 (1): 163–167
- Ball, W.W. Rouse (1960). A Short Account of the History of Mathematics (4th ed. [Reprint. Original publication: London: Macmillan & Co., 1908] ed.). New York: Dover Publications. pp. 50–62. ISBN 0-486-20630-0.
- Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements (2nd ed. [Facsimile. Original publication: Cambridge University Press, 1925] ed.). New York: Dover Publications.
- (3 vols.): ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3). Heath's authoritative translation plus extensive historical research and detailed commentary throughout the text.
- Heath, Thomas L. (1963). A Manual of Greek Mathematics. Dover Publications. ISBN 978-0-486-43231-1.
- Boyer, Carl B. (1991). A History of Mathematics (Second Edition ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7.
- Multilingual edition of Elementa in the Bibliotheca Polyglotta
- Euclid (1997) [c. 300 BC]. David E. Joyce, ed. "Elements". Retrieved 2006-08-30. In HTML with Java-based interactive figures.
- Euclid's Elements in English and Greek (PDF), utexas.edu
- Richard Fitzpatrick – a bilingual edition (typeset in PDF format, with the original Greek and an English translation on facing pages; free in PDF form, available in print) ISBN 978-0-615-17984-1
- Heath's English translation (HTML, without the figures, public domain) (accessed February 4, 2010)
- Oliver Byrne's 1847 edition (also hosted at archive.org) – an unusual version by Oliver Byrne (mathematician) who used color rather than labels such as ABC (scanned page images, public domain)
- The First Six Books of the Elements by John Casey and Euclid scanned by Project Gutenberg.
- Reading Euclid – a course in how to read Euclid in the original Greek, with English translations and commentaries (HTML with figures)
- Sir Thomas More's manuscript
- Latin translation by Aethelhard of Bath
- Euclid Elements – The original Greek text Greek HTML
- Clay Mathematics Institute Historical Archive – The thirteen books of Euclid's Elements copied by Stephen the Clerk for Arethas of Patras, in Constantinople in 888 AD
- Kitāb Taḥrīr uṣūl li-Ūqlīdis Arabic translation of the thirteen books of Euclid's Elements by Nasīr al-Dīn al-Ṭūsī. Published by Medici Oriental Press (also Typographia Medicea). Facsimile hosted by Islamic Heritage Project.
- Euclid's "Elements" Redux, an open textbook based on the "Elements" - 1607 Chinese translations reprinted as part of Siku Quanshu, or "Complete Library of the Four Treasuries."
the National Council of Teachers of Mathematics
In this lesson for grades 6-8, learners calculate and compare volumes of cylinders and rectangular prisms, using bales of hay as the common unit. Students must use mathematics to explain why one shape of hay bale may be preferable economically to the other. But wait ... the round bales don't fit in a barn as well. Storing bales outside results in loss of product due to mold, which students must consider. The activity was developed to promote understanding of volume through real-life scenarios. The resource is aligned to NCTM standards and includes lesson objectives, teaching tips, and a printable student worksheet. This resource is part of a larger collection of lessons, labs, and activities developed by the National Council of Teachers of Mathematics (NCTM).
Metadata instance created February 1, 2011 by Caroline Hall; last modified January 22, 2013 by Caroline Hall. Last update when cataloged: July 15, 2008.
AAAS Benchmark Alignments (2008 Version)
9. The Mathematical World
6-8: 9C/M7. For regularly shaped objects, relationships exist between the linear dimensions, surface area, and volume.
6-8: 9C/M10. Geometric relationships can be described using symbolic equations.
9-12: 9C/H3a. Geometric shapes and relationships can be described in terms of symbols and numbers—and vice versa.
12. Habits of Mind
12B. Computation and Estimation
6-8: 12B/M3. Calculate the circumferences and areas of rectangles, triangles, and circles, and the volumes of rectangular solids.
6-8: 12B/M7b. Convert quantities expressed in one unit of measurement into another unit of measurement when necessary to solve a real-world problem.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.1 Make sense of problems and persevere in solving them.
Measurement and Data (K-5)
Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition. (5)
5.MD.3.b A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.
5.MD.5.b Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems.
Solve real-life and mathematical problems involving angle measure, area, surface area, and volume. (7)
7.G.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms.
Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. (8)
8.G.9 Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems.
This resource is part of a Physics Front Topical Unit.
Topic: Measurement and the Language of Physics
Unit Title: Applying Measurement in Physics
This fun lesson lets students explore a real-life scenario as they compare volume for a cylinder (round hay bale) and a rectangular prism (box-shaped hay bale). They will use math to decide why one shape of hay bale is preferable economically to the other. But wait ... the round bales don't fit in a barn as well, which could affect the outcome of the problem!
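As a rough illustration of the comparison the lesson asks for, the sketch below computes the volume of a round bale (cylinder) and a square bale (rectangular prism); the bale dimensions are made-up example values and are not taken from the lesson itself.

    import math

    # Hypothetical bale sizes in metres, chosen only to illustrate the formulas.
    round_radius, round_length = 0.75, 1.2          # cylinder: V = pi * r^2 * h
    sq_l, sq_w, sq_h = 0.9, 0.45, 0.35              # prism: V = l * w * h

    round_volume = math.pi * round_radius**2 * round_length
    square_volume = sq_l * sq_w * sq_h

    print(f"Round bale:  {round_volume:.2f} m^3")
    print(f"Square bale: {square_volume:.2f} m^3")
    print(f"One round bale holds about {round_volume / square_volume:.1f} square bales of hay")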
National Council of Teachers of Mathematics. Illuminations: Hay Bale Farmer. Reston: National Council of Teachers of Mathematics, July 15, 2008. http://illuminations.nctm.org/LessonDetail.aspx?id=L783 (accessed 29 July 2014).
Sugary insights into worm parasite infections Understand article Schistosomiasis is the second most socioeconomically devastating parasitic disease after malaria. Alan Wilson and Stuart Haslam investigate new ways to combat the parasite – taking advantage of its sugar coating. Schistosomiasis is a major parasitic disease (also known as bilharzia) which infects humans and domestic livestock, and is caused by several species of flatworm in the genus Schistosoma. The World Health Organization estimates that as many as 200 million people are infected in parts of South America, Africa and Asia. Approximately 280 000 people die from schistosomiasis each year in sub-Saharan Africa and millions more are chronically ill. Typical symptoms include abdominal pain, diarrhoea, fever, anaemia and fatigue; the stunting of children’s growth and cognitive development is another consequence of infection. Schistosomiasis thus remains an important public health problem in developing countries. The flatworms pass through a number of stages: (see life cycle) eggs, free-swimming larvae (miracidia), sporocysts, a second free-swimming larval stage (the cercaria) and finally the adult worms. An entire life cycle takes a minimum of 12 weeks. Worm eggs are released into water when human faeces or urine enter rivers or other water bodies. Freshwater snails of various genera act as intermediate hosts of the flatworms, and the presence of suitable snail species determines the distribution of the disease. Contact with water causes the eggs to hatch in a matter of minutes into miracidia, which enter the snail by penetrating its foot, after which the larvae are known as sporocysts. The sporocysts undergo asexual reproduction within the snail to produce thousands of cercariae, the aquatic stage that infects humans by penetrating the skin. A single snail can shed cercariae for weeks, and this represents a major amplification step in the life cycle of the parasite: one miracidium gives rise to tens of thousands of cercariae before the infection is spent. The most common way of acquiring schistosomiasis is by wading or swimming in lakes or other water bodies that are infested with infected snails. Non-human hosts include other mammals, as well as birds and crocodiles. Within the human host, the larvae migrate through the blood circulatory system to the hepatic portal blood vessels between the intestine and liver (in the case of Schistosoma mansoni or S. japonicum) or vesical veins of the bladder (in the case of S. haematobium). There, they feed on red blood cells, develop to adulthood and mate. The centimetre-long male then grasps the longer and thinner female in his ventral groove, where she will remain, and, using a combination of oral and ventral suckers, transports her against the blood flow to the smaller blood vessels of the host. Here she pushes forward, poking out at the front of the male’s groove, to deposit hundreds of eggs per day into the blood vessels. Once established, the adult worms can live for decades in the hostile environment of the host bloodstream, potentially open to immune attack. Finally, the eggs must escape into the intestine or the bladder, to be shed to the external environment in faeces or urine, continuing the life cycle. The adult and larval worms are comparatively harmless to their human host, but the eggs can cause severe disease. 
The severity of the tissue damage caused by the eggs is positively correlated both to the number of worms that a person accumulates, and to the intensity of the human immune response to the eggs: too strong an inflammatory response ultimately leads to more tissue damage, too little to tissue necrosis by egg products. Moreover, a large proportion of the eggs do not escape from the host. Instead, in S. mansoni and S. japonicum infections, they are carried in the blood circulation to lodge in the liver. As a result, fibrous layers of cells from the immune system (known as granulomas) form around the eggs – and it is this response rather than the worms themselves that causes the life-threatening syndrome. S. haematobium eggs are equally dangerous, causing fibrosis – the formation of excess fibrous connective tissue – in the bladder wall. We were interested in how the parasite enters and leaves the human body. In both the cercarial stage penetrating the host’s skin and the egg escaping through the gut or bladder wall, the parasite releases secretions to help it move through the host’s body. The cercariae possess a series of specialised gland cells. These release a mixture of proteins that have been shown to help the larvae pass through the tough stratum corneum – the outermost layer of the skin – then cross the dermis, and finally penetrate a blood vessel. The secretions from the eggs are released by a specialised tissue, the envelope, which lies beneath the egg shell and completely surrounds the growing miracidium larva inside. These secretions help the eggs to leave a blood vessel and passively cross the tissues to reach the lumen of the intestine or bladder. They are too big to cross capillary beds, so if they break free in the blood vessels they travel downstream to the next organ – the liver, in the case of S. mansoni. In the long term, the cercarial secretions could be a suitable target for a drug to treat schistosomiasis or for a vaccine to prevent it. But to develop an effective drug, scientists need to know how the secretions work and what they consist of. We characterised the proteins in both the cercarial and egg secretions of S. mansoni using mass spectrometry (see box) and showed that they have a relatively simple composition. Cercarial secretions contain several enzymes that degrade proteins (called proteases), plus a series of proteins and glycoproteins that may function by modifying the host’s immune response. The proteins secreted by the eggs also have protein-degrading activity, although we do not know exactly how they work, because the amino-acid sequence of the principal components is unlike that of any other proteins for which we know the function. Unusually for a parasite, both the egg and cercarial secretions are very immunogenic – they provoke strong antibody responses from the host immune system. But what makes the secretions so immunogenic? Studies on human, primate and rodent responses to the infection (Kariuki et al., 2008) have revealed that the vast bulk of antibodies directed against both larval and egg secretions recognise the carbohydrate (glycan) rather than the protein part of the secreted glycoproteins. So what are glycoproteins? And what is the role of the glycans in the biology of the parasites? 
The central dogma of modern biology states that DNA encodes the basic template of life, and that the information in the DNA code is first translated into mRNA and finally into proteins which carry out many of the fundamental tasks both in and between the billions of cells which make up a living organism as complex as a human being. But to say that there are just three key types of molecules in living systems is an over-simplification. It is estimated that more than half of all proteins in humans are modified by the addition of sugar molecules, forming glycoproteins. These glycoproteins play a major role in the way that molecules and cells recognise each other, and therefore in the many interactions that determine how diseases are spread or combated.

Every cell in the human body (indeed, in all eukaryotes) is coated with a sugar-rich layer called the glycocalyx. Acting as identity tags, glycans on the outside of the glycocalyx interact with a variety of receptors (recognition molecules) on the membranes of surrounding cells and thereby help to control the social (correct) and anti-social (errant) behaviour of our cells. The worm parasite appears to exploit this glycan recognition process to manipulate the host’s immune system and allow the worm to complete its life cycle. By characterising the detailed structure of the important worm glycans, we want to understand more about how these interactions take place.

Our analytical method of choice to derive the glycan structures is mass spectrometry (see box), as it is exquisitely sensitive (data can be obtained from very tiny amounts of material, such as 1 femtomole = 1 billionth of a millionth (10^-15) of a mole), and it can be used to study very complex mixtures. In a mass spectrometry experiment, energy is transferred to the purified worm glycans, for example by pulsing them with a laser beam. This energy transfer makes them ionised and charged. Once they have a charge, they can be made to ‘fly’ through the analytical section of the mass spectrometer. There, the different glycans are separated by their mass-to-charge ratio. From this information, the structure of the glycans can be deduced in terms of their monosaccharide composition and in terms of how they are linked together.

Our mass spectrometry analyses revealed that both the cercarial and egg secretions contain very similar, highly immunogenic glycan structures. In the case of the non-motile egg, it is as though the egg were trying to attract attention to itself. This led us to think that the parasite egg actually relies on the host immune response, which produces factors such as proteases, to help it escape from the blood vessels to the gut lumen or the bladder. Why the eggs would take the risk of being attacked by the immune system – when they have their own proteases – remains unclear. In the case of the mobile cercaria, we propose a ‘smokescreen hypothesis’: we think that cercariae ‘deliberately’ attract attention to their secreted glycoproteins to distract the host’s immune response away from protein targets of the larva which are more vital to its survival. Armed with a detailed knowledge of the parasite glycan structures, we hope to design new anti-parasite drugs or vaccines in the future.

Mass spectrometry is an analytical method used to determine the elemental composition of a sample or molecule. It can be used for both qualitative and quantitative measurements.
Not only is it an important method for protein analysis, it is also widely used in space missions to characterise the composition of heavenly bodies. The principle consists of ionising the molecules or molecule fragments in the sample and then measuring their mass-to-charge ratios. The machine used for this method, a mass spectrometer, is generally composed of three sections:
- The ion source, in which the sample is split into gas phase ions.
- The mass analyser, where electromagnetic fields are applied to separate the ions by their mass-to-charge ratio. These fields exert forces on the ions; the electric field may speed up or slow down a charged particle, and its direction may be altered by the magnetic field. The magnitude of the deflection of the moving ion’s trajectory depends on its mass-to-charge ratio: according to Newton’s second law of motion, lighter ions are deflected by the magnetic force more than heavier ions.
- The detector, which records and quantifies the ions’ mass-to-charge ratio. This information is then used to determine the chemical element composition of the original sample.
This work was funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and the Wellcome Trust, with additional funds from the UNDP/World Bank/World Health Organization Special Programme for Research and Training in Tropical Diseases.
- Kariuki TM, Farah IO, Wilson RA, Coulson PS (2008) Antibodies elicited by the secretions from schistosome cercariae and eggs are predominantly against glycan epitopes. Parasite Immunology 30(10): 554-62.
- Jang-Lee J, Curwen RS, Ashton PD, Tissot B, Mathieson W, Panico M, Dell A, Wilson RA, Haslam SM (2007) Glycomics analysis of Schistosoma mansoni egg and cercarial secretions. Molecular and Cellular Proteomics 6: 1485-1499. doi:10.1074/mcp.M700004-MCP200
- For more information about schistosomiasis, see: www.york.ac.uk/res/schisto/background.htm
- For more information about mass spectrometry, see: www3.imperial.ac.uk/lifesciences/research/molecularbiosciences/massspec
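To make the mass-analyser point above a little more concrete, the toy calculation below estimates the radius of the circular path for singly charged ions of two different masses in a magnetic sector instrument, using the standard relation r = m·v / (q·B) for a charge moving perpendicular to a uniform magnetic field. The speed, field strength and fragment masses are invented illustration values, not data from the study.

    # Toy illustration of why lighter ions are deflected more in a magnetic sector.
    # r = m*v / (q*B): smaller mass -> smaller radius -> sharper deflection.
    E_CHARGE = 1.602e-19      # C, charge of a singly ionised molecule
    AMU = 1.661e-27           # kg per unified atomic mass unit

    def bend_radius(mass_amu, speed=1.0e5, field=0.5):
        """Radius (m) of the circular path for a singly charged ion.

        speed (m/s) and field (tesla) are made-up example values."""
        mass_kg = mass_amu * AMU
        return mass_kg * speed / (E_CHARGE * field)

    for mass in (500, 2000):   # two hypothetical glycan fragment masses in Da
        print(f"{mass:5d} Da -> radius {bend_radius(mass):.2f} m")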
In most programming languages, we use loop constructs like 'while' and 'for' to repeatedly execute a block of code until a certain condition is met. A recursive function does pretty much the same thing; except that it does this by calling itself within itself! And just as in loops it stops calling itself when a certain condition is met. Thus, in computer programming, recursion is when a function calls itself. In code this is what recursion looks like:

    def decrement_number_until_zero(number):
        print(number)
        decrement_number_until_zero(number - 1)

Oh, that works? Well, it wouldn't! Why? When executed this function will throw a stack overflow error (Python reports it as a RecursionError: maximum recursion depth exceeded).

Stack Overflow Error?

A stack at the most basic level is a data structure that only exposes its topmost element. Think of a stack as a cylinder with one end open and the other end closed. In such a cylinder you obviously can only add and remove items from the open end. A stack is often referred to as a first in, last out data structure. This is true because stacks only allow you to add and remove items from the top, so the last element to go into the stack will be the first to go out and the first to go into the stack will be the last to go out. Adding an element to a stack is called pushing and removing an element from a stack is called popping.

Okay, How is this Connected to Stack Overflow, You Ask?

Well, anytime you make a function call, in Python and most programming languages, the function is pushed into a stack data structure called the call stack. And when the function returns, the function is popped from the call stack. Each function pushed into a stack is called a stack frame. A stack frame basically stores information like the variables used in a function and where the function is supposed to return to, among other things. In recursive functions, every single time a function references itself, that same function is pushed into the call stack. In Python there is an upper bound to the number of functions the call stack can accommodate at once. Our function above will throw the stack overflow error because it will attempt to add itself to the call stack an infinite number of times. Remember, the call stack in Python has a threshold, and the stack overflow error is thrown when the threshold is reached. This happens because the function definition does not specify the condition for when the function is supposed to stop calling itself. This condition is called the base case. So a better way of writing the function above is this:

    def decrement_number_until_zero(number):
        if number > 0:
            print(number)
            decrement_number_until_zero(number - 1)

This revised function wouldn't attempt to execute to infinity because now a condition for when it should stop executing has been provided. A second example of recursion in action is seen below:

    def reverse_string_recursively(string):
        if len(string) == 0:
            return string
        else:
            return reverse_string_recursively(string[1:]) + string[0]

The function above reverses a string recursively. Again this is a great example of recursion because it specifies a condition for when the program should end execution. As you might have figured out already, a good recursive function has two parts:
- Base case: a boolean expression that, when it evaluates to true, terminates the execution.
- Recursive case: the part that executes when the base case evaluates to false. Usually this is where the function references itself.
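You can inspect that threshold for yourself. The small snippet below (an illustration added here, not part of the original post) reads Python's default recursion limit and shows the error a function with no base case raises:

    import sys

    print(sys.getrecursionlimit())   # typically 1000 by default

    def count_down_forever(n):
        # no base case on purpose, so the call stack keeps growing
        count_down_forever(n - 1)

    try:
        count_down_forever(10)
    except RecursionError as err:
        print("stopped by the interpreter:", err)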
Now I've said a lot about call stacks, base cases and recursive cases. Let me quickly demonstrate how recursive functions are executed under the hood. The way recursive functions execute under the hood could be likened to a storyteller who is telling a story about Person A, then switches midway to a story about Person B, then again switches midway to a story about Person C. At the end of Person C's story, the storyteller goes back to the story about Person B where s/he left off. At the end of Person B's story, the storyteller then moves on and completes Person A's story where s/he left off.

Didn't Get that? Let's first of all start by looking at an example of how this works in regular functions.

    def c():
        print("at the beginning of c")
        print("at the end of c")

    def b():
        print("at the beginning of b")
        c()
        print("at the end of b")

    def a():
        print("at the beginning of a")
        b()
        print("at the end of a")

    a()

First of all, the function a is loaded into the call stack and, just as in the storyteller analogy, midway into its execution the main execution thread switches to function b, and there too it switches to function c. When function c is done executing, the main execution thread switches back to function b, and it ends its execution with function a. So the output of calling function a above would be:

    at the beginning of a
    at the beginning of b
    at the beginning of c
    at the end of c
    at the end of b
    at the end of a

As mentioned earlier on, when the last line of a function is executed, the function returns. As a result that function is popped from the call stack.

Beginning to Understand How this Works? Alright, let's look at how this works in recursive functions. Let's try to understand this by looking at a popular problem in recursion: finding the factorial of a number.

    def factorial(number):
        """factorial recursive implementation"""
        if number < 0:
            return -1
        elif number < 2:
            return 1
        else:
            return number * factorial(number - 1)

    factorial(5)

First of all, factorial(5) is called. Because 5 is not less than 2, factorial(5) has to evaluate 5 * factorial(4), so it triggers a call to factorial(4). This continues until one of the conditions specified evaluates to true, which happens when factorial(1) is called. Because 1 is less than 2, factorial(1) returns a value that is then passed to the frames below it in the stack, and once it returns it is popped from the stack. Visualize the top of the stack this way:

    factorial(1)        <- top of the stack, returns 1
    2 * factorial(1)    <- frame for factorial(2)
    3 * factorial(2)    <- frame for factorial(3)

Note that the frame lowest in this picture was pushed first; the full stack continues down to factorial(5), which was loaded first of all. Because 1 is less than 2, based on our function definition, factorial(1) returns 1. That result is passed to the frame below it, where it is multiplied by 2, so factorial(2) returns 2. That output is in turn passed to the frame below, factorial(3), which multiplies it by 3 and returns 6. It continues this way until the function at the bottom of the stack, factorial(5), returns 120. This is how recursive functions are executed! Still Don't get it? Well just give up on recursion and keep using your loops… lol. But see, frankly speaking, any recursive function could be implemented with loops.
For example, the reverse string and factorial functions above could be implemented using loops like so:

    def reverse_string(string):
        """iterative implementation"""
        length = len(string) - 1
        reversed_string = ""
        while length >= 0:
            reversed_string += string[length]
            length -= 1
        return reversed_string

    def factorial(number):
        """iterative implementation of factorial"""
        if number < 0:
            return -1
        elif number < 2:
            return 1
        else:
            factorial = 1
            for num in range(1, number + 1):
                factorial = factorial * num
            return factorial

Furthermore, when compared with their iterative counterparts, recursive solutions do not necessarily have better space and time complexity. In fact, because recursive functions repeatedly add stack frames to the call stack, one could even argue that they have a terribly bad space complexity. Recursive solutions are just more elegant! They are more concise! This in turn improves code readability and by extension code quality.

Further reading: Freecodecamp: How Recursion Works Explained; Youtube: Recursion for Beginner, a Beginners Guide to Recursion
Description of the PID Algorithm

u(t) = Kp·e(t) + Ki·∫ e(τ) dτ + Kd·de(t)/dt, with the integral taken from 0 to t.

A familiar example of a control loop is the action taken when adjusting hot and cold faucets to fill a container with water at a desired temperature by mixing hot and cold water. The person touches the water in the container as it fills to sense its temperature. Based on this feedback they perform a control action by adjusting the hot and cold faucets until the temperature stabilizes as desired. The sensed water temperature is the process variable (PV). The desired temperature is called the setpoint (SP). The input to the process (the water valve position), and the output of the PID controller, is called the manipulated variable (MV) or the control variable (CV). The difference between the temperature measurement and the setpoint is the error (e) and quantifies whether the water in the container is too hot or too cold and by how much.

After measuring the temperature (PV), and then calculating the error, the controller decides how much to change the tap position (MV). Because the taps can be adjusted for anything from cool water through to very hot, this is an example of proportional control. In the event that water in the container is not heating quickly enough, the controller may try to speed up the process by opening up the hot water valve quite wide for a while. This is an example of derivative action. If the temperature of the container is settling out too low, despite a good flow of warm water, the controller may open the hot valve more and more as time goes by. This is an example of integral control.

Making a change that is too large when the error is small will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time then the system is unstable, whereas if it decreases the system is stable. If the oscillations remain at a constant magnitude the system is marginally stable. In the interest of achieving a gradual convergence to the desired temperature (SP), the controller may damp the anticipated future oscillations by tempering its adjustments, or reducing the loop gain.

If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally controllers are used to reject disturbances and to implement setpoint changes. Changes in feedwater temperature constitute a disturbance to the faucet temperature control process. In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, weight, position, speed and practically every other variable for which a measurement exists.

Finally, at the end there is a description of the Ziegler-Nichols Closed Loop Tuning method. I have used this a little but it has not been my most effective method. It helps for a quick tune. The first element of PID control to be developed is Proportional control.
The equation is simple: writing Kc for the controller gain and e(t) for the error, the proportional contribution is just Kc·e(t). Note the action may be either direct or reverse. In a direct acting control loop an increase in the process measurement causes an increase in the output to the final control element. The proportional only equation is:

Output = Kc·e(t) + bias

The bias is sometimes known as the manual reset. Some control systems (such as Foxboro products) use proportional band rather than gain. The proportional band and the gain are related by:

Proportional band (%) = 100 / Gain

Gain is the ratio of the change in the output to the change in the input. Proportional band is the amount the input would have to change in order to cause the output to move from 0 to 100% (or vice versa). With proportional only control the controller will not bring the process measurement to the setpoint without a manual adjustment to the bias (or manual reset) term of the equation. In the early days of control the operator, upon observing an offset in the control loop, would correct the offset by manually "resetting" the controller (adjusting the bias).

The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output. Rather than require that the operator "manually reset" the control loop whenever there was a load change, control functions were developed to "automatically reset" the controller by adjusting the bias term whenever there was an error. This "automatic reset" is also known simply as "reset" or as "integral". The variable of integration takes on values from time 0 to the present, in minutes. The most common way to implement integral mode in analog controllers is to use a positive feedback into the output. The equation for PI control, with Ti the reset time in minutes, is:

Output = Kc·[e(t) + (1/Ti)·∫ e(t) dt] + bias

The amount of reset used is measured in terms of "reset time" in minutes or its inverse, "reset rate" in repeats per minute. The following test can be performed on a controller which is not connected to the process: apply a constant error and watch the output. It first steps by the proportional amount, Kc·e, and then ramps, adding another step of the same size once every reset time (which is why reset rate is quoted in repeats per minute).

The third term of PID control is derivative, also known as Pre-Act (trademark of Taylor Instrument Companies, now ABB), and rate. The derivative term looks at the rate of change of the input and adjusts the output based on the rate of change. The derivative function can either use the time derivative of the error, which would include changes in the setpoint, or of the measurement only, excluding setpoint changes. The equation for the derivative contribution (assuming derivative on error), with Td the derivative time, is:

Derivative contribution = Kc·Td·de(t)/dt

The amount of derivative used is measured in minutes of derivative. To illustrate the meaning of minutes of derivative, consider the following open loop test: a ramp is applied to the controller input. On the trend record, note that when the ramp is started, with no derivative (dashed line) the output ramps up due to the change in input and the gain. Using derivative (solid line) the output jumps up, rises in a ramp, then jumps down. The difference in time between the solid line and the dashed line represents the amount of derivative, in units of time (usually minutes).

Combining the three elements, gain, integral, and derivative, we have the equation:

Output = Kc·[e(t) + (1/Ti)·∫ e(t) dt + Td·de(t)/dt] + bias

Note that in the equation the gain is multiplied by all three terms. This is important for the PID equation to be able to be tuned by any of the standard tuning methods. Before starting, test the linearity of the control equipment.
Is there a linear relationship between the control and the process? Graph the control at 3 to 4 points in the span, like 25, 50, and 75 percent, and graph the process response. The relationship between the control and response is a good place to start for Kp. The time between the control change and the final process response will be a good start for the integral time. If the system must remain online, one tuning method is to first set the Ki and Kd values to zero. Increase Kp until the output of the loop oscillates, then set Kp to approximately half of that value for a "quarter amplitude decay" type response. Then increase Ki until any offset is corrected in sufficient time for the process. However, too much will cause instability. Finally, increase Kd, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Careful: too much Kd will cause excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly. Some systems cannot accept overshoot, in which case an over-damped closed-loop system is required, which will require a Kp setting significantly less than half that of the Kp setting that was causing oscillation.

Ziegler-Nichols Closed Loop Tuning

The Ziegler-Nichols Closed Loop method is one of the more common methods used to tune control loops. It was first introduced in a paper published in 1942 by J.G. Ziegler and N.B. Nichols, both of whom at the time worked for the Taylor Instrument Companies of Rochester, NY. The method is useful for most process control loops. To use the method the loop is tested with the controller in automatic. The Closed Loop method determines the gain at which a loop with proportional only control will oscillate, and then derives the controller gain, reset, and derivative values from the gain at which the oscillations are sustained and the period of oscillation at that gain. The ZN Closed Loop method should produce tuning parameters which will obtain quarter wave decay. This is considered good tuning but is not necessarily optimum tuning. Ziegler-Nichols Tuning Chart:
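As a rough stand-in for the tuning chart, the sketch below shows a minimal discrete-time PID loop in Python in the "gain multiplies all three terms" form used above, together with the classic Ziegler-Nichols closed-loop PID settings (Kc = 0.6·Ku, Ti = Pu/2, Td = Pu/8, where Ku is the ultimate gain and Pu the period of sustained oscillation). The plant model and all numbers are invented for illustration only.

    # Ku, Pu and the first-order "process" below are made-up illustration values.
    Ku, Pu = 4.0, 20.0                      # ultimate gain and period from a closed-loop test
    Kc, Ti, Td = 0.6 * Ku, Pu / 2, Pu / 8   # classic Ziegler-Nichols PID settings
    dt = 1.0                                # controller scan time

    def pid_step(error, state):
        """One controller scan; state carries the integral sum and the previous error."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = Kc * (error + integral / Ti + Td * derivative)
        return output, (integral, error)

    # Drive a crude first-order process toward a setpoint of 50.
    setpoint, pv, state = 50.0, 20.0, (0.0, 0.0)
    for scan in range(30):
        out, state = pid_step(setpoint - pv, state)
        pv += (out - pv) * 0.1              # toy process response
        print(f"scan {scan:2d}: output {out:6.1f}, PV {pv:6.1f}")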
How do acids and alkalis react? This activity links the neutralisation of an acid by an alkali to the changes in ionic concentrations that result from the reaction between hydrogen ions and hydroxide ions. The practical involves a mildly toxic alkaline solution, barium hydroxide. It needs careful preparation and manipulation, involving a conductimetric titration. A clear demonstration is more likely to lead to successful learning than a class activity that will need a lot of teacher support. A good demonstration should take about 25 mins. Each demonstration requires: Barium hydroxide solution, 0.10 M, (HARMFUL, IRRITANT), about 200 cm3 Dilute sulfuric acid, 1.0 M, (IRRITANT), about 25 cm3 Purified (distilled or deionised) water Phenolphthalein indicator solution (HIGHLY FLAMMABLE) Refer to Health & Safety and Technical notes section below for additional information. Eye protection for the teacher and for any members of class who assist at the bench Test-tube (150 x 25 mm) Beakers (100 cm3), 2 Measuring cylinder (100 cm3) Burette (50 cm3) Clamps (2) and stand Small funnel (for filling the burette) White tile (for standing beaker on during titration) and white card background (for class visibility) Pair of carbon electrodes (Note 1) in holder, with 4 mm plug adapters Plug leads (4 mm plug at each end), 4 Bulb (12 V) in holder AC demonstration ammeter Low voltage AC supply (Note 2) Health & Safety and Technical notes Wear eye protection. Barium hydroxide solution, Ba(OH)2(aq), (HARMFUL, IRRITANT at concentration used) - see CLEAPSS Hazcard. Solid barium hydroxide (CORROSIVE) contains water of crystallisation, and reacts with carbon dioxide from the air while in storage. It is only slightly soluble in water (maximum 4 g in 100 cm3) but the solution is much more alkaline than limewater. Make 250 cm3 of the solution. Purified water should be boiled to remove carbon dioxide before being added to solid barium hydroxide. Once prepared the solution is very sensitive to carbon dioxide and immediately goes cloudy (barium carbonate) when exposed to the atmosphere. All fresh solutions, which will be alkaline and may irritate sensitive skin, must be protected with a soda lime guard tube. For all these reasons, it is important to check the concentration of the solution before the demonstration to ensure it is reasonably close to 0.10 M – an exact concentration is not necessary. To do this titrate 50 cm3 of the solution with the dilute sulfuric acid (1.0 M), to be used in the experiment, using phenolphthalein indicator (two drops). A titre value between 4 cm3 and 6 cm3 is acceptable. Dilute sulfuric acid, H2SO4(aq), (IRRITANT at concentration used) - see CLEAPSS Hazcard and CLEAPSS Recipe Book. Phenolphthalein indicator solution (HIGHLY FLAMMABLE) - see CLEAPSS Hazcard and CLEAPSS Recipe Book. Phenolphthalein indicator solution should be provided in a dropper bottle. 1 The two carbon electrodes need to be mounted securely in a holder that keeps them parallel. If available, a 4 mm plug adapter should be fitted to the top of each electrode. If not available, crocodile clips can be used instead, but you will need a cardboard or plastic separator between the clips to avoid accidental short-circuiting. 2 The low voltage power supply should be a variable low voltage unit capable of supplying alternating current (AC) at about 12 V when connected through a 12 V bulb and the electrodes dipped in solution in series. 
A demonstration AC ammeter should be included if available, and if the 12 V bulb fails to light brightly enough for the class to see when tested as in Stage 1 below, the ammeter is essential. a Mix equal volumes of dilute sulfuric acid and barium hydroxide solution in a test-tube to observe what happens. b Add 50 cm3 of barium hydroxide solution to one beaker, and add 2–4 drops of phenolphthalein indicator solution to show the solution is alkaline. c Dip the electrodes into this solution to demonstrate that it conducts electricity. The AC supply should ensure there is no electrolysis. d Rinse the electrodes with purified water. Now test the sulfuric acid in a second beaker to show that it conducts electricity. e Fill the burette with 1.0 M sulfuric acid to the zero mark. Fix the burette securely over the beaker containing 50 cm3 of 0.1 M barium hydroxide solution, ready for titration. f Clamp the electrode assembly firmly at one side of the beaker so the electrodes dip into the full depth of the solution, and connect it to the rest of the test circuit. Place the stirring rod in the solution. g Switch on the supply, note the ammeter reading and the bulb brightness. h Add sulfuric acid from the burette, 0.5 cm3 at a time, with stirring. After each addition, note the ammeter reading and the bulb brightness, and look for a permanent change in the indicator colour. i At the indicator end-point note the volume of acid added, the ammeter reading and the bulb brightness. j Continue to add portions of acid and note the ammeter reading and bulb brightness until the change on further addition is minimal. This experiment provides important evidence for the simple hydrogen ion theory of acidity, and for the ionic nature of the neutralisation reaction with hydroxide ions. Thus it forms a natural part of a sequence of experiments in which this theoretical model can be built up for students. This experiment is not likely to be useful on its own. It also depends on students’ understanding of ionic theory in general, and their appreciation that the conductivity of a solution depends on the concentration of ions in the solution. Alternating current is used rather than direct current to avoid electrolysis taking place, at least to any extent that would affect the outcome of the experiment. Although this is likely to be a demonstration for most students, some teachers may wish to use it as a class experiment, possibly with older students. Safety issues are relatively minor, essentially using dilute sulfuric acid (1.0 M) and barium hydroxide solution (0.1 M). The latter should be treated as HARMFUL for students, even at this low concentration. The reason for using such different concentrations in a conductimetric titration is to minimise the decrease in conductivity caused by increasing the volume of water present as the titration proceeds, apart from the changes in the total number of ions present. Download some student questions. Here are answers to the questions 1 This revises the students’ understanding of ions, and their ability to identify the ions that are present (barium cations and hydroxide anions in barium hydroxide, and hydrogen cations and sulfate anions in sulfuric acid). The symbols for these ions are: Ba2+, OH-, H+, SO42- 2 The ions are the current carriers in solution, and conductivity depends on the concentration of ions. 
In asking which ions are removed in the formation of barium sulfate, there is an opportunity to write the ionic equation for the precipitation reaction Ba2+(aq) + SO42-(aq) → BaSO4(s) and hence to identify a fall in the total ion concentration which is reflected in a fall in conductivity. 3 Having dealt with the barium and sulfate ions, the students are then in a position to focus on what else is happening, starting with identifying hydrogen cations and hydroxide anions as potential reactants. These ions do react and the reaction is simple. 4 The ionic equation for the reaction between these ions is: H+(aq) + OH-(aq) → H2O(l) 5 Any reactant added once an end-point has been reached is ‘surplus’, and so ions are again present and the conduction of the solution rises again. It is unlikely that the end-point will be marked by zero conductivity. The titration method used is too crude to find the exact point at which almost no ions are present, and even a drop of sulfuric acid added nearly at the end-point will take the reaction past the end-point. Health and safety checked February 2008 Page last updated on 02 December 2011
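The end-point volume quoted in the technical notes can be checked with a short calculation. The snippet below simply illustrates the stoichiometry (Ba(OH)2 + H2SO4 → BaSO4 + 2H2O, a 1:1 mole ratio); it is not part of the original procedure.

    # Expected titre for 50 cm3 of 0.10 M barium hydroxide against 1.0 M sulfuric acid.
    vol_baoh2_cm3 = 50.0
    conc_baoh2 = 0.10          # mol/dm3
    conc_h2so4 = 1.0           # mol/dm3

    moles_baoh2 = conc_baoh2 * vol_baoh2_cm3 / 1000   # Ba(OH)2 and H2SO4 react 1:1
    titre_cm3 = moles_baoh2 / conc_h2so4 * 1000

    print(f"Expected end-point: {titre_cm3:.1f} cm3 of acid")   # ~5 cm3, inside the 4-6 cm3 check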
It is easier to solve a quadratic equation when it is in standard form because you compute the solution with a, b, and c. However, if you need to graph a quadratic function, or parabola, the process is streamlined when the equation is in vertex form.

Factor Coefficient: factor the coefficient a from the first two terms of the standard form equation and place it outside of the parentheses. Factoring standard form quadratic equations involves finding a pair of numbers that add up to b and multiply to ac.

The most important points and skills for Section 5: most students are already very familiar with quadratic functions, standard form and factoring; however, completing the square is quite difficult for many students. When you do examples of completing the square, avoid the temptation to cut corners; always describe your calculation methods in full and put all steps in your boardwork. Introduce the third way of writing a quadratic function: make sure to define the vertex and axis of symmetry as well as how these may be found in an equation in vertex form. Feel free to use it or distribute to students if you would like. Give the students two examples to try in their groups. In the first example, you could start with a graph that shows two x-intercepts and the coordinates of one other point, such as in Section 5. For the second example, you could start with a graph that shows the coordinates of the vertex and the coordinates of one other point on the graph, such as in Section 5. Circulate as the groups work. Some questions that you might find helpful to ask the students are: briefly, in their groups, ask students to discuss how the graph of g(x) is related to the graph of f(x) using terminology for transformations, and ask them to write g(x) in terms of f(x). Note that because h(x) is quadratic, the graph of h(x) should be related to f(x) through various transformations. You can use this opportunity to note that in standard form, it is harder to determine how the graph of h(x) is related to f(x). Segue into a mini-lecture about completing the square and put h(x) into vertex form. Rather, we want the students to learn the algorithmic approach that is used in Examples 1 and 2, and also Example 2.

Look at the number preceding the x-term. Divide this number by 2 and then square that value. Add and subtract the value you computed in Step 2 in between the x-term and constant term. Group together the first three terms to have a perfect square. Combine the constant terms left over outside the perfect square. Distribute the coefficient you factored out in Step 1. The most common mistakes are (i) not factoring a out of everything in Step 1 and (ii) not distributing a correctly in Step 6. From the vertex form now found for h(x), students should be able to easily comment about how the graph of h(x) relates to the graph of f(x). Pick a few exercises from Chapter 5 Tools, Problems 26, for the students to try. A problem like 23 can be a good one to do since the students must factor out a negative number. Have the students work in groups on a problem like the following: But how do you convert from the general form to the useful form? By completing the square. Find the focus equation of the ellipse given by .

Vertex Form of Parabolas. Date_____ Period____ Use the information provided to write the vertex form equation of each parabola. 1) y = x^2 + 16x + 71 2) y = x^2 − 2x − 5 3) y. Write the equation of the quadratic function whose graph is shown at the right. Explain your reasoning.
Vertex form of a quadratic function (Transformations of Quadratic Functions): † Once in this form, the vertex is given by (h, k). † The parabola opens up if a > 0 and opens down if a < 0 for a quadratic function in standard form. This calculator will find either the equation of the hyperbola (standard form) from the given parameters, or the center, vertices, co-vertices, foci, asymptotes, focal parameter, eccentricity, (semi)major axis length, (semi)minor axis length, x-intercepts, and y-intercepts of the entered hyperbola. Learn what the other one is and how it comes into play when writing standard-form equations for parabolas. We now have our quadratic equation in vertex form.
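To make the six-step procedure above concrete, here is a minimal Python sketch (the function name and the use of the worksheet's first exercise are illustrative choices, not part of the source notes). It converts y = ax² + bx + c into vertex form y = a(x − h)² + k and reports the vertex.

def to_vertex_form(a, b, c):
    # Completing the square: y = a(x - h)^2 + k
    h = -b / (2 * a)        # from Steps 1-2: half of the factored x-coefficient, sign flipped
    k = c - a * h ** 2      # Steps 3-6: the constant left over outside the perfect square
    return h, k

# Worksheet exercise 1): y = x^2 + 16x + 71  ->  y = (x + 8)^2 + 7, vertex (-8, 7)
h, k = to_vertex_form(1, 16, 71)
print(f"y = a(x - ({h}))^2 + ({k}), vertex at ({h}, {k})")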
What is a black hole? Do they really exist? How do they form? How are they related to stars? What would happen if you fell into one? How do you see a black hole if it emits no light? What’s the difference between a black hole and a really dark star? Could a particle accelerator create a black hole? Can a black hole also be a wormhole or a time machine? In Astro 101: Black Holes, you will explore the concepts behind black holes. Using the theme of black holes, you will learn the basic ideas of astronomy, relativity, and quantum physics. After completing this course, you will be able to: • Describe the essential properties of black holes. • Explain recent black hole research using plain language and appropriate analogies. • Compare black holes in popular culture to modern physics to distinguish science fact from science fiction. • Describe the application of fundamental physical concepts including gravity, special and general relativity, and quantum mechanics to reported scientific observations. • Recognize different types of stars and distinguish which stars can potentially become black holes. • Differentiate types of black holes and classify each type as observed or theoretical. • Characterize formation theories associated with each type of black hole. • Identify different ways of detecting black holes, and appropriate technologies associated with each detection method. • Summarize the puzzles facing black hole researchers in modern science.
Given the circle k(S, 6 cm), calculate the distance of a chord from the center S when the length of the chord is t = 10 cm. Next similar examples:
- Circle chord: What is the length d of a chord of a circle of diameter 36 m, if its distance from the center of the circle is 16 m?
- A circular cone of height 15 cm and volume 10598 cm³ is cut, at a third of its height (measured from the bottom), by a plane parallel to the base. Calculate the radius and circumference of the circular cross-section.
- A rhombus has side length a = 29 cm. The points where the inscribed circle touches its sides divide each side into sections a1 = 14 cm and a2 = 15 cm. Calculate the radius r of the circle and the lengths of the diagonals of the rhombus.
- Circle arc: A circular segment has a perimeter of 41.89 m and an area of 251.33 m². Calculate the radius of the circle and the size of the central angle.
- Cone A2V: The lateral surface of a cone, unrolled into the plane, is a circular sector with a central angle of 126° and an area of 415 dm². Calculate the volume of the cone.
- The areas of two circles are in the ratio 2:14. The larger circle has diameter 14. Calculate the radius of the smaller circle.
- Curved surface area (CSA): A cylinder 5 cm high has a base radius of 7/2 cm. Calculate the curved surface area.
- MO SK/CZ Z9–I–3: John's ball rolled into the pool and floated in the water. Its highest point was 2 cm above the surface, and the diameter of the circle marked by the water level on the surface of the ball was 8 cm. Determine the diameter of John's ball.
- A washing machine drum washes at 54 RPM. The washing machine's motor pulley has a diameter of 5 cm. What must the diameter of the drum pulley be when the motor runs at 301 RPM?
- A rectangle is 31 cm long and 28 cm wide. Determine the radius of the circle circumscribing the rectangle.
- The clock shows 12 o'clock. After how many minutes will the angle between the hour and minute hands be 90°? Consider the continuous movement of both hands.
- How many times a day do the hands of a clock overlap?
- Square and circles: A square with sides of 61 mm has a circumscribed and an inscribed circle. Determine the radii of both circles.
- In a rectangle with sides 3 and 10, mark the diagonal. What is the probability that a randomly selected point within the rectangle is closer to the diagonal than to any side of the rectangle?
- Two meshing gears have a transfer ratio of 2:3. The centres of the gears are 82 cm apart. What are the radii of the gears?
- The lateral surface of each of two cylinders is the same rectangle, 50 cm × 11 cm. Which cylinder has the larger volume, and by how much?
- Convert 270° to radians. Write the result as a multiple of π.
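A quick worked solution for the chord problem at the top (a sketch added here; the source page leaves the solution to its readers): the half-chord, the distance from the center, and the radius form a right triangle, so \(d = \sqrt{r^2 - (t/2)^2} = \sqrt{6^2 - 5^2} = \sqrt{11} \approx 3.32\ \text{cm}\).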
ONE SAMPLE T-TEST (INTRODUCTION) The text below is a transcript of the video. TRANSCRIPT OF VIDEO: The one sample T test is used to figure out if the mean of a population is what we think it is. Let's take a look at how it works and why. We use the one sample T test when we are interested in testing a population mean. The basic scenario is that we want to know if the population mean is a certain value; let's call it \(\mu_0\). We can't measure the entire population, because that's impractical, so we take a random sample from it instead. We then calculate the sample mean, which we can use as an estimate of the population mean, but sampling error makes it inaccurate. The question the one sample T test allows us to answer is: what is the probability that the population mean is \(\mu_0\), based on the sample mean and the observed variation? Our approach will be to use a confidence interval to test for the mean of the population. If you don't remember what confidence intervals are, then you can watch our confidence interval video, which is about calculating confidence intervals and what they represent. We compare the confidence interval we get from our sample to the hypothesized population mean \(\mu_0\). We're showing the confidence interval as a 95% confidence interval for now, because that's the most common confidence interval used, but that's not the only one possible, as we'll see in a bit. If \(\mu_0\) is inside the confidence interval, then there is a lack of evidence that the population mean is different from \(\mu_0\). That's the sort of result we would expect to see with a reasonable probability if the population mean were equal to \(\mu_0\). On the other hand, if \(\mu_0\) is outside of the confidence interval we calculate, that provides evidence that the population mean is different from \(\mu_0\). That's the sort of result we would rarely expect to see: the probability of the confidence interval not including \(\mu_0\), if that is the population mean, is very low. As mentioned, the T test is a comparison of the confidence interval to the hypothesized population mean \(\mu_0\). In practice this isn't exactly how we do a t test; the test does it slightly indirectly. We could calculate the confidence interval and see if it includes \(\mu_0\). Instead, we calculate the width of half of the confidence interval and compare it to the distance between the sample mean and \(\mu_0\). If that distance is larger than half the confidence interval, then \(\mu_0\) would lie outside of the confidence interval. Comparing the distance between the sample mean and \(\mu_0\) to half the confidence interval is therefore equivalent to seeing whether \(\mu_0\) would be inside the confidence interval. As I mentioned, we generally do this comparison indirectly, however. Looking at the equations to the right, we can see that we are interested in when the distance is larger than half the confidence interval. Calculating the distance is easy: it's just the sample mean minus \(\mu_0\). Next up is the size of half of the confidence interval. That will be the value from our T distribution corresponding to the Alpha we desire and the degrees of freedom in our sample, multiplied by the standard error. Again, if you don't remember how to do this, I recommend checking out the confidence interval video on this channel. Then we compare these two values.
And we're interested in whether the sample mean minus \(\mu_0\) is larger than our T value times the standard error. We can rearrange this equation slightly. Now the question is whether the fraction on the left, sample mean minus \(\mu_0\) divided by the standard error, is larger than the T value corresponding to our Alpha value and degrees of freedom. We generally call the fraction on the left our T calculated value, and we are comparing it to a T critical value. I've written this all out as if we're looking to see whether the T calculated value is larger than the T critical value. The other side of the confidence interval would be tested by seeing whether the T calculated value is less than the negative version of the T critical value. Diagramming it out, the T test is an indirect comparison of the confidence interval to the distance between the sample mean and \(\mu_0\). We get our T calculated value from the sample mean minus \(\mu_0\), divided by the standard error. Then we compare that value to a critical T value corresponding to the Alpha value for our confidence interval and the degrees of freedom for our sample. If the T calculated value is larger in magnitude than the T critical value, then \(\mu_0\) is not within the confidence interval. If the T calculated value is smaller in magnitude than the T critical value, then \(\mu_0\) is within the confidence interval. This figure illustrates the scenario in a slightly different way. If we're doing a T test using 16 values, we will use 15 degrees of freedom when we look at our T distribution. Each of the columns in our table corresponds to a different confidence interval. In this case, to do the test with a 95% confidence interval, we would go to our table and look for Alpha equals 0.025 so that we have 2.5% on each side outside of the confidence interval. Then our critical values become 2.131 and negative 2.131. If our T calculated value is larger than positive 2.131 or less than negative 2.131, then \(\mu_0\) would be outside of that confidence interval. When that happens, it's very unlikely that this sample comes from a population that has a mean of \(\mu_0\). On the other hand, if our T calculated value is between negative 2.131 and positive 2.131, that's exactly what we would expect to happen most of the time if the population mean really is \(\mu_0\). OK, here is the formal procedure for what's called a two-tailed t-test. The two tails refer to the fact that we're testing on both sides of our confidence interval. First, since this is a statistical test, we will create a null hypothesis and an alternative hypothesis. The null hypothesis will be that the population mean is equal to \(\mu_0\). Think of this as our baseline default assumption that we will tend to accept as probably true unless we reject it. The alternative hypothesis will be that the population mean is not equal to \(\mu_0\). Think of this as the result we would get if we decided that the null hypothesis was not true and we rejected it. This step of specifying a null and an alternative hypothesis is the first step in every statistical test. Next, in order to figure out which of our two hypotheses has more support, we will calculate our T calculated value using the equation shown. Notice that the variance and sample size for our sample are here; they are what's used to figure out what the standard error is. Our next step is to compare our T calculated value to various T critical values which correspond to the widths of those confidence intervals.
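As a concrete sketch of this comparison (the sample numbers below are made up, and the scipy library is assumed to be available; this is not part of the original transcript):

from math import sqrt
from scipy import stats

sample_mean, sample_sd, n = 5.8, 1.2, 16   # hypothetical sample summary (n = 16, so df = 15)
mu0, alpha = 5.0, 0.05                     # hypothesized mean and two-tailed alpha

se = sample_sd / sqrt(n)                        # standard error of the mean
t_calc = (sample_mean - mu0) / se               # T calculated
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # T critical, about 2.131 for df = 15

print(t_calc, t_crit, abs(t_calc) > t_crit)     # True here, so mu0 lies outside the 95% CI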
When we look at a table of T values, each of those columns corresponds to a different Alpha value representing the area outside that central confidence interval. We usually try to identify the smallest Alpha value, which corresponds to the confidence interval with the highest degree of confidence, that would result in our calculated value being larger than the critical value. This tells us how low the probability is that sampling error would result in the sample mean and standard deviation that we see, if our null hypothesis was true. This is the probability of seeing a T calculated value as extreme as we do, if the null hypothesis is true, which is called the P value. Finally, we decide to reject the null hypothesis or fail to reject the null hypothesis based on the P value we obtained. If the P value is very small, we will usually reject the null hypothesis; if it is not very small, we will usually fail to reject the null hypothesis. Remember that the null hypothesis was that the population mean is equal to \(\mu_0\), which is consistent with non-small P values. That's because the T calculated value we got is what we would expect to see all the time if the null hypothesis is true. It's the alternative hypothesis, that the population mean is not equal to \(\mu_0\), which is what would give us small P values. That's because the T calculated value we would get is not what we would expect to see if the null hypothesis is true. This awkward approach of deciding whether or not to accept or reject the null hypothesis based on probabilities is the standard way that most statistical tests are done. There are a couple things to keep in mind. First, we are not proving anything; we are making a decision about whether we think the null hypothesis or alternative hypothesis is true based on probability. Second, if we don't reject the null hypothesis, that is not the same thing as providing lots of support for it. What it means is that we looked for evidence against the null hypothesis and didn't find convincing evidence. For this reason, statistical purists will always say that large P values cause you to "fail to reject a null hypothesis", never "accept a null hypothesis". Nevertheless, people use the phrase "accept the null hypothesis" all the time, but they shouldn't. Let's look a little bit more at what a P value represents. P values are always in the context of a null hypothesis and an alternative hypothesis and some sort of calculation we have performed. In the case of the one sample T test, it's the probability of seeing a T calculated value as extreme as we do if the null hypothesis is true. Technically, the P value is the smallest Alpha value you could choose and still reject the null hypothesis with your data. Conceptually, the P value is the probability of seeing the sample data you do if the null hypothesis is correct. That's why, when the P value is very small, you would think seriously about rejecting your null hypothesis. Let's look at this again because it bears repeating. When learning statistics, one of the biggest sources of confusion is what a P value represents. The conceptual definition of a P value is the probability of seeing the sample data you do if the null hypothesis is correct. In the scenario in this video, this is equivalent to: the P value is the probability of obtaining the T calculated statistic (or more extreme) that you did if the null hypothesis is correct. The third time's the charm.
The P value of a test is the probability that the value you see could arise due to sampling error if the null hypothesis is true. If the P value is small, usually less than 0.05, we reject the null hypothesis. If the P value is not small, larger than 0.05, we fail to reject the null hypothesis. What's written here applies to almost every statistical test and is the most useful concept in all of statistics. OK, so we've thought about the concepts; what is the practical procedure for doing a one sample T test? First, we create a null hypothesis and an alternative hypothesis. For this test, the null hypothesis is that the population mean is equal to \(\mu_0\) and the alternative hypothesis is that it is not equal to \(\mu_0\). Then we calculate our T calculated value using the equation shown and compare it to various T critical values. Then we determine the P value. For example, if we got a T calculated value of 2.8 for 15 degrees of freedom, what would our P value be? If we have access to a table of T critical values, like the ones on the StatsExamples website, then we would look in the row for 15 degrees of freedom and look at the values in the columns to determine which ones bracket the 2.8. In this case the 2.8 is larger than the critical value corresponding to an Alpha value of 0.01 but less than the critical value for an Alpha value of 0.005. Keeping in mind that we have to double these Alpha values because we are looking at both sides of the confidence interval, this would tell us that our P value is less than 0.02 but larger than 0.01. If we were using a computer, it could provide us with an exact probability of getting a T calculated value as large as 2.8 or as small as negative 2.8. And that probability is 0.013, which we can see is larger than 0.01 and smaller than 0.02. In this example, because we have a small P value, we would use it to reject the null hypothesis. If we think about the null hypothesis, the population mean being equal to \(\mu_0\) is not consistent with a P value of 0.013, because that is less than 5%, which is the usual threshold for how unlikely things have to be for us to make a decision to reject the null hypothesis. If we think about the alternative hypothesis, the population mean not being equal to \(\mu_0\) is consistent with a P value of 0.013, because it's exactly the sort of thing that would result in a large T calculated value. In the example we just looked at, I used a probability of 5% as the threshold for making a decision about the null and alternative hypotheses. The use of P equals 0.05, that is 5%, as a threshold for deciding to reject the null hypothesis is arbitrary, but it is the standard within statistics. In fact, there is a specific technical term to indicate when this occurs. We use the phrase "statistically significant" when a statistical test has returned a P value less than the threshold and the null hypothesis has been rejected. As mentioned, this threshold is almost always 0.05. If our result from some test is that a sample mean of 18 is significantly different from a \(\mu_0\) of 20, we would reject the null hypothesis that the population mean is 20. We would conclude or decide, not prove, that the population mean is some other value. If our result from some test is that a sample mean of 18 is NOT significantly different from a \(\mu_0\) of 20, we would fail to reject the null hypothesis that the population mean is 20. We would lack the evidence to conclude or decide that the population mean is some value other than 20.
It's not that we have strong evidence that it is 20, but that we looked for evidence that it wasn't and didn't find any. One last point about the one sample t-test. The one sample T test can also be one-tailed instead of two-tailed, as we've been looking at. For example, the null hypothesis could be that the population mean is less than or equal to \(\mu_0\), and the alternative hypothesis would be that the population mean is larger than \(\mu_0\). In this situation, when we calculate our T calculated value, we would only be interested in the positive values and whether they are larger than the critical value corresponding to an Alpha of 0.05. Alternatively, the null hypothesis could be that the population mean is larger than or equal to \(\mu_0\), and the alternative hypothesis would be that the population mean is less than \(\mu_0\). In this situation, when we calculate our T calculated value, we would only be interested in the negative values and whether they are less than the critical value corresponding to an Alpha of 0.05. We have to be careful when doing one-tailed tests because the critical values are not as large, so we are able to reject our null hypothesis more easily. We should only do a one-tailed test under two conditions. First, in circumstances in which we only care about one direction. There are some situations when we only care about whether a population mean is larger or smaller than some particular value, not just different from it. Second, we should usually only do a one-tailed test when we have an a priori reason to test in only one direction. In other words, we have outside information that leads us to test in only one direction. We cannot look at our data first and then choose one direction or another to test, because that's essentially doing a two-tailed test but using the T critical values for a one-tailed test, which would lead to increased type one errors. Those are the ones where we reject a true null hypothesis. Check out our video about type one and type two errors if you want to know more about that terminology. In general, my advice is that unless we really know what we're doing, we should always do two-tailed tests to make sure we don't reject null hypotheses when we shouldn't. The one sample t-test is the preferred method for testing a hypothesized population mean. In the real world, however, this test is rarely done, because we are usually more interested in comparing two groups to each other than comparing one group to some hypothetical value. Those tests are two-sample t-tests and are widely used, but to understand that method it really helps to understand the one sample t-test first. Check out the StatsExamples website for more examples of statistical tests and links to other videos. This information is intended for the greater good; please use statistics responsibly.
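As a quick numerical check of the worked example above (T calculated = 2.8 with 15 degrees of freedom), here is a short sketch that is not part of the transcript; it assumes the scipy library is available:

from scipy import stats

t_calc, df = 2.8, 15
p_two_tailed = 2 * stats.t.sf(t_calc, df)   # both tails: P(T >= 2.8) plus P(T <= -2.8)
print(round(p_two_tailed, 3))               # about 0.013, matching the value quoted above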
In physics, spacetime is any mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams can be used to visualize relativistic effects, such as why different observers perceive differently where and when events occur. Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe (its spatial expression in terms of coordinates, distances, and directions) was independent of one-dimensional time. However, in 1905, Albert Einstein based his seminal work on special relativity on two postulates: (1) The laws of physics are invariant (i.e., identical) in all inertial systems (i.e., non-accelerating frames of reference); (2) The speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. The logical consequence of taking these postulates together is the inseparable joining together of the four dimensions, hitherto assumed as independent, of space and time. Many counterintuitive consequences emerge: in addition to being independent of the motion of the light source, the speed of light is the same regardless of the frame of reference in which it is measured; the distances and even the temporal ordering of pairs of events change when measured in different inertial frames of reference (this is the relativity of simultaneity); and the linear additivity of velocities no longer holds true. Einstein framed his theory in terms of kinematics (the study of moving bodies). His theory was a breakthrough advance over Lorentz's 1904 theory of electromagnetic phenomena and Poincaré's electrodynamic theory. Although these theories included equations identical to those that Einstein introduced (i.e. the Lorentz transformation), they were essentially ad hoc models proposed to explain the results of various experiments—including the famous Michelson–Morley interferometer experiment—that were extremely difficult to fit into existing paradigms. In 1908, Hermann Minkowski—once one of the math professors of a young Einstein in Zürich—presented a geometric interpretation of special relativity that fused time and the three spatial dimensions of space into a single four-dimensional continuum now known as Minkowski space. A key feature of this interpretation is the formal definition of the spacetime interval. Although measurements of distance and time between events differ for measurements made in different reference frames, the spacetime interval is independent of the inertial frame of reference in which they are recorded. Minkowski's geometric interpretation of relativity was to prove vital to Einstein's development of his 1915 general theory of relativity, wherein he showed how mass and energy curve this flat spacetime into a pseudo-Riemannian manifold. Non-relativistic classical mechanics treats time as a universal quantity of measurement which is uniform throughout space and which is separate from space. Classical mechanics assumes that time has a constant rate of passage that is independent of the state of motion of an observer, or indeed of anything external. Furthermore, it assumes that space is Euclidean, which is to say, it assumes that space follows the geometry of common sense. In the context of special relativity, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer.
General relativity, in addition, provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field. In ordinary space, a position is specified by three numbers, known as dimensions. In the Cartesian coordinate system, these are called x, y, and z. A position in spacetime is called an event, and requires four numbers to be specified: the three-dimensional location in space, plus the position in time (Fig. 1). Spacetime is thus four dimensional. An event is something that happens instantaneously at a single point in spacetime, represented by a set of coordinates x, y, z and t. The word "event" used in relativity should not be confused with the use of the word "event" in normal conversation, where it might refer to an "event" as something such as a concert, sporting event, or a battle. These are not mathematical "events" in the way the word is used in relativity, because they have finite durations and extents. Unlike the analogies used to explain events, such as firecrackers or lightning bolts, mathematical events have zero duration and represent a single point in spacetime. The path of a particle through spacetime can be considered to be a succession of events. The series of events can be linked together to form a line which represents a particle's progress through spacetime. That line is called the particle's world line. Mathematically, spacetime is a manifold, which is to say, it appears locally "flat" near each point in the same way that, at small enough scales, a globe appears flat. An extremely large scale factor, \(c\) (conventionally called the speed of light), relates distances measured in space with distances measured in time. The magnitude of this scale factor (nearly 300,000 km in space being equivalent to 1 second in time), along with the fact that spacetime is a manifold, implies that at ordinary, non-relativistic speeds and at ordinary, human-scale distances, there is little that humans might observe which is noticeably different from what they might observe if the world were Euclidean. It was only with the advent of sensitive scientific measurements in the mid-1800s, such as the Fizeau experiment and the Michelson–Morley experiment, that puzzling discrepancies began to be noted between observation and predictions based on the implicit assumption of Euclidean space. In special relativity, an observer will, in most cases, mean a frame of reference from which a set of objects or events are being measured. This usage differs significantly from the ordinary English meaning of the term. Reference frames are inherently nonlocal constructs, and according to this usage of the term, it does not make sense to speak of an observer as having a location. In Fig. 1‑1, imagine that the frame under consideration is equipped with a dense lattice of clocks, synchronized within this reference frame, that extends indefinitely throughout the three dimensions of space. Any specific location within the lattice is not important. The latticework of clocks is used to determine the time and position of events taking place within the whole frame. The term observer refers to the entire ensemble of clocks associated with one inertial frame of reference. In this idealized case, every point in space has a clock associated with it, and thus the clocks register each event instantly, with no time delay between an event and its recording.
A real observer, however, will see a delay between the emission of a signal and its detection due to the speed of light. To synchronize the clocks, in the data reduction following an experiment, the time when a signal is received will be corrected to reflect its actual time were it to have been recorded by an idealized lattice of clocks. In many books on special relativity, especially older ones, the word "observer" is used in the more ordinary sense of the word. It is usually clear from context which meaning has been adopted. Physicists distinguish between what one measures or observes (after one has factored out signal propagation delays), versus what one visually sees without such corrections. Failure to understand the difference between what one measures/observes versus what one sees is the source of much error among beginning students of relativity. By the mid-1800s, various experiments such as the observation of the Arago spot (a bright point at the center of a circular object's shadow due to diffraction) and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to a corpuscular theory. Propagation of waves was then assumed to require the existence of a medium which waved: in the case of light waves, this was considered to be a hypothetical luminiferous aether.[note 1] However, the various attempts to establish the properties of this hypothetical medium yielded contradictory results. For example, the Fizeau experiment of 1851 demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction. Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light. The famous Michelson–Morley experiment of 1887 (Fig. 1‑2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration. George Francis FitzGerald in 1889 and Hendrik Lorentz in 1892 independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson–Morley experiment. (No length changes occur in directions transverse to the direction of motion.) By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later (i.e. the Lorentz transform), but with a fundamentally different interpretation. As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter. Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena. However, Lorentz considered local time to be only an auxiliary mathematical tool, a trick as it were, to simplify the transformation from one system into another.
Other physicists and mathematicians at the turn of the century came close to arriving at what is currently known as spacetime. Einstein himself noted that with so many people unraveling separate pieces of the puzzle, "the special theory of relativity, if we regard its development in retrospect, was ripe for discovery in 1905." An important example is Henri Poincaré, who in 1898 argued that the simultaneity of two events is a matter of convention.[note 2] In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks, by applying an explicitly operational definition of clock synchronization assuming constant light speed.[note 3] In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity, and in 1905/1906 he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity. While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional space-time by defining various four-vectors, namely four-position, four-velocity, and four-force. He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world". Furthermore, even as late as 1909, Poincaré continued to believe in the dynamical interpretation of the Lorentz transform. For these and other reasons, most historians of science argue that Poincaré did not invent what is now called special relativity. In 1905, Einstein introduced special relativity (albeit without using the techniques of the spacetime formalism) in its modern understanding as a theory of space and time. While his results are mathematically equivalent to those of Lorentz and Poincaré, it was Einstein who showed that the Lorentz transformations are not the result of interactions between matter and aether, but rather concern the nature of space and time itself. He obtained all of his results by recognizing that the entire theory can be built upon two postulates: the principle of relativity and the principle of the constancy of light speed. Einstein performed his analyses in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. His seminal work introducing the subject was filled with vivid imagery involving the exchange of light signals between clocks in motion, careful measurements of the lengths of moving rods, and other such examples.[note 4] In addition, Einstein in 1905 superseded previous attempts at an electromagnetic mass-energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass-energy equivalence, Einstein showed, in addition, that the gravitational mass of a body is proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime, in the further development of general relativity Einstein fully incorporated the spacetime formalism.
When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had with Minkowski, seeking to be Minkowski's student/collaborator: "I went to Cologne, met Minkowski and heard his celebrated lecture 'Space and Time' delivered on 2 September 1908. […] He told me later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently but did not publish them because he wished first to work out the mathematical structure in all its splendor. He never made a priority claim and always gave Einstein his full share in the great discovery." Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. However, it is not at all clear when Minkowski began to formulate the geometric formulation of special relativity that was to bear his name, or to which extent he was influenced by Poincaré's four-dimensional interpretation of the Lorentz transformation. Nor is it clear if he ever fully appreciated Einstein's critical contribution to the understanding of the Lorentz transformations, thinking of Einstein's work as being an extension of Lorentz's work. On November 5, 1907 (a little more than a year before his death), Minkowski introduced his geometric interpretation of spacetime in a lecture to the Göttingen Mathematical society with the title The Relativity Principle (Das Relativitätsprinzip).[note 5] On September 21, 1908, Minkowski presented his famous talk, Space and Time (Raum und Zeit), to the German Society of Scientists and Physicians. The opening words of Space and Time include Minkowski's famous statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1‑4), and included a remarkable demonstration that the concept of the invariant interval (discussed below), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.[note 6] The spacetime concept and the Lorentz group are closely connected to certain types of sphere, hyperbolic, or conformal geometries and their transformation groups already developed in the 19th century, in which invariant intervals analogous to the spacetime interval are used.[note 7] Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital, and in 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity. Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime.
Spacetime in special relativity
In three dimensions, the distance \(d\) between two points can be defined using the Pythagorean theorem: \(d^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2\). Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both (assuming that they are measuring using the same units). The distance is "invariant". In special relativity, however, the distance between two points is no longer the same if measured by two different observers when one of the observers is moving, because of Lorentz contraction. The situation is even more complicated if the two points are separated in time as well as in space. For example, if one observer sees two events occur at the same place, but at different times, a person moving with respect to the first observer will see the two events occurring at different places, because (from their point of view) they are stationary, and the position of the event is receding or approaching. Thus, a different measure must be used to measure the effective "distance" between two events. In four-dimensional spacetime, the analog to distance is the interval. Although time comes in as a fourth dimension, it is treated differently than the spatial dimensions. Minkowski space hence differs in important respects from four-dimensional Euclidean space. The fundamental reason for merging space and time into spacetime is that space and time are separately not invariant, which is to say that, under the proper conditions, different observers will disagree on the length of time between two events (because of time dilation) or the distance between the two events (because of length contraction). But special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure time and distance carefully will find the same spacetime interval between any two events. Suppose an observer measures two events as being separated in time by \(\Delta t\) and by a spatial distance \(\Delta x\). Then the squared spacetime interval \(s^2\) between the two events that are separated by a distance \(\Delta x\) in space and by \(\Delta t\) in the time coordinate is \(s^2 = (c\Delta t)^2 - (\Delta x)^2\), or for three space dimensions, \(s^2 = (c\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2\). The constant \(c\), the speed of light, converts the units used to measure time (seconds) into units used to measure distance (meters). Note on nomenclature: Although for brevity one frequently sees interval expressions written without deltas, including in most of the following discussion, it should be understood that in general \(x\) means \(\Delta x\), etc. We are always concerned with differences of spatial or temporal coordinate values belonging to two events, and since there is no preferred origin, single coordinate values have no essential meaning. The equation above is similar to the Pythagorean theorem, except with a minus sign between the \((c\Delta t)^2\) and the \((\Delta x)^2\) terms. Note also that the spacetime interval is the quantity \(s^2\), not \(s\) itself. The reason is that unlike distances in Euclidean geometry, intervals in Minkowski spacetime can be negative. Rather than deal with square roots of negative numbers, physicists customarily regard \(s^2\) as a distinct symbol in itself, rather than the square of something. Because of the minus sign, the spacetime interval between two distinct events can be zero. If \(s^2\) is positive, the spacetime interval is timelike, meaning that the two events are separated by more time than space. If \(s^2\) is negative, the spacetime interval is spacelike, meaning that the two events are separated by more space than time. Spacetime intervals are zero when \((\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2 = (c\Delta t)^2\).
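As a concrete illustration of the interval and its sign (a sketch with made-up event coordinates, not part of the source text), the classification can be written directly from the formula above:

C = 299_792_458.0   # speed of light in m/s

def interval_squared(dt, dx, dy=0.0, dz=0.0):
    # s^2 = (c*dt)^2 - dx^2 - dy^2 - dz^2, the (+, -, -, -) convention used above
    return (C * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

def classify(s2):
    if s2 > 0:
        return "timelike (more time than space)"
    if s2 < 0:
        return "spacelike (more space than time)"
    return "lightlike (null)"

# Two events 1 microsecond and 100 m apart: light could easily connect them.
s2 = interval_squared(dt=1e-6, dx=100.0)
print(s2, classify(s2))   # positive, so the separation is timelike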
In other words, the spacetime interval between two events on the world line of something moving at the speed of light is zero. Such an interval is termed lightlike or null. A photon arriving in our eye from a distant star will not have aged, despite having (from our perspective) spent years in its passage. A spacetime diagram is typically drawn with only a single space and a single time coordinate. Fig. 2‑1 presents a spacetime diagram illustrating the world lines (i.e. paths in spacetime) of two photons, A and B, originating from the same event and going in opposite directions. In addition, C illustrates the world line of a slower-than-light-speed object. The vertical time coordinate is scaled by \(c\) so that it has the same units (meters) as the horizontal space coordinate. Since photons travel at the speed of light, their world lines have a slope of ±1. In other words, every meter that a photon travels to the left or right requires approximately 3.3 nanoseconds of time. Note on nomenclature: There are two sign conventions in use in the relativity literature; these sign conventions are associated with the metric signatures (+ − − −) and (− + + +). A minor variation is to place the time coordinate last rather than first. Both conventions are widely used within the field of study. In comparing measurements made by relatively moving observers in different reference frames, it is useful to work with the frames in a standard configuration. In Fig. 2‑2, two Galilean reference frames (i.e. conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime") belongs to a second observer O′.
- The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
- Frame S′ moves in the x-direction of frame S with a constant velocity v as measured in frame S.
- The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Fig. 2‑3a redraws Fig. 2‑2 in a different orientation. Fig. 2‑3b illustrates a spacetime diagram from the viewpoint of observer O. Since S and S′ are in standard configuration, their origins coincide at times t = 0 in frame S and t′ = 0 in frame S′. The ct′ axis passes through the events in frame S′ which have x′ = 0. But the points with x′ = 0 are moving in the x-direction of frame S with velocity v, so that they are not coincident with the ct axis at any time other than zero. Therefore, the ct′ axis is tilted with respect to the ct axis by an angle θ given by \(\tan\theta = v/c\). The x′ axis is also tilted with respect to the x axis. To determine the angle of this tilt, we recall that the slope of the world line of a light pulse is always ±1. Fig. 2‑3c presents a spacetime diagram from the viewpoint of observer O′. Event P represents the emission of a light pulse at x′ = 0, ct′ = −a. The pulse is reflected from a mirror situated a distance a from the light source (event Q), and returns to the light source at x′ = 0, ct′ = a (event R). The same events P, Q, R are plotted in Fig. 2‑3b in the frame of observer O. The light paths have slopes = 1 and −1, so that △PQR forms a right triangle. Since OP = OQ = OR, the angle between x′ and x must also be θ. While the rest frame has space and time axes that meet at right angles, the moving frame is drawn with axes that meet at an acute angle. The frames are actually equivalent.
The asymmetry is due to unavoidable distortions in how spacetime coordinates can map onto a Cartesian plane, and should be considered no stranger than the manner in which, on a Mercator projection of the Earth, the relative sizes of land masses near the poles (Greenland and Antarctica) are highly exaggerated relative to land masses near the Equator. In Fig. 2-4, event O is at the origin of a spacetime diagram, and the two diagonal lines represent all events that have zero spacetime interval with respect to the origin event. These two lines form what is called the light cone of the event O, since adding a second spatial dimension (Fig. 2‑5) makes the appearance that of two right circular cones meeting with their apices at O. One cone extends into the future (t > 0), the other into the past (t < 0). A light (double) cone divides spacetime into separate regions with respect to its apex. The interior of the future light cone consists of all events that are separated from the apex by more time (temporal distance) than necessary to cross their spatial distance at lightspeed; these events comprise the timelike future of the event O. Likewise, the timelike past comprises the interior events of the past light cone. So for timelike intervals, \(c\Delta t\) is greater than \(\Delta x\), making timelike intervals positive. The region exterior to the light cone consists of events that are separated from the event O by more space than can be crossed at lightspeed in the given time. These events comprise the so-called spacelike region of the event O, denoted "Elsewhere" in Fig. 2‑4. Events on the light cone itself are said to be lightlike (or null separated) from O. Because of the invariance of the spacetime interval, all observers will assign the same light cone to any given event, and thus will agree on this division of spacetime. The light cone has an essential role within the concept of causality. It is possible for a not-faster-than-light-speed signal to travel from the position and time of O to the position and time of D (Fig. 2‑4). It is hence possible for event O to have a causal influence on event D. The future light cone contains all the events that could be causally influenced by O. Likewise, it is possible for a not-faster-than-light-speed signal to travel from the position and time of A to the position and time of O. The past light cone contains all the events that could have a causal influence on O. In contrast, assuming that signals cannot travel faster than the speed of light, any event in the spacelike region (Elsewhere), such as B or C, can neither affect event O nor be affected by event O by means of such signalling. Under this assumption, any causal relationship between event O and any events in the spacelike region of a light cone is excluded.
Relativity of simultaneity
All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before-after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2‑4 was drawn from the reference frame of an observer moving at v = 0. From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O.
From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity. Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2‑3. The three events (A, B, C) are simultaneous from the reference frame of an observer moving at v = 0. From the reference frame of an observer moving at v = 0.3 c, the events appear to occur in the order C, B, A. From the reference frame of an observer moving at v = −0.5 c, the events appear to occur in the order A, B, C. The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant. A spacelike spacetime interval gives the same distance that an observer would measure if the events being measured were simultaneous to the observer. A spacelike spacetime interval hence provides a measure of proper distance, i.e. the true distance = \(\sqrt{-s^2}\). Likewise, a timelike spacetime interval gives the same measure of time as would be presented by the cumulative ticking of a clock that moves along a given world line. A timelike spacetime interval hence provides a measure of the proper time = \(\sqrt{s^2}/c\). In Euclidean space (having spatial dimensions only), the set of points equidistant (using the Euclidean metric) from some point form a circle (in two dimensions) or a sphere (in three dimensions). In (1+1)-dimensional Minkowski spacetime (having one temporal and one spatial dimension), the points at some constant spacetime interval away from the origin (using the Minkowski metric) form curves given by the two equations \((ct)^2 - x^2 = \pm s^2\), with \(s^2\) some positive real constant. These equations describe two families of hyperbolae in an x–ct spacetime diagram, which are termed invariant hyperbolae. In Fig. 2‑7a, each magenta hyperbola connects all events having some fixed spacelike separation from the origin, while the green hyperbolae connect events of equal timelike separation. Fig. 2‑7b reflects the situation in (1+2)-dimensional Minkowski spacetime (one temporal and two spatial dimensions) with the corresponding hyperboloids. Each timelike interval generates a hyperboloid of one sheet, while each spacelike interval generates a hyperboloid of two sheets. The (1+2)-dimensional boundary between space- and timelike hyperboloids, established by the events forming a zero spacetime interval to the origin, is made up by degenerating the hyperboloids to the light cone. In (1+1) dimensions the hyperbolae degenerate to the two grey 45°-lines depicted in Fig. 2‑7a. Note on nomenclature: The magenta hyperbolae, which cross the x axis, are termed timelike (in contrast to spacelike) hyperbolae because all "distances" to the origin along the hyperbola are timelike intervals.
Because of that, these hyperbolae represent actual paths that can be traversed by (constantly accelerating) particles in spacetime: between any two events on one hyperbola a causality relation is possible, because the inverse of the slope – representing the necessary speed – for all secants is less than \(c\). On the other hand, the green hyperbolae, which cross the ct axis, are termed spacelike, because all intervals along these hyperbolae are spacelike intervals: no causality is possible between any two points on one of these hyperbolae, because all secants represent speeds larger than \(c\).
Time dilation and length contraction
Fig. 2-8 illustrates the invariant hyperbola for all events that can be reached from the origin in a proper time of 5 meters (approximately 1.67×10−8 s). Different world lines represent clocks moving at different speeds. A clock that is stationary with respect to the observer has a world line that is vertical, and the elapsed time measured by the observer is the same as the proper time. For a clock traveling at 0.3 c, the elapsed time measured by the observer is 5.24 meters (1.75×10−8 s), while for a clock traveling at 0.7 c, the elapsed time measured by the observer is 7.00 meters (2.34×10−8 s). This illustrates the phenomenon known as time dilation. (These elapsed times are the proper time multiplied by the factor \(1/\sqrt{1 - v^2/c^2}\): 5 m × 1.048 ≈ 5.24 m at 0.3 c and 5 m × 1.400 ≈ 7.00 m at 0.7 c.) Clocks that travel faster take longer (in the observer frame) to tick out the same amount of proper time, and they travel further along the x–axis than they would have without time dilation. The measurement of time dilation by two observers in different inertial reference frames is mutual. If observer O measures the clocks of observer O′ as running slower in his frame, observer O′ in turn will measure the clocks of observer O as running slower. Length contraction, like time dilation, is a manifestation of the relativity of simultaneity. Measurement of length requires measurement of the spacetime interval between two events that are simultaneous in one's frame of reference. But events that are simultaneous in one frame of reference are, in general, not simultaneous in other frames of reference. Fig. 2-9 illustrates the motions of a 1 m rod that is traveling at 0.5 c along the x axis. The edges of the blue band represent the world lines of the rod's two endpoints. The invariant hyperbola illustrates events separated from the origin by a spacelike interval of 1 m. The endpoints O and B measured when t′ = 0 are simultaneous events in the S′ frame. But to an observer in frame S, events O and B are not simultaneous. To measure length, the observer in frame S measures the endpoints of the rod as projected onto the x-axis along their world lines. The projection of the rod's world sheet onto the x axis yields the foreshortened length OC. (not illustrated) Drawing a vertical line through A so that it intersects the x′ axis demonstrates that, even as OB is foreshortened from the point of view of observer O, OA is likewise foreshortened from the point of view of observer O′. In the same way that each observer measures the other's clocks as running slow, each observer measures the other's rulers as being contracted.
Mutual time dilation and the twin paradox
Mutual time dilation
Mutual time dilation and length contraction tend to strike beginners as inherently self-contradictory concepts.
The worry is that if observer A measures observer B's clocks as running slowly, simply because B is moving at speed v relative to A, then the principle of relativity requires that observer B likewise measures A's clocks as running slowly. This is an important question that "goes to the heart of understanding special relativity." Basically, A and B are performing two different measurements. In order to measure the rate of ticking of one of B's clocks, A must use two of his own clocks, the first to record the time when B's clock first ticked at B's first location, and the second to record the time when B's clock emitted its second tick at B's next location. Observer A needs two clocks because B is moving, so a grand total of three clocks are involved in the measurement. A's two clocks must be synchronized in A's frame. Conversely, B requires two clocks synchronized in her frame to record the ticks of A's clocks at the locations where A's clocks emitted their ticks. Therefore, A and B are performing their measurements with different sets of three clocks each. Since they are not doing the same measurement with the same clocks, there is no inherent necessity that the measurements be reciprocally "consistent" such that, if one observer measures the other's clock to be slow, the other observer measures the one's clock to be fast. With regard to mutual length contraction, Fig. 2‑9 illustrates that the primed and unprimed frames are mutually rotated by a hyperbolic angle (analogous to ordinary angles in Euclidean geometry).[note 8] Because of this rotation, the projection of a primed meter-stick onto the unprimed x-axis is foreshortened, while the projection of an unprimed meter-stick onto the primed x′-axis is likewise foreshortened. Fig. 2-10 reinforces previous discussions about mutual time dilation. In this figure, events A and C are separated from event O by equal timelike intervals. From the unprimed frame, events A and B are measured as simultaneous, but more time has passed for the unprimed observer than has passed for the primed observer. From the primed frame, events C and D are measured as simultaneous, but more time has passed for the primed observer than has passed for the unprimed observer. Each observer measures the clocks of the other observer as running more slowly. Please note the importance of the word "measure". An observer's state of motion cannot affect an observed object, but it can affect the observer's observations of the object. In Fig. 2-10, each line drawn parallel to the x axis represents a line of simultaneity for the unprimed observer. All events on that line have the same time value of ct. Likewise, each line drawn parallel to the x′ axis represents a line of simultaneity for the primed observer. All events on that line have the same time value of ct′. Elementary introductions to special relativity often illustrate the differences between Galilean relativity and special relativity by posing a series of supposed "paradoxes". All paradoxes are, in reality, merely ill-posed or misunderstood problems, resulting from our unfamiliarity with velocities comparable to the speed of light. The remedy is to solve many problems in special relativity and to become familiar with its so-called counter-intuitive predictions. The geometrical approach to studying spacetime is considered one of the best methods for developing a modern intuition.
The twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. The twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock.:207 Nevertheless, the twin paradox is not a true paradox because it is easily understood within the context of special relativity. The impression that a paradox exists stems from a misunderstanding of what special relativity states. Special relativity does not declare all frames of reference to be equivalent, only inertial frames. The traveling twin's frame is not inertial during periods when she is accelerating. Furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not. Deeper analysis is needed before we can understand why these distinctions should result in a difference in the twins' ages. Consider the spacetime diagram of Fig. 2‑11. This presents the simple case of a twin going straight out along the x axis and immediately turning back. From the standpoint of the stay-at-home twin, there is nothing puzzling about the twin paradox at all. The proper time measured along the traveling twin's world line from O to C, plus the proper time measured from C to B, is less than the stay-at-home twin's proper time measured from O to A to B. More complex trajectories require integrating the proper time between the respective events along the curve (i.e. the path integral) to calculate the total amount of proper time experienced by the traveling twin. Complications arise if the twin paradox is analyzed from the traveling twin's point of view. For the rest of this discussion, we adopt Weiss's nomenclature, designating the stay-at-home twin as Terence and the traveling twin as Stella. We had previously noted that Stella is not in an inertial frame. Given this fact, it is sometimes stated that full resolution of the twin paradox requires general relativity. This is not true. A pure SR analysis would be as follows: Analyzed in Stella's rest frame, she is motionless for the entire trip. When she fires her rockets for the turnaround, she experiences a pseudo force which resembles a gravitational force. Figs. 2‑6 and 2‑11 illustrate the concept of lines (planes) of simultaneity: Lines parallel to the observer's x-axis (xy-plane) represent sets of events that are simultaneous in the observer frame. In Fig. 2‑11, the blue lines connect events on Terence's world line which, from Stella's point of view, are simultaneous with events on her world line. (Terence, in turn, would observe a set of horizontal lines of simultaneity.) Throughout both the outbound and the inbound legs of Stella's journey, she measures Terence's clocks as running slower than her own. But during the turnaround (i.e. between the bold blue lines in the figure), a shift takes place in the angle of her lines of simultaneity, corresponding to a rapid skip-over of the events in Terence's world line that Stella considers to be simultaneous with her own. Therefore, at the end of her trip, Stella finds that Terence has aged more than she has. 
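The aging difference is straightforward to quantify for a trip of the kind shown in Fig. 2‑11. The sketch below (Python; the trip duration and speed are illustrative values, not taken from the figure) computes the traveling twin's proper time for a constant-speed out-and-back trip with an instantaneous turnaround, and also integrates proper time numerically along a smooth trajectory, following the path-integral prescription mentioned above.

    import math

    def proper_time_constant_speed(coordinate_time, beta):
        """Proper time ticked off by a clock moving at constant speed beta."""
        return coordinate_time * math.sqrt(1.0 - beta**2)

    # Out-and-back trip at 0.8 c lasting 10 years of Terence's coordinate time.
    T, beta = 10.0, 0.8
    print(proper_time_constant_speed(T, beta))     # 6.0 years for Stella

    # For a smooth trajectory, integrate d(tau) = sqrt(1 - beta(t)**2) dt.
    def proper_time_integrated(beta_of_t, total_time, steps=100_000):
        dt = total_time / steps
        return sum(math.sqrt(1.0 - beta_of_t(i * dt)**2) * dt for i in range(steps))

    # Example: speed ramps smoothly up to 0.8 c and back down over the trip.
    stella_beta = lambda t: 0.8 * math.sin(math.pi * t / T)**2
    print(proper_time_integrated(stella_beta, T))  # about 8.6 years, still < 10

Whatever the details of the trajectory, the traveling twin's accumulated proper time comes out smaller than the stay-at-home twin's.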
Although general relativity is not required to analyze the twin paradox, application of the Equivalence Principle of general relativity does provide some additional insight into the subject. We had previously noted that Stella is not stationary in an inertial frame. Analyzed in Stella's rest frame, she is motionless for the entire trip. When she is coasting, her rest frame is inertial, and Terence's clock will appear to run slow. But when she fires her rockets for the turnaround, her rest frame is an accelerated frame and she experiences a force which is pushing her as if she were in a gravitational field. Terence will appear to be high up in that field and because of gravitational time dilation, his clock will appear to run fast, so much so that the net result will be that Terence has aged more than Stella when they are back together. As will be discussed in the forthcoming section Curvature of time, the theoretical arguments predicting gravitational time dilation are not exclusive to general relativity. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, including Newton's theory.:16 This introductory section has focused on the spacetime of special relativity, since it is the easiest to describe. Minkowski spacetime is flat, takes no account of gravity, is uniform throughout, and serves as nothing more than a static background for the events that take place in it. The presence of gravity greatly complicates the description of spacetime. In general relativity, spacetime is no longer a static background, but actively interacts with the physical systems that it contains. Spacetime curves in the presence of matter, propagates waves, bends light, and exhibits a host of other phenomena.:221 A few of these phenomena are described in the later sections of this article.
Basic mathematics of spacetimeEdit
A basic goal is to be able to compare measurements made by observers in relative motion. Say we have an observer O in frame S who has measured the time and space coordinates of an event, assigning this event three Cartesian coordinates and the time as measured on his lattice of synchronized clocks (x, y, z, t) (see Fig. 1‑1). A second observer O′ in a different frame S′ measures the same event in her coordinate system and her lattice of synchronized clocks (x′, y′, z′, t′). Since we are dealing with inertial frames, neither observer is under acceleration, and a simple set of equations allows us to relate coordinates (x, y, z, t) to (x′, y′, z′, t′). Given that the two coordinate systems are in standard configuration, meaning that they are aligned with parallel (x, y, z) coordinates and that t = 0 when t′ = 0, the coordinate transformation is as follows: x′ = x − vt, y′ = y, z′ = z, t′ = t. Fig. 3-1 illustrates that in Newton's theory, time is universal, not the velocity of light.:36–37 Consider the following thought experiment: The red arrow illustrates a train that is moving at 0.4 c with respect to the platform. Within the train, a passenger shoots a bullet with a speed of 0.4 c in the frame of the train. The blue arrow illustrates that a person standing on the train tracks measures the bullet as traveling at 0.8 c. This is in accordance with our naive expectations. More generally, assume that frame S′ is moving at velocity v with respect to frame S. Within frame S′, observer O′ measures an object moving with velocity u′. What is its velocity u with respect to frame S? Since x = ut, x′ = x − vt, and t = t′, we can write x′ = ut − vt = (u − v)t = (u − v)t′.
This leads to u′ = x′/t′ and ultimately u′ = u − v, or equivalently u = u′ + v, which is the common-sense Galilean law for the addition of velocities.
Relativistic composition of velocitiesEdit
The composition of velocities is quite different in relativistic spacetime. To reduce the complexity of the equations slightly, we introduce a common shorthand for the ratio of the speed of an object relative to light, β = v/c. Fig. 3-2a illustrates a red train that is moving forward at a speed given by v/c = β = s/a. From the primed frame of the train, a passenger shoots a bullet with a speed given by u′/c = β′ = n/m, where the distance is measured along a line parallel to the red x′ axis rather than parallel to the black x axis. What is the composite velocity u of the bullet relative to the platform, as represented by the blue arrow? Referring to Fig. 3‑2b:
- From the platform, the composite speed of the bullet is given by u = c(s + r)/(a + b).
- The two yellow triangles are similar because they are right triangles that share a common angle α. In the large yellow triangle, the ratio s/a = v/c = β.
- The ratios of corresponding sides of the two yellow triangles are constant, so that r/a = b/s = n/m = β′. So b = u′s/c and r = u′a/c.
- Substitute the expressions for b and r into the expression for u in step 1 to yield Einstein's formula for the addition of velocities::42–48 u = (u′ + v)/(1 + u′v/c²).
The relativistic formula for addition of velocities presented above exhibits several important features:
- If u′ and v are both very small compared with the speed of light, then the product vu′/c² becomes vanishingly small, and the overall result becomes indistinguishable from the Galilean formula (Newton's formula) for the addition of velocities: u = u′ + v. The Galilean formula is a special case of the relativistic formula applicable to low velocities.
- If u′ is set equal to c, then the formula yields u = c regardless of the starting value of v. The velocity of light is the same for all observers regardless of their motions relative to the emitting source.:49
Time dilation and length contraction revisitedEdit
We had previously discussed, in qualitative terms, time dilation and length contraction. It is straightforward to obtain quantitative expressions for these effects. Fig. 3‑3 is a composite image containing individual frames taken from two previous animations, simplified and relabeled for the purposes of this section. To reduce the complexity of the equations slightly, we see in the literature a variety of different shorthand notations for ct:
- w = ct and x⁰ = ct are common.
- One also sees very frequently the use of the convention c = 1.
In Fig. 3-3a, segments OA and OK represent equal spacetime intervals. Time dilation is represented by the ratio OB/OK. The invariant hyperbola has the equation w = √(k² + x²), where k = OK, and the red line representing the world line of a particle in motion has the equation w = x/β = xc/v. A bit of algebraic manipulation yields OB/OK = 1/√(1 − v²/c²). The expression involving the square root symbol appears very frequently in relativity, and one over the expression is called the Lorentz factor, denoted by the Greek letter gamma γ: γ = 1/√(1 − v²/c²) = 1/√(1 − β²). We note that if v is greater than or equal to c, the expression for γ becomes physically meaningless, implying that c is the maximum possible speed in nature. Next, we note that for any v greater than zero, the Lorentz factor will be greater than one, although the shape of the curve is such that for low speeds, the Lorentz factor is extremely close to one.
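Both the composition law and the behavior of the Lorentz factor can be explored with a few lines of arithmetic. In the sketch below (Python; the function names and sample speeds are illustrative), speeds are expressed as fractions of c.

    import math

    def add_velocities(u_prime, v):
        """Einstein's velocity-addition formula, with speeds as fractions of c."""
        return (u_prime + v) / (1.0 + u_prime * v)

    def gamma(beta):
        return 1.0 / math.sqrt(1.0 - beta**2)

    print(add_velocities(0.5, 0.5))     # 0.8, not the Galilean answer of 1.0
    print(add_velocities(1.0, 0.9))     # 1.0: light is measured at c by everyone
    print(add_velocities(1e-4, 1e-4))   # ~2e-4: Galilean addition at low speeds

    for beta in (0.01, 0.5, 0.9, 0.99):
        print(beta, gamma(beta))        # 1.00005, 1.155, 2.294, 7.089
    # gamma stays close to 1 at low speeds and grows without bound as beta -> 1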
In Fig. 3-3b, segments OA and OK represent equal spacetime intervals. Length contraction is represented by the ratio OB/OK. The invariant hyperbola has the equation x = √(k² + w²), where k = OK, and the edges of the blue band representing the world lines of the endpoints of a rod in motion have slope 1/β = c/v. Event A has coordinates (x, w) = (γk, γβk). Since the tangent line through A and B has the equation w = (x − OB)/β, we have γβk = (γk − OB)/β and OB/OK = γ(1 − β²) = √(1 − v²/c²). The Galilean transformations and their consequent commonsense law of addition of velocities work well in our ordinary low-speed world of planes, cars and balls. Beginning in the mid-1800s, however, sensitive scientific instrumentation began finding anomalies that did not fit well with the ordinary addition of velocities. To transform the coordinates of an event from one frame to another in special relativity, we use the Lorentz transformations. The Lorentz factor appears in the Lorentz transformations: t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z. The inverse Lorentz transformations are: t = γ(t′ + vx′/c²), x = γ(x′ + vt′), y = y′, z = z′. When v ≪ c and x is small enough, the v²/c² and vx/c² terms approach zero, and the Lorentz transformations approximate to the Galilean transformations. As noted before, when we write x, t, and so forth, we most often really mean Δx, Δt, and so forth. Although, for brevity, we write the Lorentz transformation equations without deltas, it should be understood that x means Δx, etc. We are, in general, always concerned with the space and time differences between events. Note on nomenclature: Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forwards and inverse transformations are trivially related to each other, since the S frame can only be moving forwards or reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v.:71–79
Example: Terence and Stella are at an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = t′ = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds (about 90.0×10⁶ km). Terence observes Stella crossing the finish-line clock at t = 600.00 s. But Stella observes the time on her ship chronometer to be t′ = γ(t − vx/c²) = 519.62 s as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds (about 77.9×10⁶ km).
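The numbers in this example follow directly from the transformation equations. A minimal check (Python; variable names are illustrative, and units with c = 1 are assumed, so x is measured in light-seconds and t in seconds):

    import math

    def lorentz(t, x, beta):
        """Transform event coordinates (t, x) into the frame moving at beta (c = 1)."""
        g = 1.0 / math.sqrt(1.0 - beta**2)
        return g * (t - beta * x), g * (x - beta * t)

    # Event: Stella crosses the Mars finish line.
    t, x, beta = 600.0, 300.0, 0.5

    t_prime, x_prime = lorentz(t, x, beta)
    print(t_prime)     # about 519.62 s on Stella's chronometer
    print(x_prime)     # 0.0 -- Stella is at her own spatial origin at that event
    print(300.0 * math.sqrt(1.0 - beta**2))   # about 259.81 light-seconds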
Deriving the Lorentz transformationsEdit
There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. Ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another can not be transmitted instantaneously. The derivation given here and illustrated in Fig. 3‑5 is based on one presented by Bais:64–66 and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β = v/c. How do we determine w′ and x′ in terms of w and x? (Or the other way around, of course.) It is easier at first to derive the inverse Lorentz transformation.
- We start by noting that there can be no such thing as length expansion/contraction in the transverse directions. y′ must equal y and z′ must equal z, otherwise whether a fast moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this law.:27–28
- From the drawing, w = a + b and x = r + s.
- From previous results using similar triangles, we know that s/a = b/r = v/c = β.
- We know that because of time dilation, a = γw′.
- Substituting equation (4) into s/a = β yields s = γw′β.
- Length contraction and similar triangles give us r = γx′ and b = βr = βγx′.
- Substituting the expressions for s, a, r and b into the equations in Step 2 immediately yield w = γw′ + βγx′ and x = γx′ + βγw′.
The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for t′ and x′.
Linearity of the Lorentz transformationsEdit
The Lorentz transformations have a mathematical property called linearity, since x′ and t′ are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that we tacitly assumed while performing the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere.:67 All inertial observers will agree on what constitutes accelerating and non-accelerating motion.:72–73 Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well.:190 A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation.
Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with β = 0.500 to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with β = 0.250 to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with β = 0.666 to relate Ursula's measurements with his own.
The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) The motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to the said line (transverse Doppler effect). We are ignoring scenarios where they move along intermediate angles.
Longitudinal Doppler effectEdit
The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other.
The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by f = f′/(1 + βs). On the other hand, given the scenario where the source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is given by f = (1 − βr)f′. Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source. Fig. 3‑6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β, so that the separation between source and receiver at time w is βw. Because of time dilation, w = γw′. Since the slope of the green light ray is −1, T = w + βw = γw′(1 + β). Hence, the relativistic Doppler effect is given by:58–59 f = f′√((1 − β)/(1 + β)).
Transverse Doppler effectEdit
Suppose that a source, moving in a straight line, is at its closest point to the receiver. It would appear that the classical analysis predicts that the receiver detects no Doppler shift. Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties are these::94–96
- Fig. 3-7a. If a source, moving in a straight line, is crossing the receiver's field of view, what is the frequency measurement when the source is at its closest approach to the receiver?
- Fig. 3-7b. If a source is moving in a straight line, what is the frequency measurement when the receiver sees the source as being closest to it?
- Fig. 3-7c. If the receiver is moving in a circle around the source, what frequency does the receiver measure?
- Fig. 3-7d. If the source is moving in a circle around the receiver, what frequency does the receiver measure?
In scenario (a), when the source is closest to the receiver, the light hitting the receiver actually comes from a direction where the source had been some time back, and it has a significant longitudinal component, making an analysis from the frame of the receiver tricky. It is easier to make the analysis from S′, the frame of the source. The point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0, where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of frequency f′, but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light of frequency f = γf′. Scenario (b) is best analyzed from S, the frame of the receiver. The illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clocks are time dilated, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted with frequency f = f′/γ. Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of γ, and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.):94–96 Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d).[note 9]
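The longitudinal and transverse formulas above lend themselves to a quick numerical illustration. The following sketch (Python; the source frequency is an arbitrary illustrative value) evaluates the relativistic redshift for a receding source and the two transverse cases.

    import math

    def doppler_longitudinal(f_source, beta):
        """Received frequency for a source receding at speed beta (fraction of c)."""
        return f_source * math.sqrt((1.0 - beta) / (1.0 + beta))

    def gamma(beta):
        return 1.0 / math.sqrt(1.0 - beta**2)

    f = 1.0e14                          # an arbitrary source frequency in Hz
    for beta in (0.1, 0.5, 0.9):
        print(beta, doppler_longitudinal(f, beta))
    # 0.1 -> 9.05e13 Hz, 0.5 -> 5.77e13 Hz, 0.9 -> 2.29e13 Hz: all redshifted

    beta = 0.5
    print(f * gamma(beta))              # transverse scenario (a): blueshift by gamma
    print(f / gamma(beta))              # transverse scenario (b): redshift by 1/gamma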
Energy and momentumEdit
Extending momentum to four dimensionsEdit
In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: p = mv. It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector (x, t). In exploring the properties of the spacetime momentum, we start, in Fig. 3‑8a, by examining what a particle looks like at rest. In the rest frame, the spatial component of the momentum is zero, i.e. p = 0, but the time component equals mc. We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read it directly from the figure because we know that (mc)′ = γmc and p′ = −βγmc, since the red axes are rescaled by gamma. Fig. 3‑8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c.:84–87 We will use this information shortly to obtain an expression for the four-momentum.
Momentum of lightEdit
Light particles, or photons, travel at the speed of c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a light-like world line and, in appropriate units, have equal space and time components for every observer. A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: E/p = c. Rearranging, E/c = p, and since for photons, the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector. Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined. By this analysis, if the energy of a photon equals E in the rest frame, it equals E′ = (1 − β)γE in a moving frame. This result can be derived by inspection of Fig. 3‑9 or by application of the Lorentz transformations, and is consistent with the analysis of the Doppler effect given previously.:88 Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several famous conclusions.
- In the low speed limit as β = v/c approaches zero, γ approaches 1, so the spatial component of the relativistic momentum βγmc = γmv approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula mrel = γm.
- Likewise, comparing the time component of the relativistic momentum with that of the photon, γmc = mrelc = E/c, so that Einstein arrived at the relationship E = mrelc². Simplified to the case of zero velocity, this is Einstein's famous equation relating energy and mass.
Another way of looking at the relationship between mass and energy is to consider a series expansion of γmc² at low velocity: γmc² = mc²/√(1 − v²/c²) ≈ mc² + ½mv² + (3/8)mv⁴/c² + ... The concept of relativistic mass that Einstein introduced in 1905, mrel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high velocity particles, such as electron microscopes, old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity. For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy. "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula, (mc²)² = E² − (pc)². This formula applies to all particles, massless as well as massive. For massless photons, it yields the same relationship that we had earlier established, E = ±pc.:90–92 Because of the close relationship between mass and energy, the four-momentum (also called 4‑momentum) is also called the energy-momentum 4‑vector. Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as
- P = (E/c, px, py, pz), or alternatively,
- P = (E, px, py, pz), using the convention that c = 1.:129–130,180
In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature. The fact that physical processes don't care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes don't care when they take place (time translation symmetry) yields conservation of energy, and so on. In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective. To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension. In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity: (1) The two bodies rebound from each other in a completely elastic collision. (2) The two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision. For both cases (1) and (2), momentum, mass, and total energy are conserved.
However, kinetic energy is not conserved in cases of inelastic collision. A certain fraction of the initial kinetic energy is converted to heat. In case (2), two masses with momenta p1 = m1v1 and p2 = m2v2 collide to produce a single particle of conserved mass m = m1 + m2 traveling at the center of mass velocity of the original system, vcm = (m1v1 + m2v2)/(m1 + m2). The total momentum p = p1 + p2 is conserved. Fig. 3‑10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components E1/c and E2/c add up to total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components p1 and p2 add up to form p of the resultant vector. The four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses: m > m1 + m2.:94–97 Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.:134–138
Choice of reference framesEdit
The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3‑11 illustrates the breakup of a high speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory. In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitude of their velocities are generally not the same.
Energy and momentum conservationEdit
In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since v′ = v − u, the momentum p′ = p − mu. If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame.:241–245 Conservation of momentum in the COM frame amounts to the requirement that p = 0 both before and after collision. In the Newtonian analysis, conservation of mass dictates that m = m1 + m2. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined—an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities. In the case of a completely inelastic collision with total loss of kinetic energy, the outgoing velocities of the rebounding particles will be zero.:241–245 Newtonian momenta, calculated as p = mv, fail to behave properly under Lorentzian transformation.
The linear transformation of velocities v′ = v − u is replaced by the highly nonlinear v′ = (v − u)/(1 − vu/c²), so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum, or to change the definition of momentum. As we have discussed in the previous section on four-momentum, this second option was what he chose.:104 The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications. Kinetic energy converted into heat or internal potential energy shows up as an increase in mass.:127
Example: Because of the equivalence of mass and energy, elementary particle masses are customarily stated in energy units, where 1 MeV = 10⁶ electron volts. A charged pion is a particle of mass 139.57 MeV (approx. 273 times the electron mass). It is unstable, and decays into a muon of mass 105.66 MeV (approx. 207 times the electron mass) and an antineutrino, which has an almost negligible mass. The difference between the pion mass and the muon mass is 33.91 MeV. Fig. 3‑12a illustrates the energy-momentum diagram for this decay reaction in the rest frame of the pion. Because of its negligible mass, a neutrino travels at very nearly the speed of light. The relativistic expression for its energy, like that of the photon, is Eν = pc, which is also the value of the space component of its momentum. To conserve momentum, the muon has the same value of the space component of the neutrino's momentum, but in the opposite direction. Algebraic analyses of the energetics of this decay reaction are available online, so Fig. 3‑12b presents instead a graphing calculator solution. The energy of the neutrino is 29.79 MeV, and the kinetic energy of the muon is 33.91 − 29.79 = 4.12 MeV. Interestingly, most of the energy is carried off by the near-zero-mass neutrino.
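The quoted energies can also be reproduced with elementary two-body decay kinematics rather than a graphing calculator. The sketch below (Python; it assumes a massless neutrino and uses the standard two-body relation E_nu = (m_pi² − m_mu²)/(2·m_pi), which is not spelled out in the text above) applies conservation of energy and momentum in the pion rest frame.

    m_pi = 139.57    # charged pion mass, MeV
    m_mu = 105.66    # muon mass, MeV

    # Two-body decay pi -> mu + nu with a massless neutrino:
    # energy-momentum conservation gives E_nu = (m_pi**2 - m_mu**2) / (2 * m_pi).
    E_nu = (m_pi**2 - m_mu**2) / (2.0 * m_pi)
    E_mu_total = m_pi - E_nu            # total muon energy in the pion rest frame
    K_mu = E_mu_total - m_mu            # muon kinetic energy

    print(E_nu)          # about 29.79 MeV carried off by the neutrino
    print(K_mu)          # about 4.12 MeV of muon kinetic energy
    print(E_nu + K_mu)   # 33.91 MeV, the pion-muon mass difference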
Beyond the basicsEdit
The topics in this section are of significantly greater technical difficulty than those in the preceding sections and are not essential for understanding Introduction to curved spacetime. Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas. This nonlinearity is an artifact of our choice of parameters.:47–59 We have previously noted that in an x–ct spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other. The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 4‑1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. (Numerically, the angle and 2 × area measures for the unit circle are identical.) Fig. 4‑1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 4‑2 presents plots of the sinh, cosh, and tanh functions. For the unit circle, the slope of the ray is given by slope = tan a = sin a / cos a. In the Cartesian plane, rotation of point (x, y) into point (x′, y′) by angle θ is given by x′ = x cos θ − y sin θ, y′ = x sin θ + y cos θ. In a spacetime diagram, the velocity parameter β is the analog of slope. The rapidity, φ, is defined by:96–99 β ≡ tanh φ = v/c. The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula;:47–59 β = (β₁ + β₂)/(1 + β₁β₂) = tanh(φ₁ + φ₂), or in other words, φ = φ₁ + φ₂. The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as γ = cosh φ, with γβ = sinh φ. Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts. Substituting γ and γβ into the transformations as previously presented, the Lorentz boost in the x direction may be written as ct′ = ct cosh φ − x sinh φ, x′ = −ct sinh φ + x cosh φ, and the inverse Lorentz boost in the x direction may be written as ct = ct′ cosh φ + x′ sinh φ, x = ct′ sinh φ + x′ cosh φ. Four‑vectors have been mentioned above in context of the energy-momentum 4‑vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4‑vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation (really no more than an observation) using the field strength tensor formulation. On the other hand, general relativity, from the outset, relies heavily on 4‑vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4‑vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime.
Definition of 4-vectorsEdit
A 4-tuple, A = (A⁰, A¹, A², A³) is a "4-vector" if its components Aⁱ transform between frames according to the Lorentz transformation. If using (ct, x, y, z) coordinates, A is a 4‑vector if it transforms (in the x-direction) according to A⁰′ = γ(A⁰ − βA¹), A¹′ = γ(A¹ − βA⁰), A²′ = A², A³′ = A³, which comes from simply replacing ct with A⁰ and x with A¹ in the earlier presentation of the Lorentz transformation. As usual, when we write x, t, etc. we generally mean Δx, Δt etc. The last three components of a 4‑vector must be a standard vector in three-dimensional space. Therefore, a 4‑vector must transform like (c Δt, Δx, Δy, Δz) under Lorentz transformations as well as rotations.:36–59
Properties of 4-vectorsEdit
- Closure under linear combination: If A and B are 4-vectors, then C = aA + bB is also a 4-vector.
- Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated.
Note how the calculation of inner product differs from the calculation of the inner product of a 3-vector. In the following, the last three components of A and B are ordinary 3-vectors: A·B ≡ A⁰B⁰ − (A¹B¹ + A²B² + A³B³).
- In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space.
- Two vectors are said to be orthogonal if A·B = 0. Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles with each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal with itself.
- Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which A·A = 0, while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval c²t² − x² and the invariant length of the relativistic momentum vector E² − p²c².:178–181:36–59 (A short numerical check of these invariance properties follows the list of examples below.)
Examples of 4-vectorsEdit
- Displacement 4-vector: Otherwise known as the spacetime separation, this is (Δt, Δx, Δy, Δz), or for infinitesimal separations, (dt, dx, dy, dz).
- Velocity 4-vector: This results when the displacement 4-vector is divided by dτ, where dτ is the proper time between the two events that yield dt, dx, dy, and dz.
- The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle.
- An accelerated particle does not have an inertial frame in which it is always at rest. However, as stated before in the earlier discussion of the transverse Doppler effect, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles.
- Since photons move on null lines, dτ = 0 for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path.
- Energy-momentum 4-vector: As discussed in the section on Energy and momentum, P = (E/c, px, py, pz).
- As indicated before, there are varying treatments for the energy-momentum 4-vector so that one may also see it expressed as (E, px, py, pz) or (E/c², px, py, pz). The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy-momentum 4-vector is a conserved quantity.
- Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to the proper time τ.
- Force 4-vector: This is the derivative of the momentum 4-vector with respect to the proper time τ.
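The invariance of the inner product asserted in the properties list above can be verified directly. The sketch below (Python; the sample components and helper names are illustrative) boosts two arbitrary 4-vectors and checks that their Minkowski product is unchanged; it also shows that a lightlike vector has zero norm.

    import math

    def boost_x(a, beta):
        """Lorentz boost of a 4-vector (A0, A1, A2, A3) along x, with c = 1."""
        g = 1.0 / math.sqrt(1.0 - beta**2)
        a0, a1, a2, a3 = a
        return (g * (a0 - beta * a1), g * (a1 - beta * a0), a2, a3)

    def minkowski_dot(a, b):
        """Inner product with signature (+, -, -, -)."""
        return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

    A = (5.0, 1.0, 2.0, 0.5)
    B = (3.0, -2.0, 0.0, 1.0)

    print(minkowski_dot(A, B))                                # 16.5
    print(minkowski_dot(boost_x(A, 0.6), boost_x(B, 0.6)))    # 16.5 again (invariant,
                                                              # up to float rounding)

    L = (1.0, 1.0, 0.0, 0.0)       # a lightlike (null) 4-vector
    print(minkowski_dot(L, L))     # 0.0: a null vector is orthogonal to itself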
4-vectors and physical lawEdit
The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. As noted in the previous discussion of energy and momentum conservation, Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum. Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors.:186
It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. Actually, accelerating objects can generally be analyzed without needing to deal with accelerating frames at all. It is only when gravitation is significant that general relativity is required. Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime. In this section, we analyze several scenarios involving accelerated reference frames.
Dewan–Beran–Bell spaceship paradoxEdit
The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues. In Fig. 4‑4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string which is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration.[note 11] Will the string break? The main article for this section recounts how, when the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer.:106,120–122
- To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is the length-contracted version of the distance L′ = γL in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break.
- Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break.:106,120–122
The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity.:106,120–122 A spacetime diagram (Fig. 4‑5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant-magnitude acceleration for a given proper time (acceleration and elapsed time measured by the observers themselves, not by some inertial observer).
They are comoving and inertial before and after this phase. In Minkowski geometry, the length of the spacelike line segment joining the two ships after the acceleration phase, measured along their new line of simultaneity, turns out to be greater than the length of the segment joining them before the acceleration. The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 4‑5, the acceleration is finished, the ships will remain at a constant offset in some frame S′. If x_A and x_B = x_A + L are the ships' positions in frame S, the positions in frame S′ are x′_A = γ(x_A − vt) and x′_B = γ(x_A + L − vt), so that x′_B − x′_A = γL. The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame S. As shown in Fig. 4‑5, Bell's example asserts the moving lengths, measured in frame S, to be fixed, thereby forcing the rest frame length in frame S′ to increase.
Accelerated observer with horizonEdit
Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Fig. 2‑7, we had noted that the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity just approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases. Fig. 4‑6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter β approaches a limit of one as ct increases. Likewise, γ approaches infinity. The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows:
- We remember that β = ct/x.
- Since the hyperbola satisfies x² − (ct)² = s², where s is constant, we conclude that γ = x/s, so that βγ = ct/s.
- From the relativistic force law, F = dp/dt = d(βγmc)/dt.
- Substituting βγ from step 2 and the force law from step 3 yields F = mc d(ct/s)/dt = mc²/s, which is a constant expression.:110–113
Fig. 4‑6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines).:110–113 After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon.:110–113
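The horizon in this scenario can be located numerically. The sketch below (Python; it assumes that Stella's world line is the hyperbola x(t) = √(X0² + t²) with X0 = 100 light-hours, consistent with an acceleration of 0.01 c per hour, and that Terence remains at x = X0; these modeling choices are illustrative rather than taken from the figure) tests whether a light pulse sent at Terence-time t_send ever catches up with Stella.

    import math

    X0 = 100.0          # light-hours; equals c**2 / a for a = 0.01 c per hour

    def stella_position(t):
        """Stella's hyperbolic world line x(t) = sqrt(X0**2 + t**2), with c = 1."""
        return math.sqrt(X0**2 + t**2)

    def signal_catches_up(t_send, t_max=1.0e5):
        """Does a light pulse emitted from x = X0 at time t_send ever reach Stella?"""
        t = t_send
        while t < t_max:
            if X0 + (t - t_send) >= stella_position(t):   # pulse position vs Stella
                return True
            t += 1.0
        return False

    for t_send in (20, 60, 99, 100, 120):
        print(t_send, signal_catches_up(t_send))
    # 20, 60 and 99 hours: True -- the message eventually arrives.
    # 100 and 120 hours: False -- these events lie behind Stella's horizon.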
Introduction to curved spacetimeEdit
Newton's theories assumed that motion takes place against the backdrop of a rigid Euclidean reference frame that extends throughout all space and all time. Gravity is mediated by a mysterious force, acting instantaneously across a distance, whose actions are independent of the intervening space.[note 12] In contrast, Einstein denied that there is any background Euclidean reference frame that extends throughout space. Nor is there any such thing as a force of gravitation, only the structure of spacetime itself.:175–190 In spacetime terms, the path of a satellite orbiting the Earth is not dictated by the distant influences of the Earth, Moon and Sun. Instead, the satellite moves through space only in response to local conditions. Since spacetime is everywhere locally flat when considered on a sufficiently small scale, the satellite is always following a straight line in its local inertial frame. We say that the satellite always follows along the path of a geodesic. No evidence of gravitation can be discovered following alongside the motions of a single particle.:175–190 In any analysis of spacetime, evidence of gravitation requires that one observe the relative accelerations of two bodies or two separated particles. In Fig. 5‑1, two separated particles, free-falling in the gravitational field of the Earth, exhibit tidal accelerations due to local inhomogeneities in the gravitational field such that each particle follows a different path through spacetime. The tidal accelerations that these particles exhibit with respect to each other do not require forces for their explanation. Rather, Einstein described them in terms of the geometry of spacetime, i.e. the curvature of spacetime. These tidal accelerations are strictly local. It is the cumulative total effect of many local manifestations of curvature that result in the appearance of a gravitational force acting at a long range from Earth.:175–190 Two central propositions underlie general relativity.
- The first crucial concept is coordinate independence: The laws of physics cannot depend on what coordinate system one uses. This is a major extension of the principle of relativity from the version used in special relativity, which states that the laws of physics must be the same for every observer moving in non-accelerated (inertial) reference frames. In general relativity, to use Einstein's own (translated) words, "the laws of physics must be of such a nature that they apply to systems of reference in any kind of motion.":113 This leads to an immediate issue: In accelerated frames, one feels forces that seemingly would enable one to assess one's state of acceleration in an absolute sense. Einstein resolved this problem through the principle of equivalence.:137–149
- The equivalence principle states that in any sufficiently small region of space, the effects of gravitation are the same as those from acceleration.
- In Fig. 5-2, person A is in a spaceship, far from any massive objects, that undergoes a uniform acceleration of g. Person B is in a box resting on Earth. Provided that the spaceship is sufficiently small so that tidal effects are non-measurable (given the sensitivity of current gravity measurement instrumentation, A and B presumably should be Lilliputians), there are no experiments that A and B can perform which will enable them to tell which setting they are in.:141–149
- An alternative expression of the equivalence principle is to note that in Newton's universal law of gravitation, F = GMm_g/r² = m_g g, and in Newton's second law, F = m_i a, there is no a priori reason why the gravitational mass m_g should be equal to the inertial mass m_i.
The equivalence principle states that these two masses are identical.:141–149 To go from the elementary description above of curved spacetime to a complete description of gravitation requires tensor calculus and differential geometry, topics both requiring considerable study. Without these mathematical tools, it is possible to write about general relativity, but it is not possible to demonstrate any non-trivial derivations. Rather than this section attempting to offer a (yet another) relatively non-mathematical presentation about general relativity, the reader is referred to the featured Wikipedia articles Introduction to general relativity and General relativity. Instead, the focus in this section will be to explore a handful of elementary scenarios that serve to give somewhat of the flavor of general relativity.
Curvature of timeEdit
In the discussion of special relativity, forces played no more than a background role. Special relativity assumes the ability to define inertial frames that fill all of spacetime, all of whose clocks run at the same rate as the clock at the origin. Is this really possible? In a nonuniform gravitational field, experiment dictates that the answer is no. Gravitational fields make it impossible to construct a global inertial frame. In small enough regions of spacetime, local inertial frames are still possible. General relativity involves the systematic stitching together of these local frames into a more general picture of spacetime.:118–126 Shortly after the publication of the general theory in 1916, a number of scientists pointed out that general relativity predicts the existence of gravitational redshift. Einstein himself suggested the following thought experiment: (i) Assume that a tower of height h (Fig. 5‑3) has been constructed. (ii) Drop a particle of rest mass m from the top of the tower. It falls freely with acceleration g, reaching the ground with velocity v = (2gh)^(1/2), so that its total energy E, as measured by an observer on the ground, is m + ½mv²/c² = m + mgh/c². (iii) A mass-energy converter transforms the total energy of the particle into a single high energy photon, which it directs upward. (iv) At the top of the tower, an energy-mass converter transforms the energy of the photon E′ back into a particle of rest mass m′.:118–126 It must be that m = m′, since otherwise one would be able to construct a perpetual motion device. We therefore predict that E′ = m, so that E′/E = m/(m + mgh/c²) ≈ 1 − gh/c². A photon climbing in Earth's gravitational field loses energy and is redshifted. Early attempts to measure this redshift through astronomical observations were somewhat inconclusive, but definitive laboratory observations were performed by Pound & Rebka (1959) and later by Pound & Snider (1964). Light has an associated frequency, and this frequency may be used to drive the workings of a clock. The gravitational redshift leads to an important conclusion about time itself: Gravity makes time run slower. Suppose we build two identical clocks whose rates are controlled by some stable atomic transition. Place one clock on top of the tower, while the other clock remains on the ground. An experimenter on top of the tower observes that signals from the ground clock are lower in frequency than those of the clock next to her on the tower. Light going up the tower is just a wave, and it is impossible for wave crests to disappear on the way up. Exactly as many oscillations of light arrive at the top of the tower as were emitted at the bottom. The experimenter concludes that the ground clock is running slow, and can confirm this by bringing the tower clock down to compare side-by-side with the ground clock.:16–18 For a 1 km tower, the discrepancy would amount to about 9.4 nanoseconds per day, easily measurable with modern instrumentation.
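The size of the effect for the 1 km tower mentioned above follows from the fractional rate difference gh/c². A two-line check (Python; rounded constants chosen for illustration):

    g, h, c = 9.81, 1000.0, 2.998e8          # SI units, approximate values
    seconds_per_day = 86_400.0

    rate_difference = g * h / c**2           # about 1.1e-13
    print(rate_difference * seconds_per_day * 1e9)   # about 9.4 nanoseconds per day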
Clocks in a gravitational field do not all run at the same rate. Experiments such as the Pound–Rebka experiment have firmly established curvature of the time component of spacetime. The Pound–Rebka experiment says nothing about curvature of the space component of spacetime. But note that the theoretical arguments predicting gravitational time dilation do not depend on the details of general relativity at all. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence.:16 This includes Newtonian gravitation. A standard demonstration in general relativity is to show how, in the "Newtonian limit" (i.e. the particles are moving slowly, the gravitational field is weak, and the field is static), curvature of time alone is sufficient to derive Newton's law of gravity.:101–106 Newtonian gravitation is a theory of curved time. General relativity is a theory of curved time and curved space. Given G as the gravitational constant, M as the mass of a Newtonian star, and orbiting bodies of insignificant mass at distance r from the star, the spacetime interval for Newtonian gravitation is one for which only the time coefficient is variable::229–232 Δs² = (1 − 2GM/(c²r))(cΔt)² − (Δx)² − (Δy)² − (Δz)².
Curvature of spaceEdit
The coefficient (1 − 2GM/(c²r)) in front of (cΔt)² describes the curvature of time in Newtonian gravitation, and this curvature completely accounts for all Newtonian gravitational effects. As expected, this correction factor is directly proportional to G and M, and because of the r in the denominator, the correction factor increases as one approaches the gravitating body, meaning that time is curved. But general relativity is a theory of curved space and curved time, so if there are terms modifying the spatial components of the spacetime interval presented above, shouldn't their effects be seen on, say, planetary and satellite orbits due to curvature correction factors applied to the spatial terms? The answer is that they are seen, but the effects are tiny. The reason is that planetary velocities are extremely small compared to the speed of light, so that for planets and satellites of the solar system, the (cΔt)² term dwarfs the spatial terms.:234–238 Despite the minuteness of the spatial terms, the first indications that something was wrong with Newtonian gravitation were discovered over a century-and-a-half ago. In 1859, Urbain Le Verrier, in an analysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848, reported that known physics could not explain the orbit of Mercury, unless there possibly existed a planet or asteroid belt within the orbit of Mercury. The perihelion of Mercury's orbit exhibited an excess rate of precession over that which could be explained by the tugs of the other planets. The ability to detect and accurately measure the minute value of this anomalous precession (only 43 arc seconds per tropical century) is testimony to the sophistication of 19th century astrometry.
Because Le Verrier was the famous astronomer who had earlier discovered the existence of Neptune "at the tip of his pen" by analyzing wobbles in the orbit of Uranus, his announcement triggered a two-decade-long period of "Vulcan-mania", as professional and amateur astronomers alike hunted for the hypothetical new planet. This search included several false sightings of Vulcan. It was ultimately established that no such planet or asteroid belt existed.

In 1916, Einstein was to show that this anomalous precession of Mercury is explained by the spatial terms in the curvature of spacetime. Curvature in the temporal term, being simply an expression of Newtonian gravitation, has no part in explaining this anomalous precession. The success of his calculation was a powerful indication to Einstein's peers that the general theory of relativity could be correct.

The most spectacular of Einstein's predictions was his calculation that the curvature terms in the spatial components of the spacetime interval could be measured in the bending of light around a massive body. Light has a slope of ±1 on a spacetime diagram: its movement in space is equal to its movement in time. For the weak field expression of the invariant interval, Einstein calculated an exactly equal but opposite sign curvature in its spatial components:

Δs² = (1 − 2GM/(c²r)) (cΔt)² − (1 + 2GM/(c²r)) [(Δx)² + (Δy)² + (Δz)²]

In Newtonian gravitation, the coefficient in front of (cΔt)² predicts bending of light around a star. In general relativity, the coefficient in front of [(Δx)² + (Δy)² + (Δz)²] predicts a doubling of the total bending. The story of the 1919 Eddington eclipse expedition and Einstein's rise to fame is well told elsewhere.

Sources of spacetime curvature

In Newtonian gravitation, the only source of gravity is mass. In contrast, general relativity identifies several sources of spacetime curvature in addition to mass. In the Einstein field equations, the sources of gravity are presented on the right-hand side in the stress–energy tensor. Fig. 5‑5 classifies the various sources of gravity in the stress–energy tensor (the component layout is sketched at the end of this subsection):
- T^00 (red): the total mass-energy density, including any contributions to the potential energy from forces between the particles, as well as kinetic energy from random thermal motions.
- T^0i and T^i0 (orange): these are momentum density terms. Even if there is no bulk motion, energy may be transmitted by heat conduction, and the conducted energy will carry momentum.
- T^ij: the rates of flow of the i-component of momentum per unit area in the j-direction. Even if there is no bulk motion, random thermal motions of the particles will give rise to momentum flow, so the i = j terms (green) represent isotropic pressure, and the i ≠ j terms (blue) represent shear stresses.

One important conclusion to be derived from the equations is that, colloquially speaking, gravity itself creates gravity.[note 13] Energy has mass. Even in Newtonian gravity, the gravitational field is associated with an energy, E = mgh, called the gravitational potential energy. In general relativity, the energy of the gravitational field feeds back into creation of the gravitational field. This makes the equations nonlinear and hard to solve in anything other than weak field cases. Numerical relativity is a branch of general relativity using numerical methods to solve and analyze problems, often employing supercomputers to study black holes, gravitational waves, neutron stars and other phenomena in the strong field regime.
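For reference, the color classification above corresponds to the conventional layout of the stress–energy tensor; the following is standard textbook notation rather than a reproduction of Fig. 5‑5:

$$
T^{\mu\nu} =
\begin{pmatrix}
T^{00} & T^{01} & T^{02} & T^{03}\\
T^{10} & T^{11} & T^{12} & T^{13}\\
T^{20} & T^{21} & T^{22} & T^{23}\\
T^{30} & T^{31} & T^{32} & T^{33}
\end{pmatrix},
\qquad
\begin{aligned}
T^{00}&:\ \text{mass-energy density}\\
T^{0i},\,T^{i0}&:\ \text{momentum density (energy flux)}\\
T^{ii}&:\ \text{isotropic pressure}\\
T^{ij},\ i\neq j&:\ \text{shear stress}
\end{aligned}
$$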
In special relativity, mass-energy is closely connected to momentum. As we have discussed earlier in the section on Energy and momentum, just as space and time are different aspects of a more comprehensive entity called spacetime, mass-energy and momentum are merely different aspects of a unified, four-dimensional quantity called four-momentum. In consequence, if mass-energy is a source of gravity, momentum must also be a source. The inclusion of momentum as a source of gravity leads to the prediction that moving or rotating masses can generate fields analogous to the magnetic fields generated by moving charges, a phenomenon known as gravitomagnetism.

It is well known that the force of magnetism can be deduced by applying the rules of special relativity to moving charges. (An eloquent demonstration of this was presented by Feynman in volume II, chapter 13–6 of his Lectures on Physics, available online.) Analogous logic can be used to demonstrate the origin of gravitomagnetism. In Fig. 5‑7a, two parallel, infinitely long streams of massive particles have equal and opposite velocities −v and +v relative to a test particle at rest and centered between the two. Because of the symmetry of the setup, the net force on the central particle is zero. Assume v << c so that velocities are simply additive. Fig. 5‑7b shows exactly the same setup, but in the frame of the upper stream. The test particle has a velocity of +v, and the bottom stream has a velocity of +2v. Since the physical situation has not changed, only the frame in which things are observed, the test particle should not be attracted towards either stream. But it is not at all clear that the forces exerted on the test particle are equal. (1) Since the bottom stream is moving faster than the top, each particle in the bottom stream has a larger mass-energy than a particle in the top. (2) Because of Lorentz contraction, there are more particles per unit length in the bottom stream than in the top stream. (3) Another contribution to the active gravitational mass of the bottom stream comes from an additional pressure term which, at this point, we do not have sufficient background to discuss. All of these effects together would seemingly demand that the test particle be drawn towards the bottom stream.

The test particle is not drawn to the bottom stream because of a velocity-dependent force that serves to repel a particle that is moving in the same direction as the bottom stream. This velocity-dependent gravitational effect is gravitomagnetism. Matter in motion through a gravitomagnetic field is hence subject to so-called frame-dragging effects analogous to electromagnetic induction. It has been proposed that such gravitomagnetic forces underlie the generation of the relativistic jets (Fig. 5‑8) ejected by some rotating supermassive black holes.

Pressure and stress

Quantities that are directly related to energy and momentum should be sources of gravity as well, namely internal pressure and stress. Taken together, mass-energy, momentum, pressure and stress all serve as sources of gravity: collectively, they are what tells spacetime how to curve. General relativity predicts that pressure acts as a gravitational source with exactly the same strength as mass-energy density. The inclusion of pressure as a source of gravity leads to dramatic differences between the predictions of general relativity and those of Newtonian gravitation. For example, the pressure term sets a maximum limit to the mass of a neutron star.
The more massive a neutron star, the more pressure is required to support its weight against gravity. The increased pressure, however, adds to the gravity acting on the star's mass. Above a certain mass determined by the Tolman–Oppenheimer–Volkoff limit, the process runs away and the neutron star collapses to a black hole. The stress terms become highly significant when performing calculations such as hydrodynamic simulations of core-collapse supernovae.

These predictions for the roles of pressure, momentum and stress as sources of spacetime curvature are elegant and play an important role in theory. In regard to pressure, the early universe was radiation dominated, and it is highly unlikely that any of the relevant cosmological data (e.g. nucleosynthesis abundances, etc.) could be reproduced if pressure did not contribute to gravity, or if it did not have the same strength as a source of gravity as mass-energy. Likewise, the mathematical consistency of the Einstein field equations would be broken if the stress terms did not contribute as a source of gravity. All that is well and good, but are there any direct, quantitative experimental or observational measurements that confirm that these terms contribute to gravity with the correct strength?

Active, passive, and inertial mass

Before discussing the experimental evidence regarding these other sources of gravity, we first need to discuss Bondi's distinctions between different possible types of mass: (1) active mass (m_a) is the mass which acts as the source of a gravitational field; (2) passive mass (m_p) is the mass which reacts to a gravitational field; (3) inertial mass (m_i) is the mass which reacts to acceleration. Passive mass m_p is the same as what we have earlier termed gravitational mass (m_g) in our discussion of the equivalence principle in the Basic propositions section.

In Newtonian theory,
- the third law of action and reaction dictates that m_a and m_p must be the same;
- on the other hand, whether m_p and m_i are equal is an empirical result.

In general relativity,
- the equality of m_p and m_i is dictated by the equivalence principle;
- there is no "action and reaction" principle dictating any necessary relationship between m_a and m_p.

Pressure as a gravitational source

The classic experiment to measure the strength of a gravitational source (i.e. its active mass) was first conducted in 1797 by Henry Cavendish (Fig. 5‑9a). Two small but dense balls are suspended on a fine wire, making a torsion balance. Bringing two large test masses close to the balls introduces a detectable torque. Given the dimensions of the apparatus and the measurable spring constant of the torsion wire, the gravitational constant G can be determined.

To study pressure effects by compressing the test masses is hopeless, because attainable laboratory pressures are insignificant in comparison with the mass-energy of a metal ball. However, the repulsive electromagnetic pressures resulting from protons being tightly squeezed inside atomic nuclei are typically on the order of 10^28 atm ≈ 10^33 Pa ≈ 10^33 kg·s^-2·m^-1. This amounts to about 1% of the nuclear mass density of approximately 10^18 kg/m^3 (after factoring in c² ≈ 9×10^16 m²·s^-2). If pressure does not act as a gravitational source, then the ratio m_a/m_p should be lower for nuclei with higher atomic number Z, in which the electrostatic pressures are higher.
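The quoted 1% figure is just the ratio of the nuclear Coulomb pressure (an energy density) to the nuclear rest mass-energy density; a one-line check with the order-of-magnitude values above:

```python
# Order-of-magnitude check of the ~1% figure quoted above:
# pressure (Pa = J/m^3) compared with rest mass-energy density (J/m^3).
p = 1e33       # electrostatic pressure inside nuclei, Pa
rho = 1e18     # nuclear mass density, kg/m^3
c2 = 9e16      # c^2, m^2/s^2

print(f"{p / (rho * c2):.1%}")   # ~1.1%
```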
L. B. Kreuzer (1968) did a Cavendish experiment using a Teflon mass suspended in a mixture of the liquids trichloroethylene and dibromoethane having the same buoyant density as the Teflon (Fig. 5‑9b). Fluorine has atomic number Z = 9, while bromine has Z = 35. Kreuzer found that repositioning the Teflon mass caused no differential deflection of the torsion bar, hence establishing active mass and passive mass to be equivalent to a precision of 5×10^-5. Although Kreuzer originally considered this experiment merely to be a test of the ratio of active mass to passive mass, Clifford Will (1976) reinterpreted the experiment as a fundamental test of the coupling of sources to gravitational fields.

In 1986, Bartlett and Van Buren noted that lunar laser ranging had detected a 2-km offset between the moon's center of figure and its center of mass. This indicates an asymmetry in the distribution of Fe (abundant in the Moon's core) and Al (abundant in its crust and mantle). If pressure did not contribute to spacetime curvature with the same strength as mass-energy, the moon would not be in the orbit predicted by classical mechanics. They used their measurements to tighten the limits on any discrepancy between active and passive mass to about 1×10^-12.

The existence of gravitomagnetism was proven by Gravity Probe B (GP-B), a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Initial results confirmed the relatively large geodetic effect (which is due to simple spacetime curvature, and is also known as de Sitter precession) to an accuracy of about 1%. The much smaller frame-dragging effect (which is due to gravitomagnetism, and is also known as Lense–Thirring precession) was difficult to measure because of unexpected charge effects causing variable drift in the gyroscopes. Nevertheless, by August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, while the geodetic effect was confirmed to better than 0.5%.

Subsequent measurements of frame dragging by laser-ranging observations of the LARES, LAGEOS-1 and LAGEOS-2 satellites have improved on the GP-B measurement, with results (as of 2016) demonstrating the effect to within 5% of its theoretical value, although there has been some disagreement on the accuracy of this result. Another effort, the Gyroscopes in General Relativity (GINGER) experiment, seeks to use three 6 m ring lasers mounted at right angles to each other 1400 m below the Earth's surface to measure this effect.

Is spacetime really curved?

In Poincaré's conventionalist views, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is. That being said:
1. Is it possible to represent general relativity in terms of flat spacetime?
2. Are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation?

In response to the first question, a number of authors, including Deser, Grishchuk, Rosen, and Weinberg,
have provided various formulations of gravitation as a field in a flat manifold. Those theories are variously called "bi-metric gravitation", the "field-theoretical approach to general relativity", and so forth. Kip Thorne has provided a popular review of these theories.

The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm turns out to be especially convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques will be used when solving gravitational wave problems, while curved spacetime techniques will be used in the analysis of black holes.

Riemannian geometry

Riemannian geometry is the branch of differential geometry that studies Riemannian manifolds, smooth manifolds with a Riemannian metric, i.e. with an inner product on the tangent space at each point that varies smoothly from point to point. This gives, in particular, local notions of angle, length of curves, surface area and volume. From those, some other global quantities can be derived by integrating local contributions. Riemannian geometry originated with the vision of Bernhard Riemann expressed in his inaugural lecture "Ueber die Hypothesen, welche der Geometrie zu Grunde liegen" ("On the Hypotheses on which Geometry is Based"). It is a very broad and abstract generalization of the differential geometry of surfaces in R^3. Development of Riemannian geometry resulted in a synthesis of diverse results concerning the geometry of surfaces and the behavior of geodesics on them, with techniques that can be applied to the study of differentiable manifolds of higher dimensions. It enabled the formulation of Einstein's general theory of relativity, made a profound impact on group theory and representation theory, as well as analysis, and spurred the development of algebraic and differential topology.

Curved manifolds

For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold (M, g). This means the smooth Lorentz metric g has signature (3,1). The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light is equal to 1.

A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event. Another reference frame may be identified by a second coordinate chart. Two observers (one in each reference frame) may describe the same event but obtain different descriptions. Usually, many overlapping coordinate charts are needed to cover a manifold.
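In a region of spacetime small enough to be treated as flat (a local inertial frame), the transformation between two such charts reduces to a Lorentz transformation. The following minimal sketch uses the conventions just stated (c = 1) together with metric signature (+, −, −, −); the boost speed 0.6 is an arbitrary illustrative choice. It shows two charts assigning different 4-tuples to the same separation between events while agreeing on the interval:

```python
# Two observers (coordinate charts) describe the same pair of events.
# Their coordinate 4-tuples differ, but the spacetime interval agrees.
import numpy as np

def boost_x(v):
    """Lorentz boost along x with speed v (|v| < 1, units with c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    return np.array([[ g,   -g*v, 0, 0],
                     [-g*v,  g,   0, 0],
                     [ 0,    0,   1, 0],
                     [ 0,    0,   0, 1]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

dx = np.array([2.0, 1.0, 0.5, 0.0])      # separation (t, x, y, z) in chart 1
dx2 = boost_x(0.6) @ dx                  # the same separation in chart 2

interval1 = dx  @ eta @ dx               # invariant interval, chart 1
interval2 = dx2 @ eta @ dx2              # invariant interval, chart 2
print(np.isclose(interval1, interval2))  # True: descriptions differ, interval agrees
```

This invariance is what allows the two observers discussed next to compare results on the overlap of their charts.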
Given two coordinate charts, one representing an observer and another representing a second observer, the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data: locally.

For example, two observers, one of whom is on Earth, but the other one on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event p). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates that these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.

Geodesics are said to be time-like, null, or space-like if the tangent vector at one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by time-like and null (light-like) geodesics, respectively.

Privileged character of 3+1 spacetime

There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character, and possibly the first of its kind, albeit before the complete concept came into vogue.

The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena." Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow says that it "[...] gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002: 204).[note 14]

In 1920, Paul Ehrenfest showed that if there is only one time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a whole number, then wave impulses become distorted. In 1922, Hermann Weyl showed that Maxwell's theory of electromagnetism works only with three dimensions of space and one of time.
Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.

Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) If N < 3, gravitation of any kind becomes problematic, and the universe is probably too simple to contain observers. For example, when N < 3, nerves cannot cross without intersecting. In general, it is not clear how physical law could function if T differed from 1. If T > 1, subatomic particles which decay after a fixed period would not behave predictably, because time-like geodesics would not necessarily be maximal. N = 1 and T = 3 would have the peculiar property that the speed of light in a vacuum is a lower bound on the velocity of matter; all matter would consist of tachyons. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which happens to describe the world about us.

Notes

- luminiferous from the Latin lumen, light, + ferens, carrying; aether from the Greek αἰθήρ (aithēr), pure air, clear sky
- By stating that simultaneity is a matter of convention, Poincaré meant that to talk about time at all, one must have synchronized clocks, and the synchronization of clocks must be established by a specified, operational procedure (convention). This stance represented a fundamental philosophical break from Newton, who conceived of an absolute, true time that was independent of the workings of the inaccurate clocks of his day. This stance also represented a direct attack against the influential philosopher Henri Bergson, who argued that time, simultaneity, and duration were matters of intuitive understanding.
- The operational procedure adopted by Poincaré was essentially identical to what is known as Einstein synchronization, even though a variant of it was already a widely used procedure among telegraphers in the mid-19th century. Basically, to synchronize two clocks, one flashes a light signal from one to the other, and adjusts for the time that the flash takes to arrive.
- A hallmark of Einstein's career, in fact, was his use of visualized thought experiments (Gedanken-Experimente) as a fundamental tool for understanding physical issues. For special relativity, he employed moving trains and flashes of lightning for his most penetrating insights. For curved spacetime, he considered a painter falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his great Solvay Debates with Bohr on the nature of reality (1927 and 1930), he devised multiple imaginary contraptions intended to show, at least in concept, means whereby the Heisenberg uncertainty principle might be evaded. Finally, in a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement.
- In the original version of this lecture, Minkowski continued to use such obsolescent terms as the ether, but the posthumous publication in 1915 of this lecture in the Annals of Physics (Annalen der Physik) was edited by Sommerfeld to remove this term. Sommerfeld also edited the published form of this lecture to revise Minkowski's judgement of Einstein from being a mere clarifier of the principle of relativity to being its chief expositor.
- (In the following, the group G∞ is the Galilean group and the group Gc the Lorentz group.) "With respect to this it is clear that the group Gc in the limit for c = ∞, i.e. as group G∞, exactly becomes the full group belonging to Newtonian Mechanics. In this state of affairs, and since Gc is mathematically more intelligible than G∞, a mathematician may, by a free play of imagination, hit upon the thought that natural phenomena actually possess an invariance, not for the group G∞, but rather for a group Gc, where c is definitely finite, and only exceedingly large using the ordinary measuring units."
- For instance, the Lorentz group is a subgroup of the conformal group in four dimensions. The Lorentz group is isomorphic to the Laguerre group transforming planes into planes, it is isomorphic to the Möbius group of the plane, and it is isomorphic to the group of isometries in hyperbolic space, which is often expressed in terms of the hyperboloid model.
- In a Cartesian plane, ordinary rotation leaves a circle unchanged. In spacetime, hyperbolic rotation preserves the hyperbolic metric.
- Not all experiments characterize the effect in terms of a redshift. For example, the Kündig experiment was set up to measure transverse blueshift using a Mössbauer source set up at the center of a centrifuge rotor and an absorber at the rim.
- Rapidity arises naturally as coordinates on the pure boost generators inside the Lie algebra of the Lorentz group. Likewise, rotation angles arise naturally as coordinates (modulo 2π) on the pure rotation generators in the Lie algebra. (Together they coordinatize the whole Lie algebra.) A notable difference is that the resulting rotations are periodic in the rotation angle, while the resulting boosts are not periodic in rapidity (but rather are one-to-one). The similarity between boosts and rotations is a formal resemblance.
- In relativity theory, proper acceleration is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured.
- Newton himself was acutely aware of the inherent difficulties with these assumptions, but as a practical matter, making these assumptions was the only way that he could make progress. In 1692, he wrote to his friend Richard Bentley: "That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it."
- More precisely, the gravitational field couples to itself. In Newtonian gravity, the potential due to two point masses is simply the sum of the potentials of the two masses, but this does not apply to GR.
This can be thought of as the result of the equivalence principle: if gravitation did not couple to itself, two particles bound by their mutual gravitational attraction would not have the same inertial mass (due to negative binding energy) as their gravitational mass.
- This is because the law of gravitation (or any other inverse-square law) follows from the concept of flux and the proportional relationship of flux density and the strength of field. If N = 3, then 3-dimensional solid objects have surface areas proportional to the square of their size in any selected spatial dimension. In particular, a sphere of radius r has area 4πr². More generally, in a space of N dimensions, the strength of the gravitational attraction between two bodies separated by a distance of r would be inversely proportional to r^(N−1).
- Different reporters viewing the scenarios presented in this figure interpret the scenarios differently depending on their knowledge of the situation. (i) A first reporter, at the center of mass of particles 2 and 3 but unaware of the large mass 1, concludes that a force of repulsion exists between the particles in scenario A while a force of attraction exists between the particles in scenario B. (ii) A second reporter, aware of the large mass 1, smiles at the first reporter's naiveté. This second reporter knows that in reality, the apparent forces between particles 2 and 3 really represent tidal effects resulting from their differential attraction by mass 1. (iii) A third reporter, trained in general relativity, knows that there are, in fact, no forces at all acting between the three objects. Rather, all three objects move along geodesics in spacetime.
- Relativistic jets are beams of ionised matter accelerated close to the speed of light. Most have been observationally associated with central black holes of some active galaxies, radio galaxies or quasars, as well as stellar black holes, neutron stars and pulsars. Beam lengths may extend from several thousand to millions of parsecs.

References

- Rynasiewicz, Robert. "Newton's Views on Space, Time, and Motion". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 24 March 2017.
- Davis, Philip J. (2006). Mathematics & Common Sense: A Case of Creative Tension. Wellesley, Massachusetts: A.K. Peters. p. 86. ISBN 9781439864326.
- Collier, Peter (2017). A Most Incomprehensible Thing: Notes Towards a Very Gentle Introduction to the Mathematics of Relativity (3rd ed.). Incomprehensible Books. ISBN 9780957389465.
- Rowland, Todd. "Manifold". Wolfram Mathworld. Wolfram Research. Retrieved 24 March 2017.
- French, A.P. (1968). Special Relativity. Boca Raton, Florida: CRC Press. pp. 35–60. ISBN 0748764224.
- Taylor, Edwin F.; Wheeler, John Archibald (1966). Spacetime Physics: Introduction to Special Relativity (1st ed.). San Francisco: Freeman. ISBN 071670336X. Retrieved 14 April 2017.
- Scherr, Rachel E.; Shaffer, Peter S.; Vokos, Stamatis (July 2001). "Student understanding of time in special relativity: Simultaneity and reference frames" (PDF). American Journal of Physics. 69 (S1): S24–S35. Bibcode:2001AmJPh..69S..24S. doi:10.1119/1.1371254. Retrieved 11 April 2017.
- Hughes, Stefan (2013). Catchers of the Light: Catching Space: Origins, Lunar, Solar, Solar System and Deep Space. Paphos, Cyprus: ArtDeCiel Publishing. pp. 202–233. ISBN 9781467579926. Retrieved 7 April 2017.
- Stachel, John (2005). "Fresnel's (Dragging) Coefficient as a Challenge to 19th Century Optics of Moving Bodies".
In Kox, A. J.; Eisenstaedt, Jean. The Universe of General Relativity (PDF). Boston: Birkhäuser. pp. 1–13. ISBN 081764380X. Archived from the original (PDF) on 13 April 2017.
- Pais, Abraham (1982). "Subtle is the Lord...": The Science and the Life of Albert Einstein (11th ed.). Oxford: Oxford University Press. ISBN 019853907X.
- Born, Max (1956). Physics in My Generation. London & New York: Pergamon Press. p. 194. Retrieved 10 July 2017.
- Darrigol, O. (2005). "The Genesis of the theory of relativity" (PDF). Séminaire Poincaré. 1: 1–22. Bibcode:2006eins.book....1D. doi:10.1007/3-7643-7436-5_1. ISBN 978-3-7643-7435-8.
- Miller, Arthur I. (1998). Albert Einstein's Special Theory of Relativity. New York: Springer-Verlag. ISBN 0387948708.
- Galison, Peter (2003). Einstein's Clocks, Poincaré's Maps: Empires of Time. New York: W. W. Norton & Company, Inc. pp. 13–47. ISBN 0393020010.
- Poincaré, Henri (1906). "On the Dynamics of the Electron (Sur la dynamique de l'électron)". Rendiconti del Circolo matematico di Palermo. 21: 129–176. doi:10.1007/bf03013466. Retrieved 15 July 2017.
- Zahar, Elie (1989). "Poincaré's Independent Discovery of the relativity principle". Einstein's Revolution: A Study in Heuristic. Chicago: Open Court Publishing Company. ISBN 0-8126-9067-2.
- Walter, Scott A. (2007). "Breaking in the 4-vectors: the four-dimensional movement in gravitation, 1905–1910". In Renn, Jürgen; Schemmel, Matthias. The Genesis of General Relativity, Volume 3. Berlin: Springer. pp. 193–252. Archived from the original on 15 July 2017. Retrieved 15 July 2017.
- Einstein, Albert (1905). "On the Electrodynamics of Moving Bodies (Zur Elektrodynamik bewegter Körper)". Annalen der Physik. 322 (10): 891–921. Bibcode:1905AnP...322..891E. doi:10.1002/andp.19053221004. Retrieved 7 April 2018.
- Isaacson, Walter (2007). Einstein: His Life and Universe. Simon & Schuster. ISBN 978-0-7432-6473-0.
- Schutz, Bernard (2004). Gravity from the Ground Up: An Introductory Guide to Gravity and General Relativity (Reprint ed.). Cambridge: Cambridge University Press. ISBN 0521455065. Retrieved 24 May 2017.
- Weinstein, Galina (2012). "Max Born, Albert Einstein and Hermann Minkowski's Space-Time Formalism of Special Relativity". arXiv: [physics.hist-ph].
- Galison, Peter Louis (1979). "Minkowski's space-time: From visual thinking to the absolute world". Historical Studies in the Physical Sciences. 10: 85–121. doi:10.2307/27757388. JSTOR 27757388.
- Minkowski, Hermann (1909). "Raum und Zeit" [Space and Time]. Jahresberichte der Deutschen Mathematiker-Vereinigung. B.G. Teubner: 1–14.
- Cartan, É.; Fano, G. (1955). "La théorie des groupes continus et la géométrie". Encyclopédie des sciences mathématiques pures et appliquées. 3.1: 39–43. (Only pages 1–21 were published in 1915; the entire article, including pp. 39–43 concerning the groups of Laguerre and Lorentz, was posthumously published in 1955 in Cartan's collected papers and reprinted in the Encyclopédie in 1991.)
- Kastrup, H. A. (2008). "On the advancements of conformal transformations and their associated symmetries in geometry and theoretical physics". Annalen der Physik. 520 (9–10): 631–690. Bibcode:2008AnP...520..631K. doi:10.1002/andp.200810324.
- Ratcliffe, J. G. (1994). "Hyperbolic geometry". Foundations of Hyperbolic Manifolds. New York. pp. 56–104. ISBN 038794348X.
- Curtis, W. D.; Miller, F. R. (1985). Differential Manifolds and Theoretical Physics. Academic Press. p. 223. ISBN 978-0-08-087435-7.
- Curiel, Erik; Bokulich, Peter. "Lightcones and Causal Structure". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 26 March 2017.
- Savitt, Steven. "Being and Becoming in Modern Physics. 3. The Special Theory of Relativity". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. Retrieved 26 March 2017.
- Schutz, Bernard F. (1985). A First Course in General Relativity. Cambridge, UK: Cambridge University Press. p. 26. ISBN 0521277035.
- Weiss, Michael. "The Twin Paradox". The Physics and Relativity FAQ. Retrieved 10 April 2017.
- Mould, Richard A. (1994). Basic Relativity (1st ed.). Springer. p. 42. ISBN 9780387952109. Retrieved 22 April 2017.
- Lerner, Lawrence S. (1997). Physics for Scientists and Engineers, Volume 2 (1st ed.). Jones & Bartlett Pub. p. 1047. ISBN 9780763704605. Retrieved 22 April 2017.
- Bais, Sander (2007). Very Special Relativity: An Illustrated Guide. Cambridge, Massachusetts: Harvard University Press. ISBN 067402611X.
- Forshaw, Jeffrey; Smith, Gavin (2014). Dynamics and Relativity. John Wiley & Sons. p. 118. ISBN 9781118933299. Retrieved 24 April 2017.
- Morin, David (2017). Special Relativity for the Enthusiastic Beginner. CreateSpace Independent Publishing Platform. ISBN 9781542323512.
- Landau, L. D.; Lifshitz, E. M. (2006). The Classical Theory of Fields, Course of Theoretical Physics, Volume 2 (4th ed.). Amsterdam: Elsevier. pp. 1–24. ISBN 9780750627689.
- Rose, H. H. (21 April 2008). "Optics of high-performance electron microscopes". Science and Technology of Advanced Materials. 9 (1): 014107. Bibcode:2008STAdM...9a4107R. doi:10.1088/0031-8949/9/1/014107. Archived from the original on 3 July 2017. Retrieved 4 July 2017.
- Griffiths, David J. (2013). Revolutions in Twentieth-Century Physics. Cambridge: Cambridge University Press. p. 60. ISBN 9781107602175. Retrieved 24 May 2017.
- Byers, Nina (1998). "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws".
- Nave, R. "Energetics of Charged Pion Decay". Hyperphysics. Department of Physics and Astronomy, Georgia State University. Retrieved 27 May 2017.
- Thomas, George B.; Weir, Maurice D.; Hass, Joel; Giordano, Frank R. (2008). Thomas' Calculus: Early Transcendentals (Eleventh ed.). Boston: Pearson Education, Inc. p. 533. ISBN 0321495756.
- Taylor, Edwin F.; Wheeler, John Archibald (1992). Spacetime Physics (2nd ed.). W. H. Freeman. ISBN 0716723271.
- Gibbs, Philip. "Can Special Relativity Handle Acceleration?". The Physics and Relativity FAQ. math.ucr.edu. Retrieved 28 May 2017.
- Franklin, Jerrold (2010). "Lorentz contraction, Bell's spaceships, and rigid body motion in special relativity". European Journal of Physics. 31 (2): 291–298. Bibcode:2010EJPh...31..291F. doi:10.1088/0143-0807/31/2/006.
- Lorentz, H. A.; Einstein, A.; Minkowski, H.; Weyl, H. (1952). The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity. Dover Publications. ISBN 0486600815.
- Mook, Delo E.; Vargish, Thomas (1987). Inside Relativity. Princeton, New Jersey: Princeton University Press. ISBN 0691084726.
- Mester, John. "Experimental Tests of General Relativity" (PDF). Laboratoire Univers et Théories. Archived from the original (PDF) on 9 June 2017. Retrieved 9 June 2017.
- Carroll, Sean M. (2 December 1997). "Lecture Notes on General Relativity".
- Le Verrier, Urbain (1859). "Lettre de M. Le Verrier à M.
Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète". Comptes rendus hebdomadaires des séances de l'Académie des Sciences. 49: 379–383.
- Worrall, Simon. "The Hunt for Vulcan, the Planet That Wasn't There". National Geographic. Retrieved 12 June 2017.
- Levine, Alaina G. "May 29, 1919: Eddington Observes Solar Eclipse to Test General Relativity". APS News: This Month in Physics History. American Physical Society. Retrieved 12 June 2017.
- Hobson, M. P.; Efstathiou, G.; Lasenby, A. N. (2006). General Relativity. Cambridge: Cambridge University Press. pp. 176–179. ISBN 9780521829519.
- Thorne, Kip S. (1988). Fairbank, J. D.; Deaver Jr., B. S.; Everitt, W. F.; Michelson, P. F., eds. Near Zero: New Frontiers of Physics (PDF). W. H. Freeman and Company. pp. 573–586. Archived from the original (PDF) on 30 June 2017.
- Feynman, R. P.; Leighton, R. B.; Sands, M. (1964). The Feynman Lectures on Physics, vol. 2 (New Millennium ed.). Basic Books. pp. 13–6 to 13–11. ISBN 9780465024162. Retrieved 1 July 2017.
- Williams, R. K. (1995). "Extracting X rays, γ rays, and relativistic e−–e+ pairs from supermassive Kerr black holes using the Penrose mechanism". Physical Review D. 51 (10): 5387–5427. Bibcode:1995PhRvD..51.5387W. doi:10.1103/PhysRevD.51.5387. PMID 10018300.
- Williams, R. K. (2004). "Collimated escaping vortical polar e−–e+ jets intrinsically produced by rotating black holes and Penrose processes". The Astrophysical Journal. 611 (2): 952–963. Bibcode:2004ApJ...611..952W. doi:10.1086/422304.
- Kuroda, Takami; Kotake, Kei; Takiwaki, Tomoya (2012). "Fully General Relativistic Simulations of Core-Collapse Supernovae with An Approximate Neutrino Transport". The Astrophysical Journal. 755: 11. Bibcode:2012ApJ...755...11K. doi:10.1088/0004-637X/755/1/11.
- Wollack, Edward J. (10 December 2010). "Cosmology: The Study of the Universe". Universe 101: Big Bang Theory. NASA. Archived from the original on 14 May 2011. Retrieved 15 April 2017.
- Bondi, Hermann (1957). DeWitt, Cecile M.; Rickles, Dean, eds. The Role of Gravitation in Physics: Report from the 1957 Chapel Hill Conference. Berlin: Max Planck Research Library. pp. 159–162. ISBN 9783869319636. Retrieved 1 July 2017.
- Crowell, Benjamin (2000). General Relativity. Fullerton, CA: Light and Matter. pp. 241–258. Retrieved 30 June 2017.
- Kreuzer, L. B. (1968). "Experimental measurement of the equivalence of active and passive gravitational mass". Physical Review. 169 (5): 1007–1011. Bibcode:1968PhRv..169.1007K. doi:10.1103/PhysRev.169.1007.
- Will, C. M. (1976). "Active mass in relativistic gravity: Theoretical interpretation of the Kreuzer experiment". The Astrophysical Journal. 204: 224–234. Bibcode:1976ApJ...204..224W. doi:10.1086/154164.
- Bartlett, D. F.; Van Buren, Dave (1986). "Equivalence of active and passive gravitational mass using the moon". Physical Review Letters. 57 (1): 21–24. Bibcode:1986PhRvL..57...21B. doi:10.1103/PhysRevLett.57.21. PMID 10033347. Retrieved 1 July 2017.
- "Gravity Probe B: FAQ". Retrieved 2 July 2017.
- Gugliotta, G. (16 February 2009). "Perseverance Is Paying Off for a Test of Relativity in Space". New York Times. Retrieved 2 July 2017.
- Everitt, C.W.F.; Parkinson, B.W. (2009). "Gravity Probe B Science Results—NASA Final Report" (PDF). Retrieved 2 July 2017.
- Everitt; et al. (2011). "Gravity Probe B: Final Results of a Space Experiment to Test General Relativity". Physical Review Letters. 106 (22): 221101. Bibcode:2011PhRvL.106v1101E.
doi:10.1103/PhysRevLett.106.221101. PMID 21702590.
- Ciufolini, Ignazio; Paolozzi, Antonio; Pavlis, Erricos C.; Koenig, Rolf (2016). "A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model". European Physical Journal C. 76 (3): 120. Bibcode:2016EPJC...76..120C. doi:10.1140/epjc/s10052-016-3961-8. PMID 27471430.
- Iorio, L. (February 2017). "A comment on "A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model. Measurement of Earth's dragging of inertial frames," by I. Ciufolini et al". The European Physical Journal C. 77 (2): 73. Bibcode:2017EPJC...77...73I. doi:10.1140/epjc/s10052-017-4607-1.
- Cartlidge, Edwin. "Underground ring lasers will put general relativity to the test". physicsworld.com. Institute of Physics. Retrieved 2 July 2017.
- "Einstein right using the most sensitive Earth rotation sensors ever made". Phys.org. Science X network. Retrieved 2 July 2017.
- Murzi, Mauro. "Jules Henri Poincaré (1854–1912)". Internet Encyclopedia of Philosophy (ISSN 2161-0002). Retrieved 9 April 2018.
- Deser, S. (1970). "Self-Interaction and Gauge Invariance" (PDF). General Relativity and Gravitation. 1: 9–18. Bibcode:1970GReGr...1....9D. doi:10.1007/BF00759198. Retrieved 9 April 2018.
- Grishchuk, L. P.; Petrov, A. N.; Popova, A. D. (1984). "Exact Theory of the (Einstein) Gravitational Field in an Arbitrary Background Space-Time". Communications in Mathematical Physics. 94: 379–396. Bibcode:1984CMaPh..94..379G. doi:10.1007/BF01224832. Retrieved 9 April 2018.
- Rosen, N. (1940). "General Relativity and Flat Space I". Physical Review. 57 (2): 147–150. Bibcode:1940PhRv...57..147R. doi:10.1103/PhysRev.57.147.
- Weinberg, S. (1964). "Derivation of Gauge Invariance and the Equivalence Principle from Lorentz Invariance of the S-Matrix". Physics Letters. 9 (4): 357–359. Bibcode:1964PhL.....9..357W. doi:10.1016/0031-9163(64)90396-8.
- Thorne, Kip (1995). Black Holes & Time Warps: Einstein's Outrageous Legacy. W. W. Norton & Company. ISBN 978-0393312768.
- Bär, Christian; Fredenhagen, Klaus (2009). "Lorentzian Manifolds". Quantum Field Theory on Curved Spacetimes: Concepts and Mathematical Foundations (PDF). Dordrecht: Springer. pp. 39–58. ISBN 9783642027796. Archived from the original (PDF) on 13 April 2017. Retrieved 14 April 2017.
- Skow, Bradford (2007). "What makes time different from space?" (PDF). Noûs. 41: 227–252. Retrieved 13 April 2018.
- Leibniz, Gottfried (1880). "Discourse on Metaphysics". Die philosophischen Schriften von Gottfried Wilhelm Leibniz, Volume 4. Weidmann. pp. 427–463. Retrieved 13 April 2018.
- Ehrenfest, Paul (1920). "How do the fundamental laws of physics make manifest that Space has 3 dimensions?". Annalen der Physik. 61 (5): 440. Bibcode:1920AnP...366..440E. doi:10.1002/andp.19203660503. Also see Ehrenfest, P. (1917). "In what way does it become manifest in the fundamental laws of physics that space has three dimensions?" Proceedings of the Amsterdam Academy. 20: 200.
- Weyl, H. (1922). Space, Time, and Matter. Dover reprint: 284.
- Tangherlini, F. R. (1963). "Atoms in Higher Dimensions". Nuovo Cimento. 14 (27): 636.
- Tegmark, Max (April 1997). "On the dimensionality of spacetime" (PDF). Classical and Quantum Gravity. 14 (4): L69–L75. Bibcode:1997CQGra..14L..69T. doi:10.1088/0264-9381/14/4/002. Retrieved 16 December 2006.
- Dorling, J. (1970). "The Dimensionality of Time". American Journal of Physics. 38 (4): 539–540.
Bibcode:1970AmJPh..38..539D. doi:10.1119/1.1976386.
- Barrow, John D.; Tipler, Frank J. (1988). The Anthropic Cosmological Principle. Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148.
- Ellis, George F.; Williams, Ruth M. (1992). Flat and Curved Space–Times. Oxford: Oxford University Press. ISBN 0-19-851164-7.
- Lorentz, H. A.; Einstein, Albert; Minkowski, Hermann; Weyl, Hermann (1952). The Principle of Relativity: A Collection of Original Memoirs. Dover.
- Lucas, John Randolph (1973). A Treatise on Time and Space. London: Methuen.
- Penrose, Roger (2004). The Road to Reality. Oxford: Oxford University Press. ISBN 0-679-45443-8. Chapters 17–18.
- Taylor, E. F.; Wheeler, John A. (1963). Spacetime Physics. W. H. Freeman. ISBN 0-7167-2327-1.
Inflation means a sustained increase in the aggregate or general price level in an economy; in other words, an increase in the cost of living. What are the economic policies that lead to low inflation in an economy?

1. Monetary Policy

In the UK and US, monetary policy is the most important tool for maintaining low inflation. In the UK, monetary policy is set by the Monetary Policy Committee (MPC) of the Bank of England, which is given an inflation target by the government. This target is 2% ± 1, and the MPC use interest rates to try to achieve it. The first step is for the MPC to try to predict future inflation. They look at various economic statistics and try to decide whether the economy is overheating. If inflation is forecast to rise above the target, the MPC will increase interest rates. Increased interest rates will help reduce the growth of aggregate demand in the economy, and the slower growth will then lead to lower inflation. Higher interest rates reduce consumer spending because:
* Increased interest rates increase the cost of borrowing, discouraging consumers from borrowing and spending.
* Increased interest rates make it more attractive to save money.
* Increased interest rates reduce the disposable income of those with mortgages.
* Higher interest rates increase the value of the exchange rate, leading to lower exports and more imports.

Base Rates and Inflation

Base interest rates were increased in the late 1980s / 1990 to try to control the rise in inflation.

2. Supply Side Policies

Supply side policies aim to increase long-term competitiveness and productivity. For example, privatisation and deregulation were intended to make firms more productive. Therefore, in the long run supply side policies can help reduce inflationary pressures. However, supply side policies work very much in the long term; they cannot be used to reduce sudden increases in the inflation rate.

3. Fiscal Policy

This is another demand side policy, similar in effect to monetary policy. Fiscal policy involves the government changing tax and spending levels in order to influence the level of aggregate demand. To reduce inflationary pressures the government can increase tax and reduce government spending. This will reduce AD.

4. Exchange Rate Policy

In 1990 the UK joined the ERM as a means to control inflation. It was felt that keeping the value of the pound high would help reduce inflationary pressures. The policy did reduce inflation, but at the cost of a recession. To maintain the value of the £ against the DM, the government had to increase interest rates to 15%. The UK no longer uses the exchange rate as an anti-inflation policy.

5. Wage Control

Wage growth is a key factor in determining inflation. If wages increase quickly, this will cause high inflation. In the 1970s there was a brief attempt at wage controls, which tried to limit wage growth. However, the policy was effectively dropped because it was difficult to enforce widely.

Main Causes of Inflation

1. Demand-pull inflation

If the economy is at or close to full employment, then an increase in AD leads to an increase in the price level. As firms reach full capacity, they respond by putting up prices, leading to inflation. AD can increase due to an increase in any of its components C + I + G + X − M. The link between output and inflation suggests that there will be a similar link between inflation and unemployment. The Phillips curve initially showed a link between money wages and unemployment; it was then argued that an increase in wages would lead to inflation. (A simple expectations-augmented version is sketched below.)
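As an illustration of the wage–unemployment link just mentioned, here is a minimal sketch of the textbook expectations-augmented Phillips curve; the natural rate u_star = 5% and slope a = 0.5 are illustrative assumptions, not estimates:

```python
# Textbook expectations-augmented Phillips curve:
#   inflation = expected_inflation - a * (u - u_star)
# Parameter values are illustrative only, not an estimated model.
def phillips_inflation(expected_inflation, u, u_star=5.0, a=0.5):
    """Inflation (%) given expected inflation (%) and unemployment u (%)."""
    return expected_inflation - a * (u - u_star)

# Unemployment below its natural rate pushes inflation above expectations:
print(phillips_inflation(expected_inflation=2.0, u=3.5))  # 2.75
print(phillips_inflation(expected_inflation=2.0, u=6.5))  # 1.25
```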
2. Cost-push inflation

If there is an increase in the costs of firms, then firms will pass this on to consumers, and there will be a shift to the left in AS. Cost-push inflation can be caused by many factors:

1. The labour market. If trade unions can present a common front, then they can bargain for higher wages; this will lead to wage inflation.
2. Import prices. One third of all goods in the UK are imported. If there is a devaluation, then import prices will become more expensive, leading to an increase in inflation. E.g. a German car costs DM 40,000. If the exchange rate is £1 = 3 DM, the car is priced at £13,333; if the exchange rate falls to £1 = 2 DM, it is priced at £20,000.
3. Raw material prices. The best example is the price of oil: if the oil price increases by 20%, this will have a significant impact on most goods in the economy, leading to cost-push inflation. E.g. in mid-2008 there was a spike in the price of oil to nearly $150 a barrel, causing a rise in inflation.
4. Profit-push inflation. When firms push up prices to obtain higher profits.
5. Declining productivity. If firms become less productive and allow costs to rise, this invariably leads to higher prices.

Source: http://www.economicshelp.org/index.html

PHILIPPINES INFLATION RATE

The inflation rate in the Philippines was recorded at 2.90 percent in December 2012. The inflation rate in the Philippines is reported by the National Statistics Office (NSO). Historically, from 1958 until 2012, the Philippines' inflation rate averaged 9.1 percent, reaching an all-time high of 62.8 percent in September 1984 and a record low of −2.1 percent in January 1959. In the Philippines, the most important categories in the Consumer Price Index are: food and non-alcoholic beverages (39 percent of total weight); housing, water, electricity, gas and other fuels (22 percent); and transport (8 percent). The index also includes health (3 percent), education (3 percent), clothing and footwear (3 percent), communication (2 percent) and recreation and culture (2 percent). Alcoholic beverages, tobacco, furnishing, household equipment, restaurants and other goods and services account for the remaining 15 percent.

Source: http://www.tradingeconomics.com/philippines/inflation-cpi

Causes

Historically, a great deal of economic literature was concerned with the question of what causes inflation and what effect it has. There were different schools of thought as to the causes of inflation. Most can be divided into two broad areas: quality theories of inflation and quantity theories of inflation. The quality theory of inflation rests on the expectation of a seller accepting currency to be able to exchange that currency at a later time for goods that are desirable as a buyer. The quantity theory of inflation rests on the quantity equation of money, which relates the money supply, its velocity, and the nominal value of exchanges. Adam Smith and David Hume proposed a quantity theory of inflation for money, and a quality theory of inflation for production.

Currently, the quantity theory of money is widely accepted as an accurate model of inflation in the long run. Consequently, there is now broad agreement among economists that in the long run, the inflation rate is essentially dependent on the growth rate of the money supply relative to the growth of the economy.
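That long-run claim is usually written with the quantity equation MV = PY; taking growth rates gives inflation ≈ money growth + velocity growth − real output growth. A small sketch with made-up numbers:

```python
# Quantity equation of money, MV = PY, in growth-rate form:
#   inflation ≈ money_growth + velocity_growth - real_output_growth
# The input growth rates below are illustrative, not data.
def quantity_theory_inflation(money_growth, real_output_growth, velocity_growth=0.0):
    """Approximate long-run inflation rate (all arguments in % per year)."""
    return money_growth + velocity_growth - real_output_growth

# Money supply growing 7% a year with 2% real growth and stable velocity:
print(quantity_theory_inflation(money_growth=7.0, real_output_growth=2.0))  # 5.0
```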
However, in the short and medium term, inflation may be affected by supply and demand pressures in the economy, and influenced by the relative elasticity of wages, prices and interest rates. The question of whether the short-term effects last long enough to be important is the central topic of debate between monetarist and Keynesian economists. In monetarism, prices and wages adjust quickly enough to make other factors merely marginal behavior on a general trend-line. In the Keynesian view, prices and wages adjust at different rates, and these differences have enough effects on real output to be "long term" in the view of people in an economy.

Keynesian economic theory proposes that changes in the money supply do not directly affect prices, and that visible inflation is the result of pressures in the economy expressing themselves in prices. Monetarists believe the most significant factor influencing inflation or deflation is how fast the money supply grows or shrinks. They consider fiscal policy, or government spending and taxation, as ineffective in controlling inflation. According to the famous monetarist economist Milton Friedman, "Inflation is always and everywhere a monetary phenomenon." Some monetarists, however, will qualify this by making an exception for very short-term circumstances.

A connection between inflation and unemployment has been drawn since the emergence of large-scale unemployment in the 19th century, and connections continue to be drawn today. In Marxian economics, the unemployed serve as a reserve army of labour, which restrains wage inflation. In the 20th century, similar concepts in Keynesian economics include the NAIRU (Non-Accelerating Inflation Rate of Unemployment) and the Phillips curve.

Rational expectations theory

For more details on this topic, see Rational expectations theory.

Rational expectations theory holds that economic actors look rationally into the future when trying to maximize their well-being, and do not respond solely to immediate opportunity costs and pressures. In this view, while generally grounded in monetarism, future expectations and strategies are important for inflation as well. A core assertion of rational expectations theory is that actors will seek to "head off" central-bank decisions by acting in ways that fulfill predictions of higher inflation. This means that central banks must establish their credibility in fighting inflation, or economic actors will make bets that the central bank will expand the money supply rapidly enough to prevent recession, even at the expense of exacerbating inflation. Thus, if a central bank has a reputation as being "soft" on inflation, when it announces a new policy of fighting inflation with restrictive monetary growth, economic agents will not believe that the policy will persist; their inflationary expectations will remain high, and so will inflation. On the other hand, if the central bank has a reputation of being "tough" on inflation, then such a policy announcement will be believed, and inflationary expectations will come down rapidly, thus allowing inflation itself to come down rapidly with minimal economic disruption.

Austrian school

For more details on this topic, see The Austrian view of inflation and monetary inflation.

The Austrian School asserts that inflation is an increase in the money supply; rising prices are merely consequences, and this semantic difference is important in defining inflation. Austrians stress that inflation affects prices in varying degrees, i.e.
that prices rise more sharply in some sectors than in other sectors of the economy. The reason for the disparity is that excess money will be concentrated in certain sectors, such as housing, stocks or health care. Because of this disparity, Austrians argue that the aggregate price level can be very misleading when observing the effects of inflation. Austrian economists measure inflation by calculating the growth of new units of money that are available for immediate use in exchange and that have been created over time.

Critics of the Austrian view point out that their preferred alternative to fiat currency intended to prevent inflation, commodity-backed money, is likely to grow in supply at a different rate than economic growth. Thus it has proven to be highly deflationary and destabilizing, including in instances where it has caused and prolonged depressions.

Real bills doctrine

Main article: Real bills doctrine

Within the context of a fixed specie basis for money, one important controversy was between the quantity theory of money and the real bills doctrine (RBD). Within this context, quantity theory applies to the level of fractional reserve accounting allowed against specie, generally gold, held by a bank. Currency and banking schools of economics argue the RBD: that banks should also be able to issue currency against bills of trading, which are "real bills" that they buy from merchants. This theory was important in the 19th century in debates between "Banking" and "Currency" schools of monetary soundness, and in the formation of the Federal Reserve. In the wake of the collapse of the international gold standard post 1913, and the move towards deficit financing of government, RBD has remained a minor topic, primarily of interest in limited contexts, such as currency boards. It is generally held in ill repute today, with Frederic Mishkin, a governor of the Federal Reserve, going so far as to say it had been "completely discredited."

The debate between currency, or quantity theory, and banking schools in Britain during the 19th century prefigures current questions about the credibility of money. In the 19th century, the banking school had greater influence on policy in the United States and Great Britain, while the currency school had more influence "on the continent", that is, in non-British countries, particularly in the Latin Monetary Union and the earlier Scandinavian monetary union.

Anti-classical or backing theory

Another issue associated with classical political economy is the anti-classical hypothesis of money, or "backing theory". The backing theory argues that the value of money is determined by the assets and liabilities of the issuing agency. Unlike the quantity theory of classical political economy, the backing theory argues that issuing authorities can issue money without causing inflation so long as the money issuer has sufficient assets to cover redemptions. There are very few backing theorists, making quantity theory the dominant theory explaining inflation.

Controlling inflation

A variety of methods and policies have been used to control inflation.

Stimulating economic growth

If economic growth matches the growth of the money supply, inflation should not occur, all else being equal. A large variety of factors can affect the rate of both. For example, investment in market production, infrastructure, education, and preventative health care can all grow an economy by more than the investment spending.
Monetary policy

Main article: Monetary policy

(Chart: the U.S. effective federal funds rate over fifty years.)

Today the primary tool for controlling inflation is monetary policy. Most central banks are tasked with keeping their inter-bank lending rates at low levels, normally to a target rate around 2% to 3% per annum, and within a targeted low inflation range, somewhere from about 2% to 6% per annum. A low positive inflation is usually targeted, as deflationary conditions are seen as dangerous for the health of the economy.

There are a number of methods that have been suggested to control inflation. Central banks such as the U.S. Federal Reserve can affect inflation to a significant extent through setting interest rates and through other operations. High interest rates and slow growth of the money supply are the traditional ways through which central banks fight or prevent inflation, though they have different approaches. For instance, some follow a symmetrical inflation target while others only control inflation when it rises above a target, whether express or implied.

Monetarists emphasize keeping the growth rate of money steady, and using monetary policy to control inflation (increasing interest rates, slowing the rise in the money supply). Keynesians emphasize reducing aggregate demand during economic expansions and increasing demand during recessions to keep inflation stable. Control of aggregate demand can be achieved using both monetary policy and fiscal policy (increased taxation or reduced government spending to reduce demand).

Fixed exchange rates

Under a fixed exchange rate currency regime, a country’s currency is tied in value to another single currency or to a basket of other currencies (or sometimes to another measure of value, such as gold). A fixed exchange rate is usually used to stabilize the value of a currency, vis-à-vis the currency it is pegged to. It can also be used as a means to control inflation. However, as the value of the reference currency rises and falls, so does the currency pegged to it. This essentially means that the inflation rate in the fixed exchange rate country is determined by the inflation rate of the country the currency is pegged to. In addition, a fixed exchange rate prevents a government from using domestic monetary policy in order to achieve macroeconomic stability.

Under the Bretton Woods agreement, most countries around the world had currencies that were fixed to the US dollar. This limited inflation in those countries, but also exposed them to the danger of speculative attacks. After the Bretton Woods agreement broke down in the early 1970s, countries gradually turned to floating exchange rates. However, in the later part of the 20th century, some countries reverted to a fixed exchange rate as part of an attempt to control inflation. This policy of using a fixed exchange rate to control inflation was used in many countries in South America in the later part of the 20th century (e.g. Argentina (1991–2002), Bolivia, Brazil, and Chile).

Gold standard

The gold standard is a monetary system in which a region’s common media of exchange are paper notes that are normally freely convertible into pre-set, fixed quantities of gold. The standard specifies how the gold backing would be implemented, including the amount of specie per currency unit. The currency itself has no innate value, but is accepted by traders because it can be redeemed for the equivalent specie. A U.S. silver certificate, for example, could be redeemed for an actual piece of silver.
The gold standard was partially abandoned via the international adoption of the Bretton Woods System. Under this system all other major currencies were tied at fixed rates to the dollar, which itself was tied to gold at the rate of $35 per ounce. The Bretton Woods system broke down in 1971, causing most countries to switch to fiat money – money backed only by the laws of the country.

According to Lawrence H. White, an F. A. Hayek Professor of Economic History "who values the Austrian tradition", economies based on the gold standard rarely experience inflation above 2 percent annually. However, historically, the U.S. saw inflation over 2% several times, and the peak of inflation under the gold standard was higher than any peak since the gold standard was abandoned. Under a gold standard, the long term rate of inflation (or deflation) would be determined by the growth rate of the supply of gold relative to total output. Critics argue that this will cause arbitrary fluctuations in the inflation rate, and that monetary policy would essentially be determined by gold mining.

Wage and price controls

Another method attempted in the past has been wage and price controls (“incomes policies”). Wage and price controls have been successful in wartime environments in combination with rationing. However, their use in other contexts is far more mixed. Notable failures of their use include the 1972 imposition of wage and price controls by Richard Nixon. More successful examples include the Prices and Incomes Accord in Australia and the Wassenaar Agreement in the Netherlands.

In general, wage and price controls are regarded as a temporary and exceptional measure, only effective when coupled with policies designed to reduce the underlying causes of inflation during the wage and price control regime, for example, winning the war being fought. They often have perverse effects, due to the distorted signals they send to the market. Artificially low prices often cause rationing and shortages and discourage future investment, resulting in yet further shortages. The usual economic analysis is that any product or service that is under-priced is overconsumed. For example, if the official price of bread is too low, there will be too little bread at official prices, and too little investment in bread making by the market to satisfy future needs, thereby exacerbating the problem in the long term.

Temporary controls may complement a recession as a way to fight inflation: the controls make the recession more efficient as a way to fight inflation (reducing the need to increase unemployment), while the recession prevents the kinds of distortions that controls cause when demand is high. However, in general the advice of economists is not to impose price controls but to liberalize prices by assuming that the economy will adjust and abandon unprofitable economic activity. The lower activity will place fewer demands on whatever commodities were driving inflation, whether labor or resources, and inflation will fall with total economic output. This often produces a severe recession, as productive capacity is reallocated, and it is thus often very unpopular with the people whose livelihoods are destroyed (see creative destruction).

Cost-of-living allowance

The real purchasing-power of fixed payments is eroded by inflation unless they are inflation-adjusted to keep their real values constant.
In many countries, employment contracts, pension benefits, and government entitlements (such as social security) are tied to a cost-of-living index, typically to the consumer price index. A cost-of-living allowance (COLA) adjusts salaries based on changes in a cost-of-living index. Salaries are typically adjusted annually in low inflation economies. During hyperinflation they are adjusted more often. They may also be tied to a cost-of-living index that varies by geographic location if the employee moves. Annual escalation clauses in employment contracts can specify retroactive or future percentage increases in worker pay which are not tied to any index. These negotiated increases in pay are colloquially referred to as cost-of-living adjustments (“COLAs”) or cost-of-living increases because of their similarity to increases tied to externally determined indexes.
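The arithmetic behind such an adjustment is simple index scaling. The sketch below is a minimal, hypothetical illustration - the salary and CPI figures are invented, and real contracts differ in exactly how and when the index ratio is applied.

```python
# Hypothetical cost-of-living adjustment (COLA): scale a payment by the change
# in a consumer price index (CPI) since the last adjustment.
# The index values and salary below are invented, for illustration only.

def cola_adjust(payment: float, cpi_old: float, cpi_new: float) -> float:
    """Return the payment scaled by the change in the price index."""
    return payment * (cpi_new / cpi_old)

salary = 40_000.00
cpi_last_year, cpi_this_year = 250.0, 257.5   # a 3% rise in the index

print(f"Adjusted salary: {cola_adjust(salary, cpi_last_year, cpi_this_year):,.2f}")
# Adjusted salary: 41,200.00
```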
A relay is an electrically operated switch. Many relays use an electromagnet to mechanically operate a switch, but other operating principles are also used, such as solid-state relays. Relays are used where it is necessary to control a circuit by a low-power signal (with complete electrical isolation between control and controlled circuits), or where several circuits must be controlled by one signal. The first relays were used in long distance telegraph circuits as amplifiers: they repeated the signal coming in from one circuit and re-transmitted it on another circuit. Relays were used extensively in telephone exchanges and early computers to perform logical operations.

A type of relay that can handle the high power required to directly control an electric motor or other loads is called a contactor. Solid-state relays control power circuits with no moving parts, instead using a semiconductor device to perform switching. Relays with calibrated operating characteristics and sometimes multiple operating coils are used to protect electrical circuits from overload or faults; in modern electric power systems these functions are performed by digital instruments still called "protective relays".

Basic design and operation

A simple electromagnetic relay consists of a coil of wire wrapped around a soft iron core, an iron yoke which provides a low reluctance path for magnetic flux, a movable iron armature, and one or more sets of contacts (there are two in the relay pictured). The armature is hinged to the yoke and mechanically linked to one or more sets of moving contacts. It is held in place by a spring so that when the relay is de-energized there is an air gap in the magnetic circuit. In this condition, one of the two sets of contacts in the relay pictured is closed, and the other set is open. Other relays may have more or fewer sets of contacts depending on their function. The relay in the picture also has a wire connecting the armature to the yoke. This ensures continuity of the circuit between the moving contacts on the armature, and the circuit track on the printed circuit board (PCB) via the yoke, which is soldered to the PCB.

When an electric current is passed through the coil it generates a magnetic field that activates the armature, and the consequent movement of the movable contact(s) either makes or breaks (depending upon construction) a connection with a fixed contact. If the set of contacts was closed when the relay was de-energized, then the movement opens the contacts and breaks the connection, and vice versa if the contacts were open. When the current to the coil is switched off, the armature is returned by a force, approximately half as strong as the magnetic force, to its relaxed position. Usually this force is provided by a spring, but gravity is also used commonly in industrial motor starters. Most relays are manufactured to operate quickly.
In a low-voltage application this reduces noise; in a high voltage or current application it reduces arcing. When the coil is energized with direct current, a diode is often placed across the coil to dissipate the energy from the collapsing magnetic field at deactivation, which would otherwise generate a voltage spike dangerous to semiconductor circuit components. Some automotive relays include a diode inside the relay case. Alternatively, a contact protection network consisting of a capacitor and resistor in series (snubber circuit) may absorb the surge. If the coil is designed to be energized with alternating current (AC), a small copper "shading ring" can be crimped to the end of the solenoid, creating a small out-of-phase current which increases the minimum pull on the armature during the AC cycle. A latching relay (also called "impulse", "keep", or "stay" relays) maintains either contact position indefinitely without power applied to the coil. The advantage is that one coil consumes power only for an instant while the relay is being switched, and the relay contacts retain this setting across a power outage. A latching relay allows remote control of building lighting without the hum that may be produced from a continuously (AC) energized coil. In one mechanism, two opposing coils with an over-center spring or permanent magnet hold the contacts in position after the coil is de-energized. A pulse to one coil turns the relay on and a pulse to the opposite coil turns the relay off. This type is widely used where control is from simple switches or single-ended outputs of a control system, and such relays are found in avionics and numerous industrial applications. Another latching type has a remanent core that retains the contacts in the operated position by the remanent magnetism in the core. This type requires a current pulse of opposite polarity to release the contacts. A variation uses a permanent magnet that produces part of the force required to close the contact; the coil supplies sufficient force to move the contact open or closed by aiding or opposing the field of the permanent magnet. A polarity controlled relay needs changeover switches or an H bridge drive circuit to control it. The relay may be less expensive than other types, but this is partly offset by the increased costs in the external circuit. In another type, a ratchet relay has a ratchet mechanism that holds the contacts closed after the coil is momentarily energized. A second impulse, in the same or a separate coil, releases the contacts. This type may be found in certain cars, for headlamp dipping and other functions where alternating operation on each switch actuation is needed. All three of these basic types of latching relay are currently available and widely used. An earth leakage circuit breaker includes a specialized latching relay. Some early computers used ordinary relays as a kind of latch—they store bits in ordinary wire spring relays or reed relays by feeding an output wire back as an input, resulting in a feedback loop or sequential circuit. Such an electrically-latching relay requires continuous power to maintain state, unlike magnetically latching relays or mechanically racheting relays. In computer memories, latching relays and other relays were replaced by delay line memory, which in turn was replaced by a series of ever-faster and ever-smaller memory technologies. A reed relay is a reed switch enclosed in a solenoid. 
The switch has a set of contacts inside an evacuated or inert gas-filled glass tube which protects the contacts against atmospheric corrosion; the contacts are made of magnetic material that makes them move under the influence of the field of the enclosing solenoid or an external magnet. Reed relays can switch faster than larger relays and require very little power from the control circuit. However, they have relatively low switching current and voltage ratings. Though rare, the reeds can become magnetized over time, which makes them stick 'on' even when no current is present; changing the orientation of the reeds with respect to the solenoid's magnetic field can resolve this problem. Sealed contacts with mercury-wetted contacts have longer operating lives and less contact chatter than any other kind of relay. A mercury-wetted reed relay is a form of reed relay in which the contacts are wetted with mercury. Such relays are used to switch low-voltage signals (one volt or less) where the mercury reduces the contact resistance and associated voltage drop, for low-current signals where surface contamination may make for a poor contact, or for high-speed applications where the mercury eliminates contact bounce. Mercury wetted relays are position-sensitive and must be mounted vertically to work properly. Because of the toxicity and expense of liquid mercury, these relays are now rarely used. The mercury-wetted relay has one particular advantage, in that the contact closure appears to be virtually instantaneous, as the mercury globules on each contact coalesce. The current rise time through the contacts is generally considered to be a few picoseconds, however in a practical circuit it will be limited by the inductance of the contacts and wiring. It was quite common, before the restrictions on the use of mercury, to use a mercury-wetted relay in the laboratory as a convenient means of generating fast rise time pulses, however although the rise time may be picoseconds, the exact timing of the event is, like all other types of relay, subject to considerable jitter, possibly milliseconds, due to mechanical imperfections. The same coalescence process causes another effect, which is a nuisance in some applications. The contact resistance is not stable immediately after contact closure, and drifts, mostly downwards, for several seconds after closure, the change perhaps being 0.5 ohm. A mercury relay is a relay that uses mercury as the switching element. They are used where contact erosion would be a problem for conventional relay contacts. Owing to environmental considerations about significant amount of mercury used and modern alternatives, they are now comparatively uncommon. A polarized relay places the armature between the poles of a permanent magnet to increase sensitivity. Polarized relays were used in middle 20th Century telephone exchanges to detect faint pulses and correct telegraphic distortion. The poles were on screws, so a technician could first adjust them for maximum sensitivity and then apply a bias spring to set the critical current that would operate the relay. Machine tool relay A machine tool relay is a type standardized for industrial control of machine tools, transfer machines, and other sequential control. They are characterized by a large number of contacts (sometimes extendable in the field) which are easily converted from normally-open to normally-closed status, easily replaceable coils, and a form factor that allows compactly installing many relays in a control panel. 
Although such relays once were the backbone of automation in such industries as automobile assembly, the programmable logic controller (PLC) mostly displaced the machine tool relay from sequential control applications. A relay allows circuits to be switched by electrical equipment: for example, a timer circuit with a relay could switch power at a preset time. For many years relays were the standard method of controlling industrial electronic systems. A number of relays could be used together to carry out complex functions (relay logic). The principle of relay logic is based on relays which energize and de-energize associated contacts. Relay logic is the predecessor of ladder logic, which is commonly used in programmable logic controllers. Where radio transmitters and receivers share a common antenna, often a coaxial relay is used as a TR (transmit-receive) relay, which switches the antenna from the receiver to the transmitter. This protects the receiver from the high power of the transmitter. Such relays are often used in transceivers which combine transmitter and receiver in one unit. The relay contacts are designed not to reflect any radio frequency power back toward the source, and to provide very high isolation between receiver and transmitter terminals. The characteristic impedance of the relay is matched to the transmission line impedance of the system, for example, 50 ohms. A contactor is a heavy-duty relay used for switching electric motors and lighting loads, but contactors are not generally called relays. Continuous current ratings for common contactors range from 10 amps to several hundred amps. High-current contacts are made with alloys containing silver. The unavoidable arcing causes the contacts to oxidize; however, silver oxide is still a good conductor. Contactors with overload protection devices are often used to start motors. Contactors can make loud sounds when they operate, so they may be unfit for use where noise is a chief concern. A contactor is an electrically controlled switch used for switching a power circuit, similar to a relay except with higher current ratings. A contactor is controlled by a circuit which has a much lower power level than the switched circuit. Contactors come in many forms with varying capacities and features. Unlike a circuit breaker, a contactor is not intended to interrupt a short circuit current. Contactors range from those having a breaking current of several amperes to thousands of amperes and 24 V DC to many kilovolts. The physical size of contactors ranges from a device small enough to pick up with one hand, to large devices approximately a meter (yard) on a side. A solid state relay or SSR is a solid state electronic component that provides a similar function to an electromechanical relay but does not have any moving components, increasing long-term reliability. A solid-state relay uses a thyristor, TRIAC or other solid-state switching device, activated by the control signal, to switch the controlled load, instead of a solenoid. An optocoupler (a light-emitting diode (LED) coupled with a photo transistor) can be used to isolate control and controlled circuits. As every solid-state device has a small voltage drop across it, this voltage drop limits the amount of current a given SSR can handle. The minimum voltage drop for such a relay is a function of the material used to make the device. Solid-state relays rated to handle as much as 1,200 amperes have become commercially available. 
Compared to electromagnetic relays, they may be falsely triggered by transients and in general may be susceptible to damage by extreme cosmic ray and EMP episodes. Solid state contactor relay A solid state contactor is a heavy-duty solid state relay, including the necessary heat sink, used where frequent on/off cycles are required, such as with electric heaters, small electric motors, and lighting loads. There are no moving parts to wear out and there is no contact bounce due to vibration. They are activated by AC control signals or DC control signals from Programmable logic controller (PLCs), PCs, Transistor-transistor logic (TTL) sources, or other microprocessor and microcontroller controls. A Buchholz relay is a safety device sensing the accumulation of gas in large oil-filled transformers, which will alarm on slow accumulation of gas or shut down the transformer if gas is produced rapidly in the transformer oil. Forced-guided contacts relay A forced-guided contacts relay has relay contacts that are mechanically linked together, so that when the relay coil is energized or de-energized, all of the linked contacts move together. If one set of contacts in the relay becomes immobilized, no other contact of the same relay will be able to move. The function of forced-guided contacts is to enable the safety circuit to check the status of the relay. Forced-guided contacts are also known as "positive-guided contacts", "captive contacts", "locked contacts", "mechanically-linked contacts", or "safety relays". Forced-guided contacts by themselves can not guarantee that all contacts are in the same state, however they do guarantee, subject to no gross mechanical fault, that no contacts are in opposite states. Otherwise, a relay with several normally open (NO) contacts may stick when energised, with some contacts closed and others still slightly open, due to mechanical tolerances. Similarly, a relay with several normally closed (NC) contacts may stick to the unenergised position, so that when energised, the circuit through one set of contacts is broken, with a marginal gap, while the other remains closed. By introducing both NO and NC contacts, or more commonly, changeover contacts, on the same relay, it then becomes possible to guarantee that if any NC contact is closed, all NO contacts are open, and conversely, if any NO contact is closed, all NC contacts are open. It is not possible to reliably ensure that any particular contact is closed, except by potentially intrusive and safety-degrading sensing of its circuit conditions, however in safety systems it is usually the NO state that is most important, and as explained above, this is reliably verifiable by detecting the closure of a contact of opposite sense. Forced-guided contact relays are made with different main contact sets, either NO, NC or changeover, and one or more auxiliary contact sets, often of reduced current or voltage rating, used for the monitoring system. Contacts may be all NO, all NC, changeover, or a mixture of these, for the monitoring contacts, so that the safety system designer can select the correct configuration for the particular application. Safety relays are used as part of an engineered safety system. Overload protection relay Electric motors need overcurrent protection to prevent damage from over-loading the motor, or to protect against short circuits in connecting cables or internal faults in the motor windings. 
The overload sensing devices are a form of heat-operated relay where a coil heats a bimetallic strip, or where a solder pot melts, releasing a spring to operate auxiliary contacts. These auxiliary contacts are in series with the coil. If the overload device senses excess current in the load, the coil is de-energized. This thermal protection operates relatively slowly, allowing the motor to draw higher starting currents before the protection relay will trip. Where the overload relay is exposed to the same environment as the motor, a useful though crude compensation for motor ambient temperature is provided.

The other common overload protection system uses an electromagnet coil in series with the motor circuit that directly operates contacts. This is similar to a control relay but requires a rather high fault current to operate the contacts. To prevent short overcurrent spikes from causing nuisance triggering, the armature movement is damped with a dashpot. The thermal and magnetic overload detections are typically used together in a motor protection relay.

Electronic overload protection relays measure motor current and can estimate motor winding temperature using a "thermal model" of the motor armature system that can be set to provide more accurate motor protection. Some motor protection relays include temperature detector inputs for direct measurement from a thermocouple or resistance thermometer sensor embedded in the winding.

Vacuum relays

A vacuum relay is a sensitive relay having its contacts mounted in a highly evacuated glass housing, permitting it to handle radio-frequency voltages as high as 20,000 volts without flashover between contacts even though the contact spacing is only a few hundredths of an inch when open.

Pole and throw

Since relays are switches, the terminology applied to switches is also applied to relays; a relay switches one or more poles, each of whose contacts can be thrown by energizing the coil in one of three ways:
- Normally-open (NO) contacts connect the circuit when the relay is activated; the circuit is disconnected when the relay is inactive. It is also called a Form A contact or "make" contact. NO contacts may also be distinguished as "early-make" or NOEM, which means that the contacts close before the button or switch is fully engaged.
- Normally-closed (NC) contacts disconnect the circuit when the relay is activated; the circuit is connected when the relay is inactive. It is also called a Form B contact or "break" contact. NC contacts may also be distinguished as "late-break" or NCLB, which means that the contacts stay closed until the button or switch is fully disengaged.
- Change-over (CO), or double-throw (DT), contacts control two circuits: one normally-open contact and one normally-closed contact with a common terminal. It is also called a Form C contact or "transfer" contact ("break before make"). If this type of contact utilizes a "make before break" functionality, then it is called a Form D contact.

The following designations are commonly encountered:
- SPST – Single Pole Single Throw. These have two terminals which can be connected or disconnected. Including two for the coil, such a relay has four terminals in total. It is ambiguous whether the pole is normally open or normally closed. The terminology "SPNO" and "SPNC" is sometimes used to resolve the ambiguity.
- SPDT – Single Pole Double Throw. A common terminal connects to either of two others. Including two for the coil, such a relay has five terminals in total.
- DPST – Double Pole Single Throw. These have two pairs of terminals.
Equivalent to two SPST switches or relays actuated by a single coil. Including two for the coil, such a relay has six terminals in total. The poles may be Form A or Form B (or one of each).
- DPDT – Double Pole Double Throw. These have two rows of change-over terminals. Equivalent to two SPDT switches or relays actuated by a single coil. Such a relay has eight terminals, including the coil.

The "S" or "D" may be replaced with a number, indicating multiple switches connected to a single actuator. For example, 4PDT indicates a four pole double throw relay (with 12 terminals). EN 50005 is among the applicable standards for relay terminal numbering; a typical EN 50005-compliant SPDT relay's terminals would be numbered 11, 12, 14, A1 and A2 for the C, NC, NO, and coil connections, respectively.

DIN 72552 defines contact numbers in relays for automotive use:
- 85 = relay coil -
- 86 = relay coil +
- 87 = common contact
- 87a = normally closed contact
- 87b = normally open contact

Applications

Relays are used for:
- Amplifying a digital signal, switching a large amount of power with a small operating power. Some special cases are:
- Detecting and isolating faults on transmission and distribution lines by opening and closing circuit breakers (protection relays),
- Isolating the controlling circuit from the controlled circuit when the two are at different potentials, for example when controlling a mains-powered device from a low-voltage switch. The latter is often applied to control office lighting as the low voltage wires are easily installed in partitions, which may be often moved as needs change. They may also be controlled by room occupancy detectors to conserve energy,
- Logic functions. For example, the boolean AND function is realised by connecting normally open relay contacts in series, the OR function by connecting normally open contacts in parallel. The change-over or Form C contacts perform the XOR (exclusive or) function. Similar functions for NAND and NOR are accomplished using normally closed contacts. The Ladder programming language is often used for designing relay logic networks. (A small software sketch of these contact arrangements appears at the end of this article.)
- The application of Boolean Algebra to relay circuit design was formalized by Claude Shannon in A Symbolic Analysis of Relay and Switching Circuits.
- Early computing. Before vacuum tubes and transistors, relays were used as logical elements in digital computers. See electro-mechanical computers such as the ARRA, Harvard Mark II, Zuse Z2, and Zuse Z3.
- Safety-critical logic. Because relays are much more resistant than semiconductors to nuclear radiation, they are widely used in safety-critical logic, such as the control panels of radioactive waste-handling machinery.
- Telephone switching. Electromechanical switching systems including Strowger and Crossbar telephone exchanges made extensive use of relays in ancillary control circuits. The Relay Automatic Telephone Company also manufactured telephone exchanges based solely on relay switching techniques designed by Gotthilf Ansgarius Betulander. The first public relay-based telephone exchange in the UK was installed in Fleetwood on 15 July 1922 and remained in service until 1959.
- Time delay functions. Relays can be modified to delay opening or delay closing a set of contacts. A very short (a fraction of a second) delay would use a copper disk between the armature and moving blade assembly. Current flowing in the disk maintains the magnetic field for a short time, lengthening release time. For a slightly longer (up to a minute) delay, a dashpot is used.
A dashpot is a piston filled with fluid that is allowed to escape slowly. The time period can be varied by increasing or decreasing the flow rate. For longer time periods, a mechanical clockwork timer is installed. - Vehicle battery isolation. A 12v relay is often used to isolate any second battery in cars, 4WDs, RVs and boats. - Switching to a standby power supply. Relay application considerations Selection of an appropriate relay for a particular application requires evaluation of many different factors: - Number and type of contacts – normally-open, normally-closed, (double-throw) - Contact sequence – "Make before Break" or "Break before Make". For example, the old style telephone exchanges required Make-before-break so that the connection didn't get dropped while dialing the number. - Rating of contacts – small relays switch a few amperes, large contactors are rated for up to 3000 amperes, alternating or direct current - Voltage rating of contacts – typical control relays rated 300 VAC or 600 VAC, automotive types to 50 VDC, special high-voltage relays to about 15,000 V - Operating lifetime, useful life - the number of times the relay can be expected to operate reliably. There is both a mechanical life and a contact life. The contact life is naturally affected by the kind of load being switched: switching while "wet" (under load) causes undesired arcing between the contacts, eventually leading to contacts that weld shut or contacts that fail due to a build up of contact surface damage caused by the destructive arc energy. - Coil voltage – machine-tool relays usually 24 VDC, 120 or 250 VAC, relays for switchgear may have 125 V or 250 VDC coils, "sensitive" relays operate on a few milliamperes - Coil current - including minimum current required to operate reliably and minimum current to hold. Also effects of power dissipation on coil temperature at various duty cycles. - Package/enclosure – open, touch-safe, double-voltage for isolation between circuits, explosion proof, outdoor, oil and splash resistant, washable for printed circuit board assembly - Operating environment - minimum and maximum operating temperatures and other environmental considerations such as effects of humidity and salt - Assembly – Some relays feature a sticker that keeps the enclosure sealed to allow PCB post soldering cleaning, which is removed once assembly is complete. - Mounting – sockets, plug board, rail mount, panel mount, through-panel mount, enclosure for mounting on walls or equipment - Switching time – where high speed is required - "Dry" contacts – when switching very low level signals, special contact materials may be needed such as gold-plated contacts - Contact protection – suppress arcing in very inductive circuits - Coil protection – suppress the surge voltage produced when switching the coil current - Isolation between coil contacts - Aerospace or radiation-resistant testing, special quality assurance - Expected mechanical loads due to acceleration – some relays used in aerospace applications are designed to function in shock loads of 50 g or more - Size - smaller relays often resist mechanical vibration and shock better than larger relays, because of the lower inertia of the moving parts and the higher natural frequencies of smaller parts. Larger relays often handle higher voltage and current than smaller relays. 
- Accessories such as timers, auxiliary contacts, pilot lamps, and test buttons
- Regulatory approvals
- Stray magnetic linkage between coils of adjacent relays on a printed circuit board.

There are many considerations involved in the correct selection of a control relay for a particular application. These considerations include factors such as speed of operation, sensitivity, and hysteresis. Although typical control relays operate in the 5 ms to 20 ms range, relays with switching speeds as fast as 100 µs are available. Reed relays, which are actuated by low currents and switch fast, are suitable for controlling small currents.

As for any switch, the current through the relay contacts (unrelated to the current through the coil) must not exceed a certain value to avoid damage. In the particular case of high-inductance circuits such as motors, other issues must be addressed. When an inductance is connected to a power source, an input surge current or electromotor starting current larger than the steady current exists. When the circuit is broken, the current cannot change instantaneously, which creates a potentially damaging spark across the separating contacts. Consequently, for relays which may be used to control inductive loads, we must specify the maximum current that may flow through the relay contacts when it actuates, the make rating; the continuous rating; and the break rating. The make rating may be several times larger than the continuous rating, which is itself larger than the break rating.

Control relays should not be operated above rated temperature because of resulting increased degradation and fatigue. Common practice is to derate 20 degrees Celsius from the maximum rated temperature limit. Relays operating at rated load are also affected by their environment. Oil vapors may greatly decrease the contact tip life, and dust or dirt may cause the tips to burn before their normal life expectancy. Control relay life cycle varies from 50,000 to over one million cycles depending on the electrical loads of the contacts, duty cycle, application, and the extent to which the relay is derated. When a control relay is operating at its derated value, it is controlling a lower value of current than its maximum make and break ratings. This is often done to extend the operating life of the control relay. The table lists the relay derating factors for typical industrial control applications.

(Table: derating factors by type of load, as a percentage of the rated value.)

Switching while "wet" (under load) causes undesired arcing between the contacts, eventually leading to contacts that weld shut or contacts that fail due to a build-up of contact surface damage caused by the destructive arc energy. Without adequate contact protection, the occurrence of electric current arcing causes significant degradation of the contacts in relays, which suffer significant and visible damage. Every time a relay transitions either from a closed to an open state (break arc) or from an open to a closed state (make arc & bounce arc), under load, an electrical arc can occur between the two contact points (electrodes) of the relay. The break arc is typically more energetic and thus more destructive. The heat energy contained in the resulting electrical arc is very high (tens of thousands of degrees Fahrenheit), causing the metal on the contact surfaces to melt, pool and migrate with the current. The extremely high temperature of the arc cracks the surrounding gas molecules, creating ozone, carbon monoxide, and other compounds.
The arc energy slowly destroys the contact metal, causing some material to escape into the air as fine particulate matter. This very activity causes the material in the contacts to degrade quickly, resulting in device failure. This contact degradation drastically limits the overall life of a relay to a range of about 10,000 to 100,000 operations, a level far below the mechanical life of the same device, which can be in excess of 20 million operations.

Protective relays

For protection of electrical apparatus and transmission lines, electromechanical relays with accurate operating characteristics were used to detect overload, short-circuits, and other faults. While many such relays remain in use, digital devices now provide equivalent protective functions.

Railway signalling

Railway signalling relays are large considering the mostly small voltages (less than 120 V) and currents (perhaps 100 mA) that they switch. Contacts are widely spaced to prevent flashovers and short circuits over a lifetime that may exceed fifty years. BR930 series plug-in relays are widely used on railways following British practice. These are 120 mm high, 180 mm deep and 56 mm wide and weigh about 1400 g, and can have up to 16 separate contacts, for example, 12 make and 4 break contacts.

The BR Q-type relays are available in a number of different configurations:
- QN1 Neutral
- QL1 Latched - see above
- QNA1 AC-immune
- QBA1 Biased AC-immune - see above
- QNN1 Twin Neutral 2x4-4 or 2x6-2
- QBCA1 Contactor for high current applications such as point motors.
- QTDx - timer

Since rail signal circuits must be highly reliable, special techniques are used to detect and prevent failures in the relay system. To protect against false feeds, double switching relay contacts are often used on both the positive and negative side of a circuit, so that two false feeds are needed to cause a false signal. Not all relay circuits can be proved, so there is reliance on construction features such as carbon-to-silver contacts to resist lightning-induced contact welding and to provide AC immunity. Opto-isolators are also used in some instances with railway signalling, especially where only a single contact is to be switched. Signalling relays, typical circuits, drawing symbols, abbreviations & nomenclature, etc. come in a number of schools, including the United States, France, Germany, and the United Kingdom.

History

A simple device, which we now call a relay, was included in the original 1840 telegraph patent of Samuel Morse. The mechanism described acted as a digital amplifier, repeating the telegraph signal, and thus allowing signals to be propagated as far as desired. This overcame the problem of limited range of earlier telegraphy schemes. The word relay appears in the context of electromagnetic operations from 1860.
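As noted in the applications list above, normally-open contacts wired in series behave like a Boolean AND, normally-open contacts wired in parallel behave like an OR, and a normally-closed contact inverts its coil signal. The snippet below is a purely illustrative software model of those contact arrangements; it does not describe any real relay hardware, driver library, or ladder-logic tool.

```python
# Illustrative software model of relay contact logic (see Applications above):
# normally-open (NO) contacts in series act as AND, NO contacts in parallel act
# as OR, and a normally-closed (NC) contact inverts the state of its coil.

def no_contact(coil_energized: bool) -> bool:
    """Normally-open contact: closed (conducting) only while the coil is energized."""
    return coil_energized

def nc_contact(coil_energized: bool) -> bool:
    """Normally-closed contact: open while the coil is energized."""
    return not coil_energized

def series(*contacts: bool) -> bool:
    """Contacts in series: current flows only if every contact is closed (AND)."""
    return all(contacts)

def parallel(*contacts: bool) -> bool:
    """Contacts in parallel: current flows if any path is closed (OR)."""
    return any(contacts)

a, b = True, False
print(series(no_contact(a), no_contact(b)))    # False  (A AND B)
print(parallel(no_contact(a), no_contact(b)))  # True   (A OR B)
print(nc_contact(a))                           # False  (NOT A)
```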
Genes are the basic physical and functional units of heredity. They are molecular information that ultimately determines the traits possessed by any organism. A gene is a specific sequence of nucleotide bases on the DNA (deoxyribonucleic acid) molecule. The sequence of nucleotides specifies the information required for constructing proteins, which provide the structural components of cells and tissues as well as enzymes for essential biochemical reactions. Typically, the products of several genes are assembled to make a functional protein. Likewise, a single gene can be involved with the production of several different proteins.

- Main Article: Chromosome

Each DNA molecule contains many genes. The gene is a subunit or "reading frame" of information on long strands of DNA called chromosomes. Humans have 46 chromosomes, which range from 1.7 to 8.5 cm in length and contain about 1000 genes each. The human genome is estimated to comprise more than 30,000 genes. Human genes vary widely in length, often extending over thousands of bases, but only about 10% of the genome is known to include the protein-coding sequences (exons) of genes. Interspersed within many genes are intron sequences, which have no coding function. The balance of the genome is thought to consist of other noncoding regions (such as control sequences and intergenic regions), whose functions are obscure. (Source: Primer on Molecular Genetics by the U.S. Department of Energy.)

DNA is assembled from four different nucleotides:
- Adenosine triphosphate (ATP)
- Cytidine triphosphate (CTP)
- Guanosine triphosphate (GTP)
- Thymidine triphosphate (TTP)

These nucleotides are typically abbreviated with the letters ATP, CTP, GTP, and TTP, or simply A, C, G, and T.

- Main Article: Gene Expression

The gene is coded molecular information that provides the cell with instructions on how to make specific proteins. The code is specified by the sequence or linear arrangement of the 4 different nucleotides that make up DNA. An organelle in the cell called a ribosome reads this coded instruction and translates it into a sequence of amino acids that are used to make a protein. This process is known as gene expression.

All living organisms are composed largely of proteins; humans can synthesize at least 100,000 different kinds. Proteins are large, complex molecules made up of long chains of subunits called amino acids. Twenty different kinds of amino acids are usually found in proteins. Within the gene, each specific sequence of three DNA bases (a codon) directs the cell's protein-synthesizing machinery to add a specific amino acid. For example, the base sequence ATG codes for the amino acid methionine. Since 3 bases code for 1 amino acid, the protein coded by an average-sized gene (3000 bp) will contain 1000 amino acids. The genetic code is thus a series of codons that specify which amino acids are required to make up specific proteins.

(Table: examples of gene codons and the amino acids they represent.)

The protein-coding instructions from the genes are transmitted indirectly through messenger ribonucleic acid (mRNA), a transient intermediary molecule similar to a single strand of DNA. For the information within a gene to be expressed, a complementary RNA strand is produced (a process called transcription) from the DNA template in the nucleus. This mRNA is moved from the nucleus to the cellular cytoplasm, where it serves as the template for protein synthesis. The cell's protein-synthesizing machinery then translates the codons into a string of amino acids that will constitute the protein molecule for which it codes.
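To make the codon-to-amino-acid mapping concrete, the short sketch below translates a DNA coding sequence three bases at a time using a small lookup table. It is only an illustration: just a handful of the 64 codons are listed, the sequence is invented, and real gene expression proceeds through the mRNA intermediate described above.

```python
# Illustrative translation of a DNA coding sequence, one codon (three bases) at
# a time. Only a few of the 64 codons of the genetic code are included here.

CODON_TABLE = {
    "ATG": "Met",   # methionine, also the usual start codon
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "TAA": "STOP",  # one of the three stop codons
}

def translate(dna: str) -> list[str]:
    """Translate a DNA coding strand codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGGCAAATGGTAA"))   # ['Met', 'Gly', 'Lys', 'Trp']
```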
In the laboratory, the mRNA molecule can be isolated and used as a template to synthesize a complementary DNA (cDNA) strand, which can then be used to locate the corresponding genes on a chromosome map. The utility of this strategy is described in the section on physical mapping. - Main Article: Regulation of gene expression Regulation of the gene expression system precisely controls the amount of a gene product that is produced and can further modify the product after it is made. This exquisite control requires multiple regulatory input points. One very efficient point occurs at transcription, such that an mRNA is produced only when a gene product is needed. Cells also regulate gene expression by post-transcriptional modification; by allowing only a subset of the mRNAs to go on to translation; or by restricting translation of specific mRNAs to only when the product is needed. At other levels, cells regulate gene expression through DNA folding, chemical modification of the nucleotide bases, and intricate "feedback mechanisms" in which some of the gene's own protein product directs the cell to cease further protein production.
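The cDNA strategy mentioned above rests on base-pairing rules (A with T, G with C, and U in RNA pairing with A). Purely as an illustration of that bookkeeping, the sketch below derives a complementary DNA sequence from an mRNA sequence; the sequence is invented, and the laboratory procedure itself is of course biochemical rather than computational.

```python
# Illustrative derivation of a complementary DNA (cDNA) sequence from an mRNA
# sequence using Watson-Crick base pairing (A-T, G-C; U in RNA pairs with A).
# This models only the sequence bookkeeping, not the laboratory chemistry.

RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def cdna_from_mrna(mrna: str) -> str:
    """Return the cDNA strand complementary to the mRNA, written 5'->3'."""
    complement = "".join(RNA_TO_DNA_COMPLEMENT[base] for base in mrna)
    return complement[::-1]   # reverse, because the two strands are antiparallel

print(cdna_from_mrna("AUGGGCAAA"))   # TTTGCCCAT
```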
Make a Graph
In this graph worksheet, students analyze a data table and related line plot graph that compares Celsius and Fahrenheit temperatures. Students answer 2 questions about the graph.

Relationships Between Quantities and Reasoning with Equations and Their Graphs
Graphing all kinds of situations in one and two variables is the focus of this detailed unit of daily lessons, teaching notes, and assessments. Learners start with piece-wise functions and work their way through setting up and solving... (6th - 10th Math, CCSS: Designed)

Graphing Stories Ice Breaker
Kick off the school year with some math fun! New classmates share information about themselves by using graphs and interpreting data. The goal is to use a general graph shape, such as linear or exponential, and create labels for the axes... (6th - 12th Math, CCSS: Adaptable)

Get on your Mark, Get Set, Go! Collect, Interpret, and Represent Data Using a Bar Graph and a Circle Graph
Start an engaging data analysis study with a review of charts and graphs using the linked interactive presentation, which is both hilarious and comprehensive. There are 27 statistics-related vocabulary terms you can use in a word sort... (4th - 7th Math, CCSS: Designed)
Why do we balance chemical reaction equations?

All chemical reactions occurring in test tubes, industrial reactors, or nature can be described by reaction equations. For example, the reaction of water synthesis can be written as

H2 + O2 → H2O

This reaction contains the correct reactants - hydrogen and oxygen in their diatomic forms - and the correct product - a water molecule. We call such a reaction equation (unbalanced, but correctly listing all reactants and products) skeletal. Knowing the skeletal reaction equation we know what the reactants are and what the products are, but for quantitative predictions we need to balance the reaction equation.

Reaction balancing is based on mass preservation. We know that atoms don't appear or disappear. If an atom is present in a reactant (a compound entering the reaction), it must be present in one of the reaction products. The same happens with charge - charge is also preserved, just like mass is. We also know that compounds always have the same composition. A water molecule will always consist of one atom of oxygen and two atoms of hydrogen. It can be made when hydrogen reacts with oxygen - but these gases are usually present in the diatomic form of H2 and O2 molecules. To describe the reaction as it occurs we have to combine both mass preservation and molecular composition.

Let's take a look again at the above reaction of water synthesis - there are two oxygen atoms on the left side (in the form of O2) but only one atom of oxygen on the right side - one atom in one water molecule. The reaction equation is not balanced. Now let's take a look at the same reaction with added coefficients:

2H2 + O2 → 2H2O

We read it: two molecules of diatomic hydrogen react with one molecule of diatomic oxygen, producing two molecules of water. (Or, alternatively - two moles of diatomic hydrogen react with one mole of diatomic oxygen to produce two moles of water.)

Is this equation balanced? The ultimate test which allows you to check whether the reaction is correctly balanced is to count all types of atoms on both sides of the equation - they must be identical - and to check whether the charge on both sides of the equation is identical. Let's check oxygen - there is one molecule on the left side, containing two atoms, so there are two atoms of oxygen on the left side. On the right side there are two molecules, each containing one atom of oxygen, so there are two oxygen atoms on the right side as well. As the number of atoms of oxygen is identical on both sides, the reaction is balanced with respect to oxygen. Please check for yourself that the same reaction is also balanced with respect to hydrogen, with four hydrogen atoms on both sides.

So when is the reaction balanced? Firstly, it must have the same number of atoms of each element on both sides. Secondly, all coefficients must be integers. Finally, by convention, the coefficients should be the smallest possible whole numbers, sharing no common factor. Equations such as

H2 + 1/2O2 → H2O

4H2 + 2O2 → 4H2O

are incorrect - even if they are balanced in terms of the number of atoms. Note that such incorrect equations can appear during balancing - and they are perfectly valid as intermediate forms, they just have to be cleaned up before becoming the final version. In this case multiplying the first reaction equation by 2 and dividing the second one by 2 leads us to correctly balanced equations.

Please remember that when balancing equations you should never touch subscripts, since that would change the composition and therefore the substance itself.
All you can modify are the coefficients telling us how many molecules of the reagent entered the reaction, or have left it. If there are any charged species, you should also check whether the charge is balanced, just as the atoms are. But there is one important difference - charge may be negative or positive, and the sum of the charges can be zero, positive, or negative, while the number of atoms is always a positive number. So, we can easily tell that the neutralization reaction

H+ + OH- → H2O

is balanced - there are identical numbers of atoms on both sides, and the charges on the left side sum up to 0, which is also the total charge on the right side. At the same time the equation

H2 → 2H+

is not balanced - while there are identical numbers of atoms on both sides, charge has appeared on the right side from nowhere. Finally, note that not all reaction equations can be balanced - for example

H2O2 → H2O

will never be balanced, no matter what coefficients you use. More on that in the "when balancing fails" section.

Once you know how to balance equations on paper, you may check our equation balancing and stoichiometry calculator EBAS - it does most calculations immediately.
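A quick way to apply the "count every kind of atom on both sides" test described above is to tally atoms programmatically. The following sketch is a minimal checker, not a full balancer like EBAS: it assumes simple formulas (one- or two-letter element symbols with optional counts, no parentheses, hydrates or charges), and the reactions shown are just the examples from the text.

```python
# Minimal check that a reaction equation is balanced: count the atoms of each
# element on both sides. Handles simple formulas such as H2, O2 and H2O only
# (no parentheses, hydrates or charges) - an illustration of the bookkeeping.
import re
from collections import Counter

def count_atoms(side: list[tuple[int, str]]) -> Counter:
    """side is a list of (coefficient, formula) pairs, e.g. [(2, "H2"), (1, "O2")]."""
    totals = Counter()
    for coefficient, formula in side:
        for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            totals[element] += coefficient * (int(number) if number else 1)
    return totals

def is_balanced(reactants, products) -> bool:
    return count_atoms(reactants) == count_atoms(products)

# The skeletal equation H2 + O2 -> H2O fails the atom count...
print(is_balanced([(1, "H2"), (1, "O2")], [(1, "H2O")]))   # False
# ...while 2H2 + O2 -> 2H2O has the same atom counts on both sides.
print(is_balanced([(2, "H2"), (1, "O2")], [(2, "H2O")]))   # True
```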
Fourteenth Amendment to the United States Constitution

The Fourteenth Amendment (Amendment XIV) to the United States Constitution was adopted on July 9, 1868, as one of the Reconstruction Amendments. Often considered one of the most consequential amendments, it addresses citizenship rights and equal protection under the law and was proposed in response to issues related to former slaves following the American Civil War. The amendment was bitterly contested, particularly by the states of the defeated Confederacy, which were forced to ratify it in order to regain representation in Congress. The amendment, particularly its first section, is one of the most litigated parts of the Constitution, forming the basis for landmark Supreme Court decisions such as Brown v. Board of Education (1954) regarding racial segregation, Roe v. Wade (1973) regarding abortion (overturned in 2022), Bush v. Gore (2000) regarding the 2000 presidential election, and Obergefell v. Hodges (2015) regarding same-sex marriage. The amendment limits the actions of all state and local officials, and also those acting on behalf of such officials.

The amendment's first section includes several clauses: the Citizenship Clause, Privileges or Immunities Clause, Due Process Clause, and Equal Protection Clause. The Citizenship Clause provides a broad definition of citizenship, nullifying the Supreme Court's decision in Dred Scott v. Sandford (1857), which had held that Americans descended from African slaves could not be citizens of the United States. Since the Slaughter-House Cases (1873), the Privileges or Immunities Clause has been interpreted to do very little. The Due Process Clause prohibits state and local governments from depriving persons of life, liberty, or property without a fair procedure. The Supreme Court has ruled that this clause makes most of the Bill of Rights as applicable to the states as it is to the federal government, and that it imposes substantive and procedural requirements that state laws must satisfy. The Equal Protection Clause requires each state to provide equal protection under the law to all people, including all non-citizens, within its jurisdiction. This clause has been the basis for many decisions rejecting irrational or unnecessary discrimination against people belonging to various groups.

The second, third, and fourth sections of the amendment are seldom litigated. However, the second section's reference to "rebellion, or other crime" has been invoked as a constitutional ground for felony disenfranchisement. The fourth section was held, in Perry v. United States (1935), to prohibit a current Congress from abrogating a contract of debt incurred by a prior Congress. The fifth section gives Congress the power to enforce the amendment's provisions by "appropriate legislation"; however, under City of Boerne v. Flores (1997), this power may not be used to contradict a Supreme Court decision interpreting the amendment.

Section 1: Citizenship and civil rights

Section 1.
All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws. Section 1 of the amendment formally defines United States citizenship and also protects various civil rights from being abridged or denied by any state or state actor. Abridgment or denial of those civil rights by private persons is not addressed by this amendment. The Supreme Court held in Civil Rights Cases (1883) that the amendment was limited to "state action" and, therefore, did not authorize the Congress to outlaw racial discrimination by private individuals or organizations. However, Congress can sometimes reach such discrimination via other parts of the Constitution such as the Commerce Clause which Congress used to enact the Civil Rights Act of 1964—the Supreme Court upheld this approach in Heart of Atlanta Motel v. United States (1964). U.S. Supreme Court Justice Joseph P. Bradley commented in the Civil Rights Cases that "individual invasion of individual rights is not the subject-matter of the [Fourteenth] Amendment. It has a deeper and broader scope. It nullifies and makes void all state legislation, and state action of every kind, which impairs the privileges and immunities of citizens of the United States, or which injures them in life, liberty or property without due process of law, or which denies to any of them the equal protection of the laws." The Radical Republicans who advanced the Thirteenth Amendment hoped to ensure broad civil and human rights for the newly freed people—but its scope was disputed before it even went into effect. The framers of the Fourteenth Amendment wanted these principles enshrined in the Constitution to protect the new Civil Rights Act from being declared unconstitutional by the Supreme Court and also to prevent a future Congress from altering it by a mere majority vote. This section was also in response to violence against black people within the Southern States. The Joint Committee on Reconstruction found that only a Constitutional amendment could protect black people's rights and welfare within those states. The U.S. Supreme Court stated in Shelley v. Kraemer (1948) that the historical context leading to the Fourteenth Amendment's adoption must be taken into account, that this historical context reveals the Amendment's fundamental purpose and that the provisions of the Amendment are to be construed in light of this fundamental purpose. In its decision the Court said: The historical context in which the Fourteenth Amendment became a part of the Constitution should not be forgotten. Whatever else the framers sought to achieve, it is clear that the matter of primary concern was the establishment of equality in the enjoyment of basic civil and political rights and the preservation of those rights from discriminatory action on the part of the States based on considerations of race or color. [...] [T]he provisions of the Amendment are to be construed with this fundamental purpose in mind. Section 1 has been the most frequently litigated part of the amendment, and this amendment in turn has been the most frequently litigated part of the Constitution. 
The Citizenship Clause overruled the Supreme Court's Dred Scott decision that black people were not citizens and could not become citizens, nor enjoy the benefits of citizenship. Some members of Congress voted for the Fourteenth Amendment in order to eliminate doubts about the constitutionality of the Civil Rights Act of 1866, or to ensure that no subsequent Congress could later repeal or alter the main provisions of that Act. The Civil Rights Act of 1866 had granted citizenship to all people born in the United States if they were not subject to a foreign power, and this clause of the Fourteenth Amendment constitutionalized this rule. According to Garrett Epps, professor of constitutional law at the University of Baltimore, "Only one group is not 'subject to the jurisdiction' [of the United States] – accredited foreign diplomats and their families, who can be expelled by the federal government but not arrested or tried."

The U.S. Supreme Court stated in Elk v. Wilkins (1884) with respect to the purpose of the Citizenship Clause and the words "persons born or naturalized in the United States" and "subject to the jurisdiction thereof" in this context:

The main object of the opening sentence of the Fourteenth Amendment was to settle the question, upon which there had been a difference of opinion throughout the country and in this Court, as to the citizenship of free negroes (Scott v. Sandford, 19 How. 393), and to put it beyond doubt that all persons, white or black, and whether formerly slaves or not, born or naturalized in the United States, and owing no allegiance to any alien power, should be citizens of the United States and of the state in which they reside. Slaughterhouse Cases, 16 Wall. 36, 83 U. S. 73; Strauder v. West Virginia, 100 U. S. 303, 100 U. S. 306. This section contemplates two sources of citizenship, and two sources only: birth and naturalization. The persons declared to be citizens are "all persons born or naturalized in the United States, and subject to the jurisdiction thereof". The evident meaning of these last words is not merely subject in some respect or degree to the jurisdiction of the United States, but completely subject to their political jurisdiction and owing them direct and immediate allegiance. And the words relate to the time of birth in the one case, as they do to the time of naturalization in the other. Persons not thus subject to the jurisdiction of the United States at the time of birth cannot become so afterward except by being naturalized, either individually, as by proceedings under the naturalization acts, or collectively, as by the force of a treaty by which foreign territory is acquired.

There are varying interpretations of the original intent of Congress and of the ratifying states, based on statements made during the congressional debate over the amendment, as well as the customs and understandings prevalent at that time. Some of the major issues that have arisen about this clause are the extent to which it included Native Americans, its coverage of non-citizens legally present in the United States when they have a child, whether the clause allows revocation of citizenship, and whether the clause applies to illegal immigrants.

Historian Eric Foner, who has compared U.S. birthright citizenship with the practice of other countries, argues that: Many things claimed as uniquely American—a devotion to individual freedom, for example, or social opportunity—exist in other countries.
But birthright citizenship does make the United States (along with Canada) unique in the developed world. ... Birthright citizenship is one expression of the commitment to equality and the expansion of national consciousness that marked Reconstruction. ... Birthright citizenship is one legacy of the titanic struggle of the Reconstruction era to create a genuine democracy grounded in the principle of equality.

Garrett Epps, like Eric Foner, also stresses the equality aspect of the Fourteenth Amendment: Its centerpiece is the idea that citizenship in the United States is universal—that we are one nation, with one class of citizens, and that citizenship extends to everyone born here. Citizens have rights that neither the federal government nor any state can revoke at will; even undocumented immigrants—"persons", in the language of the amendment—have rights to due process and equal protection of the law.

During the original congressional debate over the amendment, Senator Jacob M. Howard of Michigan—the author of the Citizenship Clause—described the clause as having the same content, despite different wording, as the earlier Civil Rights Act of 1866, namely, that it excludes Native Americans who maintain their tribal ties and "persons born in the United States who are foreigners, aliens, who belong to the families of ambassadors or foreign ministers". According to historian Glenn W. LaFantasie of Western Kentucky University, "A good number of his fellow senators supported his view of the citizenship clause." Others also agreed that the children of ambassadors and foreign ministers were to be excluded. Senator James Rood Doolittle of Wisconsin asserted that all Native Americans were subject to United States jurisdiction, so that the phrase "Indians not taxed" would be preferable, but Senate Judiciary Committee Chairman Lyman Trumbull and Howard disputed this, arguing that the federal government did not have full jurisdiction over Native American tribes, which govern themselves and make treaties with the United States. In Elk v. Wilkins (1884), the clause's meaning was tested regarding whether birth in the United States automatically extended national citizenship. The Supreme Court held that Native Americans who voluntarily quit their tribes did not automatically gain national citizenship. The issue was resolved with the passage of the Indian Citizenship Act of 1924, which granted full U.S. citizenship to indigenous peoples.

Children born to foreign nationals

The Fourteenth Amendment provides that children born in the United States and subject to its jurisdiction become American citizens at birth. The principal framer John Armor Bingham said during the 39th United States Congress two years before its passing: I find no fault with the introductory clause, which is simply declaratory of what is written in the Constitution, that every human being born within the jurisdiction of the United States of parents not owing allegiance to any foreign sovereignty is, in the language of your Constitution itself, a natural-born citizen; but, sir, I may be allowed to say further that I deny that the Congress of the United States ever had the power, or color of power to say that any man born within the jurisdiction of the United States, not owing a foreign allegiance, is not and shall not be a citizen of the United States.
[emphasis added]

At the time of the amendment's passage, President Andrew Johnson and three senators, including Trumbull, the author of the Civil Rights Act, asserted that both the Civil Rights Act and the Fourteenth Amendment would confer citizenship on children born to foreign nationals in the United States. Senator Edgar Cowan of Pennsylvania had a decidedly different opinion. Some scholars dispute whether the Citizenship Clause should apply to the children of unauthorized immigrants today, as "the problem ... did not exist at the time". In the 21st century, Congress has occasionally discussed passing a statute or a constitutional amendment to reduce the practice of "birth tourism", in which a foreign national gives birth in the United States to gain the child's citizenship.

The clause's meaning with regard to a child of immigrants was tested in United States v. Wong Kim Ark (1898). The Supreme Court held that under the Fourteenth Amendment, a man born within the United States to Chinese citizens who have a permanent domicile and residence in the United States and are carrying out business in the United States—and whose parents were not employed in a diplomatic or other official capacity by a foreign power—was a citizen of the United States. Subsequent decisions have applied the principle to the children of foreign nationals of non-Chinese descent. According to the Foreign Affairs Manual, which is published by the State Department, "Despite widespread popular belief, U.S. military installations abroad and U.S. diplomatic or consular facilities abroad are not part of the United States within the meaning of the [Fourteenth] Amendment."

Loss of citizenship

Loss of national citizenship is possible only under the following circumstances:
- Fraud in the naturalization process. Technically, this is not a loss of citizenship but rather a voiding of the purported naturalization and a declaration that the immigrant never was a citizen of the United States.
- Affiliation with an "anti-American" organization (such as the Communist party or other totalitarian party, or a terrorist organization) within five years of naturalization. The State Department views such affiliations as sufficient evidence that an applicant must have lied or concealed evidence in the naturalization process.
- Other-than-honorable discharge from the U.S. armed forces before five years of honorable service, if honorable service was the basis for the naturalization.
- Voluntary relinquishment of citizenship. This may be accomplished either through renunciation procedures specially established by the State Department or through other actions that demonstrate desire to give up national citizenship.

For much of the country's history, voluntary acquisition or exercise of a foreign citizenship was considered sufficient cause for revocation of national citizenship. This concept was enshrined in a series of treaties between the United States and other countries (the Bancroft Treaties). However, the Supreme Court repudiated this concept in Afroyim v. Rusk (1967), as well as Vance v. Terrazas (1980), holding that the Citizenship Clause of the Fourteenth Amendment barred Congress from revoking citizenship. It has been argued, however, that Congress can revoke citizenship that it has previously granted to a person not born in the United States.
Privileges or Immunities Clause

The Privileges or Immunities Clause, which protects the privileges and immunities of national citizenship from interference by the states, was patterned after the Privileges and Immunities Clause of Article IV, which protects the privileges and immunities of state citizenship from interference by other states. In the Slaughter-House Cases (1873), the Supreme Court concluded that the Constitution recognized two separate types of citizenship—"national citizenship" and "state citizenship"—and the Court held that the Privileges or Immunities Clause prohibits states from interfering only with privileges and immunities possessed by virtue of national citizenship. The Court concluded that the privileges and immunities of national citizenship included only those rights that "owe their existence to the Federal government, its National character, its Constitution, or its laws." The Court recognized few such rights, including access to seaports and navigable waterways, the right to run for federal office, the protection of the federal government while on the high seas or in the jurisdiction of a foreign country, the right to travel to the seat of government, the right to peaceably assemble and petition the government, the privilege of the writ of habeas corpus, and the right to participate in the government's administration. This decision has not been overruled and has been specifically reaffirmed several times. Largely as a result of the narrowness of the Slaughter-House opinion, this clause subsequently lay dormant for well over a century.

In Saenz v. Roe (1999), the Court ruled that a component of the "right to travel" is protected by the Privileges or Immunities Clause: Despite fundamentally differing views concerning the coverage of the Privileges or Immunities Clause of the Fourteenth Amendment, most notably expressed in the majority and dissenting opinions in the Slaughter-House Cases (1873), it has always been common ground that this Clause protects the third component of the right to travel. Writing for the majority in the Slaughter-House Cases, Justice Miller explained that one of the privileges conferred by this Clause "is that a citizen of the United States can, of his own volition, become a citizen of any State of the Union by a bona fide residence therein, with the same rights as other citizens of that State." (emphasis added) Justice Miller actually wrote in the Slaughter-House Cases that the right to become a citizen of a state (by residing in that state) "is conferred by the very article under consideration" (emphasis added), rather than by the "clause" under consideration.

In McDonald v. Chicago (2010), Justice Clarence Thomas, while concurring with the majority in incorporating the Second Amendment against the states, declared that he reached this conclusion through the Privileges or Immunities Clause instead of the Due Process Clause. Randy Barnett has referred to Justice Thomas's concurring opinion as a "complete restoration" of the Privileges or Immunities Clause. In Timbs v. Indiana (2019), Justice Thomas and Justice Neil Gorsuch, in separate concurring opinions, declared the Excessive Fines Clause of the Eighth Amendment was incorporated against the states through the Privileges or Immunities Clause instead of the Due Process Clause.
Due Process Clause

Due process deals with the administration of justice, and the Due Process Clause thus acts as a safeguard against arbitrary denial of life, liberty, or property by the government outside the sanction of law. The Supreme Court has consequently described due process as "the protection of the individual against arbitrary action." In 1855, the Supreme Court explained that, to ascertain whether a process is due process, the first step is to "examine the constitution itself, to see whether this process be in conflict with any of its provisions."

In Hurtado v. California (1884), the U.S. Supreme Court said: Due process of law in the [Fourteenth Amendment] refers to that law of the land in each state which derives its authority from the inherent and reserved powers of the state, exerted within the limits of those fundamental principles of liberty and justice which lie at the base of all our civil and political institutions, and the greatest security for which resides in the right of the people to make their own laws, and alter them at their pleasure.

Due process has not been reduced to any formula; its content cannot be determined by reference to any code. The best that can be said is that, through the course of this Court's decisions, it has represented the balance which our Nation, built upon postulates of respect for the liberty of the individual, has struck between that liberty and the demands of organized society. If the supplying of content to this constitutional concept has of necessity been a rational process, it certainly has not been one where judges have felt free to roam where unguided speculation might take them. The balance of which I speak is the balance struck by this country, having regard to what history teaches are the traditions from which it developed as well as the traditions from which it broke. That tradition is a living thing. A decision of this Court which radically departs from it could not long survive, while a decision which builds on what has survived is likely to be sound. No formula could serve as a substitute, in this area, for judgment and restraint.
--Justice John M. Harlan II in his dissenting opinion in Poe v. Ullman (1961).

The Due Process Clause has been used to strike down legislation. The Fifth and Fourteenth Amendments, for example, do not prohibit governmental regulation for the public welfare. Instead, they only direct the process by which such regulation occurs. As the Court has held before, such due process "demands only that the law shall not be unreasonable, arbitrary, or capricious, and that the means selected shall have a real and substantial relation to the object sought to be attained." Despite the foregoing citation, the Due Process Clause enables the Supreme Court to exercise its power of judicial review, "because the due process clause has been held by the Court applicable to matters of substantive law as well as to matters of procedure." Justice Louis Brandeis observed in his concurring opinion in Whitney v. California, 274 U.S. 357, 373 (1927), that "[d]espite arguments to the contrary which had seemed to me persuasive, it is settled that the due process clause of the Fourteenth Amendment applies to matters of substantive law as well as to matters of procedure. Thus all fundamental rights comprised within the term liberty are protected by the Federal Constitution from invasion by the States."
The Due Process Clause of the Fourteenth Amendment applies only against the states, but it is otherwise textually identical to the Due Process Clause of the Fifth Amendment, which applies against the federal government; both clauses have been interpreted to encompass identical doctrines of procedural due process and substantive due process. Procedural due process is the guarantee of a fair legal process when the government tries to interfere with a person's protected interests in life, liberty, or property, and substantive due process is the guarantee that the fundamental rights of citizens will not be encroached on by government. Furthermore, as observed by Justice John M. Harlan II in his dissenting opinion in Poe v. Ullman, 367 U.S. 497, 541 (1961), quoting Hurtado v. California, 110 U.S. 516, 532 (1884), "the guaranties of due process, though having their roots in Magna Carta's 'per legem terrae' and considered as procedural safeguards 'against executive usurpation and tyranny', have in this country 'become bulwarks also against arbitrary legislation'." In Planned Parenthood v. Casey (1992), it was observed: "Although a literal reading of the Clause might suggest that it governs only the procedures by which a State may deprive persons of liberty, for at least 105 years, since Mugler v. Kansas, 123 U. S. 623, 660-661 (1887), the Clause has been understood to contain a substantive component as well, one "barring certain government actions regardless of the fairness of the procedures used to implement them." Daniels v. Williams, 474 U. S. 327, 331 (1986)."

The Due Process Clause of the Fourteenth Amendment also incorporates most of the provisions in the Bill of Rights, which were originally applied against only the federal government, and applies them against the states. The Due Process Clause applies regardless of whether one is a citizen of the United States. The Supreme Court of the United States interprets the clauses broadly, concluding that they provide three protections: procedural due process (in civil and criminal proceedings), substantive due process, and the vehicle for the incorporation of the Bill of Rights. These aspects will be discussed in the sections below.

Substantive due process

Beginning with Allgeyer v. Louisiana (1897), the U.S. Supreme Court interpreted the Due Process Clause as providing substantive protection to private contracts, thus prohibiting a variety of social and economic regulation; this principle was referred to as "freedom of contract". A unanimous court held with respect to the noun "liberty" mentioned in the Fourteenth Amendment's Due Process Clause: The 'liberty' mentioned in [the Fourteenth] amendment means not only the right of the citizen to be free from the mere physical restraint of his person, as by incarceration, but the term is deemed to embrace the right of the citizen to be free in the enjoyment of all his faculties, to be free to use them in all lawful ways, to live and work where he will, to earn his livelihood by any lawful calling, to pursue any livelihood or avocation, and for that purpose to enter into all contracts which may be proper, necessary, and essential to his carrying out to a successful conclusion the purposes above mentioned.

Relying on the principle of "freedom of contract", the Court struck down a law decreeing maximum hours for workers in a bakery in Lochner v. New York (1905) and struck down a minimum wage law in Adkins v. Children's Hospital (1923). In Meyer v.
Nebraska (1923), the Court stated that the "liberty" protected by the Due Process Clause [w]ithout doubt ... denotes not merely freedom from bodily restraint but also the right of the individual to contract, to engage in any of the common occupations of life, to acquire useful knowledge, to marry, establish a home and bring up children, to worship God according to the dictates of his own conscience, and generally to enjoy those privileges long recognized at common law as essential to the orderly pursuit of happiness by free men. However, the Court did uphold some economic regulation, such as state Prohibition laws (Mugler v. Kansas, 1887), laws declaring maximum hours for mine workers (Holden v. Hardy, 1898), laws declaring maximum hours for female workers (Muller v. Oregon, 1908), and President Woodrow Wilson's intervention in a railroad strike (Wilson v. New, 1917), as well as federal laws regulating narcotics (United States v. Doremus, 1919). The Court repudiated, but did not explicitly overrule, the "freedom of contract" line of cases in West Coast Hotel v. Parrish (1937). In its decision the Court stated: The Constitution does not speak of freedom of contract. It speaks of liberty and prohibits the deprivation of liberty without due process of law. In prohibiting that deprivation, the Constitution does not recognize an absolute and uncontrollable liberty. Liberty in each of its phases has its history and connotation. But the liberty safeguarded is liberty in a social organization which requires the protection of law against the evils which menace the health, safety, morals and welfare of the people. Liberty under the Constitution is thus necessarily subject to the restraints of due process, and regulation which is reasonable in relation to its subject and is adopted in the interests of the community is due process. This essential limitation of liberty in general governs freedom of contract in particular. The Court has interpreted the term "liberty" in the Due Process Clauses of the Fifth and Fourteenth Amendments in Bolling v. Sharpe (1954) broadly: Although the Court has not assumed to define "liberty" with any great precision, that term is not confined to mere freedom from bodily restraint. Liberty under law extends to the full range of conduct which the individual is free to pursue, and it cannot be restricted except for a proper governmental objective. In Poe v. Ullman (1961), dissenting Justice John Marshall Harlan II adopted a broad view of the "liberty" protected by the Fourteenth Amendment Due Process clause: [T]he full scope of the liberty guaranteed by the Due Process Clause cannot be found in or limited by the precise terms of the specific guarantees elsewhere provided in the Constitution. This 'liberty' is not a series of isolated points pricked out in terms of the taking of property; the freedom of speech, press, and religion; the right to keep and bear arms; the freedom from unreasonable searches and seizures; and so on. It is a rational continuum which, broadly speaking, includes a freedom from all substantial arbitrary impositions and purposeless restraints ... and which also recognizes, what a reasonable and sensitive judgment must, that certain interests require particularly careful scrutiny of the state needs asserted to justify their abridgment. Due process of law thus conveys neither formal nor fixed nor narrow requirements. It is the compendious expression for all those rights which the courts must enforce because they are basic to our free society. 
But basic rights do not become petrified as of any one time, even though, as a matter of human experience, some may not too rhetorically be called eternal verities. It is of the very nature of a free society to advance in its standards of what is deemed reasonable and right. Representing as it does a living principle, due process is not confined within a permanent catalogue of what may at a given time be deemed the limits or the essentials of fundamental rights.
--Justice Felix Frankfurter delivering the opinion of the court in Wolf v. Colorado (1949).

Although the "freedom of contract" described above has fallen into disfavor, by the 1960s the Court had extended its interpretation of substantive due process to include other rights and freedoms that are not enumerated in the Constitution but that, according to the Court, extend or derive from existing rights. For example, the Due Process Clause is also the foundation of a constitutional right to privacy. The Court first ruled that privacy was protected by the Constitution in Griswold v. Connecticut (1965), which overturned a Connecticut law criminalizing birth control. While Justice William O. Douglas wrote for the majority that the right to privacy was found in the "penumbras" of various provisions in the Bill of Rights, Justices Arthur Goldberg and John Marshall Harlan II wrote in concurring opinions that the "liberty" protected by the Due Process Clause included individual privacy. The above-mentioned broad view of liberty embraced by dissenting Justice John Marshall Harlan II in Poe v. Ullman (1961) was adopted by the Supreme Court in Griswold v. Connecticut.

The right to privacy was the basis for Roe v. Wade (1973), in which the Court invalidated a Texas law forbidding abortion except to save the mother's life. Like Goldberg's and Harlan's concurring opinions in Griswold, the majority opinion authored by Justice Harry Blackmun located the right to privacy in the Due Process Clause's protection of liberty. The decision disallowed many state and federal abortion restrictions, and it became one of the most controversial in the Court's history. In Planned Parenthood v. Casey (1992), the Court decided that "the essential holding of Roe v. Wade should be retained and once again reaffirmed." The Court overruled both Roe and Casey in Dobbs v. Jackson Women's Health Organization (2022). Dobbs signals a new era of weakening of the Allgeyer Court's understanding of liberty. In Lawrence v. Texas (2003), the Court found that a Texas law against same-sex sexual intercourse violated the right to privacy. In Obergefell v. Hodges (2015), the Court ruled that the fundamental right to marriage included same-sex couples being able to marry.

Procedural due process

When the government seeks to burden a person's protected liberty interest or property interest, the Supreme Court has held that procedural due process requires that, at a minimum, the government provide the person notice, an opportunity to be heard at an oral hearing, and a decision by a neutral decision-maker. For example, such process is due when a government agency seeks to terminate civil service employees, expel a student from public school, or cut off a welfare recipient's benefits. The Court has also ruled that the Due Process Clause requires judges to recuse themselves in cases where the judge has a conflict of interest. For example, in Caperton v. A.T. Massey Coal Co.
(2009), the Court ruled that a justice of the Supreme Court of Appeals of West Virginia had to recuse himself from a case involving a major contributor to his campaign for election to that court.

Incorporation of the Bill of Rights

While many state constitutions are modeled after the United States Constitution and federal laws, those state constitutions did not necessarily include provisions comparable to the Bill of Rights. In Barron v. Baltimore (1833), the Supreme Court unanimously ruled that the Bill of Rights restrained only the federal government, not the states. However, the Supreme Court has subsequently held that most provisions of the Bill of Rights apply to the states through the Due Process Clause of the Fourteenth Amendment under a doctrine called "incorporation". Whether incorporation was intended by the amendment's framers, such as John Bingham, has been debated by legal historians. According to legal scholar Akhil Reed Amar, the framers and early supporters of the Fourteenth Amendment believed that it would ensure that the states would be required to recognize the same individual rights as the federal government; all these rights were likely understood as falling within the "privileges or immunities" safeguarded by the amendment.

By the latter half of the 20th century, nearly all of the rights in the Bill of Rights had been applied to the states. The Supreme Court has held that the amendment's Due Process Clause incorporates all of the substantive protections of the First, Second, Fourth, Fifth (except for its Grand Jury Clause) and Sixth Amendments, along with the Excessive Fines Clause and Cruel and Unusual Punishment Clause of the Eighth Amendment. While the Third Amendment has not been applied to the states by the Supreme Court, the Second Circuit ruled that it did apply to the states within that circuit's jurisdiction in Engblom v. Carey. The Seventh Amendment right to jury trial in civil cases has been held not to be applicable to the states, but the amendment's Re-Examination Clause does apply to "a case tried before a jury in a state court and brought to the Supreme Court on appeal." The Excessive Fines Clause of the Eighth Amendment became the last right to be incorporated when the Supreme Court ruled in Timbs v. Indiana (2019) that it applies to the states.

Equal Protection Clause

The Equal Protection Clause was created largely in response to the lack of equal protection provided by law in states with Black Codes. Under Black Codes, blacks could not sue, give evidence, or be witnesses. They also were punished more harshly than whites. The Supreme Court in Strauder v. West Virginia (1880) said the Fourteenth Amendment not only gave citizenship and the privileges of citizenship to persons of color, but also denied to any State the power to withhold from them the equal protection of the laws, and authorized Congress to enforce its provisions by appropriate legislation. In this decision, the Supreme Court stated specifically that the Equal Protection Clause was designed to assure to the colored race the enjoyment of all the civil rights that under the law are enjoyed by white persons, and to give to that race the protection of the general government, in that enjoyment, whenever it should be denied by the States. The Equal Protection Clause applies to citizens and non-citizens alike. The clause mandates that individuals in similar situations be treated equally by the law.
The purpose of the clause is not only to guarantee equality both in laws for the security of persons and in proceedings, but also to insure the "equal right to the laws of due process and impartially administered before the courts of justice." Although the text of the Fourteenth Amendment applies the Equal Protection Clause only against the states, the Supreme Court, since Bolling v. Sharpe (1954), has applied the clause against the federal government through the Due Process Clause of the Fifth Amendment under a doctrine called "reverse incorporation".

In Yick Wo v. Hopkins (1886), the Supreme Court clarified that the meaning of "person" and "within its jurisdiction" in the Equal Protection Clause would not be limited to discrimination against African Americans, but would extend to other races, colors, and nationalities such as (in this case) legal aliens in the United States who are Chinese citizens: These provisions are universal in their application to all persons within the territorial jurisdiction, without regard to any differences of race, of color, or of nationality, and the equal protection of the laws is a pledge of the protection of equal laws.

Persons "within its jurisdiction" are entitled to equal protection from a state. Largely because the Privileges and Immunities Clause of Article IV has from the beginning guaranteed the privileges and immunities of citizens in the several states, the Supreme Court has rarely construed the phrase "within its jurisdiction" in relation to natural persons. In Plyler v. Doe (1982), where the Court held that aliens illegally present in a state are within its jurisdiction and may thus raise equal protection claims, the Court explicated the meaning of the phrase "within its jurisdiction" as follows: "[U]se of the phrase 'within its jurisdiction' confirms the understanding that the Fourteenth Amendment's protection extends to anyone, citizen or stranger, who is subject to the laws of a State, and reaches into every corner of a State's territory." The Court based this understanding, among other things, on statements by Senator Howard, a member of the Joint Committee of Fifteen and the floor manager of the amendment in the Senate. Senator Howard was explicit about the broad objectives of the Fourteenth Amendment and the intention to make its provisions applicable to all who "may happen to be" within the jurisdiction of a state: The last two clauses of the first section of the amendment disable a State from depriving not merely a citizen of the United States, but any person, whoever he may be, of life, liberty, or property without due process of law, or from denying to him the equal protection of the laws of the State. This abolishes all class legislation in the States and does away with the injustice of subjecting one caste of persons to a code not applicable to another. ... It will, if adopted by the States, forever disable every one of them from passing laws trenching upon those fundamental rights and privileges which pertain to citizens of the United States, and to all person who may happen to be within their jurisdiction. [emphasis added by the U.S. Supreme Court]

The relationship between the Fifth and Fourteenth Amendments was addressed by Justice Field in Wong Wing v. United States (1896). He observed with respect to the phrase "within its jurisdiction": "The term 'person', used in the Fifth Amendment, is broad enough to include any and every human being within the jurisdiction of the republic.
A resident, alien born, is entitled to the same protection under the laws that a citizen is entitled to. He owes obedience to the laws of the country in which he is domiciled, and, as a consequence, he is entitled to the equal protection of those laws. ... The contention that persons within the territorial jurisdiction of this republic might be beyond the protection of the law was heard with pain on the argument at the bar—in face of the great constitutional amendment which declares that no State shall deny to any person within its jurisdiction the equal protection of the laws." The Supreme Court also decided whether foreign corporations are also within the jurisdiction of a state, ruling that a foreign corporation which sued in a state court in which it was not licensed to do business to recover possession of property wrongfully taken from it in another state was within the jurisdiction and could not be subjected to unequal burdens in the maintenance of the suit. When a state has admitted a foreign corporation to do business within its borders, that corporation is entitled to equal protection of the laws but not necessarily to identical treatment with domestic corporations. In Santa Clara County v. Southern Pacific Railroad (1886), the court reporter included a statement by Chief Justice Morrison Waite in the decision's headnote: The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of the opinion that it does. This dictum, which established that corporations enjoyed personhood under the Equal Protection Clause, was repeatedly reaffirmed by later courts. It remained the predominant view throughout the twentieth century, though it was challenged in dissents by justices such as Hugo Black and William O. Douglas. Between 1890 and 1910, Fourteenth Amendment cases involving corporations vastly outnumbered those involving the rights of blacks, 288 to 19. In the decades following the adoption of the Fourteenth Amendment, the Supreme Court overturned laws barring blacks from juries (Strauder v. West Virginia, 1880) or discriminating against Chinese Americans in the regulation of laundry businesses (Yick Wo v. Hopkins, 1886), as violations of the Equal Protection Clause. However, in Plessy v. Ferguson (1896), the Supreme Court held that the states could impose racial segregation so long as they provided similar facilities—the formation of the "separate but equal" doctrine. The Court went even further in restricting the Equal Protection Clause in Berea College v. Kentucky (1908), holding that the states could force private actors to discriminate by prohibiting colleges from having both black and white students. By the early 20th century, the Equal Protection Clause had been eclipsed to the point that Justice Oliver Wendell Holmes, Jr. dismissed it as "the usual last resort of constitutional arguments." The Court held to the "separate but equal" doctrine for more than fifty years, despite numerous cases in which the Court itself had found that the segregated facilities provided by the states were almost never equal, until Brown v. Board of Education (1954) reached the Court. In Brown the Court ruled that even if segregated black and white schools were of equal quality in facilities and teachers, segregation was inherently harmful to black students and so was unconstitutional. 
Brown met with a campaign of resistance from white Southerners, and for decades the federal courts attempted to enforce Brown's mandate against repeated attempts at circumvention. This resulted in the controversial desegregation busing decrees handed down by federal courts in various parts of the nation. In Parents Involved in Community Schools v. Seattle School District No. 1 (2007), the Court ruled that race could not be the determinative factor in deciding to which public schools parents may transfer their children.

In Plyler v. Doe (1982), the Supreme Court struck down a Texas statute denying free public education to illegal immigrants as a violation of the Equal Protection Clause of the Fourteenth Amendment because discrimination on the basis of illegal immigration status did not further a substantial state interest. The Court reasoned that illegal aliens and their children, though not citizens of the United States or Texas, are people "in any ordinary sense of the term" and, therefore, are afforded Fourteenth Amendment protections. In Hernandez v. Texas (1954), the Court held that the Fourteenth Amendment protects those beyond the racial classes of white or "Negro" and extends to other racial and ethnic groups, such as Mexican Americans in this case. In the half-century following Brown, the Court extended the reach of the Equal Protection Clause to other historically disadvantaged groups, such as women and illegitimate children, although it has applied a somewhat less stringent standard than it has applied to governmental discrimination on the basis of race (United States v. Virginia (1996); Levy v. Louisiana (1968)).

The Supreme Court ruled in Regents of the University of California v. Bakke (1978) that affirmative action in the form of racial quotas in public university admissions was a violation of Title VI of the Civil Rights Act of 1964; however, race could be used as one of several factors without violating the Equal Protection Clause or Title VI. In Gratz v. Bollinger (2003) and Grutter v. Bollinger (2003), the Court considered two race-conscious admissions systems at the University of Michigan. The university claimed that its goal in its admissions systems was to achieve racial diversity. In Gratz, the Court struck down a points-based undergraduate admissions system that added points for minority status, finding that its rigidity violated the Equal Protection Clause; in Grutter, the Court upheld a race-conscious admissions process for the university's law school that used race as one of many factors to determine admission. In Fisher v. University of Texas (2013), the Court ruled that before race can be used in a public university's admission policy, there must be no workable race-neutral alternative. In Schuette v. Coalition to Defend Affirmative Action (2014), the Court upheld the constitutionality of a state constitutional prohibition on the state or local use of affirmative action.

Reed v. Reed (1971), which struck down an Idaho probate law favoring men, was the first decision in which the Court ruled that arbitrary gender discrimination violated the Equal Protection Clause. In Craig v. Boren (1976), the Court ruled that statutory or administrative sex classifications had to be subjected to an intermediate standard of judicial review. Reed and Craig later served as precedents to strike down a number of state laws discriminating by gender. Since Wesberry v. Sanders (1964) and Reynolds v.
Sims (1964), the Supreme Court has interpreted the Equal Protection Clause as requiring the states to apportion their congressional districts and state legislative seats according to "one man, one vote". The Court has also struck down redistricting plans in which race was a key consideration. In Shaw v. Reno (1993), the Court prohibited a North Carolina plan aimed at creating majority-black districts to balance historic underrepresentation in the state's congressional delegations. The Equal Protection Clause served as the basis for the decision in Bush v. Gore (2000), in which the Court ruled that no constitutionally valid recount of Florida's votes in the 2000 presidential election could be held within the needed deadline; the decision effectively secured Bush's victory in the disputed election. In League of United Latin American Citizens v. Perry (2006), the Court ruled that House Majority Leader Tom DeLay's Texas redistricting plan intentionally diluted the votes of Latinos and thus violated the Equal Protection Clause.

State actor doctrine

Before United States v. Cruikshank, 92 U.S. 542 (1876) was decided by the United States Supreme Court, the case was heard as a circuit case (Federal Cases No. 14897). Presiding over the circuit case was Judge Joseph P. Bradley, who wrote at page 710 of Federal Cases No. 14897 regarding the Fourteenth Amendment to the United States Constitution: It is a guarantee of protection against the acts of the state government itself. It is a guarantee against the exertion of arbitrary and tyrannical power on the part of the government and legislature of the state, not a guarantee against the commission of individual offenses, and the power of Congress, whether express or implied, to legislate for the enforcement of such a guarantee does not extend to the passage of laws for the suppression of crime within the states. The enforcement of the guarantee does not require or authorize Congress to perform 'the duty that the guarantee itself supposes it to be the duty of the state to perform, and which it requires the state to perform'.

The above passage was quoted by the United States Supreme Court in United States v. Harris, 106 U.S. 629 (1883) and supplemented by a quote from the majority opinion in United States v. Cruikshank, 92 U.S. 542 (1876), as written by Chief Justice Morrison Waite: The Fourteenth Amendment prohibits a State from depriving any person of life, liberty, or property without due process of law, and from denying to any person within its jurisdiction the equal protection of the laws, but it adds nothing to the rights of one citizen as against another. It simply furnishes an additional guaranty against any encroachment by the States upon the fundamental rights which belong to every citizen as a member of society. The duty of protecting all its citizens in the enjoyment of an equality of rights was originally assumed by the States, and it still remains there. The only obligation resting upon the United States is to see that the States do not deny the right. This the Amendment guarantees, but no more. The power of the National Government is limited to the enforcement of this guaranty.

Individual liberties guaranteed by the United States Constitution, other than the Thirteenth Amendment's ban on slavery, protect not against actions by private persons or entities, but only against actions by government officials. Regarding the Fourteenth Amendment, the Supreme Court ruled in Shelley v.
Kraemer (1948): "[T]he action inhibited by the first section of the Fourteenth Amendment is only such action as may fairly be said to be that of the States. That Amendment erects no shield against merely private conduct, however discriminatory or wrongful." The Court added in the Civil Rights Cases (1883): "It is State action of a particular character that is prohibited. Individual invasion of individual rights is not the subject matter of the amendment. It has a deeper and broader scope. It nullifies and makes void all State legislation, and State action of every kind, which impairs the privileges and immunities of citizens of the United States, or which injures them in life, liberty, or property without due process of law, or which denies to any of them the equal protection of the laws."

Vindication of federal constitutional rights is limited to those situations where there is "state action", meaning action by government officials who are exercising their governmental power. In Ex parte Virginia (1880), the Supreme Court found that the prohibitions of the Fourteenth Amendment "have reference to actions of the political body denominated by a State, by whatever instruments or in whatever modes that action may be taken. A State acts by its legislative, its executive, or its judicial authorities. It can act in no other way. The constitutional provision, therefore, must mean that no agency of the State, or of the officers or agents by whom its powers are exerted, shall deny to any person within its jurisdiction the equal protection of the laws. Whoever, by virtue of public position under a State government, deprives another of property, life, or liberty, without due process of law, or denies or takes away the equal protection of the laws, violates the constitutional inhibition; and as he acts in the name and for the State, and is clothed with the State's power, his act is that of the State. This must be so, or the constitutional prohibition has no meaning. [...] But the constitutional amendment was ordained for a purpose. It was to secure equal rights to all persons, and, to insure to all persons the enjoyment of such rights, power was given to Congress to enforce its provisions by appropriate legislation. Such legislation must act upon persons, not upon the abstract thing denominated a State, but upon the persons who are the agents of the State in the denial of the rights which were intended to be secured."

There are, however, instances where people are the victims of civil-rights violations that occur in circumstances involving both government officials and private actors. In the 1960s, the United States Supreme Court adopted an expansive view of state action, opening the door to wide-ranging civil-rights litigation against private actors when they act as state actors (i.e., acts done or otherwise "sanctioned in some way" by the state). The Court found that the state action doctrine is equally applicable to denials of privileges or immunities, due process, and equal protection of the laws. The critical factor in determining the existence of state action is not governmental involvement with private persons or private corporations, but "the inquiry must be whether there is a sufficiently close nexus between the State and the challenged action of the regulated entity so that the action of the latter may be fairly treated as that of the State itself." "Only by sifting facts and weighing circumstances can the nonobvious involvement of the State in private conduct be attributed its true significance."
The Supreme Court asserted that plaintiffs must establish not only that a private party "acted under color of the challenged statute, but also that its actions are properly attributable to the State." "And the actions are to be attributable to the State apparently only if the State compelled the actions and not if the State merely established the process through statute or regulation under which the private party acted." The rules developed by the Supreme Court for business regulation are that (1) the "mere fact that a business is subject to state regulation does not by itself convert its action into that of the State for purposes of the Fourteenth Amendment," and (2) "a State normally can be held responsible for a private decision only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must be deemed to be that of the State."

Section 2: Apportionment of Representatives

Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.

Under Article I, Section 2, Clause 3, the basis of representation of each state in the House of Representatives was determined by adding three-fifths of each state's slave population to its free population. Because slavery (except as punishment for crime) had been abolished by the Thirteenth Amendment, the freed slaves would henceforth be given full weight for purposes of apportionment. This situation was a concern to the Republican leadership of Congress, who worried that it would increase the political power of the former slave states, even as such states continued to deny freed slaves the right to vote. Two solutions were considered:
- reduce the Congressional representation of the former slave states (for example, by basing representation on the number of legal voters rather than the number of inhabitants)
- guarantee freed slaves the right to vote

On January 31, 1866, the House of Representatives voted in favor of a proposed constitutional amendment that would reduce a state's representation in the House in proportion to the degree to which that state used "race or color" as a basis to deny the right to vote in that state. The amendment failed in the Senate, partly because radical Republicans foresaw that states would be able to use ostensibly race-neutral criteria, such as educational and property qualifications, to disenfranchise the freed slaves without negative consequence. So the amendment was changed to penalize states in which the vote was denied to male citizens over twenty-one for any reason other than participation in crime. Later, the Fifteenth Amendment was adopted to guarantee that the right to vote could not be denied based on race or color.
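To make the proportional reduction in Section 2 concrete, consider a purely hypothetical illustration (the figures are invented and not drawn from any census): if a state had 1,000,000 male citizens aged twenty-one or over and denied the vote to 250,000 of them for reasons other than participation in rebellion or other crime, its basis of representation would be reduced by 250,000/1,000,000, that is, by one quarter. As described below, this penalty has never actually been applied in practice.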
The effect of Section 2 was twofold:
- Although the three-fifths clause was not formally repealed, it was effectively removed from the Constitution. In the words of the Supreme Court in Elk v. Wilkins, Section 2 "abrogated so much of the corresponding clause of the original Constitution as counted only three-fifths of such persons [slaves]."
- It was intended to penalize, by means of reduced Congressional representation, states that withheld the franchise from adult male citizens for any reason other than participation in crime. This, it was hoped, would induce the former slave states to recognize the political rights of the former slaves, without directly forcing them to do so—something that it was thought the states would not accept.

The first reapportionment after the enactment of the Fourteenth Amendment occurred in 1873, based on the 1870 census. Congress appears to have attempted to enforce the provisions of Section 2, but was unable to identify enough disenfranchised voters to make a difference to any state's representation. In the implementing statute, Congress added a provision stating that should any state, after the passage of this Act, deny or abridge the right of any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, to vote at any election named in the amendments to the Constitution, article fourteen, section two, except for participation in rebellion or other crime, the number of Representatives apportioned in this act to such State shall be reduced in the proportion which the number of such male citizens shall have to the whole number of male citizens twenty-one years of age in such State. A nearly identical provision remains in federal law to this day.

Despite this legislation, in subsequent reapportionments, no change has ever been made to any state's Congressional representation on the basis of the Amendment. Bonfield, writing in 1960, suggested that "[t]he hot political nature of such proposals has doomed them to failure." Aided by this lack of enforcement, southern states continued to use pretexts to prevent many blacks from voting until the passage of the Voting Rights Act of 1965. In the Fourth Circuit case of Saunders v. Wilkins (1945), Saunders claimed that Virginia should have its Congressional representation reduced because of its use of a poll tax and other voting restrictions. The plaintiff sued for the right to run for Congress at large in the state, rather than in one of its designated Congressional districts. The lawsuit was dismissed as a political question.

Influence on voting rights

Some have argued that Section 2 was implicitly repealed by the Fifteenth Amendment, but the Supreme Court acknowledged Section 2 in later decisions. In Minor v. Happersett (1875), the Supreme Court cited Section 2 as supporting its conclusion that the right to vote was not among the "privileges and immunities of citizenship" protected by Section 1. Women would not achieve equal voting rights throughout the United States until the adoption of the Nineteenth Amendment in 1920. In Richardson v. Ramirez (1974), the Court cited Section 2 in holding that Section 1's Equal Protection Clause does not prohibit states from disenfranchising felons. In Hunter v. Underwood (1985), a case involving disenfranchising black misdemeanants, the Supreme Court concluded that the Tenth Amendment cannot save legislation prohibited by the subsequently enacted Fourteenth Amendment.
More specifically, the Court concluded that laws passed with a discriminatory purpose are not excepted from the operation of the Equal Protection Clause by the "other crime" provision of Section 2. The Court held that Section 2 "was not designed to permit the purposeful racial discrimination [...] which otherwise violates [Section] 1 of the Fourteenth Amendment."

Abolitionist leaders criticized the amendment's failure to specifically prohibit the states from denying people the right to vote on the basis of race. Section 2 protects the right to vote only of adult males, not adult females, making it the only provision of the Constitution to explicitly discriminate on the basis of sex. Section 2 was condemned by women's suffragists, such as Elizabeth Cady Stanton and Susan B. Anthony, who had long seen their cause as linked to that of black rights. The separation of black civil rights from women's civil rights split the two movements for decades.

Section 3: Disqualification from office for insurrection or rebellion

Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may, by a vote of two-thirds of each House, remove such disability.

Soon after losing the Civil War in 1865, states that had been part of the Confederacy began to send "unrepentant" former Confederates (such as the Confederacy's former vice president, Alexander H. Stephens) to Washington as Senators and Representatives. Congress refused to seat them and drafted Section 3 to establish, as a constitutional imperative, that anyone who violates their oath to the Constitution is to be barred from public office. Section 3 disqualifies from federal or state office anyone who, having taken an oath as a public official to support the Constitution, subsequently engages in "insurrection or rebellion" against the United States or gives "aid or comfort" to its enemies. Southerners strongly opposed it, arguing it would hurt reunification of the country.

Section 3 does not specify how it is to be invoked, but Section 5 gives Congress enforcement power. Accordingly, Congress enforced Section 3 by enacting the Enforcement Act of 1870, the pertinent portion of which was repealed in 1948; there is still a current federal statute (18 U.S.C. § 2383), initially part of the Confiscation Act of 1862 (and revised in 1948), disqualifying insurrectionists from federal office. Moreover, each house of Congress can expel or exclude members for insurrection or other reasons, although it is uncertain whether more votes may be required to expel than to exclude.
A further way that Congress can enforce Section 3 is via impeachment; even prior to the adoption of the Fourteenth Amendment, Congress impeached and disqualified federal judge West Humphreys for insurrection.

After the amendment's adoption in 1868, disqualification was seldom enforced in the South. At the urging of President Ulysses S. Grant, in 1872 Congress passed the Amnesty Act, which removed the disqualification from all but the most senior Confederates. In 1898, as a "gesture of national unity" during the Spanish–American War, Congress passed another law broadening the amnesty. Congress posthumously lifted the disqualification from Confederate general Robert E. Lee in 1975, and from Confederate president Jefferson Davis in 1978. These waivers do not bar Section 3 from being used today.

Since Reconstruction, Section 3 has been invoked only once: it was used to block Socialist Party of America member Victor L. Berger of Wisconsin – convicted of violating the Espionage Act for opposing US entry into World War I – from assuming his seat in the House of Representatives in 1919 and 1920. Berger's conviction was overturned by the Supreme Court in Berger v. United States (1921), after which he was elected to three successive terms in the 1920s; he was seated for all three terms.

January 6 United States Capitol attack

On January 10, 2021, Nancy Pelosi, the Speaker of the House, formally requested Representatives' input as to whether to pursue Section 3 disqualification of outgoing President Donald Trump because of his role in the January 6 United States Capitol attack. Unlike impeachment, which requires a supermajority to convict, disqualification under Section 3 would require only a simple majority of each house of Congress. The Section 3 disqualification could be imposed by Congress passing a law or a nonbinding resolution stating that the January 6 riot was an insurrection, and that anyone who swore to uphold the Constitution and who incited or participated in the riot is disqualified under Section 3. Some legal experts believe a court would then be required to make a final determination that Trump was disqualified under Section 3. A state may also make a determination that Trump is disqualified under Section 3 from appearing on that state's ballot. Trump could appeal in court any disqualification by Congress or by a state. In addition to state or federal legislative action, a court action could be brought against Trump seeking his disqualification under Section 3.

On January 11, 2021, Representative Cori Bush (D-MO) and 47 cosponsors introduced a resolution calling for the expulsion, under Section 3, of members of Congress who voted against certifying the results of the 2020 US presidential election or incited the January 6 riot. Those named in the resolution included Republican Representatives Mo Brooks of Alabama and Louie Gohmert of Texas, who took part in the rally that preceded the riot, and Republican Senators Josh Hawley of Missouri and Ted Cruz of Texas, who objected to counting the electoral votes to certify the 2020 presidential election result.

After Representative Madison Cawthorn (R-NC) declared his intent to run for re-election in 2022, a group of North Carolina voters from Cawthorn's district filed a lawsuit alleging that a speech he gave immediately prior to the Capitol attack incited it, and that Section 3 therefore disqualified him from holding federal office.
A federal judge entered a preliminary injunction in favor of Cawthorn, citing the Amnesty Act of 1872; however, on May 24, 2022, an appeals court ruled that this law applied only to people who committed "constitutionally wrongful acts" before 1872. A similar challenge, which a federal court declined to block, was filed against Marjorie Taylor Greene (R-GA) and heard in April 2022 in Atlanta; Greene sued to strike down as unconstitutional the law that allowed her eligibility to be contested.

Otero County, New Mexico, commissioner Couy Griffin was barred from holding public office for life in September 2022 by District Court Judge Francis Mathew, who found that his participation, as leader of the Cowboys for Trump group, in the attack on the Capitol was an act of insurrection under Section 3. This was the first disqualification under Section 3 since 1869 (apart from the case of Berger, whose conviction, as noted above, was later overturned).

Section 4: Validity of public debt

Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any State shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.

Section 4 confirmed the legitimacy of all public debt appropriated by the Congress. It also confirmed that neither the United States nor any state would pay for the loss of slaves or for debts that had been incurred by the Confederacy. For example, during the Civil War several British and French banks had lent large sums of money to the Confederacy to support its war against the Union. In Perry v. United States (1935), the Supreme Court ruled that under Section 4 voiding a United States bond "went beyond the congressional power."

The debt-ceiling crises of 2011 and 2013 raised the question of the scope of the President's authority under Section 4. During the 2011 crisis, former President Bill Clinton said he would invoke the Fourteenth Amendment to raise the debt ceiling if he were still in office, and force a ruling by the Supreme Court. Some, such as legal scholar Garrett Epps, fiscal expert Bruce Bartlett, and Treasury Secretary Timothy Geithner, have argued that a debt ceiling may be unconstitutional and therefore void as long as it interferes with the duty of the government to pay interest on outstanding bonds and to make payments owed to pensioners (that is, Social Security and Railroad Retirement Act recipients). Legal analyst Jeffrey Rosen has argued that Section 4 gives the President unilateral authority to raise or ignore the national debt ceiling, and that if challenged the Supreme Court would likely rule in favor of expanded executive power or dismiss the case altogether for lack of standing. Erwin Chemerinsky, professor and dean at the University of California, Irvine School of Law, has argued that not even in a "dire financial emergency" could the President raise the debt ceiling, as "there is no reasonable way to interpret the Constitution that [allows him to do so]."
Jack Balkin, Knight Professor of Constitutional Law at Yale University, opined that, like Congress, the President is bound by the Fourteenth Amendment, for otherwise he could violate any part of the amendment at will. Because the President must obey the Section 4 requirement not to put the validity of the public debt into question, Balkin argued that President Obama would have been obliged "to prioritize incoming revenues to pay the public debt, interest on government bonds and any other 'vested' obligations. What falls into the latter category is not entirely clear, but a large number of other government obligations—and certainly payments for future services—would not count and would have to be sacrificed. This might include, for example, Social Security payments."

Section 5: Power of enforcement

Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.

In the Slaughter-House Cases, 83 U.S. (16 Wall.) 36 (1873), the Supreme Court, discussing the Reconstruction Amendments and the Fourteenth Amendment's Section 5 Enforcement Clause in light of that Amendment's Equal Protection Clause, stated:

In the light of the history of these amendments, and the pervading purpose of them, which we have already discussed, it is not difficult to give a meaning to this clause. The existence of laws in the States where the newly emancipated negroes resided, which discriminated with gross injustice and hardship against them as a class, was the evil to be remedied by this clause, and by it such laws are forbidden. If, however, the States did not conform their laws to its requirements, then by the fifth section of the article of amendment Congress was authorized to enforce it by suitable legislation.

Section 5, also known as the Enforcement Clause of the Fourteenth Amendment, enables Congress to pass laws enforcing the amendment's other provisions. In Ex Parte Virginia (1879), the U.S. Supreme Court explained the scope of Congress's §5 power in the following broad terms: "Whatever legislation is appropriate, that is, adapted to carry out the objects the amendments have in view, whatever tends to enforce submission to the prohibitions they contain, and to secure to all persons the enjoyment of perfect equality of civil rights and the equal protection of the laws against State denial or invasion, if not prohibited, is brought within the domain of congressional power." In the Civil Rights Cases (1883), the Supreme Court interpreted Section 5 narrowly, stating that "the legislation which Congress is authorized to adopt in this behalf is not general legislation upon the rights of the citizen, but corrective legislation." In other words, the amendment authorizes Congress to pass laws only to combat violations of the rights protected in other sections. In Katzenbach v. Morgan (1966), the Court upheld Section 4(e) of the Voting Rights Act of 1965, which prohibits certain forms of literacy requirements as a condition to vote, as a valid exercise of Congressional power under Section 5 to enforce the Equal Protection Clause. The Court ruled that Section 5 enabled Congress to act both remedially and prophylactically to protect the rights guaranteed by the amendment. However, in City of Boerne v. Flores (1997),
the Court narrowed Congress's enforcement power, holding that Congress may not enact legislation under Section 5 that substantively defines or interprets Fourteenth Amendment rights. The Court ruled that legislation is valid under Section 5 only if there is a "congruence and proportionality" between the injury to a person's Fourteenth Amendment right and the means Congress adopted to prevent or remedy that injury.

Selected Supreme Court cases

- 1884: Elk v. Wilkins
- 1898: United States v. Wong Kim Ark
- 1967: Afroyim v. Rusk
- 1980: Vance v. Terrazas

Privileges or immunities
- 1873: Slaughter-House Cases
- 1875: Minor v. Happersett
- 1908: Twining v. New Jersey
- 1920: United States v. Wheeler
- 1948: Oyama v. California
- 1999: Saenz v. Roe

- 1833: Barron v. Baltimore
- 1873: Slaughter-House Cases
- 1883: Civil Rights Cases
- 1884: Hurtado v. California
- 1897: Chicago, Burlington & Quincy Railroad v. Chicago
- 1900: Maxwell v. Dow
- 1908: Twining v. New Jersey
- 1925: Gitlow v. New York
- 1932: Powell v. Alabama
- 1937: Palko v. Connecticut
- 1947: Adamson v. California
- 1947: Everson v. Board of Education
- 1952: Rochin v. California
- 1961: Mapp v. Ohio
- 1962: Robinson v. California
- 1963: Gideon v. Wainwright
- 1964: Malloy v. Hogan
- 1967: Reitman v. Mulkey
- 1968: Duncan v. Louisiana
- 1969: Benton v. Maryland
- 1970: Goldberg v. Kelly
- 1972: Furman v. Georgia
- 1974: Goss v. Lopez
- 1975: O'Connor v. Donaldson
- 1976: Gregg v. Georgia
- 2010: McDonald v. Chicago
- 2019: Timbs v. Indiana
- 2022: New York State Rifle & Pistol Association, Inc. v. Bruen

Substantive due process
- 1876: Munn v. Illinois
- 1887: Mugler v. Kansas
- 1897: Allgeyer v. Louisiana
- 1905: Lochner v. New York
- 1908: Muller v. Oregon
- 1923: Adkins v. Children's Hospital
- 1923: Meyer v. Nebraska
- 1925: Pierce v. Society of Sisters
- 1934: Nebbia v. New York
- 1937: West Coast Hotel Co. v. Parrish
- 1965: Griswold v. Connecticut
- 1973: Roe v. Wade
- 1977: Moore v. City of East Cleveland
- 1990: Cruzan v. Director, Missouri Department of Health
- 1992: Planned Parenthood v. Casey
- 1996: BMW of North America, Inc. v. Gore
- 1997: Washington v. Glucksberg
- 2003: State Farm v. Campbell
- 2003: Lawrence v. Texas
- 2015: Obergefell v. Hodges
- 2022: Dobbs v. Jackson Women's Health Organization

- 1880: Strauder v. West Virginia
- 1886: Yick Wo v. Hopkins
- 1886: Santa Clara County v. Southern Pacific Railroad
- 1896: Plessy v. Ferguson
- 1908: Berea College v. Kentucky
- 1916: The People of the State of California v. Jukichi Harada
- 1917: Buchanan v. Warley
- 1942: Skinner v. Oklahoma
- 1944: Korematsu v. United States
- 1948: Shelley v. Kraemer
- 1954: Hernandez v. Texas
- 1954: Brown v. Board of Education
- 1954: Bolling v. Sharpe
- 1962: Baker v. Carr
- 1967: Loving v. Virginia
- 1971: Reed v. Reed
- 1971: Palmer v. Thompson
- 1972: Eisenstadt v. Baird
- 1973: San Antonio Independent School District v. Rodriguez
- 1976: Examining Board v. Flores de Otero
- 1978: Regents of the University of California v. Bakke
- 1982: Plyler v. Doe
- 1982: Mississippi University for Women v. Hogan
- 1986: Posadas de Puerto Rico Associates v. Tourism Company of Puerto Rico
- 1996: United States v. Virginia
- 1996: Romer v. Evans
- 2000: Bush v. Gore
- 2003: Grutter v. Bollinger

- 1974: Richardson v. Ramirez
- 1985: Hunter v. Underwood

Power of enforcement
- 1883: Civil Rights Cases
- 1966: Katzenbach v. Morgan
- 1976: Fitzpatrick v. Bitzer
- 1997: City of Boerne v. Flores
- 1999: Florida Prepaid Postsecondary Education Expense Board v. College Savings Bank
- 2000: United States v. Morrison
- 2000: Kimel v. Florida Board of Regents
- 2001: Board of Trustees of the University of Alabama v. Garrett
- 2003: Nevada Department of Human Resources v. Hibbs
- 2004: Tennessee v. Lane
- 2013: Shelby County v. Holder

Proposal by Congress

In the final years of the American Civil War and the Reconstruction Era that followed, Congress repeatedly debated the rights of black former slaves freed by the 1863 Emancipation Proclamation and the 1865 Thirteenth Amendment, the latter of which had formally abolished slavery. Following the passage of the Thirteenth Amendment by Congress, however, Republicans grew concerned over the increase it would create in the congressional representation of the Democratic-dominated Southern States. Because the full population of freed slaves would now be counted for determining congressional representation, rather than the three-fifths previously mandated by the Three-Fifths Compromise, the Southern States would dramatically increase their power in the population-based House of Representatives, regardless of whether the former slaves were allowed to vote. Republicans began looking for a way to offset this advantage, either by protecting and attracting the votes of former slaves or at least by discouraging their disenfranchisement.

In 1865, Congress passed what would become the Civil Rights Act of 1866, guaranteeing citizenship without regard to race, color, or previous condition of slavery or involuntary servitude. The bill also guaranteed equal benefits and access to the law, a direct assault on the Black Codes passed by many post-war states. The Black Codes attempted to return ex-slaves to something like their former condition by, among other things, restricting their movement, forcing them to enter into year-long labor contracts, prohibiting them from owning firearms, and preventing them from suing or testifying in court.

Although strongly urged by moderates in Congress to sign the bill, President Andrew Johnson vetoed it on March 27, 1866. In his veto message, he objected to the measure because it conferred citizenship on the freedmen at a time when 11 out of 36 states were unrepresented in the Congress, and because, in his view, it discriminated in favor of African-Americans and against whites. Three weeks later, Johnson's veto was overridden and the measure became law. Despite this victory, even some Republicans who had supported the goals of the Civil Rights Act began to doubt that Congress really possessed the constitutional power to turn those goals into laws. The experience also encouraged both radical and moderate Republicans to seek constitutional guarantees for black rights, rather than relying on temporary political majorities.

More than seventy proposals for an amendment were drafted. In an extensive appendix to his dissenting opinion in Adamson v. California (1947), Justice Hugo Black analyzed and detailed the statements made by "those who framed, advocated, and adopted the Amendment" and thus shed some light on the history of the amendment's adoption. In late 1865, the Joint Committee on Reconstruction proposed an amendment stating that any citizens barred from voting on the basis of race by a state would not be counted for purposes of representation of that state.
This amendment passed the House, but was blocked in the Senate by a coalition of Radical Republicans led by Charles Sumner, who considered the proposal a "compromise with wrong", and Democrats opposed to black rights. Consideration then turned to a proposed amendment by Representative John A. Bingham of Ohio, which would enable Congress to safeguard "equal protection of life, liberty, and property" of all citizens; this proposal failed to pass the House. In April 1866, the Joint Committee forwarded a third proposal to Congress, a carefully negotiated compromise that combined elements of the first and second proposals as well as addressing the issues of Confederate debt and voting by ex-Confederates. The House of Representatives passed House Resolution 127 of the 39th Congress several weeks later and sent it to the Senate for action. The resolution was debated and several amendments to it were proposed. Amendments to Sections 2, 3, and 4 were adopted on June 8, 1866, and the modified resolution passed by a 33 to 11 vote (5 absent, not voting). The House agreed to the Senate amendments on June 13 by a 138–36 vote (10 not voting). A concurrent resolution requesting the President to transmit the proposal to the governors of the states was passed by both houses of Congress on June 18.

The Radical Republicans were satisfied that they had secured civil rights for blacks, but were disappointed that the amendment would not also secure political rights for blacks; in particular, the right to vote. For example, Thaddeus Stevens, a leader of the disappointed Radical Republicans, said: "I find that we shall be obliged to be content with patching up the worst portions of the ancient edifice, and leaving it, in many of its parts, to be swept through by the tempests, the frosts, and the storms of despotism." Abolitionist Wendell Phillips called it a "fatal and total surrender". This point would later be addressed by the Fifteenth Amendment.

Ratification by the states

On June 16, 1866, Secretary of State William Seward transmitted the Fourteenth Amendment to the governors of the several states for its ratification. State legislatures in every formerly Confederate state, with the exception of Tennessee, refused to ratify it. This refusal led to the passage of the Reconstruction Acts: the existing state governments were set aside, and military government was imposed until new civil governments were established and the Fourteenth Amendment was ratified. The refusal also prompted Congress to pass a law on March 2, 1867, requiring that a former Confederate state must ratify the Fourteenth Amendment before "said State shall be declared entitled to representation in Congress."
The first 28 states to ratify the Fourteenth Amendment were:

- Connecticut: June 30, 1866
- New Hampshire: July 6, 1866
- Tennessee: July 18, 1866
- New Jersey: September 11, 1866 (rescinded ratification February 20, 1868/March 24, 1868; re-ratified April 23, 2003)
- Oregon: September 19, 1866 (rescinded ratification October 16, 1868; re-ratified April 25, 1973)
- Vermont: October 30, 1866
- New York: January 10, 1867
- Ohio: January 11, 1867 (rescinded ratification January 13, 1868; re-ratified March 12, 2003)
- Illinois: January 15, 1867
- West Virginia: January 16, 1867
- Michigan: January 16, 1867
- Minnesota: January 16, 1867
- Kansas: January 17, 1867
- Maine: January 19, 1867
- Nevada: January 22, 1867
- Indiana: January 23, 1867
- Missouri: January 25, 1867
- Pennsylvania: February 6, 1867
- Rhode Island: February 7, 1867
- Wisconsin: February 13, 1867
- Massachusetts: March 20, 1867
- Nebraska: June 15, 1867
- Iowa: March 16, 1868
- Arkansas: April 6, 1868
- Florida: June 9, 1868
- North Carolina: July 4, 1868 (after rejection December 14, 1866)
- Louisiana: July 9, 1868 (after rejection February 6, 1867)
- South Carolina: July 9, 1868 (after rejection December 20, 1866)

If the rescissions by Ohio and New Jersey were illegitimate, South Carolina would have been the 28th state to ratify the amendment, enough for the amendment to become part of the Constitution, since ratification by 28 of the 37 states then in the Union (three-fourths) was required. Otherwise, only 26 states had ratified the amendment out of the needed 28. Ohio and New Jersey's rescissions (which occurred after Democrats retook the state legislatures) caused significant controversy and debate, but while this controversy went on, ratification by other states continued:

- Alabama: July 13, 1868

On July 20, 1868, Secretary of State William H. Seward certified that if the withdrawals of ratification by New Jersey and Ohio were illegitimate, then the amendment had become part of the Constitution on July 9, 1868, with ratification by South Carolina as the 28th state. The following day, Congress declared New Jersey's rescission of the amendment "scandalous", rejected the act, and then adopted and transmitted to the Department of State a concurrent resolution declaring the Fourteenth Amendment to be a part of the Constitution and directing the Secretary of State to promulgate it as such, thereby establishing a precedent that a state cannot rescind a ratification. Ultimately, New Jersey and Ohio were named in the congressional resolution as having ratified the amendment, as well as Alabama, making 29 states in total. On the same day, one more state ratified:

- Georgia: July 21, 1868 (after rejection November 9, 1866)

On July 27, Secretary Seward received the formal ratification from Georgia. The following day, July 28, Secretary Seward issued his official proclamation certifying the adoption of the Fourteenth Amendment. Secretary Seward stated that his proclamation was "in conformance" to the resolution by Congress, but his official list of states included both Alabama and Georgia, as well as Ohio and New Jersey. Ultimately, regardless of the legal status of New Jersey's and Ohio's rescissions, the amendment would have become effective at the same time because of Alabama's and Georgia's ratifications. The inclusion of Ohio and New Jersey has led some to question the validity of the rescission of a ratification; the inclusion of Alabama and Georgia has called that conclusion into question. While there have been Supreme Court cases dealing with ratification issues, this particular question has never been adjudicated.
On October 16, 1868, three months after the amendment had been ratified and become part of the Constitution, Oregon rescinded its ratification, bringing the number of states with an active ratification down to 27 (for nearly a year); this had no actual effect on the US Constitution or on the Fourteenth Amendment's standing.

The Fourteenth Amendment was subsequently ratified by:

- Virginia: October 8, 1869 (after rejection January 9, 1867)
- Mississippi: January 17, 1870
- Texas: February 18, 1870 (after rejection October 27, 1866)
- Delaware: February 12, 1901 (after rejection February 8, 1867)
- Maryland: April 4, 1959 (after rejection March 23, 1867)
- California: May 6, 1959
- Kentucky: March 30, 1976 (after rejection January 8, 1867)

Since Ohio and New Jersey re-ratified the Fourteenth Amendment in 2003, all U.S. states that existed during Reconstruction have ratified the amendment.

Source: "Fourteenth Amendment to the United States Constitution", Wikipedia, Wikimedia Foundation (March 15, 2023), https://en.wikipedia.org/wiki/Fourteenth_Amendment_to_the_United_States_Constitution.

References

- ^ Jackson v. Metropolitan Edison Co., 419 U.S. 345, 350 (1974); Blum v. Yaretsky, 457 U.S. 991, 1004 (1982). Cf. Moose Lodge No. 107 v. Irvis, 407 U.S. 163 (1972).
- ^ Yaretsky, 457 U.S., at 1004; Flagg Bros., 436 U.S., at 166; Metropolitan Edison Co., 419 U.S., at 357.
- ^ a b c Civil Rights Cases, 109 U.S. 3 (1883).
- ^ "Civil Rights Cases (1883)". Pearson Education, Inc., publishing as Pearson Prentice Hall. Pearson Education. 2005. Archived from the original on January 14, 2021. Retrieved October 23, 2013.
- ^ Graber, "Subtraction by Addition?" (2012), p. 1523.
- ^ Goldstone 2011, pp. 23–24.
- ^ a b Eric Foner, "The Second American Revolution," In These Times, September 1987; reprinted in Civil Rights Since 1787, ed. Jonathan Birnbaum & Clarence Taylor, NYU Press, 2000. ISBN 0814782493
- ^ Finkelman, Paul (2003). "John Bingham and the Background to the Fourteenth Amendment" (PDF). Akron Law Review. 36 (671). Archived (PDF) from the original on February 22, 2014. Retrieved April 2, 2009.
- ^ "Shelley v. Kraemer, 334 U.S. 1 (1948) at 23". Justia US Supreme Court Center. May 2, 1948. Archived from the original on January 14, 2021. Retrieved December 24, 2020.
- ^ "Shelley v. Kraemer, 334 U.S. 1 (1948) at 23". Justia US Supreme Court Center. May 2, 1948. Archived from the original on January 14, 2021. Retrieved December 24, 2020.
- ^ Harrell, David and Gaustad, Edwin. Unto A Good Land: A History Of The American People, Volume 1, p. 520 (Eerdmans Publishing, 2005): "The most important, and the one that has occasioned the most litigation over time as to its meaning and application, was Section One."
- ^ Stephenson, D. The Waite Court: Justices, Rulings, and Legacy, p. 147 (ABC-CLIO, 2003).
- ^ Multiple sources:
  - Tsesis, Alexander (2008). "The Inalienable Core of Citizenship: From Dred Scott to the Rehnquist Court". Arizona State Law Journal. 39. SSRN 1023809.
  - McDonald v. Chicago, 561 U.S. 742 (2010), 807–808 ("This [clause] unambiguously overruled this Court's contrary holding in Dred Scott.")
  - "The Atlantic Argument: Trump Is Trying to Change 'What it Means to Be American'". The Atlantic. November 8, 2018. Archived from the original on January 14, 2021. Retrieved March 18, 2020.
- ^ a b c d e Garrett Epps (Professor of constitutional law at the University of Baltimore) (October 30, 2018). "Ideas: The Citizenship Clause Means What It Says".
The Atlantic. Archived from the original on March 7, 2020. Retrieved March 18, 2020. - ^ Jones v. Mayer, 392 U.S. 409 (1968). - ^ a b Rosen, Jeffrey. The Supreme Court: The Personalities and Rivalries That Defined America, p. 79 (MacMillan 2007). - ^ a b Newman, Roger. The Constitution and its Amendments, Vol. 4, p. 8 (Macmillan 1999). - ^ Yen, Chin-Yung Archived January 14, 2021, at the Wayback Machine. Rights of citizens and persons under the Fourteenth amendment, p. 7 Archived March 30, 2019, at the Wayback Machine (New Era Printing Company 1905). - ^ a b Goldstone 2011, pp. 22–23. - ^ "Elk v. Wilkins, 112 U.S. 94 (1884) at 101–102". Justia US Supreme Court Center. November 3, 1884. Archived from the original on January 14, 2021. Retrieved November 22, 2020. - ^ Messner, Emily. "Born in the U.S.A. (Part I)", The Debate, The Washington Post (March 30, 2006). Archived November 6, 2011, at the Wayback Machine - ^ Pear, Robert (August 7, 1996). "Citizenship Proposal Faces Obstacle in the Constitution". The New York Times. Archived from the original on January 14, 2021. Retrieved February 7, 2017. - ^ Magliocca, Gerard N. (2007). "Indians and Invaders: The Citizenship Clause and Illegal Aliens". University of Pennsylvania Journal of Constitutional Law. 10: 499–526. SSRN 965268. - ^ Foner, Eric (August 27, 2015). "Birthright Citizenship Is the Good Kind of American Exceptionalism". The Nation. The Nation. Archived from the original on January 14, 2021. Retrieved November 12, 2015. - ^ a b LaFantasie, Glenn (March 20, 2011) "The erosion of the Civil War consensus", Salon Archived March 23, 2011, at the Wayback Machine - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893 Archived January 14, 2021, at the Wayback Machine Senator Reverdy Johnson said in the debate: "Now, all this amendment provides is, that all persons born in the United States and not subject to some foreign Power—for that, no doubt, is the meaning of the committee who have brought the matter before us—shall be considered as citizens of the United States ... If there are to be citizens of the United States entitled everywhere to the character of citizens of the United States, there should be some certain definition of what citizenship is, what has created the character of citizen as between himself and the United States, and the amendment says citizenship may depend upon birth, and I know of no better way to give rise to citizenship than the fact of birth within the territory of the United States, born of parents who at the time were subject to the authority of the United States." - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2897 Archived January 14, 2021, at the Wayback Machine - ^ Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 572 Archived January 14, 2021, at the Wayback Machine - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2890,2892–4,2896 Archived January 14, 2021, at the Wayback Machine - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893 Archived January 14, 2021, at the Wayback Machine. Trumbull, during the debate, said, "What do we [the committee reporting the clause] mean by 'subject to the jurisdiction of the United States'? Not owing allegiance to anybody else. That is what it means." He then proceeded to expound upon what he meant by "complete jurisdiction": "Can you sue a Navajoe Indian in court? ... We make treaties with them, and therefore they are not subject to our jurisdiction. ... 
If we want to control the Navajoes or any other Indians of which the Senator from Wisconsin has spoken, how do we do it? Do we pass a law to control them? Are they subject to our jurisdiction in that sense? ... Would he [Senator Doolittle] think of punishing them for instituting among themselves their own tribal regulations? Does the Government of the United States pretend to take jurisdiction of murders and robberies and other crimes committed by one Indian upon another? ... It is only those persons who come completely within our jurisdiction, who are subject to our laws, that we think of making citizens." - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2895 Archived January 14, 2021, at the Wayback Machine. Howard additionally stated the word jurisdiction meant "the same jurisdiction in extent and quality as applies to every citizen of the United States now" and that the U.S. possessed a "full and complete jurisdiction" over the person described in the amendment. - ^ Elk v. Wilkins, 112 U.S. 94 (1884). - ^ Urofsky, Melvin I.; Finkelman, Paul (2002). A March of Liberty: A Constitutional History of the United States. Vol. 1 (2nd ed.). New York: Oxford University Press. ISBN 978-0195126358. Archived from the original on February 18, 2017. Retrieved October 2, 2020. - ^ Reid, Kay (September 22, 2012). "Multilayered loyalties: Oregon Indian women as citizens of the land, their tribal nations, and the united States". Oregon Historical Quarterly. 113 (3): 392–407. doi:10.1353/ohq.2012.0022. S2CID 245846206. Archived from the original on September 4, 2013. Retrieved July 18, 2013. - ^ 9 March 1866 Congressional Globe 39.1 (1866) p. 1291 Archived January 14, 2021, at the Wayback Machine. (middle column, 2nd paragraph) - ^ Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 2893 Archived January 14, 2021, at the Wayback Machine. From the debate on the Civil Rights Act: Mr. Johnson: "... Who is a citizen of the United States is an open question. The decision of the courts and doctrine of the commentators is, that every man who is a citizen of the State becomes ipso facto a citizen of the United States; but there is no definition as to how citizenship can exist in the United States except through the medium of a citizenship in a State ..." - ^ Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 498 Archived January 14, 2021, at the Wayback Machine. The debate on the Civil Rights Act contained the following exchange: Mr. Cowan: "I will ask whether it will not have the effect of naturalizing the children of Chinese and Gypsies born in this country?" Mr. Trumbull: "Undoubtedly." Mr. Trumbull: "I understand that under the naturalization laws the children who are born here of parents who have not been naturalized are citizens. This is the law, as I understand it, at the present time. Is not the child born in this country of German parents a citizen? I am afraid we have got very few citizens in some of the counties of good old Pennsylvania if the children born of German parents are not citizens." Mr. Cowan: "The honorable Senator assumes that which is not the fact. The children of German parents are citizens; but Germans are not Chinese; Germans are not Australians, nor Hottentots, nor anything of the kind. That is the fallacy of his argument." Mr. 
Trumbull: "If the Senator from Pennsylvania will show me in the law any distinction made between the children of German parents and the children of Asiatic parents, I may be able to appreciate the point which he makes; but the law makes no such distinction; and the child of an Asiatic is just as much of a citizen as the child of a European." - ^ Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2891–2892 Archived January 14, 2021, at the Wayback Machine During the debate on the Amendment, Senator John Conness of California declared, "The proposition before us, I will say, Mr. President, relates simply in that respect to the children begotten of Chinese parents in California, and it is proposed to declare that they shall be citizens. We have declared that by law [the Civil Rights Act]; now it is proposed to incorporate that same provision in the fundamental instrument of the nation. I am in favor of doing so. I voted for the proposition to declare that the children of all parentage, whatever, born in California, should be regarded and treated as citizens of the United States, entitled to equal Civil Rights with other citizens." - ^ "Veto of the Civil Rights Bill". Teaching American History. Archived from the original on August 29, 2013. Retrieved February 21, 2019. - ^ Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 2891 Archived January 14, 2021, at the Wayback Machine. From the debate on the Civil Rights Act: Mr. Cowan: "Therefore I think, before we assert broadly that everybody who shall be born in the United States shall be taken to be citizen of the United States, we ought to exclude others besides Indians not taxed, because I look upon Indians not taxed as being much less dangerous and much less pestiferous to a society than I look upon Gypsies. I do not know how my honorable friend from California looks upon Chinese, but I do know how some of his fellow citizens regard them. I have no doubt that now they are useful, and I have no doubt that within proper restraints, allowing that State and the other Pacific States to manage them as they may see fit, they may be useful; but I would not tie their hands by the Constitution of the United States so as to prevent them hereafter from dealing with them as in their wisdom they see fit ..." - ^ Lee, Margaret. "Birthright Citizenship Under the 14th Amendment of Persons Born in the United States to Alien Parents", Archived January 14, 2021, at the Wayback Machine, Congressional Research Service (August 12, 2010): "Over the last decade or so, concern about illegal immigration has sporadically led to a re-examination of a long-established tenet of U.S. citizenship, codified in the Citizenship Clause of the Fourteenth Amendment of the U.S. Constitution and §301(a) of the Immigration and Nationality Act (INA) (8 U.S.C. §1401(a)), that a person who is born in the United States, subject to its jurisdiction, is a citizen of the United States regardless of the race, ethnicity, or alienage of the parents ... some scholars argue that the Citizenship Clause of the Fourteenth Amendment should not apply to the children of unauthorized aliens because the problem of unauthorized aliens did not exist at the time the Fourteenth Amendment was considered in Congress and ratified by the states." - ^ Peter Grier (August 10, 2010). "14th Amendment: why birthright citizenship change 'can't be done'". Christian Science Monitor. Archived from the original on December 28, 2012. Retrieved June 12, 2013. - ^ United States v. Wong Kim Ark, 169 U.S. 
649 (1898). - ^ Rodriguez, C. M. (2009). "The Second Founding: The Citizenship Clause, Original Meaning, and the Egalitarian Unity of the Fourteenth Amendment [PDF]" (PDF). University of Pennsylvania Journal of Constitutional Law. 11: 1363–1475. Archived from the original (PDF) on July 15, 2011. Retrieved January 20, 2011. - ^ "8 FAM 301.1–3 Not Included in the Meaning of 'In the United States'". United States Department of State. Archived from the original on May 2, 2019. Retrieved July 18, 2018. - ^ a b c Policy Manual. Chapter 2 - Grounds for Revocation of Naturalization. U.S. Citizenship and Immigration Services. Archived January 14, 2021, at the Wayback Machine - ^ 8 U.S.C. § 1424(a)(2) - ^ U.S. Department of State (February 1, 2008). "Advice about Possible Loss of U.S. Citizenship and Dual Nationality". Archived from the original on April 16, 2009. Retrieved April 17, 2009. - ^ For example, see Perez v. Brownell, 356 U.S. 44 (1958), overruled by Afroyim v. Rusk, 387 U.S. 253 (1967). - ^ Afroyim v. Rusk, 387 U.S. 253 (1967). - ^ Vance v. Terrazas, 444 U.S. 252 (1980). - ^ Yoo, John. "Survey of the Law of Expatriation, Memorandum Opinion for the Solicitor General" (June 12, 2002). Archived June 6, 2013, at the Wayback Machine - ^ a b c d e f Slaughter-House Cases, 83 U.S. 36 (1873). - ^ a b Beatty, Jack (2008). Age of Betrayal: The Triumph of Money in America, 1865–1900. New York: Vintage Books. p. 135. ISBN 978-1400032426. Archived from the original on January 14, 2021. Retrieved July 19, 2013. - ^ e.g., United States v. Morrison, 529 U.S. 598 (2000). - ^ Shaman, Jeffrey. Constitutional Interpretation: Illusion and Reality, p. 248 (Greenwood Publishing 2001). - ^ Saenz v. Roe, 526 U.S. 489 (1999). - ^ Bogen, David. Privileges and Immunities: A Reference Guide to the United States Constitution, p. 104 (Greenwood Publushing 2003). - ^ Barnett, Randy (June 28, 2010). "Privileges or Immunities Clause alive again". SCOTUSblog. Archived from the original on May 13, 2013. Retrieved June 4, 2020. - ^ Howe, Amy (February 20, 2019). "Opinion analysis: Eighth Amendment's ban on excessive fines applies to the states". SCOTUSblog. Archived from the original on January 14, 2021. Retrieved June 4, 2020. - ^ Madison, P.A. (August 2, 2010). "Historical Analysis of the first of the 14th Amendment's First Section". The Federalist Blog. Archived from the original on November 18, 2019. Retrieved January 19, 2013. - ^ "The Bill of Rights: A Brief History". ACLU. Archived from the original on August 30, 2016. Retrieved April 21, 2015. - ^ "Honda Motor Co. v. Oberg, 512 U.S. 415 (1994), at 434". Justia US Supreme Court Center. June 24, 1994. Archived from the original on January 14, 2021. Retrieved August 26, 2020. There is, however, a vast difference between arbitrary grants of freedom and arbitrary deprivations of liberty or property. The Due Process Clause has nothing to say about the former, but its whole purpose is to prevent the latter. - ^ "Ohio Bell Tel. Co. v. Public Utilities Comm'n, 301 U.S. 292 (1937), at 302". Justia US Supreme Court Center. April 26, 1937. Retrieved February 10, 2021. - ^ Murray v. Hoboken Land, 59 U.S. 272 (1855) - ^ Hurtado v. California, 110 U.S. 516 (1884) - ^ John M. Harlan II (June 19, 1961). "Poe v. Ullman, 367 U.S. 497 (1961), at at 542 (dissenting from dismissal on jurisdictional grounds)". Justia US Supreme Court Center. Retrieved March 22, 2022. - ^ Nebbia v. New York, 291 U.S. 502 (1934), at 525. - ^ New State Ice Co. v. Liebmann, 285 U.S. 262 (1932), at 311. 
- ^ Whitney v. California, 274 U.S. 357 (1927) - ^ Curry, James A.; Riley, Richard B.; Battiston, Richard M. (2003). "6". Constitutional Government: The American Experience. Kendall/Hunt Publishing Company. p. 210. ISBN 978-0787298708. Retrieved July 14, 2013. - ^ Gupta, Gayatri (2009). "Due process". In Folsom, W. Davis; Boulware, Rick (eds.). Encyclopedia of American Business. Infobase. p. 134. - ^ Poe v. Ullman, 367 U.S. 497 (1961) - ^ "Planned Parenthood of Southeastern Pa. v. Casey, 505 U.S. 833 (1992), at 846". Justia Law. Justia US Supreme Court Center. June 29, 1992. Retrieved March 22, 2022. - ^ a b Cord, Robert L. (1987). "The Incorporation Doctrine and Procedural Due Process Under the Fourteenth Amendment: An Overview". Brigham Young University Law Review (3): 868. Archived from the original on January 14, 2021. Retrieved July 14, 2013. - ^ Allgeyer v. Louisiana, 169 U.S. 649 (1897). - ^ Allgeyer, 165 U.S. at 589 (emphasis added). - ^ Lochner v. New York, 198 U.S. 45 (1905). - ^ Adkins v. Children's Hospital, 261 U.S. 525 (1923). - ^ Meyer v. Nebraska, 262 U.S. 390 (1923). - ^ "CRS Annotated Constitution". Cornell University Law School Legal Information Institute. Archived from the original on November 10, 2013. Retrieved June 12, 2013. - ^ Mugler v. Kansas, 123 U.S. 623 (1887). - ^ Holden v. Hardy, 169 U.S. 366 (1898). - ^ Muller v. Oregon, 208 U.S. 412 (1908). - ^ Wilson v. New, 243 U.S. 332 (1917). - ^ United States v. Doremus, 249 U.S. 86 (1919). - ^ West Coast Hotel v. Parrish, 300 U.S. 379 (1937). - ^ "West Coast Hotel Co. v. Parrish, 300 U.S. 379 (1937), at 391–392". Justia US Supreme Court Center. March 29, 1937. Archived from the original on January 14, 2021. Retrieved January 8, 2021. - ^ Bolling v. Sharpe, 347 U.S. 497 (1954), at 499–500. - ^ Huston, Luther A. (May 18, 1954). "High Court Bans School Segregation; 9-to-0 Decision Grants Time to Comply". The New York Times. Archived from the original on January 14, 2021. Retrieved March 6, 2013. - ^ Poe v. Ullman, 367 U.S. 497 (1961), at 543 Archived January 14, 2021, at the Wayback Machine - ^ Felix Frankfurter (June 26, 1949). "Wolf v. Colorado, 338 U.S. 25 (1949), at 27 (Opinion of the court)". Justia US Supreme Court Center. Retrieved February 20, 2023. - ^ Griswold v. Connecticut, 381 U.S. 479 (1965) - ^ "Griswold v. Connecticut". Encyclopedia of the American Constitution. January 1, 2000. Archived from the original on September 5, 2013. Retrieved June 16, 2013. - ^ Planned Parenthood of Southeastern Pa. v. Casey, 505 U.S. 833, at 849 Archived January 14, 2021, at the Wayback Machine - ^ Roe v. Wade, 410 U.S. 113 (1973). - ^ "Roe v. Wade 410 U.S. 113 (1973) Doe v. Bolton 410 U.S. 179 (1973)". Encyclopedia of the American Constitution. January 1, 2000. Archived from the original on June 10, 2014. Retrieved June 16, 2013. - ^ Planned Parenthood v. Casey, 505 U.S. 833 (1992). - ^ Casey, 505 U.S. at 845–846. - ^ Lawrence v. Texas, 539 U.S. 558 (2003). - ^ Spindelman, Marc (June 1, 2004). "Surviving Lawrence v. Texas". Michigan Law Review. 102 (7): 1615–1667. doi:10.2307/4141915. JSTOR 4141915. Archived from the original on June 10, 2014. Retrieved June 16, 2013. - ^ Howe, Amy (June 26, 2015). "In historic decision, Court strikes down state bans on same-sex marriage: In Plain English". SCOTUSblog. Archived from the original on January 14, 2021. Retrieved July 8, 2015. - ^ White, Bradford (2008). Procedural Due Process in Plain English. National Trust for Historic Preservation. ISBN 978-0891335733. - ^ See also Mathews v. 
Eldridge (1976). - ^ Caperton v. A.T. Massey Coal Co., 556 U.S. 868 (2009). - ^ Bravin, Jess; Maher, Kris (June 8, 2009). "Justices Set New Standard for Recusals". The Wall Street Journal. Archived from the original on January 14, 2021. Retrieved June 9, 2009. - ^ Barron v. Baltimore, 32 U.S. 243 (1833). - ^ Levy, Leonard W. (January 2000). "Barron v. City of Baltimore 7 Peters 243 (1833)". Encyclopedia of the American Constitution. Archived from the original on March 29, 2015. Retrieved June 13, 2013. - ^ Foster, James C. (2006). "Bingham, John Armor". In Finkelman, Paul (ed.). Encyclopedia of American Civil Liberties. CRC Press. p. 145. ISBN 978-0415943420. Archived from the original on January 14, 2021. Retrieved October 2, 2020. - ^ Amar, Akhil Reed (1992). "The Bill of Rights and the Fourteenth Amendment". Yale Law Journal. 101 (6): 1193–1284. doi:10.2307/796923. JSTOR 796923. Archived from the original on October 19, 2008. - ^ "Duncan v. Louisiana (Mr. Justice Black, joined by Mr. Justice Douglas, concurring)". Cornell Law School – Legal Information Institute. May 20, 1968. Archived from the original on January 14, 2021. Retrieved April 26, 2009. - ^ a b Levy, Leonard (1970). Fourteenth Amendment and the Bill of Rights: The Incorporation Theory (American Constitutional and Legal History Series). Da Capo Press. ISBN 978-0306700293. - ^ 677 F.2d 957 (1982) - ^ "Minneapolis & St. Louis R. Co. v. Bombolis (1916)". Justia. May 22, 1916. Archived from the original on January 14, 2021. Retrieved August 1, 2010. - ^ "Seventh Amendment – Civil Trials". U.S. Government Printing Office. U.S. Government Printing Office. 1992. p. 1464. Archived from the original on January 14, 2013. Retrieved July 4, 2013. - ^ Amy Howe (February 20, 2019). "Opinion analysis: Eighth Amendment's ban on excessive fines applies to the states". SCOTUSblog. Archived from the original on January 14, 2021. Retrieved February 20, 2019. - ^ Goldstone 2011, pp. 20, 23–24. - ^ a b Madison, P.A. (August 2, 2010). "Historical Analysis of the first of the 14th Amendment's First Section". The Federalist Blog. Archived from the original on November 18, 2019. Retrieved January 19, 2013. - ^ "Strauder v. West Virginia, 100 U.S. 303 (1880) at pp. 306-307". Justia US Supreme Court Center. March 1, 1880. Archived from the original on January 14, 2021. Retrieved April 3, 2020. - ^ Failinger, Marie (2009). "Equal protection of the laws". In Schultz, David Andrew (ed.). The Encyclopedia of American Law. Infobase. pp. 152–153. ISBN 978-1438109916. Archived from the original on July 24, 2020. The equal protection clause guarantees the right of "similarly situated" people to be treated the same way by the law. - ^ "Fair Treatment by the Government: Equal Protection". GeorgiaLegalAid.org. Carl Vinson Institute of Government at University of Georgia. July 30, 2004. Archived from the original on March 20, 2020. Retrieved July 24, 2020. The basic intent of equal protection is to make sure that people are treated as equally as possible under our legal system. For example, it is to see that everyone who gets a speeding ticket will face the samEpocedures [sic!]. A further intent is to ensure that all Americans are provided with equal opportunities in education, employment, and other areas. [...] The U.S. Constitution makes a similar provision in the Fourteenth Amendment. It says that no state shall make or enforce any law that will "deny to any person within its jurisdiction the equal protection of the law." 
Venn diagrams are a convenient way of representing data and the relationships between sets. In the basic form, two circles overlap each other: details that tell how the subjects are alike are written where the circles overlap, and details that tell how the subjects are different go in the outer parts of the circles. In a science lesson comparing plants, for example, "trees have a trunk" belongs in one outer region and "other plants have stems" in the other, with shared features in the middle. (For Venn diagrams used in reading and writing, see our compare-and-contrast materials.)

Venn diagrams were invented by John Venn (1834-1923), a British mathematician who developed the idea of using a diagram to represent sets; inventing this type of diagram was, apparently, pretty much all John Venn ever accomplished. To add insult to injury, much of what we refer to as Venn diagrams are actually Euler diagrams, which is why they are also called Venn-Euler diagrams. If we have two or more sets, a Venn diagram can show the logical relationship among these sets as well as the cardinality of those sets, which also makes Venn diagrams useful for illustrating relationships in statistics. There are more than 30 symbols used in set theory, but only three of them are needed to understand the basics.

A Venn chart is a helpful tool for solving math problems that require logical thinking and deductive reasoning. Word problems generally give you two or three classifications and a bunch of numbers; you enter the given numbers on the diagram and then use them to figure out the remaining regions. Data that refer to two categories call for a two-circle diagram, and data that refer to three categories call for three circles. The overlap needs care: if some students play both of two sports, simply adding the two team counts puts more students on the Venn diagram than we actually have, because the students who play both belong in the overlap. To find the number of students in the overlap, subtract the total number of students given from the inflated number on the diagram. In one such problem, the completed diagram shows that 110 students play only hockey. Classic textbook exercises range from defining sets such as "the set of people who believe that Elvis is still alive" to a survey conducted on a sample of persons with reference to their knowledge of English, French, and German, whose results are presented in a three-circle diagram. A worked sketch of the overlap arithmetic follows below.

This page has a set of printable Venn diagram worksheets for teaching math, made for primary students in grades 1 to 7 and based on the Singapore math curriculum. The worksheets have students complete, draw, and analyze Venn diagrams, and cover topics such as factors and the greatest common factor, multiples, prime and composite numbers, and rates and ratios. Each sixth-grade worksheet is a printable activity sheet with several exercises and an answer key attached to the second page. The worksheets come as free PDFs and are made for math students in ESL or native-speaking classrooms, and they can also be used for online math education, tutoring, and homeschooling. Once the concept is understood, students will find the problems challenging but engaging.
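The overlap arithmetic described above is the inclusion-exclusion principle. Here is a minimal Python sketch of a two-circle problem; the roster sizes, and the assumption that every student plays at least one of the two sports, are hypothetical rather than taken from any particular worksheet:

```python
# Two-circle Venn diagram word problem solved by inclusion-exclusion.
# Hypothetical data: 40 students in total, 25 on the hockey roster,
# 22 on the soccer roster, and every student plays at least one sport.
total = 40
hockey = 25
soccer = 22

# Adding the two rosters double-counts students who play both sports,
# so the overlap is the inflated sum minus the actual total:
# |H and S| = |H| + |S| - |H or S|
both = hockey + soccer - total
only_hockey = hockey - both  # left-only region of the diagram
only_soccer = soccer - both  # right-only region of the diagram

print("both sports:", both)          # 7
print("only hockey:", only_hockey)   # 18
print("only soccer:", only_soccer)   # 15
```

The three printed numbers fill the three regions of the diagram and, as a check, sum back to the 40 students.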
Word diagrams in teaching classical conditioning.

Much of the richness of the discipline of behavior analysis is in the study of complex behavior-environment relations or contingencies. This richness, however, poses one of the major difficulties in teaching behavior analysis, because complex relations between behavior and the environment are difficult to describe using prose definitions. Recognizing this problem, several observers have proposed diagramming systems for clarifying contingency relations (Goldwater & Acker, 1995; Hummel, Kaeck, Bowes, & Rittenhouse, 1994; Malott, 1992; Mechner, 1959; Snapper, Kadden, & Inglis, 1982). Relatedly, educators have also advocated using diagrams to represent complex behavior-environment relations in order to help students learn these concepts (Goldwater & Acker, 1995; Mattaini, 1995; Michael & Shafer, 1995).

The advocates of diagrams to teach behavior-analysis concepts have advanced persuasive arguments in favor of using diagrams that have considerable intuitive appeal. However, research into the effectiveness of diagrams in communication and teaching has been lacking. The present experiment examined the effectiveness of a simple word diagram in teaching classical conditioning. Basic word diagrams have long been used to illustrate classical conditioning in introductory psychology texts and texts in the psychology of learning (e.g., Keller & Schoenfeld, 1950, p. 19, p. 31), even though the use of these diagrams has never been empirically validated. In an informal survey of behavior-analysis textbooks, Goldwater and Acker (1995) expressed concern that recent textbooks have omitted diagrams. In the present study, students used diagrams as part of a self-instructional concept-teaching program. The diagrams supplemented ordinary text as a means of showing how the definition applied to specific examples of classical conditioning.

Sixty university students from introductory psychology participated to fulfill a course requirement. Their grades were independent of performance in the study. All participants complied with the experimental procedures.

When students arrived at the research site they obtained a materials package that included written directions. Initially, students read a 500-word introductory lesson on classical conditioning concepts. This lesson defined and exemplified unconditioned reflexes, conditioned reflexes, unconditioned stimuli (US), unconditioned responses (UR), neutral stimuli (NS), conditioned stimuli (CS), and conditioned responses (CR). The lesson also compared and contrasted classical conditioning with both pseudoconditioning and operant conditioning.

After reading the introductory lesson, the students read a conceptual exercise consisting of examples and nonexamples of classical conditioning. Each of the examples was based on a published account of human classical conditioning. One of the nonexamples illustrated operant conditioning and one illustrated pseudoconditioning. Beneath each item was an "analysis" that identified the item as an example or a nonexample. For examples, the analysis identified the US, UR, NS, CS, and CR. For nonexamples, the analysis explained why the item lacked the critical features of classical conditioning and, when applicable, identified the item as an instance of operant conditioning or pseudoconditioning.
After completing the conceptual exercise, students took a posttest consisting of six novel (i.e., previously unseen) items: three examples of classical conditioning (Ellson, 1941; Razran, 1949; Vaitl, Gruppe, & Kimmel, 1985) and three nonexamples. One nonexample was an instance of operant conditioning, one was an instance of pseudoconditioning, and one was an instance of neither operant conditioning nor pseudoconditioning. For each item, each student was required to classify the item as an example or a nonexample and to analyze the presence or absence of the critical features (i.e., US, UR, NS, CS, and CR). For the nonexamples, each student was also asked to identify them as examples of operant conditioning or of pseudoconditioning. The posttest instructions did not specify whether the student should draw diagrams in answering the items.

Experimental Treatments and Design

There were two experimental treatments: matched versus unmatched examples/nonexamples and the use of diagrams. In the matched condition, the conceptual exercise consisted of 14 example/nonexample pairs presented side by side. The examples (Bierley, McSweeney, & Vannieuwkerk, 1985; Cannon & Baker, 1981; Cannon, Best, Batson, & Feldman, 1983; Clarke & Hayes, 1984; Dekker, Pelser, & Groen, 1964; Doerr, 1981; Efron, 1964; Hayduk, 1980; Kasatkin & Levikova, 1932; McConaghy, 1970; Quarti & Renaud, 1964; Switzer, 1933; Watson & Rayner, 1920; Wolpe, 1958) illustrated classical conditioning. The matched nonexamples were modified versions of the examples, changed so that they did not illustrate classical conditioning. Of the 14 nonexamples in the matched condition, 6 were examples of pseudoconditioning, 3 were examples of operant conditioning, and 5 were examples of neither pseudoconditioning nor operant conditioning. With minor modifications, the examples and nonexamples of classical conditioning used in the present study are included in Grant and Evans' (1994, pp. 412-417, pp. 506-512) self-instructional exercise over classical conditioning.

In the unmatched condition, the conceptual exercise consisted of 14 items, 10 examples and 4 nonexamples. The unmatched exercise was constructed by omitting the matched example or nonexample, as appropriate. One of the nonexamples illustrated operant conditioning and one illustrated pseudoconditioning. The design of the unmatched condition followed Miller and Weaver's (1976) concept-teaching recommendations, which designate that 30% of the items should be nonexamples. The comparison of the matched and unmatched treatments was included because the matched treatment permitted students to see 40% more diagrams than the unmatched treatment, enhancing any possible effects of diagrams.

In the diagram condition, the introductory lesson and each example in the conceptual exercise contained a diagram representing the classical conditioning. In the nondiagram condition, the diagram was omitted. Figure 1 represents the diagram used in the introductory lesson. The remaining diagrams were similar to Figure 1, differing only in the specific stimuli and responses contained in the example. As illustrated in Figure 1, the lesson emphasized the predictiveness of the CS in signaling the US as the key factor in conditioning, rather than temporal contiguity between the CS and the US.
Students were randomly assigned to one of the four conditions (15 participants per condition), formed by crossing lesson format (matched examples and nonexamples versus unmatched examples and nonexamples) with diagrams (present versus absent).

Two independent raters scored all the posttests. For the classification task, the raters scored each item as correct or incorrect on the basis of whether the student had properly identified examples and nonexamples. For analysis-task examples, the raters individually scored the correctness of the student's identification of the US, UR, NS, CS, and CR. For analysis-task nonexamples, the raters scored the correctness of the student's identification of nonexamples as instances of operant conditioning, of pseudoconditioning, or of neither operant conditioning nor pseudoconditioning. If the first two raters disagreed, a third rater cast the deciding vote. Interrater reliability was calculated by dividing the number of agreements by the number of agreements plus the number of disagreements. On the classification task, reliability of posttest grading was .99. On the analysis task, the reliability coefficient was 1.00 for nonexamples; for examples, reliability coefficients ranged from .95 to .97, with an overall mean of .96.
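The reliability index just described is simple percent agreement. As a quick illustration, here is a minimal Python sketch of that computation; the two rating vectors are hypothetical and are not the study's data:

```python
# Interrater reliability as percent agreement:
# agreements / (agreements + disagreements).
# Hypothetical ratings of five posttest items by two independent raters.
rater_a = ["correct", "correct", "incorrect", "correct", "incorrect"]
rater_b = ["correct", "incorrect", "incorrect", "correct", "incorrect"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
disagreements = len(rater_a) - agreements
reliability = agreements / (agreements + disagreements)

print(f"reliability = {reliability:.2f}")  # 4 of 5 items agree -> 0.80
```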
Results and Discussion

A 2 x 2 x 2 x 2 (diagrams/nondiagrams x matched/unmatched format x examples/nonexamples x classification/analysis) repeated-measures analysis of variance was conducted on the posttest data. The proportion of responses correct per item served as the dependent measure for purposes of data analysis. On the posttest, the students answered nonexample items (M = .76 correct responses per item) correctly more often than example items (M = .65 correct responses per item), F(1, 56) = 6.90, p < .05. In addition, the students correctly responded more often to the classification task (M = .825) than to the analysis task (M = .59), F(1, 56) = 112.56, p < .01.

The major significant result was the diagrams/nondiagrams x examples/nonexamples x classification/analysis interaction, F(1, 56) = 8.86, p < .01. Newman-Keuls tests indicated that students using diagrams correctly classified nonexamples (M = .93) more frequently than did students not using diagrams (M = .83), p < .01. In addition, students using diagrams correctly analyzed examples (M = .61) more frequently than did students not using diagrams (M = .49), p < .01.

Of the 30 students in the diagram group, 15 (drawers) happened to draw their own diagrams in answering the posttest example items and the other 15 (nondrawers) did not. In order to examine differences between these two groups, a 2 x 2 x 2 (drawers/nondrawers x examples/nonexamples x classification/analysis) repeated-measures analysis of variance was conducted. Students correctly answered classification-task items (M = .84) more often than they correctly answered the analysis-task items (M = .62), F(1, 28) = 4.25, p < .05. Drawers answered posttest items (M = .80) correctly more often than did nondrawers (M = .66), F(1, 28) = 6.30, p < .05. Also significant in this analysis was the drawers/nondrawers x examples/nonexamples x classification/analysis interaction, F(1, 28) = 8.93, p < .01. Newman-Keuls tests indicated that drawers correctly classified examples (M = .82) more often than did nondrawers (M = .69), p < .05. Drawers classified nonexamples correctly (M = 1.00) more often than did nondrawers (M = .87), p < .01. Finally, drawers correctly analyzed examples (M = .75) more often than did nondrawers (M = .47), p < .01.

To summarize, the major finding of this study was that diagrams of classical conditioning improved students' learning of that concept. On the posttest, students given diagrams (a) more often correctly classified novel nonexamples of classical conditioning and (b) more often correctly analyzed novel examples of classical conditioning by being able to correctly identify the US, UR, NS, CS, and CR.

The beneficial effects of diagrams provide general support for advocates of the use of diagrams to teach behavior-analysis concepts (Goldwater & Acker, 1995; Malott, 1992; Mattaini, 1995; Michael & Shafer, 1995). Although there are differences in the proposed systems of diagrammatic representation, all the systems share the representation of temporal sequences of events in spatial dimensions on the printed page. Students in the present study benefited from relatively simple diagrams that illustrated, in spatial dimensions, the temporal sequences among the US, UR, CS, and CR. Behavior-analysis procedures generally involve temporal sequences of events (e.g., the effects of antecedents and consequences on responses), and it may be that diagrams make these sequences easier to understand and to learn by representing them in spatial dimensions. In comparison to the systems for diagramming behavior-analysis concepts (Goldwater & Acker, 1995; Hummel et al., 1994; Malott, 1992; Mechner, 1959; Snapper et al., 1982), the diagrams the students used in the present study were simple ones that required no specific knowledge of diagramming symbols. Students may stand to benefit even more from learning more sophisticated diagramming systems.

The diagrams used in the present study provide support for the general proposition that diagrams illustrating concept structure are a useful adjunct to concept-teaching methods. Waddill, McDaniel, and Einstein (1988) suggest that research concerning the instructional effectiveness of visual aids should identify the specific contexts and conditions that are appropriate for presenting visual aids. The present study suggests that diagrams are appropriate at the definitional and example-analysis stages of concept teaching. This finding is especially important because research and advice in concept teaching (Grant, 1986; Merrill, Tennyson, & Posey, 1992; Tennyson & Cocchiarella, 1986) has generally emphasized only standard prose concept definitions. It may be that word diagrams should often augment the use of standard prose in teaching concepts.

The beneficial effect of diagrams on the example-analysis task indicated that diagrammatic representation was particularly effective in teaching students to identify the relationships among the stimuli and responses in classical conditioning. The finding that diagrams improved nonexample classification indicates that diagrams were also particularly effective in reducing errors of overgeneralization or overextension, in which nonexamples of a concept are classified as examples. The diagrams were of no help in the nonexample-analysis task, in which the students were required to specify whether nonexamples were instances of operant conditioning or pseudoconditioning. Because the diagrams did not represent these concepts, the diagrams could not be expected to improve student performance in analyzing nonexamples.
However, the diagrams were also of no help in classifying examples, and they should have been of assistance if the diagrams had acted to improve the students' abilities to identify the components of classical conditioning. Students in the diagram group who drew diagrams in answering the posttest items were better at example classification than were diagram-group students who did not draw diagrams. This result suggests that diagrams improved both example and nonexample classification for those students who actually made use of the diagrams. Although suggestive of additional benefits of diagrams, the correlational nature of the comparisons between the drawers and nondrawers makes it difficult to come to any firm conclusions. For example, the drawers could simply have been more motivated, which led them to draw diagrams and do better on the posttest, without any necessary functional relationship between drawing and improved test performance.

The effectiveness of diagrams leads naturally to considerations concerning how teachers can and should implement diagrams in their written instructional materials, lectures, and web-based materials. Although this issue is beyond the scope of the present data, some rough guidelines have emerged from the author's work. First, because diagrams tend to focus the student's attention, diagrams may be more effective for teaching relatively difficult concepts like classical conditioning, on which students need to spend more time, than for simpler concepts. Second, many diagrams illustrate temporal sequences of events, and these types of diagrams lend themselves well to use in overheads in lectures because the instructor is able to point out subsections of the diagram and explain the component processes the diagram illustrates. This kind of step-by-step highlighting of parts of diagrams is also increasingly possible through computer-based and web-based instruction. Diagrams are particularly well suited to web-based instruction because they are relatively easy to read on a computer screen, unlike long passages of text. Third, as suggested by the comparisons of the drawers and nondrawers in the current study, there may well be important benefits in teaching and encouraging students to draw diagrams. Diagramming methods may enable students to organize and recall concepts and principles more effectively than traditional methods such as reading, rereading, and reciting text.

References

ANDERSON, R. C., & KULHAVY, R. W. (1972). Learning concepts from definitions. American Educational Research Journal, 9, 385-390.
BIERLEY, C., MCSWEENEY, F. K., & VANNIEUWKERK, R. (1985). Classical conditioning of preferences for stimuli. Journal of Consumer Research, 12, 316-323.
CANNON, D. S., & BAKER, T. B. (1981). Emetic and electric shock alcohol aversion therapy: Assessment of conditioning. Journal of Consulting and Clinical Psychology, 49, 20-33.
CANNON, D. S., BEST, M. R., BATSON, J. D., & FELDMAN, M. (1983). Taste familiarity and apomorphine-induced taste aversions in humans. Behavior Research and Therapy, 21, 669-673.
CLARKE, J. C., & HAYES, K. (1984). Covert sensitization, stimulus relevance and the equipotentiality premise. Behavior Research and Therapy, 22, 451-454.
DEKKER, E., PELSER, H. E., & GROEN, J. (1964). Conditioning as a cause of asthmatic attacks; a laboratory study. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 116-131). New York: Springer.
DOERR, H. O. (1981). Cognitive derivation of generalization stimuli: Separation of components. Bulletin of the Psychonomic Society, 17, 73-75.
EFRON, R. (1964). The conditioned inhibition of uncinate fits. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 132-143). New York: Springer.
ELLSON, D. G. (1941). Hallucinations produced by sensory conditioning. Journal of Experimental Psychology, 28, 1-20.
GOLDWATER, B. C., & ACKER, L. E. (1995). A descriptive notation system for contingency diagramming in behavior analysis. The Behavior Analyst, 18, 113-121.
GRANT, L. (1986). Categorizing and concept learning. In H. W. Reese & L. J. Parrott (Eds.), Behavior science: Philosophical, methodological, and empirical advances (pp. 139-162). Hillsdale, NJ: Lawrence Erlbaum.
GRANT, L. (1996). Positive reinforcement: A self-instructional exercise. [WWW Document]. URL http://server.bmod.athabascau.ca/html/prtut/reinpair.htm
GRANT, L., & EVANS, A. E. (1994). Principles of behavior analysis. New York: Harper-Collins.
GRANT, L., MCAVOY, R., & KEENAN, J. B. (1982). Prompting and feedback variables in concept programming. Teaching of Psychology, 9, 173-177.
HAYDUK, A. W. (1980). Increasing hand efficiency at cold temperatures by training hand vasodilation with a classical conditioning-biofeedback overlap design. Biofeedback and Self-Regulation, 5, 307-326.
HUMMEL, J. H., KAECK, D. J., BOWES, R. L., & RITTENHOUSE, R. D. (1994). Diagramming operant processes. The ABA Newsletter, 17, 4-5.
JOHNSON, D. M., & STRATTON, R. P. (1966). Evaluation of five methods of teaching concepts. Journal of Educational Psychology, 57, 48-53.
KASATKIN, N. I., & LEVIKOVA, A. M. (1932). On the development of early conditioned reflexes and differentiations of auditory stimuli in infants. Journal of Experimental Psychology, 18, 1-19.
KELLER, F. S., & SCHOENFELD, W. N. (1950). Principles of psychology. New York: Appleton-Century-Crofts.
MALOTT, R. W. (1992). Saving the world with contingency diagramming. The ABA Newsletter, 15, 45.
MATTAINI, M. A. (1995). Contingency diagrams as teaching tools. The Behavior Analyst, 18, 93-98.
MCCONAGHY, N. (1970). Penile response conditioning and its relationship to aversion therapy in homosexuals. Behavior Therapy, 1, 213-221.
MECHNER, F. (1959). A notational system for description of behavioral processes. Journal of the Experimental Analysis of Behavior, 2, 133-150.
MERRILL, M. D., TENNYSON, R. D., & POSEY, L. O. (1992). Teaching concepts: An instructional design guide. Englewood Cliffs, NJ: Educational Technology Publications.
MICHAEL, J., & SHAFER, E. (1995). State notation for teaching about behavioral procedures. The Behavior Analyst, 18, 123-140.
MILLER, L. K. (1980). Principles of everyday behavior analysis (2nd ed.). Monterey, CA: Brooks/Cole.
MILLER, L. K., & WEAVER, F. H. (1976). A behavioral technology for producing concept formation in university students. Journal of Applied Behavior Analysis, 9, 289-300.
PETERSON, N. (1978). An introduction to verbal behavior. Grand Rapids, MI: Behavior Associates.
QUARTI, C., & RENAUD, J. (1964). A new treatment of constipation by conditioning: A preliminary report. In C. M. Franks (Ed.), Conditioning techniques in clinical practice and research (pp. 219-227). New York: Springer.
RAZRAN, G. (1949). Sentential and propositional generalization of salivary conditioning to verbal stimuli. Science, 109, 447-448.
REESE, D. G., & WOOLFENDEN, R. M. (1973). Behavior analysis of everyday life: A program for the generalization of behavioral concepts. Kalamazoo, MI: Behaviordelia.
SNAPPER, A. G., KADDEN, R. M., & INGLIS, G. B. (1982). State notation of behavioral procedures. Behavior Research Methods and Instrumentation, 14, 329-342.
SWITZER, S. A. (1933). Disinhibition of the conditioned galvanic skin response. Journal of General Psychology, 9, 77-100.
TENNYSON, R. D., & COCCHIARELLA, M. J. (1986). An empirically based instructional design theory for teaching concepts. Review of Educational Research, 56, 40-71.
TENNYSON, R. D., STEVE, M. W., & BOUTWELL, R. E. (1975). Instance sequence and analysis of instance attribute representation in concept acquisition. Journal of Educational Psychology, 67, 821-827.
VAITL, D., GRUPPE, H., & KIMMEL, H. D. (1985). Contextual stimulus control of conditional vasomotor and electrodermal reactions to angry and friendly faces. The Pavlovian Journal of Biological Science, 20, 124-131.
WADDILL, P. J., MCDANIEL, M. A., & EINSTEIN, G. O. (1988). Illustrations as adjuncts to prose: A text-appropriate processing approach. Journal of Educational Psychology, 80, 457-464.
WATSON, J. B., & RAYNER, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1-20.
WOLPE, J. (1958). Psychotherapy by reciprocal inhibition. Stanford, CA: Stanford University Press.

Please address requests for reprints and other correspondence about this article to Lyle K. Grant, Psychology Centre, Athabasca University, Athabasca, Alberta, Canada T9S 3A3.

Author: Grant, Lyle K. Publication: The Psychological Record. Date: Mar 22, 2002.
An exceptional set of explorers surveys the caves inside a dying glacier on the side of Mt. Hood in Oregon. All photos courtesy of Brent McGregor, except where noted.

Editor's Note: High-profile scientific questions permeate the public consciousness, and surveying, mapping, and field data collection are the "ground truth" element of geophysical scientific research. This article presents a collaboration among scientists, mountaineers, and field data collection experts providing data to shed light on one of the most compelling scientific questions of our age.

Mountaineers climb mountains. Cavers explore caves. Surveyors measure and map. In Oregon, a small team of people with all three sets of skills and passions has been exploring caves under the Sandy Glacier, on the northwest side of Mt. Hood, two-thirds up the 11,250-foot mountain. To assist them, they recruited glaciologists, geologists, and invertebrate collectors to provide scientific advice, as well as dozens of volunteer "sherpas" from the local mountaineering, mountain rescue, and caving communities to carry nearly 2,000 pounds of equipment 3,800 feet up the snow- and ice-covered mountain. The result, so far, is a set of maps, measurements, and photographs—many of them spectacular—that describe a dying glacier and document the glacier caves before these majestic natural structures are gone forever.

Why Glaciers Matter

Glaciers are changing with the planet's warming climate. "Obviously, they're receding, but our interest is how fast," says Andrew Fountain, professor of geography and geology at Portland State University. "Are all of them changing the same way? If not, why not? How much of that water is contributing to sea level rise?"

"Scientifically," Fountain explains, "these caves are important in that they form from running water: the snow-melt on the surface of the glacier makes its way through the ice and melts a little bit of it. If you have enough water draining at the same spot on the surface, then this passage of water melts out the glacier from the interior and develops a cave." When the glaciers are thin, the ice doesn't squeeze shut very quickly, so the cave can remain open and fill with air, which circulates and causes rapid melting at the base of the glacier.

Water flowing through glaciers helps to control the movement of the ice as it slides down the side of the mountain. The rate at which a glacier slides depends on the amount of water pressure at its base. "It's similar to air hockey," says Fountain. "As long as those pucks have a lot of air pressure underneath them, they slide easily, but if you turn off the air they hardly slide at all. The caves, what we call conduits, help convey the water and partly control the pressure underneath the glaciers."

This phenomenon has not been studied much because glacier caves are very dangerous. Glaciologists and climate scientists have tracked the melting of more than 200 glaciers around the world, almost always gathering their data from the surface. "This is not something that our traditional survey techniques can reveal," Fountain points out. Satellite imagery, aerial photogrammetry and lidar scanning, and GPS units on the surface reveal only surface changes of the glacier.
“But [the existence of these caves] means that things are happening underneath that can be really quite dramatic.” The Sandy Glacier Caves For years, Brent McGregor, a photographer, mountaineer, and woodworker, explored 17 glaciers in Oregon, rappelling into crevasses and photographing beautiful snow formations inside them, searching in vain for caves. Finally, in 2011, he came across a YouTube video of people inside a cave in the Sandy Glacier. He searched for it with Eduardo “Eddy” Cartaya, a U.S. Forest Service law enforcement officer, former military officer, search-and-rescue volunteer, and skilled rock- and ice-climber, and Scott Linn, a mountain rescuer and speleologist (scientific studier of caves). Over several trips in 2011 and 2012, they discovered a series of caves, as beautiful as they are impressive—which they named Snow Dragon, Pure Imagination, and Frozen Minotaur—and a moulin, a nearly vertical ice shaft from the surface of the glacier to the bedrock below. They have since explored and mapped about 7,000 feet of passages beneath the Sandy. “The biggest room is in Snow Dragon cave, and it is equal in record to the biggest lava tube room in Oregon,” says McGregor. “It is about 44 feet high and about 90 feet wide in a giant room that stays quite large for about 100 feet.” However, McGregor and Cartaya noticed that the caves were growing and suspected that it had to do with the air circulating through their openings. So, in the summer of 2012, having already surveyed most of Snow Dragon, they decided to organize a large expedition to survey the Pure Imagination and Frozen Minotaur caves and to collect rock, ice, soil, water, and seedling samples. Cast of Characters McGregor, who is in his 60s, and Cartaya, 44, draw on years of experience mountaineering and caving. However, when they asked Fountain to visit their camp, he was skeptical that they had found anything of significant scientific value and that their survey numbers would hold up to close scrutiny. “Then,” he recalls, “I get there and it’s like, ‘This is amazing!’ It’s unusual to have caves that big. They are scientifically interesting and aesthetically beautiful, fascinating.” He was equally impressed with the two explorers and their surveying skills. “They are unique in that they were willing to go into these very tight spaces under real live water conditions. They’re doing a pretty precise job, and with some repeatability, so we had confidence in their numbers. That and their awareness of safety convinced me that they were doing a great job.” Cartaya’s surveying and mappingskills come from his military training. “I went to the U.S. Military Academy at West Point,” he says, “and took a lot of mapping courses that were designed for military applications: aerial photography, changing scales, a lot of land navigation, how to compute distances over canyons, using trigonometry, taking the angle measurements, and so on. To measure the roofs of the caves, you do the same thing. I also got an aerospace engineering degree there. Then, of course, I have been on many other cave surveys. 
You watch and you learn the process from experienced surveyors—how to set up a survey station and use the tools and do the sketching.” McGregor and Cartaya’s team also includes one of Fountain’s graduate students, Gunnar Johnson, who studies the water chemistry of glaciers and microbes that live underneath the ice; Robert McGown, who collects samples of ash deposition layers in the glacier ice; Matt Skeels, a speleologist who has discovered and mapped many caves and is the team’s main cartographer; and Neil Marchington, a speleologist who is the project’s grant writer and leads the invertebrate collections. They also consult with Jason Gulley, a glaciologist at Michigan Technological University, who has explored glacial caves in Nepal, Alaska, and Norway. Surveying glacier caves poses enormous challenges. First, McGregor and Cartaya needed to get all the required equipment to the base camp. “We were carrying 60 to 100 pound loads per person every time we went up the mountain,” says McGregor. “These weights made the approach, already technical and demanding, extremely arduous and mentally consumptive,” says Cartaya. The contents of their packs included wetsuits, ropes, crampons and ice axes, pickets, ice screws, survey gear, avalanche probes, vertical climbing gear, cave suits, camera and video equipment, freeze-dried dinners, oatmeal, cocoa, high energy bars, and even a 42-pound RV battery. “We needed 1,800 pounds of gear for our expedition, and Mount Hood Forest denied us the permit for a special use of a helicopter because it’s in a wilderness area and because they didn’t feel our mission required an air drop.” “Therefore,” Cartaya continues, “we had to gather up a list of friends to help us. We had 50 volunteer ‘sherpas’ from Search & Rescue, from mountaineering clubs, from the caving clubs, each carrying 30 to 80 pounds of gear. Glacier caves are by nature dangerous, wet, cold, and a long ways from help, so we had to have a lot of supplies at our base camp. We had a medical station equal to what an ambulance would carry and medics in the camp. We had Search & Rescue people standing by.” Second, they had to deal with the very cold and wet environment, where even a minor injury could have left one of them dead from hypothermia. “We’re in water dropping down canyon-like features, setting picket anchors in the ice walls, and rappelling over waterfalls,” says McGregor. “So, we needed wetsuits and had to try to stay very warm, because the water temperature is 32.5 degrees.” “Some of the surveyors wear dry suits, some wear wetsuits, and some just wear PVC cave suits, but those aren’t as good,” McGregor continues. “You’ve got to layer up underneath. Regular survey teams last about two and a half hours, and then their fingers get cold—especially the sketcher, to the point that he can’t sketch any more—so they come out of the cave. This was in July, when the sun was out and they could warm up for half an hour and then go back and survey some more later in the day. When Eddy and I survey, we’re a little more hardcore. We just suffer through it. We have extra gloves and whatever else we need.” The wet conditions are also hard on the equipment. Halfway through one survey, the laser device got too wet and stopped working, so they had to stop. “Then we bought a better one,” McGregor recalls, “and cased it in a little Pelican box with a lanyard so that we could keep it up out of the water.” The third challenge is the noise inside the caves. 
“The water is so loud that you’re shouting as loud as you can from station to station and it’s difficult to hear each other,” says McGregor. “So, you would yell the measurements to the person up at the next station, and he would yell them back to you to confirm that they were the same.” The fourth challenge is the darkness. “The only natural light you’re going to have is when you’re by the cave entrance or beneath the skylights, but these caves have very few skylights,” says McGregor. “So, once you’re about two or three stations away from the entrance, you start to enter the dark zone. When you’re back in 500 feet, you see nothing. It is pitch black. We all have caving helmets that have really good caving lights on them. Cavers are used to doing everything in the dark, so providing the light we need is part of the survey.” Each new survey station is illuminated with a small light so that it can be seen from the previous one. Additionally, the wet environment often includes fog and spray zones in the caves. “There are parts of the caves where it’s very humid and there is a lot of water vapor in the air, so it’s challenging to see to the next station, sometimes,” McGregor says. When surveying the moulin, additional dangers include rock fall and running water flowing down the inside of the giant pit during the day. At times, Cartaya and McGregor waited until late at night so that the lower temperatures would minimize the risks, then McGregor dangled in the middle of the pit to take the measurements, and still the streaming water drenched him. Finally, some of the passages require crawling, “so you have to crawl, with all the survey tools and the sketch books, through these little water-filled passages,” says Cartaya. “You would go through some of these really difficult places with low ceilings and a lot of powerful water coming in, where you could barely get through,” says McGregor. “But struggling through one rough passage, where the water was making us both hypothermic, eventually led us to a large room of glacial ice: our first view of Frozen Minotaur Cave.” Field Data Collection Organizing the expedition, hauling their equipment up the mountain, rappelling down the moulin and across waterfalls, crawling through narrow passages… All that just got McGregor and Cartaya where they needed to be. Then, they surveyed. “We are making a three-dimensional map of all the cave systems,” Cartaya explains. “It’s cave surveying, so it’s a little different from land surveying.” At every station, they take five measurements: to the next station, to the left wall, to the right wall, up to the ceiling, and down to the floor. Sometimes, McGregor and Cartaya are the only two on a survey team; other times they have four people helping. One person’s only job is to set up the next station, anywhere from 10 feet to 100 feet away from the previous one. “We don’t like to set up stations too far apart because we want to sketch in more detail between them,” says McGregor. They set up the stations “near significant features, such as a big pit or a hole in the ceiling or a water fall,” Cartaya points out. McGregor likes to set up the stations, which is the easiest job, because it allows him three or four minutes to photograph before he has to set up the next one. “I always take a photo back at the station I just came from. 
Sometimes, those photographs help the mapmaker visualize the shape of the cave.” To measure distances, they use a Bosch GLR 500 laser distance measurer, laid on or against the actual survey station, which could be a boulder or an outcrop of rock. They repeat each measurement three times for accuracy and enter it into a notebook, together with a quick sketch. “We put a little rock cairn and some ribbon and write the number of the station on the ribbon,” McGregor says, “and we can use those for the next survey, six months later, a year later, unless they’ve been knocked down from melt water, falling ice, or some other reason. Many of the stations have remained for over two and a half years now.” They use a Brunton survey compass to record the azimuth of the survey line from magnetic north and a Brunton inclinometer to get the slope of the floor. They measure all the angles to within a half degree and all the distances to within a tenth of a foot. They do not use a total station or even a standard pole because, in most cases, they would not fit through very difficult passageways. One of the people on the team is the sketcher. “He’s recording all the figures and actually making a sketch of the shape of the cave,” says McGregor. “He’s sketching in any big boulders, any waterfalls coming out of the ceiling, or anything else of interest. We found a feather 1,800 feet inside the upper end of Snow Dragon Cave and marked its location because it was very unusual. We sent it to the Smithsonian Institute where it was identified as coming from a Mallard duck. It melted through the ice and had been there for probably hundreds of years.” “If there is any significant side tunnel that dead-ends,” Cartaya says, “we’ll do what’s called a ‘splay shot’: we’ll shoot the laser down it, and take the compass heading, so that we can draw that into the map as a side feature.” After each survey, they give all their data to their cartographer, Matt Skeels, who uses a freeware program called Compass to crunch the numbers and come up with a survey line. “While we are doing the surveying, we are also sketching in cross-sectional views,” Cartaya explains. “So, the final map will have a plan view, which is the bird’s eye view, a profile view, which is an ant farm view, and at interesting places along the cave it will have cross-sectional views, showing the shape of the passage as it appears when you are standing inside the cave.” “Often,” Cartaya continues, “if a station is almost under water or it is so violent or cold that you can’t possibly sit there and sketch or take readings, we will do back readings. We’ll crawl through the passage, set up the survey station, then, when we get to the other side of it, we shoot back into it and take a back azimuth with the compass and a reverse sighting with the inclinometer. Of course, the laser measurement is the same.” A key objective of each survey is to calculate the volume of ice that has melted each year. So, in order to survey a huge pit, they build a highline. “It is like a trolley line, over the top of the pit,” Cartaya explains. “Then we dangle a surveyor down a rope straight down the middle of it so that he can take cross-sectional laser readings.” The moulin’s opening has widened by 36 feet in two years, and, according to Cartaya, its volume has increased by 400% in just one year. In some places, the Snow Dragon cave is 10 feet wider than it was a year ago. This expedition was the first to record this loss of ice. 
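The reduction of these shots to a map is straightforward trigonometry. The sketch below is illustrative only; it is not the team's software (they process their data in the Compass cave-survey program), and the station values are made up. It shows how a single shot of laser distance, compass azimuth, and inclination could be converted to east/north/up offsets from the previous station, which is the same computation any cave-survey package performs. In practice a magnetic-declination correction would also be applied before plotting against true north, and the left/right/up/down readings at each station would be reduced the same way to sketch the passage walls.

    import Foundation

    // One survey shot: laser distance plus compass azimuth and clinometer inclination.
    // (Hypothetical values; real shots come from the Bosch laser and Brunton instruments.)
    struct Shot {
        let distance: Double      // feet, from the laser distance measurer
        let azimuth: Double       // degrees from magnetic north (Brunton compass)
        let inclination: Double   // degrees above (+) or below (-) horizontal
    }

    func radians(_ degrees: Double) -> Double { degrees * .pi / 180.0 }

    // Reduce a shot to east/north/up offsets from the previous station.
    func offsets(of shot: Shot) -> (east: Double, north: Double, up: Double) {
        let horizontal = shot.distance * cos(radians(shot.inclination))
        let east  = horizontal * sin(radians(shot.azimuth))
        let north = horizontal * cos(radians(shot.azimuth))
        let up    = shot.distance * sin(radians(shot.inclination))
        return (east, north, up)
    }

    // Example: a 47.3-ft shot bearing 212.5 degrees, sloping 8 degrees downhill.
    let shot = Shot(distance: 47.3, azimuth: 212.5, inclination: -8.0)
    let d = offsets(of: shot)
    print(String(format: "east %.1f ft, north %.1f ft, up %.1f ft", d.east, d.north, d.up))

Chaining these offsets station by station yields the survey line that the cartographer then fleshes out with the sketches and cross-sections.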
According to Fountain, the Sandy Glacier has retreated by at least 40% over the last 100 years, and his educated guess is that the ice used to be about 200 feet thicker. This survey’s findings reveal a glacier disintegrating from the inside out. Surveyors anywhere who have the skills and equipment to do it should consider surveying glacier caves while they still exist. It would greatly help the scientific study of glaciers, add to their professional experience, and give them some great stories to tell!
Kindergarten Math Measurement Worksheets
All concepts available under this subject are listed below. Worksheets are organized under various concepts within the subject. Click on a concept to see the list of all worksheets available for that concept.
- Measuring Magic Kindergarten students can practice their measuring and comparing skills using this imaginative worksheet.
- Heavy or Light: Measuring Weight Help your child practice his skills with measurements with this printable worksheet, which is all about weight.
- Measuring Inches: Inching Insects Get some measurement practice by inching along with some familiar sights.
- Comparing Weight Take measurement learning to the next level with this comparing weight worksheet featuring common objects, and practice reading numbers.
- Length and Width: Measure School Supplies Introduce measuring to your kindergartener with this simple and easy worksheet. Use the paper inch ruler to measure the length of each school supply.
- Measuring Length: Earthworms On this first grade math worksheet, kids use a ruler to measure four worms in inches. Then they compare lengths to find the shortest and longest worm.
- Which is Heavier? Give your child's logical reasoning skills a boost with this worksheet that asks him to put each group in order from lightest to heaviest.
- Color & Compare Weights of Objects Beginning measurement starts with relative sizes and weights; this worksheet asks your child to decide which object is heavier and adds coloring into the mix!
- Measuring Bug: Inches Practice measuring length in inches with your first grader with this cute buggy worksheet.
- Sizes: Small, Medium, and Large Look for small, medium, and large objects in the picture and color them according to the colors listed in the directions.
- Color & Compare: Ordering Weight Take measurement learning to the next level with this color and compare worksheet where your child will compare and order objects based on relative weight.
- Rapunzel Braid Measurement Rapunzel is locked in a tower with nothing to do but play with her hair. As the year goes on she decides to keep a braid chart for every foot her hair grows.
- Comparing Two Things In this worksheet, your child will be comparing two things in the pictures and will decide for himself which one is taller, older, or heavier.
- Measuring Length: Veggies On this first grade math worksheet, kids use a ruler to measure four vegetables in inches. Then they compare lengths to find the shortest and longest veggies.
- Making Comparisons Practice basic math and observation skills with your young one, using this mini book you can fold up yourself!
- Measuring Worms If you want to get your first grader excited about measurement, try measuring worms!
- Measuring Length: More Veggies On this first grade math worksheet, kids use a ruler to measure four vegetables in inches. Then they compare lengths to find the shortest and longest veggie.
- Compare Sizes: Fruit How big is each piece of fruit? Help your child learn about measuring sizes, small, medium and large, with this fun worksheet!
- Comparing Weights Do you know how much you weigh?
Introduce your kindergartener to the concept of measurement by weight with this fun comparison worksheet.
- Length and Width: Measure Space Cut out the inch ruler and measure length and width of the objects related to space. Your kindergartener will go galactic with our fun and simple worksheet!
- Length and Width: Measure Vehicles Your little guy will surely enjoy cutting out the inch ruler and measuring the length of each colorful vehicle!
- Animals Big and Small Students learn how to compare animals using the words “bigger” and “smaller.”
- Mid-Year Math Assessment: Measurement Do your students understand the concept of measurement? Assess their knowledge using this fun measurement worksheet!
- Measuring Inches Master measuring with a ruler using this straightforward practice sheet! Help your child understand how to read a ruler to measure inches with accuracy.
Spiral galaxies form a class of galaxy originally described by Edwin Hubble in his 1936 work The Realm of the Nebulae and, as such, form part of the Hubble sequence. Most spiral galaxies consist of a flat, rotating disk containing stars, gas and dust, and a central concentration of stars known as the bulge. These are often surrounded by a much fainter halo of stars, many of which reside in globular clusters. Spiral galaxies are named for their spiral structures that extend from the center into the galactic disc. The spiral arms are sites of ongoing star formation and are brighter than the surrounding disc because of the young, hot OB stars that inhabit them. Roughly two-thirds of all spirals are observed to have an additional component in the form of a bar-like structure, extending from the central bulge, at the ends of which the spiral arms begin. The proportion of barred spirals relative to barless spirals has likely changed over the history of the universe: only about 10% of spirals contained bars about 8 billion years ago, roughly a quarter did 2.5 billion years ago, and at present over two-thirds of the galaxies in the visible universe (Hubble volume) have bars. The Milky Way is a barred spiral, although the bar itself is difficult to observe from Earth's current position within the galactic disc. The most convincing evidence for the stars forming a bar in the galactic center comes from several recent surveys, including the Spitzer Space Telescope. Together with irregular galaxies, spiral galaxies make up approximately 60% of galaxies in today's universe. They are mostly found in low-density regions and are rare in the centers of galaxy clusters. Spiral galaxies may consist of several distinct components: a flat, rotating disc of stars and interstellar matter; a central stellar bulge; a bar-shaped arrangement of stars; spiral arms; and a spheroidal halo of stars. The relative importance, in terms of mass, brightness and size, of the different components varies from galaxy to galaxy. Spiral arms are regions of stars that extend from the center of barred and unbarred spiral galaxies. These long, thin regions resemble a spiral and thus give spiral galaxies their name. Naturally, different classifications of spiral galaxies have distinct arm-structures. Sc and SBc galaxies, for instance, have very "loose" arms, whereas Sa and SBa galaxies have tightly wrapped arms (with reference to the Hubble sequence). Either way, spiral arms contain many young, blue stars (due to the high mass density and the high rate of star formation), which make the arms so bright. A bulge is a large, tightly packed group of stars. The term refers to the central group of stars found in most spiral galaxies, often defined as the excess of stellar light above the inward extrapolation of the outer (exponential) disk light. Using the Hubble classification, the bulge of Sa galaxies is usually composed of Population II stars, which are old, red stars with low metal content. Further, the bulge of Sa and SBa galaxies tends to be large. In contrast, the bulges of Sc and SBc galaxies are much smaller and are composed of young, blue Population I stars. Some bulges have similar properties to those of elliptical galaxies (scaled down to lower mass and luminosity); others simply appear as higher density centers of disks, with properties similar to disk galaxies. Many bulges are thought to host a supermassive black hole at their centers. In our own galaxy, for instance, the object called Sagittarius A* is believed to be a supermassive black hole.
There are many lines of evidence for the existence of black holes in spiral galaxy centers, including the presence of active nuclei in some spiral galaxies, and dynamical measurements that find large compact central masses in galaxies such as Messier 106. Bar-shaped elongations of stars are observed in roughly two-thirds of all spiral galaxies. Their presence may be either strong or weak. In edge-on spiral (and lenticular) galaxies, the presence of the bar can sometimes be discerned by the out-of-plane X-shaped or (peanut shell)-shaped structures which typically have a maximum visibility at half the length of the in-plane bar. The bulk of the stars in a spiral galaxy are located either close to a single plane (the galactic plane) in more or less conventional circular orbits around the center of the galaxy (the Galactic Center), or in a spheroidal galactic bulge around the galactic core. However, some stars inhabit a spheroidal halo or galactic spheroid, a type of galactic halo. The orbital behaviour of these stars is disputed, but they may exhibit retrograde and/or highly inclined orbits, or not move in regular orbits at all. Halo stars may be acquired from small galaxies which fall into and merge with the spiral galaxy—for example, the Sagittarius Dwarf Spheroidal Galaxy is in the process of merging with the Milky Way and observations show that some stars in the halo of the Milky Way have been acquired from it. Unlike the galactic disc, the halo seems to be free of dust, and in further contrast, stars in the galactic halo are of Population II, much older and with much lower metallicity than their Population I cousins in the galactic disc (but similar to those in the galactic bulge). The galactic halo also contains many globular clusters. The motion of halo stars does bring them through the disc on occasion, and a number of small red dwarfs close to the Sun are thought to belong to the galactic halo, for example Kapteyn's Star and Groombridge 1830. Due to their irregular movement around the center of the galaxy, these stars often display unusually high proper motion. The oldest spiral galaxy on file is BX442. At eleven billion years old, it is more than two billion years older than any previous discovery. Researchers think the galaxy's shape is caused by the gravitational influence of a companion dwarf galaxy. Computer models based on that assumption indicate that BX442's spiral structure will last about 100 million years. The pioneer of studies of the rotation of the Galaxy and the formation of the spiral arms was Bertil Lindblad in 1925. He realized that the idea of stars arranged permanently in a spiral shape was untenable. Since the angular speed of rotation of the galactic disk varies with distance from the centre of the galaxy (via a standard solar system type of gravitational model), a radial arm (like a spoke) would quickly become curved as the galaxy rotates. The arm would, after a few galactic rotations, become increasingly curved and wind around the galaxy ever tighter. This is called the winding problem. Measurements in the late 1960s showed that the orbital velocity of stars in spiral galaxies with respect to their distance from the galactic center is indeed higher than expected from Newtonian dynamics but still cannot explain the stability of the spiral structure. Since the 1970s, there have been two leading hypotheses or models for the spiral structures of galaxies. These different hypotheses are not mutually exclusive, as they may explain different types of spiral arms.
Bertil Lindblad proposed that the arms represent regions of enhanced density (density waves) that rotate more slowly than the galaxy's stars and gas. As gas enters a density wave, it gets squeezed and makes new stars, some of which are short-lived blue stars that light the arms. The first acceptable theory for the spiral structure was devised by C. C. Lin and Frank Shu in 1964, attempting to explain the large-scale structure of spirals in terms of a small-amplitude wave propagating with fixed angular velocity, which revolves around the galaxy at a speed different from that of the galaxy's gas and stars. They suggested that the spiral arms were manifestations of spiral density waves – they assumed that the stars travel in slightly elliptical orbits, and that the orientations of their orbits are correlated, i.e., the ellipses vary in their orientation (one to another) in a smooth way with increasing distance from the galactic center. The elliptical orbits come close together in certain areas to give the effect of arms. Stars therefore do not remain forever in the position that we now see them in, but pass through the arms as they travel in their orbits. A number of hypotheses exist for star formation caused by density waves. Spiral arms appear visually brighter because they contain both young stars and more massive and luminous stars than the rest of the galaxy. As massive stars evolve far more quickly, their demise tends to leave a darker background of fainter stars immediately behind the density waves. This makes the density waves much more prominent. Spiral arms simply appear to pass through the older established stars as they travel in their galactic orbits, so the older stars do not necessarily follow the arms. As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the local higher density. The newly created stars likewise do not remain fixed in position within the spiral arms; the average space velocity returns to normal after the stars depart on the other side of the arm. Charles Francis and Erik Anderson showed from observations of motions of over 20,000 local stars (within 300 parsecs) that stars do move along spiral arms, and described how mutual gravity between stars causes orbits to align on logarithmic spirals. When the theory is applied to gas, collisions between gas clouds generate the molecular clouds in which new stars form, and evolution towards grand-design bisymmetric spirals is explained. The stellar disks of spiral galaxies follow an approximately exponential surface-brightness profile, $I(R) = I_0\, e^{-R/R_D}$, with $R_D$ being the disk scale-length and $I_0$ the central value; it is useful to define $R_{\mathrm{opt}} \equiv 3.2\, R_D$ as the size of the stellar disk, whose luminosity is $L_D = 2\pi I_0 R_D^2$. The light profiles of spiral galaxies, in terms of the coordinate $R/R_{\mathrm{opt}}$, do not depend on galaxy luminosity. Before it was understood that spiral galaxies existed outside of our Milky Way galaxy, they were often referred to as spiral nebulae. The question of whether such objects were separate galaxies independent of the Milky Way, or a type of nebula existing within our own galaxy, was the subject of the Great Debate of 1920, between Heber Curtis of Lick Observatory and Harlow Shapley of Mt. Wilson Observatory. Beginning in 1923, Edwin Hubble observed Cepheid variables in several spiral nebulae, including the so-called "Andromeda Nebula", proving that they are, in fact, entire galaxies outside our own. The term spiral nebula has since fallen out of use. The Milky Way was once considered an ordinary spiral galaxy.
Astronomers first began to suspect that the Milky Way is a barred spiral galaxy in the 1960s. Their suspicions were confirmed by Spitzer Space Telescope observations in 2005, which showed that the Milky Way's central bar is larger than previously suspected. Lin and Shu showed that this spiral pattern would persist more or less forever, even though individual stars and gas clouds are always drifting into the arms and out again.
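To put numbers to the exponential disk profile quoted above, here is a small illustrative sketch; the central brightness and scale length are made-up values, not measurements from any particular galaxy. It evaluates the surface brightness at a given radius and the total disk luminosity that follows from integrating the profile over the disk.

    import Foundation

    // Exponential disk: I(R) = I0 * exp(-R / Rd).
    // Integrating I(R) over the whole disk gives Ld = 2 * pi * I0 * Rd^2.
    func surfaceBrightness(I0: Double, Rd: Double, R: Double) -> Double {
        I0 * exp(-R / Rd)
    }

    func diskLuminosity(I0: Double, Rd: Double) -> Double {
        2.0 * Double.pi * I0 * Rd * Rd
    }

    // Hypothetical disk: central brightness 100 (arbitrary units), scale length 3 kpc.
    let I0 = 100.0, Rd = 3.0
    let Ropt = 3.2 * Rd                                  // "optical" size of the stellar disk
    print(surfaceBrightness(I0: I0, Rd: Rd, R: Ropt))    // brightness at Ropt
    print(diskLuminosity(I0: I0, Rd: Rd))                // total disk luminosity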
Swift, like all programming languages, designates certain data types that help the operating system and computer hardware allocate memory based on what is going to be stored. These data types include numeric, textual, and logical values. In a type-safe language like Swift, values are generally stored in variables, which are containers that hold data. The data type determines how big that variable container is and where the computer is going to store it for later access.

Basic Data Types

|Type|Description|
|Int|Integer whole number|
|Float|Floating point number|
|Double|Floating point number|

Integer Data Types

Integers are whole numbers such as -123. They can be either signed or unsigned whole numbers, the default being signed. Integers can be declared in several different ways:

|Type Reference|Description|Value Range|
|Int|The standard reference used for whole numbers in Swift.|Based on the platform (32-bit or 64-bit)|
|Int8|Creates an 8-bit signed integer.|-128 – 127|
|Int32|Creates a 32-bit signed integer.|-2,147,483,648 – 2,147,483,647|
|Int64|Creates a 64-bit signed integer.|-9,223,372,036,854,775,808 – 9,223,372,036,854,775,807|
|UInt|An unsigned integer that is created in the same manner as a standard Int.|Positive values only.|
|UInt8|Creates an 8-bit unsigned integer.|0 – 255|
|UInt32|Creates a 32-bit unsigned integer.|0 – 4,294,967,295|
|UInt64|Creates a 64-bit unsigned integer.|0 – 18,446,744,073,709,551,615|

    let shinyNewInteger: Int = 500
    let verySmallInteger: Int8 = 16

Floating Point Numbers

A Float and a Double are number data types that allow for decimals. A Float is a 32-bit floating-point number and a Double is a 64-bit floating-point number, so a Float has approximately half as much precision as a Double. If high precision is needed, it is best to use a Double. When a decimal variable is declared without a specified type, Swift will infer a Double as a precaution.

    let accountBalance: Float = 857.45
    let pi: Double = 3.14159265359
    let gpa = 3.7 // inferred as a Double

Strings and Characters

Strings are a collection of characters; Characters are the individual symbols that make up our languages. In Swift, the String type can be either mutable or immutable, as determined by the type of variable it's stored in: either a var, or a let for a constant. Both String and Character values are typically declared inside a set of double quotation marks, while multi-line Strings are declared with a set of triple quotation marks opening and closing the text.

    let author: String = "Edgar Allan Poe"
    let type: Character = "P"
    let theRaven: String = """
    Once upon a midnight dreary, while I pondered, weak and weary,
    Over many a quaint and curious volume of forgotten lore—
    While I nodded, nearly napping, suddenly there came a tapping,
    As of some one gently rapping, rapping at my chamber door.
    “’Tis some visitor,” I muttered, “tapping at my chamber door—
    Only this and nothing more.”
    """

Boolean Data Type

Boolean values, initialized using the Bool keyword, represent true or false. They are used in control flow and other conditional statements to process the logical decision points in the program, leading them to be referred to as logical values. They can be declared directly or by using a logical test.

    let fallingOnPavementHurts = true
    var gameOver: Bool = homeScore > 5 || awayScore > 5
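The explicit annotations above are often optional: Swift infers Int for whole-number literals and Double for decimal literals, and a Bool value can drive control flow directly. The short sketch below uses our own illustrative names (it is not part of the lesson) to show inference and a conditional in one place.

    let wholeNumber = 42             // inferred as Int
    let measurement = 98.6           // inferred as Double, the default for decimal literals
    let isAboveLimit = measurement > 99.0 || wholeNumber > 100   // Bool from a logical test

    if isAboveLimit {
        print("Above the threshold")
    } else {
        print("Within the normal range")
    }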
American Civil War The Civil War (1861-65) was fought in the United States of America by citizens loyal to the elected government (the Union) against the Confederate States of America, a group of southern secessionist states. The Union, led by President Abraham Lincoln, defeated the breakaway Confederacy, led by President Jefferson Davis, thereby preserving the union and republican government, and abolishing slavery. Outline of the war The war began because southern secessionists had come to the conclusion that their way of life was threatened by the elected Republican national majority. Prior to the war, the South had enjoyed for a time legislative majorities in both houses, routine victories in the presidential elections, and control of the Supreme Court. By 1860, because of changes brought by the market and transportation revolutions, immigration, and industrialization, Northern states had routinely maintained their electoral majorities in both houses. In 1854, the Republican Party was formed. It was pledged to halting the further expansion of slavery. And by 1860, the Republican Party, solely on the strength of the electoral power of the Northern states elected Abraham Lincoln president. With their complete loss of power in the national government to a party hostile to the institution through which white southerners derived their identity (slavery), seven states from the deep south formed the Confederate States of America and proclaimed their independence from the United States. (see the Secession Crisis) Fighting began on April 12, 1861, when Confederate forces attacked a small U.S. military installation at Fort Sumter in Charleston, South Carolina. Lincoln called for an invasion force to recapture the fort. Four more states seceded. These states were from the upper south, and they rejected Lincoln's coercion and felt that their political future lay more closely with the Confederacy than with the Union. With their inclusion, the Confederacy had eleven states in all. During the first year, the Union asserted control of the border states and established a naval blockade, as a means of economic warfare rather than stopping military reinforcements, as both sides raised large armies. In 1862 large, bloody battles began, causing massive casualties as both sides fought heroically. In September 1862, Lincoln's Emancipation Proclamation made the freeing of slaves in the South a war goal, so as to ruin the economic base of the Confederacy, despite opposition from northern Copperheads who tolerated secession and slavery. War Democrats reluctantly accepted emancipation as part of total war needed to save the Union. Emancipation ended the likelihood of intervention from Britain and France on behalf of the Confederacy. Emancipation allowed the Union to recruit 190,000 blacks (both free and ex-slave) for reinforcements, a resource that the Confederacy did not dare exploit until it was too late. In the East in 1862, Confederate general Robert E. Lee assumed command of the Army of Northern Virginia and rolled up a series of victories over the Army of the Potomac, but his best General, Thomas Jonathan "Stonewall" Jackson, was killed at the Battle of Chancellorsville in May 1863. Lee's invasion of the North was repulsed at the Battle of Gettysburg in Pennsylvania in July 1863; he barely managed to escape back to Virginia. The Union Navy captured the port of New Orleans in 1862, and Ulysses S. 
Grant seized control of the Mississippi River by defeating multiple uncoordinated Confederate armies and capturing Vicksburg, Mississippi, in July 1863, thus splitting the Confederacy. By 1864, long-term Union advantages in geography, manpower, industry, finance, political organization and transportation were overwhelming the Confederacy. Grant fought a remarkable series of bloody battles with Lee in Virginia in the summer of 1864. Lee's defensive tactics resulted in higher casualties for Grant's army, but Lee lost strategically overall as he could not replace his casualties and was forced to retreat into trenches around his capital, Richmond, Virginia. Meanwhile, in the West, William Tecumseh Sherman captured Atlanta, Georgia. Sherman's March to the Sea destroyed a hundred-mile-wide swath of Georgia. In 1865, the Confederacy collapsed after Lee surrendered to Grant at Appomattox Court House; all slaves in the Confederacy were freed by the Emancipation Proclamation. Slaves in the border states and Union-controlled parts of the South were freed by state action or by the Thirteenth Amendment. The full restoration of the Union was the work of a highly contentious postwar era known as Reconstruction (1863-1877). The war produced about 970,000 military casualties (3% of the population), including approximately 620,000 soldier deaths—two-thirds by disease, making it the deadliest war in American history in terms of American losses. Slavery and states' rights were the main causes, but the nuances continue to be debated, along with the reasons for Union victory, and even the name of the war itself. The main results of the war were the restoration and strengthening of the Union, and the end of slavery in the United States. The South became much poorer than the rest of the U.S. for a century. Causes of the War Main article: U.S. Civil War, Origins The direct cause of the Civil War was that the Union refused to allow any state to break away without permission of Congress, so understanding the cause of the war depends on understanding the causes of secession. Basically, the South became alienated from the nation, arguing that it was being treated as an inferior, in violation of the letter and the spirit of the Constitution. The maltreatment always involved issues of slavery: the North was increasingly hostile to slavery, and the South held that slavery was an integral part of the social, economic and constitutional system. A serious threat to slavery meant that the South had to be an independent nation. As the North was growing faster, and thus gaining in political power and electoral votes, Southerners realized by 1860 it was now or never to split away, and the election of Lincoln gave them reason. The inferior treatment was primarily because of slavery, which had been abolished in the North, flourished in the deep South, and was alive but weak in the border states. The new Republican party, founded in 1854, was committed to stopping the expansion of slavery, so that it would be contained and eventually die away. As Lincoln said in his 1858 "House Divided Speech", Republicans wanted to "arrest the further spread of it, and place it where the public mind shall rest in the belief that it is in the course of ultimate extinction". Much of the political battle in the 1850s focused on the expansion of slavery into the newly created territories. Both North and South assumed that if slavery could not expand it would wither and die.
Southern fears of losing control of the federal government to antislavery forces, and Northern fears that the slave power already controlled the government, brought the crisis to a head in the late 1850s. Sectional disagreements over the morality of slavery, the scope of democracy and the economic merits of free labor vs. slave plantations caused the Whig Party and "Know Nothing" parties to collapse, and new ones to arise (the Free Soil Party in 1848, the Republicans in 1854, the Constitutional Union Party in 1860). In 1860, the last remaining national political party, the Democratic Party, split along sectional lines. Other dimensions of the slavery debate included the threat that abolitionists would stir up large-scale slave revolts, as indeed was attempted by John Brown in 1859. Modernization was a factor, as the South was locked into a traditional economy with most of its investment money going into slaves and land, while the North invested in machinery, infrastructure and education. States' rights was a way to phrase the issue in Constitutional terms. Economic issues united the North and South more than they divided them. The business community opposed war. Tariff issues were debated at length, but were not a cause of secession because the South wrote the tariffs to its advantage, most recently in 1857. The overseas expansion of slavery was debated, as a brief, failed effort was made to purchase Cuba, which already had slavery. (Spain refused to sell; see the Ostend Manifesto of 1854.) Earlier, the debate over annexation of Texas in 1844-45 saw antislavery elements in the North angered as they felt the country was dragged into war for the benefit of slavery expansion. Politicians attempted numerous compromises to head off the growing threat of disunion. The Compromise of 1820 kept the equal political balance in the Senate by admitting Missouri as a slave state and Maine as a free state. The Compromise of 1850 resolved the problems created by annexation of new territory in Texas, New Mexico and California. However, it also produced a stronger Fugitive Slave Law that forced northern law officials to seize and return runaway slaves. Antislavery sentiment was excited by the runaway success of the novel and play Uncle Tom's Cabin, by Harriet Beecher Stowe, which focused on the heroic efforts of a slave to run away from cruel treatment by Simon Legree. Senator Stephen A. Douglas, the most powerful Democrat in the 1850s, had been the chief sponsor of the Compromise of 1850. In 1854 he suddenly reversed himself and proclaimed "popular sovereignty", whereby people democratically would make the basic decisions. Douglas passed the extremely controversial Kansas-Nebraska Act that opened Kansas up to settlers who would choose to make it a free state or a slave state. Instead of a flowering of democracy, the result was a bloody small-scale civil war in Kansas as both sides supported and armed their own settlers. The Republican party formed as a result, and the Democratic party was ripped apart. The polarizing effect of slavery split the largest religious denominations (the Methodist, Baptist and Presbyterian churches), while the worst cruelties of slavery (whippings, mutilations and families split apart) fueled abolitionist attacks. By 1860 Southerners were developing a sense of nationalism and apartness, and had broken most of the cultural, religious and political ties with the North. Only business ties remained, and the Union itself.
Southern separatists insisted the South had all the makings of a great, strong, rich nation. They thought "Cotton is King!"; that is, the Southern monopoly of raw cotton would force British and European industrialists to support southern independence. What they did not realize was that American nationalism was also growing in the North, and would not tolerate disruption of the Union. In 1860 neither civil rights nor voting rights for blacks were stated as goals by the North; they became important only later, during Reconstruction. The Republican party sought the long-term end of slavery, by blocking its expansion and by relying on the superiority of free labor as more productive and profitable. Immediate emancipation was the goal of only a small number of abolitionists, who had little or no real power in the 1850s, although Southerners greatly exaggerated their influence. Questions such as whether the Union was older than the states or the other way around fueled the debate over states' rights. Whether the federal government was supposed to have substantial powers or whether it was merely a voluntary federation of sovereign states added to the controversy. According to historian Kenneth M. Stampp, each section used states' rights arguments when convenient, and shifted positions when convenient. Southerners argued that states' rights meant the federal government was strictly limited and could not abridge the rights of states, and so had no power to prevent slaves from being carried into new territories. States' rights advocates also cited the Constitution's fugitive slave clause to demand federal jurisdiction over slaves who escaped into the North. The South's leading theorist John C. Calhoun regarded the territories as the "common property" of sovereign states, and said that Congress was acting merely as the "joint agents" of the states. Before 1860, all presidents (except John Quincy Adams) were either Southern or pro-South on slavery questions. Lincoln's election changed that, and the North's growing population implied northern control of future presidential elections. The South as a minority had special rights, said Calhoun, for, he explained, "Governments were formed to protect minorities, for majorities could take care of themselves". Jefferson Davis, calling on the traditions of republicanism, said the fight for "liberty" against "the tyranny of an unbridled majority" gave the southern states a right to secede. The Supreme Court decision of 1857 in Dred Scott v. Sandford tried to resolve the slavery question but instead inflamed the North and escalated tensions. Chief Justice Roger B. Taney's decision said that slaves were "so far inferior that they had no rights which the white man was bound to respect", and that slaves could be taken to free states and territories. Lincoln warned that "the next Dred Scott decision" could threaten northern states with slavery. Slavery as a cause of the war As historian Allan Nevins explained, "As the fifties wore on, an exhaustive, exacerbating and essentially futile conflict over slavery raged to the exclusion of nearly all other topics." Lincoln said in 1860, "this question of Slavery was more important than any other; indeed, so much more important has it become that no other national question can even get a hearing just at present." The plantation owners in the 1860 election generally voted for the more moderate Constitutional Union party, which rejected secession. After the election, however, most planters changed and supported secession.
Thus there was a strong correlation between the number of plantations in a region and the degree of support for secession. The states of the deep south had the greatest concentration of plantations and were the first to secede. The upper South slave states of Virginia, North Carolina, Arkansas, and Tennessee had fewer plantations and rejected secession until the Fort Sumter crisis forced them to choose sides. Border states had fewer plantations still and never seceded. In 1861 secession replaced slavery as the defining issue, as 8 slave states refused to join the original seven Confederate states until Lincoln called for volunteers to invade South Carolina in April, when 4 more joined. Lincoln made a deliberate effort to mollify the slave-owners in the border states (offering to buy their slaves for cash), to keep them from supporting the enemy. They rejected the offers, however. The North rallied in spring 1861 against secession, and the attack on the national flag at Ft. Sumter, not against slavery as such. Thus Ulysses S. Grant (who had recently owned a slave himself), rallied to the flag and raised troops to fight. By 1862, most northerners were coming to the position that slavery was so critical to the Confederacy that its abolition would speed the collapse of the rebellion. Rejection of compromise Until December 20, 1860, the political system had always successfully handled inter-regional crises. All but one crisis involved slavery, starting with debates on the three-fifths clause in the Constitutional Convention of 1787. Congress had solved the crisis over the admission of Missouri as a slave state in 1819-21, the controversy over South Carolina's nullification of the tariff in 1832, the acquisition of Texas in 1845, and the status of slavery in the territory acquired from Mexico in 1850. However, in 1854, the old Second Party System broke down after passage of the Kansas-Nebraska Act. The Whig Party disappeared, and the new Republican Party arose in its place. It was the nation's first major party with only sectional appeal and a commitment to stop the expansion of slavery. Republican leader Senator Charles Sumner was violently attacked at his desk in the Senate by Congressman Preston Brooks of South Carolina, violating the sanctuary of Congress and emphasizing the increased resort to violence. Sumner recovered and became a dominant force in the Senate during the war and Reconstruction. Open warfare in the Kansas Territory ("Bleeding Kansas"), the Dred Scott decision of 1857, John Brown's raid in 1859 and the split in the Democratic Party in 1860 polarized the nation between North and South. The election of Lincoln in 1860 was the final trigger for secession for the deep South Cotton states. During the secession crisis, many sought compromise—of these attempts, the best known was the "Crittenden Compromise"—but all failed. Historians generally agree that economic conflicts were not a major cause of the war. Economic historian Lee A. Craig wrote in 1996 that "numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War." Even Americans at the time understood how insignificant economic differences were in the secession crisis. This is evident in the exertions of numerous groups which tried during the 1860-61 winter to find a compromise and avert war. They did not turn to economic policies as a means to avert war. 
Except for slavery, which in the American context was mostly a cultural and social institution and only partially an economic institution, economics played no significant role as a cause for the Civil War. Regional economic differences The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860-61. However, Charles Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the Plantation South. Beard's critics pointed out that his image of a unified Northeast was incorrect because the region was highly diverse with many different competing economic interests. In 1860-61, most business interests in the Northeast opposed war. After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. As historian Kenneth Stampp, who abandoned Beardianism after 1950, summed up the scholarly consensus: "Most historians ... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united." Free labor vs. pro-slavery arguments Historian Eric Foner has argued that a free-labor ideology (which emphasized economic opportunity) dominated thinking in the North. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the West, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery. In a broader sense, the North was rapidly modernizing in a manner deeply threatening to the South. The North was not only becoming more economically powerful, but it was also developing new modern, urban values. As James McPherson argues, "The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future." The South, on the other hand, was clinging more and more to the old, rural traditional values of the Jeffersonian yeoman. And while the slave-owning elite were, relatively speaking, the most modern people in the South, the poor and middling whites who owned few or no slaves were the most beholden to the traditionalist values. This poor and middle class, called the Plain Folk of the Old South, supported secession and war because they supported states' rights and feared the impact of freed slaves on their own prospects. Before Lincoln took office, seven states declared their secession from the Union, and established a Southern government, the Confederate States of America, on February 9, 1861.
They took control of federal forts and other properties within their boundaries, with little resistance from President Buchanan, whose term ended on March 3, 1861. Buchanan asserted, "The South has no right to secede, but I have no power to prevent them." One quarter of the U.S. Army—the entire garrison in Texas—was surrendered to state forces by its commanding general, David E. Twiggs, who then joined the Confederacy. Secession allowed the North to pass bills for projects that had been blocked by Southern Senators before the war, including the Morrill Tariff, land grant colleges (the Morrill Act), a Homestead Act, a trans-continental railroad (the Pacific Railway Acts) and the National Banking Acts. Seven Deep South cotton states seceded by February 1861, starting with South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas. These seven states formed the Confederate States of America (February 4, 1861), with Jefferson Davis as president, and a governmental structure closely modeled on the U.S. Constitution. In April and May 1861, four more slave states seceded and joined the Confederacy: Arkansas, Tennessee, North Carolina and Virginia. Virginia was split in two, with the larger eastern portion of that state seceding to the Confederacy and the northwestern part breaking away to join the Union as the new state of West Virginia in 1863. The Union states There were 23 states that remained loyal to the Union during the war: California, Connecticut, Delaware (slave), Illinois, Indiana, Iowa, Kansas, Kentucky (slave), Maine, Maryland (slave), Massachusetts, Michigan, Minnesota, Missouri (slave), New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, and Wisconsin. During the war, Nevada and West Virginia (slave) joined as new states of the Union. Most of Tennessee (slave) and Louisiana (slave) came under Union control early in the war. The Colorado, Dakota, Nebraska, Nevada, New Mexico (slave), Utah, and Washington territories fought on the Union side. Several slave-holding Native American tribes supported the Confederacy, giving the Indian territory (now Oklahoma) its own full-scale bloody civil war. The Border states in the Union were West Virginia (slave) (which broke away from Virginia and became a separate state), and four of the five northernmost slave states (Maryland, Delaware, Missouri, and Kentucky). Maryland had numerous pro-Confederate officials who tolerated anti-Union rioting in Baltimore and the burning of bridges. Lincoln responded with martial law and called for troops. Militia units that had been drilling in the North rushed toward Washington and Baltimore. Before the Confederate government realized what was happening, Lincoln had seized firm control of Maryland (and the separate District of Columbia), by arresting Confederate leaders and holding them without trial for several months. In Missouri, an elected convention on secession voted decisively to remain within the Union. When pro-Confederate Governor Claiborne F. Jackson called out the state militia, it was attacked by federal forces under General Nathaniel Lyon, who chased the governor and the rest of the State Guard to the southwestern corner of the state. In the resulting vacuum the convention on secession reconvened and took power as the Unionist provisional government of Missouri. Kentucky did not secede; for a time, it declared itself neutral.
However, the Confederates broke the neutrality by seizing the town of Columbus, Kentucky in September 1861. That turned opinion against the Confederacy, and the state reaffirmed its loyal status, while trying to maintain slavery. During a brief invasion by Confederate forces, Confederate sympathizers organized a secession convention, inaugurated a governor, and gained recognition from the Confederacy. The rebel government soon went into exile and never controlled the state. Counties in the northwestern portion of Virginia opposed secession and formed a pro-Union government shortly after Richmond's secession in 1861. Unlike the remainder of Virginia, residents in this mountainous region were poor subsistence farmers. These counties were admitted to the Union in 1863 as West Virginia. Similar secessions appeared in East Tennessee, but were suppressed by the Confederacy. Jefferson Davis arrested over 3,000 men suspected of being loyal to the Union and held them without trial. Noel Fisher describes the internal civil war and partisan violence that raged throughout the border states, especially in the mountain areas: - Loyalists, secessionists, deserters, and men with little loyalty to either side formed organized bands, fought each other as well as occupying troops, terrorized the population, and spread fear, chaos, and destruction. Military forces stationed in the Appalachian regions, whether regular troops or home guards, frequently resorted to extreme methods, including executing partisans summarily, destroying the homes of suspected bushwhackers, and torturing families to gain information. This epidemic of violence created a widespread sense of insecurity, forced hundreds of residents to flee, and contributed to the region's economic distress, demoralization, and division. Some 10,000 military engagements took place during the war, 40% of them in Virginia and Tennessee. Separate articles deal with every major battle and some minor ones. This article only gives the broad outline. The war begins Lincoln's victory in the election triggered South Carolina's declaration of secession from the Union. By February 1861, six more Southern states made similar declarations. On February 7, the seven states adopted a provisional constitution for the Confederate States of America and established their temporary capital at Montgomery, Alabama. A pre-war February peace conference of 1861 met in Washington in a failed attempt at resolving the crisis. The remaining eight slave states rejected pleas to join the Confederacy. Confederate forces seized all but three Federal forts within their boundaries (they did not take Fort Sumter); President Buchanan protested but made no military response aside from a failed attempt to resupply Fort Sumter via the ship Star of the West (the ship was fired upon by Citadel cadets), and no serious military preparations. However, governors in Massachusetts, New York, and Pennsylvania quietly began buying weapons and training militia units. On March 4 1861, Abraham Lincoln was sworn in as President. In his inaugural address, he argued that the Constitution was a more perfect union than the earlier Articles of Confederation and Perpetual Union, that it was a binding contract, and called any secession "legally void". He stated he had no intent to invade Southern states, nor did he intend to end slavery where it existed, but that he would use force to maintain possession of federal property. His speech closed with a plea for restoration of the bonds of union. 
The South sent delegations to Washington and offered to pay for the federal properties and enter into a peace treaty with the United States. Lincoln rejected any negotiations with Confederate agents on the grounds that the Confederacy was not a legitimate government, and that making any treaty with it would be tantamount to recognizing it as a sovereign government. Fort Sumter in Charleston, South Carolina, was one of the three remaining Union-held forts in the Confederacy, and Lincoln was determined to hold it. Under orders from Confederate President Jefferson Davis, Confederate soldiers under General P. G. T. Beauregard bombarded the fort with artillery on April 12, forcing the fort's capitulation.

Northerners rallied behind Lincoln's call for all of the states to send troops to recapture the forts and to preserve the Union. With the scale of the rebellion apparently small so far, Lincoln called for 75,000 volunteers for 90 days. For months before that, several Northern governors had secretly readied their state militias, built up stocks of weapons, and drawn up emergency plans; they began to move forces to Washington the next day. The Confederates at the state and national level had neglected to make preparations for war.

Four states in the upper South (Tennessee, Arkansas, North Carolina, and Virginia), which had repeatedly rejected Confederate overtures, now refused to send forces against their neighbors, declared their secession, and joined the Confederacy. To reward Virginia, the Confederate capital was moved to Richmond. The city became the symbol of the Confederacy; if it fell, the new nation would lose legitimacy. Yet Richmond was in a highly vulnerable location at the end of a tortuous supply line. Although Richmond was heavily fortified, supplies for the city were reduced by Sherman's capture of Atlanta and cut off almost entirely when Grant besieged Petersburg and the railroads that supplied the Southern capital.

Winfield Scott, the commanding general of the U.S. Army, devised the Anaconda Plan to win the war with as little bloodshed as possible. His idea was that a Union blockade of the main ports would weaken the Confederate economy; then the capture of the Mississippi River would split the South. Lincoln adopted the plan but overruled Scott's warnings against an immediate attack on Richmond, for Lincoln had to capture Richmond to destroy the Confederacy's legitimacy as a nation.

In May 1861, Lincoln proclaimed the Union blockade of all Southern ports, which immediately shut down almost all international shipping to the Confederate ports. Violators risked seizure of ship and cargo, and insurance probably would not cover the losses. Almost no large ships were owned by Confederate interests. By late 1861, the blockade shut down most local port-to-port traffic as well. Although few naval battles were fought and few men were killed, the blockade shut down "King Cotton" and ruined the Southern economy. Some British investors built small, very fast "blockade runners" that brought in military supplies (and civilian luxuries) from Cuba and the Bahamas and took out high-priced cotton and tobacco. When the U.S. Navy did capture blockade runners, the ships and cargo were sold and the proceeds given to the Union sailors; the British crews were released.
In March 1862 the Confederate navy sent its ironclad CSS Virginia (the rebuilt USS Merrimack) to attack the blockade; it seemed unstoppable, but the next day it had to fight the new Union warship USS Monitor in the "Battle of the Ironclads". The battle was a strategic Union victory, for the blockade was sustained, and the Union built many copies of the Monitor while the Confederacy scuttled its own ship and lacked the technology to build more. The Confederacy turned to Britain to purchase warships, which Union diplomats tried to stop. The Union won a series of naval battles in the rivers and harbors, taking control of the excellent waterway system to move its forces at will, while the Confederates had to march overland. Union victory at Fort Fisher in January 1865 closed the last useful rebel port and virtually ended blockade running.

As the blockade became increasingly effective, the South suffered a shortage of almost everything, including food. Added to the effects of foraging by Northern armies and impressment of crops by Confederate armies, the result was hyperinflation and even bread riots.

Eastern Theater 1861–1863

The first major battle was a Confederate victory at the First Battle of Bull Run, or First Manassas, on July 21, 1861. It was here that Confederate General Thomas J. Jackson received the nickname "Stonewall" because he stood like a stone wall against Union troops. Alarmed at the loss, and in an attempt to prevent more border slave states from leaving the Union, the U.S. Congress passed the Crittenden-Johnson Resolution on July 25, 1861, which stated that the war was being fought to preserve the Union and not to end slavery.

Major General George B. McClellan took command of the Union Army of the Potomac on July 26 (he was briefly general-in-chief of all the Union armies, but was subsequently relieved of that post in favor of Henry W. Halleck), and the war began in earnest in 1862. Upon the strong urging of President Lincoln to begin offensive operations, McClellan attacked Virginia in the spring of 1862 by way of the peninsula between the York River and James River, southeast of Richmond. Although McClellan's army reached the gates of Richmond in the Peninsula Campaign, Confederate General Joseph E. Johnston halted his advance at the Battle of Seven Pines; then General Robert E. Lee defeated him in the Seven Days Battles and forced his retreat. McClellan resisted Halleck's orders to send reinforcements to John Pope's Union Army of Virginia, which made it easier for Lee's Confederates to defeat a combined enemy force twice their number. Pope threw his troops piecemeal at the enemy, the Union's Irvin McDowell and Fitz John Porter did little, and James Longstreet's Confederate troops reinforced Stonewall Jackson's. The Northern Virginia Campaign, which included the Second Battle of Bull Run, ended in yet another victory for the South.

Antietam to Chancellorsville

Emboldened by Second Bull Run, the Confederacy made its first invasion of the North when General Lee led 45,000 men of the Army of Northern Virginia across the Potomac River into Maryland on September 5. Lincoln then restored Pope's troops to McClellan. McClellan and Lee fought at the Battle of Antietam near Sharpsburg, Maryland, on September 17, 1862, the bloodiest single day in American military history. Lee's army, almost trapped, managed to escape and return to Virginia.
Antietam was a strategic Union victory because it halted Lee's invasion of the North and provided an opportunity for Lincoln to announce his Emancipation Proclamation. When the cautious McClellan failed to follow up on Antietam, he was replaced by Maj. Gen. Ambrose Burnside. Burnside was soon defeated at the Battle of Fredericksburg on December 13, 1862, when over twelve thousand Union soldiers were killed or wounded. After the battle, Burnside was replaced by Maj. Gen. Joseph Hooker. Hooker, too, proved unable to defeat Lee's army; despite outnumbering the Confederates by more than two to one, he was humiliated in the Battle of Chancellorsville in May 1863. Hooker was replaced by Maj. Gen. George Meade during Lee's second invasion of the North, in June. Meade defeated Lee at the Battle of Gettysburg (July 1 to July 3, 1863), the bloodiest battle in United States history. Gettysburg is considered the turning point of the American Civil War by some historians; others argue the South was doomed after Antietam in 1862. Pickett's Charge on July 3 is often recalled as the high-water mark of the Confederacy, not just because its failure signaled the end of Lee's plan to pressure Washington from the north, but also because Vicksburg, Mississippi, the key stronghold for control of the Mississippi River, fell the following day. Lee's army suffered some 28,000 casualties (versus Meade's 23,000). However, Lincoln was angry that Meade failed to intercept Lee's retreat, and after Meade's inconclusive fall campaign, Lincoln decided to turn to the Western Theater for new leadership.

Western Theater 1861–1863

While the Confederate forces had many successes in the Eastern Theater, they were often defeated in the West. They were driven from Missouri early in the war as a result of the Battle of Pea Ridge. Leonidas Polk's invasion of Columbus, Kentucky ended Kentucky's policy of neutrality and turned that state against the Confederacy. Nashville, Tennessee, fell to the Union early in 1862. Most of the Mississippi was opened with the taking of Island No. 10 and New Madrid, Missouri, and then Memphis, Tennessee. The Union Navy captured New Orleans without a major fight in May 1862, allowing Union forces to begin moving up the Mississippi as well. Only the fortress city of Vicksburg, Mississippi, prevented unchallenged Union control of the entire river.

General Braxton Bragg's second Confederate invasion of Kentucky ended with a meaningless victory over Major General Don Carlos Buell at the Battle of Perryville, although Bragg was forced to end his attempt at liberating Kentucky and retreat due to lack of support for the Confederacy in that state. Bragg was narrowly defeated by Major General William Rosecrans at the Battle of Stones River in Tennessee. The one clear Confederate victory in the West was the Battle of Chickamauga, where Bragg, reinforced by Lt. Gen. James Longstreet's corps (from Lee's army in the east), defeated Rosecrans despite the heroic defensive stand of Major General George Henry Thomas. Rosecrans retreated to Chattanooga, which Bragg then besieged.

The Union's key strategist and tactician in the West was Maj. Gen. Ulysses S. Grant, who won victories at Forts Henry and Donelson, by which the Union seized control of the Tennessee and Cumberland Rivers; at the Battle of Shiloh; and at the Siege of Vicksburg, which cemented Union control of the Mississippi River and is considered one of the "turning points" of the war.
Grant marched to the relief of Rosecrans and defeated Bragg at the Third Battle of Chattanooga, driving Confederate forces out of Tennessee and opening a route to Atlanta and the heart of the Confederacy.

Trans-Mississippi Theater 1861–1865

Although geographically isolated from the battles to the east, a few small-scale military actions took place west of the Mississippi River. Confederate incursions into Arizona and New Mexico were repulsed in 1862. Guerrilla activity turned much of Missouri and the Indian Territory (Oklahoma) into a battleground. Late in the war, the Union's Red River Campaign was a failure. Texas remained in Confederate hands throughout the war but was cut off from the rest of the Confederacy after the capture of Vicksburg in 1863 gave the Union control of the Mississippi River.

End of the war 1864–1865

At the beginning of 1864, Lincoln made Grant commander of all Union armies. Grant made his headquarters with the Army of the Potomac and put Major General William Tecumseh Sherman in command of most of the western armies. Grant understood the concept of "total war" and believed, along with Lincoln and Sherman, that only the utter defeat of Confederate forces and their economic base would end the war. Grant devised a coordinated strategy that would strike at the entire Confederacy from multiple directions: Generals George Meade and Benjamin Butler were ordered to move against Lee near Richmond; General Franz Sigel (and later Philip Sheridan) was to attack the Shenandoah Valley; General Sherman was to capture Atlanta and march to the sea (the Atlantic Ocean); Generals George Crook and William W. Averell were to operate against railroad supply lines in West Virginia; and Maj. Gen. Nathaniel P. Banks was to capture Mobile, Alabama.

Union forces in the East attempted to maneuver past Lee and fought several battles during that phase ("Grant's Overland Campaign") of the Eastern campaign. Grant's battles of attrition at the Wilderness, Spotsylvania, and Cold Harbor resulted in heavy Union losses but forced Lee's Confederates to fall back again and again. An attempt to outflank Lee from the south failed under Butler, who was trapped inside the Bermuda Hundred river bend. Grant was tenacious and, despite astonishing losses (over 66,000 casualties in six weeks), kept pressing Lee's Army of Northern Virginia back to Richmond. He pinned down the Confederate army in the Siege of Petersburg, where the two armies engaged in trench warfare for over nine months.

Grant finally found a commander, General Philip Sheridan, aggressive enough to prevail in the Valley Campaigns of 1864. Sheridan defeated Maj. Gen. Jubal A. Early in a series of battles, including a final decisive defeat at the Battle of Cedar Creek. Sheridan then proceeded to destroy the agricultural base of the Shenandoah Valley, a strategy similar to the tactics Sherman later employed in Georgia.

Meanwhile, Sherman marched from Chattanooga to Atlanta, defeating Confederate Generals Joseph E. Johnston and John Bell Hood along the way. The fall of Atlanta on September 2, 1864, was a significant factor in the reelection of Lincoln as president. Hood left the Atlanta area to menace Sherman's supply lines and invade Tennessee in the Franklin-Nashville Campaign. Union Maj. Gen. John M. Schofield defeated Hood at the Battle of Franklin, and George H. Thomas dealt Hood a massive defeat at the Battle of Nashville, effectively destroying Hood's army.
Leaving Atlanta and its base of supplies, Sherman's army marched toward an unknown destination, laying waste to about 20% of the farms in Georgia in its "March to the Sea". Sherman reached the Atlantic Ocean at Savannah, Georgia in December 1864. His army was followed by thousands of freed slaves; there were no major battles along the March. Sherman then turned north through South Carolina and North Carolina to approach the Confederate Virginia lines from the south, increasing the pressure on Lee's army.

Lee's army, thinned by desertion and casualties, was now much smaller than Grant's. Union forces won a decisive victory at the Battle of Five Forks on April 1, forcing Lee to evacuate Petersburg and Richmond. The Confederate capital fell to the Union XXV Corps, composed of black troops. The remaining Confederate units fled west; after a defeat at Sayler's Creek, it became clear to Lee that continued fighting was hopeless. Lee surrendered his Army of Northern Virginia on April 9, 1865, at Appomattox Court House. As a mark of Grant's respect and in anticipation of folding the Confederacy back into the Union with dignity and peace, Lee's men were allowed to keep their horses and side arms. Johnston surrendered his troops to Sherman on April 26, 1865, in Durham, North Carolina. One after another the Confederate units surrendered; there was no guerrilla warfare, but many Confederate leaders were allowed to escape the country.

Slavery during the war

At the beginning of the war, some Union commanders thought they were supposed to return escaped slaves to their masters. By 1862, when it became clear that this would be a long war, the question of what to do about slavery became more general. The Southern economy and military effort depended on slave labor, and it began to seem unreasonable to protect slavery while blockading Southern commerce and destroying Southern production. As one Congressman put it, the slaves "…cannot be neutral. As laborers, if not as soldiers, they will be allies of the rebels, or of the Union." The same Congressman—and his fellow Radical Republicans—put pressure on Lincoln to rapidly emancipate the slaves, whereas conservative Republicans came to accept gradual, compensated emancipation and colonization. In 1861 Lincoln expressed the fear that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game."

At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Fremont (in Missouri) and David Hunter (in the South Carolina Sea Islands) in order to keep the loyalty of the border states and the War Democrats. Lincoln then tried to persuade the border states to accept his plan of gradual, compensated emancipation and voluntary colonization, while warning them that stronger measures would be needed if the moderate approach was rejected. Only the District of Columbia accepted Lincoln's gradual plan, and Lincoln issued his final Emancipation Proclamation on January 1, 1863. In his letter to Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong … And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling ... I claim not to have controlled events, but confess plainly that events have controlled me."
The Emancipation Proclamation, announced in September 1862 and put into effect on January 1, 1863, greatly reduced the Confederacy's hope of getting aid from Britain or France. Lincoln's moderate approach succeeded in getting the border states, War Democrats and emancipated slaves fighting on the same side for the Union. The Union-controlled border states (Kentucky, Missouri, Maryland, Delaware and West Virginia) were not covered by the Emancipation Proclamation; all abolished slavery on their own, except Kentucky and Delaware. The great majority of the 4 million slaves were freed by the Emancipation Proclamation as Union armies moved south. The 13th Amendment, ratified December 6, 1865, finally freed the remaining slaves, some 40,000 of them in Kentucky.

Threat of international intervention

Entry into the war by Britain and France on behalf of the Confederacy would have greatly increased the South's chances of winning independence from the Union. The Union, under Lincoln and Secretary of State William Henry Seward, worked to block this and threatened war if any country officially recognized the existence of the Confederate States of America (none ever did). In 1861, Southerners voluntarily embargoed cotton shipments, hoping to start an economic depression in Europe that would force Britain to enter the war in order to get cotton. Cotton diplomacy proved a failure, as Europe had a surplus of cotton, while the 1860-62 crop failures in Europe made the North's grain exports of critical importance. It was said that "King Corn was more powerful than King Cotton", as US grain went from a quarter of the British import trade to almost half. When Britain did face a cotton shortage in 1862, it was temporary, as lost imports were replaced by sales from the U.S. (which purchased cotton from compliant Southern planters) and by increased production in Egypt and India. Meanwhile, the war created employment in Britain for arms makers, iron workers, and shipbuilders.

Charles Francis Adams proved particularly adept as minister to Britain for the Union, and Britain was reluctant to boldly challenge the Union's blockade. Independent British maritime interests built and operated highly profitable blockade runners — commercial ships flying the British flag and carrying supplies to the Confederacy by slipping through the blockade. The officers and crews were British, and when captured they were released. The Confederacy purchased several warships from commercial shipbuilders in Britain; the most famous, the CSS Alabama, did considerable damage and led to serious postwar disputes. However, public opinion against slavery created a political liability for European politicians, especially in Britain.

War loomed in late 1861 between the U.S. and Britain over the Trent Affair, when the U.S. Navy violated international law by boarding a British mail steamer to seize two Confederate diplomats, James Mason and John Slidell. However, London and Washington were able to smooth over the crisis after Lincoln released the two. In 1862, the British considered mediation, though even such an offer would have risked war with the U.S. The Union victory in the Battle of Antietam caused Lord Palmerston to delay this decision, and the Emancipation Proclamation made direct support of the Confederacy and slavery politically impossible in Britain. Despite some sympathy for the Confederacy, France's own intervention in Mexico ultimately deterred it from war with the Union. Confederate offers late in the war to end slavery in return for diplomatic recognition were not seriously considered by London or Paris.
The war produced about 970,000 military casualties (3% of the population), including approximately 620,000 soldier deaths—two-thirds of them from disease. The 10,500 battles and engagements produced about 1.1 million killed and wounded. The Union army and navy lost 110,100 killed in action (including mortally wounded who died in hospitals) and another 224,580 who died of disease. The Confederate army lost 94,000 in battle and another 164,000 who died of disease. Official counts of the wounded are far too low, at 275,000 for the Union and 194,000 for the Confederacy. The number of civilian deaths is unknown.

Most of the war was fought in Virginia and Tennessee, but every Southern state was affected, as well as Maryland, West Virginia, Kentucky, Missouri, and the Indian Territory; Pennsylvania was the only Northern state to be the scene of major action, during the Gettysburg campaign. In the Confederacy there was little military action in Texas and Florida. Of 645 counties in 9 Confederate states (excluding Texas and Florida), there was Union military action in 56% of them, containing 63% of the whites and 64% of the slaves in 1860; however, by the time the action took place, some people had fled to safer areas, so the exact population exposed to war is unknown.

The Confederacy in 1861 had 297 towns and cities with 835,000 people; of these, 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated their population at the time they were invaded. The number of people who lived in the destroyed towns represented just over 1% of the Confederacy's population. In addition, 45 courthouses were burned (out of 830).

The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870 it had fallen by 40%, to $48 million. Many old tools had broken through heavy use and could not be replaced; even repairs were difficult.

The economic calamity suffered by the South during the war affected every family. Except for land, most assets and investments had vanished with slavery, but debts were left behind. Worst of all were the human deaths and amputations. Most farms were intact, but most had lost their horses, mules and cattle; fences and barns were in disrepair. Prices for cotton had plunged. The rebuilding would take years and require outside investment because the devastation was so thorough.

One historian has summarized the collapse of the transportation infrastructure needed for economic recovery:

- One of the greatest calamities which confronted Southerners was the havoc wrought on the transportation system. Roads were impassable or nonexistent, and bridges were destroyed or washed away. The important river traffic was at a standstill: levees were broken, channels were blocked, the few steamboats which had not been captured or destroyed were in a state of disrepair, wharves had decayed or were missing, and trained personnel were dead or dispersed. Horses, mules, oxen, carriages, wagons, and carts had nearly all fallen prey at one time or another to the contending armies. The railroads were paralyzed, with most of the companies bankrupt. These lines had been the special target of the enemy.
On one stretch of 114 miles in Alabama, every bridge and trestle was destroyed, cross-ties rotten, buildings burned, water-tanks gone, ditches filled up, and tracks grown up in weeds and bushes. . . . Communication centers like Columbia and Atlanta were in ruins; shops and foundries were wrecked or in disrepair. Even those areas bypassed by battle had been pirated for equipment needed on the battlefront, and the wear and tear of wartime usage without adequate repairs or replacements reduced all to a state of disintegration.

Railroad mileage was of course located mostly in rural areas. The war followed the rails, and over two-thirds of the South's rails, bridges, rail yards, repair shops and rolling stock were in areas reached by Union armies, which systematically destroyed what they could. The South had 9,400 miles of track, and 6,500 miles of it lay in areas reached by the Union armies. About 4,400 miles were in areas where Sherman and other Union generals adopted a policy of systematic destruction of the rail system. Even in untouched areas, the lack of maintenance and repair, the absence of new equipment, the heavy over-use, and the deliberate movement of equipment by the Confederates from remote areas to the war zone guaranteed the system would be virtually ruined at war's end.

Analysis of the outcome

Since the war's end, historians have tried to devise scenarios whereby the South could have won independence; nearly every such scenario requires military intervention by Britain. Absent that intervention, experts argue that the Union held an insurmountable advantage in terms of industrial strength, population, and the determination to win. Confederate victories on the battlefield, they argue, could only delay defeat. Southern historian Shelby Foote expressed this view succinctly: "I think that the North fought that war with one hand behind its back.… If there had been more Southern victories, and a lot more, the North simply would have brought that other hand out from behind its back. I don't think the South ever had a chance to win that War."

The Confederacy's strategy was to rely on war-weariness in the North and a victory by the Copperheads. However, after Atlanta fell and Lincoln defeated McClellan in a landslide in the election of 1864, those slim hopes evaporated. At that point, Lincoln had succeeded in getting the support of the border states and War Democrats, and had kept Britain and France neutral. By defeating the Democrats and McClellan, he also defeated the Copperheads and their peace platform. Lincoln had now found military leaders like Grant and Sherman who would press the Union's numerical advantage in battle over the Confederate armies. Generals who did not shy away from bloodshed won the war, and from the end of 1864 onward there was no hope for the South. Lincoln offered peace terms to top Confederate officials in February 1865, involving reunion and purchase of the slaves for cash; the Confederates insisted on independence and fought to the bitter end.

The goals were not symmetric. To win independence, the South had to convince the North it could not win, but the South did not have to invade the North. To restore the Union, the North had to conquer and occupy vast stretches of territory. In the short run (a matter of months), the two sides were evenly matched. But in the long run (a matter of years), the North had advantages that increasingly came into play, while it prevented the South from gaining diplomatic recognition in Europe.
Also important were Lincoln's eloquence in rationalizing the national purpose and his skill in keeping the border states committed to the Union cause. Although Lincoln's approach to emancipation was slow, the Emancipation Proclamation was an effective use of the President's war powers.

Long-term economic factors

The more industrialized economy of the North aided in the production of arms, munitions and supplies, as well as in finance and transportation. These advantages widened rapidly during the war, as the Northern economy grew while Confederate territory shrank and its economy weakened. The Union population was 22 million and the South's 9 million in 1861; the Southern population included more than 3.5 million slaves and about 5.5 million whites, leaving the South's white population outnumbered by about four to one. The disparity grew as the Union controlled more and more Southern territory with garrisons and cut off the trans-Mississippi part of the Confederacy. The Union at the start controlled over 80% of the shipyards, steamships, river boats, and the Navy, and it augmented these with a massive shipbuilding program. This enabled the Union to control the river systems and to blockade the entire Southern coastline. Excellent railroad links between Union cities allowed for the quick and cheap movement of troops and supplies. Transportation was much slower and more difficult in the South, which was unable to augment its much smaller rail system, repair damage, or even perform routine maintenance.

Logistics and supply

The Confederacy never had enough supplies, and its situation grew steadily worse, while Union forces typically had enough food, supplies, ammunition and weapons. The Union supply system, even as it penetrated deeper into the South, maintained its efficiency. Union quartermasters were responsible for most of the $3 billion spent for the war. They operated out of sixteen major depots, which formed the basis of the system of procurement and supply throughout the war. As the war expanded, operation of these depots evolved into a complex set of government and privately operated organizations that included both the manufacture of goods in government-operated factories and the purchase of goods and services through contracts supervised by the quartermasters. At its peak, this huge operation of supplying the war machine accounted for more than 90% of all expenditures. The quartermasters were thus at the head of a vast operation that involved, in addition to their own employees, state officials who equipped many of the army units organized in the first year of the war; contractors seeking to sell directly to the army; middlemen acting as agents between the army and various large and small providers; and representatives of labor groups, such as seamstresses and ironworkers, concerned about the exploitation of individuals working for government suppliers or in government factories. The system was closely watched by congressmen anxious to see that their constituents were treated "fairly" in the distribution of contracts.

Political and diplomatic factors

The failure of Davis to maintain positive and productive relationships with state governors (especially Governor Joseph E. Brown of Georgia and Governor Zebulon Vance of North Carolina) damaged his ability to draw on regional resources. The founding principle of the Confederacy was states' rights, so every effort by the new government to get the states to act in unison ran up against the Confederacy's own founding premise.
A strong party system enabled the Republicans to mobilize soldiers and support at the grass roots, even when the war became unpopular; the Confederacy deliberately did not use parties. The failure to win diplomatic or military support from any foreign powers cut the Confederacy off from access to markets and to most imports. Its "King Cotton" misperception of the world economy led to bad diplomacy, such as the refusal to ship cotton before the blockade started.

Strategically, the relocation of the capital to Richmond tied Lee to a highly exposed position at the end of supply lines. Loss of its national capital was unthinkable for the Confederacy, for it would lose legitimacy as an independent nation. Washington was equally vulnerable, but if it had been captured, the Union would not have collapsed.

The Union devoted much more of its resources to medical needs, thereby overcoming the unhealthy disease environment that sickened (and killed) more soldiers than combat did, improving morale, and returning more men to duty. The Confederacy's tactic of invading the North (Antietam 1862, Gettysburg 1863, Nashville 1864) drained limited manpower, making it much harder for the South to replace its losses.

Lincoln discarded generals like George B. McClellan who would not fight; Davis, on the other hand, kept Braxton Bragg even after two retreats. The Confederacy never had a plan to deal with the blockade, and Davis failed to respond in a coordinated fashion to serious threats (such as Grant's campaign against Vicksburg in 1863, in the face of which he allowed Lee to invade Pennsylvania).

The Emancipation Proclamation enabled African-Americans, both free blacks and escaped slaves, to join the Union Army. About 190,000 volunteered, further enhancing the numerical advantage the Union armies enjoyed over the Confederates, who did not dare tap the equivalent manpower source for fear of fundamentally undermining the legitimacy of slavery. Black Union soldiers were mostly used in garrison duty, but they fought in several battles, such as the Battle of the Crater (1864) and the Battle of Nashville (1864). There was bad blood between Confederates and black soldiers, with no quarter given on either side. At Fort Pillow on April 12, 1864, Confederate units under Maj. Gen. Nathan Bedford Forrest massacred black soldiers attempting to surrender, which further inflamed passions.

Northern leaders agreed that victory would require more than the end of fighting. It had to encompass the two war goals: secession had to be totally repudiated, and all forms of slavery had to be eliminated. They disagreed sharply on the criteria for these goals, on how much federal control should be imposed on the South, and on the process by which Southern states should be reintegrated into the Union. Reconstruction, which began early in the war and ended in 1877, involved a complex and rapidly changing series of federal and state policies. The long-term result came in the three "Civil War" amendments to the Constitution: the XIII, which abolished slavery; the XIV, which extended federal legal protections to citizens regardless of race; and the XV, which abolished racial restrictions on voting. Reconstruction ended in the different states at different times, the last three by the Compromise of 1877.

The memory of the Civil War, in all parts of the country, overshadowed the American Revolution as a defining event. Every town built its war monument to memorialize its soldiers, especially the ones who died.
The process of reconciliation between the regions was slowed by Reconstruction, but resumed after 1877 and was largely completed by 1898, when the Southern states enthusiastically supported the war against Spain, and when joint Union-Confederate reunions were regularly held at Gettysburg and other battlefields. The memoir literature was largely free of rancor and instead hailed the courage and tenacity of the other side.

- For a century afterwards Southerners called it the "war between the states." The official Union name was the "War of the Rebellion."
- In reality fewer than 1,000 slaves a year succeeded in escaping the South, out of 4 million slaves.
- Stampp, Causes of the Civil War, p. 59
- McPherson, Battle Cry, p. 57
- Allan Nevins, Ordeal of the Union: Fruits of Manifest Destiny, 1847-1852, 1:155
- Jefferson Davis, "Second Inaugural Address," Feb. 22, 1862, in Dunbar Rowland, ed., Jefferson Davis, Constitutionalist, 5:198-203.
- David Potter, The Impending Crisis, p. 275
- First Lincoln-Douglas Debate at Ottawa, Illinois, August 21, 1858
- Nevins, Fruits of Manifest Destiny, 1847-1852, p. 163
- Abraham Lincoln, Speech at New Haven, Conn., March 6, 1860
- McPherson, Battle Cry of Freedom, pp. 242, 255, 282-83. Maps on p. 101 (The Southern Economy) and p. 236 (The Progress of Secession) are also relevant.
- William E. Gienapp, "The Crisis of American Democracy: The Political System and the Coming of the Civil War," in Boritt, ed., Why the Civil War Came, pp. 79-123
- Lee A. Craig in Woodworth, ed., The American Civil War: A Handbook of Literature and Research (1996), p. 505.
- Woodworth, ed., The American Civil War: A Handbook of Literature and Research (1996), pp. 145, 151, 505, 512, 554, 557, and 684; Richard Hofstadter, The Progressive Historians: Turner, Beard, Parrington (1969); for one dissenter, see Marc Egnal, "The Beards Were Right: Parties in the North, 1840-1860," Civil War History 47, no. 1 (2001): 30-56.
- Kenneth M. Stampp, The Imperiled Union: Essays on the Background of the Civil War (1981), p. 198. Here is the full passage: Most historians ... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system .... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict.
- James McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question," Civil War History 50, no. 4 (December 2004), p. 421.
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War," The American Historical Review 44, no. 1 (1938), pp. 50-55; full text in JSTOR
- John C. Calhoun, "Slavery a Positive Good," February 6, 1837.
- James McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question," Civil War History 29 (Sept. 1983)
- J. Mills Thornton III, Politics and Power in a Slave Society: Alabama, 1800-1860 (1978)
- J. Mills Thornton III, Politics and Power in a Slave Society: Alabama, 1800-1860 (1978); Samuel C. Hyde Jr., "Plain Folk Reconsidered: Historiographical Ambiguity in Search of Definition," Journal of Southern History 71, no. 4 (November 2005).
- The Union recognized a rump Virginia government headed by Governor Francis H. Pierpont; it approved the secession of West Virginia.
- McPherson, Battle Cry, pp. 284-287
- Mark Neely, Confederate Bastille: Jefferson Davis and Civil Liberties (1993), pp. 10-11
- Noel Fisher, "Feelin' Mighty Southern: Recent Scholarship on Southern Appalachia in the Civil War," Civil War History 47, no. 4 (2001), pp. 334+.
- Gabor Boritt, ed., War Comes Again (1995), p. 247
- Lincoln, First Inaugural Address, March 4, 1861
- See the account at
- McPherson, Battle Cry, pp. 653-663
- Mark E. Neely Jr., "Was the Civil War a Total War?" Civil War History 50 (2004), pp. 434+
- McPherson, Battle Cry, pp. 773-775
- On June 23, 1865, at Fort Towson in the Choctaw Nation's area of Indian Territory (present-day Oklahoma), Stand Watie signed a cease-fire agreement with Union representatives, becoming the last Confederate general to stand down. The last Confederate naval force to surrender was the CSS Shenandoah, on November 4, 1865, in Liverpool, England.
- McPherson, Battle Cry of Freedom, p. 495
- Lincoln's letter to O. H. Browning, Sept. 22, 1861
- Lincoln's letter to A. G. Hodges, April 4, 1864
- It also freed 1,000 or so slaves in Delaware, some lifetime servants in West Virginia, and black slaves owned by Indians in Oklahoma.
- Allan Nevins, War for the Union, 1862-1863, pp. 263-264
- For details see Thomas L. Livermore, Numbers and Losses in the Civil War in America, 1861-65 (1901), full text online; and William F. Fox, Regimental Losses in the American Civil War, 1861-1865 (1889). See also websites dealing with the casualty count.
- John Samuel Ezell, The South since 1865 (1963), pp. 27-28
- Paul F. Paskoff, "Measures of War: A Quantitative Examination of the Civil War's Destructiveness in the Confederacy," Civil War History 54, no. 1 (2008), pp. 35-62
- Ward (1990), p. 272
- Don E. Fehrenbacher, "Lincoln's Wartime Leadership: The First Hundred Days," Journal of the Abraham Lincoln Association 9, no. 1 (1987); online
- Mark R. Wilson, The Business of Civil War: Military Mobilization and the State, 1861-1865 (2006)
- Eric L. McKitrick, "Party Politics and the Union and Confederate War Efforts," in William Nisbet Chambers and Walter Dean Burnham, eds., The American Party Systems (1965); Beringer (1988), p. 93
- Heidler, pp. 1643-47
- Grady McWhiney and Perry D. Jamieson, Attack and Die: Civil War Military Tactics and the Southern Heritage (1982)
- T. Harry Williams, Lincoln and His Generals (1952)
- See excerpts from Official Reports; Andrew Ward, River Run Red: The Fort Pillow Massacre in the American Civil War (2005)
Learn how to draw shapes, such as ellipses, rectangles, polygons, and paths. The Path class is the way to visualize a fairly complex vector-based drawing language in a XAML UI; for example, you can draw Bezier curves.

Two sets of classes define a region of space in XAML UI: Shape classes and Geometry classes. The main difference between these classes is that a Shape has a brush associated with it and can be rendered to the screen, while a Geometry simply defines a region of space and is not rendered unless it helps contribute information to another UI property. You can think of a Shape as a UIElement with its boundary defined by a Geometry. This topic covers mainly the Shape classes.

The Shape classes are Line, Ellipse, Rectangle, Polygon, Polyline, and Path. Path is interesting because it can define an arbitrary geometry, and the Geometry class is involved here because that's one way to define the parts of a Path.

Fill and Stroke for shapes

To paint the interior of a shape, you set its Fill property to a Brush. A Shape can also have a Stroke, which is a line that is drawn around the shape's perimeter. A Stroke also requires a Brush that defines its appearance, and should have a non-zero value for StrokeThickness. StrokeThickness is a property that defines the perimeter's thickness around the shape edge. If you don't specify a Brush value for Stroke, or if you set StrokeThickness to 0, then the border around the shape is not drawn.

<Ellipse Fill="SteelBlue" Height="200" Width="200" />

var ellipse1 = new Ellipse();
ellipse1.Fill = new SolidColorBrush(Windows.UI.Colors.SteelBlue);
ellipse1.Width = 200;
ellipse1.Height = 200;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree. This example assumes you have
// a panel named 'layoutRoot' in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(ellipse1);

Here's the rendered Ellipse. When an Ellipse is positioned in a UI layout, its size is assumed to be the same as a rectangle with that Width and Height; the area outside the perimeter is not rendered but is still part of its layout slot size.

Rectangle

You can round the corners of a Rectangle. To create rounded corners, specify a value for the RadiusX and RadiusY properties. These properties specify the x-axis and y-axis of an ellipse that defines the curve of the corners. The maximum allowed value of RadiusX is the Width divided by two, and the maximum allowed value of RadiusY is the Height divided by two.

The next example creates a Rectangle with a Width of 200 and a Height of 100. It uses a Blue value of SolidColorBrush for its Fill and a Black value of SolidColorBrush for its Stroke. We set the StrokeThickness to 3. We set the RadiusX property to 50 and the RadiusY property to 10, which gives the Rectangle rounded corners.

<Rectangle Fill="Blue"
           Width="200"
           Height="100"
           Stroke="Black"
           StrokeThickness="3"
           RadiusX="50"
           RadiusY="10" />

var rectangle1 = new Rectangle();
rectangle1.Fill = new SolidColorBrush(Windows.UI.Colors.Blue);
rectangle1.Width = 200;
rectangle1.Height = 100;
rectangle1.Stroke = new SolidColorBrush(Windows.UI.Colors.Black);
rectangle1.StrokeThickness = 3;
rectangle1.RadiusX = 50;
rectangle1.RadiusY = 10;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree. This example assumes you have
// a panel named 'layoutRoot' in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(rectangle1);

Here's the rendered Rectangle.

Tip: There are some scenarios for UI definitions where, instead of using a Rectangle, a Border might be more appropriate. If your intention is to create a rectangle shape around other content, it might be better to use Border, because it can have child content and will automatically size around that content, rather than using the fixed dimensions for height and width like Rectangle does. A Border also has the option of having rounded corners if you set the CornerRadius property.
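As a minimal sketch of that Border alternative (the TextBlock child and its text are placeholder assumptions, not part of the original samples), the Border sizes itself around its content and rounds its corners via CornerRadius:

<Border Background="LightGray"
        BorderBrush="Black"
        BorderThickness="2"
        CornerRadius="10">
    <TextBlock Text="Sized by content" Margin="8" />
</Border>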
On the other hand, a Rectangle is probably a better choice for control composition. A Rectangle shape is seen in many control templates because it's used as a "FocusVisual" part for focusable controls. Whenever the control is in a "Focused" visual state, this rectangle is made visible; in other states it's hidden.

Polygon

A Polygon is a shape with a boundary defined by an arbitrary number of points. The boundary is created by connecting a line from one point to the next, with the last point connected to the first point. The Points property defines the collection of points that make up the boundary. In XAML, you define the points with a comma-separated list. In code-behind, you use a PointCollection to define the points, and you add each individual point as a Point value to the collection.

You don't need to explicitly declare the points such that the start point and end point are both specified as the same Point value. The rendering logic for a Polygon assumes that you are defining a closed shape and will connect the end point to the start point implicitly.

The next example creates a Polygon with 4 points set to (10,200), (60,140), (130,140), and (180,200). It uses a LightBlue value of SolidColorBrush for its Fill, and has no value for Stroke, so it has no perimeter outline.

<Polygon Fill="LightBlue"
         Points="10,200,60,140,130,140,180,200" />

var polygon1 = new Polygon();
polygon1.Fill = new SolidColorBrush(Windows.UI.Colors.LightBlue);

var points = new PointCollection();
points.Add(new Windows.Foundation.Point(10, 200));
points.Add(new Windows.Foundation.Point(60, 140));
points.Add(new Windows.Foundation.Point(130, 140));
points.Add(new Windows.Foundation.Point(180, 200));
polygon1.Points = points;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree. This example assumes you have
// a panel named 'layoutRoot' in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(polygon1);

Here's the rendered Polygon.

Tip: A Point value is often used as a type in XAML for scenarios other than declaring the vertices of shapes. For example, a Point is part of the event data for touch events, so you can know exactly where in a coordinate space the touch action occurred. For more info about Point and how to use it in XAML or code, see the API reference topic for Point.

Line

A Line is simply a line drawn between two points in coordinate space. A Line ignores any value provided for Fill, because it has no interior space. For a Line, make sure to specify values for the Stroke and StrokeThickness properties, because otherwise the Line won't render.

You don't use Point values to specify a Line shape; instead you use discrete Double values for X1, Y1, X2 and Y2. This enables minimal markup for horizontal or vertical lines. For example, <Line Stroke="Red" X2="400"/> defines a horizontal line that is 400 pixels long. The other X,Y properties are 0 by default, so in terms of points this XAML would draw a line from (0,0) to (400,0).
You could then use a TranslateTransform to move the entire Line if you wanted it to start at a point other than (0,0).

<Line Stroke="Red" X2="400"/>

var line1 = new Line();
line1.Stroke = new SolidColorBrush(Windows.UI.Colors.Red);
line1.X2 = 400;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree. This example assumes you have
// a panel named 'layoutRoot' in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(line1);

Polyline

If you specify a Fill for a Polyline, the Fill paints the interior space of the shape, even if the start point and end point of the Points set for the Polyline do not intersect. If you do not specify a Fill, then the Polyline is similar to what would have rendered if you had specified several individual Line elements where the start points and end points of consecutive lines intersected.

As with a Polygon, the Points property defines the collection of points that make up the boundary. In XAML, you define the points with a comma-separated list. In code-behind, you use a PointCollection to define the points, and you add each individual point as a Point structure to the collection.

<Polyline Stroke="Black"
          StrokeThickness="4"
          Points="10,200,60,140,130,140,180,200" />

var polyline1 = new Polyline();
polyline1.Stroke = new SolidColorBrush(Windows.UI.Colors.Black);
polyline1.StrokeThickness = 4;

var points = new PointCollection();
points.Add(new Windows.Foundation.Point(10, 200));
points.Add(new Windows.Foundation.Point(60, 140));
points.Add(new Windows.Foundation.Point(130, 140));
points.Add(new Windows.Foundation.Point(180, 200));
polyline1.Points = points;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree. This example assumes you have
// a panel named 'layoutRoot' in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(polyline1);

Path

You define the geometry of a path with the Data property. There are two techniques for setting Data (a minimal sketch of the first technique follows this list):

- You can set a string value for Data in XAML. In this form, the Path.Data value is consuming a serialization format for graphics. You typically don't text-edit this value in string form after it is first established. Instead, you use design tools that enable you to work in a design or drawing metaphor on a surface. Then you save or export the output, and this gives you a XAML file or XAML string fragment with Path.Data information.
- You can set the Data property to a single Geometry object. This can be done in code or in XAML. That single Geometry is typically a GeometryGroup, which acts as a container that can composite multiple geometry definitions into a single object for purposes of the object model. The most common reason for doing this is because you want to use one or more of the curves and complex shapes that can be defined as Segments values for a PathFigure, for example BezierSegment.
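Before looking at a tool-generated example, here is a minimal hand-written sketch of the first technique (the coordinates are arbitrary, chosen only for illustration). In the Data mini-language, "M" moves to an absolute start point and "L" draws straight line segments:

<Path Stroke="Black"
      StrokeThickness="2"
      Data="M 10,100 L 100,100 100,50" />

This draws two connected segments: one from (10,100) to (100,100), and a second from (100,100) to (100,50).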
This example shows a Path that might have resulted from using Blend for Visual Studio to produce just a few vector shapes and then saving the result as XAML. The total Path consists of a Bezier curve segment and a line segment. The example is mainly intended to give you some examples of what elements exist in the Path.Data serialization format and what the numbers represent. This Data begins with the move command, indicated by "M", which establishes an absolute start point for the path.

The first segment is a cubic Bezier curve that begins at (100,200) and ends at (400,175), drawn by using the two control points (100,25) and (400,350). This segment is indicated by the "C" command in the Data attribute string. The second segment begins with an absolute horizontal line command "H", which specifies a line drawn from the preceding subpath endpoint (400,175) to a new endpoint (280,175). Because it's a horizontal line command, the value specified is an x-coordinate.

<Path Stroke="DarkGoldenRod"
      StrokeThickness="3"
      Data="M 100,200 C 100,25 400,350 400,175 H 280" />

Here's the rendered Path.

The next example shows a usage of the other technique we discussed: a GeometryGroup with a PathGeometry. This example exercises some of the contributing geometry types that can be used as part of a PathGeometry: PathFigure and the various elements that can be a segment in PathFigure.Segments.

<Path Stroke="Black" StrokeThickness="1" Fill="#CCCCFF">
  <Path.Data>
    <GeometryGroup>
      <RectangleGeometry Rect="50,5 100,10" />
      <RectangleGeometry Rect="5,5 95,180" />
      <EllipseGeometry Center="100, 100" RadiusX="20" RadiusY="30"/>
      <RectangleGeometry Rect="50,175 100,10" />
      <PathGeometry>
        <PathGeometry.Figures>
          <PathFigureCollection>
            <PathFigure IsClosed="true" StartPoint="50,50">
              <PathFigure.Segments>
                <PathSegmentCollection>
                  <BezierSegment Point1="75,300" Point2="125,100" Point3="150,50"/>
                  <BezierSegment Point1="125,300" Point2="75,100" Point3="50,50"/>
                </PathSegmentCollection>
              </PathFigure.Segments>
            </PathFigure>
          </PathFigureCollection>
        </PathGeometry.Figures>
      </PathGeometry>
    </GeometryGroup>
  </Path.Data>
</Path>

var path1 = new Windows.UI.Xaml.Shapes.Path();
path1.Fill = new SolidColorBrush(Windows.UI.Color.FromArgb(255, 204, 204, 255));
path1.Stroke = new SolidColorBrush(Windows.UI.Colors.Black);
path1.StrokeThickness = 1;

var geometryGroup1 = new GeometryGroup();
var rectangleGeometry1 = new RectangleGeometry();
rectangleGeometry1.Rect = new Rect(50, 5, 100, 10);
var rectangleGeometry2 = new RectangleGeometry();
rectangleGeometry2.Rect = new Rect(5, 5, 95, 180);
geometryGroup1.Children.Add(rectangleGeometry1);
geometryGroup1.Children.Add(rectangleGeometry2);

var ellipseGeometry1 = new EllipseGeometry();
ellipseGeometry1.Center = new Point(100, 100);
ellipseGeometry1.RadiusX = 20;
ellipseGeometry1.RadiusY = 30;
geometryGroup1.Children.Add(ellipseGeometry1);

// The third RectangleGeometry from the XAML version, which was
// missing from the original code listing.
var rectangleGeometry3 = new RectangleGeometry();
rectangleGeometry3.Rect = new Rect(50, 175, 100, 10);
geometryGroup1.Children.Add(rectangleGeometry3);

var pathGeometry1 = new PathGeometry();
var pathFigureCollection1 = new PathFigureCollection();
var pathFigure1 = new PathFigure();
pathFigure1.IsClosed = true;
pathFigure1.StartPoint = new Windows.Foundation.Point(50, 50);
pathFigureCollection1.Add(pathFigure1);
pathGeometry1.Figures = pathFigureCollection1;

var pathSegmentCollection1 = new PathSegmentCollection();
var pathSegment1 = new BezierSegment();
pathSegment1.Point1 = new Point(75, 300);
pathSegment1.Point2 = new Point(125, 100);
pathSegment1.Point3 = new Point(150, 50);
pathSegmentCollection1.Add(pathSegment1);

var pathSegment2 = new BezierSegment();
pathSegment2.Point1 = new Point(125, 300);
pathSegment2.Point2 = new Point(75, 100);
pathSegment2.Point3 = new Point(50, 50);
pathSegmentCollection1.Add(pathSegment2);
pathFigure1.Segments = pathSegmentCollection1;

geometryGroup1.Children.Add(pathGeometry1);
path1.Data = geometryGroup1;

// When you create a XAML element in code, you have to add
// it to the XAML visual tree.
// This example assumes you have a panel named 'layoutRoot'
// in your XAML file, like this:
// <Grid x:Name="layoutRoot">
layoutRoot.Children.Add(path1);

Here's the rendered Path.

Using PathGeometry may be more readable than populating a Path.Data string. On the other hand, Path.Data uses a syntax compatible with Scalable Vector Graphics (SVG) image path definitions, so it may be useful for porting graphics from SVG, or as output from a tool like Blend.
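To make that trade-off concrete, here is a small sketch that defines the same triangle twice: once as an SVG-style Data string in XAML, and once as a PathGeometry built in code. The coordinates are arbitrary, and the snippet assumes the same 'layoutRoot' panel as the earlier examples; it is an illustration, not part of the original sample set.

<Path Stroke="Black" StrokeThickness="1" Data="M 10,10 L 110,10 60,90 Z" />

var triangleFigure = new PathFigure();
triangleFigure.StartPoint = new Windows.Foundation.Point(10, 10);
triangleFigure.IsClosed = true; // plays the role of the "Z" close command

var triangleSegments = new PathSegmentCollection();
triangleSegments.Add(new LineSegment() { Point = new Windows.Foundation.Point(110, 10) });
triangleSegments.Add(new LineSegment() { Point = new Windows.Foundation.Point(60, 90) });
triangleFigure.Segments = triangleSegments;

var triangleFigures = new PathFigureCollection();
triangleFigures.Add(triangleFigure);
var triangleGeometry = new PathGeometry();
triangleGeometry.Figures = triangleFigures;

var trianglePath = new Windows.UI.Xaml.Shapes.Path();
trianglePath.Stroke = new SolidColorBrush(Windows.UI.Colors.Black);
trianglePath.StrokeThickness = 1;
trianglePath.Data = triangleGeometry;
layoutRoot.Children.Add(trianglePath);

The Data string is one short line, while the code version spells out every figure and segment; which form is preferable depends on whether a person or a tool maintains the geometry.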
- Programming Paradigm: A programming paradigm is a way to describe some of the features of programming languages. Often a paradigm includes principles concerning the use of these features, or embodies a view that these features have special importance and utility in good programming practice.
- Procedural Programming: A programming paradigm that solves problems with programs that can be broken up into collections of variables, data structures and procedures. This paradigm tends to draw a sharp distinction between variables and data structures on the one hand and procedures on the other.
- Functional Programming: A programming paradigm that stresses the central role of functions. Some of its basic principles are:
  - Computation consists in the evaluation of functions.
  - Functions are first-class citizens in the language.
  - Functions should only return values; they should not produce side effects.
  - As much as possible, procedures should be written in terms of function calls.
- Pure Function: A function that does not produce side effects.
- Side Effect: A change in the state of the program (i.e., a change in the global environment) or any interaction external to the program (e.g., printing to the console).
- Higher-Order Function: A function that takes another function as an argument.
- Anonymous Function: A function that does not have a name.
- Refactoring: The act of rewriting computer code so that it performs the same task as before, but in a different way. (This is usually done to make the code more human-readable or to make it perform the task more quickly.)
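The functional-programming terms above are easier to see in code. Here is a small, self-contained C# sketch (the names are invented for illustration): Square is a pure function, ApplyTwice is a higher-order function, and the lambda passed to it is an anonymous function.

using System;

class FunctionalDemo
{
    // Pure function: its result depends only on its argument,
    // and it produces no side effects.
    static int Square(int x) => x * x;

    // Higher-order function: takes another function as an argument.
    static int ApplyTwice(Func<int, int> f, int x) => f(f(x));

    static void Main()
    {
        Console.WriteLine(ApplyTwice(Square, 2));      // 16, i.e. Square(Square(2))
        Console.WriteLine(ApplyTwice(n => n + 3, 10)); // 16, via an anonymous function

        // Note that printing to the console is itself a side effect,
        // so Main is not a pure function.
    }
}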
Did Earth's water come from asteroids? Data from a European probe orbiting a comet suggests that Earth's water came from asteroids, and not comets as was previously thought. ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0 Asteroids, not comets, may have delivered most of Earth's water to the planet when the solar system was young, new data from a probe orbiting a comet suggests. Comets are some of the solar system's most primitive building blocks, with many dating to soon after its formation. Scientists think that these dirty snowballs probably helped seed Earth with key ingredients for life, such as organic compounds. The European Space Agency's (ESA) Rosetta spacecraft is helping scientists learn more about the role these icy nomads have played in the evolution of the solar system and life on Earth by analyzing the composition of Comet 67P/Churyumov–Gerasimenko. In August, Rosetta became the first spacecraft to orbit a comet, and in November, its Philae lander became the first probe to make a soft touchdown on a comet's surface. Rosetta is also the first mission to escort a comet as it travels around the sun. [See images from ESA's Rosetta mission] Now, Rosetta has helped solve a mystery about how Earth became the watery world it is today. Before Rosetta began orbiting Comet 67P/C-G in August, it was using an instrument known as ROSINA (short for Rosetta Orbiter Spectrometer for Ion and Neutral Analysis) to analyze the chemical fingerprint of gases in the comet's fuzzy envelope. Scientists focused on data from the instrument regarding water to help uncover whether asteroids or comets delivered the water in Earth's oceans. Heavy water on Earth and in comets Models of Earth's birth suggest that the planet was quite hot after its formation about 4.6 billion years ago, so scientists think it's unlikely that any water currently on Earth's surface dates back to the time of the planet's creation. However, prior studies have hinted that cosmic impacts could have easily brought water later, during a violent era known as the Late Heavy Bombardment, about 800 million years after Earth's formation. To uncover the source of Earth's water, scientists look for bodies elsewhere in the solar system with similar water. Out of every 10,000 water molecules on Earth, three are not normal water molecules, but instead are so-called heavy water molecules. A normal water molecule is made of two hydrogen atoms and one oxygen atom. In heavy water, a normal hydrogen atom is replaced with deuterium, which is like hydrogen except that it has an extra neutron in its nucleus. (A regular hydrogen atom has only one proton in its nucleus.) To see if comets might be the source of Earth's water, in 1986, the ESA probe Giotto flew by Halley's Comet, becoming the first spacecraft to make close observations of a comet. It discovered that Halley's Comet had twice the amount of heavy water compared to normal water as Earth does. Halley's Comet comes from the Oort Cloud, a giant spherical cloud of trillions of icy bodies that extends from 5,000 to 100,000 times the distance of Earth to the sun. The data from Halley's Comet and from other Oort Cloud comets "ruled out Oort Cloud comets as being the source of terrestrial water," said lead study author Kathrin Altwegg,of the University of Bern in Switzerland, principal investigator for the ROSINA mass spectrometer on Rosetta. [Fun Facts about Comets] But the Oort Cloud is not the only source of comets in the solar system. 
Another home to the dirty snowballs is the disc-shaped Kuiper Belt, which extends from about 30 to 55 times the distance of Earth to the sun. In 2011, data from ESA's Herschel Space Observatory revealed that Kuiper Belt comet 103P/Hartley 2 had a deuterium-to-hydrogen ratio "that matched terrestrial water's perfectly," Altwegg said during a news conference Tuesday (Dec. 9). "The Hartley 2 measurement — that was a real big surprise." Not all comets are alike Now, Rosetta has provided data from Comet 67P/C-G, another Kuiper Belt comet. However, Rosetta has discovered that this comet possesses an even higher deuterium-to-hydrogen ratio than seen in Oort Cloud comets — three times the amount of heavy water compared to normal water as Earth has. If Earth's water had come from Kuiper Belt objects — even if most of them were like comet 103P/Hartley 2 — and if only a small fraction were like Comet 67P/C-G, Earth's deuterium-to-hydrogen ratio would be significantly higher than it is today. "This probably rules out Kuiper Belt comets from bringing water to Earth," Altwegg said. Instead, most of Earth's water was probably delivered by asteroids, Altwegg said. "Today's asteroids have very little water — that's clear," Altwegg added. "But that was probably not always the case. During the Late Heavy Bombardment 3.8 billion years ago, at that time, asteroids could have had much more water than they could now." [Comet Quiz: How much do you know about comets?] The asteroids seen now "have stayed in the vicinity of the sun for 4.6 billion years," Altwegg said. "They've lost water due to the sun, due to heat. But to start with, they might have had much more water than they have now." Future analysis of ice-rich bodies in the asteroid belt could shed light on whether Earth's water really did come from there, Altwegg said. The differences seen between Comet 103P/Hartley 2 and Comet 67P/C-G suggest that Kuiper Belt comets are much more diverse than previously thought. This could mean that "they were probably not all assembled in the same location in the solar system," Altwegg said. Kuiper Belt comets with relatively low deuterium-to-hydrogen ratios might have formed close to the sun, where solar warmth may have helped them lose deuterium, while those with relatively high deuterium-to-hydrogen ratios might have originated farther away. In the future, when Comet 67P/C-G flies closer to the sun, the scientists hope to fly Rosetta through a jet of gas that the comet will give off as it gets warmer and more active. This will help reveal if the deuterium-to-hydrogen ratio seen from the water near the comet's surface is the same as that from near its core. "Hopefully, we'll get to fly directly through a jet [in the] summertime [of] next year," said Matt Taylor, ESA Rosetta project scientist. Where could Philae be? Scientists are also still on the lookout for Philae, which made a bouncy landing on Comet 67P/C-G's surface in mid-November. The refrigerator-size probe's anchoring harpoons did not fire as planned during touchdown, and it bounced off the comet twice before settling down on its surface. It broadcasted scientific data for about 57 hours on the comet's surface before its primary batteries ran out. ESA officials aren't sure where Philae is now. Panoramic images from the probe reveal "one side of the lander appears to be in a hole," Taylor said during the news conference. "I see an overhanging clifflike structure." 
A radio instrument known as CONSERT, short for Comet Nucleus Sounding Experiment by Radiowave Transmission, on both Rosetta and Philae has narrowed the lander's position to a strip a few hundred feet long by a few dozen feet wide. "We're using that to kind of nail down where we think we should be looking harder," Taylor said. "Once we get identification of where the lander is, that will give us a better fix on what we believe the illumination conditions are and a better idea of when we should expect the lander to have sufficient illumination to start charging its batteries and come back online." A "back-of-a-beer-mat calculation" suggests Philae might come back online around May, Taylor added. The new comet findings are detailed in this week's issue of the journal Science.
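As a rough numerical aside on the deuterium argument made earlier in this article, the mixing logic can be checked with the ratios quoted above (about 3 heavy-water molecules per 10,000 on Earth, a roughly Earth-like value for Hartley 2, and about three times the terrestrial value for 67P). The Python sketch below uses an invented mixing fraction purely for illustration:

# Heavy-water abundance proxy: molecules per 10,000 (values quoted in the article)
earth = 3.0        # terrestrial value
hartley2 = 3.0     # Comet 103P/Hartley 2, roughly Earth-like
comet_67p = 9.0    # Comet 67P/C-G, about three times the terrestrial value

# Suppose most delivered water came from Hartley-2-like comets, with only a small
# admixture from 67P-like comets (the 10% fraction is illustrative, not measured).
fraction_67p = 0.10
mixed = (1 - fraction_67p) * hartley2 + fraction_67p * comet_67p
print(mixed)   # 3.6 -- already noticeably above the terrestrial value of 3.0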
Inertia is also defined as the tendency of objects to keep moving in a straight line at a constant velocity. The principle of inertia is one of the fundamental principles in classical physics that are still used to describe the motion of objects and how they are affected by the applied forces on them. Inertia comes from the Latin word, iners, meaning idle, sluggish. Inertia is one of the primary manifestations of mass, which is a quantitative property of physical systems. Isaac Newton defined inertia as his first law in his Philosophiæ Naturalis Principia Mathematica, which states: The vis insita, or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavours to preserve its present state, whether it be of rest or of moving uniformly forward in a straight line. In common usage, the term "inertia" may refer to an object's "amount of resistance to change in velocity" (which is quantified by its mass), or sometimes to its momentum, depending on the context. The term "inertia" is more properly understood as shorthand for "the principle of inertia" as described by Newton in his First Law of Motion: an object not subject to any net external force moves at a constant velocity. Thus, an object will continue moving at its current velocity until some force causes its speed or direction to change. On the surface of the Earth, inertia is often masked by the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest), and gravity. This misled the philosopher Aristotle to believe that objects would move only as long as force was applied to them: ...it [body] stops when the force which is pushing the travelling object has no longer power to push it along... Prior to the Renaissance, the most generally accepted theory of motion in Western philosophy was based on Aristotle who around about 335 BC to 322 BC said that, in the absence of an external motive power, all objects (on Earth) would come to rest and that moving objects only continue to move so long as there is a power inducing them to do so. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium, which continues to move the projectile in some way. Aristotle concluded that such violent motion in a void was impossible. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of matter was motion, not stasis. In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus did have several supporters who further developed his ideas. 
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's thought was followed up by his pupil Albert of Saxony (1316-1390) and the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs. Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone: "...[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path." Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. The principle of inertia states it is the tendency of an object to resist a change in motion. According to Newton, an object will stay at rest or stay in motion (i.e. "maintain its velocity") unless acted on by a net external force, whether it results from gravity, friction, contact, or some other force. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the earth (and everything on it) was in fact never "at rest", but was actually in constant motion around the sun.Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle: A body moving on a level surface will continue in the same direction at a constant speed unless disturbed. Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in movement towards the west (for example), it will maintain itself in that movement." This notion which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the centre of the earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." 
It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Einstein to develop the theory of Special Relativity. Concepts of inertia in Galileo's writings would later come to be refined, modified and codified by Isaac Newton as the first of his Laws of Motion (first published in Newton's work, Philosophiae Naturalis Principia Mathematica, in 1687): Unless acted upon by a net unbalanced force, an object will maintain a constant velocity. Note that "velocity" in this context is defined as a vector, thus Newton's "constant velocity" implies both constant speed and constant direction (and also includes the case of zero speed, or no motion). Since initial publication, Newton's Laws of Motion (and by inclusion, this first law) have come to form the basis for the branch of physics known as classical mechanics. The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617-1621); however, the meaning of Kepler's term (which he derived from the Latin word for "idleness" or "laziness") was not quite the same as its modern interpretation. Kepler defined inertia only in terms of a resistance to movement, once again based on the presumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to these concepts as it is today. Nevertheless, despite defining the concept so elegantly in his laws of motion, even Newton did not actually use the term "inertia" to refer to his First Law. In fact, Newton originally viewed the phenomenon he described in his First Law of Motion as being caused by "innate forces" inherent in matter, which resisted any acceleration. Given this perspective, and borrowing from Kepler, Newton attributed the term "inertia" to mean "the innate force possessed by an object which resists changes in motion"; thus, Newton defined "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one which we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon described by Newton's First Law of Motion, and the two concepts are now considered to be equivalent. Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies" was built on the understanding of inertia and inertial reference frames developed by Galileo and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained unchanged from Newton's original meaning (in fact, the entire theory was based on Newton's definition of inertia). 
However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to reference frames that were inertial in nature (meaning no acceleration was present). In an attempt to address this limitation, Einstein proceeded to develop his general theory of relativity ("The Foundation of the General Theory of Relativity," 1916), which ultimately provided a unified theory for both inertial and noninertial (accelerated) reference frames. However, in order to accomplish this, in general relativity Einstein found it necessary to redefine several fundamental concepts (such as gravity) in terms of a new concept of "curvature" of space-time, instead of the more traditional system of forces understood by Newton. As a result of this redefinition, Einstein also redefined the concept of "inertia" in terms of geodesic deviation instead, with some subtle but significant additional implications. The result of this is that, according to general relativity, inertia is the gravitational coupling between matter and spacetime. When dealing with very large scales, the traditional Newtonian idea of "inertia" does not actually apply and cannot necessarily be relied upon. Luckily, for sufficiently small regions of spacetime, the special theory can be used, and inertia still means the same (and works the same) as in the classical model. Another profound conclusion of the theory of special relativity, perhaps the best known, was that energy and mass are not separate things but are, in fact, interchangeable. But this new relationship also carried with it new implications for the concept of inertia. The logical conclusion of special relativity was that if mass exhibits the principle of inertia, then inertia must also apply to energy. This theory, and subsequent experiments confirming some of its conclusions, have also served to radically expand the meaning of inertia to apply more widely and to include inertia of energy.

Physicists and mathematicians appear to be less inclined to use the popular concept of inertia as "a tendency to maintain momentum" and instead favor the mathematically useful definition of inertia as the measure of a body's resistance to changes in velocity, or simply a body's inertial mass. This was clear at the beginning of the 20th century, before the advent of the theory of relativity. Mass, m, denoted something like an amount of substance or quantity of matter. At the same time, mass was the quantitative measure of inertia of a body. The mass of a body determines the momentum, p, of the body at a given velocity, v; it is a proportionality factor in the formula

p = mv    (1)

The factor m is referred to as inertial mass. But mass, as related to the "inertia" of a body, can also be defined by the formula

F = ma    (2)

Here, F is force, m is inertial mass, and a is acceleration. By this formula, the greater its mass, the less a body accelerates under a given force. Masses defined by formula (1) and formula (2) are equal because formula (2) is a consequence of formula (1) if mass does not depend on time and velocity. Thus, "mass is the quantitative or numerical measure of a body's inertia, that is of its resistance to being accelerated".
This meaning of a body's inertia therefore is altered from the popular meaning as "a tendency to maintain momentum" to a description of the measure of how difficult it is to change the velocity of a body, but it is consistent with the fact that motion in one reference frame can disappear in another, so it is the change in velocity that is important. There is no measurable difference between gravitational mass and inertial mass. The gravitational mass is defined by the quantity of gravitational field material a mass possesses, including its energy. The "inertial mass" (relativistic mass) is a function of the acceleration a mass has undergone and its resultant speed. A mass that has been accelerated to speeds close to the speed of light has its "relativistic mass" increased, and that is why the magnetic field strength in particle accelerators must be increased to force the mass's path to curve. In practice, "inertial mass" is normally taken to be "invariant mass" and so is identical to gravitational mass without the energy component. Gravitational mass is measured by comparing the force of gravity of an unknown mass to the force of gravity of a known mass. This is typically done with some sort of balance. Equal masses will match on a balance because the gravitational field applies to them equally, producing identical weight. This assumption breaks down near supermassive objects such as black holes and neutron stars due to tidal effects. It also breaks down in weightless environments, because no matter what objects are compared, it will yield a balanced reading. Inertial mass is found by applying a known net force to an unknown mass, measuring the resulting acceleration, and applying Newton's Second Law, m = F/a. This gives an accurate value for mass, limited only by the accuracy of the measurements. When astronauts need to be measured in the weightlessness of free fall, they actually find their inertial mass in a special chair called a body mass measurement device (BMMD). At high speeds, and especially near the speed of light, inertial mass can be determined by measuring the magnetic field strength and the curvature of the path of an electrically-charged mass such as an electron. No physical difference has been found between gravitational and inertial mass in a given inertial frame. In experimental measurements, the two always agree within the margin of error for the experiment. Einstein used the fact that gravitational and inertial mass were equal to begin his general theory of relativity, in which he postulated that gravitational mass was the same as inertial mass, and that the acceleration of gravity is a result of a "valley" or slope in the space-time continuum that masses "fell down". Dennis Sciama later showed that the reaction force produced by the combined gravity of all matter in the universe upon an accelerating object is mathematically equal to the object's inertia, but this would only be a workable physical explanation if, by some mechanism, the gravitational effects operated instantaneously. At any non-zero speed, relativistic mass always exceeds gravitational mass. If the mass is made to travel close to the speed of light, its "inertial mass" (relativistic) as observed from a stationary frame would be very great while its gravitational mass would remain at its rest value, but the gravitational effect of the extra energy would exactly balance the measured increase in inertial mass. 
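As a quick numerical check on the m = F/a relation described above, here is a minimal Python sketch (the force, acceleration, and velocity values are invented for illustration):

# Inertial mass from Newton's second law: m = F / a.
force = 12.0          # net force in newtons (illustrative value)
acceleration = 3.0    # resulting acceleration in m/s^2 (illustrative value)
inertial_mass = force / acceleration
print(inertial_mass)  # 4.0 kg

# Momentum at a given velocity, p = m * v (formula (1) above).
velocity = 2.5        # m/s
momentum = inertial_mass * velocity
print(momentum)       # 10.0 kg*m/s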
In a location such as a steadily moving railway carriage, a dropped ball (as seen by an observer in the carriage) would behave as it would if it were dropped in a stationary carriage. The ball would simply descend vertically. It is possible to ignore the motion of the carriage by defining it as an inertial frame. In a moving but non-accelerating frame, the ball behaves normally because the train and its contents continue to move at a constant velocity. Before being dropped, the ball was traveling with the train at the same speed, and the ball's inertia ensured that it continued to move in the same speed and direction as the train, even while dropping. Note that, here, it is inertia which ensured that, not its mass. In an inertial frame, all the observers in uniform (non-accelerating) motion will observe the same laws of physics. However, observers in another inertial frame can make a simple, and intuitively obvious, transformation (the Galilean transformation), to convert their observations. Thus, an observer from outside the moving train could deduce that the dropped ball within the carriage fell vertically downwards. However, in reference frames which are experiencing acceleration (non-inertial reference frames), objects appear to be affected by fictitious forces. For example, if the railway carriage were accelerating, the ball would not fall vertically within the carriage but would appear to an observer to be deflected because the carriage and the ball would not be traveling at the same speed while the ball was falling. Other examples of fictitious forces occur in rotating frames such as the earth. For example, a missile at the North Pole could be aimed directly at a location and fired southwards. An observer would see it apparently deflected away from its target by a force (the Coriolis force), but in reality, the southerly target has moved because earth has rotated while the missile is in flight. Because the earth is rotating, a useful inertial frame of reference is defined by the stars, which only move imperceptibly during most observations. Newton's first law of motion is known as the principle of inertia. Another form of inertia is rotational inertia (-> moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum is unchanged, unless an external torque is applied; this is also called conservation of angular momentum. Rotational inertia depends on the object remaining structurally intact as a rigid body, and also has practical consequences. For example, a gyroscope uses the property that it resists any change in the axis of rotation. A gas or liquid in a container will also resist changes in rotational rate.
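As an aside, the Galilean transformation mentioned in the railway-carriage discussion above can be written explicitly as x' = x - vt (with t' = t). Here is a minimal Python sketch, with an assumed carriage speed:

# Galilean transformation into a frame moving at constant speed v along x.
v = 30.0   # carriage speed in m/s (assumed for the example)

def to_carriage_frame(x_ground, t):
    # Horizontal position as seen from the moving carriage: x' = x - v * t.
    return x_ground - v * t

# A dropped ball keeps the carriage's horizontal speed, so in the ground frame
# it moves forward while it falls ...
t = 0.5
x_ground = v * t                       # 15.0 m travelled horizontally
# ... but in the carriage frame it has not moved horizontally at all.
print(to_carriage_frame(x_ground, t))  # 0.0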
Date Range in Excel Formula

There are times when we need to perform operations like addition or subtraction on date values in Excel. By setting date ranges in Excel, we can perform calculations on these dates. To set up date ranges in Excel, we first format the cells that hold the start and end dates as 'Date' and then use the '+' or '-' operators to determine the end date or the range duration.

Examples of Date Range in Excel

Example #1 – Basic Date Ranges

Let us see how adding a number to a date creates a date range. Suppose we have a start date in cell A2. If we add a number to it, say 5, we can build a date range (for example, an end date in cell B2 calculated from the start date). Next, select cell A3 and type '=B2 + 1' so that the next range starts the day after the previous one ends. Copy cell B2 and paste it into cell B3; the relative cell reference would change as follows: So we can see that multiple date ranges can be built this way.

Example #2 – Creating a Date Sequence

With Excel, we can easily create several sequences. We know that dates are stored as numbers in Excel, so we can use the same method to create date ranges. To create date ranges that have the same gap, but where the dates change as we go down, we can follow the steps below:
- Type a start date and end date in a minimum of two rows.
- Select both ranges and drag them down to the row where we require the date ranges.
So we can see that, using the date ranges in the first two rows as a template, Excel automatically creates date ranges for the subsequent rows.

Now, let's say we have two dates in two cells, and we wish to display them concatenated as a date range in a single cell. To do this, a formula based on the TEXT function can be used. The general syntax for this formula is as follows:

=TEXT(date1,"format") & " - " & TEXT(date2,"format")

This formula receives two date values as numbers and concatenates the two dates in the form of a date range according to a custom date format ("mmm d" in this case):

Date Range =TEXT(A2,"mmm d") & "-" & TEXT(B2,"mmm d")

So we can see in the above screenshot that we have applied the formula in cell C2. The TEXT function receives the dates stored in cells A2 and B2, the ampersand '&' operator concatenates the two dates as a date range in a custom format ("mmm d" in this case) in a single cell, and the two dates are joined with a hyphen '-' in the resultant date range shown in cell C2.

Now, let's say we wish to combine the two dates as a date range in a single cell with a different format, say "d mmm yy". The formula for the date range in this case would be as follows:

Date Range =TEXT(A3,"d mmm yy") & "-" & TEXT(B3,"d mmm yy")

So we can see in the above screenshot that the TEXT function receives the dates stored in cells A3 and B3, and the ampersand '&' operator concatenates the two dates as a date range in a custom format ("d mmm yy" in this case) in a single cell. The two dates are joined with a hyphen '-' in the resultant date range shown in cell C3.

Now let us see what happens if the start date or the end date is missing.
Let us say that the end date is missing, as follows: The formula based on the TEXT function that we used above won't work correctly when the end date is missing, because the hyphen in the formula would still be appended to the start date; that is, along with the start date, we would also see a hyphen displayed in the date range. Instead, we would want to see only the start date as the date range when the end date is missing. In this case, we can build the formula by wrapping the concatenation and the second TEXT function inside an IF clause, as follows:

Date Range =TEXT(A2,"mmm d") & IF(B2<>"", "-" & TEXT(B2,"mmm d"), "")

So we can see that the above formula creates a full date range using both dates when both are present. However, it displays only the start date, in the specified format, if the end date is missing. This is done with the help of an IF clause.

If both dates may be missing, we can use a nested IF statement in Excel (i.e., one IF inside another IF statement), as follows:

Date Range =IF(A4<>"",TEXT(A4,"mmm d") & IF(B4<>"", "-" & TEXT(B4,"mmm d"), ""),"")

So we can see that the above formula returns an empty string if the start date is missing. If both dates are missing, an empty string is also returned.

Things to Remember
- We can also create a list of sequential dates using the 'Fill Handle' command. To do this, select the cell holding the start date and drag it over the range of cells we wish to fill. Click the 'Home' tab -> 'Editing' -> 'Fill Series' and then choose the date unit we wish to use.
- If we wish to calculate the duration or the number of days between two dates, we can simply subtract the two dates using the '-' operator, and we will get the desired result.

Note: The format of cells A2 and B2 is 'Date', whereas that of cell C2 is 'General', as it calculates a number of days.
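For readers who want to check the branching logic of the nested IF version outside Excel, here is a rough Python analogue; the function name and the sample dates are invented for illustration and are not part of the original article (and Python's %d pads the day with a leading zero, unlike Excel's "d"):

from datetime import date

def date_range_label(start, end=None, fmt="%b %d"):
    # Mirrors =IF(A4<>"", TEXT(A4,"mmm d") & IF(B4<>"", "-" & TEXT(B4,"mmm d"), ""), "")
    if start is None:                # both dates missing -> empty string
        return ""
    label = start.strftime(fmt)      # TEXT(start, "mmm d")
    if end is not None:              # append the hyphen and end date only if present
        label += "-" + end.strftime(fmt)
    return label

print(date_range_label(date(2023, 9, 1), date(2023, 9, 15)))  # "Sep 01-Sep 15"
print(date_range_label(date(2023, 9, 1)))                     # "Sep 01"
print(date_range_label(None))                                 # ""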
Details about the physical transformation of over 200 of the island’s coastal glaciers are documented in a new study, in which the authors anticipate environmental impacts. A new study of Greenland’s shrinking ice sheet reveals that many of the island’s glaciers are not only retreating, but are also undergoing other physical changes. Some of those changes are causing the rerouting of freshwater rivers beneath the glaciers, where it meets the bedrock. These rivers carry nutrients into the ocean, so this reconfiguring has the potential to impact the local ecology as well as the human communities that depend on it. “The coastal environment in Greenland is undergoing a major transformation,” said Alex Gardner, a research scientist at NASA’s Jet Propulsion Laboratory and co-author of the study. “We are already seeing new sections of the ocean and fjords opening up as the ice sheet retreats, and now we have evidence of changes to these freshwater flows. So losing ice is not just about changing sea level, it’s also about reshaping Greenland’s coastline and altering the coastal ecology.” About 80% of Greenland is blanketed by an ice sheet, also known as a continental glacier, that reaches a thickness of up to 2.1 miles (3.4 kilometers). Multiple studies have shown that the melting ice sheet is losing mass at an accelerating rate due to rising atmosphere and ocean temperatures, and that the additional meltwater is flowing into the sea. This study, published on October 27, 2020, in the Journal of Geophysical Research: Earth’s Surface, provides a detailed look at physical changes to 225 of Greenland’s ocean-terminating glaciers, which are narrow fingers of ice that flow from the ice sheet interior out into the ocean. The data used in the paper was compiled as part of a project based at JPL called Inter-mission Time Series of Land Ice Velocity and Elevation, or ITS_LIVE, which brings together observations of glaciers around the globe — collected by multiple satellites between 1985 and 2015 — into a single dataset open to scientists and the public. The satellites are all part of the Landsat program, which has sent a total of seven spacecraft into orbit to study Earth’s surface since 1972. Managed by NASA and the U.S. Geological Survey, Landsat data reveals both natural and human-caused changes to Earth’s surface, and is used by land managers and policymakers to make decisions about Earth’s changing environment and natural resources. Advancing and Retreating As glaciers flow toward the sea — albeit too slowly to be perceptible to the eye — they are replenished by new snowfall on the interior of the ice sheet that gets compacted into ice. Some glaciers extend past the coastline and can break off as icebergs. Due to rising atmospheric and ocean temperatures, the balance between glacier melting and replenishment, as well as iceberg calving, is changing. Over time, a glacier’s front may naturally advance or retreat, but the new research shows that none of the 225 ocean-terminating glaciers surveyed has substantially advanced since 2000, while 200 have retreated. Although this is in line with other Greenland findings, the new survey captures a trend that hasn’t been apparent in previous work: As individual glaciers retreat, they are also changing in ways that are likely rerouting freshwater flows under the ice. For example, glaciers change in thickness not only as warmer air melts ice off their surfaces, but also as their flow speed changes in response to the ice front advancing or retreating. 
Both scenarios were observed in the new study, and both can lead to changes in the distribution of pressure beneath the ice; scientists can infer these pressure changes based on changes in thickness analyzed in the study. This, in turn, can change the path of a subglacial river, since water will always take the path of least resistance, flowing in the direction of lowest pressure. Citing previous studies on the ecology of Greenland, the authors note that freshwater rivers under the ice sheet deliver nutrients (such as nitrogen, phosphorus, iron, and silica) to bays, deltas, and fjords around Greenland. In addition, the under-ice rivers enter the ocean where the ice and bedrock meet, which is often well below the ocean’s surface. The relatively buoyant fresh water rises, carrying nutrient-rich deep ocean water to the surface, where the nutrients can be consumed by phytoplankton. Research has shown that glacial meltwater rivers directly impact the productivity of phytoplankton — meaning the amount of biomass they produce — which serves as a foundation of the marine food chain. Combined with the opening up of new fjords and sections of ocean as glaciers retreat, these changes amount to a transformation of the local environment. “The speed of ice loss in Greenland is stunning,” said Twila Moon, deputy lead scientist of the National Snow and Ice Data Center and lead author on the study. “As the ice sheet edge responds to rapid ice loss, the character and behavior of the system as a whole are changing, with the potential to influence ecosystems and people who depend on them.” The changes described in the new study seem to depend on the unique features of its environment, such as the slope of the land that the glacier flows down, the properties of the ocean water that touches the glacier, as well as the glacier’s interaction with neighboring glaciers. That suggests scientists would need detailed knowledge not only of the glacier itself, but also of the glacier’s unique environment in order to predict how it will respond to continued ice loss. “It makes modeling glacial evolution far more complex when we’re trying to anticipate how these systems will evolve both in the short term and two or three decades out,” Gardner said. “It’s going to be more challenging than we previously thought, but we now have a better understanding of the processes driving the variety of responses, which will help us make better ice sheet models.” Reference: “Rapid reconfiguration of the Greenland Ice Sheet coastal margin” by Twila A. Moon, Alex S. Gardner, Bea Csatho, Ivan Parmuzin and Mark A. Fahnestock, 27 October 2020, Journal of Geophysical Research.
Jouni Lerssi, Geological Survey of Finland, PO Box 1237, 70211 Kuopio, Finland; firstname.lastname@example.org

The resistivity method is used in the study of horizontal and vertical discontinuities in the electrical properties (resistivity) of the subsurface. Resistivity is the physical property that determines how strongly a material opposes the passage of electric current:

ρ = resistivity in ohm-meters (Ωm)
σ = 1/ρ = conductivity in siemens per meter (S/m)

The conductivity of a rock increases if:
- The quantity of water increases
- The salinity increases (quantity of ions)
- The quantity of clay increases
- The temperature increases

Electrical resistivity is a geophysical method in which an electrical current is injected into the ground through steel electrodes in an attempt to measure the electrical properties of the subsurface. Most soils and non-ore-bearing rocks are electrically resistive (i.e., insulators). Soil moisture and ground water are often electrically conductive due to contained dissolved minerals. Therefore, the resistivity measured in the ground is predominantly controlled by the amount of moisture and water within the soil and rock (a function of the porosity and permeability) and the concentration of dissolved solids (salts) in that water.

The basic method requires at least four steel electrodes to be driven into the ground. An electrical current is then applied to the outer electrodes by a battery or generator. A voltage is measured between the two inner electrodes using a simple voltmeter. Through Ohm's law (V = IR), and by knowing the input current, the measured voltage, and the geometry of the electrode array, a value known as resistance can be calculated. Resistivity, measured in ohm-meters, is resistance times area divided by distance. Because the actual current flow is strongly influenced by conductive layers, the value measured is known as the "apparent resistivity". In its simplest terms, it represents an average value encompassing all of the different materials within the volume (half-space) being measured. Most modern resistivity meters calculate apparent resistivity once the geometric parameters are input.

Typical applications include:
- Exploration of bulk mineral deposits (sand, gravel)
- Exploration of underground water supplies
- Engineering/construction site investigation
- Waste sites and pollutant investigations
- Cavity, karst detection
- Glaciology, permafrost
- Archaeological investigations
- Stratigraphic surveys
- Ground water mapping surveys
- Grounding surveys

For these applications, the basic method is to set up electrical arrays at several locations on a site and measure changes in resistivity as a function of depth. This is accomplished by increasing the electrode spacing while leaving the center of the array at the same location. Generally the "a" spacing starts at 1 meter and is doubled for each successive measurement. A very general rule of thumb is that the depth of the investigation is about 1/2 to 1/3 of the "a" spacing. This method is known as "vertical electrical sounding" and would be the method used to determine the depth of a clay layer, ground water, or bedrock.

Applications based on profiling (described next) include:
- Leachate mapping surveys
- Sand and gravel exploration surveys
- Fracture zone exploration surveys
- Areal surveys (exploring for voids, caves, and buried, contaminant-filled trenches)

Once the depth to a feature of interest is determined using the sounding method, the feature may be tracked by moving the array across a site, keeping the "a" spacing the same.
This method of exploration is called "profiling" and might be used to locate a conductive leachate plume, an old stream channel, or a trench. Some resistivity units make it easy to "steer" the array across a site by comparing data from the left half of the array with the data from the right half.

- Soundings can be used to determine the depth and thickness of subsurface layers, the depth to the water table, and bedrock.
- Profiling can be used to detect and locate contaminant plumes.
- Resistivity values can be used to estimate geological formations.
- Like all geophysical methods, resistivity data are ambiguous, meaning that many different "models" can produce the same data. To narrow down the number of possible models, other geological information is needed (borehole and/or monitoring well data).
- Electrical resistivity is slow because electrodes must be driven into the ground between measurements.
- Arrays cannot be oriented parallel to buried electrical power lines, utilities, and fences, since the current injected into the ground will flow more easily through the metal feature.
- Data are influenced by near-surface conductive layers. The current will always travel most easily along highly conductive layers. If the surface is highly conductive, it may not be possible to collect data below the top layer.

Fig. 1. Descriptive picture of resistivity (DC) measurement (© Riitta Turunen, GTK).

Reynolds, J.M. 2011. An Introduction to Applied and Environmental Geophysics. John Wiley & Sons Ltd, Chichester, 2nd ed., 712 pp.
David M. Nielsen, ed., 2006: Practical Handbook of Environmental Site Characterization and Ground-Water Monitoring, second edition, CRC Press, pp. 249-295.
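As a rough numerical illustration of the apparent-resistivity calculation described above, here is a minimal Python sketch. It assumes a Wenner-type array, in which all four electrodes are equally spaced at the "a" spacing (the text describes an equally spaced array but does not name the configuration), and the spacing, voltage, and current values are invented:

import math

def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    # Ohm's law gives the resistance R = V / I; for a Wenner array the
    # geometric factor is k = 2 * pi * a, so rho_apparent = k * V / I (ohm-m).
    resistance = voltage_v / current_a
    geometric_factor = 2.0 * math.pi * a_m
    return geometric_factor * resistance

# Illustrative numbers only: 2 m spacing, 0.25 V measured, 0.1 A injected.
print(wenner_apparent_resistivity(a_m=2.0, voltage_v=0.25, current_a=0.1))
# about 31.4 ohm-m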
The term "relativity theory" was first used by Max Planck in 1906, and the whole theory of relativity takes its name from it. Planck described how the principles of relativity theory could be applied to practical events and to various other theories. Relativity theory is considered a unified representative of multiple physical theories: concepts from more than one physical theory are explained and broadly discussed within it, which makes the theory both broad and deep. During the 20th century, the concepts and ideas of the theory of relativity transformed many theoretical concepts of physics and astronomy. In its most basic version, the theory of relativity used the 200-year-old theory of mechanics as a base for further studies and descriptions. Isaac Newton is often said to be the founder of the theory of mechanics; he did not create the whole theory, but he developed and explained many of the concepts it contains, and he introduced the concept of gravity. In physics, the theory of relativity contributed to the development of the concept of elementary particles, and its concepts were used to understand the interactions of elementary particles more deeply.
Peak oil, an event based on M. King Hubbert's theory, is the point in time when the maximum rate of extraction of petroleum is reached, after which the rate of production is expected to enter terminal decline. Peak oil theory is based on the observed rise, peak, (sometimes rapid) fall, and depletion of aggregate production rate in oil fields over time. Mostly due to the development of new production techniques and the exploitation of unconventional supplies, Hubbert's original predictions for world production proved premature. Hubbert's original prediction that US peak oil would occur in about 1970 was accurate, as US average annual production peaked in 1970 at 9.6 million barrels per day. However, after a decades-long decline, the successful application of massive hydraulic fracturing to additional tight reservoirs caused US production to rebound, hitting 9.2 million barrels per day in early 2015. Peak oil is often confused with oil depletion; peak oil is the point of maximum production, while depletion refers to a period of falling reserves and supply. Some observers, such as petroleum industry experts Kenneth S. Deffeyes and Matthew Simmons, predict negative global economy implications following a post-peak production decline and oil price increase because of the high dependence of most modern industrial transport, agricultural, and industrial systems on the low cost and high availability of oil. Predictions vary greatly as to what exactly these negative effects would be. Optimistic estimations of peak production forecast the global decline will begin after 2020, and assume major investments in alternatives will occur before a crisis, without requiring major changes in the lifestyle of heavily oil-consuming nations. These models show the price of oil at first escalating and then retreating as other types of fuel and energy sources are used. Pessimistic predictions of future oil production made after 2007 stated either that the peak had already occurred, that oil production was on the cusp of the peak, or that it would occur shortly.

Peak theory

By observing past discoveries and production levels, and predicting future discovery trends, Hubbert used statistical modelling in 1956 to accurately predict that United States oil production would peak between 1965 and 1971. That model, along with its variants, is now called Hubbert peak theory; these models have been used to describe and predict the peak and decline of production from regions, countries, and multinational areas. The same theory has also been applied to other limited-resource production domains, such as minerals, lumber, and fresh water. Hubbert used a semi-logistic curve model in 1956 (sometimes incorrectly compared to a normal distribution). He assumed the production rate of a limited resource would follow a roughly symmetrical distribution. Depending on the limits of exploitability and market pressures, the rise or decline of resource production over time might be sharper or more stable, appear more linear or curved. In a 2006 analysis of Hubbert theory, it was noted that uncertainty in real-world oil production amounts and confusion in definitions increase the general uncertainty of production predictions.
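Before turning to how well these models fit, it may help to see what a Hubbert-style curve looks like numerically. The Python sketch below evaluates the derivative of a logistic cumulative-production curve; the ultimate recoverable amount, steepness, and peak year are invented placeholders, not estimates from this article:

import math

def hubbert_rate(t, urr, k, t_peak):
    # Cumulative production Q(t) = urr / (1 + exp(-k * (t - t_peak)));
    # the annual production rate is its derivative dQ/dt, which peaks at t_peak.
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative parameters only: 2000 Gb ultimately recovered, peak in 2010.
URR, K, T_PEAK = 2000.0, 0.05, 2010
for year in (1970, 1990, 2010, 2030, 2050):
    print(year, round(hubbert_rate(year, URR, K, T_PEAK), 1))  # Gb per year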
By comparing the fit of various other models, it was found that Hubbert's methods yielded the closest fit overall, but that none of the models were very accurate. In 1956 Hubbert himself recommended using "a family of possible production curves" when predicting a production peak and decline curve. The term "peak oil" was popularized by Colin Campbell and Kjell Aleklett in 2002 when they helped form ASPO. In his publications, Hubbert used the terms "peak production rate" and "peak in the rate of discoveries".

Demand for oil

The demand side of peak oil over time is concerned with the total quantity of oil that the global market would choose to consume at various possible market prices, and how this entire listing of quantities at various prices would evolve over time. Total global demand for crude oil grew an average of 1.76% per year from 1994 to 2006, with a high of 3.4% growth in 2003–2004. After reaching a high in 2007, world consumption decreased in both 2008 and 2009 by a total of 1.8%, despite fuel costs plummeting in 2008. Despite this lull, world demand for oil is projected to increase 21% over 2007 levels by 2030, or about 0.8% average annual growth, due in large part to increases in demand from the transportation sector. According to the IEA's 2013 projections, growth in global oil demand will be significantly outpaced by growth in production capacity over the next 5 years. Energy demand is distributed amongst four broad sectors: transportation, residential, commercial, and industrial. In terms of oil use, transportation is the largest sector and the one that has seen the largest growth in demand in recent decades. This growth has largely come from new demand for personal-use vehicles powered by internal combustion engines. This sector also has the highest consumption rates, accounting for approximately 68.9% of the oil used in the United States in 2006, and 55% of oil use worldwide as documented in the Hirsch report. Transportation is therefore of particular interest to those seeking to mitigate the effects of peak oil. Although demand growth is highest in the developing world, the United States is the world's largest consumer of petroleum. US consumption continued to grow between 1995 and 2005, while China's consumption rose much more steeply in percentage terms over the same time frame. The Energy Information Administration (EIA) stated that gasoline usage in the United States may have peaked in 2007, in part because of increasing interest in and mandates for use of biofuels and energy efficiency. As countries develop, industry and higher living standards drive up energy use, most often of oil. Thriving economies, such as China and India, are quickly becoming large oil consumers. China has seen oil consumption grow by 8% yearly since 2002, doubling from 1996 to 2006. In 2008, auto sales in China were expected to grow by as much as 15–20%, resulting in part from economic growth rates of over 10% for five years in a row.
Although swift, continued growth in China is often predicted, others predict that China's export-dominated economy will not continue such growth trends because of wage and price inflation and reduced demand from the United States. India's oil imports are expected to more than triple from 2005 levels by 2020. Another significant factor affecting petroleum demand has been human population growth. Oil production per capita peaked in 1979. The United States Census Bureau predicts that the world population in 2030 will be almost double that of 1980. Oil production per capita declined from its 1980 level to a low in 1993, but then increased again up to 2005. In 2006, world oil production took a downturn even though population continued to increase, causing oil production per capita to drop again. One factor that has so far helped ameliorate the effect of population growth on demand is the decline of the population growth rate since the 1970s. In 1970, the population grew at 2.1%. By 2007, the growth rate had declined to 1.167%. However, oil production was, until 2005, outpacing population growth to meet demand. World population grew by 6.2% from 6.07 billion in 2000 to 6.45 billion in 2005, whereas according to BP, global oil production during that same period increased by 8.2% (or by 8.8% according to the EIA).

Supply of oil

In 1956, Hubbert confined his peak oil prediction to that crude oil "producible by methods now in use." By 1962, however, his analyses included future improvements in exploration and production. All of Hubbert's analyses of peak oil specifically excluded oil manufactured from oil shale or mined from oil sands. Conventional oil is either light or heavy. Heavy refers to oil with a thick consistency that does not flow easily. Light oil can flow naturally to the surface or is extracted from the ground using pumpjacks. Pumpjacks are also used to remove heavy oil from the ground. Conventional oil is produced on land and offshore.

Unconventional oil sources
- Oil shale is a common term for sedimentary rock such as shale or marl containing kerogen, a waxy oil precursor that has not yet been transformed into crude oil by the high pressures and temperatures caused by deep burial. Since it is close to the surface rather than buried deep in the earth, the shale or marl is typically mined, crushed, and retorted, producing synthetic oil from the kerogen. Its net energy yield is much lower than that of conventional oil, so much so that estimates of the net energy yield of shale discoveries are considered extremely unreliable.
- Oil sands are unconsolidated sandstone deposits containing large amounts of very viscous crude bitumen or extra-heavy crude oil, which can be recovered by surface mining or by in-situ oil wells using steam injection or other techniques. The bitumen can be liquefied by upgrading, blending with diluent, or heating, and then processed by a conventional oil refinery. The recovery process requires advanced technology but is more efficient than that of oil shale.
- Coal liquefaction and gas-to-liquids products are liquid hydrocarbons synthesised from the conversion of coal or natural gas.

Overall supply levels

"Our analysis suggests there are ample physical oil and liquid fuel resources for the foreseeable future. However, the rate at which new supplies can be developed and the break-even prices for those new supplies are changing."

According to the IEA's Oil Market Report dated 13 December 2011, global oil supply had risen to a record high of 90.0 mb/day by November 2011. Of this, oil supply from OPEC nations represented only 30.68 mb/day (34.1% of the total).

"All the easy oil and gas in the world has pretty much been found. Now comes the harder work in finding and producing oil from more challenging environments and work areas."

"It is pretty clear that there is not much chance of finding any significant quantity of new cheap oil. Any new or unconventional oil is going to be expensive."

Discoveries

The peak of world oilfield discoveries occurred in 1965. According to the Association for the Study of Peak Oil and Gas (ASPO), the rate of discovery has been falling steadily since. Less than 10 Gb/yr of oil was discovered each year between 2002 and 2007. According to a 2010 Reuters article, the annual rate of discovery of new fields has remained remarkably constant at 15–20 Gb/yr. A researcher for the U.S. Energy Information Administration pointed out that after the first wave of discoveries in an area, most oil and natural gas reserve growth comes not from discoveries of new fields, but from extensions and additional gas found within existing fields.

Reserves

Total possible conventional crude oil reserves include all crude oil with 90–95% certainty of being technically possible to produce (from reservoirs through a wellbore using primary, secondary, improved, enhanced, or tertiary methods), all crude with a 50% probability of being produced in the future, and discovered reserves that have a 5–10% possibility of being produced in the future. These are referred to as 1P/Proven (90–95%), 2P/Probable (50%), and 3P/Possible (5–10%). This does not include liquids extracted from mined solids or gases (oil sands, oil shales, gas-to-liquid processes, or coal-to-liquid processes). Many current 2P calculations predict reserves to be between 1150 and 1350 Gb, but some authors have written that because of misinformation, withheld information, and misleading reserve calculations, 2P reserves are likely nearer to 850–900 Gb. The Energy Watch Group wrote that actual reserves peaked in 1980, when production first surpassed new discoveries, that apparent increases in reserves since then are illusory, and concluded (in 2007): "Probably the world oil production has peaked already, but we cannot be sure yet." In 2005, the New York Times reported that technology was capable of extracting about 40% of the oil from most wells. They quoted the Saudi Oil Minister Ali al-Naimi and oil industry consultant Daniel Yergin as speculating that future technology would make further extraction possible. In many major producing countries, the majority of reserves claims have not been subject to outside audit or examination. Most of the easy-to-extract oil has been found.
Recent price increases have led to oil exploration in areas where extraction is much more expensive, such as extremely deep wells, wells with extreme downhole temperatures, and environmentally sensitive areas, or areas where advanced technology is required to extract the oil. A lower rate of discovery per exploration well, together with a shortage of drilling rigs, increases in steel prices, and the greater complexity of new projects, has raised overall costs.

Concerns over stated reserves

"[World] reserves are confused and in fact inflated. Many of the so-called reserves are in fact resources. They're not delineated, they're not accessible, they're not available for production."

Al-Husseini estimated that 300 billion barrels of the world's 1,200 billion barrels of proven reserves should be recategorized as speculative resources. One difficulty in forecasting the date of peak oil is the opacity surrounding the oil reserves classified as "proven". Many worrying signs concerning the depletion of proven reserves have emerged in recent years, best exemplified by the 2004 scandal surrounding the "evaporation" of 20% of Shell's reserves.

For the most part, proven reserves are stated by the oil companies, the producer states, and the consumer states. All three have reasons to overstate their proven reserves: oil companies may look to increase their potential worth; producer countries gain a stronger international stature; and governments of consumer countries may seek a means to foster sentiments of security and stability within their economies and among consumers.

Major discrepancies arise from accuracy issues with OPEC's self-reported numbers. Besides the possibility that these nations have overstated their reserves for political reasons (during periods of no substantial discoveries), over 70 nations also follow the practice of not reducing their reserves to account for yearly production. Analysts have suggested that OPEC member nations have economic incentives to exaggerate their reserves, as the OPEC quota system allows greater output for countries with greater reserves.

Kuwait, for example, was reported in the January 2006 issue of Petroleum Intelligence Weekly to have only 48 billion barrels in reserve, of which only 24 were fully proven. This report was based on the leak of a confidential document from Kuwait and has not been formally denied by the Kuwaiti authorities. The leaked document dates from 2001, so the figure includes oil that has been produced since then, roughly 5–6 billion barrels, but excludes revisions or discoveries made since. Additionally, the reported 1.5 billion barrels of oil burned off by Iraqi soldiers in the First Persian Gulf War are conspicuously missing from Kuwait's figures.

On the other hand, investigative journalist Greg Palast argues that oil companies have an interest in making oil look rarer than it is, to justify higher prices. This view is contested by ecological journalist Richard Heinberg. Other analysts argue that oil-producing countries understate the extent of their reserves to drive up the price. In November 2009, a senior official at the IEA alleged that the United States had encouraged the agency to manipulate depletion rates and future reserve data in order to maintain lower oil prices.
In 2005, the IEA predicted that 2030 production rates would reach 120 million barrels per day, but this number was gradually reduced to 105 million. The IEA official alleged that industry insiders agree that even 90 to 95 million barrels per day might be impossible to achieve. Although many outsiders had questioned the IEA numbers in the past, this was the first time an insider had raised the same concerns. A 2008 analysis of IEA predictions questioned several underlying assumptions and claimed that a 2030 production level of 75 million barrels per day (comprising 55 million barrels per day of crude oil and 20 million barrels per day of both non-conventional oil and natural gas liquids) was more realistic than the IEA numbers.

The estimated ultimate recovery (EUR) reported by the 2000 USGS survey has been criticized for assuming a discovery trend over the next twenty years that would reverse the observed trend of the preceding 40 years. The survey's 95%-confidence EUR assumed that discovery levels would stay steady, despite the fact that discovery levels had been falling steadily since the 1960s; that trend of falling discoveries has continued in the ten years since the USGS made its assumption. The 2000 USGS survey has also been criticized for other methodological errors, as well as for assuming 2030 production rates inconsistent with projected reserves.

As conventional oil becomes less available, it can be replaced with production of liquids from oil sands, ultra-heavy oils, gas-to-liquids technologies, coal-to-liquids technologies, biofuel technologies, and shale oil. In the 2007 and subsequent International Energy Outlook editions, the word "Oil" was replaced with "Liquids" in the chart of world energy consumption, and in 2009 biofuels were included in "Liquids" instead of in "Renewables". Unconventional sources, such as heavy crude oil, oil sands, and oil shale, are not counted as part of oil reserves. However, with rule changes by the SEC, oil companies can now book them as proven reserves after opening a strip mine or thermal facility for extraction. These unconventional sources are more labor- and resource-intensive to produce, requiring extra energy to refine, and result in higher production costs and up to three times more greenhouse gas emissions per barrel (or barrel equivalent) on a "well to tank" basis, or 10 to 45% more on a "well to wheels" basis, which includes the carbon emitted from combustion of the final product.

While the energy used, resources needed, and environmental effects of extracting unconventional sources have traditionally been prohibitively high, the three major unconventional oil sources being considered for large-scale production are the extra-heavy oil in the Orinoco Belt of Venezuela, the Athabasca Oil Sands in the Western Canadian Sedimentary Basin, and the oil shales of the Green River Formation in Colorado, Utah, and Wyoming in the United States. Energy companies such as Syncrude and Suncor have been extracting bitumen for decades, but production has increased greatly in recent years with the development of steam-assisted gravity drainage and other extraction technologies. Chuck Masters of the USGS estimates that, "Taken together, these resource occurrences, in the Western Hemisphere, are approximately equal to the Identified Reserves of conventional crude oil accredited to the Middle East."
Authorities familiar with the resources believe that the world's ultimate reserves of unconventional oil are several times as large as those of conventional oil, and that they will be highly profitable for companies as a result of higher prices in the 21st century. In October 2009, the USGS updated the recoverable "mean value" for the Orinoco tar sands (Venezuela) to 513 billion barrels, with a 90% chance of the true figure lying within the range of 380–652 billion barrels, making this area "one of the world's largest recoverable oil accumulations".

Despite the large quantities of oil available in non-conventional sources, Matthew Simmons argued in 2005 that limitations on production prevent them from becoming an effective substitute for conventional crude oil. Simmons stated that "these are high energy intensity projects that can never reach high volumes" to offset significant losses from other sources. Another study claims that even under highly optimistic assumptions, "Canada's oil sands will not prevent peak oil," although production could reach about 5 million barrels per day by 2030 in a "crash program" development effort. Moreover, oil extracted from these sources typically contains contaminants such as sulfur and heavy metals that are energy-intensive to remove and can, in some cases, leave tailings ponds containing hydrocarbon sludge. The same applies to much of the Middle East's undeveloped conventional oil reserves, much of which is heavy, viscous, and contaminated with sulfur and metals to the point of being unusable. However, recent high oil prices make these sources more financially appealing. A study by Wood Mackenzie suggests that within 15 years all the world's extra oil supply is likely to come from unconventional sources.

Currently, two companies, Sasol and Shell, have synthetic oil technology proven to work on a commercial scale. Sasol's primary business is based on coal-to-liquid (CTL) and natural gas-to-liquid (GTL) technology, producing US$4.40 billion in revenues (FY2009). Shell has used these processes to recycle waste flare gas (usually burnt off at oil wells and refineries) into usable synthetic oil.

A 2003 article in Discover magazine claimed that thermal depolymerization could be used to manufacture oil indefinitely out of garbage, sewage, and agricultural waste, at a cost of $15 per barrel. A follow-up article in 2006 stated that the cost was actually $80 per barrel, because the feedstock that had previously been considered hazardous waste now had market value.

A 2007 news bulletin published by Los Alamos National Laboratory proposed that hydrogen (possibly produced using hot fluid from nuclear reactors to split water into hydrogen and oxygen) in combination with sequestered CO2 could be used to produce methanol (CH3OH), which could then be converted into gasoline. The press release stated that for such a process to be economically feasible, gasoline prices would need to be above $4.60 "at the pump" in U.S. markets. Capital and operational costs are uncertain, mostly because the costs associated with sequestering CO2 are unknown. Another problem is that an energy source would be required for both the carbon capture and the water-splitting processes.

The point in time when peak global oil production occurs defines peak oil.
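Hubbert-style analyses model the production trajectory around that peak as the derivative of a logistic curve, which yields a roughly bell-shaped, symmetric profile. Below is a minimal Python sketch of such a curve; the ultimate recoverable resource (URR), peak year, and steepness used here are illustrative assumptions, not estimates:

    # Minimal sketch of a Hubbert (logistic) production curve.
    # URR, t_peak, and k are illustrative assumptions, not measured data.
    import math

    URR = 2000.0   # assumed ultimate recoverable resource, Gb
    t_peak = 2010  # assumed peak year
    k = 0.05       # assumed steepness of the logistic curve, 1/yr

    def production(t):
        """Annual production (Gb/yr): derivative of the logistic cumulative curve."""
        x = math.exp(-k * (t - t_peak))
        return URR * k * x / (1.0 + x) ** 2

    for year in range(1970, 2051, 10):
        print(year, round(production(year), 1))

In this model production peaks at URR x k / 4, about 25 Gb per year for the numbers assumed here; changing the assumed URR or steepness shifts both the date and the height of the peak, which is precisely why disputes over EUR figures matter so much in the timing debate.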
Adherents of "peak oil" believe that production capacity will remain the main limitation of supply, and that when production decreases, it will be the main bottleneck in the petroleum supply/demand equation. So far, predictions of an imminent peak have been incorrect, and it is not yet known whether any future decline in oil production will be supply- or demand-led.

Worldwide oil discoveries have been less than annual production since 1980. According to several sources in 2006–07, worldwide production was past or near its maximum. World population has grown faster than oil production; because of this, oil production per capita peaked in 1979 (preceded by a plateau during the period 1973–1979).

The increasing investment in harder-to-reach oil is a sign of oil companies' belief in the end of easy oil. Also, while it is widely believed that increased oil prices spur an increase in production, an increasing number of oil industry insiders have come to believe that even with higher prices, oil production is unlikely to increase significantly beyond its current level. Among the reasons cited are geological factors as well as "above ground" factors that are likely to see oil production plateau near its current level.

A 2008 Journal of Energy Security analysis of the energy return on drilling effort in the United States concluded that there was extremely limited potential to increase production of both gas and (especially) oil. By looking at the historical response of production to variation in drilling effort, the analysis showed very little increase in production attributable to increased drilling. This was because of a tight quantitative relationship of diminishing returns with increasing drilling effort: as drilling effort increased, the energy obtained per active drill rig fell according to a severely diminishing power law. The study concluded that even an enormous increase in drilling effort was unlikely to significantly increase oil and gas production in a mature petroleum region such as the United States. Since the analysis was published in 2008, however, US production of crude oil has increased 30%, and production of dry natural gas has increased 19% (2012 compared to 2008).

Worldwide production trends

According to a January 2007 International Energy Agency report, global supply (which includes biofuels, non-crude sources of petroleum, and use of strategic oil reserves, in addition to crude production) averaged about 85.2 million barrels per day in 2006, up 0.9% from 2005. Average yearly gains in global supply from 1987 to 2005 were 1.7%. In 2008, the IEA sharply increased its estimate of the conventional oil production decline rate from 3.7% a year to 6.7% a year, based largely on better accounting methods, including actual research of individual oil field production throughout the world.

Oil field decline

Of the world's 21 largest fields, at least 9 are in decline. In 2006, Saudi Aramco Senior Vice President Abdullah Saif estimated that its existing fields were declining at a rate of 5% to 12% per year. This information has been used to argue that Ghawar, which is the largest oil field in the world and responsible for approximately half of Saudi Arabia's oil production over the last 50 years, will soon start to decline. The world's second largest oil field, the Burgan Field in Kuwait, entered decline in November 2005.
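Decline rates like these compound geometrically, so seemingly modest annual percentages erode output quickly. The short Python sketch below applies annual declines of 4.5% and 6.7% (the magnitudes reported in the field studies discussed next) over 10- and 20-year horizons chosen purely for illustration:

    # Compounding field decline: what a constant annual decline rate implies.
    # The 4.5% and 6.7% rates echo the studies cited in the surrounding text;
    # the horizons are illustrative.
    for rate in (0.045, 0.067):
        for years in (10, 20):
            remaining = (1 - rate) ** years
            print(f"{rate:.1%}/yr decline -> {remaining:.0%} of initial output after {years} years")

At 6.7% per year an existing field loses roughly half its output within a decade, which is why substantial new capacity is needed merely to hold aggregate production flat.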
According to a study of the largest 811 oilfields conducted in early 2008 by Cambridge Energy Research Associates (CERA), the average rate of field decline is 4.5% per year. The IEA stated in November 2008 that an analysis of 800 oilfields showed the decline in oil production to be 6.7% a year, and that this would grow to 8.6% by 2030. There are also projects expected to begin production within the next decade that are hoped to offset these declines. The CERA report projects a 2017 production level of over 100 million barrels per day. Kjell Aleklett of the Association for the Study of Peak Oil and Gas agrees with CERA's decline rates, but considers its assumed rate of new fields coming online (100% of all projects in development, with 30% of them experiencing delays, plus a mix of new small fields and field expansions) overly optimistic. A more rapid annual rate of decline, 5.1% in 800 of the world's largest oil fields, was reported by the International Energy Agency in its World Energy Outlook 2008.

Mexico announced that production from its giant Cantarell Field began to decline in March 2006. In 2000, PEMEX had built the largest nitrogen plant in the world in an attempt to maintain production through nitrogen injection into the formation, but by 2006 Cantarell was declining at a rate of 13% per year.

OPEC had vowed in 2000 to maintain a production level sufficient to keep oil prices between $22 and $28 per barrel, but this did not prove possible. In its 2007 annual report, OPEC projected that it could maintain a production level that would stabilize the price of oil at around $50–60 per barrel until 2030. On 18 November 2007, with oil above $98 a barrel, King Abdullah of Saudi Arabia, a long-time advocate of stabilized oil prices, announced that his country would not increase production to lower prices. Saudi Arabia's inability, as the world's largest supplier, to stabilize prices through increased production during that period suggests that no nation or organization had the spare production capacity to lower oil prices. The implication is that those major suppliers who had not yet peaked were operating at or near full capacity.

Commentators have pointed to the Jack 2 deep-water test well in the Gulf of Mexico, announced on 5 September 2006, as evidence that there is no imminent peak in global oil production. According to one estimate, the field could account for up to 11% of U.S. production within seven years. However, even though oil discoveries are expected after the peak of production is reached, the new reserves of oil will be harder to find and extract. The Jack 2 well, for instance, lies more than 20,000 feet under the sea floor in 7,000 feet of water, requiring about 8.5 miles of pipe to reach. Additionally, even the maximum estimate of 15 billion barrels represents slightly less than two years of U.S. consumption at present levels. Production began in December 2014 and is expected to ramp up to 94,000 b/d of crude and 21 MMcfd of gas by 2020.

Control over supply

Entities such as governments or cartels can reduce supply to the world market by limiting access to the supply through nationalizing oil, cutting back on production, limiting drilling rights, imposing taxes, and so on. International sanctions, corruption, and military conflicts can also reduce supply.
Nationalization of oil supplies

Another factor affecting global oil supply is the nationalization of oil reserves by producing nations. Nationalization occurs as countries begin to deprivatize oil production and withhold exports. Kate Dourian, Platts' Middle East editor, points out that while estimates of oil reserves may vary, politics have now entered the equation of oil supply. "Some countries are becoming off limits. Major oil companies operating in Venezuela find themselves in a difficult position because of the growing nationalization of that resource. These countries are now reluctant to share their reserves."

According to the consulting firm PFC Energy, only 7% of the world's estimated oil and gas reserves are in countries that allow companies like ExxonMobil free rein. Fully 65% are in the hands of state-owned companies such as Saudi Aramco, with the rest in countries such as Russia and Venezuela, where access by Western European and North American companies is difficult. The PFC study implies that political factors are limiting capacity increases in Mexico, Venezuela, Iran, Iraq, Kuwait, and Russia. Saudi Arabia is also limiting capacity expansion, but because of a self-imposed cap, unlike the other countries. As a result of not having access to countries amenable to oil exploration, ExxonMobil is not making nearly the investment in finding new oil that it did in 1981.

Cartel influence on supply

OPEC is an alliance of 12 diverse oil-producing countries (Algeria, Angola, Ecuador, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, the United Arab Emirates, and Venezuela) formed to control the supply of oil. OPEC's power was consolidated as various countries nationalized their oil holdings, wrested decision-making away from the "Seven Sisters" (Anglo-Iranian, Socony-Vacuum, Royal Dutch Shell, Gulf, Esso, Texaco, and Socal), and created their own national oil companies to control the oil. OPEC tries to influence prices by restricting production, allocating each member country a production quota. All 12 members agree to keep prices high by producing at lower levels than they otherwise would. There is no way to verify adherence to the quota, so every member faces the same incentive to "cheat" the cartel.

The United States policy of selling arms to and providing security for Saudi Arabia is often seen as an attempt to influence the Saudis to increase oil production. According to sociology professor Michael Schwartz, the purpose of the second Iraq war was to break the back of OPEC and return control of the oil fields to Western oil companies. Alternatively, commodities trader Raymond Learsy, author of Over a Barrel: Breaking the Middle East Oil Cartel, contends that OPEC has trained consumers to believe that oil is a much more finite resource than it is. To back his argument, he points to past false alarms and apparent collaboration. He also believes that peak oil analysts are conspiring with OPEC and the oil companies to create a "fabricated drama of peak oil" to drive up oil prices and profits; oil had risen to a little over $30 a barrel at that time. A counter-argument was given in the Huffington Post after he and Steve Andrews, co-founder of ASPO, debated on CNBC in June 2007.

Timing of peak oil

There is a general consensus among industry leaders and analysts that world oil production will peak between 2010 and 2030, with a significant chance that the peak will occur before 2020. Dates after 2030 are considered implausible.
Determining a more specific range is difficult because of the lack of certainty over the actual size of world oil reserves. Unconventional oil is not currently predicted to meet the expected shortfall even in a best-case scenario. For unconventional oil to fill the gap without "potentially serious impacts on the global economy", oil production would have to remain stable after its peak until 2035 at the earliest. On the other hand, the US Energy Information Administration projected in 2014 that world production of "total liquids", which, in addition to liquid petroleum, includes biofuels, natural gas liquids, and oil sands, would increase at an average rate of about one percent per year through 2040 without peaking, with OPEC countries expected to increase oil production at a faster rate than non-OPEC countries.

Given the large range offered by meta-studies, papers published since 2010 have been relatively pessimistic. A 2010 Kuwait University study predicted production would peak in 2014. A 2010 Oxford University study predicted that production would peak before 2015. A 2014 validation of a significant 2004 study in the journal Energy proposed that conventional oil production likely peaked, according to various definitions, between 2005 and 2011; models that show a continued increase in oil production may be including both conventional and non-conventional oil. A set of models published in a 2014 Ph.D. thesis predicted that a 2012 peak would be followed by a drop in oil prices, which in some scenarios could turn into a rapid rise in prices thereafter. Major oil companies hit peak production in 2005. Fatih Birol, chief economist at the International Energy Agency, has likewise stated that "crude oil production for the world has already peaked in 2006."

In 1962, Hubbert predicted that world oil production would peak at a rate of 12.5 billion barrels per year around the year 2000. In 1974, he predicted that peak oil would occur in 1995 "if current trends continue." As of 2012, OPEC continued to claim that world crude oil production and remaining proven reserves were at record highs. According to Matthew Simmons, former chairman of Simmons & Company International and author of Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy, "peaking is one of these fuzzy events that you only know clearly when you see it through a rear view mirror, and by then an alternate resolution is generally too late."

Possible consequences of peak oil

The wide use of fossil fuels has been one of the most important stimuli of economic growth and prosperity since the industrial revolution, allowing humans to participate in "takedown", the consumption of energy at a greater rate than it is being replaced. Some believe that when oil production decreases, human culture and modern technological society will be forced to change drastically. The impact of peak oil will depend heavily on the rate of decline and on the development and adoption of effective alternatives. If alternatives are not forthcoming, the products produced with oil (including fertilizers, detergents, solvents, adhesives, and most plastics) would become scarce and expensive.

In 2005, the United States Department of Energy published a report titled Peaking of World Oil Production: Impacts, Mitigation, & Risk Management. Known as the Hirsch report, it stated: "The peaking of world oil production presents the U.S. and the world with an unprecedented risk management problem.
As peaking is approached, liquid fuel prices and price volatility will increase dramatically, and, without timely mitigation, the economic, social, and political costs will be unprecedented. Viable mitigation options exist on both the supply and demand sides, but to have substantial impact, they must be initiated more than a decade in advance of peaking." Some of the information was updated in 2007.

High oil prices

Historical oil prices

The price of oil was historically comparatively low until the 1973 oil crisis and the 1979 energy crisis, during which it increased more than tenfold over that six-year timeframe. Even though the price dropped significantly in the following years, it has never returned to its previous levels. Oil prices began to increase again during the 2000s until they hit the historical height of $143 per barrel (2007 inflation-adjusted dollars) on 30 June 2008. As these prices were well above those that caused the 1973 and 1979 energy crises, they contributed to fears of an economic recession similar to that of the early 1980s. These fears were not without basis: high oil prices began to affect economies, as indicated, for example, by a 0.5% drop in United States gasoline consumption in the first two months of 2008, compared with a drop of 0.4% over all of 2007.

It is generally agreed that the main reason for the price spike of 2005–2008 was strong demand pressure. Global consumption of oil rose from 30 billion barrels in 2004 to 31 billion in 2005. Consumption was far ahead of new discoveries in the period, which had fallen to only eight billion barrels of new oil reserves in new accumulations in 2004. In June 2005, OPEC stated that it would "struggle" to pump enough oil to meet pricing pressures for the fourth quarter of that year. From 2007 to 2008, the decline of the U.S. dollar against other significant currencies was also considered a significant reason for the oil price increases, as the dollar lost approximately 14% of its value against the euro from May 2007 to May 2008.

Besides supply and demand pressures, security-related factors may at times have contributed to price increases, including the War on Terror, missile launches in North Korea, the crisis between Israel and Lebanon, nuclear brinkmanship between the U.S. and Iran, and reports from the U.S. Department of Energy and others showing a decline in petroleum reserves. Throughout 2013 and most of 2014, crude oil prices showed relative stability, staying between $100 and $110 per barrel, before dropping sharply in late 2014 to below $70.

Effects of rising oil prices

In the past, sharp increases in the price of oil have led to economic recessions, such as the 1973 and 1979 energy crises. The effect the price of oil has on an economy is known as a price shock. In many European countries, which have high taxes on fuels, such price shocks could potentially be mitigated somewhat by temporarily or permanently suspending those taxes as fuel costs rise. This method of softening price shocks is less useful in countries with much lower fuel taxes, such as the United States. A baseline scenario in a recent IMF paper found that oil production growing at 0.8% annually (as opposed to a historical average of 1.8%) would result in a small reduction in economic growth of 0.2 to 0.4%. Some economists predict that a substitution effect will spur demand for alternative energy sources, such as coal or liquefied natural gas.
This substitution can only be temporary, as coal and natural gas are finite resources as well. Prior to the run-up in fuel prices, many motorists in the United States, Canada, and other countries opted for larger, less fuel-efficient sport utility vehicles and full-sized pickups. This trend has been reversing because of sustained high fuel prices: the September 2005 sales data for all vehicle vendors indicated that SUV sales dropped while small car sales increased, and hybrid and diesel vehicles have also been gaining in popularity.

In November 2005, the EIA published Household Vehicles Energy Use: Latest Data and Trends, which set steadily increasing disposable income against the $20–30 per barrel price of oil in 2004. The report notes: "The average household spent $1,520 on fuel purchases for transport." According to CNBC, that expense had climbed to $4,155 by 2011. In 2008, a report by Cambridge Energy Research Associates stated that 2007 had been the year of peak gasoline usage in the United States, and that record energy prices would cause an "enduring shift" in energy consumption practices. According to the report, gasoline consumption in April had been lower than a year before for the sixth straight month, suggesting 2008 would be the first year in 17 that US gasoline usage declined. The total miles driven in the U.S. peaked in 2006.

The Export Land Model states that after peak oil, petroleum-exporting countries will be forced to reduce their exports more quickly than their production declines, because of internal demand growth. Countries that rely on imported petroleum will therefore be affected earlier and more dramatically than exporting countries. Mexico is already in this situation. Internal consumption grew by 5.9% in 2006 in the five biggest exporting countries, and their exports declined by over 3%. It was estimated that by 2010 internal demand would decrease worldwide exports by 2.5 million barrels per day.

Canadian economist Jeff Rubin has stated that high oil prices are likely to result in increased consumption in developed countries through partial manufacturing de-globalisation of trade. Manufacturing production would move closer to the end consumer to minimise transportation network costs, and a decoupling of demand from gross domestic product would therefore occur. Higher oil prices would lead to increased freight costs, and consequently manufacturing would move back to the developed countries, since freight costs would outweigh the current economic wage advantage of developing countries. Chinese export data released on 10 March 2012 confirmed a deep slowdown in exports, as China entered an unexpectedly large trade deficit.

Agricultural effects and population limits

Since supplies of oil and gas are essential to modern agricultural techniques, a fall in global oil supplies could cause spiking food prices and unprecedented famine in the coming decades.[note 1] Geologist Dale Allen Pfeiffer contends that current population levels are unsustainable, and that to achieve a sustainable economy and avert disaster the United States population would have to be reduced by at least one-third, and world population by two-thirds. The largest consumer of fossil fuels in modern agriculture is ammonia production (for fertilizer) via the Haber process, which is essential to high-yielding intensive agriculture. The specific fossil-fuel input to fertilizer production is primarily natural gas, which provides hydrogen via steam reforming.
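For reference, the two reactions behind this dependence can be written out explicitly; the stoichiometry below is standard chemistry, shown here only to make the natural-gas input visible:

\begin{align}
\mathrm{CH_4} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{CO_2} + 4\,\mathrm{H_2} && \text{(steam reforming plus water-gas shift, overall)}\\
\mathrm{N_2} + 3\,\mathrm{H_2} &\longrightarrow 2\,\mathrm{NH_3} && \text{(Haber process)}
\end{align}

Any route that supplies the hydrogen without the first reaction, such as the electrolysis discussed next, removes natural gas from the fertilizer chain.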
Given sufficient supplies of renewable electricity, hydrogen can be generated without fossil fuels using methods such as electrolysis. For example, the Vemork hydroelectric plant in Norway used its surplus electricity output to generate renewable ammonia from 1911 to 1971. Iceland currently generates ammonia using the electrical output of its hydroelectric and geothermal power plants, because Iceland has those resources in abundance while having no domestic hydrocarbon resources and facing a high cost for importing natural gas.

Long-term effects on lifestyle

A majority of Americans live in suburbs, a type of low-density settlement designed around universal personal automobile use. Commentators such as James Howard Kunstler argue that because over 90% of transportation in the U.S. relies on oil, the suburbs' reliance on the automobile is an unsustainable living arrangement. Peak oil would leave many Americans unable to afford petroleum-based fuel for their cars and force them to use bicycles or electric vehicles. Additional options include telecommuting, moving to rural areas, or moving to higher-density areas, where walking and public transportation are more viable options; in the latter two cases, suburbia may become the "slums of the future." The issue of petroleum supply and demand is also a concern for growing cities in developing countries (where urban areas are expected to absorb most of the world's projected 2.3 billion population increase by 2050), and stressing the energy component of future development plans is seen as an important goal.

Rising oil prices also affect the cost of food, heating, and electricity. With prices rising for these necessities, considerable stress falls on middle- and low-income families as economies contract from the decline in discretionary funds and employment rates fall. The Hirsch/US DoE report concludes that "without timely mitigation, world supply/demand balance will be achieved through massive demand destruction (shortages), accompanied by huge oil price increases, both of which would create a long period of significant economic hardship worldwide".

Methods that have been suggested for mitigating these urban and suburban issues include the use of non-petroleum vehicles such as electric cars and battery electric vehicles, transit-oriented development, carfree cities, bicycles, new trains, new pedestrianism, smart growth, shared space, urban consolidation, urban villages, and New Urbanism.

An extensive 2009 report on the effects of compact development by the United States National Research Council of the Academy of Sciences, commissioned by the United States Congress, stated six main findings. First, that compact development is likely to reduce "vehicle miles traveled" (VMT) throughout the country. Second, that doubling residential density in a given area could reduce VMT by as much as 25% if coupled with measures such as increased employment density and improved public transportation. Third, that higher-density, mixed-use developments would produce both direct reductions in CO2 emissions (from less driving) and indirect reductions (for example, from lower amounts of materials used per housing unit, higher-efficiency climate control, longer vehicle lifespans, and higher-efficiency delivery of goods and services). Fourth, that although short-term reductions in energy use and CO2 emissions would be modest, these reductions would become more significant over time.
Fifth, that a major obstacle to more compact development in the United States is political resistance from local zoning regulators, which would hamper efforts by state and regional governments to participate in land-use planning. Sixth, the committee agreed that changes in development that would alter driving patterns and building efficiency would have various secondary costs and benefits that are difficult to quantify. The report recommends that policies supporting compact development (and especially its ability to reduce driving, energy use, and CO2 emissions) should be encouraged.

An economic theory that has been proposed as a remedy is the introduction of a steady-state economy. Such a system could include a tax shift from income to depleting natural resources (and pollution), as well as limits on advertising that stimulates demand and population growth. It could also include the institution of policies that move away from globalization and toward localization, to conserve energy resources, provide local jobs, and maintain local decision-making authority. Zoning policies could be adjusted to promote resource conservation and eliminate sprawl.

To avoid the serious social and economic implications that a global decline in oil production could entail, the 2005 Hirsch report emphasized the need to find alternatives at least ten to twenty years before the peak, and to phase out the use of petroleum over that time. This was similar to a plan proposed for Sweden that same year. Such mitigation could include energy conservation, fuel substitution, and the use of unconventional oil. Because mitigation can reduce the use of traditional petroleum sources, it can also affect the timing of peak oil and the shape of the Hubbert curve: the less oil used, the longer it will last.

Iceland was the first country to propose transitioning to 100% renewable energy, using hydrogen for its vehicles and fishing fleet, in 1998. By 2009, the concept of running entirely on wind, water, and solar power had been proposed, with a little biofuel for the segments of transportation that are difficult to electrify, such as large ships and airplanes.

Positive aspects of peak oil

Permaculture sees peak oil as holding tremendous potential for positive change, assuming countries act with foresight. The rebuilding of local food networks and local energy production, and the general implementation of "energy descent culture", are argued to be ethical responses to the acknowledgment of finite fossil resources. Majorca is an island currently diversifying its energy supply away from fossil fuels toward alternative sources and looking back at traditional construction and permaculture methods. The Transition Towns movement, started in Totnes, Devon and spread internationally by The Transition Handbook (Rob Hopkins) and the Transition Network, sees the restructuring of society for more local resilience and ecological stewardship as a natural response to the combination of peak oil and climate change.

Opponents of the theory of peak oil often cite new oil reserves that have been found, which continue to forestall a peak oil event. In particular, some contend that oil production from these new reserves, as well as from existing fields, will continue to increase at a rate that outpaces demand, until alternative energy sources for our current fossil-fuel dependence are found. A further criticism of peak oil rests on confidence in the various options and technologies for substituting oil.
Indeed, there are some promising approaches that seem to have the potential to reduce or even counterbalance the effects of a peak oil situation. For example, US federal funding for algae fuels has increased since the year 2000 because of rising fuel prices; numerous other projects are being funded in Australia, New Zealand, Europe, the Middle East, and other parts of the world, and private companies are entering the field. In April 2014, researchers at the US Naval Research Laboratory (NRL) announced that they had successfully tested a process to convert seawater into jet fuel: the process extracts CO2, both dissolved and bound, from the water as a source of carbon, extracts hydrogen through electrolysis, and then converts the CO2 and hydrogen into long-chain hydrocarbons. Other well-known alternative fuels include bioalcohols (methanol, ethanol, butanol), chemically stored electricity (batteries and fuel cells), hydrogen, non-fossil methane, non-fossil natural gas, vegetable oil, propane, and other biomass sources.

Oil industry representatives

Oil industry representatives have criticised peak oil theory, at least as it has been presented by Matthew Simmons. John Hofmeister, president of Royal Dutch Shell's U.S. operations, while agreeing that conventional oil production would soon start to decline, criticized Simmons's analysis for being "overly focused on a single country: Saudi Arabia, the world's largest exporter and OPEC swing producer." He also pointed to the large reserves of the US outer continental shelf, which held an estimated 100 billion barrels of oil and natural gas. As of 2008, however, only 15% of those reserves were exploitable, a good part of that off the coasts of Louisiana, Alabama, Mississippi, and Texas.

Hofmeister also contended that Simmons erred in excluding unconventional sources of oil such as the oil sands of Canada, where Shell was active. The Canadian oil sands, a natural combination of sand, water, and oil found largely in Alberta and Saskatchewan, are believed to contain one trillion barrels of oil. Another trillion barrels are said to be trapped in rocks in Colorado, Utah, and Wyoming in the form of oil shale; these particular reserves present major environmental, social, and economic obstacles to recovery. Hofmeister further claimed that if oil companies were allowed to drill more in the United States, oil and gas prices would not be as high as they were in the later part of the 2000 to 2010 decade. He thought in 2008 that high energy prices would cause social unrest similar to the 1992 Rodney King riots.

Christof Rühl, chief economist of BP, rejected the peak oil hypothesis outright:

Physical peak oil, which I have no reason to accept as a valid statement either on theoretical, scientific or ideological grounds, would be insensitive to prices. (...) In fact the whole hypothesis of peak oil – which is that there is a certain amount of oil in the ground, consumed at a certain rate, and then it's finished – does not react to anything.... Therefore there will never be a moment when the world runs out of oil because there will always be a price at which the last drop of oil can clear the market. And you can turn anything into oil if you are willing to pay the financial and environmental price... (Global warming) is likely to be more of a natural limit than all these peak oil theories combined. (...) Peak oil has been predicted for 150 years. It has never happened, and it will stay this way.
According to Rühl, the main limitations on oil availability are "above ground" and are to be found in the availability of staff, expertise, technology, investment security, money, and, not least, global warming; the oil question is about price, not basic availability. Rühl's views are shared by Daniel Yergin of CERA, who added that the recent high-price phase might contribute to a future demise of the oil industry, not through complete exhaustion of resources or an apocalyptic shock, but through the timely and smooth establishment of alternatives.

Economist Robert L. Bradley, Jr. wrote in a 2007 article in The Review of Austrian Economics that "[a]n Austrian institutional theory is more robust for explaining changes in mineral-resource scarcity than neoclassical depletionism". Drawing on the writings of Erich Zimmermann and Julian Simon, Bradley also argued in 2012 that resources have subjective rather than objective existences in economics, concluding that "what resources come from the ground ultimately depend on the resources in the mind."

Attorney and mechanical engineer Peter W. Huber asserted in 2006 that the world is merely running out of "cheap oil": as oil prices rise, unconventional sources become economically viable. He predicted that "[t]he tar sands of Alberta alone contain enough hydrocarbon to fuel the entire planet for over 100 years." Industry blogger Steve Maley has echoed some of the points of Yergin, Rühl, Mather, and Hofmeister.

Environmental journalist George Monbiot responded to a 2012 report by Leonardo Maugeri by proclaiming that there is more than enough oil (from unconventional sources) for capitalism to "deep-fry" the world with climate change. Stephen Sorrell, senior lecturer in Science and Technology Policy Research at the Sussex Energy Group and lead author of the UKERC Global Oil Depletion report, and Christophe McGlade, doctoral researcher at the UCL Energy Institute, have criticized Maugeri's assumptions about decline rates.

Notes
- A list of over 20 published articles and books from government and journal sources supporting this thesis has been compiled at Dieoff.org in the section "Food, Land, Water, and Population."
"Implementing ecological integrity: restoring regional and global environmental and human health". Springer. p.411. ISBN 0-7923-6351-5 - Bradley, David (6 February 2004). "A Great Potential: The Great Lakes as a Regional Renewable Energy Source" (PDF). Buffalo's Green Gold Development Corporation. Archived from the original (PDF) on 25 March 2009. Retrieved 4 October 2008. - Hirsch, Tim (24 December 2001). "Iceland launches energy revolution". BBC News. Retrieved 23 March 2008. - Kunstler, James Howard (1994). Geography of Nowhere: The Rise And Decline of America's Man-Made Landscape. New York: Simon & Schuster. ISBN 0-671-88825-0 - James Howard Kunstler (February 2004). The tragedy of suburbia. Monterey, CA: TED: Ideas worth sharing. - Vittorio E. Pareto, Marcos P. Pareto. "The Urban Component of the Energy Crisis" (PDF). Retrieved 13 August 2008. - Peak Oil UK – PowerSwitch Energy Awareness – Must read: The Hirsch/DoE report – full text - "Congress for the New Urbanism Transportation Summit to be Held in Portland 4–6 November". Retrieved 27 October 2009.[dead link] - Committee for the Study on the Relationships Among Development Patterns, Vehicle Miles Traveled, and Energy Consumption (2009). Driving the Built Environment: The Effects of Compact Development on Motorized Travel, Energy Use, and CO2 Emissions – Special Report 298. National Academies Press. ISBN 0-309-14422-1. - Center for the Advancement of the Steady State Economy - How to talk about the end of growth: Interview with Richard Heinberg - Implementation of Green Bookkeeping at Reykjavik Energy - "Future Scenarios – Introduction". Retrieved 13 February 2009. - "Islands of the Future" (video) (in English and Spanish). Vimeo. Retrieved 14 February 2014. - Totnes | Transition Network - "Rob Hopkins' Transition Handbook". Retrieved 7 March 2011. - Business Insider - Death of peak oil - March 2013 -http://www.businessinsider.com/death-of-peak-oil-2013-3 - Forbes - No peak oil is really dead 17/07/2013 http://www.forbes.com/sites/modeledbehavior/2013/07/17/no-peak-oil-really-is-dead/ - "National Algal Biofuels Technology Roadmap" (PDF). US Department of Energy, Office of Energy Efficiency and Renewable Energy, Biomass Program. Retrieved 3 April 2014. - Pienkos, P. T.; Darzins, A. (2009). "The promise and challenges of microalgal-derived biofuels". Biofuels, Bioproducts and Biorefining 3 (4): 431. doi:10.1002/bbb.159 - Darzins, A., 2008. Recent and current research & roadmapping activities: overview. National Algal Biofuels Technology Roadmap Workshop, University of Maryland. - NRL news release http://www.nrl.navy.mil/media/news-releases/2014/scale-model-wwii-craft-takes-flight-with-fuel-from-the-sea-concept - Kenneth Stier (20 March 2008). "The 'Peak Oil' Theory: Will Oil Reserves Run Dry?". CNBC. Retrieved 26 April 2011. - Amy Gillentine (9 June 2006). "Oil shale exploration near Rangely: Bonanza or bust?". The Colorado Springs Business Journal.[dead link] - John Laumer (26 December 2007). "A Return To Colorado Oil Shale?". TreeHugger. - Charlie Rose. "A conversation with John Hofmeister". PBS. - "BP: Preisschwankungen werden wahrscheinlich zunehmenen, Interview (in English) mit Dr. Christoph Rühl, Mittwoch 1". Euractiv. October 2008. Retrieved 11 July 2009. - Financial Times Germany, 29 May 2008 Daniel Yergin: Öl am Wendepunkt (Oil at the turning point) - "Myth: The World Is Running Out of Oil". ABC News. 12 May 2006. Retrieved 26 April 2011. 
- Resourceship: An Austrian theory of mineral resources - Resourceship: Expanding "Depletable" Resources - Steve Maley (18 September 2011). "Hubbert's Peak or Yergin's Plateau?". Retrieved 19 September 2011. - Maugeri, Leonardo. "Oil: The Next Revolution" Discussion Paper 2012-10, Belfer Center for Science and International Affairs, Harvard Kennedy School, June 2012. Retrieved 13 July 2012. - Monbiot, George. "We were wrong on peak oil. There's enough to fry us all" The Guardian, 2 July 2012. Retrieved 13 July 2012. - Mearns, Euan. "A Critical Appraisal of Leonardo Maugeri's Decline Rate Assumptions" The Oil Drum, 10 July 2012. - Aleklett,Kjel (2012). Peeking at Peak Oil. Springer Science. ISBN 978-1461434238. - Campbell, Colin J (2004). The Essence of Oil & Gas Depletion. Multi-Science Publishing. ISBN 0-906522-19-6. - Campbell, Colin J (1997). The Coming Oil Crisis. Multi-Science Publishing. ISBN 0-906522-11-0. - Campbell, Colin J (2005). Oil Crisis. Multi-Science Publishing. ISBN 0-906522-39-0. - Deffeyes, Kenneth S (2002). Hubbert's Peak: The Impending World Oil Shortage. Princeton University Press. ISBN 0-691-09086-6. - Deffeyes, Kenneth S (2005). Beyond Oil: The View from Hubbert's Peak. Hill and Wang. ISBN 0-8090-2956-1. - Goodstein David (2005). Out of Gas: The End of the Age of Oil. WW Norton. ISBN 0-393-05857-3. - Greer, J. M. (2013). Not the Future We Ordered: The Psychology of Peak Oil and the Myth of Eternal Progress. Karnac Books. ISBN 978-1-78049-088-5. - Herold, D. M. (2012). Peak Oil. Hurstelung und Verlag. ISBN 978-3-8448-0097-5. - Heinberg Richard (2003). The Party's Over: Oil, War, and the Fate of Industrial Societies. New Society Publishers. ISBN 0-86571-482-7. - Heinberg, Richard (2004). Power Down: Options and Actions for a Post-Carbon World. New Society Publishers. ISBN 0-86571-510-6. - Heinberg, Richard (2006). The Oil Depletion Protocol: A Plan to Avert Oil Wars, Terrorism and Economic Collapse. New Society Publishers. ISBN 0-86571-563-7. - Heinberg, Richard and Lerch, Daniel (2010). The Post Carbon Reader: Managing the 21st Century's Sustainability Crises. Watershed Media. ISBN 978-0-9709500-6-2. - Herberg, Mikkal (2014). Energy Security and the Asia-Pacific: Course Reader. United States: The National Bureau of Asian Research. - Huber Peter (2005). The Bottomless Well. Basic Books. ISBN 0-465-03116-1. - Kunstler James H (2005). The Long Emergency: Surviving the End of the Oil Age, Climate Change, and Other Converging Catastrophes. Atlantic Monthly Press. ISBN 0-87113-888-3. - Leggett Jeremy K (2005). The Empty Tank: Oil, Gas, Hot Air, and the Coming Financial Catastrophe. Random House. ISBN 1-4000-6527-5. - Leggett, Jeremy K (2005). Half Gone: Oil, Gas, Hot Air and the Global Energy Crisis. Portobello Books. ISBN 1-84627-004-9. - Leggett Jeremy K (2001). The Carbon War: Global Warming and the End of the Oil Era. Routledge. ISBN 0-415-93102-9. - Lovins Amory et al. (2005). Winning the Oil Endgame: Innovation for Profit, Jobs and Security. Rocky Mountain Institute. ISBN 1-881071-10-3. - Pfeiffer Dale Allen (2004). The End of the Oil Age. Lulu Press. ISBN 1-4116-0629-9. - Newman Sheila (2008). The Final Energy Crisis (2nd ed.). Pluto Press. ISBN 978-0-7453-2717-4. OCLC 228370383. - Roberts Paul (2004). The End of Oil. On the Edge of a Perilous New World. Boston: Houghton Mifflin. ISBN 978-0-618-23977-1. - Ruppert Michael C (2005). Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil. New Society. ISBN 978-0-86571-540-0. 
- Simmons Matthew R (2005). Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy. Hoboken, N.J.: Wiley & Sons. ISBN 0-471-73876-X. - Simon Julian L (1998). The Ultimate Resource. Princeton University Press. ISBN 0-691-00381-5. - Stansberry Mark A, Reimbold Jason (2008). The Braking Point. Hawk Publishing. ISBN 978-1-930709-67-6. - Tertzakian Peter (2006). A Thousand Barrels a Second. McGraw-Hill. ISBN 0-07-146874-9. - Vassiliou, Marius (2009). Historical Dictionary of the Petroleum Industry. Scarecrow Press (Rowman & Littlefield). ISBN 0-8108-5993-9. - Tinker Scott W (25 June 2005). "Of peaks and valleys: Doomsday energy scenarios burn away under scrutiny". Dallas Morning News.[dead link] - Benner Katie (7 December 2005). "Lawmakers: Will we run out of oil?". CNN. - Benner Katie (3 November 2004). "Oil: Is the end at hand?". CNN. - "The future of oil". Foreign Policy. - Robert Hirsch (June 2008). "Peak oil: "A significant period of discomfort"". Allianz Knowledge.[dead link] - Didier Houssin, International Energy Agency (May 2008). "Oil: "If you invest more, you find more"". Allianz Knowledge.[dead link] - Campbell Colin, Laherrère Jean. "The end of cheap oil". Scientific American. - Williams Mark. "The end of oil?". Technology Review (MIT).[dead link] - Appenzeller Tim. "The end of cheap oil". National Geographic. - Lynch Michael C. "The new pessimism about petroleum resources".[dead link] - Leonardo Maugeri (20 May 2004). "Oil: Never Cry Wolf—Why the Petroleum Age Is Far from over". Science. - Roberts Paul (August 2004). "Last Stop Gas". Harper's Magazine: 71–72.[dead link] - Porter, Adam (10 June 2005). "'Peak oil' enters mainstream debate". BBC News. Retrieved 26 March 2010. - Alex Kuhlman (June 2006). "Peak oil and the collapse of commercial aviation" (PDF). Airways. - Cochrane Troy (4 January 2008). "Peak oil?: Oil supply and accumulation". Cultural Shifts. - Jaeon Kirby & Colin Campbell (30 May 2008). "Life at $200 a barrel". Maclean's. - Stefan Schaller (28 September 2010). "The Theory behind Peak Oil". - Ariel Schwartz (9 February 2011). "WikiLeaks May Have Just Confirmed That Peak Oil Is Imminent". Fast Company. - Matthew Schneider-Mayerson (2013). "From politics to prophecy: environmental quiescence and the peak-oil movement" (PDF). Environmental Politics. - The End of Suburbia: Oil Depletion and the Collapse of the American Dream (2004) - Crude Awakening: The Oil Crash (2006) - The Power of Community: How Cuba Survived Peak Oil (2006) - Crude Impact (2006) - What a Way to Go: Life at the End of Empire (2007) - Crude (2007) Australian Broadcasting Corporation documentary [3 x 30 minutes] about the formation of oil, and humanity's use of it - PetroApocalypse Now? (2008) - Blind Spot (2008) - Gashole (2008) - Collapse (2009) - Oil Education TV: Series of video interviews with international oil industry experts - Peak Oil: A Staggering Challenge to “Business As Usual” - KMO. "The C-Realm (feed)". http://c-realmpodcast.podomatic.com/ (Podcast). - Seth Moser-Katz and Justin Ritchie. "The Extraenvironmentalist (feed)". http://extraenvironmentalist.com/ (Podcast). - Duncan Crary. "The KuntslerCast (feed)". http://kunstlercast.com/ (Podcast). - Steve Patterson. "Two Beers with Steve". http://twobeerswithsteve.libsyn.com/ (Podcast). 
|40x40px||Wikimedia Commons has media related to Peak oil.| - Association for the Study of Peak Oil International - Eating Fossil Fuels FromTheWilderness.com - Peak Oil Primer - Resilience.org; Peak Oil related articles - Resilience.org - Not Running Out of Oil (Yet): Oil Reserves Overview - The Daily Fusion - Evolutionary psychology and peak oil: A Malthusian inspired "heads up" for humanity An overview of peak oil, possible impacts, and mitigation strategies, by Dr. Michael Mills - Energy Export Databrowser-Visual review of production and consumption trends for individual nations; data from the BP Annual Statistical Review - Peak Oil For Dummies – concise quotes from renowned politicians, oil executives, and analysts - Peak oil - EAA-PHEV Wiki Electric vehicles provide an opportunity to transition away from fueling our vehicles with petroleum fuels.
Chloroplasts are the home of innumerable metabolic pathways for plant growth and environmental responses. They are organelles found in the cytoplasm of plant cells and of some protists such as algae, distributed through the cells of leaves and other plant parts depending on the type of plant, and they occur only in the parts of a plant that are capable of photosynthesis. The "chloro" in chloroplast comes from the Greek word chloros (meaning green), and "plastes" means "the one who forms". Hugo von Mohl first described the chloroplast in 1837, and Eduard Strasburger adopted the term "chloroplasts" in 1884.

A chloroplast contains the green pigment chlorophyll, which absorbs light energy for photosynthesis: the process by which plants prepare food using sunlight, carbon dioxide, and water, giving off glucose and oxygen. Chlorophyll is located within the thylakoid membranes, which are stacked into grana, and in the space between the thylakoids. Chlorophyll molecules absorb light of specific wavelengths and transfer the energy to other carrier molecules; because green light is reflected, chloroplasts are what make a plant appear green, so you can see where in a plant the chloroplasts are. When energy from the Sun hits a chloroplast and its chlorophyll molecules, light energy is converted into the chemical energy found in compounds such as ATP and NADPH, and ATP then fuels cellular processes by breaking its high-energy chemical bonds.

Structurally, a chloroplast is bounded by outer and inner membranes separated by an intermembrane space. The outer membrane, derived from the ER, is made up of about 30% protein and 70% lipid, while the inner membrane, like bacterial membranes, contains a higher proportion of protein. In higher plants chloroplasts are generally biconvex or planoconvex and look like flat discs, usually 4-6 µm (up to about 10 µm in some accounts) in diameter and 1-3 µm thick; in other organisms the shape varies from spheroid and filamentous to discoid and ovoid. Like mitochondria, chloroplasts have their own DNA: small, bacteria-like genomes on circular chromosomes with no histones, at roughly 20-100 copies per chloroplast. They divide independently of the plant cell cycle.

A plant cell that contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts, but the number varies between photosynthetic organisms, from a few (or a single large chloroplast) in some algae to 75-125 in angiosperm cells, and more complex plant cells may contain hundreds. Plants require a lot of energy, so just one chloroplast would usually not be able to handle the demand on its own. Chloroplasts can also move around within the cell to position themselves where they can best absorb sunlight.

Chloroplasts are found in all green parts of a plant, such as leaves, stems, and branches, and in blue-green algae. Leaves are complex organs consisting of many different cell types, including the epidermis, the palisade and spongy mesophyll layers, and the vascular bundles. In the shoot epidermis, the "skin" of the leaf, of most plants only the guard cells have chloroplasts. In mesophyll cells, chloroplasts are usually located next to the cytoplasmic membrane adjacent to intercellular spaces, which decreases the resistance to CO2 diffusion (Terashima et al., 2011). Bundle sheath (BS) patterns of chloroplast investment are well known to change from C3 to C4 species, namely through the acquisition of enlarged chloroplasts in the BS cells of C4 species compared with their C3 relatives (Dengler and Nelson 1999, Muhaidat et al. 2007); larger chloroplasts are therefore a common but not required trait associated with the C4 pathway. The chloroplast number per cell is a frequently examined quantitative anatomical parameter, reflecting various leaf-internal and external conditions.

Animal cells have many small vacuoles, no cell wall, and no chloroplasts; they do not perform photosynthesis. Plant cells have a cytoplasm, cell membrane, and nucleus that perform the same functions as in animal cells, plus structures such as the cell wall, vacuoles, chromosomes, and chloroplasts. Many people think that plant cells do not contain mitochondria, but of course they do: mitochondria are needed to release energy from sugar, and plant cells need this energy to function just as animal cells do. Recently, in an issue of Cell, Medina-Puche et al. revealed a novel mechanism, orchestrated from chloroplasts, by which plant pathogens dampen host defense responses.
These terms are used in the world of computing to describe disk space, or data storage space, and system memory. Did you know that there are at least three accepted definitions of each term?

According to the IBM Dictionary of Computing, when used to describe disk storage capacity, a megabyte is 1,000,000 bytes in decimal notation. But when the term megabyte is used for real and virtual storage and channel volume, 2 to the 20th power, or 1,048,576 bytes, is the appropriate notation. According to the Microsoft Press Computer Dictionary, a megabyte means either 1,000,000 bytes or 1,048,576 bytes. According to Eric S. Raymond in The New Hacker's Dictionary, a megabyte is always 1,048,576 bytes, on the argument that bytes should naturally be computed in powers of two.

Which definition is used most? When drive manufacturers refer to a megabyte for disk storage, they use the standard that a megabyte is 1,000,000 bytes. This means that when you buy a 250 Gigabyte hard drive, you will get a total of 250,000,000,000 bytes of available storage. However, Windows uses the 1,048,576-byte rule, so when you look at the Windows drive properties, a drive that the manufacturer says is 250 Gigabytes in size will only yield 232 Gigabytes of available storage space. In the same manner, a 750GB drive only shows 698GB, and a one Terabyte hard drive will report a capacity of 931 Gigabytes. – This is getting confusing. – Agreed?

Both of these standards are correct, depending on what type of storage you are referring to; the 1000 can be replaced with 1024 and still be right under the other accepted standard.

Processor or Virtual Storage
- 1 Bit = Binary Digit
- 8 Bits = 1 Byte
- 1024 Bytes = 1 Kilobyte
- 1024 Kilobytes = 1 Megabyte
- 1024 Megabytes = 1 Gigabyte
- 1024 Gigabytes = 1 Terabyte
- 1024 Terabytes = 1 Petabyte
- 1024 Petabytes = 1 Exabyte
- 1024 Exabytes = 1 Zettabyte
- 1024 Zettabytes = 1 Yottabyte
- 1024 Yottabytes = 1 Brontobyte
- 1024 Brontobytes = 1 Geopbyte

Disk Storage
- 1 Bit = Binary Digit
- 8 Bits = 1 Byte
- 1000 Bytes = 1 Kilobyte
- 1000 Kilobytes = 1 Megabyte
- 1000 Megabytes = 1 Gigabyte
- 1000 Gigabytes = 1 Terabyte
- 1000 Terabytes = 1 Petabyte
- 1000 Petabytes = 1 Exabyte
- 1000 Exabytes = 1 Zettabyte
- 1000 Zettabytes = 1 Yottabyte
- 1000 Yottabytes = 1 Brontobyte
- 1000 Brontobytes = 1 Geopbyte

The disk storage ladder follows the IBM Dictionary of Computing method for describing disk storage – the simplest.

OK, let's turn up the geek-level a little. A Bit is the smallest unit of data that a computer uses. It can represent two states of information, such as Yes or No. A Byte is equal to 8 Bits and can represent 256 states of information, for example, numbers or a combination of numbers and letters. 1 Byte could be equal to one character, 10 Bytes could be equal to a word, and 100 Bytes would equal an average sentence.

A Kilobyte is approximately 1,000 Bytes (actually 1,024 Bytes, depending on which definition is used). 1 Kilobyte would be equal to this paragraph you are reading, whereas 100 Kilobytes would equal an entire page.

A Megabyte is approximately 1,000 Kilobytes. In the early days of computing, a Megabyte was considered to be a large amount of data. These days, with 500 Gigabyte hard drives being common on computers, a Megabyte doesn't seem like much anymore. One of those old 3-1/2 inch floppy disks can hold 1.44 Megabytes, or the equivalent of a small book. 100 Megabytes might hold a couple of encyclopaedia volumes. 600 Megabytes is about the amount of data that will fit on a CD-ROM disk.
A Gigabyte is approximately 1,000 Megabytes. A Gigabyte is still a very common term used these days when referring to disk space or drive storage. 1 Gigabyte of data is almost twice the amount of data that a CD-ROM can hold, but about one thousand times the capacity of a 3-1/2 inch floppy disk. 1 Gigabyte could hold the contents of about 10 yards of books on a shelf. 100 Gigabytes could hold an entire library floor of academic journals.

A Terabyte is approximately one trillion bytes, or 1,000 Gigabytes. There was a time when I never thought I would see a 1 Terabyte hard drive; now one and two Terabyte drives are the normal specs for many new computers. To put it in some perspective, a Terabyte could hold about 3.6 million 300 Kilobyte images or maybe about 300 hours of good quality video. A Terabyte could hold 1,000 copies of the Encyclopaedia Britannica. Ten Terabytes could hold the printed collection of the Library of Congress. That's a lot of data.

A Petabyte is approximately 1,000 Terabytes, or one million Gigabytes. It's hard to visualize what a Petabyte could hold. 1 Petabyte could hold approximately 20 million four-drawer filing cabinets full of text, or 500 billion pages of standard printed text. It would take about 500 million floppy disks to store the same amount of data.

An Exabyte is approximately 1,000 Petabytes. Another way to look at it is that an Exabyte is approximately one quintillion bytes, or one billion Gigabytes. There is not much to compare an Exabyte to. It has been said that 5 Exabytes would be equal to all of the words ever spoken by mankind.

A Zettabyte is approximately 1,000 Exabytes. There is nothing to compare a Zettabyte to, except to say that it would take a whole lot of ones and zeroes to fill it up.

A Yottabyte is approximately 1,000 Zettabytes. It would take approximately 11 trillion years to download a Yottabyte file from the Internet using high-powered broadband. You can compare it to the World Wide Web, as the entire Internet almost takes up about a Yottabyte.

A Brontobyte is (you guessed it) approximately 1,000 Yottabytes. The only thing there is to say about a Brontobyte is that it is a 1 followed by 27 zeroes!

A Geopbyte is about 1,000 Brontobytes! Not sure why this term was created; I doubt that anyone alive today will ever see a Geopbyte drive. One way of looking at a Geopbyte: under the binary standard it is 2 to the 100th power, or 1,267,650,600,228,229,401,496,703,205,376 bytes!
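The whole article really reduces to two multipliers: each step up the ladder is a factor of 1024 (processor or virtual storage) or 1000 (disk storage). Here is a minimal Python sketch of both standards; the names (UNITS, to_bytes, reported_gb) are my own, purely illustrative, not from any of the dictionaries quoted above. It reproduces the 250GB/750GB/1TB drive figures from earlier in the article:

```python
UNITS = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte", "Petabyte",
         "Exabyte", "Zettabyte", "Yottabyte", "Brontobyte", "Geopbyte"]

def to_bytes(value: int, unit: str, binary: bool = True) -> int:
    # Each step up the ladder multiplies by 1024 (processor/virtual
    # storage) or by 1000 (disk storage, the IBM disk convention).
    step = 1024 if binary else 1000
    return value * step ** UNITS.index(unit)

def reported_gb(advertised_gb: int) -> int:
    # Drives are advertised in decimal gigabytes; Windows-style tools
    # divide the raw byte count by 2**30 and truncate.
    return to_bytes(advertised_gb, "Gigabyte", binary=False) // 2**30

print(to_bytes(1, "Megabyte"))                # 1048576
print(to_bytes(1, "Megabyte", binary=False))  # 1000000
for gb in (250, 750, 1000):
    print(f"{gb} GB advertised -> {reported_gb(gb)} GB reported")
# 250 GB advertised -> 232 GB reported
# 750 GB advertised -> 698 GB reported
# 1000 GB advertised -> 931 GB reported
```

The truncation (floor division) rather than rounding is deliberate: 250,000,000,000 / 2^30 is about 232.8, and drive-property dialogs show 232, not 233. The same function also confirms the Geopbyte figure: to_bytes(1, "Geopbyte") is exactly 2^100.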
2. The Basics of Climate Change and the Ocean
3. Coastal and Ocean Species Migration due to Climate Change
4. Hypoxia (Dead Zones)
5. The Effects of Warming Waters
6. Marine Biodiversity Loss due to Climate Change
7. The Effects of Climate Change on Coral Reefs
8. The Effects of Climate Change on the Arctic and Antarctic
9. Ocean-Based Carbon Dioxide Removal
10. Climate Change and Diversity, Equity, Inclusion, and Justice
11. Policy and Government Publications
12. Proposed Solutions
13. Looking for More? (Additional Resources)

The ocean makes up 71% of the planet and provides many services to human communities, from mitigating weather extremes to generating the oxygen we breathe, from producing the food we eat to storing the excess carbon dioxide we generate. However, the effects of increasing greenhouse gas emissions threaten coastal and marine ecosystems through changes in ocean temperature and the melting of ice, which in turn affect ocean currents, weather patterns, and sea level. And, because the carbon sink capacity of the ocean has been exceeded, we are also seeing the ocean's chemistry change because of our carbon emissions. In fact, mankind has increased the acidity of our ocean by 30% over the past two centuries. (This is covered in our Research Page on Ocean Acidification.)

The ocean and climate change are inextricably linked. The ocean plays a fundamental role in mitigating climate change by serving as a major heat and carbon sink. The ocean also bears the brunt of climate change, as evidenced by changes in temperature, currents, and sea level rise, all of which affect the health of marine species and of nearshore and deep ocean ecosystems. As concerns about climate change increase, the interrelationship between the ocean and climate change must be recognized, understood, and incorporated into governmental policies.

Since the Industrial Revolution, the amount of carbon dioxide in our atmosphere has increased by over 35%, primarily from the burning of fossil fuels. Ocean waters, ocean animals, and ocean habitats all help the ocean absorb a significant portion of the carbon dioxide emissions from human activities.

The global ocean is already experiencing the significant impact of climate change and its accompanying effects, including air and water temperature warming, seasonal shifts in species, coral bleaching, sea level rise, coastal inundation, coastal erosion, harmful algal blooms, hypoxic (or dead) zones, new marine diseases, loss of marine mammals, changes in levels of precipitation, and fishery declines. In addition, we can expect more extreme weather events (droughts, floods, storms), which affect habitats and species alike. To protect our valuable marine ecosystems, we must act.

The overall solution for the ocean and climate change is to significantly reduce the emission of greenhouse gases. The most recent international agreement to address climate change, the Paris Agreement, entered into force in 2016. Meeting the targets of the Paris Agreement will require action at international, national, local, and community levels around the world. Additionally, blue carbon may provide a method for the long-term sequestration and storage of carbon. "Blue carbon" is the carbon dioxide captured by the world's ocean and coastal ecosystems; it is stored in the form of biomass and sediments from mangroves, tidal marshes, and seagrass meadows.
Simultaneously, it is important to the health of the ocean (and us) that additional threats are avoided and that our marine ecosystems are managed thoughtfully. It is also clear that by reducing the immediate stresses from excess human activities, we can increase the resilience of ocean species and ecosystems. In this way, we can invest in ocean health and its "immune system" by eliminating or reducing the myriad of smaller ills from which it suffers. Restoring the abundance of ocean species – of mangroves, of seagrass meadows, of corals, of kelp forests, of fisheries, of all ocean life – will help the ocean continue to provide the services on which all life depends.

The Ocean Foundation has been working on ocean and climate change issues since 1990, on ocean acidification since 2003, and on related "blue carbon" issues since 2007. The Ocean Foundation hosts the Blue Resilience Initiative, which seeks to advance policy that promotes the role coastal and ocean ecosystems play as natural carbon sinks, i.e. blue carbon. In 2012 it released the first-ever Blue Carbon Offset Calculator, which provides charitable carbon offsets for individual donors, foundations, corporations, and events through the restoration and conservation of important coastal habitats that sequester and store carbon, including seagrass meadows, mangrove forests, and saltmarsh grass estuaries. Please see The Ocean Foundation's Blue Resilience Initiative for information on ongoing projects and to learn how you can offset your carbon footprint using TOF's Blue Carbon Offset Calculator.

The Ocean Foundation staff serve on the advisory board for the Collaborative Institute for Oceans, Climate and Security, and The Ocean Foundation is a member of the Ocean & Climate Platform. Since 2014, TOF has provided ongoing technical advice on the Global Environment Facility (GEF) International Waters focal area, which enabled the GEF Blue Forests Project to provide the first global-scale assessment of the values associated with coastal carbon and ecosystem services. TOF is currently leading a seagrass and mangrove restoration project at the Jobos Bay National Estuarine Research Reserve in close partnership with the Puerto Rico Department of Natural and Environmental Resources.

2. The Basics of Climate Change and the Ocean

Tanaka, K., and Van Houtan, K. (2022, February 1). The Recent Normalization of Historical Marine Heat Extremes. PLOS Climate, 1(2), e0000007. https://doi.org/10.1371/journal.pclm.0000007

The Monterey Bay Aquarium has found that since 2014 more than half of the world's ocean surface has consistently surpassed the historic extreme heat threshold. In 2019, 57% of global ocean surface water recorded extreme heat; comparatively, during the second industrial revolution, only 2% of surfaces recorded such temperatures. These extreme heat waves, created by climate change, threaten marine ecosystems and their ability to provide resources for coastal communities.

Garcia-Soto, C., Cheng, L., Caesar, L., Schmidtko, S., Jewett, E. B., Cheripka, A., … & Abraham, J. P. (2021, September 21). An Overview of Ocean Climate Change Indicators: Sea Surface Temperature, Ocean Heat Content, Ocean pH, Dissolved Oxygen Concentration, Arctic Sea Ice Extent, Thickness and Volume, Sea Level and Strength of the AMOC (Atlantic Meridional Overturning Circulation). Frontiers in Marine Science.
https://doi.org/10.3389/fmars.2021.642372

The seven ocean climate change indicators (sea surface temperature, ocean heat content, ocean pH, dissolved oxygen concentration, Arctic sea ice extent, thickness, and volume, sea level, and the strength of the Atlantic Meridional Overturning Circulation) are key measures for tracking climate change. Understanding historical and current climate change indicators is essential for predicting future trends and protecting our marine systems from climate change effects.

World Meteorological Organization. (2021). 2021 State of Climate Services: Water. World Meteorological Organization. PDF.

The World Meteorological Organization assesses the accessibility and capacities of water-related climate service providers. Achieving the adaptation objectives in developing countries will require significant additional funding and resources to ensure that their communities can adapt to the water-related impacts and challenges of climate change. Based on its findings, the report gives six strategic recommendations to improve climate services for water worldwide.

World Meteorological Organization. (2021). United in Science 2021: A Multi-Organizational High-Level Compilation of the Latest Climate Science Information. World Meteorological Organization. PDF.

The World Meteorological Organization (WMO) has found that recent changes in the climate system are unprecedented, with emissions continuing to rise, exacerbating health hazards and making extreme weather more likely. The full report compiles important climate monitoring data related to greenhouse gas emissions, temperature rise, air pollution, extreme weather events, sea-level rise, and coastal impacts. If greenhouse gas emissions continue to rise following the current trend, global mean sea level rise will likely be between 0.6 and 1.0 meters by 2100, causing catastrophic effects for coastal communities.

National Academy of Sciences. (2020). Climate Change: Evidence and Causes Update 2020. Washington, DC: The National Academies Press. https://doi.org/10.17226/25733

The science is clear: humans are changing Earth's climate. The joint U.S. National Academy of Sciences and U.K. Royal Society report argues that long-term climate change will depend on the total amount of CO2 and other greenhouse gases (GHGs) emitted due to human activity. Higher GHG emissions will lead to a warmer ocean, sea-level rise, the melting of Arctic ice, and an increased frequency of heatwaves.

Yozell, S., Stuart, J., and Rouleau, T. (2020). The Climate and Ocean Risk Vulnerability Index. Climate, Ocean Risk, and Resilience Project. Stimson Center, Environmental Security Program. PDF.

The Climate and Ocean Risk Vulnerability Index (CORVI) is a tool used to identify the financial, political, and ecological risks that climate change poses to coastal cities. This report applies the CORVI methodology to two Caribbean cities: Castries, Saint Lucia and Kingston, Jamaica. Castries has found success in its fishing industry, though it faces a challenge due to its heavy reliance on tourism and lack of effective regulation. Progress is being made by the city, but more needs to be done to improve city planning, particularly around floods and their effects. Kingston has a diverse economy supporting increased resilience, but rapid urbanization threatens many of CORVI's indicators; Kingston is well placed to address climate change but could be overwhelmed if social issues go unaddressed alongside climate mitigation efforts.
Figueres, C. and Rivett-Carnac, T. (2020, February 25). The Future We Choose: Surviving the Climate Crisis. Vintage Publishing.

The Future We Choose is a cautionary tale of two futures for the Earth: the first scenario is what would happen if we fail to meet the goals of the Paris Agreement, and the second considers what the world would look like if the carbon emission goals are met. Figueres and Rivett-Carnac note that for the first time in history we have the capital, the technology, the policies, and the scientific knowledge to understand that we as a society must halve our emissions by 2050. Past generations did not have this knowledge, and for our children it will be too late; the time to act is now.

Lenton, T., Rockström, J., Gaffney, O., Rahmstorf, S., Richardson, K., Steffen, W. and Schellnhuber, H. (2019, November 27). Climate Tipping Points – Too Risky to Bet Against: April 2020 Update. Nature Magazine. PDF.

Tipping points, or events from which the Earth system cannot recover, are more probable than previously thought and could lead to long-term irreversible changes. Ice collapse in the cryosphere and the Amundsen Sea in West Antarctica may have already passed tipping points. Other tipping points, such as deforestation of the Amazon and bleaching events on Australia's Great Barrier Reef, are quickly approaching. More research needs to be done to improve the understanding of these observed changes and the possibility of cascading effects. The time to act is now, before the Earth passes a point of no return.

Peterson, J. (2019, November). A New Coast: Strategies for Responding to Devastating Storms and Rising Seas. Island Press.

The effects of stronger storms and rising seas are mounting and will become impossible to ignore. Damage, property loss, and infrastructure failures due to coastal storms and rising seas are unavoidable. However, science has progressed significantly in recent years, and more can be done if the United States government takes prompt and thoughtful adaptation actions. The coast is changing, but by increasing capacity, implementing shrewd policies, and financing long-term programs, the risks can be managed and disasters may be prevented.

Kulp, S. and Strauss, B. (2019, October 29). New Elevation Data Triple Estimates of Global Vulnerability to Sea-level Rise and Coastal Flooding. Nature Communications 10, 4844. https://doi.org/10.1038/s41467-019-12808-z

Kulp and Strauss suggest that the higher emissions associated with climate change will lead to higher-than-expected sea-level rise. They estimate that one billion people will be affected by annual flooding by 2100; of those, 230 million occupy land within one meter of high tide lines. Most estimates place average sea-level rise at 2 meters within the next century; if Kulp and Strauss are correct, then hundreds of millions of people will soon be at risk of losing their homes to the sea.

Powell, A. (2019, October 2). Red Flags Rise on Global Warming and the Seas. The Harvard Gazette. PDF.

The Intergovernmental Panel on Climate Change (IPCC) report on the Oceans and Cryosphere, published in 2019, warned about the effects of climate change; however, Harvard professors responded that this report may understate the urgency of the problem. A majority of people now report that they believe in climate change; still, studies show people are more concerned about issues more prevalent in their daily lives, such as jobs, health care, and drugs.
Powell, A. (2019, October 2). Red Flags Rise on Global Warming and the Seas. The Harvard Gazette. PDF.
The Intergovernmental Panel on Climate Change (IPCC) report on the Oceans and Cryosphere, published in 2019, warned about the effects of climate change; Harvard professors responded that the report may understate the urgency of the problem. A majority of people now report that they believe in climate change; however, studies show people are more concerned about issues more prevalent in their daily lives, such as jobs, health care, and drugs. Over the last five years, though, climate change has become a bigger priority as people experience higher temperatures, more severe storms, and widespread fires. The good news is that there is more public awareness now than ever before, and there is a growing "bottom-up" movement for change.

Hoegh-Guldberg, O., Caldeira, K., Chopin, T., Gaines, S., Haugan, P., Hemer, M., …, & Tyedmers, P. (2019, September 23). The Ocean as a Solution to Climate Change: Five Opportunities for Action. High Level Panel for a Sustainable Ocean Economy. Retrieved from: https://dev-oceanpanel.pantheonsite.io/sites/default/files/2019-09/19_HLP_Report_Ocean_Solution_Climate_Change_final.pdf
Ocean-based climate action can play a major role in reducing the world's carbon footprint, delivering up to 21% of the annual greenhouse gas emission cuts pledged under the Paris Agreement. Published by the High Level Panel for a Sustainable Ocean Economy, a group of 14 heads of state and government, at the U.N. Secretary-General's Climate Action Summit, this in-depth report highlights the relationship between the ocean and climate. It presents five areas of opportunity: ocean-based renewable energy; ocean-based transportation; coastal and marine ecosystems; fisheries, aquaculture, and shifting diets; and carbon storage in the seabed.

Kennedy, K. M. (2019, September). Putting a Price on Carbon: Evaluating a Carbon Price and Complementary Policies for a 1.5 Degree Celsius World. World Resources Institute. Retrieved from: https://www.wri.org/publication/evaluating-carbon-price
Putting a price on carbon is necessary to reduce carbon emissions to the levels set by the Paris Agreement. A carbon price is a charge applied to entities that produce greenhouse gas emissions; it shifts the cost of climate change from society to the entities responsible for emissions while also providing an incentive to reduce emissions. Additional policies and programs to spur innovation and make low-carbon alternatives more economically attractive are also necessary to achieve long-term results.

Macreadie, P., Anton, A., Raven, J., Beaumont, N., Connolly, R., Friess, D., …, & Duarte, C. (2019, September 5). The Future of Blue Carbon Science. Nature Communications, 10(3998). Retrieved from: https://www.nature.com/articles/s41467-019-11693-w
Blue Carbon, the idea that coastal vegetated ecosystems contribute disproportionately large amounts of global carbon sequestration, plays a major role in international climate change mitigation and adaptation. Blue Carbon science continues to grow in support and is highly likely to broaden in scope through additional high-quality and scalable observations and experiments and the increased engagement of multidisciplinary scientists from a variety of nations.

Heneghan, R., Hatton, I., & Galbraith, E. (2019, May 3). Climate change impacts on marine ecosystems through the lens of the size spectrum. Emerging Topics in Life Sciences, 3(2), 233-243. Retrieved from: http://www.emergtoplifesci.org/content/3/2/233.abstract
Climate change is a complex issue driving countless shifts across the world; in particular, it has caused serious alterations in the structure and function of marine ecosystems. This article analyzes how the underused lens of the abundance-size spectrum can provide a new tool for monitoring ecosystem adaptation, as sketched below.
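The size spectrum summarizes an entire community along one axis: abundance falls off with body mass roughly as a power law, so the log-log slope of abundance against mass is a compact indicator of ecosystem state. A minimal sketch with assumed, illustrative data:

```python
import numpy as np

# Minimal sketch of an abundance-size spectrum: community abundance falls off
# with body mass roughly as a power law, N(m) ~ m**b, which plots as a
# straight line in log-log space. The masses and abundances are illustrative.

body_mass_g = np.array([1e-6, 1e-4, 1e-2, 1e0, 1e2, 1e4])  # plankton to fish
abundance = np.array([1e12, 1e10, 1e8, 1e6, 1e4, 1e2])     # individuals per km^3

slope, _ = np.polyfit(np.log10(body_mass_g), np.log10(abundance), 1)
print(f"Size-spectrum slope: {slope:.2f}")  # -1.00 here
# A steeper (more negative) slope means proportionally fewer large organisms,
# one possible signal of climate-driven ecosystem change.
```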
Woods Hole Oceanographic Institution. (2019). Understanding Sea Level Rise: An in-depth look at three factors contributing to sea-level rise along the U.S. East Coast and how scientists are studying the phenomenon. Produced in collaboration with Christopher Piecuch, Woods Hole Oceanographic Institution. Woods Hole (MA): WHOI. DOI 10.1575/1912/24705
Since the start of the 20th century, sea levels have risen six to eight inches globally, though this rate has not been consistent. The variation in sea-level rise is likely due to postglacial rebound, changes to the Atlantic Ocean circulation, and the melting of the Antarctic Ice Sheet. Scientists agree that global water levels will continue to rise for centuries, but more studies are needed to address knowledge gaps and better predict the extent of future sea-level rise.

Rush, E. (2018). Rising: Dispatches from the New American Shore. Canada: Milkweed Editions.
Told via first-person introspection, author Elizabeth Rush discusses the consequences vulnerable communities face from climate change. The journalistic-style narrative weaves together the true stories of communities in Florida, Louisiana, Rhode Island, California, and New York who have experienced the devastating effects of hurricanes, extreme weather, and rising tides due to climate change.

Leiserowitz, A., Maibach, E., Roser-Renouf, C., Rosenthal, S., and Cutler, M. (2017, July 5). Climate Change in the American Mind: May 2017. Yale Program on Climate Change Communication and the George Mason University Center for Climate Change Communication.
A joint study by George Mason University and Yale found that 90% of Americans are unaware that there is a consensus within the scientific community that human-caused climate change is real. However, the study acknowledged that roughly 70% of Americans believe climate change is happening to some extent. Only 17% of Americans are "very worried" about climate change, 57% are "somewhat worried," and the vast majority see global warming as a distant threat.

Goodell, J. (2017). The Water Will Come: Rising Seas, Sinking Cities, and the Remaking of the Civilized World. New York, New York: Little, Brown, and Company.
Told through personal narrative, author Jeff Goodell considers the rising tides around the world and their future implications. Inspired by Hurricane Sandy in New York, Goodell's research takes him around the world to consider the dramatic action needed to adapt to rising waters. In the preface, Goodell states plainly that this is not the book for those looking to understand the connection between climate and carbon dioxide, but rather a picture of what the human experience will look like as sea levels rise.

Laffoley, D., & Baxter, J. M. (2016, September). Explaining Ocean Warming: Causes, Scale, Effects, and Consequences. Full Report. Gland, Switzerland: International Union for Conservation of Nature.
The International Union for Conservation of Nature presents a detailed, fact-based report on the state of the ocean. The report finds that sea surface temperature, ocean heat content, sea-level rise, melting of glaciers and ice sheets, and CO2 emissions and atmospheric concentrations are increasing at an accelerating rate, with significant consequences for humanity and for the marine species and ecosystems of the ocean. The report recommends recognizing the severity of the issue, concerted joint policy action for comprehensive ocean protection, updated risk assessments, addressing gaps in science and capability needs, acting quickly, and achieving substantial cuts in greenhouse gases.
A warming ocean will have wide-ranging effects; some may be beneficial, but the vast majority will be negative, in ways that are not yet fully understood.

Poloczanska, E., Burrows, M., Brown, C., Molinos, J., Halpern, B., Hoegh-Guldberg, O., …, & Sydeman, W. (2016, May 4). Responses of Marine Organisms to Climate Change across Oceans. Frontiers in Marine Science. Retrieved from: doi.org/10.3389/fmars.2016.00062
Marine species are responding to the effects of greenhouse gas emissions and climate change in expected ways. Some responses include poleward and deeper distributional shifts, declines in calcification, increased abundance of warm-water species, and loss of entire ecosystems (e.g., coral reefs). The variability of marine life responses to shifts in calcification, demography, abundance, distribution, and phenology is likely to lead to ecosystem reshuffling and changes in function that necessitate further study.

Albert, S., Leon, J., Grinham, A., Church, J., Gibbes, B., and Woodroffe, C. (2016, May 6). Interactions Between Sea-level Rise and Wave Exposure on Reef Island Dynamics in the Solomon Islands. Environmental Research Letters, 11(5).
Five islands (one to five hectares in size) in the Solomon Islands have been lost due to sea-level rise and coastal erosion. This was among the first scientific evidence of the effects of climate change on coastlines and the people who live on them. Wave energy is believed to have played a determining role in the islands' erosion. At this time, another nine reef islands are severely eroded and likely to disappear in the coming years.

Gattuso, J.P., Magnan, A., Billé, R., Cheung, W.W., Howes, E.L., Joos, F., & Turley, C. (2015, July 3). Contrasting futures for ocean and society from different anthropogenic CO2 emissions scenarios. Science, 349(6243). Retrieved from: doi.org/10.1126/science.aac4722
Anthropogenic climate change is profoundly altering the ocean's physics, chemistry, ecology, and services. Current emissions projections would rapidly and significantly alter the ecosystems that humans heavily depend upon, and the management options for addressing these changes narrow as the ocean continues to warm and acidify. The article successfully synthesizes recent and future changes to the ocean and its ecosystems, as well as to the goods and services those ecosystems provide to humans.

The Institute for Sustainable Development and International Relations. (2015, September). Intertwined Ocean and Climate: Implications for International Climate Negotiations. Climate – Oceans and Coastal Zones: Policy Brief. Retrieved from: https://www.iddri.org/en/publications-and-events/policy-brief/intertwined-ocean-and-climate-implications-international
Providing an overview of policy, this brief outlines the intertwined nature of the ocean and climate change, calling for immediate CO2 emission reductions. The article explains the significance of these climate-related changes in the ocean and argues for ambitious emissions reductions at the international level, as increases in carbon dioxide will only become harder to tackle.

Stocker, T. (2015, November 13). The silent services of the world ocean. Science, 350(6262), 764-765.
Retrieved from: https://science.sciencemag.org/content/350/6262/764.abstract
The ocean provides crucial services of global significance to the Earth and to humans, all of which come at an increasing price caused by human activities and rising carbon emissions. The author emphasizes the need for humans, and especially intergovernmental organizations, to consider the impacts of climate change on the ocean when weighing adaptation to and mitigation of anthropogenic climate change.

Levin, L. & Le Bris, N. (2015, November 13). The deep ocean under climate change. Science, 350(6262), 766-768. Retrieved from: https://science.sciencemag.org/content/350/6262/766
The deep ocean, despite its critical ecosystem services, is often overlooked in the realm of climate change and mitigation. At depths of 200 meters and below, the ocean absorbs vast amounts of carbon dioxide and needs specific attention and increased research to protect its integrity and value.

McGill University. (2013, June 14). Study of Oceans' Past Raises Worry About Their Future. ScienceDaily. Retrieved from: sciencedaily.com/releases/2013/06/130614111606.html
By increasing the amount of CO2 in our atmosphere, humans are changing the amount of nitrogen available to fish in the ocean. Findings show it will take centuries for the ocean to rebalance the nitrogen cycle. This raises concerns about the current rate of CO2 entering our atmosphere and shows how the ocean may be changing chemically in ways we would not expect. The article provides a brief introduction to the relationship between ocean acidification and climate change; for more detailed information, please see The Ocean Foundation's resource pages on Ocean Acidification.

Fagan, B. (2013). The Attacking Ocean: The Past, Present, and Future of Rising Sea Levels. Bloomsbury Press, New York.
Since the last Ice Age, sea levels have risen 122 meters, and they will continue to rise. Fagan takes readers around the world, from prehistoric Doggerland in what is now the North Sea, to ancient Mesopotamia and Egypt, colonial Portugal, China, and the modern-day United States, Bangladesh, and Japan. Hunter-gatherer societies were mobile and could fairly easily move settlements to higher ground, yet they faced growing disruption as populations became more condensed. Today, millions of people around the world are likely to face relocation in the next fifty years as sea levels continue to rise.

Doney, S., Ruckelshaus, M., Duffy, E., Barry, J., Chan, F., English, C., …, & Talley, L. (2012, January). Climate Change Impacts on Marine Ecosystems. Annual Review of Marine Science, 4, 11-37. Retrieved from: https://www.annualreviews.org/doi/full/10.1146/annurev-marine-041911-111611
In marine ecosystems, climate change is associated with concurrent shifts in temperature, circulation, stratification, nutrient input, oxygen content, and ocean acidification. There are also strong linkages between climate and species distributions, phenology, and demography. These could eventually affect the overall ecosystem functioning and services upon which the world depends.

Vallis, G. K. (2012). Climate and the Ocean. Princeton, New Jersey: Princeton University Press.
Using plain language and diagrams of scientific concepts, including systems of wind and currents within the ocean, the book demonstrates the strong, interconnected relationship between the climate and the ocean.
Created as an illustrated primer, Climate and the Ocean serves as an introduction to the ocean's role as a moderator of the Earth's climate system. The book allows readers to make their own judgments, equipped with a general understanding of the science behind the climate.

Spalding, M. J. (2011, May). Before the Sun Sets: Changing Ocean Chemistry, Global Marine Resources, and the Limits of Our Legal Tools to Address Harm. International Environmental Law Committee Newsletter, 13(2). PDF.
Carbon dioxide is being absorbed by the ocean and is changing the pH of the water in a process called ocean acidification. International laws and domestic laws in the United States, at the time of writing, had the potential to incorporate ocean acidification policies, including the U.N. Framework Convention on Climate Change, the U.N. Convention on the Law of the Sea, the London Convention and Protocol, and the U.S. Federal Ocean Acidification Research and Monitoring (FOARAM) Act. The cost of inaction will far exceed the economic cost of acting, and present-day actions are needed.

Spalding, M. J. (2011). Perverse Sea Change: Underwater Cultural Heritage in the Ocean is Facing Chemical and Physical Changes. Cultural Heritage and Arts Review, 2(1). PDF.
Underwater cultural heritage sites are being threatened by ocean acidification and climate change. Climate change is altering ocean chemistry, raising sea levels, warming ocean temperatures, shifting currents, and increasing weather volatility, all of which affect the preservation of submerged historical sites. Irreparable harm is likely; however, restoring coastal ecosystems, reducing land-based pollution, reducing CO2 emissions, reducing marine stressors, increasing historic site monitoring, and developing legal strategies can reduce the devastation of underwater cultural heritage sites.

Hoegh-Guldberg, O., & Bruno, J. (2010, June 18). The Impact of Climate Change on the World's Marine Ecosystems. Science, 328(5985), 1523-1528. Retrieved from: https://science.sciencemag.org/content/328/5985/1523
Rapidly rising greenhouse gas emissions are driving the ocean toward conditions that have not been seen for millions of years and are causing catastrophic effects. So far, anthropogenic climate change has caused decreased ocean productivity, altered food web dynamics, reduced abundance of habitat-forming species, shifting species distributions, and a greater incidence of disease.

Spalding, M. J., & de Fontaubert, C. (2007). Conflict Resolution for Addressing Climate Change with Ocean-Altering Projects. Environmental Law Review News and Analysis. Retrieved from: https://cmsdata.iucn.org/downloads/ocean_climate_3.pdf
There is a careful balance between local consequences and global benefits, particularly when considering the detrimental effects of wind and wave energy projects. Conflict resolution practices need to be applied to coastal and marine projects that are potentially damaging to the local environment but are necessary to reduce reliance on fossil fuels. Climate change must be addressed, and some of the solutions will take place in marine and coastal ecosystems; to mitigate conflict, conversations must involve policymakers, local entities, civil society, and international actors to ensure that the best available actions are taken.

Spalding, M. J. (2004, August). Climate Change and Oceans. Consultative Group on Biological Diversity.
Retrieved from: http://markjspalding.com/download/publications/peer-reviewed-articles/ClimateandOceans.pdf
The ocean provides many benefits in terms of resources, climate moderation, and aesthetic beauty. However, greenhouse gas emissions from human activities are projected to alter coastal and marine ecosystems and exacerbate traditional marine problems such as overfishing and habitat destruction. Yet there is opportunity for change through philanthropic support that integrates the ocean and climate to enhance the resilience of the ecosystems most at risk from climate change.

Bigg, G.R., Jickells, T.D., Liss, P.S., & Osborn, T.J. (2003, August 1). The Role of The Oceans in Climate. International Journal of Climatology, 23, 1127-1159. Retrieved from: doi.org/10.1002/joc.926
The ocean is a vital component of the climate system. It is important in global exchanges and the redistribution of heat, water, gases, particles, and momentum. The freshwater budget of the ocean is decreasing and is a key factor in the degree and longevity of climate change.

Dore, J.E., Lukas, R., Sadler, D.W., & Karl, D.M. (2003, August 14). Climate-driven changes to the atmospheric CO2 sink in the subtropical North Pacific Ocean. Nature, 424(6950), 754-757. Retrieved from: doi.org/10.1038/nature01885
Carbon dioxide uptake by ocean waters can be strongly influenced by changes in regional precipitation and evaporation patterns brought on by climate variability. Since 1990, there has been a significant decrease in the strength of the CO2 sink, attributed to the increase in the partial pressure of CO2 at the ocean surface caused by evaporation and the accompanying concentration of solutes in the water.

Revelle, R., & Suess, H. (1957). Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase in Atmospheric CO2 during the Past Decades. La Jolla, California: Scripps Institution of Oceanography, University of California.
The amount of CO2 in the atmosphere, the rates and mechanisms of CO2 exchange between the sea and the air, and fluctuations in marine organic carbon have been studied since shortly after the beginning of the Industrial Revolution. Industrial fuel combustion since the start of the Industrial Revolution, more than 150 years earlier, had caused an increase in the average ocean temperature, a decrease in the carbon content of soils, and a change in the amount of organic matter in the ocean. This document served as a key milestone in the study of climate change and has greatly influenced scientific studies in the half-century since its publication.

3. Coastal and Ocean Species Migration due to the Effects of Climate Change

Hu, S., Sprintall, J., Guan, C., McPhaden, M., Wang, F., Hu, D., Cai, W. (2020, February 5). Deep-reaching Acceleration of Global Mean Ocean Circulation over the Past Two Decades. Science Advances, 6(6), EAAX7727. https://advances.sciencemag.org/content/6/6/eaax7727
The ocean has been moving faster over the past two decades. The increased kinetic energy of ocean currents is due to increased surface wind spurred by warmer temperatures, particularly around the tropics. The trend is far larger than any natural variability, suggesting that increased current speeds will continue in the long term. Because kinetic energy grows with the square of speed, even modest speed-ups imply large energy increases, as sketched below.
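A minimal sketch of that quadratic relationship; the seawater density is a standard approximation, and the current speeds are illustrative assumptions rather than the study's data:

```python
# Minimal sketch of why current speed-ups imply outsized kinetic-energy
# increases: kinetic energy scales with the square of velocity. The density
# and speeds below are illustrative assumptions.

RHO_SEAWATER = 1025.0  # kg/m^3, a typical seawater density

def ke_density(speed_m_s):
    """Kinetic energy per unit volume of moving water, in J/m^3."""
    return 0.5 * RHO_SEAWATER * speed_m_s ** 2

v_before, v_after = 0.10, 0.12  # mean current speed before/after a 20% speed-up
increase = ke_density(v_after) / ke_density(v_before) - 1.0
print(f"Kinetic-energy increase: {increase:.0%}")  # 44% from a 20% speed-up
```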
Whitcomb, I. (2019, August 12). Droves of Blacktip Sharks Are Summering in Long Island for the First Time. LiveScience. Retrieved from: livescience.com/sharks-vacation-in-hamptons.html
Every year, blacktip sharks migrate north in the summer seeking cooler waters. In the past, the sharks would spend their summers off the coast of the Carolinas, but due to the warming waters of the ocean, they must now travel farther north, to Long Island, to find sufficiently cool water. At the time of publication, it was unknown whether the sharks are migrating farther north on their own or following their prey.

Fears, D. (2019, July 31). Climate change will spark a baby boom of crabs. Then predators will relocate from the south and eat them. The Washington Post. Retrieved from: https://www.washingtonpost.com/climate-environment/2019/07/31/climate-change-will-spark-blue-crab-baby-boom-then-predators-will-relocate-south-eat-them/
Blue crabs are thriving in the warming waters of the Chesapeake Bay. If current warming trends continue, blue crabs will soon no longer need to burrow in the winter to survive, which will cause the population to soar. The population boom may lure some predators to new waters.

Furby, K. (2018, June 14). Climate change is moving fish around faster than laws can handle, study says. The Washington Post. Retrieved from: washingtonpost.com/news/speaking-of-science/wp/2018/06/14/climate-change-is-moving-fish-around-faster-than-laws-can-handle-study-says
Vital fish species such as salmon and mackerel are migrating to new territories, necessitating increased international cooperation to keep stocks abundant. The article reflects on the conflicts that can arise when species cross national boundaries, from the combined perspectives of law, policy, economics, oceanography, and ecology.

Poloczanska, E. S., Burrows, M. T., Brown, C. J., García Molinos, J., Halpern, B. S., Hoegh-Guldberg, O., …, & Sydeman, W. J. (2016, May 4). Responses of Marine Organisms to Climate Change Across Oceans. Frontiers in Marine Science, 62. https://doi.org/10.3389/fmars.2016.00062
Drawing on the Marine Climate Change Impacts Database (MCID) and the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, this study explores marine ecosystem changes driven by climate change. In general, species responses are consistent with expectations, including poleward and deeper distributional shifts, advances in phenology, declines in calcification, and increases in the abundance of warm-water species. Where areas and species lack documented climate change impacts, it does not mean they are unaffected, but rather that gaps remain in the research.

National Oceanic and Atmospheric Administration. (2013, September). Two Takes on Climate Change in the Ocean? National Ocean Service: The United States Department of Commerce. Retrieved from: http://web.archive.org/web/20161211043243/http://www.nmfs.noaa.gov/stories/2013/09/9_30_13two_takes_on_climate_change_in_ocean.html
Marine life throughout the food chain is shifting toward the poles to stay cool as waters heat up, and these changes can have significant economic consequences. Species are not shifting in space and time at the same pace, thereby disrupting the food web and the delicate patterns of life. Now more than ever, it is important to prevent overfishing and continue to support long-term monitoring programs.

Poloczanska, E., Brown, C., Sydeman, W., Kiessling, W., Schoeman, D., Moore, P., …, & Richardson, A. (2013, August 4). Global imprint of climate change on marine life. Nature Climate Change, 3, 919-925.
Retrieved from: https://www.nature.com/articles/nclimate1958
Over the last decade, there have been widespread systemic shifts in the phenology, demography, and distribution of species in marine ecosystems. This study synthesized all available studies of marine ecological observations against expectations under climate change, finding 1,735 marine biological responses for which either local or global climate change was the source.

4. Hypoxia (Dead Zones)

Hypoxia is a low or depleted level of oxygen in water. It is often associated with the overgrowth of algae, which leads to oxygen depletion when the algae die, sink to the bottom, and decompose. Hypoxia is also exacerbated by high levels of nutrients, warmer water, and other ecosystem disruptions due to climate change.

Slabosky, K. (2020, August 18). Can the Ocean Run Out of Oxygen? TED-Ed. Retrieved from: https://youtu.be/ovl_XbgmCbw
The animated video explains how hypoxia, or dead zones, are created in the Gulf of Mexico and beyond. Agricultural nutrient and fertilizer run-off is a major contributor to dead zones, and regenerative farming practices must be introduced to protect our waterways and threatened marine ecosystems. Although it is not mentioned in the video, warming waters created by climate change are also increasing the frequency and intensity of dead zones.

Bates, N., and Johnson, R. (2020). Acceleration of Ocean Warming, Salinification, Deoxygenation and Acidification in the Surface Subtropical North Atlantic Ocean. Communications Earth & Environment. https://doi.org/10.1038/s43247-020-00030-5
Ocean chemical and physical conditions are changing. Data points collected in the Sargasso Sea during the 2010s provide critical information for ocean-atmosphere models and for decade-to-decade model-data assessments of the global carbon cycle. Bates and Johnson found that temperatures and salinity in the subtropical North Atlantic Ocean varied over the last forty years due to seasonal changes and changes in alkalinity. The highest levels of CO2 and ocean acidification occurred during the weakest period of atmospheric CO2 growth.

National Oceanic and Atmospheric Administration. (2019, May 24). What is a Dead Zone? National Ocean Service: The United States Department of Commerce. Retrieved from: oceanservice.noaa.gov/facts/deadzone.html
A dead zone is the common term for hypoxia and refers to a reduced level of oxygen in the water, which leads to biological deserts. These zones occur naturally but are enlarged and enhanced by human activity, including through warmer water temperatures caused by climate change. Excess nutrients that run off the land into waterways are the primary cause of the increase in dead zones.

Environmental Protection Agency. (2019, April 15). Nutrient Pollution, The Effects: Environment. The United States Environmental Protection Agency. Retrieved from: https://www.epa.gov/nutrientpollution/effects-environment
Nutrient pollution fuels the growth of harmful algal blooms (HABs), which have negative impacts on aquatic ecosystems. HABs can sometimes create toxins that are consumed by small fish, work their way up the food chain, and harm marine life. Even when they do not create toxins, they block sunlight, clog fish gills, and create dead zones. Dead zones are areas of water with little or no oxygen, formed when algal blooms consume oxygen as they die, causing marine life to leave the affected area.

Blaszczak, J. R., Delesantro, J. M., Urban, D. L., Doyle, M. W., & Bernhardt, E. S. (2019).
Scoured or suffocated: Urban stream ecosystems oscillate between hydrologic and dissolved oxygen extremes. Limnology and Oceanography, 64(3), 877-894. https://doi.org/10.1002/lno.11081
Coastal regions are not the only places where dead zone-like conditions are worsening due to climate change. Urban streams and rivers draining water from highly trafficked areas are common locations for hypoxic dead zones, leaving a bleak picture for the freshwater organisms that call urban waterways home. Intense storms create pools of nutrient-laden run-off that remain hypoxic until the next storm flushes them out.

Breitburg, D., Levin, L., Oschlies, A., Grégoire, M., Chavez, F., Conley, D., …, & Zhang, J. (2018, January 5). Declining oxygen in the global ocean and coastal waters. Science, 359(6371). Retrieved from: doi.org/10.1126/science.aam7240
Largely due to human activities that have increased the overall global temperature and the amount of nutrients discharged into coastal waters, the oxygen content of the ocean as a whole has been declining for at least the last fifty years. The declining level of oxygen in the ocean has biological and ecological consequences at both regional and global scales.

Breitburg, D., Grégoire, M., & Isensee, K. (2018). The ocean is losing its breath: Declining oxygen in the world's ocean and coastal waters. IOC-UNESCO, IOC Technical Series, 137. Retrieved from: https://orbi.uliege.be/bitstream/2268/232562/1/Technical%20Brief_Go2NE.pdf
Oxygen is declining in the ocean, and humans are the major cause. Deoxygenation occurs when more oxygen is consumed than is replenished, as warming and nutrient increases drive high levels of microbial oxygen consumption. It can be worsened by dense aquaculture, leading to reduced growth, behavioral changes, and increased disease, particularly for finfish and crustaceans. Deoxygenation is predicted to worsen in the coming years, but steps can be taken to combat this threat, including reducing greenhouse gas emissions as well as black carbon and nutrient discharges.

Bryant, L. (2015, April 9). Ocean 'dead zones' a growing disaster for fish. Phys.org. Retrieved from: https://phys.org/news/2015-04-ocean-dead-zones-disaster-fish.html
Historically, sea floors have taken millennia to recover from past eras of low oxygen, also known as dead zones. Due to human activity and rising temperatures, dead zones currently make up 10% of the world's ocean surface area, and that share is rising. Agrochemical use and other human activities lead to rising levels of phosphorus and nitrogen in the waters that feed the dead zones.

5. The Effects of Warming Waters

Schartup, A., Thackray, C., Qureshi, A., Dassuncao, C., Gillespie, K., Hanke, A., & Sunderland, E. (2019, August 7). Climate change and overfishing increase neurotoxicant in marine predators. Nature, 572, 648-650. Retrieved from: doi.org/10.1038/s41586-019-1468-9
Fish are the predominant source of human exposure to methylmercury, which can lead to long-term neurocognitive deficits in children that persist into adulthood. Since the 1970s, there has been an estimated 56% increase in tissue methylmercury in Atlantic bluefin tuna due to increases in seawater temperature.

Smale, D., Wernberg, T., Oliver, E., Thomsen, M., Harvey, B., Straub, S., …, & Moore, P. (2019, March 4). Marine heatwaves threaten global biodiversity and the provision of ecosystem services. Nature Climate Change, 9, 306-312.
Retrieved from: nature.com/articles/s41558-019-0412-1
The ocean has warmed considerably over the past century. Marine heatwaves, periods of regional extreme warming, have particularly affected critical foundation species such as corals and seagrasses. As anthropogenic climate change intensifies, marine warming and heatwaves have the capability to restructure ecosystems and disrupt the provision of ecological goods and services.

Sanford, E., Sones, J., Garcia-Reyes, M., Goddard, J., & Largier, J. (2019, March 12). Widespread shifts in the coastal biota of northern California during the 2014-2016 marine heatwaves. Scientific Reports, 9(4216). Retrieved from: doi.org/10.1038/s41598-019-40784-3
Prolonged marine heatwaves may bring increased poleward dispersal of species and extreme changes in sea surface temperature in the future. The severe marine heatwaves of 2014-2016 caused mass mortalities, harmful algal blooms, declines in kelp beds, and substantial changes in the geographic distribution of species.

Pinsky, M., Eikeset, A., McCauley, D., Payne, J., & Sunday, J. (2019, April 24). Greater vulnerability to warming of marine versus terrestrial ectotherms. Nature, 569, 108-111. Retrieved from: doi.org/10.1038/s41586-019-1132-4
Understanding which species and ecosystems will be most affected by warming due to climate change is important for effective management. Higher sensitivity to warming and faster colonization rates in marine ecosystems suggest that extirpations will be more frequent and species turnover faster in the ocean.

Morley, J., Selden, R., Latour, R., Frolicher, T., Seagraves, R., & Pinsky, M. (2018, May 16). Projecting shifts in thermal habitat for 686 species on the North American continental shelf. PLOS ONE. Retrieved from: doi.org/10.1371/journal.pone.0196127
Due to changing ocean temperatures, species are beginning to shift their geographic distributions toward the poles. Projections were made for 686 marine species likely to be affected by changing ocean temperatures. The projected shifts were generally poleward, followed coastlines, and helped identify which species are particularly vulnerable to climate change.

Laffoley, D. & Baxter, J. M. (editors). (2016). Explaining Ocean Warming: Causes, Scale, Effects and Consequences. Full report. Gland, Switzerland: IUCN. 456 pp. https://doi.org/10.2305/IUCN.CH.2016.08.en
Ocean warming is rapidly becoming one of the greatest threats of our generation; as such, the IUCN recommends increased recognition of impact severity, global policy action, comprehensive protection and management, updated risk assessments, closing gaps in research and capability needs, and acting quickly to make substantial cuts in greenhouse gas emissions.

Hughes, T., Kerry, J., Baird, A., Connolly, S., Dietzel, A., Eakin, M., Heron, S., …, & Torda, G. (2018, April 18). Global warming transforms coral reef assemblages. Nature, 556, 492-496. Retrieved from: nature.com/articles/s41586-018-0041-2
In 2016, the Great Barrier Reef experienced a record-breaking marine heatwave. The study aims to bridge the gap between theory and practice in examining the risks of ecosystem collapse, in order to predict how future warming events might affect coral reef communities. The authors define the stages of collapse, identify its major driver, and establish quantitative collapse thresholds.

Gramling, C. (2015, November 13). How Warming Oceans Unleashed an Ice Stream. Science, 350(6262), 728.
Retrieved from: DOI: 10.1126/science.350.6262.728
A Greenland glacier is shedding kilometers of ice into the sea each year as warm ocean waters undermine it. What is going on under the ice raises the most concern: warm ocean waters have eroded the glacier far enough to detach it from its sill. This will cause the glacier to retreat even faster and raises serious concern about potential sea-level rise.

Precht, W., Gintert, B., Robbart, M., Fura, R., & van Woesik, R. (2016). Unprecedented Disease-Related Coral Mortality in Southeastern Florida. Scientific Reports, 6(31374). Retrieved from: https://www.nature.com/articles/srep31374
Coral bleaching, coral disease, and coral mortality events are increasing due to high water temperatures attributed to climate change. Examining the unusually high levels of contagious coral disease in southeastern Florida throughout 2014, the article links the high level of coral mortality to thermally stressed coral colonies.

Friedland, K., Kane, J., Hare, J., Lough, G., Fratantoni, P., Fogarty, M., & Nye, J. (2013, September). Thermal habitat constraints on zooplankton species associated with Atlantic cod (Gadus morhua) on the US Northeast Continental Shelf. Progress in Oceanography, 116, 1-13. Retrieved from: https://doi.org/10.1016/j.pocean.2013.05.011
The ecosystem of the US Northeast Continental Shelf contains distinct thermal habitats, and increasing water temperatures are changing their extent. The warmer surface habitats have expanded, whereas the cooler-water habitats have shrunk. This has the potential to significantly lower the abundance of Atlantic cod, as the zooplankton they feed on are affected by the shifts in temperature.

6. Marine Biodiversity Loss due to Climate Change

Brito-Morales, I., Schoeman, D., Molinos, J., Burrows, M., Klein, C., Arafeh-Dalmau, N., Kaschner, K., Garilao, C., Kesner-Reyes, K., and Richardson, A. (2020, March 20). Climate Velocity Reveals Increasing Exposure of Deep-ocean Biodiversity to Future Warming. Nature Climate Change. https://doi.org/10.1038/s41558-020-0773-5
Researchers have found that contemporary climate velocities, the speed at which warming conditions move across the seascape, are faster in the deep ocean than at the surface; a sketch of the metric follows below. The study predicts that between 2050 and 2100 warming will occur faster at all levels of the water column except the surface. As a result, biodiversity will be threatened at all levels, particularly at depths between 200 and 1,000 meters. To reduce the rate of warming, limits should be placed on the exploitation of deep-ocean resources by fishing fleets and by mining, hydrocarbon, and other extractive activities. Additionally, progress can be made by expanding networks of large MPAs in the deep ocean.
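Climate velocity is computed as the local warming rate divided by the local spatial temperature gradient, giving the speed at which an organism must move to stay at its preferred temperature. A minimal sketch with illustrative numbers shows why the weakly stratified deep ocean can have high climate velocities even while warming slowly:

```python
# Minimal sketch of the climate-velocity metric: the speed at which a patch
# of constant temperature moves, computed as warming rate divided by spatial
# temperature gradient. All values are illustrative assumptions.

def climate_velocity(warming_degC_per_decade, gradient_degC_per_km):
    """km per decade an organism must shift to stay at the same temperature."""
    return warming_degC_per_decade / gradient_degC_per_km

surface = climate_velocity(0.20, 0.010)  # faster warming, strong gradients
deep = climate_velocity(0.05, 0.001)     # slower warming, very weak gradients

print(f"Surface: {surface:.0f} km/decade; deep: {deep:.0f} km/decade")
# 20 vs. 50 km/decade: weak deep-ocean temperature gradients can make climate
# velocity faster at depth even though the deep ocean warms more slowly.
```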
Riskas, K. (2020, June 18). Farmed Shellfish Is Not Immune to Climate Change. Hakai Magazine. PDF.
Billions of people worldwide get their protein from the marine environment, yet wild fisheries are being stretched thin. Aquaculture is increasingly filling the gap, and managed production may improve water quality and reduce the excess nutrients that cause harmful algal blooms. However, as water becomes more acidic and warming water alters plankton growth, aquaculture and mollusk production are threatened. Riskas predicts mollusk aquaculture production will begin to decline by 2060, with some countries, particularly developing and least developed nations, affected much earlier.

Record, N., Runge, J., Pendleton, D., Balch, W., Davies, K., Pershing, A., …, & Thompson, C. (2019, May 3). Rapid Climate-Driven Circulation Changes Threaten Conservation of Endangered North Atlantic Right Whales. Oceanography, 32(2), 162-169. Retrieved from: doi.org/10.5670/oceanog.2019.201
Climate change is causing ecosystems to change states rapidly, rendering many conservation strategies based on historical patterns ineffective. With deep-water temperatures warming at twice the rate of surface waters, species like Calanus finmarchicus, a critical food supply for North Atlantic right whales, have changed their migration patterns. North Atlantic right whales are following their prey out of their historical migration route, putting them at risk of ship strikes and gear entanglements in areas where conservation strategies do not protect them.

Díaz, S. M., Settele, J., Brondízio, E., Ngo, H., Guèze, M., Agard, J., … & Zayas, C. (2019). The Global Assessment Report on Biodiversity and Ecosystem Services: Summary for Policymakers. IPBES. https://doi.org/10.5281/zenodo.3553579
Between half a million and one million species are threatened with extinction globally. In the ocean, unsustainable fishing practices, coastal land- and sea-use changes, and climate change are driving biodiversity loss. The ocean requires further protections and greater Marine Protected Area coverage.

Abreu, A., Bowler, C., Claudet, J., Zinger, L., Paoli, L., Salazar, G., and Sunagawa, S. (2019). Scientists' Warning on the Interactions Between Ocean Plankton and Climate Change. Foundation Tara Ocean.
Two studies using different data both indicate that the impact of climate change on the distribution and quantities of planktonic species will be greatest in polar regions. This is likely because higher ocean temperatures (around the equator) support a greater diversity of planktonic species, which may be more likely to survive changing water temperatures, though both planktonic communities could adapt. Climate change thus acts as an additional stress factor for species, and when combined with other changes in habitats, the food web, and species distribution, that added stress could cause major shifts in ecosystem properties. To address this growing problem, there need to be improved science-policy interfaces in which research questions are designed by scientists and policymakers together.

Bryndum-Buchholz, A., Tittensor, D., Blanchard, J., Cheung, W., Coll, M., Galbraith, E., …, & Lotze, H. (2018, November 8). Twenty-first-century climate change impacts on marine animal biomass and ecosystem structure across ocean basins. Global Change Biology, 25(2), 459-472. Retrieved from: https://doi.org/10.1111/gcb.14512
Climate change affects marine ecosystems through primary production, ocean temperature, species distributions, and abundance at local and global scales, significantly altering marine ecosystem structure and function. This study analyzes the responses of marine animal biomass to these climate change stressors.

Niiler, E. (2018, March 8). More Sharks Ditching Annual Migration as Ocean Warms. National Geographic. Retrieved from: nationalgeographic.com/news/2018/03/animals-sharks-oceans-global-warming/
Male blacktip sharks have historically migrated south during the coldest months of the year to mate with females off the coast of Florida. These sharks are vital to Florida's coastal ecosystem: by eating weak and sick fish, they help balance the pressure on coral reefs and seagrasses.
Recently, the male sharks have stayed farther north as northern waters warm. Without the southward migration, the males will not mate, nor will they protect Florida's coastal ecosystem.

Worm, B., & Lotze, H. (2016). Marine Biodiversity and Climate Change. In Climate Change: Observed Impacts on Planet Earth, Chapter 13. Department of Biology, Dalhousie University, Halifax, NS, Canada. Retrieved from: sciencedirect.com/science/article/pii/B9780444635242000130
Long-term fish and plankton monitoring data have provided the most compelling evidence for climate-driven changes in species assemblages. The chapter concludes that conserving marine biodiversity may provide the best buffer against rapid climate change.

McCauley, D., Pinsky, M., Palumbi, S., Estes, J., Joyce, F., & Warner, R. (2015, January 16). Marine defaunation: Animal loss in the global ocean. Science, 347(6219). Retrieved from: https://science.sciencemag.org/content/347/6219/1255641
Humans have profoundly affected marine wildlife and the function and structure of the ocean. Marine defaunation, or human-caused animal loss in the ocean, emerged only hundreds of years ago. Climate change threatens to accelerate marine defaunation over the next century. One of the main drivers of marine wildlife loss is habitat degradation due to climate change, which is avoidable with proactive intervention and restoration.

Deutsch, C., Ferrel, A., Seibel, B., Portner, H., & Huey, R. (2015, June 5). Climate change tightens a metabolic constraint on marine habitats. Science, 348(6239), 1132-1135. Retrieved from: science.sciencemag.org/content/348/6239/1132
Both the warming of the ocean and the loss of dissolved oxygen will drastically alter marine ecosystems. In this century, the metabolic index of the upper ocean is predicted to be reduced by 20% globally and by 50% in northern high-latitude regions, forcing a poleward and vertical contraction of metabolically viable habitats and species ranges. The metabolic theory of ecology indicates that body size and temperature influence organisms' metabolic rates, which may explain shifts in animal biodiversity when temperatures change by making conditions more favorable to certain organisms; a simplified sketch of the index follows below.
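The metabolic index is the ratio of oxygen supply (proportional to ambient O2 pressure) to resting metabolic demand, which rises with temperature following an Arrhenius relation. The simplified sketch below uses illustrative coefficients, not the species-specific values fitted in the paper:

```python
import math

# Simplified sketch of a metabolic index: Phi = O2 supply / metabolic demand.
# Supply scales with ambient O2 pressure; demand rises with temperature via
# an Arrhenius term. a0 and e0 are illustrative assumptions, not fitted values.

KB = 8.617e-5  # Boltzmann constant, eV/K

def metabolic_index(po2_atm, temp_c, a0=10.0, e0=0.4, t_ref_c=15.0):
    """Phi; habitat is roughly viable where Phi >= 1."""
    t, t_ref = temp_c + 273.15, t_ref_c + 273.15
    demand = math.exp(-e0 / KB * (1.0 / t - 1.0 / t_ref))  # grows with warming
    return a0 * po2_atm / demand

phi_cool = metabolic_index(po2_atm=0.18, temp_c=15.0)
phi_warm = metabolic_index(po2_atm=0.16, temp_c=18.0)  # warmer, less oxygenated
print(f"Phi cool: {phi_cool:.2f}; Phi warm: {phi_warm:.2f}")  # 1.80 vs. ~1.35
# Warming raises demand while deoxygenation cuts supply, shrinking the volume
# of ocean where Phi stays above the viability threshold.
```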
Marcogliese, D.J. (2008). The impact of climate change on the parasites and infectious diseases of aquatic animals. Scientific and Technical Review of the Office International des Epizooties (Paris), 27(2), 467-484. Retrieved from: https://pdfs.semanticscholar.org/219d/8e86f333f2780174277b5e8c65d1c2aca36c.pdf
The distribution of parasites and pathogens will be directly and indirectly affected by global warming, which may cascade through food webs with consequences for entire ecosystems. Transmission rates of parasites and pathogens are directly correlated with temperature, so rising temperatures are increasing transmission rates. Some evidence suggests that virulence is directly correlated as well.

Barry, J.P., Baxter, C.H., Sagarin, R.D., & Gilman, S.E. (1995, February 3). Climate-related, long-term faunal changes in a California rocky intertidal community. Science, 267(5198), 672-675. Retrieved from: doi.org/10.1126/science.267.5198.672
The invertebrate fauna in a California rocky intertidal community shifted northward between two study periods, 1931-1933 and 1993-1994. This northward shift is consistent with predictions of change associated with climate warming. Comparing temperature records for the two eras, the mean summer maximum temperatures during 1983-1993 were 2.2°C warmer than those during 1921-1931.

7. The Effects of Climate Change on Coral Reefs

Figueiredo, J., Thomas, C. J., Deleersnijder, E., Lambrechts, J., Baird, A. H., Connolly, S. R., & Hanert, E. (2022). Global Warming Decreases Connectivity Among Coral Populations. Nature Climate Change, 12(1), 83-87.
Global temperature increases are killing corals and decreasing population connectivity. Coral connectivity is the exchange of individual corals and their genes among geographically separated sub-populations, and the ability of corals to recover after disturbances (such as those caused by climate change) depends heavily on it. To make protections more effective, the spacing between protected areas should be reduced to ensure reef connectivity.

Global Coral Reef Monitoring Network (GCRMN). (2021, October). Status of Coral Reefs of the World: 2020 Report (Sixth Edition). GCRMN. PDF.
The ocean's coral reef coverage has declined by 14% since 2009, mainly because of climate change. This decline is a cause for major concern, as corals do not have enough time to recover between mass bleaching events.

Principe, S. C., Acosta, A. L., Andrade, J. E., & Lotufo, T. (2021). Predicted Shifts in the Distributions of Atlantic Reef-Building Corals in the Face of Climate Change. Frontiers in Marine Science, 912.
Certain coral species play a special role as reef builders, and changes in their distribution due to climate change come with cascading ecosystem effects. This study covers current and future projections for three Atlantic reef-builder species that are essential to overall ecosystem health. The coral reefs of the Atlantic Ocean require urgent conservation actions and better governance to ensure their survival and revival through climate change.

Brown, K., Bender-Champ, D., Kenyon, T., Rémond, C., Hoegh-Guldberg, O., & Dove, S. (2019, February 20). Temporal effects of ocean warming and acidification on coral-algal competition. Coral Reefs, 38(2), 297-309. Retrieved from: link.springer.com/article/10.1007/s00338-019-01775-y
Coral reefs and algae are essential to ocean ecosystems, and they compete with one another for limited resources. Warming water and acidification resulting from climate change are altering this competition. Tests were conducted to offset the combined effects of ocean warming and acidification, but even enhanced photosynthesis was not enough, and both corals and algae showed reduced survivorship, calcification, and photosynthetic ability.

Bruno, J., Côté, I., & Toth, L. (2019, January). Climate Change, Coral Loss, and the Curious Case of the Parrotfish Paradigm: Why Don't Marine Protected Areas Improve Reef Resilience? Annual Review of Marine Science, 11, 307-334. Retrieved from: annualreviews.org/doi/abs/10.1146/annurev-marine-010318-095300
Reef-building corals are being devastated by climate change. To combat this, marine protected areas were established, and protection of herbivorous fish followed. The authors posit that these strategies have had little effect on overall coral resilience because the corals' main stressor is rising ocean temperature. To save reef-building corals, efforts need to go beyond the local level; anthropogenic climate change must be tackled head-on, as it is the root cause of global coral decline.
Cheal, A., MacNeil, A., Emslie, M., & Sweatman, H. (2017, January 31). The threat to coral reefs from more intense cyclones under climate change. Global Change Biology. Retrieved from: onlinelibrary.wiley.com/doi/abs/10.1111/gcb.13593
Climate change boosts the energy of the cyclones that destroy coral. While cyclone frequency is not likely to increase, cyclone intensity will rise as the climate warms. The increase in cyclone intensity will accelerate coral reef destruction and slow post-cyclone recovery, because cyclones obliterate the biodiversity on which recovery depends.

Hughes, T., Barnes, M., Bellwood, D., Cinner, J., Cumming, G., Jackson, J., & Scheffer, M. (2017, May 31). Coral reefs in the Anthropocene. Nature, 546, 82-90. Retrieved from: nature.com/articles/nature22901
Reefs are degrading rapidly in response to a series of anthropogenic drivers, and returning reefs to their past configuration is therefore not an option. To combat reef degradation, this article calls for radical changes in science and management to steer reefs through this era while maintaining their biological function.

Hoegh-Guldberg, O., Poloczanska, E., Skirving, W., & Dove, S. (2017, May 29). Coral Reef Ecosystems under Climate Change and Ocean Acidification. Frontiers in Marine Science. Retrieved from: frontiersin.org/articles/10.3389/fmars.2017.00158/full
Studies have begun to predict the elimination of most warm-water coral reefs by 2040-2050 (although cold-water corals are at lower risk). The authors assert that unless rapid advances are made in emissions reduction, communities that depend on coral reefs to survive are likely to face poverty, social disruption, and regional insecurity.

Hughes, T., Kerry, J., & Wilson, S. (2017, March 16). Global warming and recurrent mass bleaching of corals. Nature, 543, 373-377. Retrieved from: nature.com/articles/nature21707
Recent recurrent mass coral bleaching events have varied significantly in severity. Using surveys of Australian reefs and sea surface temperatures, the article shows that water quality and fishing pressure had minimal effects on bleaching in 2016, suggesting that local conditions provide little protection against extreme temperatures.

Torda, G., Donelson, J., Aranda, M., Barshis, D., Bay, L., Berumen, M., …, & Munday, P. (2017). Rapid adaptive responses to climate change in corals. Nature Climate Change, 7, 627-636. Retrieved from: nature.com/articles/nclimate3374
A coral reef's ability to adapt to climate change will be crucial to projecting its fate. This article examines transgenerational plasticity among corals and the role of epigenetics and coral-associated microbes in that process.

Anthony, K. (2016, November). Coral Reefs Under Climate Change and Ocean Acidification: Challenges and Opportunities for Management and Policy. Annual Review of Environment and Resources. Retrieved from: annualreviews.org/doi/abs/10.1146/annurev-environ-110615-085610
Considering the rapid degradation of coral reefs due to climate change and ocean acidification, this article suggests realistic goals for regional and local-scale management programs that could improve sustainability measures.

Hoey, A., Howells, E., Johansen, J., Hobbs, J.P., Messmer, V., McCowan, D.W., & Pratchett, M. (2016, May 18). Recent Advances in Understanding the Effects of Climate Change on Coral Reefs. Diversity.
Retrieved from: mdpi.com/1424-2818/8/2/12
Evidence suggests coral reefs may have some capacity to respond to warming, but it is unclear whether these adaptations can match the increasingly rapid pace of climate change. Moreover, the effects of climate change are being compounded by a variety of other anthropogenic disturbances, making it harder for corals to respond.

Ainsworth, T., Heron, S., Ortiz, J.C., Mumby, P., Grech, A., Ogawa, D., Eakin, M., & Leggat, W. (2016, April 15). Climate change disables coral bleaching protection on the Great Barrier Reef. Science, 352(6283), 338-342. Retrieved from: science.sciencemag.org/content/352/6283/338
The current character of ocean warming, which precludes acclimation, has resulted in increased bleaching and death of coral organisms. These effects were most extreme in the wake of the 2016 El Niño.

Graham, N., Jennings, S., MacNeil, A., Mouillot, D., & Wilson, S. (2015, February 5). Predicting climate-driven regime shifts versus rebound potential in coral reefs. Nature, 518, 94-97. Retrieved from: nature.com/articles/nature14140
Coral bleaching due to climate change is one of the major threats facing coral reefs. This article considers long-term reef responses to major climate-induced coral bleaching of Indo-Pacific corals and identifies the reef characteristics that favor rebound. The authors aim to use their findings to inform future best management practices.

Spalding, M. D., & Brown, B. (2015, November 13). Warm-water coral reefs and climate change. Science, 350(6262), 769-771. Retrieved from: https://science.sciencemag.org/content/350/6262/769
Coral reefs support huge marine life systems and provide critical ecosystem services for millions of people. However, known threats such as overfishing and pollution are being compounded by climate change, notably warming and ocean acidification, increasing the damage to coral reefs. This article provides a succinct overview of the effects of climate change on coral reefs.

Hoegh-Guldberg, O., Eakin, C.M., Hodgson, G., Sale, P.F., & Veron, J.E.N. (2015, December). Climate Change Threatens the Survival of Coral Reefs. ISRS Consensus Statement on Coral Bleaching & Climate Change. Retrieved from: https://www.icriforum.org/sites/default/files/2018%20ISRS%20Consensus%20Statement%20on%20Coral%20Bleaching%20%20Climate%20Change%20final_0.pdf
Coral reefs provide goods and services worth at least US$30 billion per year and support at least 500 million people worldwide. Due to climate change, reefs are under serious threat unless actions to curb carbon emissions globally are taken immediately. This statement was released in parallel with the Paris Climate Change Conference in December 2015.

8. The Effects of Climate Change on the Arctic and Antarctic

Sohail, T., Zika, J., Irving, D., and Church, J. (2022, February 24). Observed Poleward Freshwater Transport Since 1970. Nature, 602, 617-622. https://doi.org/10.1038/s41586-021-04370-w
Between 1970 and 2014 the intensity of the global water cycle increased by up to 7.4%, whereas previous modeling had suggested an increase of 2-4%. Warm freshwater is pulled toward the poles, changing our ocean's temperature, freshwater content, and salinity. The intensifying global water cycle is likely to make dry areas drier and wet areas wetter.

Moon, T.A., M.L. Druckenmiller, and R.L. Thoman (Eds.). (2021, December). Arctic Report Card: Update for 2021. NOAA.
https://doi.org/10.25923/5s0f-5163
The 2021 Arctic Report Card (ARC2021) and its accompanying video illustrate that rapid and pronounced warming continues to create cascading disruptions for Arctic marine life. Arctic-wide trends include tundra greening, increasing discharge from Arctic rivers, loss of sea ice volume, ocean noise, beaver range expansion, and glacier and permafrost hazards.

Strycker, N., Wethington, M., Borowicz, A., Forrest, S., Witharana, C., Hart, T., and Lynch, H. (2020). A Global Population Assessment of the Chinstrap Penguin (Pygoscelis antarctica). Scientific Reports, 10, 19474. https://doi.org/10.1038/s41598-020-76479-3
Chinstrap penguins are uniquely adapted to their Antarctic environment; however, researchers report population reductions in 45% of penguin colonies since the 1980s, and an expedition in January 2020 found a further 23 chinstrap penguin populations gone. While exact assessments are not yet available, the presence of abandoned nesting sites suggests the decline is widespread. It is believed that warming waters reduce the sea ice and the phytoplankton on which krill, the primary food of chinstrap penguins, depend. It has also been suggested that ocean acidification may affect the penguins' ability to reproduce.

Smith, B., Fricker, H., Gardner, A., Medley, B., Nilsson, J., Paolo, F., Holschuh, N., Adusumilli, S., Brunt, K., Csatho, B., Harbeck, K., Markus, T., Neumann, T., Siegfried, M., and Zwally, H. (2020, April). Pervasive Ice Sheet Mass Loss Reflects Competing Ocean and Atmosphere Processes. Science Magazine. DOI: 10.1126/science.aaz5845
NASA's Ice, Cloud and land Elevation Satellite-2 (ICESat-2), launched in 2018, is providing revolutionary data on glacial melt. The researchers found that between 2003 and 2019 the Greenland and Antarctic ice sheets lost enough ice to raise sea levels by 14 millimeters.
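A useful rule of thumb connects ice-sheet mass loss to global mean sea level: spreading one gigatonne of meltwater over the roughly 3.62 x 10^14 m^2 ocean surface raises it by about 1/362 of a millimeter. The sketch below applies that conversion; the total-loss figure is an illustrative assumption chosen to be consistent with the ~14 mm reported, not the paper's per-ice-sheet accounting.

```python
# Minimal sketch converting ice-sheet mass loss to global mean sea-level rise.
# The ocean area is a standard approximation; the loss total is an
# illustrative assumption consistent with the ~14 mm reported for 2003-2019.

OCEAN_AREA_M2 = 3.62e14  # approximate global ocean surface area

# 1 mm of sea level = area * 1e-3 m of water; 1 Gt of water = 1e9 m^3.
GT_PER_MM = OCEAN_AREA_M2 * 1e-3 / 1e9  # ~362 Gt per mm

total_loss_gt = 5_070  # assumed combined Greenland + Antarctic loss, in Gt
rise_mm = total_loss_gt / GT_PER_MM
print(f"{GT_PER_MM:.0f} Gt per mm; {total_loss_gt} Gt -> {rise_mm:.1f} mm")  # ~14 mm
```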
Retrieved from https://www.asoc.org/advocacy/climate-change-and-the-antarctic

This summary article provides an excellent overview of the effects of climate change on the Antarctic and on the marine species there. The West Antarctic Peninsula is one of the fastest warming areas on Earth, with only some areas of the Arctic Circle experiencing faster-rising temperatures. This rapid warming affects every level of the food web in Antarctic waters.

Katz, C. (2019, May 10). Alien Waters: Neighboring Seas Are Flowing into a Warming Arctic Ocean. Yale Environment 360. Retrieved from https://e360.yale.edu/features/alien-waters-neighboring-seas-are-flowing-into-a-warming-arctic-ocean

The article discusses the "Atlantification" and "Pacification" of the Arctic Ocean as warming waters allow new species to migrate northward, disrupting the ecosystem functions and lifecycles that have evolved over time within the Arctic Ocean.

MacGilchrist, G., Naveira Garabato, A.C., Brown, P.J., Jullion, L., Bacon, S., & Bakker, D.C.E. (2019, August 28). Reframing the carbon cycle of the subpolar Southern Ocean. Science Advances, 5(8), eaav6410. Retrieved from: https://doi.org/10.1126/sciadv.aav6410

Global climate is critically sensitive to physical and biogeochemical dynamics in the subpolar Southern Ocean, because it is there that deep, carbon-rich layers of the world ocean outcrop and exchange carbon with the atmosphere. Thus, how carbon uptake works there must be well understood as a means of understanding past and future climate change. Based on their research, the authors believe that the conventional framework for the subpolar Southern Ocean carbon cycle fundamentally misrepresents the drivers of regional carbon uptake. Observations in the Weddell Gyre show that the rate of carbon uptake is set by the interplay between the gyre's horizontal circulation and the mid-depth remineralization of organic carbon sourced from biological production in the central gyre.

Woodgate, R. (2018, January). Increases in the Pacific inflow to the Arctic from 1990 to 2015, and insights into seasonal trends and driving mechanisms from year-round Bering Strait mooring data. Progress in Oceanography, 160, 124-154. Retrieved from: https://www.sciencedirect.com/science/article/pii/S0079661117302215

Using data from year-round mooring buoys in the Bering Strait, the author established that the northward flow of water through the strait increased dramatically over 15 years, and that the change was not due to local wind or other individual weather events but to warming waters. The transport increase results from stronger northward flows (not fewer southward flow events), yielding a 150% increase in kinetic energy, presumably with impacts on bottom suspension, mixing, and erosion. It was also noted that, by 2015, the northward-flowing water was warmer than 0 degrees C on more days than at the beginning of the data set.

Stone, D. P. (2015). The Changing Arctic Environment. New York, New York: Cambridge University Press.

Since the industrial revolution, the Arctic environment has been undergoing unprecedented change due to human activity. The seemingly pristine Arctic environment also shows high levels of toxic chemicals, and the warming there has begun to have serious consequences for the climate in other parts of the world.
Told through the voice of an "Arctic Messenger," author David Stone examines how scientific monitoring and influential groups have led to international legal action to lessen the harm to the Arctic environment.

Wohlforth, C. (2004). The Whale and the Supercomputer: On the Northern Front of Climate Change. New York: North Point Press.

The Whale and the Supercomputer weaves the personal stories of scientists researching climate with the experiences of the Iñupiat of northern Alaska. The book gives equal weight to Iñupiat whaling practices and traditional knowledge and to data-driven measurements of snow, glacial melt, albedo (the light reflected by a planet), and the biological changes observable in animals and insects. This portrait of the two cultures allows non-scientists to relate to the earliest examples of climate change affecting the environment.

9. Ocean-Based Carbon Dioxide Removal (CDR)

Tyka, M., Van Arsdale, C., & Platt, J. (2022, January 3). CO2 Capture by Pumping Surface Acidity to the Deep Ocean. Energy & Environmental Science. DOI: 10.1039/d1ee01532j

New technologies such as alkalinity pumping have the potential to contribute to the portfolio of Carbon Dioxide Removal (CDR) technologies, although they are likely to be more expensive than on-shore methods due to the challenges of marine engineering. Significantly more research is necessary to assess the feasibility and risks associated with ocean alkalinity alterations and other removal techniques. Simulations and small-scale tests have limitations and cannot fully predict how CDR methods will affect the ocean ecosystem when scaled up to mitigate current CO2 emissions.

Castañón, L. (2021, December 16). An Ocean of Opportunity: Exploring the Potential Risks and Rewards of Ocean-based Solutions to Climate Change. Woods Hole Oceanographic Institution. Retrieved from: https://www.whoi.edu/oceanus/feature/an-ocean-of-opportunity/

The ocean is an important part of the natural carbon sequestration process, diffusing excess carbon from the air into the water and eventually sinking it to the ocean floor. Some carbon dioxide bonds with weathered rock or shells, locking it into a new form, and marine algae take up more carbon, integrating it into the natural biological cycle. Carbon Dioxide Removal (CDR) solutions intend to mimic or enhance these natural carbon storage cycles. This article highlights the risks and variables that will affect the success of CDR projects.

Cornwall, W. (2021, December 15). To Draw Down Carbon and Cool off the Planet, Ocean Fertilization Gets Another Look. Science, 374. Retrieved from: https://www.science.org/content/article/draw-down-carbon-and-cool-planet-ocean-fertilization-gets-another-look

Ocean fertilization is a politically charged form of Carbon Dioxide Removal (CDR) that used to be viewed as reckless. Now, researchers are planning to pour 100 tons of iron across 1,000 square kilometers of the Arabian Sea. An important open question is how much of the absorbed carbon actually makes it to the deep ocean rather than being consumed by other organisms and re-emitted into the environment. Skeptics of the fertilization method note that recent surveys of 13 past fertilization experiments found only one that increased deep-ocean carbon levels. Although the potential consequences worry some, others believe that gauging those risks is itself a reason to move forward with the research.

National Academies of Sciences, Engineering, and Medicine. (2021, December).
A Research Strategy for Ocean-Based Carbon Dioxide Removal and Sequestration. Washington, DC: The National Academies Press. https://doi.org/10.17226/26278

This report recommends that the United States undertake a $125 million research program dedicated to advancing understanding of ocean-based CO2 removal approaches and the challenges they face, including economic and social obstacles. The report assesses six ocean-based Carbon Dioxide Removal (CDR) approaches: nutrient fertilization, artificial upwelling and downwelling, seaweed cultivation, ecosystem recovery, ocean alkalinity enhancement, and electrochemical processes. There are still conflicting opinions on CDR approaches within the scientific community, but this report marks a notable step in the conversation for the bold recommendations laid out by ocean scientists.

The Aspen Institute. (2021, December 8). Guidance for Ocean-Based Carbon Dioxide Removal Projects: A Pathway to Developing a Code of Conduct. The Aspen Institute. Retrieved from: https://www.aspeninstitute.org/wp-content/uploads/files/content/docs/pubs/120721_Ocean-Based-CO2-Removal_E.pdf

Ocean-based Carbon Dioxide Removal (CDR) projects could be more advantageous than land-based projects because of space availability, the possibility of co-located projects, and co-benefits (including mitigating ocean acidification, food production, and biofuel production). However, CDR projects face challenges, including poorly studied potential environmental impacts, uncertain regulations and jurisdictions, operational difficulty, and varying rates of success. More small-scale research is necessary to define and verify carbon dioxide removal potential, catalog potential environmental and societal externalities, and account for governance, funding, and cessation issues.

Batres, M., Wang, F. M., Buck, H., Kapila, R., Kosar, U., Licker, R., … & Suarez, V. (2021, July). Environmental and Climate Justice and Technological Carbon Removal. The Electricity Journal, 34(7), 107002.

Carbon Dioxide Removal (CDR) methods should be implemented with justice and equity in mind, and the local communities where projects may be located should be at the core of decision-making. Communities often lack the resources and knowledge to participate and invest in CDR efforts. Environmental justice should remain at the forefront of project progression to avoid adverse effects on already overburdened communities.

Fleming, A. (2021, June 23). Cloud Spraying and Hurricane Slaying: How Ocean Geoengineering Became the Frontier of the Climate Crisis. The Guardian. Retrieved from: https://www.theguardian.com/environment/2021/jun/23/cloud-spraying-and-hurricane-slaying-could-geoengineering-fix-the-climate-crisis

Tom Green hopes to sink a trillion tonnes of CO2 to the bottom of the ocean by dropping volcanic rock sand into the sea. Green claims that if the sand were deposited on 2% of the world's coastlines, it would capture 100% of our current global annual carbon emissions. The scale of CDR needed to tackle current emission levels makes all such projects difficult to scale up. Alternatively, rewilding coastlines with mangroves, salt marshes, and seagrasses both restores ecosystems and stores CO2 without the major risks of technological CDR interventions.

Gertner, J. (2021, June 24). Has the Carbontech Revolution Begun? The New York Times.

Direct carbon capture (DCC) technology exists, but it remains expensive.
The carbontech industry is now beginning to resell captured carbon to businesses that can use it in their products and, in turn, shrink their emissions footprints. Carbon-neutral or carbon-negative products could fall under a larger category of carbon-utilization products that make carbon capture profitable while appealing to the market. Although climate change will not be fixed by CO2 yoga mats and sneakers, carbon utilization is another small step in the right direction.

Hirschlag, A. (2021, June 8). To Combat Climate Change, Researchers Want to Pull Carbon Dioxide From the Ocean and Turn It Into Rock. Smithsonian. Retrieved from: https://www.smithsonianmag.com/innovation/combat-climate-change-researchers-want-to-pull-carbon-dioxide-from-ocean-and-turn-it-into-rock-180977903/

One proposed Carbon Dioxide Removal (CDR) technique is to introduce an electrically charged mesh, or hydroxide (an alkaline material), into the ocean to trigger a chemical reaction that locks carbon dioxide into carbonate limestone rock. Some of the rock could be used for construction, but most would likely remain in the ocean, where the limestone output could upset local marine ecosystems, smothering plant life and significantly altering seafloor habitats. However, researchers point out that the output water will be slightly more alkaline, which has the potential to mitigate the effects of ocean acidification in the treatment area. Additionally, hydrogen gas would be a byproduct that could be sold to help offset installation costs. Further research is necessary to demonstrate that the technology is both technically and economically viable at a large scale.

Healey, P., Scholes, R., Lefale, P., & Yanda, P. (2021, May). Governing Net-Zero Carbon Removals to Avoid Entrenching Inequities. Frontiers in Climate, 3, 38. https://doi.org/10.3389/fclim.2021.672357

Carbon Dioxide Removal (CDR) technology, like climate change itself, is embedded with risks and inequities, and this article includes actionable recommendations for addressing them. Currently, the emerging knowledge of and investments in CDR technology are concentrated in the global north. If this pattern continues, it will only exacerbate global environmental injustices and the accessibility gap around climate change and climate solutions.

Meyer, A., & Spalding, M. J. (2021, March). A Critical Analysis of the Ocean Effects of Carbon Dioxide Removal via Direct Air and Ocean Capture – Is it a Safe and Sustainable Solution? The Ocean Foundation.

Emerging Carbon Dioxide Removal (CDR) technologies could play a supporting role in the larger transition away from burning fossil fuels toward a cleaner, equitable, sustainable energy grid. Among these technologies are direct air capture (DAC) and direct ocean capture (DOC), which use machinery to extract CO2 from the atmosphere or ocean and either transport it to underground storage facilities or use the captured carbon to recover oil from commercially depleted sources. Currently, carbon capture technology is very expensive and poses risks to ocean biodiversity, ocean and coastal ecosystems, and coastal communities, including Indigenous peoples. Nature-based solutions, including mangrove restoration, regenerative agriculture, and reforestation, remain beneficial for biodiversity, society, and long-term carbon storage without many of the risks that accompany technological DAC/DOC.
As the risks and feasibility of carbon removal technologies are explored, it is important to "first, do no harm" and to ensure that adverse effects are not inflicted on our precious land and ocean ecosystems.

Center for International Environmental Law. (2021, March 18). Ocean Ecosystems & Geoengineering: An Introductory Note.

Nature-based Carbon Dioxide Removal (CDR) techniques in the marine context include protecting and restoring coastal mangroves, seagrass beds, and kelp forests. Even though these pose fewer risks than technological approaches, they can still inflict harm on marine ecosystems. Technological marine-based CDR approaches seek to modify ocean chemistry to take up more CO2; the most widely discussed examples are ocean fertilization and ocean alkalinization. The focus, the authors argue, must be on preventing human-caused carbon emissions rather than on unproven techniques for lessening the world's emissions.

Gattuso, J. P., Williamson, P., Duarte, C. M., & Magnan, A. K. (2021, January 25). The Potential for Ocean-Based Climate Action: Negative Emissions Technologies and Beyond. Frontiers in Climate. https://doi.org/10.3389/fclim.2020.575716

Of the many types of Carbon Dioxide Removal (CDR), the four primary ocean-based methods are: marine bioenergy with carbon capture and storage, restoring and increasing coastal vegetation, enhancing open-ocean productivity, and enhancing weathering and alkalinization. This report analyzes the four types and argues for increased priority for CDR research and development. The techniques still come with many uncertainties, but they have the potential to be highly effective on the pathway to limiting climate warming.

Buck, H., Aines, R., et al. (2021). Concepts: Carbon Dioxide Removal Primer. Retrieved from: https://cdrprimer.org/read/concepts

The authors define Carbon Dioxide Removal (CDR) as any activity that removes CO2 from the atmosphere and durably stores it in geological, terrestrial, or ocean reserves, or in products. CDR is distinct from geoengineering: CDR techniques remove CO2 from the atmosphere, whereas geoengineering focuses on reducing the symptoms of climate change. Many other important terms are defined in this text, and it serves as a helpful supplement to the larger conversation.

Keith, H., Vardon, M., Obst, C., Young, V., Houghton, R. A., & Mackey, B. (2021). Evaluating Nature-Based Solutions for Climate Mitigation and Conservation Requires Comprehensive Carbon Accounting. Science of The Total Environment, 769, 144341. http://dx.doi.org/10.1016/j.scitotenv.2020.144341

Nature-based Carbon Dioxide Removal (CDR) solutions are a co-beneficial approach to the climate crisis that must account for both carbon stocks and flows. Flow-based carbon accounting incentivizes natural solutions while highlighting the risks of burning fossil fuels.

Bertram, C., & Merk, C. (2020, December 21). Public Perceptions of Ocean-Based Carbon Dioxide Removal: The Nature-Engineering Divide? Frontiers in Climate, 31. https://doi.org/10.3389/fclim.2020.594194

Over the past 15 years, public acceptance of Carbon Dioxide Removal (CDR) techniques has remained low for climate-engineering initiatives compared with nature-based solutions. Perceptions research has mainly focused on a global perspective for climate-engineering approaches or a local perspective for blue-carbon approaches, and perceptions vary greatly according to location, education, income, and other factors.
Both technological and nature-based approaches are likely to contribute to the portfolio of CDR solutions ultimately deployed, so it is important to consider the perspectives of the groups that will be directly affected.

ClimateWorks. (2020, December 15). Ocean Carbon Dioxide Removal (CDR). ClimateWorks. Retrieved from: https://youtu.be/brl4-xa9DTY

This four-minute animated video describes the natural ocean carbon cycle and introduces common Carbon Dioxide Removal (CDR) techniques. It should be noted that the video mentions neither the environmental and societal risks of technological CDR methods nor alternative nature-based solutions.

Brent, K., Burns, W., & McGee, J. (2019, December 2). Governance of Marine Geoengineering: Special Report. Centre for International Governance Innovation. Retrieved from: https://www.cigionline.org/publications/governance-marine-geoengineering/

The rise of marine geoengineering technologies is likely to place new demands on our international legal systems to govern the associated risks and opportunities. Some existing policies on marine activities could apply to geoengineering; however, those rules were created and negotiated for other purposes. The London Protocol's 2013 amendment on ocean dumping is the most relevant framework for marine geoengineering, but more international agreements are necessary to fill the governance gap.

Gattuso, J. P., Magnan, A. K., Bopp, L., Cheung, W. W., Duarte, C. M., Hinkel, J., & Rau, G. H. (2018, October 4). Ocean Solutions to Address Climate Change and Its Effects on Marine Ecosystems. Frontiers in Marine Science, 337. https://doi.org/10.3389/fmars.2018.00337

It is important to reduce climate-related impacts on marine ecosystems without compromising ecosystem protection in the process. As such, the authors of this study analyzed 13 ocean-based measures to reduce ocean warming, ocean acidification, and sea-level rise, including the Carbon Dioxide Removal (CDR) methods of fertilization, alkalinization, land-ocean hybrid methods, and reef restoration. Moving forward, deploying the various methods at a smaller scale would reduce the risks and uncertainties associated with large-scale deployment.

National Research Council. (2015). Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration. National Academies Press.

The deployment of any Carbon Dioxide Removal (CDR) technique is accompanied by many uncertainties: effectiveness, cost, governance, externalities, co-benefits, safety, equity, and more. This book addresses those uncertainties, important considerations, and recommendations for moving forward, and it includes a good primary analysis of the main emerging CDR technologies. CDR techniques may never scale up to remove a substantial amount of CO2, but they still play an important part in the journey to net zero, and they deserve attention.

The London Protocol. (2013, October 18). Amendment to Regulate the Placement of Matter for Ocean Fertilization and other Marine Geoengineering Activities. Annex 4.

The 2013 amendment to the London Protocol restricts the dumping of wastes or other matter into the sea in order to control ocean fertilization and other marine geoengineering techniques. It is the first international amendment addressing any geoengineering technique, and it will shape the types of carbon dioxide removal projects that can be introduced and tested in the environment.

10. Climate Change and Diversity, Equity, Inclusion, and Justice (DEIJ)

Phillips, T.
and King, F. (2021). Top 5 Resources for Community Engagement from a DEIJ Perspective. The Chesapeake Bay Program's Diversity Workgroup. PDF.

The Chesapeake Bay Program's Diversity Workgroup has put together a resource guide for integrating DEIJ into community engagement projects. The fact sheet includes links to information on environmental justice, implicit bias, and racial equity, as well as definitions for key terms. It is important that DEIJ be integrated into a project from the initial development phase to ensure the meaningful involvement of all people and communities concerned.

Gardiner, B. (2020, July 16). Ocean Justice: Where Social Equity and the Climate Fight Intersect. Interview with Ayana Elizabeth Johnson. Yale Environment 360.

Ocean justice sits at the intersection of ocean conservation and social justice, and the problems that communities will face from climate change are not going away. Solving the climate crisis is not just an engineering problem but a social-norms problem that leaves many out of the conversation. The full interview is highly recommended and is available at the following link: https://e360.yale.edu/features/ocean-justice-where-social-equity-and-the-climate-fight-intersect.

Rush, E. (2018). Rising: Dispatches from the New American Shore. Canada: Milkweed Editions.

Told in a first-person introspective style, author Elizabeth Rush discusses the consequences vulnerable communities face from climate change. The journalistic narrative weaves together the true stories of communities in Florida, Louisiana, Rhode Island, California, and New York that have experienced the devastating effects of hurricanes, extreme weather, and rising tides driven by climate change.

11. Policy and Government Publications

The United Nations. (2015). The Paris Agreement. Bonn, Germany: United Nations Framework Convention on Climate Change secretariat, U.N. Climate Change. Retrieved from: https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement

The Paris Agreement came into force on 4 November 2016. Its intent is to unite nations in an ambitious effort to limit climate change and adapt to its effects. The central goal is to keep global temperature rise well below 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels and to limit the further temperature increase to 1.5 degrees Celsius (2.7 degrees Fahrenheit). These goals have been codified by each party through specific Nationally Determined Contributions (NDCs), which require each party to regularly report on its emissions and implementation efforts. To date, 196 Parties have ratified the agreement, though it should be noted that the United States, an original signatory, has given notice that it will withdraw. Please note: as the most comprehensive international commitment affecting climate change policy, this source is listed out of chronological order.

Intergovernmental Panel on Climate Change, Working Group II. (2022). Climate Change 2022: Impacts, Adaptation, and Vulnerability. Summary for Policymakers. IPCC. PDF.

This report is a high-level summary for policymakers of Working Group II's contribution to the IPCC Sixth Assessment Report. The assessment integrates knowledge more strongly than earlier assessments, and it addresses the climate change impacts, risks, and adaptation challenges that are concurrently unfolding.
The authors have issued a "dire warning" about the current and future state of our environment.

United Nations Environment Programme. (2021). Emissions Gap Report 2021. United Nations. PDF.

This report shows that the national climate pledges currently in place put the world on track for a global temperature rise of 2.7 degrees Celsius by the end of the century. To keep global temperature rise below 1.5 degrees Celsius, in line with the goal of the Paris Agreement, the world needs to cut global greenhouse gas emissions in half within the next eight years. In the short term, reducing methane emissions from fossil fuels, waste, and agriculture has the potential to curb warming, and clearly defined carbon markets could also help the world meet its emission goals.

United Nations Framework Convention on Climate Change. (2021, November). Glasgow Climate Pact. United Nations. PDF.

The Glasgow Climate Pact calls for increased climate action beyond the 2015 Paris Agreement to keep the goal of a 1.5°C temperature rise within reach. The pact was signed by nearly 200 countries; it is the first climate agreement to explicitly plan for reduced coal usage, and it sets clear rules for a global carbon market.

Subsidiary Body for Scientific and Technological Advice. (2021). Ocean and Climate Change Dialogue to Consider How to Strengthen Adaptation and Mitigation Action. The United Nations. PDF.

This report from the Subsidiary Body for Scientific and Technological Advice (SBSTA) is the first summary of what will now be an annual ocean and climate change dialogue, a reporting requirement set by COP 25. The dialogue was subsequently welcomed by the 2021 Glasgow Climate Pact, and it highlights the importance of governments strengthening their understanding of, and action on, the ocean and climate change.

Intergovernmental Oceanographic Commission. (2021). The United Nations Decade of Ocean Science for Sustainable Development (2021-2030): Implementation Plan, Summary. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000376780

The United Nations has declared 2021-2030 to be the Ocean Decade. Throughout the decade, the United Nations is working beyond the capacities of any single nation to collectively align research, investments, and initiatives around global priorities. Over 2,500 stakeholders contributed to the development of the UN Decade of Ocean Science for Sustainable Development plan, which sets scientific priorities intended to jumpstart ocean-science-based solutions for sustainable development. Updates on Ocean Decade initiatives are available on the Ocean Decade website.

The Law of the Sea and Climate Change. (2020). In E. Johansen, S. Busch, & I. Jakobsen (Eds.), The Law of the Sea and Climate Change: Solutions and Constraints (pp. i-ii). Cambridge: Cambridge University Press.

There is a strong link between solutions to climate change and the influence of international climate law and the law of the sea. Although the two bodies of law have largely developed through separate legal institutions, addressing climate change with marine legislation can achieve co-beneficial objectives.

United Nations Environment Programme. (2020, June 9). Gender, Climate & Security: Sustaining Inclusive Peace on the Frontlines of Climate Change. United Nations. https://www.unenvironment.org/resources/report/gender-climate-security-sustaining-inclusive-peace-frontlines-climate-change

Climate change is exacerbating conditions that threaten peace and security.
Gender norms and power structures play a critical role in how people are affected by and respond to the growing crisis. The report recommends integrating complementary policy agendas, scaling up integrated programming, increasing targeted financing, and expanding the evidence base on the gender dimensions of climate-related security risks.

United Nations Water. (2020, March 21). The United Nations World Water Development Report 2020: Water and Climate Change. United Nations Water. https://www.unwater.org/publications/world-water-development-report-2020/

Climate change will affect the availability, quality, and quantity of water for basic human needs, threatening food security, human health, urban and rural settlements, and energy production, while increasing the frequency and magnitude of extreme events such as heatwaves and storm surges. Water-related extremes exacerbated by climate change increase risks to water, sanitation, and hygiene (WASH) infrastructure. Opportunities to address the growing climate and water crisis include building systematic adaptation and mitigation planning into water investments, which will make those investments and associated activities more appealing to climate financiers. The changing climate will affect not just marine life but nearly all human activities.

Blunden, J., & Arndt, D. (2020). State of the Climate in 2019. American Meteorological Society. NOAA's National Centers for Environmental Information. https://journals.ametsoc.org/bams/article-pdf/101/8/S1/4988910/2020bamsstateoftheclimate.pdf

NOAA reported that 2019 was among the hottest years since records began in the mid-1800s. 2019 also saw record levels of greenhouse gases, rising sea levels, and increased temperatures in every region of the world. This was the first year the report included marine heatwaves, reflecting their growing prevalence. The report supplements the Bulletin of the American Meteorological Society.

Ocean and Climate. (2019, December). Policy Recommendations: A Healthy Ocean, a Protected Climate. The Ocean and Climate Platform. https://ocean-climate.org/?page_id=8354&lang=en

Building on the commitments made at COP21 and in the 2015 Paris Agreement, this report lays out the steps toward a healthy ocean and a protected climate. Countries should begin with mitigation, then adaptation, and finally embrace sustainable finance. Recommended actions include: limiting the rise in temperature to 1.5°C; ending subsidies for fossil fuel production; developing marine renewable energies; accelerating adaptation measures; boosting efforts to end illegal, unreported and unregulated (IUU) fishing by 2020; adopting a legally binding agreement for the fair conservation and sustainable management of biodiversity in the high seas; pursuing a target of 30% of the ocean protected by 2030; and strengthening international transdisciplinary research on ocean-climate themes, including a socio-ecological dimension.

World Health Organization. (2019, April 18). Health, Environment and Climate Change: WHO Global Strategy on Health, Environment and Climate Change: The Transformation Needed to Improve Lives and Well-being Sustainably through Healthy Environments. World Health Organization, Seventy-Second World Health Assembly A72/15, Provisional agenda item 11.6.

Known avoidable environmental risks cause about one-quarter of all deaths and disease worldwide, a steady 13 million deaths each year.
Climate change is increasingly responsible, but the threat that it poses to human health can be mitigated. Actions must focus on the upstream determinants of health, the determinants of climate change, and the environment in an integrated approach, adjusted to local circumstances and supported by adequate governance mechanisms.

United Nations Development Programme. (2019). UNDP's Climate Promise: Safeguarding Agenda 2030 Through Bold Climate Action. United Nations Development Programme. PDF.

In order to achieve the goals set forth in the Paris Agreement, the United Nations Development Programme will support 100 countries in an inclusive and transparent engagement process around their Nationally Determined Contributions (NDCs). The service offering includes support for building political will and societal ownership at national and sub-national levels; reviewing and updating existing targets, policies, and measures; incorporating new sectors and/or greenhouse gas standards; assessing costs and investment opportunities; and monitoring progress and strengthening transparency.

Pörtner, H.O., Roberts, D.C., Masson-Delmotte, V., Zhai, P., Tignor, M., Poloczanska, E., …, & Weyer, N. (2019). Special Report on the Ocean and Cryosphere in a Changing Climate. Intergovernmental Panel on Climate Change. PDF.

The Intergovernmental Panel on Climate Change released this special report, authored by more than 100 scientists from over 36 countries, on the enduring changes in the ocean and the cryosphere (the frozen parts of the planet). The key findings are that major changes in high mountain areas will affect downstream communities, and that melting glaciers and ice sheets are contributing to an increasing rate of sea-level rise, predicted to reach 30-60 cm (11.8-23.6 inches) by 2100 if greenhouse gas emissions are sharply curbed and 60-110 cm (23.6-43.3 inches) if they continue their current rise. There will be more frequent extreme sea-level events and changes to ocean ecosystems through warming and acidification, while Arctic sea ice is declining in every month of the year and permafrost is thawing. The report finds that strongly reducing greenhouse gas emissions, protecting and restoring ecosystems, and careful resource management make it possible to preserve the ocean and cryosphere, but action must be taken.

The U.S. Department of Defense. (2019, January). Report on Effects of a Changing Climate to the Department of Defense. Office of the Under Secretary of Defense for Acquisition and Sustainment. Retrieved from: https://climateandsecurity.files.wordpress.com/2019/01/sec_335_ndaa-report_effects_of_a_changing_climate_to_dod.pdf

The U.S. Department of Defense considers the national security risks associated with a changing climate and subsequent events such as recurrent flooding, drought, desertification, wildfires, and thawing permafrost. The report finds that climate resilience must be incorporated into planning and decision-making processes and cannot act as a separate program, and that climate-related events create significant security vulnerabilities for operations and missions.

Wuebbles, D.J., Fahey, D.W., Hibbard, K.A., Dokken, D.J., Stewart, B.C., & Maycock, T.K. (2017). Climate Science Special Report: Fourth National Climate Assessment, Volume I. Washington, D.C., USA: U.S. Global Change Research Program.

This report is part of the National Climate Assessment, which the U.S.
Congress has ordered to be conducted every four years, and it is designed to be an authoritative assessment of the science of climate change with a focus on the United States. Some key findings include the following: the last century is the warmest in the history of modern civilization; human activity, particularly the emission of greenhouse gases, is the dominant cause of the observed warming; the global average sea level has risen by 7 inches in the last century; tidal flooding is increasing and sea levels are expected to continue to rise; heatwaves will be more frequent, as will forest fires; and the magnitude of change will depend heavily on global levels of greenhouse gas emissions.

Cicin-Sain, B. (2015, April). Goal 14—Conserve and Sustainably Use Oceans, Seas and Marine Resources for Sustainable Development. United Nations Chronicle, LI(4). Retrieved from: http://unchronicle.un.org/article/goal-14-conserve-and-sustainably-useoceans-seas-and-marine-resources-sustainable/

Goal 14 of the United Nations Sustainable Development Goals (UN SDGs) highlights the need for conservation of the ocean and sustainable use of marine resources. The most ardent support for ocean management comes from the small island developing states and least developed countries that are adversely affected by ocean negligence. Programs that address Goal 14 also serve other UN SDGs touching on poverty, food security, energy, economic growth, infrastructure, reduction of inequality, cities and human settlements, sustainable consumption and production, climate change, biodiversity, and means of implementation and partnerships.

United Nations. (2015). Goal 13—Take Urgent Action to Combat Climate Change and its Impacts. United Nations Sustainable Development Goals Knowledge Platform. Retrieved from: https://sustainabledevelopment.un.org/sdg13

Goal 13 of the United Nations Sustainable Development Goals (UN SDGs) highlights the need to address the increasing effects of greenhouse gas emissions. Since the Paris Agreement, many countries have taken positive steps toward climate finance through nationally determined contributions, but there remains a significant need for action on mitigation and adaptation, particularly for least developed countries and small island nations.

U.S. Department of Defense. (2015, July 23). National Security Implications of Climate-Related Risks and a Changing Climate. Senate Committee on Appropriations. Retrieved from: https://dod.defense.gov/Portals/1/Documents/pubs/150724-congressional-report-on-national-implications-of-climate-change.pdf

The Department of Defense sees climate change as a present security threat with observable effects in the shocks and stressors affecting vulnerable nations and communities, including in the United States. The risks themselves vary, but all assessments agree on the significance of climate change.

Pachauri, R.K., & Meyer, L.A. (2014). Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Intergovernmental Panel on Climate Change, Geneva, Switzerland. Retrieved from: https://www.ipcc.ch/report/ar5/syr/

Human influence on the climate system is clear, and recent anthropogenic emissions of greenhouse gases are the highest in history. Effective adaptation and mitigation options are available in every major sector, but responses will depend on policies and measures across the international, national, and local levels.
The 2014 report has become a definitive study of climate change.

Hoegh-Guldberg, O., Cai, R., Poloczanska, E., Brewer, P., Sundby, S., Hilmi, K., …, & Jung, S. (2014). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part B: Regional Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK and New York, New York, USA: Cambridge University Press. 1655-1731. Retrieved from: https://www.ipcc.ch/site/assets/uploads/2018/02/WGIIAR5-Chap30_FINAL.pdf

The ocean is essential to the Earth's climate: it has absorbed 93% of the energy produced by the enhanced greenhouse effect and approximately 30% of anthropogenic carbon dioxide from the atmosphere. Global average sea surface temperatures increased between 1950 and 2009, and ocean chemistry is changing as the uptake of CO2 lowers the overall ocean pH. These, along with many other effects of anthropogenic climate change, have a plethora of detrimental repercussions for the ocean, marine life, the environment, and humans. Please note this chapter is related to the Synthesis Report detailed above but is specific to the ocean.

Griffis, R., & Howard, J. (Eds.). (2013). Oceans and Marine Resources in a Changing Climate: A Technical Input to the 2013 National Climate Assessment. The National Oceanic and Atmospheric Administration. Washington, D.C., USA: Island Press.

As a companion to the 2013 National Climate Assessment, this document looks at the technical considerations and findings specific to the ocean and marine environment. The report argues that climate-driven physical and chemical changes are causing significant harm and will adversely affect the ocean's features, and thus the Earth's ecosystems. There remain many opportunities to adapt and address these problems, including increased international partnership, sequestration opportunities, and improved marine policy and management. Supported by in-depth research, this report provides one of the most thorough investigations of the consequences of climate change for the ocean.

Warner, R., & Schofield, C. (Eds.). (2012). Climate Change and the Oceans: Gauging the Legal and Policy Currents in the Asia Pacific and Beyond. Northampton, Massachusetts: Edward Elgar Publishing, Inc.

This collection of essays looks at the nexus of governance and climate change within the Asia-Pacific region. The book begins by discussing the physical effects of climate change, including effects on biodiversity, and their policy implications. It then moves into discussions of maritime jurisdiction in the Southern Ocean and Antarctic, country and maritime boundaries, and a security analysis. The final chapters discuss the implications of greenhouse gases and opportunities for mitigation. Climate change presents an opportunity for global cooperation; it also signals the need to monitor and regulate marine geo-engineering activities undertaken in the name of climate mitigation, and to develop a coherent international, regional, and national policy response that recognizes the ocean's role in climate change.

United Nations. (1997, December 11). The Kyoto Protocol. United Nations Framework Convention on Climate Change. Retrieved from: https://unfccc.int/kyoto_protocol

The Kyoto Protocol is an international commitment to set internationally binding targets for greenhouse gas emission reduction. The protocol was adopted in 1997 and entered into force in 2005.
The Doha Amendment, adopted in December 2012, extended the protocol to December 31, 2020 and revised the list of greenhouse gases (GHG) that must be reported by each party.

12. Proposed Solutions

Ruffo, S. (2021, October). The Ocean's Ingenious Climate Solutions. TED. https://youtu.be/_VVAu8QsTu8

We must think of the ocean as a source of solutions rather than just another part of the environment we need to save. The ocean is currently what keeps the climate stable enough to support humanity, and it is an integral part of the fight against climate change. Natural climate solutions are available by working with our water systems while we simultaneously reduce our greenhouse gas emissions.

Carlson, D. (2020, October 14). Within 20 Years, Rising Sea Levels Will Hit Nearly Every Coastal County – and Their Bonds. Sustainable Investing.

Increased credit risks from more frequent and severe flooding could hurt municipalities, an issue that has been exacerbated by the COVID-19 crisis. States with large coastal populations and economies face multi-decade credit risks due to weaker economies and the high costs of sea-level rise. The U.S. states most at risk are Florida, New Jersey, and Virginia.

Johnson, A. (2020, June 8). To Save the Climate, Look to the Ocean. Scientific American. PDF.

The ocean is in dire straits due to human activity, but there are opportunities in offshore renewable energy, carbon sequestration, algae biofuel, and regenerative ocean farming. The ocean is simultaneously a threat to the millions living on the coast via flooding, a victim of human activity, and an opportunity to save the planet. A Blue New Deal is needed in addition to the proposed Green New Deal to address the climate crisis and turn the ocean from a threat into a solution.

Ceres. (2020, June 1). Addressing Climate as a Systemic Risk: A Call to Action. Ceres. https://www.ceres.org/sites/default/files/2020-05/Financial%20Regulator%20Executive%20Summary%20FINAL.pdf

Climate change is a systemic risk due to its potential to destabilize capital markets, which could lead to serious negative consequences for the economy. Ceres provides over 50 recommendations to key financial regulators for action on climate change. These include: acknowledging that climate change poses risks to financial market stability; requiring financial institutions to conduct climate stress tests; requiring banks to assess and disclose climate risks, such as carbon emissions from their lending and investment activities; integrating climate risk into community reinvestment processes, particularly in low-income communities; and joining efforts to foster coordinated action on climate risks.

Gattuso, J., Magnan, A., Gallo, N., Herr, D., Rochette, J., Vallejo, L., & Williamson, P. (2019, November). Opportunities for Increasing Ocean Action in Climate Strategies: Policy Brief. IDDRI Sustainable Development & International Relations.

Published ahead of the 2019 Blue COP (also known as COP25), this report argues that advancing knowledge and ocean-based solutions can maintain or increase ocean services despite climate change. As more projects that address climate change are revealed and countries work toward their Nationally Determined Contributions (NDCs), countries should prioritize scaling up decisive, low-regret climate action.

Gramling, C. (2019, October 6). In a Climate Crisis, is Geoengineering Worth the Risks? Science News. PDF.
To combat climate change, people have suggested large-scale geoengineering projects to reduce ocean warming and sequester carbon. Suggested projects include building large mirrors in space, adding aerosols to the stratosphere, and ocean seeding (adding iron to the ocean as fertilizer to spur phytoplankton growth). Others caution that these geoengineering projects could create dead zones and threaten marine life. The general consensus is that more research is needed, given the considerable uncertainty about the long-term effects of geoengineering.

Hoegh-Guldberg, O., Northrop, E., & Lubchenco, J. (2019, September 27). The Ocean is Key to Achieving Climate and Societal Goals: Ocean-based Approaches Can Help Close Mitigation Gaps. Insights Policy Forum, Science, 365(6460). DOI: 10.1126/science.aaz4390

While climate change adversely affects the ocean, the ocean also serves as a source of solutions: renewable energy; shipping and transport; protection and restoration of coastal and marine ecosystems; fisheries, aquaculture, and shifting diets; and carbon storage in the seabed. These solutions have all been previously proposed, yet very few countries have included even one of them in their Nationally Determined Contributions (NDCs) under the Paris Agreement: only eight NDCs include quantifiable measurements for carbon sequestration, two mention ocean-based renewable energy, and only one mentions sustainable shipping. There remains an opportunity to set time-bound targets and policies for ocean-based mitigation to ensure that emission-reduction goals are met.

Cooley, S., Belloy, B., Bodansky, D., Mansell, A., Merkl, A., Purvis, N., Ruffo, S., Taraska, G., Zivian, A., & Leonard, G. (2019, May 23). Overlooked Ocean Strategies to Address Climate Change. https://doi.org/10.1016/j.gloenvcha.2019.101968

Many countries have committed to limits on greenhouse gases via the Paris Agreement. In order to succeed, parties to the Paris Agreement must protect the ocean and accelerate climate ambition, focus on CO2 reductions, understand and protect ocean ecosystem-based carbon dioxide storage, and pursue sustainable ocean-based adaptation strategies.

Helvarg, D. (2019). Diving into an Ocean Climate Action Plan. Alert Diver Online.

Divers have a unique view into the degrading ocean environment caused by climate change. As such, Helvarg argues that divers should unite to support an Ocean Climate Action Plan. The plan would highlight the need for reform of the U.S. National Flood Insurance Program, major coastal infrastructure investment with a focus on natural barriers and living shorelines, new guidelines for offshore renewable energy, a network of marine protected areas (MPAs), assistance for greening ports and fishing communities, increased aquaculture investment, and a revised National Disaster Recovery Framework.

13. Looking for More? (Additional Resources)

This research page is designed to be a curated list of the most influential publications on the ocean and climate. For additional information on specific topics, we recommend the following journals, databases, and collections:

- Meyer, A. and Spalding, M. (2021, March). A Critical Analysis of the Ocean Effects of Carbon Dioxide Removal via Direct Air and Ocean Capture – Is it a Safe and Sustainable Solution?
(Part of a formal submission to the National Academies of Sciences, Engineering and Medicine for consideration in the public access file for NASEM's ocean CDR study)
- Journal of Ocean and Climate
- Ocean and Coastal Management
- NOAA's 1991 Oceans and Climate Bibliography
- History of the Discovery of Global Warming
- New York University's collection of Global Laws related to Climate Change
Skin effect is the tendency of an alternating electric current (AC) to become distributed within a conductor such that the current density is largest near the surface of the conductor and decreases with greater depth in the conductor. The electric current flows mainly at the "skin" of the conductor, between the outer surface and a level called the skin depth. The skin effect causes the effective resistance of the conductor to increase at higher frequencies, where the skin depth is smaller, thus reducing the effective cross-section of the conductor. The skin effect is due to opposing eddy currents induced by the changing magnetic field resulting from the alternating current.

At 60 Hz in copper, the skin depth is about 8.5 mm. At high frequencies the skin depth becomes much smaller. Increased AC resistance due to the skin effect can be mitigated by using specially woven litz wire. Because the interior of a large conductor carries so little of the current, tubular conductors such as pipe can be used to save weight and cost.

Conductors, typically in the form of wires, may be used to transmit electrical energy or signals using an alternating current flowing through that conductor. The charge carriers constituting that current, usually electrons, are driven by an electric field due to the source of electrical energy. An alternating current in a conductor produces an alternating magnetic field in and around the conductor. When the intensity of current in a conductor changes, the magnetic field also changes. The change in the magnetic field, in turn, creates an electric field which opposes the change in current intensity. This opposing electric field is called "counter-electromotive force" (back EMF). The back EMF is strongest at the center of the conductor, forcing the conducting electrons toward the outside of the conductor.

An alternating current may also be induced in a conductor due to an alternating magnetic field, according to the law of induction. An electromagnetic wave impinging on a conductor will therefore generally produce such a current; this explains the reflection of electromagnetic waves from metals.

Regardless of the driving force, the current density is found to be greatest at the conductor's surface, with a reduced magnitude deeper in the conductor. That decline in current density is known as the skin effect, and the skin depth is a measure of the depth at which the current density falls to 1/e of its value near the surface. Over 98% of the current will flow within a layer 4 times the skin depth from the surface. This behavior is distinct from that of direct current, which usually is distributed evenly over the cross-section of the wire.

The effect was first described in a paper by Horace Lamb in 1883 for the case of spherical conductors, and was generalised to conductors of any shape by Oliver Heaviside in 1885. The skin effect has practical consequences in the analysis and design of radio-frequency and microwave circuits, transmission lines (or waveguides), and antennas. It is also important at mains frequencies (50–60 Hz) in AC electrical power transmission and distribution systems.

Although the term "skin effect" is most often associated with applications involving transmission of electric currents, the skin depth also describes the exponential decay of the electric and magnetic fields, as well as the density of induced currents, inside a bulk material when a plane wave impinges on it at normal incidence.
The AC current density J in a conductor decreases exponentially from its value at the surface J_S according to the depth d from the surface:

    J = J_S · e^(−(1+j)·d/δ)

where δ is called the skin depth. The skin depth is thus defined as the depth below the surface of the conductor at which the current density has fallen to 1/e (about 0.37) of J_S. The imaginary part of the exponent indicates that the phase of the current density is delayed 1 radian for each skin depth of penetration. One full wavelength in the conductor requires 2π skin depths, at which point the current density is attenuated to e^(−2π) (−54.6 dB) of its surface value. The wavelength in the conductor is much shorter than the wavelength in vacuum, or equivalently, the phase velocity in the conductor is very much slower than the speed of light in vacuum. For example, a 1 MHz radio wave has a wavelength in vacuum λ0 of about 300 m, whereas in copper, the wavelength is reduced to only about 0.5 mm with a phase velocity of only about 500 m/s. As a consequence of Snell's law and this very tiny phase velocity in the conductor, any wave entering the conductor, even at grazing incidence, refracts essentially in the direction perpendicular to the conductor's surface.

The general formula for the skin depth is:

    δ = √(2ρ/(ωμ)) · √(√(1 + (ρωε)²) + ρωε)

where
- ρ = resistivity of the conductor
- ω = angular frequency of current = 2π × frequency
- μ = μr·μ0, where μr is the relative magnetic permeability of the conductor and μ0 is the permeability of free space
- ε = εr·ε0, where εr is the relative permittivity of the material and ε0 is the permittivity of free space

At frequencies much below 1/(ρε) the quantity inside the large radical is close to unity and the formula is more usually given as:

    δ = √(2ρ/(ωμ))

This formula is valid at frequencies away from strong atomic or molecular resonances (where ε would have a large imaginary part) and at frequencies that are much below both the material's plasma frequency (dependent on the density of free electrons in the material) and the reciprocal of the mean time between collisions involving the conduction electrons. In good conductors such as metals all of those conditions are ensured at least up to microwave frequencies, justifying this formula's validity. For example, in the case of copper, this would be true for frequencies much below 10^18 Hz.

However, in very poor conductors, at sufficiently high frequencies, the factor under the large radical increases. At frequencies much higher than 1/(ρε) it can be shown that the skin depth, rather than continuing to decrease, approaches an asymptotic value:

    δ ≈ 2ρ · √(ε/μ)

This departure from the usual formula only applies for materials of rather low conductivity and at frequencies where the vacuum wavelength is not much larger than the skin depth itself. For instance, bulk silicon (undoped) is a poor conductor and has a skin depth of about 40 meters at 100 kHz (λ = 3000 m). However, as the frequency is increased well into the megahertz range, its skin depth never falls below the asymptotic value of 11 meters. The conclusion is that in poor solid conductors such as undoped silicon, the skin effect doesn't need to be taken into account in most practical situations: any current is equally distributed throughout the material's cross-section regardless of its frequency.

Formula for round conductor

When the skin depth is not small with respect to the radius of the wire, current density may be described in terms of Bessel functions.
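Because the defining equations above survived only in fragments, a short numerical sketch may help make the two frequency regimes concrete. In the following Python sketch, the resistivity assumed for undoped silicon (about 640 Ω·m) is an illustrative value chosen to match the order-of-magnitude figures quoted above, not a measured constant:

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space [H/m]
EPS0 = 8.854e-12           # permittivity of free space [F/m]

def skin_depth(rho, f, mu_r=1.0, eps_r=1.0):
    """General skin depth [m]; the correction factor only matters for
    poor conductors at frequencies near or above 1/(rho*eps)."""
    omega = 2 * math.pi * f
    mu, eps = mu_r * MU0, eps_r * EPS0
    simple = math.sqrt(2 * rho / (omega * mu))   # good-conductor limit
    rwe = rho * omega * eps
    return simple * math.sqrt(math.sqrt(1 + rwe**2) + rwe)

print(skin_depth(1.68e-8, 60))   # copper at 60 Hz -> ~0.0084 m, the ~8.5 mm quoted above

# Undoped silicon (rho ~ 640 ohm-m assumed, eps_r ~ 11.7): ~40 m at 100 kHz,
# but saturating near the asymptote 2*rho*sqrt(eps/mu) ~ 11.6 m at high
# frequency instead of shrinking indefinitely.
for f in (1e5, 1e7, 1e9):
    print(f, skin_depth(640.0, f, eps_r=11.7))
```

Run as written, this reproduces the copper figure at 60 Hz and shows the silicon skin depth leveling off near the asymptotic value rather than continuing to fall with frequency.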
The current density inside a round wire, away from the influences of other fields, as a function of the distance r from the axis, is given by Weeks:

    J(r) = (k·I)/(2πR) · J₀(k·r)/J₁(k·R)

where
- ω = angular frequency of current = 2π × frequency
- r = distance from the axis of the wire
- R = radius of the wire
- J(r) = current density phasor at distance r from the axis of the wire
- J_S = current density phasor at the surface of the wire
- I = total current phasor
- J₀ = Bessel function of the first kind, order 0
- J₁ = Bessel function of the first kind, order 1
- k = √(−jωμ/ρ) = (1−j)/δ, the wave number in the conductor, with δ = √(2ρ/(ωμ)) also called the skin depth
- ρ = resistivity of the conductor
- μr = relative magnetic permeability of the conductor
- μ0 = the permeability of free space = 4π × 10⁻⁷ H/m

Since k is complex, the Bessel functions are also complex. The amplitude and phase of the current density vary with depth.

Impedance of round wire

The internal impedance per unit length of a round wire is:

    Z_int = (k·ρ)/(2πR) · J₀(k·R)/J₁(k·R)

The internal impedance is complex and may be interpreted as a resistance in series with an inductance. The inductance accounts for energy stored in the magnetic field inside the wire. It has a maximum value of μ/(8π) H/m at zero frequency and goes to zero as the frequency increases. The zero-frequency internal inductance is independent of the radius of the round wire.

The effective resistance due to a current confined near the surface of a large conductor (much thicker than δ) can be solved as if the current flowed uniformly through a layer of thickness δ based on the DC resistivity of that material. The effective cross-sectional area is approximately equal to δ times the conductor's circumference. Thus a long cylindrical conductor such as a wire, having a diameter D large compared to δ, has a resistance approximately that of a hollow tube with wall thickness δ carrying direct current. The AC resistance of a wire of length L and resistivity ρ is:

    R ≈ (ρ·L)/(π·(D − δ)·δ) ≈ (ρ·L)/(π·D·δ)

The final approximation above assumes D ≫ δ.

The increase in AC resistance described above is accurate only for an isolated wire. For a wire close to other wires, e.g. in a cable or a coil, the AC resistance is also affected by the proximity effect, which often causes a much more severe increase in AC resistance.

Material effect on skin depth

In a good conductor, skin depth is proportional to the square root of the resistivity. This means that better conductors have a reduced skin depth. The overall resistance of the better conductor remains lower even with the reduced skin depth. However, the better conductor will show a higher ratio between its AC and DC resistance when compared with a conductor of higher resistivity. For example, at 60 Hz, a 2000 MCM (1000 square millimetre) copper conductor has 23% more resistance than it does at DC. The same size conductor in aluminum has only 10% more resistance with 60 Hz AC than it does with DC.

Skin depth also varies as the inverse square root of the permeability of the conductor. In the case of iron, its conductivity is about 1/7 that of copper. However, being ferromagnetic, its permeability is about 10,000 times greater. This reduces the skin depth for iron to about 1/38 that of copper, about 220 micrometres at 60 Hz. Iron wire is thus useless for AC power lines (except to add mechanical strength by serving as a core to a non-ferromagnetic conductor like aluminum). The skin effect also reduces the effective thickness of laminations in power transformers, increasing their losses.

Iron rods work well for direct-current (DC) welding but it is impossible to use them at frequencies much higher than 60 Hz.
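As a concrete illustration of the Bessel-function solution and the internal impedance above, here is a minimal Python sketch (SciPy's jv accepts complex arguments). The 17.8 mm radius is an assumed stand-in for a solid rod of roughly 1000 mm² cross-section, so it idealizes away the stranding of a real 2000 MCM cable:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

def round_wire(rho, mu_r, f, radius):
    """Surface-normalized current-density profile and AC/DC resistance
    ratio of an isolated solid round wire (proximity effect ignored)."""
    mu0 = 4 * np.pi * 1e-7
    delta = np.sqrt(rho / (np.pi * f * mu_r * mu0))  # skin depth
    k = (1 - 1j) / delta                             # wave number in the conductor
    r = np.linspace(0, radius, 200)
    profile = jv(0, k * r) / jv(0, k * radius)       # J(r)/J(R)
    # Internal impedance per unit length; its real part is the AC resistance.
    z_int = k * rho / (2 * np.pi * radius) * jv(0, k * radius) / jv(1, k * radius)
    r_dc = rho / (np.pi * radius ** 2)
    return r, np.abs(profile), z_int.real / r_dc

# Solid copper rod of ~1000 mm^2 cross-section at 60 Hz:
r, profile, ratio = round_wire(1.68e-8, 1.0, 60.0, 17.8e-3)
print(ratio)  # ~1.3 for this solid-rod idealization
```

Note that this solid-rod idealization yields a somewhat larger increase (roughly 30%) than the 23% figure quoted above for stranded 2000 MCM cable, so treat the sketch as illustrative of the method rather than a reproduction of that number.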
At a few kilohertz, the welding rod will glow red hot as current flows through the greatly increased AC resistance resulting from the skin effect, with relatively little power remaining for the arc itself. Only non-magnetic rods can be used for high-frequency welding. At 1 megahertz the skin-effect depth in wet soil is about 5.0 m; in seawater it is about 0.25 m. A type of cable called litz wire (from the German Litzendraht, braided wire) is used to mitigate the skin effect for frequencies of a few kilohertz to about one megahertz. It consists of a number of insulated wire strands woven together in a carefully designed pattern, so that the overall magnetic field acts equally on all the wires and causes the total current to be distributed equally among them. With the skin effect having little effect on each of the thin strands, the bundle does not suffer the same increase in AC resistance that a solid conductor of the same cross-sectional area would due to the skin effect. Litz wire is often used in the windings of high-frequency transformers to increase their efficiency by mitigating both skin effect and proximity effect. Large power transformers are wound with stranded conductors of similar construction to litz wire, but employing a larger cross-section corresponding to the larger skin depth at mains frequencies. Conductive threads composed of carbon nanotubes have been demonstrated as conductors for antennas from medium wave to microwave frequencies. Unlike standard antenna conductors, the nanotubes are much smaller than the skin depth, allowing full utilization of the thread's cross-section and resulting in an extremely light antenna. High-voltage, high-current overhead power lines often use aluminum cable with a steel reinforcing core; the higher resistance of the steel core is of no consequence since it is located far below the skin depth where essentially no AC current flows. In applications where high currents (up to thousands of amperes) flow, solid conductors are usually replaced by tubes, completely eliminating the inner portion of the conductor where little current flows. This hardly affects the AC resistance, but considerably reduces the weight of the conductor. The high strength but low weight of tubes substantially increases span capability. Tubular conductors are typical in electric power switchyards where the distance between supporting insulators may be several meters. Long spans generally exhibit physical sag but this does not affect electrical performance. To avoid losses, the conductivity of the tube material must be high. In high-current situations, where conductors (round or flat busbar) may be between 5 and 50 mm thick, the skin effect also occurs at sharp bends where the metal is compressed inside the bend and stretched outside the bend. The shorter path at the inner surface results in a lower resistance, which causes most of the current to be concentrated close to the inner bend surface. This causes a greater temperature rise in that region compared with the straight (unbent) area of the same conductor. A similar skin effect occurs at the corners of rectangular conductors (viewed in cross-section), where the magnetic field is more concentrated at the corners than in the sides. This results in superior performance (i.e. higher current with lower temperature rise) from wide thin conductors – e.g. "ribbon" conductors, where the effects from corners are effectively eliminated.
It follows that a transformer with a round core will be more efficient than an equivalent-rated transformer having a square or rectangular core of the same material. Solid or tubular conductors may be silver-plated to take advantage of silver's higher conductivity. This technique is particularly used at VHF to microwave frequencies where the small skin depth requires only a very thin layer of silver, making the improvement in conductivity very cost effective. Silver plating is similarly used on the surface of waveguides used for transmission of microwaves. This reduces attenuation of the propagating wave due to resistive losses affecting the accompanying eddy currents; the skin effect confines such eddy currents to a very thin surface layer of the waveguide structure. The skin effect itself isn't actually combatted in these cases, but the distribution of currents near the conductor's surface makes the use of precious metals (having a lower resistivity) practical. Although it has a lower conductivity than copper and silver, gold plating is also used, because unlike copper and silver, it does not corrode. A thin oxidized layer of copper or silver would have a low conductivity, and so would cause large power losses as the majority of the current would still flow through this layer. Recently, a method of layering non-magnetic and ferromagnetic materials with nanometer scale thicknesses has been shown to mitigate the increased resistance from the skin effect for very high frequency applications. A working theory is that the behavior of ferromagnetic materials in high frequencies results in fields and/or currents that oppose those generated by relatively nonmagnetic materials, but more work is needed to verify the exact mechanisms. As experiments have shown, this has potential to greatly improve the efficiency of conductors operating in tens of GHz or higher. This has strong ramifications for 5G communications. We can derive a practical formula for skin depth as follows:

δ = √(2ρ/(ωμrμ0)) ≈ 503 × √(ρ/(μr f)) metres

where
- δ = the skin depth in meters
- μr = the relative permeability of the medium (for copper, μr ≈ 1.00)
- ρ = the resistivity of the medium in Ω·m, also equal to the reciprocal of its conductivity (for copper, ρ ≈ 1.68×10^−8 Ω·m)
- f = the frequency of the current in Hz
Gold is a good conductor with a resistivity of 2.44×10^−8 Ω·m and is essentially nonmagnetic (μr ≈ 1), so its skin depth at a frequency of 50 Hz is given by δ = 503 × √(2.44×10^−8 / (1 × 50)) ≈ 11 mm. Lead, in contrast, is a relatively poor conductor (among metals) with a resistivity of 2.2×10^−7 Ω·m, about 9 times that of gold. Its skin depth at 50 Hz is likewise found to be about 33 mm, or roughly 3 times that of gold. Highly magnetic materials have a reduced skin depth owing to their large permeability, as was pointed out above for the case of iron, despite its poorer conductivity. A practical consequence is seen by users of induction cookers, where some types of stainless steel cookware are unusable because they are not ferromagnetic. At very high frequencies the skin depth for good conductors becomes tiny. For instance, the skin depths of some common metals at a frequency of 10 GHz (microwave region) are less than a micrometer:
|Conductor||Skin depth (μm)|
|Silver||0.64|
|Copper||0.65|
|Gold||0.79|
|Aluminum||0.82|
Thus at microwave frequencies, most of the current flows in an extremely thin region near the surface. Ohmic losses of waveguides at microwave frequencies are therefore only dependent on the surface coating of the material. A layer of silver 3 μm thick evaporated on a piece of glass is thus an excellent conductor at such frequencies.
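The practical formula above is straightforward to evaluate numerically. The short Python sketch below is not part of the original article; it computes δ = √(2ρ/(ωμ)) for a few metals using standard handbook resistivities assumed for illustration, and it reproduces the gold and lead values quoted above as well as the micrometre-scale depths at 10 GHz.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(rho, f, mu_r=1.0):
    """Skin depth in metres from the low-frequency formula
    delta = sqrt(2*rho / (omega * mu_r * mu_0)), valid well below 1/(rho*eps)."""
    omega = 2 * math.pi * f
    return math.sqrt(2 * rho / (omega * mu_r * MU_0))

# Assumed handbook resistivities in ohm-metres (not taken from this article)
metals = {"copper": 1.68e-8, "gold": 2.44e-8, "lead": 2.2e-7}

for name, rho in metals.items():
    d50 = skin_depth(rho, 50) * 1e3        # millimetres at 50 Hz
    d10g = skin_depth(rho, 10e9) * 1e6     # micrometres at 10 GHz
    print(f"{name:>6}: {d50:5.1f} mm at 50 Hz, {d10g:4.2f} um at 10 GHz")
```

Running it gives roughly 9.2 mm and 0.65 μm for copper, 11 mm and 0.79 μm for gold, and 33 mm and 2.4 μm for lead, consistent with the figures discussed above.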
In copper, the skin depth can be seen to fall according to the square root of frequency:
|Frequency||Skin depth (μm)|
|60 Hz||8500|
|10 kHz||660|
|100 kHz||210|
|1 MHz||66|
|10 MHz||21|
|100 MHz||6.6|
|1 GHz||2.1|
In Engineering Electromagnetics, Hayt points out[page needed] that in a power station a busbar for alternating current at 60 Hz with a radius larger than one-third of an inch (8 mm) is a waste of copper, and in practice bus bars for heavy AC current are rarely more than half an inch (12 mm) thick except for mechanical reasons.

Skin effect reduction of the self-inductance of a conductor

Refer to the diagram below showing the inner and outer conductors of a coaxial cable. Since the skin effect causes a current at high frequencies to flow mainly at the surface of a conductor, it can be seen that this will reduce the magnetic field inside the wire, that is, beneath the depth at which the bulk of the current flows. It can be shown that this will have a minor effect on the self-inductance of the wire itself; see Skilling or Hayt for a mathematical treatment of this phenomenon. Note that the inductance considered in this context refers to a bare conductor, not the inductance of a coil used as a circuit element. The inductance of a coil is dominated by the mutual inductance between the turns of the coil, which increases its inductance according to the square of the number of turns. However, when only a single wire is involved, then in addition to the "external inductance" involving magnetic fields outside of the wire (due to the total current in the wire) as seen in the white region of the figure below, there is also a much smaller component of "internal inductance" due to the magnetic field inside the wire itself, the green region in figure B. In a single wire the internal inductance becomes of little significance when the wire is much longer than its diameter. The presence of a second conductor in the case of a transmission line requires a different treatment, as is discussed below. Due to the skin effect, at high frequencies the internal inductance of a wire vanishes, as can be seen in the case of a telephone twisted pair, below. In normal cases the effect of internal inductance is ignored in the design of coils or in calculating the properties of microstrips.

Inductance per length in a coaxial cable

Let the dimensions a, b, and c be the inner conductor radius, the shield (outer conductor) inside radius and the shield outer radius respectively, as seen in the cross-section of figure A below. For a given current, the total energy stored in the magnetic fields must be the same as the calculated electrical energy attributed to that current flowing through the inductance of the coax; that energy is proportional to the cable's measured inductance. The magnetic field inside a coaxial cable can be divided into three regions, each of which will therefore contribute to the electrical inductance seen by a length of cable. The inductance Lcen is associated with the magnetic field in the region with radius r < a, the region inside the center conductor. The inductance Lext is associated with the magnetic field in the region a < r < b, the region between the two conductors (containing a dielectric, possibly air). The inductance Lshield is associated with the magnetic field in the region b < r < c, the region inside the shield conductor. The net electrical inductance is due to all three contributions: Ltotal = Lcen + Lext + Lshield. Lext is not changed by the skin effect and is given by the frequently cited formula for inductance L per length D of a coaxial cable:

L/D = (μ0/2π) ln(b/a)

At low frequencies, all three inductances are fully present so that LDC = Lcen + Lext + Lshield.
At high frequencies, only the dielectric region has magnetic flux, so that Ltotal = Lext. Most discussions of coaxial transmission lines assume they will be used for radio frequencies, so equations are supplied corresponding only to the latter case. As the skin effect increases, the currents are concentrated near the outside of the inner conductor (r = a) and the inside of the shield (r = b). Since there is essentially no current deeper in the inner conductor, there is no magnetic field beneath the surface of the inner conductor. Since the current in the inner conductor is balanced by the opposite current flowing on the inside of the outer conductor, there is no remaining magnetic field in the outer conductor itself where b < r < c. Only Lext contributes to the electrical inductance at these higher frequencies. Although the geometry is different, a twisted pair used in telephone lines is similarly affected: at higher frequencies the inductance decreases by more than 20%, as can be seen in the following table.

Characteristics of telephone cable as a function of frequency

Representative parameter data for 24 gauge PIC telephone cable at 21 °C (70 °F).
|Frequency (Hz)||R (Ω/km)||L (mH/km)||G (μS/km)||C (nF/km)|
Chen gives an equation of this form for telephone twisted pair.
- Note that the above equation for the current density inside the conductor as a function of depth applies to cases where the usual approximation for the skin depth holds. In the extreme cases where it doesn't, the exponential decrease with respect to the skin depth still applies to the magnitude of the induced currents; however, the imaginary part of the exponent in that equation, and thus the phase velocity inside the material, are altered with respect to that equation.
- "These emf's are greater at the center than at the circumference, so the potential difference tends to establish currents that oppose the current at the center and assist it at the circumference" Fink, Donald G.; Beaty, H. Wayne (2000). Standard Handbook for Electrical Engineers (14th ed.). McGraw-Hill. p. 2–50. ISBN 0-07-022005-0.
- Hayt, William H. (1989), Engineering Electromagnetics (5th ed.), McGraw-Hill, ISBN 0070274061
- Vander Vorst, Rosen & Kotsuka (2006)
- The formula as shown is algebraically equivalent to the formula found on page 130 of Jordan (1968)
- Weeks, Walter L. (1981), Transmission and Distribution of Electrical Energy, Harper & Row, ISBN 006046982X
- Hayt (1981, p. 303)
- Terman 1943, p. ??
- Fink, Donald G.; Beatty, H. Wayne, eds. (1978), Standard Handbook for Electrical Engineers (11th ed.), McGraw Hill, p. Table 18–21
- Popovic & Popovic 1999, p. 385
- Xi Nan & Sullivan 2005
- Central Electricity Generating Board (1982). Modern Power Station Practice. Pergamon Press.
- "Spinning Carbon Nanotubes Spawns New Wireless Applications". Sciencedaily.com. 2009-03-09. Retrieved 2011-11-08.
- Rahimi, A.; Yoon, Y.-K. (2016), "Study on Cu/Ni Nano Superlattice Conductors for Reduced RF Loss", IEEE Microwave and Wireless Components Letters, vol. 26, no. 4, pp. 258–260, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7434554
- If the permeability is low, the skin depth is so large that the resistance encountered by eddy currents is too low to provide enough heat
- Skilling (1951, pp. 157–159)
- Hayt (1981, pp. 434–439)
- Hayt (1981, p. 434)
- Reeve (1995, p. 558)
- Chen (2004, p. 26)
- Chen, Walter Y.
(2004), Home Networking Basics, Prentice Hall, ISBN 0-13-016511-5 - Hayt, William (1981), Engineering Electromagnetics (4th ed.), McGraw-Hill, ISBN 0-07-027395-2 - Hayt, William Hart (2006), Engineering Electromagnetics (7th ed.), New York: McGraw Hill, ISBN 0-07-310463-9 - Nahin, Paul J. Oliver Heaviside: Sage in Solitude. New York: IEEE Press, 1988. ISBN 0-87942-238-6. - Ramo, S., J. R. Whinnery, and T. Van Duzer. Fields and Waves in Communication Electronics. New York: John Wiley & Sons, Inc., 1965. - Ramo, Whinnery, Van Duzer (1994). Fields and Waves in Communications Electronics. John Wiley and Sons. - Reeve, Whitman D. (1995), Subscriber Loop Signaling and Transmission Handbook, IEEE Press, ISBN 0-7803-0440-3 - Skilling, Hugh H. (1951), Electric Transmission Lines, McGraw-Hill - Terman, F. E. (1943), Radio Engineers' Handbook, New York: McGraw-Hill - Xi Nan; Sullivan, C. R. (2005), "An equivalent complex permeability model for litz-wire windings", Industry Applications Conference, 3: 2229–2235, doi:10.1109/IAS.2005.1518758, ISBN 0-7803-9208-6, ISSN 0197-2618 - Jordan, Edward Conrad (1968), Electromagnetic Waves and Radiating Systems, Prentice Hall, ISBN 978-0-13-249995-8 - Vander Vorst, Andre; Rosen, Arye; Kotsuka, Youji (2006), RF/Microwave Interaction with Biological Tissues, John Wiley and Sons, Inc., ISBN 978-0-471-73277-8 - Popovic, Zoya; Popovic, Branko (1999), Chapter 20,The Skin Effect, Introductory Electromagnetics, Prentice-Hall, ISBN 978-0-201-32678-9
Scientists have long been puzzled about what makes Mercury's surface so dark. The innermost planet reflects much less sunlight than the Moon, a body on which surface darkness is controlled by the abundance of iron-rich minerals. These are known to be rare at Mercury's surface, so what is the "darkening agent" there? About a year ago, scientists proposed that Mercury's darkness was due to carbon that gradually accumulated from the impact of comets that traveled into the inner Solar System. Now scientists, led by Patrick Peplowski of the Johns Hopkins University Applied Physics Laboratory, have used data from the MESSENGER mission* to confirm that a high abundance of carbon is present at Mercury's surface. However, they have also found that, rather than being delivered by comets, the carbon most likely originated deep below the surface, in the form of a now-disrupted and buried ancient graphite-rich crust, some of which was later brought to the surface by impact processes after most of Mercury's current crust had formed. The results are published in the March 7, 2016, Advanced Online Publication of Nature Geoscience. Co-author and Deputy Principal Investigator of the MESSENGER mission, Carnegie's Larry Nittler, explained: "The previous proposal of comets delivering carbon to Mercury was based on modelling and simulation. Although we had prior suggestions that carbon may be the darkening agent, we had no direct evidence. We used MESSENGER's Neutron Spectrometer to spatially resolve the distribution of carbon and found that it is correlated with the darkest material on Mercury, and this material most likely originated deep in the crust. Moreover, we used both neutrons and X-rays to confirm that the dark material is not enriched in iron, in contrast to the Moon where iron-rich minerals darken the surface." MESSENGER obtained its statistically robust data via many orbits on which the spacecraft passed lower than 60 miles (100 km) above the surface of the planet during its last year of operation. The data used to identify carbon included measurements taken just days before MESSENGER impacted Mercury in April 2015. Repeated Neutron Spectrometer measurements showed higher amounts of low-energy neutrons, a signature consistent with the presence of elevated carbon, coming from the surface when the spacecraft passed over concentrations of the darkest material. Estimating the amount of carbon present required combining the neutron measurements with other MESSENGER datasets, including X-ray measurements and reflectance spectra. Together, the data indicate that Mercury's surface rocks are made up of as much as a few weight percent graphitic carbon, much higher than on other planets. Graphite provides the best fit to both the reflectance spectra at visible wavelengths and the likely conditions that produced the material. When Mercury was very young, much of the planet was likely so hot that there was a global "ocean" of molten magma. From laboratory experiments and modeling, scientists have suggested that as this magma ocean cooled, most minerals that solidified would sink. A notable exception is graphite, which would have been buoyant and floated to form the original crust of Mercury. "The finding of abundant carbon on the surface suggests that we may be seeing remnants of Mercury's original ancient crust mixed into the volcanic rocks and impact ejecta that form the surface we see today.
This result is a testament to the phenomenal success of the MESSENGER mission and adds to a long list of ways the innermost planet differs from its planetary neighbors and provides additional clues to the origin and early evolution of the inner Solar System," concluded Nittler. *Authors on this paper are Patrick Peplowski, Rachel Klima, David Lawrence, Carolyn Ernst, Brett Denevi, Elizabeth Frank, John Goldsten, Scott Murchie, Larry Nittler and Sean Solomon. MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) is a NASA-sponsored scientific investigation of the planet Mercury and the first space mission designed to orbit the planet closest to the Sun. The MESSENGER spacecraft launched on August 3, 2004, and entered orbit about Mercury on March 17, 2011 (March 18, 2011 UTC), to begin a yearlong study of its target planet. MESSENGER's extended mission began on March 18, 2012. It ended in April 2015. Dr. Sean C. Solomon, formerly a department director at the Carnegie Institution and now at the Lamont-Doherty Earth Observatory of Columbia University, leads the mission as Principal Investigator. The Johns Hopkins University Applied Physics Laboratory built and operates the MESSENGER spacecraft and managed this Discovery-class mission for NASA. The Carnegie Institution for Science has been a pioneering force in basic scientific research since 1902. It is a private, nonprofit organization with six research departments throughout the U.S. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science.
A decision tree is a supervised machine learning model used to predict a target by learning decision rules from features. As the name suggests, we can think of this model as breaking down our data by making a decision based on asking a series of questions. Let's consider the following example in which we use a decision tree to decide upon an activity on a particular day: Based on the features in our training set, the decision tree model learns a series of questions to infer the class labels of the samples. As we can see, decision trees are attractive models if we care about interpretability. Although the preceding figure illustrates the concept of a decision tree based on categorical variables (classification), the same concept applies if our targets are real numbers (regression). In this tutorial, we will discuss how to build a decision tree model with Python's scikit-learn library. We will cover:
- The fundamental concepts of decision trees
- The mathematics behind the decision tree learning algorithm
- Information gain and impurity measures
- Classification trees
- Regression trees
Let's get started! This tutorial is adapted from Next Tech's Python Machine Learning series, which takes you through machine learning and deep learning algorithms with Python from 0 to 100. It includes an in-browser sandboxed environment with all the necessary software and libraries pre-installed, and projects using public datasets. You can get started here!

A decision tree is constructed by recursive partitioning — starting from the root node (known as the first parent), each node can be split into left and right child nodes. These nodes can then be further split and they themselves become parent nodes of their resulting children nodes. For example, looking at the image above, the root node is Work to do? and splits into the child nodes Stay in and Outlook based on whether or not there is work to do. The Outlook node further splits into three child nodes. So, how do we know what the optimal splitting point is at each node? Starting from the root, the data is split on the feature that results in the largest Information Gain (IG) (explained in more detail below). In an iterative process, we then repeat this splitting procedure at each child node until the leaves are pure — i.e. samples at each node all belong to the same class. In practice, this can result in a very deep tree with many nodes, which can easily lead to overfitting. Thus, we typically want to prune the tree by setting a limit for the maximal depth of the tree. In order to split the nodes at the most informative features, we need to define an objective function that we want to optimize via the tree learning algorithm. Here, our objective function is to maximize the information gain at each split, which we define as follows:

IG(Dp, f) = I(Dp) − (Nleft / Np) · I(Dleft) − (Nright / Np) · I(Dright)

Here, f is the feature to perform the split, Dp, Dleft, and Dright are the datasets of the parent and child nodes, I is the impurity measure, Np is the total number of samples at the parent node, and Nleft and Nright are the number of samples in the child nodes. We will discuss impurity measures for classification and regression decision trees in more detail in our examples below. But for now, just understand that information gain is simply the difference between the impurity of the parent node and the sum of the child node impurities — the lower the impurity of the child nodes, the larger the information gain. A minimal sketch of this computation in code follows below.
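To make the objective concrete, here is a small NumPy sketch — an addition, not part of the original tutorial — that computes entropy-based information gain for a candidate binary split using exactly the definition above; the toy label arrays are made up for illustration.

```python
import numpy as np

def entropy(labels):
    """Entropy (in bits) of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent, left, right):
    """IG = I(Dp) - (Nleft/Np) * I(Dleft) - (Nright/Np) * I(Dright)."""
    n = len(parent)
    return (entropy(parent)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

# Toy split: the left child isolates class 0, the right child mixes classes 1 and 2
parent = np.array([0, 0, 0, 1, 1, 2, 2, 2])
left, right = parent[:3], parent[3:]
print(round(information_gain(parent, left, right), 3))   # about 0.95 bits
```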
Note that the information gain equation above is for binary decision trees — each parent node is split into two child nodes only. If a node is instead split into more than two children, you would simply sum the weighted impurities over all of the child nodes. We will start by talking about classification decision trees (also known as classification trees). For this example, we will be using the Iris dataset, a classic in the field of machine learning. It contains the measurements of 150 Iris flowers from three different species — Setosa, Versicolor, and Virginica. These will be our targets. Our goal is to predict which category an Iris flower belongs to. The petal length and width in centimeters are stored as columns, which we also call the features of the dataset. Let's first import the dataset and assign the features as X and the target as y:

```python
from sklearn import datasets

iris = datasets.load_iris()   # Load iris dataset
X = iris.data[:, [2, 3]]      # Assign matrix X (petal length and petal width)
y = iris.target               # Assign vector y
```

Using scikit-learn, we will now train a decision tree with a maximum depth of 4. The code is as follows:

```python
from sklearn.tree import DecisionTreeClassifier   # Import decision tree classifier model

tree = DecisionTreeClassifier(criterion='entropy',   # Initialize and fit classifier
                              max_depth=4,
                              random_state=1)
tree.fit(X, y)
```

Notice that we set the criterion as 'entropy'. This criterion is known as the impurity measure (mentioned in the previous section). In classification, entropy is the most common impurity measure or splitting criterion. It is defined by:

IH(t) = − Σi P(i|t) log2 P(i|t)

Here, P(i|t) is the proportion of the samples that belong to class i for a particular node t, and the sum runs over all c classes. The entropy is therefore 0 if all samples at a node belong to the same class, and the entropy is maximal if we have a uniform class distribution. For a more visual understanding of entropy, let's plot the impurity index for the probability range [0, 1] for class 1. The code is as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

def entropy(p):
    return - p * np.log2(p) - (1 - p) * np.log2(1 - p)

x = np.arange(0.0, 1.0, 0.01)                     # Create dummy data
e = [entropy(p) if p != 0 else None for p in x]   # Calculate entropy

plt.plot(x, e, label='entropy', color='r')        # Plot impurity index
for h in [0.5, 1.0]:                              # Reference lines (avoid reusing y, our target vector)
    plt.axhline(y=h, linewidth=1, color='k', linestyle='--')
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.legend()
plt.show()
```

As you can see, entropy is 0 if p(i=1|t) = 1. If the classes are distributed uniformly with p(i=1|t) = 0.5, entropy is 1. Now, returning to our Iris example, we will visualize our trained classification tree and see how entropy decides each split. A nice feature in scikit-learn is that it allows us to export the decision tree as a .dot file after training, which we can visualize using GraphViz, for example. In addition to GraphViz, we will use a Python library called pydotplus, which has capabilities similar to GraphViz and allows us to convert .dot data files into a decision tree image file.
You can install pydotplus and graphviz by executing the following commands in your Terminal:

```
pip3 install pydotplus
apt install graphviz
```

The following code will create an image of our decision tree in PNG format:

```python
from pydotplus.graphviz import graph_from_dot_data
from sklearn.tree import export_graphviz

dot_data = export_graphviz(                  # Create dot data
    tree, filled=True, rounded=True,
    class_names=['Setosa', 'Versicolor', 'Virginica'],
    feature_names=['petal length', 'petal width'],
    out_file=None
)
graph = graph_from_dot_data(dot_data)        # Create graph from dot data
graph.write_png('tree.png')                  # Write graph to PNG image
```

Looking at the resulting decision tree figure saved in the image file tree.png, we can now nicely trace back the splits that the decision tree determined from our training dataset. We started with 150 samples at the root and split them into two child nodes with 50 and 100 samples, using a petal width cut-off of ≤ 0.8 cm. After the first split, we can see that the left child node is already pure and only contains samples from the setosa class (entropy = 0). The further splits on the right are then used to separate the samples from the Versicolor and Virginica classes. Looking at the final entropy values, we see that the decision tree with a depth of 4 does a very good job of separating the flower classes.

We will be using the Boston Housing dataset for our regression example. This is another very popular dataset which contains information about houses in the suburbs of Boston. There are 506 samples and 14 attributes. For simplicity and visualization purposes, we will only use two — MEDV (median value of owner-occupied homes in $1000s) as the target and LSTAT (percentage of lower status of the population) as the feature. Let's first import the necessary attributes from scikit-learn into a pandas DataFrame:

```python
import pandas as pd
from sklearn import datasets

boston = datasets.load_boston()          # Load Boston dataset
df = pd.DataFrame(boston.data[:, 12])    # Create DataFrame using only the LSTAT feature
df.columns = ['LSTAT']
df['MEDV'] = boston.target               # Create new column with the target MEDV
df.head()
```

Let's use the DecisionTreeRegressor implemented in scikit-learn to train a regression tree:

```python
from sklearn.tree import DecisionTreeRegressor    # Import decision tree regression model

X = df[['LSTAT']].values                 # Assign matrix X
y = df['MEDV'].values                    # Assign vector y

sort_idx = X.flatten().argsort()         # Sort X and y by ascending values of X
X = X[sort_idx]
y = y[sort_idx]

tree = DecisionTreeRegressor(criterion='mse',     # Initialize and fit regressor
                             max_depth=3)         # ('mse' is named 'squared_error' in newer scikit-learn releases)
tree.fit(X, y)
```

Notice that our criterion is different from the one we used for our classification tree. Entropy as a measure of impurity is a useful criterion for classification.
To use a decision tree for regression, however, we need an impurity metric that is suitable for continuous variables, so we define the impurity measure using the weighted mean squared error (MSE) of the child nodes instead:

I(t) = MSE(t) = (1 / Nt) Σ i∈Dt (y(i) − ŷt)²

Here, Nt is the number of training samples at node t, Dt is the training subset at node t, y(i) is the true target value, and ŷt is the predicted target value (the sample mean):

ŷt = (1 / Nt) Σ i∈Dt y(i)

Now, let's model the relationship between LSTAT and MEDV to see what the line fit of a regression tree looks like:

```python
plt.figure(figsize=(16, 8))
plt.scatter(X, y, c='steelblue',          # Plot actual targets against features
            edgecolor='white', s=70)
plt.plot(X, tree.predict(X),              # Plot predicted targets against features
         color='black', lw=2)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000s [MEDV]')
plt.show()
```

As we can see in the resulting plot, the decision tree of depth 3 captures the general trend in the data. I hope you enjoyed this tutorial on decision trees! We discussed the fundamental concepts of decision trees, the algorithms for minimizing impurity, and how to build decision trees for both classification and regression. In practice, it is important to know how to choose an appropriate value for the depth of a tree so as not to overfit or underfit the data. Knowing how to combine decision trees to form an ensemble random forest is also useful (a minimal sketch is shown below), as it usually has a better generalization performance than an individual decision tree due to randomness, which helps to decrease the model's variance. It is also less sensitive to outliers in the dataset and doesn't require much parameter tuning. We cover these techniques in our Python Machine Learning series, as well as diving into other machine learning models such as perceptrons, Adaline, linear and polynomial regression, logistic regression, SVMs, kernel SVMs, k-nearest-neighbors, models for sentiment analysis, k-means clustering, DBSCAN, convolutional neural networks, and recurrent neural networks. We also look at other topics such as regularization, data processing, feature selection and extraction, dimensionality reduction, model evaluation, ensemble learning techniques, and deploying a machine learning model. You can get started here!
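As a minimal, self-contained illustration of the random forest step mentioned in the conclusion above (this snippet is an addition, not part of the original series), scikit-learn's RandomForestClassifier can be swapped in for the single tree trained earlier, here with a held-out test set to check generalization:

```python
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data[:, [2, 3]]     # petal length and width, as before
y = iris.target

# Hold out a test set so the ensemble's generalization can be checked
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

forest = RandomForestClassifier(n_estimators=100,    # number of trees in the ensemble
                                criterion='entropy',
                                random_state=1)
forest.fit(X_train, y_train)
print(f'Test accuracy: {forest.score(X_test, y_test):.2f}')
```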
Researchers from Duke University have devised a method for estimating the air quality over a small patch of land using nothing but satellite imagery and weather conditions. Such information could help researchers identify hidden hotspots of dangerous pollution, greatly improve studies of pollution on human health, or potentially tease out the effects of unpredictable events on air quality, such as the breakout of an airborne global pandemic. The results appear online in the journal Atmospheric Environment. “We’ve used a new generation of micro-satellite images to estimate ground-level air pollution at the smallest spatial scale to date,” said Mike Bergin, professor of civil and environmental engineering at Duke. “We’ve been able to do it by developing a totally new approach that uses AI/machine learning to interpret data from surface images and existing ground stations.” The specific air quality measurement that Bergin and his colleagues are interested in is the amount of tiny airborne particles called PM2.5. These are particles that have a diameter of less than 2.5 micrometers — about three percent of the diameter of a human hair — and have been shown to have a dramatic effect on human health because of their ability to travel deep into the lungs. For example, PM2.5 was globally ranked as the fifth mortality risk factor, responsible for about 4.2 million deaths and 103.1 million years of life lost or lived with disability, by the 2015 Global Burden of Disease study. And in a recent study from the Harvard University T.H. Chan School of Public Health, researchers found that areas with higher levels of PM2.5 also are associated with higher death rates due to COVID-19. Current best practices in remote sensing to estimate the amount of ground-level PM2.5 use satellites to measure how much sunlight is scattered back to space by ambient particulates over the entire atmospheric column. This method, however, can suffer from regional uncertainties such as clouds and shiny surfaces, atmospheric mixing, and properties of the PM particles, and cannot make accurate estimates at scales smaller than about a square kilometer. While ground pollution monitoring stations can provide direct measurements, they suffer from their own host of drawbacks and are only sparsely located around the world. “Ground stations are expensive to build and maintain, so even large cities aren’t likely to have more than a handful of them,” said Bergin. “Plus they’re almost always put in areas away from traffic and other large local sources, so while they might give a general idea of the amount of PM2.5 in the air, they don’t come anywhere near giving a true distribution for the people living in different areas throughout that city.” In their search for a better method, Bergin and his doctoral student Tongshu Zheng turned to Planet, an American company that uses micro-satellites to take pictures of the entire Earth’s surface every single day with a resolution of three meters per pixel. The team was able to get daily snapshot of Beijing over the past three years. The key breakthrough came when David Carlson, an assistant professor of civil and environmental engineering at Duke and an expert in machine learning, stepped in to help. “When I go to machine learning and artificial intelligence conferences, I’m usually the only person from an environmental engineering department,” said Carlson. 
“But these are the exact types of projects that I’m here to help support, and why Duke places such a high importance on hiring data experts throughout the entire university.” With Carlson’s help, Bergin and Zheng applied a convolutional neural network with a random forest algorithm to the image set, combined with meteorological data from Beijing’s weather station. While that may sound like a mouthful, it’s not that difficult to pick your way through the trees. A random forest is a standard machine learning algorithm that uses a lot of different decision trees to make a prediction. We’ve all seen decision trees, perhaps as an internet meme that uses a series of branching yes/no questions to decide whether or not to eat a burrito. Except in this case, the algorithm is looking through decision trees based on metrics such as wind, relative humidity, temperature and more, and using the resulting answers to arrive at an estimate for PM2.5 concentrations. However, random forest algorithms don’t deal well with images. That’s where the convolutional neural networks come in. These algorithms look for common features in images such as lines and bumps and begin grouping them together. As the algorithm “zooms out,” it continues to lump similar groupings together, combining basic shapes into common features such as buildings and highways. Eventually the algorithm comes up with a summary of the image as a list of its most common features, and these get thrown into the random forest along with the weather data. “High-pollution images are definitely foggier and blurrier than normal images, but the human eye can’t really tell the exact pollution levels from those details,” said Carlson. “But the algorithm can pick out these differences in both the low-level and high-level features — edges are blurrier and shapes are obscured more — and precisely turn them into air quality estimates.” “The convolutional neural network doesn’t give us as good of a prediction as we would like with the images alone,” added Zheng. “But when you put those results into a random forest with weather data, the results are as good as anything else currently available, if not better.” In the study, the researchers used 10,400 images to train their model to predict local levels of PM2.5 using nothing but satellite images and weather conditions. They tested their resulting model on another 2,622 images to see how well it could predict PM2.5. They show that, on average, their model is accurate to within 24 percent of actual PM2.5 levels measured at reference stations, which is at the high end of the spectrum for these types of models, while also having a much higher spatial resolution. While most of the current standard practices can predict levels down to 1 million square meters, the new method is accurate down to 40,000 — about the size of eight football fields placed side-by-side. With that level of specificity and accuracy, Bergin believes their method will open up a wide range of new uses for such models. “We think this is a huge innovation in satellite retrievals of air quality and will be the backbone of a lot of research to come,” said Bergin. “We’re already starting to get inquiries into using it to look at how levels of PM2.5 are going to change once the world starts recovering from the spread of COVID-19.”
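The article describes the overall recipe — a convolutional network condenses each satellite image into a feature vector, which is then combined with weather variables and passed to a random forest — without giving the exact architecture. The sketch below, using a generic Keras CNN and scikit-learn, only illustrates that idea under assumed shapes and placeholder data; it is not the Duke team's actual pipeline, and the weather feature names (wind, humidity, temperature, pressure) are assumptions.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from sklearn.ensemble import RandomForestRegressor

# Placeholder data standing in for 3 m/pixel satellite tiles, co-located weather
# readings, and ground-truth PM2.5 from reference monitors.
images = np.random.rand(200, 224, 224, 3)      # (n_samples, height, width, RGB)
weather = np.random.rand(200, 4)               # assumed: wind, humidity, temperature, pressure
pm25 = np.random.rand(200) * 150               # target PM2.5 concentrations (ug/m3)

# 1) Convolutional network used as a fixed feature extractor; global average
#    pooling turns each image into a single feature vector.
cnn = ResNet50(weights=None, include_top=False, pooling='avg')
image_features = cnn.predict(images, verbose=0)

# 2) Concatenate image features with meteorological data and fit a random forest.
X = np.hstack([image_features, weather])
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X, pm25)
print(forest.predict(X[:3]))                   # PM2.5 estimates for three tiles
```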
Poverty is the state of having few material possessions or little income. Poverty can have diverse social, economic, and political causes and effects. When evaluating poverty in statistics or economics there are two main measures: absolute poverty compares income against the amount needed to meet basic personal needs, such as food, clothing, and shelter; relative poverty measures when a person cannot meet a minimum level of living standards, compared to others in the same time and place. The definition of relative poverty varies from one country to another, or from one society to another. Statistically, as of 2019, most of the world's population live in poverty: in PPP dollars, 85% of people live on less than $30 per day, two-thirds live on less than $10 per day, and 10% live on less than $1.90 per day, the World Bank's extreme poverty line at the time (since revised to $2.15 per day). According to the World Bank Group in 2020, more than 40% of the poor live in conflict-affected countries. Even when countries experience economic development, the poorest citizens of middle-income countries frequently do not gain an adequate share of their countries' increased wealth to leave poverty. Governments and non-governmental organizations have experimented with a number of different policies and programs for poverty alleviation, such as electrification in rural areas or housing first policies in urban areas. The international policy frameworks for poverty alleviation, established by the United Nations in 2015, are summarized in Sustainable Development Goal 1: "No Poverty". Social forces, such as gender, disability, race and ethnicity, can exacerbate issues of poverty—with women, children and minorities frequently bearing unequal burdens of poverty. Moreover, impoverished individuals are more vulnerable to the effects of other social issues, such as the environmental effects of industry or the impacts of climate change or other natural disasters or extreme weather events. Poverty can also make other social problems worse; economic pressures on impoverished communities frequently play a part in deforestation, biodiversity loss and ethnic conflict. For this reason, the UN's Sustainable Development Goals and other international policy programs, such as the international recovery from COVID-19, emphasize the connection of poverty alleviation with other societal goals. The word poverty comes from the old (Norman) French word poverté (Modern French: pauvreté), from Latin paupertās, from pauper (poor). There are several definitions of poverty depending on the context of the situation it is placed in, and it usually refers to a state or condition in which a person or community lacks the financial resources and essentials for a certain standard of living. United Nations: Fundamentally, poverty is a denial of choices and opportunities, a violation of human dignity. It means lack of basic capacity to participate effectively in society. It means not having enough to feed and clothe a family, not having a school or clinic to go to, not having the land on which to grow one's food or a job to earn one's living, not having access to credit. It means insecurity, powerlessness and exclusion of individuals, households and communities. It means susceptibility to violence, and it often implies living in marginal or fragile environments, without access to clean water or sanitation. World Bank: Poverty is pronounced deprivation in well-being, and comprises many dimensions.
It includes low incomes and the inability to acquire the basic goods and services necessary for survival with dignity. Poverty also encompasses low levels of health and education, poor access to clean water and sanitation, inadequate physical security, lack of voice, and insufficient capacity and opportunity to better one's life. European Union (EU): The European Union's definition of poverty is significantly different from definitions in other parts of the world, and consequently policy measures introduced to combat poverty in EU countries also differ from measures in other nations. Poverty is measured in relation to the distribution of income in each member country using relative income poverty lines. Relative-income poverty rates in the EU are compiled by Eurostat, which is in charge of coordinating, gathering, and disseminating member country statistics using European Union Survey of Income and Living Conditions (EU-SILC) surveys.
Main article: Measuring poverty
Main article: Extreme poverty
Absolute poverty, often synonymous with 'extreme poverty' or 'abject poverty', refers to a set standard which is consistent over time and between countries. This set standard usually refers to "a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information. It depends not only on income but also on access to services." Having an income below the poverty line, which is defined as an income needed to purchase basic needs, is also referred to as primary poverty. The "dollar a day" poverty line was first introduced in 1990 as a measure to meet such standards of living. For nations that do not use the US dollar as currency, "dollar a day" does not translate to living a day on the equivalent amount of local currency as determined by the exchange rate. Rather, it is determined by the purchasing power parity rate, which would look at how much local currency is needed to buy the same things that a dollar could buy in the United States. Usually, this would translate to having less local currency than if the exchange rate was used, as the United States is a relatively more expensive country. From 1993 through 2005, the World Bank defined absolute poverty as $1.08 a day on such a purchasing power parity basis, after adjusting for inflation to the 1993 US dollar; in 2008, it was updated to $1.25 a day (equivalent to $1.00 a day in 1996 US prices) and in 2015, it was updated to living on less than US$1.90 per day, with moderate poverty defined as less than $2 or $5 a day. Similarly, 'ultra-poverty' is defined by a 2007 report issued by the International Food Policy Research Institute as living on less than 54 cents per day. The poverty line threshold of $1.90 per day, as set by the World Bank, is controversial. Each nation has its own threshold for its absolute poverty line; in the United States, for example, the absolute poverty line was US$15.15 per day in 2010 (US$22,000 per year for a family of four), while in India it was US$1.0 per day and in China the absolute poverty line was US$0.55 per day, each on a PPP basis in 2010. These different poverty lines make data comparison between each nation's official reports qualitatively difficult. Some scholars argue that the World Bank method sets the bar too high, while others argue it is too low. There is disagreement among experts as to what would be considered a realistic poverty rate, with one considering it "an inaccurately measured and arbitrary cut off".
Some contend that a higher poverty line is needed, such as a minimum of $7.40 or even $10 to $15 a day. They argue that these levels would better reflect the cost of basic needs and normal life expectancy. One estimate places the true scale of poverty much higher than the World Bank, with an estimated 4.3 billion people (59% of the world's population) living with less than $5 a day and unable to meet basic needs adequately. Philip Alston, a UN special rapporteur on extreme poverty and human rights, stated the World Bank's international poverty line of $1.90 a day is fundamentally flawed, and has allowed for "self congratulatory" triumphalism in the fight against extreme global poverty, which he asserts is "completely off track" and that nearly half of the global population, or 3.4 billion, lives on less than $5.50 a day, and this number has barely moved since 1990. Still others suggest that poverty line misleads as it measures everyone below the poverty line the same, when in reality someone living on $1.20 per day is in a different state of poverty than someone living on $0.20 per day. Other measures of absolute poverty without using a certain dollar amount include the standard defined as receiving less than 80% of minimum caloric intake whilst spending more than 80% of income on food, sometimes called ultra-poverty. Relative poverty views poverty as socially defined and dependent on social context. It is argued that the needs considered fundamental is not an objective measure and could change with the custom of society. For example, a person who cannot afford housing better than a small tent in an open field would be said to live in relative poverty if almost everyone else in that area lives in modern brick homes, but not if everyone else also lives in small tents in open fields (for example, in a nomadic tribe). Since richer nations would have lower levels of absolute poverty, relative poverty is considered the "most useful measure for ascertaining poverty rates in wealthy developed nations" and is the "most prominent and most-quoted of the EU social inclusion indicators". Usually, relative poverty is measured as the percentage of the population with income less than some fixed proportion of median income. This is a calculation of the percentage of people whose family household income falls below the Poverty Line. The main poverty line used in the OECD and the European Union is based on "economic distance", a level of income set at 60% of the median household income. The United States federal government typically regulates this line to three times the cost of an adequate meal. There are several other different income inequality metrics, for example, the Gini coefficient or the Theil Index. Rather than income, poverty is also measured through individual basic needs at a time. Life expectancy has greatly increased in the developing world since World War II and is starting to close the gap to the developed world. Child mortality has decreased in every developing region of the world. The proportion of the world's population living in countries where the daily per-capita supply of food energy is less than 9,200 kilojoules (2,200 kilocalories) decreased from 56% in the mid-1960s to below 10% by the 1990s. Similar trends can be observed for literacy, access to clean water and electricity and basic consumer items. 
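To tie the measurement discussion above together, the following short Python sketch (an illustration added here, not part of the original article) computes the two headcount measures just described: an absolute rate against a fixed PPP line and a relative rate against 60% of the median income. The income figures are made up for the example.

```python
import numpy as np

def absolute_poverty_rate(daily_incomes_ppp, line=2.15):
    """Share of people below a fixed daily income line (in PPP dollars)."""
    incomes = np.asarray(daily_incomes_ppp)
    return (incomes < line).mean()

def relative_poverty_rate(incomes, fraction_of_median=0.6):
    """Share of people below a fraction of the median income (EU/OECD style)."""
    incomes = np.asarray(incomes)
    return (incomes < fraction_of_median * np.median(incomes)).mean()

# Illustrative (made-up) income samples
daily = np.array([1.2, 1.8, 2.0, 3.5, 6.0, 12.0, 25.0])
annual = np.array([8_000, 14_000, 21_000, 30_000, 41_000, 55_000, 90_000])
print(absolute_poverty_rate(daily))    # fraction under $2.15/day
print(relative_poverty_rate(annual))   # fraction under 60% of the median
```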
Poverty may also be understood as an aspect of unequal social status and inequitable social relationships, experienced as social exclusion, dependency, and diminished capacity to participate, or to develop meaningful connections with other people in society. Such social exclusion can be minimized through strengthened connections with the mainstream, such as through the provision of relational care to those who are experiencing poverty. The World Bank's "Voices of the Poor", based on research with over 20,000 poor people in 23 countries, identifies a range of factors which poor people identify as part of poverty. These include abuse by those in power, dis-empowering institutions, excluded locations, gender relationships, lack of security, limited capabilities, physical limitations, precarious livelihoods, problems in social relationships, weak community organizations and discrimination. Analysis of social aspects of poverty links conditions of scarcity to aspects of the distribution of resources and power in a society and recognizes that poverty may be a function of the diminished "capability" of people to live the kinds of lives they value. The social aspects of poverty may include lack of access to information, education, health care, social capital or political power. Relational poverty is the idea that societal poverty exists if there is a lack of human relationships. Relational poverty can be the result of a lost contact number, lack of phone ownership, isolation, or deliberate severing of ties with an individual or community. Relational poverty is also understood "by the social institutions that organize those relationships...poverty is importantly the result of the different terms and conditions on which people are included in social life" In the United Kingdom, the second Cameron ministry came under attack for their redefinition of poverty; poverty is no longer classified by a family's income, but as to whether a family is in work or not. Considering that two-thirds of people who found work were accepting wages that are below the living wage (according to the Joseph Rowntree Foundation) this has been criticised by anti-poverty campaigners as an unrealistic view of poverty in the United Kingdom. Main article: Secondary poverty Secondary poverty refers to those that earn enough income to not be impoverished, but who spend their income on unnecessary pleasures, such as alcoholic beverages, thus placing them below it in practice. In 18th- and 19th-century Great Britain, the practice of temperance among Methodists, as well as their rejection of gambling, allowed them to eliminate secondary poverty and accumulate capital. Factors that contribute to secondary poverty includes but are not limited to: alcohol, gambling, tobacco and drugs. Poverty levels are snapshot pictures in time that omits the transitional dynamics between levels. Mobility statistics supply additional information about the fraction who leave the poverty level. For example, one study finds that in a sixteen-year period (1975 to 1991 in the US) only 5% of those in the lower fifth of the income level were still at that level, while 95% transitioned to a higher income category. Poverty levels can remain the same while those who rise out of poverty are replaced by others. The transient poor and chronic poor differ in each society. In a nine-year period ending in 2005 for the US, 50% of the poorest quintile transitioned to a higher quintile. 
According to Chen and Ravallion, about 1.76 billion people in developing world lived above $1.25 per day and 1.9 billion people lived below $1.25 per day in 1981. In 2005, about 4.09 billion people in developing world lived above $1.25 per day and 1.4 billion people lived below $1.25 per day (both 1981 and 2005 data are on inflation adjusted basis). The share of the world's population living in absolute poverty fell from 43% in 1981 to 14% in 2011. The absolute number of people in poverty fell from 1.95 billion in 1981 to 1.01 billion in 2011. The economist Max Roser estimates that the number of people in poverty is therefore roughly the same as 200 years ago. This is the case since the world population was just little more than 1 billion in 1820 and the majority (84% to 94%) of the world population was living in poverty. According to one study the number of people worldwide living in absolute poverty fell from 1.18 billion in 1950 to 1.04 billion in 1977. According to another study, the number of people worldwide estimated to be starving fell from almost 920 million in 1971 to below 797 million in 1997.[unreliable source?] The proportion of the developing world's population living in extreme economic poverty fell from 28% in 1990 to 21% in 2001. Most of this improvement has occurred in East and South Asia. In 2012 it was estimated that, using a poverty line of $1.25 a day, 1.2 billion people lived in poverty. Given the current economic model, built on GDP, it would take 100 years to bring the world's poorest up to the poverty line of $1.25 a day. UNICEF estimates half the world's children (or 1.1 billion) live in poverty. The World Bank forecasted in 2015 that 702.1 million people were living in extreme poverty, down from 1.75 billion in 1990. Extreme poverty is observed in all parts of the world, including developed economies. Of the 2015 population, about 347.1 million people (35.2%) lived in Sub-Saharan Africa and 231.3 million (13.5%) lived in South Asia. According to the World Bank, between 1990 and 2015, the percentage of the world's population living in extreme poverty fell from 37.1% to 9.6%, falling below 10% for the first time. During the 2013 to 2015 period, the World Bank reported that extreme poverty fell from 11% to 10%, however they also noted that the rate of decline had slowed by nearly half from the 25 year average with parts of sub-saharan Africa returning to early 2000 levels. The World Bank attributed this to increasing violence following the Arab Spring, population increases in Sub-Saharan Africa, and general African inflationary pressures and economic malaise were the primary drivers for this slow down. Many wealthy nations have seen an increase in relative poverty rates ever since the Great Recession, in particular among children from impoverished families who often reside in substandard housing and find educational opportunities out of reach. It has been argued by some academics that the neoliberal policies promoted by global financial institutions such as the IMF and the World Bank are actually exacerbating both inequality and poverty. In East Asia the World Bank reported that "The poverty headcount rate at the $2-a-day level is estimated to have fallen to about 27 percent [in 2007], down from 29.5 percent in 2006 and 69 percent in 1990." The People's Republic of China accounts for over three quarters of global poverty reduction from 1990 to 2005, which according to the World Bank is "historically unprecedented". 
China accounted for nearly half of all extreme poverty in 1990. In Sub-Saharan Africa extreme poverty went up from 41% in 1981 to 46% in 2001, which combined with growing population increased the number of people living in extreme poverty from 231 million to 318 million. Statistics from 2018 show that the population living in extreme poverty has declined by more than 1 billion in the last 25 years. According to a report published by the World Bank on 19 September 2018, the number of people in extreme poverty had fallen below 750 million. In the early 1990s some of the transition economies of Central and Eastern Europe and Central Asia experienced a sharp drop in income. The collapse of the Soviet Union resulted in large declines in GDP per capita, of about 30 to 35% between 1990 and the trough year of 1998 (when it was at its minimum). As a result, poverty rates tripled, excess mortality increased, and life expectancy declined. Russian President Boris Yeltsin's IMF-backed rapid privatization and austerity policies resulted in unemployment rising to double digits and half the Russian population falling into destitution by the early to mid 1990s. By 1999, during the peak of the poverty crisis, 191 million people were living on less than $5.50 a day. In subsequent years, as per capita incomes recovered, the poverty rate dropped from 31.4% of the population to 19.6%. The average post-communist country had returned to 1989 levels of per-capita GDP by 2005, although as of 2015 some are still far behind that. According to the World Bank in 2014, around 80 million people were still living on less than $5.00 a day. World Bank data shows that the percentage of the population living in households with consumption or income per person below the poverty line has decreased in each region of the world except Middle East and North Africa since 1990: |Region||$1 per day||$1.25 per day||$1.90 per day| |East Asia and Pacific||15.4%||12.3%||9.1%||77.2%||14.3%||80.2%||60.9%||34.8%||10.8%||2.1%||1.2%| |Europe and Central Asia||3.6%||1.3%||1.0%||1.9%||0.5%||—||—||7.3%||2.4%||1.5%||1.1%| |Latin America and the Caribbean||9.6%||9.1%||8.6%||11.9%||6.5%||13.7%||15.5%||12.7%||6%||3.7%||3.7%| |Middle East and North Africa||2.1%||1.7%||1.5%||9.6%||2.7%||—||6.5%||3.5%||2%||4.3%||7%|
Infectious diseases such as malaria and tuberculosis can perpetuate poverty by diverting health and economic resources from investment and productivity; malaria decreases GDP growth by up to 1.3% in some developing nations, and AIDS decreases African growth by 0.3–1.5% annually.

Studies have shown that poverty impedes cognitive function, although some of these findings could not be replicated in follow-up studies. One hypothesised mechanism is that financial worries put a severe burden on one's mental resources so that they are no longer fully available for solving complicated problems. The reduced capability for problem solving can lead to suboptimal decisions and further perpetuate poverty. Many other pathways from poverty to compromised cognitive capacities have been noted, from poor nutrition and environmental toxins to the effects of stress on parenting behavior, all of which lead to suboptimal psychological development. Neuroscientists have documented the impact of poverty on brain structure and function throughout the lifespan.

Infectious diseases continue to blight the lives of the poor across the world: 36.8 million people are living with HIV/AIDS, with 954,492 deaths in 2017. Poor people are often more prone to severe disease due to the lack of health care and to living in non-optimal conditions. Among the poor, girls tend to suffer even more, due to gender discrimination. Economic stability is paramount in a poor household; otherwise the household falls into an endless loop of negative income while trying to treat disease. Often when a person in a poor household falls ill, it is up to family members to take care of them, owing to limited access to health care and lack of health insurance. The household members often have to give up their income or stop seeking further education to tend to the sick member; the opportunity cost of caring for someone is thus far greater for the poor than for those with better financial stability. As for substance abuse, the poor typically spend about 2% of their income educating their children but larger percentages on alcohol and tobacco (for example, 6% in Indonesia and 8% in Mexico).

Main article: Hunger
See also: Malnutrition

Rises in the cost of living make poor people less able to afford basic items. Poor people spend a greater portion of their budgets on food than wealthy people, so poor households and those near the poverty threshold can be particularly vulnerable to increases in food prices. For example, in late 2007 increases in the price of grains led to food riots in some countries, and the World Bank warned that 100 million people were at risk of sinking deeper into poverty. Threats to the supply of food may also be caused by drought and the water crisis. Intensive farming often leads to a vicious cycle of exhaustion of soil fertility and decline of agricultural yields. Approximately 40% of the world's agricultural land is seriously degraded. In Africa, if current trends of soil degradation continue, the continent might be able to feed just 25% of its population by 2025, according to United Nations University's Ghana-based Institute for Natural Resources in Africa.

Every year nearly 11 million children living in poverty die before their fifth birthday, and 1.02 billion people go to bed hungry every night. According to the Global Hunger Index, Sub-Saharan Africa had the highest child malnutrition rate of the world's regions over the 2001–2006 period.
A psychological study was conducted by four scientists during the inaugural Convention of Psychological Science. The results indicate that people living under low socioeconomic status (SES) tend to perform worse cognitively because of the external pressures imposed upon them. The research found that stressors such as low income, inadequate health care, discrimination, and exposure to criminal activity all contribute to mental disorders. The study also found that children exposed to poverty-stricken environments show slower cognitive development. Children perform better under the care of their parents and tend to acquire spoken language at a younger age. Because poverty in childhood is more harmful than poverty in adulthood, children in poor households tend to fall behind in certain cognitive abilities compared with children from average families.

For a child to grow up emotionally healthy, children under three need:
- "A strong, reliable primary caregiver who provides consistent and unconditional love, guidance, and support.
- Safe, predictable, stable environments.
- Ten to 20 hours each week of harmonious, reciprocal interactions. This process, known as attunement, is most crucial during the first 6–24 months of infants' lives and helps them develop a wider range of healthy emotions, including gratitude, forgiveness, and empathy.
- Enrichment through personalized, increasingly complex activities."

In one survey, 67% of children from disadvantaged inner cities said they had witnessed a serious assault, and 33% reported witnessing a homicide. 51% of fifth graders from New Orleans (median income for a household: $27,133) have been found to be victims of violence, compared to 32% in Washington, DC (mean income for a household: $40,127).

Studies have shown that poverty changes the personalities of children who live in it. The Great Smoky Mountains Study, a ten-year study, was able to demonstrate this. During the study, about one-quarter of the families saw a dramatic and unexpected increase in income. The study showed that among the children of these families, instances of behavioral and emotional disorders decreased, and conscientiousness and agreeableness increased.

Research has found a high risk of educational underachievement for children from low-income housing circumstances. This is often a process that begins in primary school. Instruction in the US educational system, as well as in most other countries, tends to be geared towards students who come from more advantaged backgrounds. As a result, children in poverty are at a higher risk than advantaged children of retention in their grade, of deleterious special placements during school hours, and of not completing their high school education. Advantage breeds advantage.

There are many explanations for why students tend to drop out of school. One is the conditions in which they attend school. Schools in poverty-stricken areas have conditions that hinder children from learning in a safe environment. Researchers have developed a name for areas like this: an urban war zone is a poor, crime-laden district in which deteriorated, violent, even warlike conditions and underfunded, largely ineffective schools promote inferior academic performance, including irregular attendance and disruptive or non-compliant classroom behavior.
Because of poverty, "Students from low-income families are 2.4 times more likely to drop out than middle-income kids, and over 10 times more likely than high-income peers to drop out." For children with low resources, the risk factors are similar: juvenile delinquency, higher levels of teenage pregnancy, and economic dependency upon their low-income parent or parents. Families and societies that invest little in the education and development of less fortunate children end up with less favorable results for those children, who face a life of reduced parental employment and low wages. Higher rates of early childbearing, with all the connected risks to family, health and well-being, are major issues to address, since education from preschool to high school is identifiably meaningful in a life.

Poverty often drastically affects children's success in school. A child's "home activities, preferences, mannerisms" must align with the world of school; where they do not, students are at a disadvantage in the school and, most importantly, the classroom. Therefore, it is safe to state that children who live at or below the poverty level will have far less success educationally than children who live above the poverty line. Poor children have a great deal less healthcare, and this ultimately results in many absences from school. Additionally, poor children are much more likely to suffer from hunger, fatigue, irritability, headaches, ear infections, flu, and colds. These illnesses could potentially restrict a student's focus and concentration.

In general, the interaction of gender with poverty or location tends to work to the disadvantage of girls in poorer countries, with low completion rates and social expectations that they marry early, and to the disadvantage of boys in richer countries, with high completion rates but social expectations that they enter the labour force early. At the primary education level, most countries with a completion rate below 60% exhibit gender disparity at girls' expense, particularly for poor and rural girls. In Mauritania, the adjusted gender parity index is 0.86 on average, but only 0.63 for the poorest 20%, while there is parity among the richest 20%. In countries with completion rates between 60% and 80%, gender disparity is generally smaller, but disparity at the expense of poor girls is especially marked in Cameroon, Nigeria and Yemen. Exceptions in the opposite direction are observed in countries with pastoralist economies that rely on boys' labour, such as the Kingdom of Eswatini, Lesotho and Namibia.

The geographic concentration of poverty is argued to be a factor in entrenching poverty. William J. Wilson's "concentration and isolation" hypothesis states that the economic difficulties of the very poorest African Americans are compounded by the fact that, as the better-off African Americans move out, the poorest are more and more concentrated, having only other very poor people as neighbors. This concentration causes social isolation, Wilson suggests, because the very poor are now isolated from access to the job networks, role models, institutions, and other connections that might help them escape poverty.

Gentrification means converting an aging neighborhood into a more affluent one, as by remodeling homes. Landlords then increase the rent on newly renovated real estate; poor people cannot afford the higher rent and may need to leave their neighborhood to find affordable housing.
On the other hand, gentrification can bring the poor more access to income and services, and studies suggest poor residents living in gentrifying neighbourhoods are actually less likely to move than poor residents of non-gentrifying areas.

Poverty increases the risk of homelessness. Slum-dwellers, who make up a third of the world's urban population, live in poverty no better, and sometimes worse, than that of rural people, who are the traditional focus of poverty in the developing world, according to a report by the United Nations. There are over 100 million street children worldwide. Most of the children living in institutions around the world have a surviving parent or close relative, and they most commonly entered orphanages because of poverty. It is speculated that, flush with money, for-profit orphanages are increasing in number and push for children to join, even though demographic data show that even the poorest extended families usually take in children whose parents have died. Many child advocates maintain that this can harm children's development by separating them from their families, and that it would be more effective and cheaper to aid close relatives who want to take in the orphans.

As of 2012, 2.5 billion people lacked access to sanitation services and 15% practiced open defecation. The most noteworthy example is Bangladesh, which had half the GDP per capita of India but a lower mortality from diarrhea than India or the world average, with diarrhea deaths declining by 90% since the 1990s. Providing latrines is a challenge, and even when they are available people often do not use them. By strategically providing pit latrines to the poorest, charities in Bangladesh sparked a cultural change, as those better off came to see not using one as a matter of status. The vast majority of the latrines were then built not by charities but by villagers themselves.

Water utility subsidies tend to subsidize water consumption by those connected to the supply grid, which is typically skewed towards the richer and urban segments of the population and those outside informal housing. As a result of heavy consumption subsidies, the price of water falls to the point that only 30%, on average, of the supplying costs in developing countries is covered. This leaves little incentive to maintain delivery systems, leading to annual losses from leaks of enough water to supply 200 million people. It also leaves little incentive to invest in expanding the network, so much of the poor population remains unconnected. Instead, the poor buy water from water vendors for, on average, about 5 to 16 times the metered price. However, subsidies for laying new connections to the network, rather than for consumption, have shown more promise for the poor.

Energy poverty is a lack of access to modern energy services. It refers to the situation of large numbers of people in developing countries, and some people in developed countries, whose well-being is negatively affected by very low consumption of energy, use of dirty or polluting fuels, and excessive time spent collecting fuel to meet basic needs. Today, 759 million people lack access to consistent electricity and 2.6 billion people use dangerous and inefficient cooking systems. Energy poverty is inversely related to access to modern energy services, although improving access is only one factor in efforts to reduce it. It is distinct from fuel poverty, which focuses solely on the issue of affordability.
The term "energy poverty" emerged with the publication of Brenda Boardman's book, Fuel Poverty: From Cold Homes to Affordable Warmth (1991). Naming the intersection of energy and poverty "energy poverty" motivated the development of public policy to address it and the study of its causes, symptoms, and effects in society. When the term was first introduced in Boardman's book, energy poverty was described as not having enough power to heat and cool homes. Today, energy poverty is understood to be the result of complex systemic inequalities which create barriers to accessing modern energy at an affordable price. Energy poverty is challenging to measure, and thus to analyze, because it is privately experienced within households, specific to cultural contexts, and changes dynamically over time and space.

According to the Energy Poverty Action initiative of the World Economic Forum, "Access to energy is fundamental to improving quality of life and is a key imperative for economic development. In the developing world, energy poverty is still rife." As a result, the United Nations (UN) launched the Sustainable Energy for All Initiative and designated 2012 as the International Year for Sustainable Energy for All, with a major focus on reducing energy poverty. The UN further recognizes the importance of energy poverty through Goal 7 of its Sustainable Development Goals, to "ensure access to affordable, reliable, sustainable, and modern energy for all."

Cultural factors, such as discrimination of various kinds (age discrimination, stereotyping, discrimination against people with physical disabilities, gender discrimination, racial discrimination, and caste discrimination), can negatively affect productivity. Children are more than twice as likely to live in poverty as adults. Women are the group suffering from the highest rate of poverty after children, in what is referred to as the feminization of poverty. In addition, the fact that women are more likely to be caregivers, regardless of income level, to the generations before or after them exacerbates the burdens of their poverty. Those in poverty have increased chances of incurring a disability, which leads to a cycle in which disability and poverty are mutually reinforcing.

Max Weber and some schools of modernization theory suggest that cultural values could affect economic success. However, researchers have gathered evidence suggesting that values are not as deeply ingrained as supposed and that changing economic opportunities explain most of the movement into and out of poverty, as opposed to shifts in values. A 2018 report on poverty in the United States by UN special rapporteur Philip Alston asserts that caricatured narratives about the rich and the poor (that "the rich are industrious, entrepreneurial, patriotic and the drivers of economic success" while "the poor are wasters, losers and scammers") are largely inaccurate, as "the poor are overwhelmingly those born into poverty, or those thrust there by circumstances largely beyond their control, such as physical or mental disabilities, divorce, family breakdown, illness, old age, unlivable wages or discrimination in the job market."

Societal perception of people experiencing economic difficulty has historically appeared as a conceptual dichotomy: the "good" poor (people who are physically impaired, disabled, the "ill and incurable," the elderly, pregnant women, children) vs.
the "bad" poor (able-bodied, "valid" adults, most often male). According to experts, many women become victims of trafficking, the most common form of which is prostitution, as a means of survival and economic desperation. Deterioration of living conditions can often compel children to abandon school to contribute to the family income, putting them at risk of being exploited. For example, in Zimbabwe, a number of girls are turning to sex in return for food to survive because of the increasing poverty. According to studies, as poverty decreases there will be fewer and fewer instances of violence. Main article: Poverty reduction Various poverty reduction strategies are broadly categorized based on whether they make more of the basic human needs available or whether they increase the disposable income needed to purchase those needs. Some strategies such as building roads can both bring access to various basic needs, such as fertilizer or healthcare from urban areas, as well as increase incomes, by bringing better access to urban markets. In 2015 all UN Member States adopted the 17 Sustainable Development Goals as part of the 2030 Agenda for sustainable development. Goal 1 is to "end poverty in all its forms everywhere". It aims to eliminate extreme poverty for all people measured by daily wages less than $1.25 and at least half the total number of men, women, and children living in poverty. In addition, social protection systems must be established at the national level and equal access to economic resources must be ensured. Strategies have to be developed at the national, regional and international levels to support the eradication of poverty. Agricultural technologies such as nitrogen fertilizers, pesticides, new seed varieties and new irrigation methods have dramatically reduced food shortages in modern times by boosting yields past previous constraints. Goal 2 of the Sustainable Development Goals is the elimination of hunger and undernutrition by 2030. Before the Industrial Revolution, poverty had been mostly accepted as inevitable as economies produced little, making wealth scarce. Geoffrey Parker wrote that "In Antwerp and Lyon, two of the largest cities in western Europe, by 1600 three-quarters of the total population were too poor to pay taxes, and therefore likely to need relief in times of crisis." The initial industrial revolution led to high economic growth and eliminated mass absolute poverty in what is now considered the developed world. Mass production of goods in places such as rapidly industrializing China has made what were once considered luxuries, such as vehicles and computers, inexpensive and thus accessible to many who were otherwise too poor to afford them. Even with new products, such as better seeds, or greater volumes of them, such as industrial production, the poor still require access to these products. Improving road and transportation infrastructure helps solve this major bottleneck. In Africa, it costs more to move fertilizer from an African seaport 100 kilometres (60 mi) inland than to ship it from the United States to Africa because of sparse, low-quality roads, leading to fertilizer costs two to six times the world average. Microfranchising models such as door-to-door distributors who earn commission-based income or Coca-Cola's successful distribution system are used to disseminate basic needs to remote areas for below market prices. Nations do not necessarily need wealth to gain health. 
For example, Sri Lanka had a maternal mortality rate of 2% in the 1930s, higher than any nation today. It reduced this to 0.5–0.6% in the 1950s and to 0.06% today, while spending less each year on maternal health, because it learned what worked and what did not. Knowledge of the cost-effectiveness of healthcare interventions can be elusive, and educational measures have been taken to disseminate what works, such as the Copenhagen Consensus. Cheap water filters and promoting hand washing are some of the most cost-effective health interventions and can cut deaths from diarrhea and pneumonia.

Strategies to provide education cost-effectively include deworming children, which costs about 50 cents per child per year and reduces non-attendance from anemia, illness and malnutrition, while being only a twenty-fifth as expensive as increasing school attendance by constructing schools. Schoolgirl absenteeism could be cut in half by simply providing free sanitary towels. Fortification with micronutrients was ranked the most cost-effective aid strategy by the Copenhagen Consensus. For example, iodised salt costs 2 to 3 cents per person a year, while even moderate iodine deficiency in pregnancy shaves off 10 to 15 IQ points. Paying for school meals is argued to be an efficient strategy for increasing school enrollment, reducing absenteeism and increasing student attention.

Desirable actions, such as enrolling children in school or receiving vaccinations, can be encouraged by a form of aid known as conditional cash transfers. In Mexico, for example, dropout rates of 16- to 19-year-olds in rural areas dropped by 20% and children gained half an inch in height. Initial fears that the program would encourage families to stay at home rather than work to collect benefits have proven to be unfounded. Instead, there is less excuse for neglectful behavior: for example, children were stopped from begging on the streets instead of going to school, because begging could result in suspension from the program.

The right to housing is a human right. Policy incentives such as Housing First emphasize that other basic needs are easier to meet when housing is guaranteed first.

Government revenue can be diverted away from basic services by corruption. Funds from aid and natural resources are often sent by government individuals to overseas banks which insist on bank secrecy, for money laundering, instead of being spent on the poor. A Global Witness report asked for more action from Western banks, as they have proved capable of stanching the flow of funds linked to terrorism. Illicit capital flight, such as corporate tax avoidance, from the developing world is estimated at ten times the size of the aid it receives and twice the debt service it pays, with one estimate that most of Africa would be developed if the taxes owed were paid. About 60 per cent of illicit capital flight from Africa is from transfer mispricing, where a subsidiary in a developing nation sells to another subsidiary or shell company in a tax haven at an artificially low price in order to pay less tax. An African Union report estimates that about 30% of sub-Saharan Africa's GDP has been moved to tax havens. Solutions include corporate "country-by-country reporting", where corporations disclose activities in each country and thereby expose the use of tax havens where no effective economic activity occurs.

Developing countries' debt service to banks and governments from richer countries can constrain government spending on the poor.
For example, in 1997 Zambia spent 40% of its total budget repaying foreign debt and only 7% on basic state services. One of the proposed ways to help poor countries has been debt relief. Thanks to savings from a 2005 round of debt relief, Zambia began offering services such as free health care, even while overwhelming the health care infrastructure. Since that round of debt relief, private creditors have accounted for an increasing share of poor countries' debt service obligations. This complicated efforts to renegotiate easier terms for borrowers during crises such as the COVID-19 pandemic, because the multiple private creditors involved say they have a fiduciary obligation to their clients, such as pension funds.

The World Bank and the International Monetary Fund, as primary holders of developing countries' debt, attach structural adjustment conditionalities to loans, which are generally geared toward loan repayment through austerity measures such as the elimination of state subsidies and the privatization of state services. For example, the World Bank presses poor nations to eliminate subsidies for fertilizer even while many farmers cannot afford it at market prices. In Malawi, almost 5 million of its 13 million people used to need emergency food aid, but after the government changed policy and introduced subsidies for fertilizer and seed, farmers produced record-breaking corn harvests in 2006 and 2007 as Malawi became a major food exporter. A major proportion of aid from donor nations is tied, mandating that a receiving nation spend on products and expertise originating only from the donor country. US law requires food aid to be spent on buying food at home, instead of where the hungry live, and, as a result, half of what is spent is used on transport.

Distressed securities funds, also known as vulture funds, buy up the debt of poor nations cheaply and then sue those countries for the full value of the debt plus interest, which can be ten or a hundred times what they paid. They may pursue any companies which do business with their target country, to force them to pay the fund instead. Considerable resources are diverted to costly court cases. For example, a court in Jersey ordered the Democratic Republic of the Congo to pay an American speculator $100 million in 2010. The UK, the Isle of Man and Jersey have since banned such payments.

The loss of basic needs providers emigrating from impoverished countries has a damaging effect. As of 2004, there were more Ethiopia-trained doctors living in Chicago than in Ethiopia. Proposals to mitigate the problem include compulsory government service for graduates of public medical and nursing schools, and promoting medical tourism so that health care personnel have more incentive to practice in their home countries. Emigration is easy for Ugandan doctors: only 69% of health care jobs in Uganda were filled, as many doctors sought work in other countries, leaving too few adequately skilled doctors behind.

Poverty and lack of access to birth control can lead to population increases that put pressure on local economies and access to resources, amplifying economic inequality and deepening poverty. Better education for both men and women, and more control over their own lives, reduces population growth through family planning.
According to the United Nations Population Fund (UNFPA), those who receive better education can earn money for their lives, thereby strengthening economic security. The following are strategies used or proposed to increase personal incomes among the poor. Raising farm incomes is described as the core of the antipoverty effort, as three-quarters of the poor today are farmers. Estimates show that growth in the agricultural productivity of small farmers is, on average, at least twice as effective in benefiting the poorest half of a country's population as growth generated in nonagricultural sectors.

A guaranteed minimum income ensures that every citizen will be able to purchase a desired level of basic needs. A basic income (or negative income tax) is a system of social security that periodically provides each citizen, rich or poor, with a sum of money that is sufficient to live on; a simple numeric sketch of the two mechanisms is given at the end of this passage. Studies of large cash-transfer programs in Ethiopia, Kenya, and Malawi show that the programs can be effective in increasing consumption, schooling, and nutrition, whether or not they are tied to such conditions. Proponents argue that a basic income is more economically efficient than a minimum wage and unemployment benefits, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. In 1968, Paul Samuelson, John Kenneth Galbraith and another 1,200 economists signed a document calling for the US Congress to introduce a system of income guarantees. Winners of the Nobel Prize in Economics who support a basic income, and whose political convictions are often diverse, include Herbert A. Simon, Friedrich Hayek, Robert Solow, Milton Friedman, Jan Tinbergen, James Tobin and James Meade.

Income grants are argued to be vastly more efficient in extending basic needs to the poor than subsidizing supplies, whose effectiveness in poverty alleviation is diluted by the non-poor who enjoy the same subsidized prices. With cars and other appliances, the wealthiest 20% of Egypt uses about 93% of the country's fuel subsidies. In some countries, fuel subsidies are a larger part of the budget than health and education. A 2008 study concluded that the money spent on in-kind transfers in India in a year could lift all of India's poor out of poverty for that year if transferred directly. The primary obstacle argued against direct cash transfers is the impracticality, for poor countries, of such large and direct transfers. In practice, payments verified by iris scanning are used in war-torn Democratic Republic of Congo and Afghanistan, while India is phasing out its fuel subsidies in favor of direct transfers. Additionally, the famine relief model increasingly used by aid groups calls for giving cash or cash vouchers to the hungry to pay local farmers, instead of buying food from donor countries as is often required by law, which wastes money on transport costs.

Corruption often leads to many civil services being treated by governments as employment agencies for loyal supporters, and so it can mean going through 20 procedures, paying $2,696 in fees, and waiting 82 business days to start a business in Bolivia, while in Canada it takes two days, two registration procedures, and $280 to do the same. Such costly barriers favor big firms at the expense of small enterprises, where most jobs are created. Often, businesses have to bribe government officials even for routine activities, which is, in effect, a tax on business.
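To make the basic-income/negative-income-tax comparison above concrete, here is a minimal sketch in Python. The guarantee level, taper rate, and flat tax are illustrative assumptions of ours, not figures from the text or from any particular proposal:

```python
# Minimal sketch contrasting a negative income tax (NIT) with a universal
# basic income (BI). GUARANTEE and the rates below are hypothetical.

GUARANTEE = 12_000   # assumed income floor per person per year
TAPER = 0.50         # NIT benefit withdrawn at 50 cents per earned dollar

def negative_income_tax(earned):
    """Benefit shrinks as earnings rise, but net income never falls
    below the guarantee and always rises with extra earnings."""
    benefit = max(0.0, GUARANTEE - TAPER * earned)
    return earned + benefit

def basic_income(earned, flat_tax=0.30):
    """Everyone receives the full grant; it is clawed back through the
    ordinary tax system rather than a separate means test."""
    return earned * (1 - flat_tax) + GUARANTEE

for earned in (0, 10_000, 24_000, 50_000):
    print(f"earned {earned:>6}: NIT {negative_income_tax(earned):>8.0f}, "
          f"BI {basic_income(earned):>8.0f}")
```

Note that neither scheme ever leaves a person worse off for earning more, which is the efficiency point proponents make when comparing these transfers with minimum wages and means-tested benefits.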
Notable reductions in poverty in recent decades have occurred in China and India, mostly as a result of the abandonment of collective farming in China and the ending of the central planning model known as the License Raj in India. The World Bank concludes that governments and feudal elites extending to the poor the right to the land they live on and use is "the key to reducing poverty", citing that land rights greatly increase poor people's wealth, in some cases doubling it. Although approaches varied, the World Bank said the key issues were security of tenure and ensuring that land transaction costs were low.

Greater access to markets brings more income to the poor, and road infrastructure has a direct impact on poverty. Additionally, migration from poorer countries resulted in $328 billion sent from richer to poorer countries in 2010, more than double the $120 billion in official aid flows from OECD members. In 2011, India received $52 billion from its diaspora, more than it took in in foreign direct investment.

Microloans, made famous by the Grameen Bank, are small amounts of money loaned to farmers or villages, mostly to women, who can then obtain physical capital to increase their economic rewards. However, microlending has been criticized, even by its founder Muhammad Yunus, for generating hyperprofits off the poor, and in India Arundhati Roy asserts that some 250,000 debt-ridden farmers have been driven to suicide. Those in poverty place overwhelming importance on having a safe place to save money, much more so than on receiving loans. Additionally, a large part of microfinance loans are spent not on investments but on products that would usually be paid for by a checking or savings account. Microsavings schemes are designed to make savings products available to the poor, who make small deposits. Mobile banking uses the wide availability of mobile phones to address the problem of the heavy regulation and costly maintenance of savings accounts. This usually involves a network of agents, mostly shopkeepers rather than bank branches, who take deposits in cash and credit them to a virtual account on customers' phones. Cash transfers can be made between phones and issued back in cash with a small commission, making remittances safer.

Oxfam, among others, has called for an international movement to end extreme wealth concentration, arguing that the concentration of resources in the hands of the top 1% depresses economic activity and makes life harder for everyone else, particularly those at the bottom of the economic ladder. Oxfam says that the gains of the world's billionaires in 2017, which amounted to $762 billion, were enough to end extreme global poverty seven times over.

See also: Causes of poverty

The cause of poverty is a highly ideologically charged subject, as different causes point to different remedies. Very broadly speaking, the socialist tradition locates the roots of poverty in problems of distribution and in the use of the means of production as capital benefiting individuals, and calls for the redistribution of wealth as the solution, whereas the neoliberal school of thought holds that creating conditions for profitable private investment is the solution. Neoliberal think tanks have received extensive funding, and the ability to apply many of their ideas in highly indebted countries in the global South as a condition for receiving emergency loans from the International Monetary Fund.
The existence of inequality is in part due to a set of self-reinforcing behaviors that together constitute one aspect of the cycle of poverty. These behaviors, in addition to unfavorable external circumstances, also explain the existence of the Matthew effect, which not only exacerbates existing inequality but makes it more likely to become multigenerational. Widespread multigenerational poverty is an important contributor to civil unrest and political instability. For example, Raghuram G. Rajan, former governor of the Reserve Bank of India and former chief economist at the International Monetary Fund, has identified the ever-widening gulf between the rich and the poor, especially in the US, as one of the main fault lines that caused financial institutions to pump money into subprime mortgages, at political behest, as a palliative and not a remedy for poverty, causing the financial crisis of 2007–2009. In Rajan's view, the main cause of the increasing gap between high-income and low-income earners was the latter's lack of equal access to higher education.

Empirical research studying the impact of dynastic politics on provincial poverty levels found a positive correlation between dynastic politics and poverty: the higher the proportion of dynastic politicians in power in a province, the higher its poverty rate. There is significant evidence that these political dynasties use their political dominance over their respective regions to enrich themselves, using methods such as graft or outright bribery of legislators.

Many scholars and public intellectuals argue that, throughout most of human history, extreme poverty was the norm for roughly 90% of the population, and that only the emergence of industrial capitalism in the 19th century lifted masses of people out of it. This narrative is advanced by, among others, Martin Ravallion, Nicholas Kristof, and Steven Pinker. Some academics, including Dylan Sullivan and Jason Hickel, have challenged this contemporary mainstream narrative on poverty, arguing that extreme poverty was not the norm throughout human history but emerged during "periods of severe social and economic dislocation," including high European feudalism and the apex of the Roman Empire, and that it expanded significantly after 1500 with the emergence of colonialism and the beginnings of capitalism, stating that "the expansion of the capitalist world-system caused a dramatic and prolonged process of impoverishment on a scale unparalleled in recorded history." Sullivan and Hickel assert that only with the rise of anti-colonial and socialist political movements in the 20th century did human welfare begin to see significant improvement.

Main article: Environmentalism of the poor
See also: Climate change and poverty

Important studies such as the Brundtland Report concluded that poverty causes environmental degradation, while other theories, such as environmentalism of the poor, conclude that the global poor may be the most important force for sustainability. Either way, the poor suffer most from environmental degradation caused by reckless exploitation of natural resources by the rich. This unfair distribution of environmental burdens and benefits has generated the global environmental justice movement. A report published in 2013 by the World Bank, with support from the Climate & Development Knowledge Network, found that climate change was likely to hinder future attempts to reduce poverty.
The report presented the likely impacts of present-day, 2 °C and 4 °C warming on agricultural production, water resources, coastal ecosystems and cities across Sub-Saharan Africa, South Asia and South East Asia. The impacts of a temperature rise of 2 °C included: regular food shortages in Sub-Saharan Africa; shifting rain patterns in South Asia, leaving some parts under water and others without enough water for power generation, irrigation or drinking; degradation and loss of reefs in South East Asia, resulting in reduced fish stocks; and coastal communities and cities more vulnerable to increasingly violent storms. In 2016, a UN report claimed that by 2030 an additional 122 million people could be driven into extreme poverty because of climate change.

Global warming can also lead to a deficiency in water availability: with higher temperatures and CO2 levels, plants consume more water, leaving less for people. As a consequence, water in rivers and streams will decline in mid-latitude regions such as Central Asia, Europe and North America. And if CO2 levels continue to rise, or even remain the same, droughts will set in much faster and last longer. According to a 2016 study led by Arjen Hoekstra, Professor of Water Management, four billion people are affected by water scarcity at least one month per year.

Among some individuals, poverty is considered a necessary or desirable condition, which must be embraced to reach certain spiritual, moral, or intellectual states. Poverty is often understood to be an essential element of renunciation in religions such as Buddhism, Hinduism (only for monks, not for lay persons) and Jainism, whilst in Christianity, in particular Roman Catholicism, it is one of the evangelical counsels. The main aim of giving up things of the materialistic world is to withdraw oneself from sensual pleasures, as they are considered illusionary and only temporary in some religions, such as the concept of dunya in Islam. This self-invited poverty (or giving up of pleasures) is different from poverty caused by economic imbalance. Some Christian communities, such as the Simple Way, the Bruderhof, and the Amish, value voluntary poverty; some even take a vow of poverty, similar to that of the traditional Catholic orders, in order to live a more complete life of discipleship. Benedict XVI distinguished "poverty chosen" (the poverty of spirit proposed by Jesus) from "poverty to be fought" (unjust and imposed poverty). He considered that the moderation implied in the former favors solidarity and is a necessary condition for fighting effectively to eradicate the abuse of the latter. As indicated above, the reduction of poverty can result from religion, but it can also result from solidarity.

Critics of neoliberalism have therefore looked at the evidence that documents the results of this great experiment of the past 30 years, in which many markets have been set free. Looking at the evidence, we can see that the total amount of global trade has increased significantly, but that global poverty has increased, with more people today living in abject poverty than before neoliberalism. By any measure, the speed and scale of China's poverty reduction is historically unprecedented. "If, in 1987–1988, 2 percent of the Russian people lived in poverty (i.e., survived on less than $4 a day), by 1993–1995 the number reached 50 percent: in just seven years half the Russian population became destitute. So, what is the balance sheet of transition?
Only three or at most five or six countries could be said to be on the road to becoming a part of the rich and (relatively) stable capitalist world. Many of the other countries are falling behind, and some are so far behind that they cannot aspire to go back to the point where they were when the Wall fell for several decades."

Alongside ambitious investment in schooling girls (and more broadly, of course, all children), priority should be given to making high-quality family-planning services available to every woman on the planet, while economic, geographic, and cultural barriers to access should be removed. The combination of institutional support to plan one's child-bearing choices and educational attainment, including enhanced opportunity for higher education for women, yields immediate fertility declines.

Among those who have become convinced of the virtues of the basic income approach are several Nobel Prize-winning economists of surprisingly diverse political convictions: Milton Friedman, Herbert Simon, Robert Solow, Jan Tinbergen and James Tobin (besides, of course, James Meade, who was an advocate from his younger days). I would pursue my recommendations of years ago for a negative income tax.
By the end of this section, you will be able to:
- State Hooke's law.
- Explain Hooke's law using a graphical representation of the relationship between deformation and applied force.
- Discuss three types of deformation: changes in length, sideways shear, and changes in volume.
- Describe, with examples, Young's modulus, the shear modulus, and the bulk modulus.
- Determine the change in length given mass, length, and radius.

We now move from consideration of forces that affect the motion of an object (such as friction and drag) to those that affect an object's shape. If a bulldozer pushes a car into a wall, the car will not move but it will noticeably change shape. A change in shape due to the application of a force is a deformation. Even very small forces are known to cause some deformation. For small deformations, two important characteristics are observed. First, the object returns to its original shape when the force is removed; that is, the deformation is elastic for small deformations. Second, the size of the deformation is proportional to the force; that is, for small deformations, Hooke's law is obeyed. In equation form, Hooke's law is given by

F = kΔL,

where ΔL is the amount of deformation (the change in length, for example) produced by the force F, and k is a proportionality constant that depends on the shape and composition of the object and the direction of the force. Note that this force is a function of the deformation ΔL; it is not constant as a kinetic friction force is. Rearranging this to

ΔL = F/k

makes it clear that the deformation is proportional to the applied force. Figure 5.11 shows the Hooke's law relationship between the extension ΔL of a spring or of a human bone. For metals or springs, the straight-line region in which Hooke's law pertains is much larger. Bones are brittle, so the elastic region is small and the fracture abrupt. Eventually a large enough stress to the material will cause it to break or fracture.

Hooke's Law: F = kΔL, where ΔL is the amount of deformation (the change in length, for example) produced by the force F, and k is a proportionality constant that depends on the shape and composition of the object and the direction of the force.

The proportionality constant k depends upon a number of factors for the material. For example, a guitar string made of nylon stretches when it is tightened, and the elongation ΔL is proportional to the force applied (at least for small deformations). Thicker nylon strings and ones made of steel stretch less for the same applied force, implying they have a larger k (see Figure 5.12). Finally, all three strings return to their normal lengths when the force is removed, provided the deformation is small. Most materials will behave in this manner if the deformation is less than about 0.1%, or about 1 part in 10³.

How would you go about measuring the proportionality constant k of a rubber band? If a rubber band stretched 3 cm when a 100-g mass was attached to it, how much would it stretch if two similar rubber bands were attached to the same mass, put together either in parallel or, alternatively, tied together in series?

We now consider three specific types of deformation: changes in length (tension and compression), sideways shear (stress), and changes in volume. All deformations are assumed to be small unless otherwise stated.

Changes in Length—Tension and Compression: Elastic Modulus

A change in length ΔL is produced when a force is applied to a wire or rod parallel to its length L₀, either stretching it (a tension) or compressing it. (See Figure 5.13.)
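As a numerical companion to the take-home experiment above, here is a minimal Python sketch. The helper function is ours, and the parallel/series reasoning is the standard spring-combination argument rather than something worked out in the text:

```python
# Hooke's law, F = k * dL, applied to the rubber-band experiment above.
# Illustrative inputs from the question: a 100-g mass stretches one band 3 cm.

g = 9.80  # m/s^2

def spring_constant(mass_kg, stretch_m):
    """Solve F = k * dL for k, with F the weight of the hanging mass."""
    return mass_kg * g / stretch_m

k_single = spring_constant(0.100, 0.03)
print(f"one band: k = {k_single:.1f} N/m")        # ~32.7 N/m

# Two identical bands in parallel share the load: k doubles, stretch halves.
# In series each band carries the full force: k halves, stretch doubles.
F = 0.100 * g
print(f"parallel: stretch = {F / (2 * k_single) * 100:.1f} cm")   # 1.5 cm
print(f"series:   stretch = {F / (k_single / 2) * 100:.1f} cm")   # 6.0 cm
```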
Experiments have shown that the change in length (ΔL) depends on only a few variables. As already noted, ΔL is proportional to the force F and depends on the substance from which the object is made. Additionally, the change in length is proportional to the original length L₀ and inversely proportional to the cross-sectional area A of the wire or rod. For example, a long guitar string will stretch more than a short one, and a thick string will stretch less than a thin one. We can combine all these factors into one equation for ΔL:

ΔL = (1/Y)(F/A)L₀,

where ΔL is the change in length, F the applied force, Y is a factor, called the elastic modulus or Young's modulus, that depends on the substance, A is the cross-sectional area, and L₀ is the original length. Table 5.3 lists values of Y for several materials; those with a large Y are said to have a large tensile strength because they deform less for a given tension or compression.

Table 5.3 Elastic moduli (values in 10⁹ N/m²)¹
|Material||Young's modulus Y (tension–compression)||Shear modulus S||Bulk modulus B|
|Bone – tension||16||80||8|
|Bone – compression||9||—||—|

Young's moduli are not listed for liquids and gases in Table 5.3 because they cannot be stretched or compressed in only one direction. Note that there is an assumption that the object does not accelerate, so that there are actually two applied forces of magnitude F acting in opposite directions. For example, the strings in Figure 5.13 are being pulled down by a force of magnitude w and held up by the ceiling, which also exerts a force of magnitude w.

The Stretch of a Long Cable

Suspension cables are used to carry gondolas at ski resorts. (See Figure 5.14.) Consider a suspension cable that includes an unsupported span of 3020 m. Calculate the amount of stretch in the steel cable. Assume that the cable has a diameter of 5.6 cm and that the maximum tension it can withstand is 3.0 × 10⁶ N.

The force F is equal to the maximum tension, or F = 3.0 × 10⁶ N. The cross-sectional area is A = πr² = 2.46 × 10⁻³ m². The equation ΔL = (1/Y)(F/A)L₀ can be used to find the change in length. All quantities are known; with Y = 210 × 10⁹ N/m² for steel,

ΔL = 18 m.

This is quite a stretch, but only about 0.6% of the unsupported length. Effects of temperature upon length might be important in these environments.

Bones, on the whole, do not fracture due to tension or compression. Rather, they generally fracture due to sideways impact or bending, resulting in the bone shearing or snapping. The behavior of bones under tension and compression is important because it determines the load the bones can carry. Bones are classified as weight-bearing structures, like columns in buildings and trees. Weight-bearing structures have special features: columns in buildings have steel-reinforcing rods, while trees and bones are fibrous. The bones in different parts of the body serve different structural functions and are prone to different stresses. Thus the bone at the top of the femur is arranged in thin sheets separated by marrow, while in other places the bones can be cylindrical and filled with marrow or simply solid. Overweight people have a tendency toward bone damage due to sustained compression in bone joints and tendons.

Another biological example of Hooke's law occurs in tendons. Functionally, the tendon (the tissue connecting muscle to bone) must stretch easily at first when a force is applied, but offer a much greater restoring force for a greater strain. Figure 5.15 shows a stress-strain relationship for a human tendon. Some tendons have a high collagen content, so there is relatively little strain, or length change; others, like support tendons (as in the leg), can change length by up to 10%.
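A short Python check of the cable example (steel's Young's modulus of 210 × 10⁹ N/m² is the standard tabulated value assumed here):

```python
import math

# Cable stretch from dL = (1/Y) * (F/A) * L0.

Y_steel = 210e9        # Young's modulus of steel, N/m^2
F = 3.0e6              # maximum tension, N
L0 = 3020.0            # unsupported span, m
r = 0.056 / 2          # radius from the 5.6 cm diameter, m

A = math.pi * r**2                  # cross-sectional area, ~2.46e-3 m^2
dL = (1 / Y_steel) * (F / A) * L0   # change in length

print(f"A  = {A:.2e} m^2")
print(f"dL = {dL:.0f} m ({dL / L0:.1%} of the span)")   # ~18 m, ~0.6%
```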
Note that this stress-strain curve is nonlinear, since the slope of the line changes in different regions. In the first part of the stretch, called the toe region, the fibers in the tendon begin to align in the direction of the stress; this is called uncrimping. In the linear region, the fibrils are stretched, and in the failure region individual fibers begin to break. A simple model of this relationship can be illustrated by springs in parallel: different springs are activated at different lengths of stretch. Examples of this are given in the problems at the end of this chapter. Ligaments (tissue connecting bone to bone) behave in a similar way.

Unlike bones and tendons, which need to be strong as well as elastic, the arteries and lungs need to be very stretchable. The elastic properties of the arteries are essential for blood flow. The pressure in the arteries increases, and the arterial walls stretch, when blood is pumped out of the heart. When the aortic valve shuts, the pressure in the arteries drops, and the arterial walls relax to maintain the blood flow. When you feel your pulse, you are feeling exactly this: the elastic behavior of the arteries as the blood gushes through with each pump of the heart. If the arteries were rigid, you would not feel a pulse. The heart is also an organ with special elastic properties. The lungs expand with muscular effort when we breathe in but relax freely and elastically when we breathe out. Our skin is particularly elastic, especially when we are young. A young person can go from 100 kg to 60 kg with no visible sag in their skin. The elasticity of all organs reduces with age. Gradual physiological aging through reduction in elasticity starts in the early 20s.

Calculating Deformation: How Much Does Your Leg Shorten When You Stand on It?

Calculate the change in length of the upper leg bone (the femur) when a 70.0-kg man supports 62.0 kg of his mass on it, assuming the bone to be equivalent to a uniform rod that is 40.0 cm long and 2.00 cm in radius.

The force is equal to the weight supported, or F = mg = (62.0 kg)(9.80 m/s²) = 607.6 N, and the cross-sectional area is A = πr² = 1.257 × 10⁻³ m². The equation ΔL = (1/Y)(F/A)L₀ can be used to find the change in length. All quantities except ΔL are known. Note that the compression value of Young's modulus for bone must be used here. Thus,

ΔL = 2 × 10⁻⁵ m.

This small change in length seems reasonable, consistent with our experience that bones are rigid. In fact, even the rather large forces encountered during strenuous physical activity do not compress or bend bones by large amounts. Although bone is rigid compared with fat or muscle, several of the substances listed in Table 5.3 have larger values of Young's modulus Y. In other words, they are more rigid and have greater tensile strength.

The equation for change in length is traditionally rearranged and written in the following form:

F = YA(ΔL/L₀).

The ratio of force to area, F/A, is defined as stress (measured in N/m²), and the ratio of the change in length to length, ΔL/L₀, is defined as strain (a unitless quantity). In other words,

stress = Y × strain.

In this form, the equation is analogous to Hooke's law, with stress analogous to force and strain analogous to deformation. If we again rearrange this equation to the form

F = (YA/L₀)ΔL,

we see that it is the same as Hooke's law with a proportionality constant k = YA/L₀. This general idea, that force and the deformation it causes are proportional for small deformations, applies to changes in length, sideways bending, and changes in volume.

Stress: The ratio of force to area, F/A, is defined as stress, measured in N/m².
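The femur example, together with its equivalent Hooke's-law constant k = YA/L₀, in the same style (bone's compression modulus of 9 × 10⁹ N/m² is the Table 5.3 value assumed here):

```python
import math

# Femur compression from dL = (1/Y) * (F/A) * L0, then the same relation
# rewritten as Hooke's law, F = k * dL with k = Y * A / L0.

Y_bone_comp = 9e9      # Young's modulus of bone in compression, N/m^2
g = 9.80               # m/s^2

m_supported = 62.0     # body mass carried by the femur, kg
L0 = 0.400             # bone length, m
r = 0.0200             # bone radius, m

F = m_supported * g                      # ~608 N
A = math.pi * r**2                       # ~1.257e-3 m^2
dL = (1 / Y_bone_comp) * (F / A) * L0

k = Y_bone_comp * A / L0                 # effective spring constant
print(f"dL = {dL:.1e} m")                # ~2e-5 m: bones barely compress
print(f"k  = {k:.2e} N/m, check F/k = {F / k:.1e} m")
```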
Strain: The ratio of the change in length to length, ΔL/L₀, is defined as strain (a unitless quantity). In other words, stress = Y × strain.

Sideways Stress: Shear Modulus

Figure 5.16 illustrates what is meant by a sideways stress or a shearing force. Here the deformation is called Δx, and it is perpendicular to L₀, rather than parallel as with tension and compression. Shear deformation behaves similarly to tension and compression and can be described with similar equations. The expression for shear deformation is

Δx = (1/S)(F/A)L₀,

where S is the shear modulus (see Table 5.3) and F is the force applied perpendicular to L₀ and parallel to the cross-sectional area A. Again, to keep the object from accelerating, there are actually two equal and opposite forces F applied across opposite faces, as illustrated in Figure 5.16. The equation is logical; for example, it is easier to bend a long thin pencil (small A) than a short thick one, and both are more easily bent than similar steel rods (large S).

Shear deformation: Δx = (1/S)(F/A)L₀, where S is the shear modulus and F is the force applied perpendicular to L₀ and parallel to the cross-sectional area A.

Examination of the shear moduli in Table 5.3 reveals some telling patterns. For example, shear moduli are less than Young's moduli for most materials. Bone is a remarkable exception. Its shear modulus is not only greater than its Young's modulus, but it is as large as that of steel. This is one reason that bones can be long and relatively thin. Bones can support loads comparable to those supported by concrete and steel. Most bone fractures are caused not by compression but by excessive twisting and bending.

The spinal column (consisting of 26 vertebral segments separated by discs) provides the main support for the head and upper part of the body. The spinal column has normal curvature for stability, but this curvature can be increased, leading to increased shearing forces on the lower vertebrae. Discs are better at withstanding compressional forces than shear forces. Because the spine is not vertical, the weight of the upper body exerts some of both. Pregnant women and people who are overweight (with large abdomens) need to move their shoulders back to maintain balance, thereby increasing the curvature of their spine and so increasing the shear component of the stress. An increased angle due to more curvature increases the shear forces along the plane. These higher shear forces increase the risk of back injury through ruptured discs. The lumbosacral disc (the wedge-shaped disc below the last vertebra) is particularly at risk because of its location.

The shear moduli for concrete and brick are very small; they are too highly variable to be listed. Concrete used in buildings can withstand compression, as in pillars and arches, but is very poor against shear, as might be encountered in heavily loaded floors or during earthquakes. Modern structures were made possible by the use of steel and steel-reinforced concrete. Almost by definition, liquids and gases have shear moduli near zero, because they flow in response to shearing forces.

Calculating Force Required to Deform: That Nail Does Not Bend Much Under a Load

Find the mass of the picture hanging from a steel nail as shown in Figure 5.17, given that the nail bends only 1.80 µm. (Assume the shear modulus is known to two significant figures.)

The force F on the nail (neglecting the nail's own weight) is the weight of the picture w. If we can find w, then the mass of the picture is just w/g. The equation Δx = (1/S)(F/A)L₀ can be solved for F. Solving the equation for F, we see that all other quantities can be found:

F = (SA/L₀)Δx.

S is found in Table 5.3 and is S = 80 × 10⁹ N/m².
Changes in Volume: Bulk Modulus

An object will be compressed in all directions if inward forces are applied evenly on all its surfaces as in Figure 5.18. It is relatively easy to compress gases and extremely difficult to compress liquids and solids. For example, air in a wine bottle is compressed when it is corked. But if you try corking a brim-full bottle, you cannot compress the wine—some must be removed if the cork is to be inserted. The reason for these different compressibilities is that atoms and molecules are separated by large empty spaces in gases but packed close together in liquids and solids. To compress a gas, you must force its atoms and molecules closer together. To compress liquids and solids, you must actually compress their atoms and molecules, and very strong electromagnetic forces in them oppose this compression.

We can describe the compression or volume deformation of an object with an equation. First, we note that a force "applied evenly" is defined to have the same stress, or ratio of force to area F/A, on all surfaces. The deformation produced is a change in volume ΔV, which is found to behave very similarly to the shear, tension, and compression previously discussed. (This is not surprising, since a compression of the entire object is equivalent to compressing each of its three dimensions.) The relationship of the change in volume to other physical quantities is given by

ΔV = (1/B)(F/A) V₀,

where B is the bulk modulus (see Table 5.3), V₀ is the original volume, and F/A is the force per unit area applied uniformly inward on all surfaces. Note that no bulk moduli are given for gases.

What are some examples of bulk compression of solids and liquids? One practical example is the manufacture of industrial-grade diamonds by compressing carbon with an extremely large force per unit area. The carbon atoms rearrange their crystalline structure into the more tightly packed pattern of diamonds. In nature, a similar process occurs deep underground, where extremely large forces result from the weight of overlying material. Another natural source of large compressive forces is the pressure created by the weight of water, especially in deep parts of the oceans. Water exerts an inward force on all surfaces of a submerged object, and even on the water itself. At great depths, water is measurably compressed, as the following example illustrates.

Calculating Change in Volume with Deformation: How Much Is Water Compressed at Great Ocean Depths?

Calculate the fractional decrease in volume (ΔV/V₀) for seawater at 5.00 km depth, where the force per unit area is 5.00×10⁷ N/m².

The equation ΔV = (1/B)(F/A) V₀ is the correct physical relationship. All quantities in the equation except ΔV are known. Solving for the unknown gives

ΔV/V₀ = (1/B)(F/A).

Substituting known values with the value for the bulk modulus B from Table 5.3,

ΔV/V₀ = (5.00×10⁷ N/m²) / (2.2×10⁹ N/m²) = 0.023 = 2.3%.

Although measurable, this is not a significant decrease in volume considering that the force per unit area is about 500 atmospheres (1 million pounds per square foot). Liquids and solids are extraordinarily difficult to compress.
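A one-line numeric check of the seawater example in Python, using the bulk modulus of water (about 2.2×10⁹ N/m²) and the pressure quoted above:

B = 2.2e9          # bulk modulus of water, N/m^2 (Table 5.3 value)
F_over_A = 5.00e7  # force per unit area at 5.00 km depth, N/m^2 (about 500 atm)

dV_over_V0 = F_over_A / B   # fractional decrease in volume
print(dV_over_V0)           # about 0.023, i.e. 2.3%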
Conversely, very large forces are created by liquids and solids when they try to expand but are constrained from doing so—which is equivalent to compressing them to less than their normal volume. This often occurs when a contained material warms up, since most materials expand when their temperature increases. If the materials are tightly constrained, they deform or break their container. Another very common example occurs when water freezes. Water, unlike most materials, expands when it freezes, and it can easily fracture a boulder, rupture a biological cell, or crack an engine block that gets in its way. Other types of deformations, such as torsion or twisting, behave analogously to the tension, shear, and bulk deformations considered here.

1. Approximate and average values. Young's moduli for tension and compression sometimes differ but are averaged here. Bone has significantly different Young's moduli for tension and compression.
Common Core Standards: Math

2. Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the data points to the line.

You know the old saying, "A picture is worth a thousand words"? When it comes to math, a picture is probably worth a million. Sometimes (especially when it comes to statistics), one picture or graph can give you a crystal-clear look at what's going on, while a list of numbers just leaves you looking for the nearest exit.

One awesome example of this is a scatter plot. This shows the relationship between two different quantities. It gives you an instant idea of what the data is doing: whether it's grouped in some sort of a pattern or scattered all over the place (hence the name!). Usually, the data does follow a pattern, and fairly often this pattern takes the shape of a line. Mathematicians, being who they are, immediately want to know which line the data is closest to, because it can give us a good approximation of what other data points might be.

The easiest way to find this line is to take a straightedge (a ruler, the edge of a folded piece of paper, or our favorite, a piece of uncooked spaghetti) and try to get it to fit through as many of the points as possible. This line is called the line of best fit. (Yeah, every once in a while, mathematicians actually use a name that tells you what the thing is. This is one of those times.)

Students should understand that in many cases, a line of best fit can apply to a scatter plot and provide a good means of understanding the relationship between the variables. They should also be able to guesstimate the accuracy of the data by looking at how closely the line of best fit approximates the data points.

Sometimes, the "line" of best fit isn't a line at all, but a curve of some sort. It's possible for the best fit to be a parabola or exponential graph. Just because there isn't a line of best fit doesn't mean the variables aren't somehow related. And if they aren't related for some reason, then no graph—line or curve—will be a "best fit." On the other hand, students should know that enough relationships are linear (or linear enough) to have a line of best fit.
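For readers who want to see a line of best fit computed rather than eyeballed with spaghetti, here is a small sketch using Python and numpy (an assumption; the data points are made up for the example):

import numpy as np

# made-up (x, y) data that roughly follows a linear pattern
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# fit a degree-1 polynomial, i.e. the line of best fit y = m*x + b
m, b = np.polyfit(x, y, 1)
print(m, b)        # slope close to 2, intercept close to 0

# compare the line's predictions with the actual points to judge the fit
print(m * x + b)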
(TL;DR: Start with two vectors with equal numbers of elements. Multiply them element-wise. Sum the results. This is the dot product.)

LaTeX and MathJax warning for those viewing my feed: please view directly on website!

Hmmm…this is a tricky one! Uhhh…did you know that Kendrick Lamar's stage name used to be "K.Dot"? Last time, we learnt how to add vectors. It's time to learn about dot products!

Today's topic: dot products

Let's define two vectors, x and y, each with the same number of elements. Let's multiply these vectors element-wise. We'll take the first elements of our vectors and multiply them: x₁ × y₁. Let's take the second elements and multiply them: x₂ × y₂. Now add the element-wise products (and so on for any remaining elements): x₁y₁ + x₂y₂ + …. This, my friends, is the dot product of our vectors.

More generally, if we have an arbitrary vector x of n elements and another arbitrary vector y also of n elements, then the dot product is:

x · y = Σᵢ₌₁ⁿ xᵢ yᵢ

The dot product is equivalent to xᵀy. Let's come back to this next time when we talk about matrix multiplication.

What is that angular 'E' looking thing? For anyone who doesn't know how to read the dot product equation, let's dissect its right-hand side! Σ is the uppercase form of the Greek letter 'sigma'. In this context, Σ means 'sum'. So we know that we'll need to add some things. We have xᵢ and yᵢ. In an earlier post, we learnt that the subscript i refers to the i-th element of some vector. So we can refer to the first element of our vector x as x₁. We notice that yᵢ also shares the same subscript i. So we know that whenever we refer to the second element in x (i.e. x₂), we will be referring to the second element in y (i.e. y₂). We notice that xᵢ is next to yᵢ. So we're going to be multiplying elements of our vectors which occur in the same position, i. We see that below our uppercase sigma there is a little i = 1. We also notice that there is a little n above it. These mean "Let i = 1. Keep incrementing i until you reach n". What is n? It's the number of elements in our vectors! If we expand the right-hand side, we get:

x₁y₁ + x₂y₂ + … + xₙyₙ

This looks somewhat similar to the equation from the example earlier. Easy! These are the mechanics of dot products.

What the hell does this all mean anyway? For a deeper understanding of dot products (which is unfortunately beyond me right at this moment!) please refer to this video: The entire series in the playlist is so beautifully done. They are mesmerising!

How can we perform dot products in R? Let's define two vectors:

x <- c(1, 2, 3)
y <- c(4, 5, 6)

We can find the dot product of these two vectors using the %*% operator:

x %*% y
##      [,1]
## [1,]   32

What does R do if we simply multiply one vector by the other?

x * y
## 4 10 18

This is the element-wise product! If the dot product is simply the sum of the element-wise product, then x %*% y is equivalent to doing this:

sum(x * y)
## 32

In our previous posts, R allowed us to multiply vectors of different lengths. Notice how R doesn't allow us to calculate the dot product of vectors with different lengths:

x <- c(1, 2)
y <- c(3, 4, 5)
x %*% y

This is the exception that gets raised:

Error in x %*% y : non-conformable arguments

We have learnt the mechanics of calculating dot products. We can now finally move onto matrices. Ooooooh yeeeeeah.
Algebra is best defined as a generalized and abstracted basic mathematics. In basic math the emphasis is on simple operations applied to numbers or discrete values to provide a result. This is true for simple algebra. Elementary algebra steps beyond discrete numbers and constant terms to include variables, symbols that can represent a numeric value or quantity. Variables are literal numbers (or literal factors): a, b, c, x, y or z. Symbols specifying a constant value can be represented by a Greek, Latin or other symbol. The Greek letter π (pi) is a mathematical constant for the ratio of a circle's circumference to its diameter, approximately 3.1416. Literals, capitalized or lower-case, are different symbols. "A" is not "a", "B" is not "b". Lower-case is often used for a literal factor. Algebra has operations not considered basic math. The Greek letter ∑ (sigma) designates summation of a set having a range of values.

By combining constant and variable symbols, what are simply numbers in basic math expand into the concept of a term. In algebra, all mathematical operations are applied to terms. A number multiplying the literal part of a term is its coefficient. A simple term could be a number, a literal number, or a number and a literal number: 4x³, meaning "x" cubed is multiplied by the coefficient four.

Terms combined with math operations form expressions. An algebraic expression with two or more terms is a multinomial. A simple expression is 4x² + 2x + 6, an expression with 3 terms. When a value for "x" is provided the terms can be evaluated and the expression written as an equation, 4x² + 2x + 6 = y. The equation, by using an expression, provides a solution for y, the value of y. We could just as easily write 4x² + 2x + 6 as 2(2x² + x + 3), each an equivalent form of the other, an identity. Then 4x² + 2x + 6 = 2(2x² + x + 3) = y. Much of math from algebra onward is working with equations, expressions and forms of expressions.

Math equations and expressions can define very abstract algebraic relationships. Consider the mass of matter and energy. If we could prove mathematically that mass is equivalent to energy then that aspect of mass and energy becomes an expressed relationship, an expression of the other; the same thing with a different identity. This is what Albert Einstein demonstrated with E = MC². It means the energy contained in matter (E) is equal to the mass of the matter (M) times the speed of light (C) squared. This equation helped prove that the mass of matter and energy are not separate, but different forms of the same thing. I can't think of a better way to say it; an expression is a different form of the same!

Algebra is often taught as a math by itself to learn rules, properties and permutations of operations on terms using symbols. Its legacy is as the language of trigonometry and geometry, enabling the description of many physical shapes and forms in their absence, and therefore providing an abstract representation of physical shapes or forms. The algebraic equation is an expression of identity, an abstract representation by symbols, letters and operations that do not look anything like the physical shape or form. The specific shape or form can be created or constructed from the abstract representation. Advanced algebra curriculum is often referenced as pre-calculus or elementary functions. In algebra we write equations: 4x² + 2x + 6 = y.
In pre-calculus or elementary functions we write f(x) = 4x² + 2x + 6; f(x) is a function of x, and then f(x) = y. They are similar, with differences attributed to the subject emphasis on particular math topics by the author(s) of the text. f(x) often represents a two dimensional or three dimensional map relationship as Cartesian coordinates (x, y) or (x, y, z), where the equation defining f(x) specifies a set or range of values, the domain of the function, as a line, area or other function type.

Substitution Property – The equation formed by substituting one expression for an equal expression in an equation is equivalent to the original equation.
Substitution Property Example: 5x - 4x = 6; x(5 - 4) = 6; x(1) = 6; x = 6

Addition Property – The equation formed by adding (or subtracting) the same quantity to both sides of an equation is equivalent to the original equation.
Addition Property Examples: x - 4 = 6; x - 4 + 4 = 6 + 4; x = 10
x + 5 = 12; x + 5 - 5 = 12 - 5; x = 7

Multiplication Property – The equation formed by multiplying (or dividing) both sides of an equation by the same nonzero quantity is equivalent to the original equation.
Multiplication Property Examples: (1/3) • x = 6; 3 • (1/3) • x = 3 • 6; (1)x = 18; x = 18
5x = 20; 5x / 5 = 20 / 5; x = 4

Think of algebra as advanced basic math; the purpose of basic math is to provide the necessary foundation for algebra. Basic math often incorporates pre-algebra concepts, though that algebra is restricted to simple formulas relating trigonometric shapes, geometric shapes and lines. There usually isn't an absolute delineation that separates one topic from another in mathematics. It's fuzzy, it integrates. What is absolute is that algebra establishes the foundation for all other branches of mathematics beyond basic math.
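A small Python sketch of the expression/function idea described above; the function name f and the test values are just for illustration:

def f(x):
    # the expression 4x^2 + 2x + 6 written as a function of x
    return 4 * x**2 + 2 * x + 6

# evaluating the expression at a few values of x gives the corresponding y
for x in (0, 1, 2):
    print(x, f(x))    # (0, 6), (1, 12), (2, 26)

# the factored form 2(2x^2 + x + 3) is an identity: the same value for every x
for x in (0, 1, 2):
    print(f(x) == 2 * (2 * x**2 + x + 3))   # True each time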
Scholars are fairly certain (or not) that the mathematician Aryabhata (if that was his name), born sometime around 476 AD somewhere in central India, travelled to the city of Kusumapura for advanced studies. It is also believed by historians that he became the head of an institution there … or perhaps of the university at Nalanda instead … or perhaps of the observatory at Taregana. Nor is it certain where or when he died, although it was supposedly around 550. Rather more is known about his work, for it laid the basis for civilization's concepts regarding modern mathematics and astronomy. Although he wrote several scientific treatises, he is chiefly known for the 'Aryabhatiya,' a work in 108 verses across various topics. Among these are observations and calculations in algebra, arithmetic, and plane and spherical trigonometry; he also included early sine tables and quadratic equations. In the process, to sort out his equations, Aryabhata worked out a place-value system using letters to represent unknown values; and, not coincidentally, he devised an approximation of "pi." All this at the age of 23, according to his students. As if the mathematical insights weren't enough, the 'Aryabhatiya' also offers astronomical calculations based on these, notable for determining planetary periods in the solar system. Using the value of pi, Aryabhata calculated that the earth had a circumference of 24,835 miles – correct to within 0.2% and far closer than any other until the Europeans discovered the world wasn't flat after all. For hundreds of years the 'Aryabhatiya' was unknown to the rest of the world, until Islamic scholars translated it during the 9th Century. From there, it made its way into Europe in the 1200s, just in time to set off an "astronomical revolution."
- The Civilopedia entry claims that he "calculated that the earth had a circumference of 24,835 miles - correct to within 0.2% and far closer than any other..." This is inaccurate because Aryabhata's measurement was given in terms of yojanas, whose exact length is unknown. Furthermore, a length of 24,835 miles is 0.27% smaller than the actual circumference of 24,902 miles, which is not within 0.2%. A possible source of this inaccuracy is the New World Encyclopedia, which gives the same numbers.
Relating Addition and Subtraction

In this algebraic expressions instructional activity, students learn about inverse operations and how to solve problems using them. They then solve 11 addition and subtraction problems on their own.

See similar resources:

One-Step Equations—Addition and Subtraction
Just one step is all you need to find success in solving equations. The 27th installment in a series of 36 teaches how to solve one-step equations involving addition and subtraction. Tape diagrams help future mathematicians in this task. (6th grade Math, CCSS: Designed)

Understanding Subtraction of Integers and Other Rational Numbers
Subtraction is all about opposites and the fifth lesson in a series of 25 introduces the concept of subtracting integers. Pupils connect subtracting with discarding a card in the integer game. They develop the rule for subtracting... (7th grade Math, CCSS: Designed)

Adding Polynomials with Multiple Variables
While explaining how to add polynomial expressions with multiple variables, Sal uses academic vocabulary and a clear thinking process. This video is relatively short and would be appropriate for students working on homework or prepping... (3 min video, 9th–11th grade Math)

Addition and Subtraction of Mixed Numbers
Demonstrate the process of adding and subtracting mixed numbers using a video tutorial. A video instructor explains the steps of combining mixed numbers as she works through several examples. The examples add and/or subtract up to four... (6 min video, 3rd–7th grade Math, CCSS: Adaptable)

Study Jams! Addition & Subtraction of Decimals
So current with preteens is the topic of downloading tunes onto their computers! In a relatable lesson, viewers are taught to figure out if Zoe can afford to purchase two songs if she has $3.00 left to her credit. Mia talks them through... (4th–6th grade Math, CCSS: Adaptable)

Write and Solve Subtraction Equations Using a Bar Model
Subtraction means take away, but the bar model shows how subtraction relates to addition. The bar model organizes the parts of an equation into a visual model. Using fact families, your mathematicians will be able to organize the... (6 min video, 5th–7th grade Math, CCSS: Designed)
Interstellar travel is the term used for hypothetical crewed or uncrewed travel between stars or planetary systems. Interstellar travel will be much more difficult than interplanetary spaceflight; the distances between the planets in the Solar System are less than 30 astronomical units (AU)—whereas the distances between stars are typically hundreds of thousands of AU, and usually expressed in light-years. Because of the vastness of those distances, interstellar travel would require a high percentage of the speed of light; huge travel time, lasting from decades to millennia or longer; or a combination of both.

The speeds required for interstellar travel in a human lifetime far exceed what current methods of spacecraft propulsion can provide. Even with a hypothetically perfectly efficient propulsion system, the kinetic energy corresponding to those speeds is enormous by today's standards of energy development. Moreover, collisions by the spacecraft with cosmic dust and gas can produce very dangerous effects both to passengers and the spacecraft itself.

A number of strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems, to microscopic space probes. Many different spacecraft propulsion systems have been proposed to give spacecraft the required speeds, including nuclear propulsion, beam-powered propulsion, and methods based on speculative physics.

For both crewed and uncrewed interstellar travel, considerable technological and economic challenges need to be met. Even the most optimistic views about interstellar travel see it as only being feasible decades from now. However, in spite of the challenges, if or when interstellar travel is realised, a wide range of scientific benefits is expected.

Most interstellar travel concepts require a developed space logistics system capable of moving millions of tons to a construction / operating location, and most would require gigawatt-scale power for construction or power (such as Star Wisp or Light Sail type concepts). Such a system could grow organically if space-based solar power became a significant component of Earth's energy mix. Consumer demand for a multi-terawatt system would automatically create the necessary multi-million ton/year logistical system.

Contents:
- 1 Challenges
- 2 Prime targets for interstellar travel
- 3 Proposed methods
- 4 Propulsion
- 4.1 Rocket concepts
- 4.2 Non-rocket concepts
- 4.3 Theoretical concepts
- 5 Designs and studies
- 6 Non-profit organizations
- 7 Feasibility
- 8 Discovery of Earth-Like planets
- 9 See also
- 10 References
- 11 Further reading
- 12 External links

Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 1.5×10⁸ kilometers (93 million miles). Venus, the closest other planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 2018, Voyager 1, the farthest man-made object from Earth, is 141.5 AU away. The closest known star, Proxima Centauri, is approximately 268,332 AU away, or over 9,000 times farther away than Neptune.

Object | Distance (AU) | Light travel time
Venus (nearest planet) | 0.28 | 2.41 minutes
Neptune (farthest planet) | 29.8 | 4.1 hours
Voyager 1 | 141.5 | 19.61 hours
Proxima Centauri (nearest star and exoplanet) | 268,332 | 4.24 years

Because of this, distances between stars are usually expressed in light-years, defined as the distance that a light photon travels in a year.
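As a rough sanity check of the light travel times in the table above, here is a minimal Python sketch (one AU is about 499 light-seconds; small differences from the table come from rounding of the distances):

AU_IN_LIGHT_SECONDS = 499.0   # light takes about 499 s to cross one astronomical unit

distances_au = {
    "Venus": 0.28,
    "Neptune": 29.8,
    "Voyager 1": 141.5,
    "Proxima Centauri": 268332,
}

for name, d_au in distances_au.items():
    seconds = d_au * AU_IN_LIGHT_SECONDS
    # report the travel time in the most convenient unit
    if seconds < 3600:
        print(name, round(seconds / 60, 2), "minutes")
    elif seconds < 86400:
        print(name, round(seconds / 3600, 2), "hours")
    else:
        print(name, round(seconds / (86400 * 365.25), 2), "years")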
Light in a vacuum travels around 300,000 kilometres (186,000 mi) per second, so one light-year is some 9.461×10¹² kilometers (5.879 trillion miles), or 63,241 AU. Proxima Centauri is 4.243 light-years away.

Another way of understanding the vastness of interstellar distances is by scaling: One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one meter (3.28 ft). On this scale, the distance to Alpha Centauri A would be 276 kilometers (171 miles). The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600 of a light-year in 30 years and is currently moving at 1/18,000 the speed of light. At this rate, a journey to Proxima Centauri would take 80,000 years.

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv².

The velocity for a manned round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that due to the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 petajoules or 4.50×10¹⁷ joules or 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances.

A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that interstellar dust may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have been discussed in the literature, but many unknowns remain and, owing to the inhomogeneous distribution of interstellar matter around the Sun, will depend on the direction travelled. Although a high density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium.

The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the effects of exposure to ionizing radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome.

It has been argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination as the achievable speed of travel (derived from growth, even exponential growth) increases, there is a clear minimum in the total time to that destination from now. Voyages undertaken before the minimum will be overtaken by those that leave at the minimum, whereas voyages that leave after the minimum will never overtake those that left at the minimum.
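A toy illustration of Kennedy's wait calculation in Python. The starting speed, growth rate and distance below are invented for the example; the point is only that the total time (waiting plus travelling) has a clear minimum when achievable speed keeps growing:

# total time = years we wait before launching + travel time at the speed available then
# assume achievable cruise speed grows by 2% per year from 0.001 c today (made-up numbers)

DISTANCE_LY = 4.24   # e.g. Proxima Centauri
V0 = 0.001           # today's achievable speed, as a fraction of c (assumed)
GROWTH = 1.02        # assumed yearly growth factor of achievable speed

def total_time(wait_years):
    v = min(V0 * GROWTH ** wait_years, 0.99)   # speed available after waiting, capped below c
    return wait_years + DISTANCE_LY / v        # wait plus travel time, in years

best = min(range(0, 500), key=total_time)
print(best, round(total_time(best), 1))   # the optimum departure year and its total time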
Prime targets for interstellar travel

- Alpha Centauri (4.3 light-years) – Closest system. Three stars (G2, K1, M5). Component A is similar to the Sun (a G2 star). On August 24, 2016, the discovery of an Earth-size exoplanet (Proxima Centauri b) orbiting in the habitable zone of Proxima Centauri was announced.
- Barnard's Star (6 light-years) – Small, low-luminosity M5 red dwarf. Second closest to the Solar System.
- Sirius (8.7 light-years) – Large, very bright A1 star with a white dwarf companion.
- Epsilon Eridani (10.8 light-years) – Single K2 star slightly smaller and colder than the Sun. It has two asteroid belts, might have a giant and one much smaller planet, and may possess a Solar-System-type planetary system.
- Tau Ceti (11.8 light-years) – Single G8 star similar to the Sun. High probability of possessing a Solar-System-type planetary system: current evidence shows 5 planets with potentially two in the habitable zone.
- Wolf 1061 (~14 light-years) – Wolf 1061 c is 4.3 times the size of Earth; it may have rocky terrain. It also sits within the 'Goldilocks' zone where it might be possible for liquid water to exist.
- Gliese 581 planetary system (20.3 light-years) – Multiple planet system. The unconfirmed exoplanet Gliese 581g and the confirmed exoplanet Gliese 581d are in the star's habitable zone.
- Gliese 667C (22 light-years) – A system with at least six planets. A record-breaking three of these planets are super-Earths lying in the zone around the star where liquid water could exist, making them possible candidates for the presence of life.
- Vega (25 light-years) – A very young system possibly in the process of planetary formation.
- TRAPPIST-1 (39 light-years) – A recently discovered system which boasts 7 Earth-like planets, some of which may have liquid water. The discovery is a major advancement in finding a habitable planet and in finding a planet that could support life.

Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Slow, uncrewed probes

Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times starting from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes such as those used in the Voyager program. By taking along no crew, the cost and complexity of the mission is significantly reduced, although technology lifetime is still a significant issue, alongside obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot.
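To get a feel for the trip times quoted above, here is a minimal Python sketch that ignores acceleration, deceleration and relativistic effects and simply divides distance by cruise speed:

targets_ly = {"Alpha Centauri": 4.3, "Tau Ceti": 11.8, "TRAPPIST-1": 39}
speeds_c = {"Voyager 1 (~1/18,000 c)": 1 / 18000, "1% of c": 0.01, "10% of c": 0.10}

for star, d in targets_ly.items():
    for label, v in speeds_c.items():
        years = d / v   # naive travel time in years (distance in light-years / speed in c)
        print(star, label, round(years), "years")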
Fast, uncrewed probes Near-lightspeed nano spacecraft might be possible within the near future built on existing microchip technology with a newly developed nanoscale thruster. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space. Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent due to the vulnerability of very small probes to be easily deflected by magnetic fields, micrometeorites and other dangers to ensure the chances that at least one nanoprobe will survive the journey and reach the destination. Given the light weight of these probes, it would take much less energy to accelerate them. With onboard solar cells, they could continually accelerate using solar power. One can envision a day when a fleet of millions or even billions of these particles swarm to distant stars at nearly the speed of light and relay signals back to Earth through a vast interstellar communication network. Slow, manned missions In crewed missions, the duration of a slow interstellar journey presents a major obstacle and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on-board of the spacecraft. A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage. A robotic interstellar mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. Island hopping through interstellar space Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way. If a spaceship could average 10 percent of light speed (and decelerate at the destination, for manned missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see also the section below on propulsion methods), but none of them are ready for near-term (few decades) developments at acceptable cost. 
Assuming faster-than-light travel is impossible, one might conclude that a human can never make a round-trip farther from Earth than 20 light years if the traveler is active between the ages of 20 and 60. A traveler would never be able to reach more than the very few star systems that exist within the limit of 20 light years from Earth. This, however, fails to take into account relativistic time dilation. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth. Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology. From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. 
As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is an impressively fast journey for the crew. All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable and a tremendous heating load must be adequately handled. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle. A type of electric propulsion, spacecraft such as Dawn use an ion engine. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited by the chemical energy stored in the fuel’s molecular bonds, which limits the thrust to about 5 km/s. They produce a high thrust(about 10⁶ N),but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low force, but the top speed in principle is limited only by the electrical power available on the spacecraft and on the gas ions being accelerated. The exhaust speed of the charged particles range from 15 km/s to 35 km/s. Nuclear fission powered Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power Solar System exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion powered by a portable power-source, say a nuclear reactor, producing only small accelerations, would take centuries to reach for example 15% of the velocity of light, thus unsuitable for interstellar flight during a single human lifetime. Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to 12,000 km/s (7,500 mi/s). With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel and limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so no extra reaction mass need be bookkept in the mass ratio. Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. 
This propulsion system contains the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v allowing a flight-time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c). An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% to 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant, this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. In the 1970s the Nuclear Pulse Propulsion concept further was refined by Project Daedalus by use of externally triggered inertial confinement fusion, in this case producing fusion explosions via compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Nuclear fusion rockets Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of c. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. 
Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries. Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D–³He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10⁻³ g, with a ship initial mass of ~1700 metric tons, and payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state-of-the-art. Based on the concept's 2.2% burnup fraction it could achieve a pure fusion product exhaust velocity of ~3,000 km/s.

Antimatter rockets

An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds of several tens of percent that of light. Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required. Speculating that production and storage of antimatter should become feasible, two further issues need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc² would actually be available if the antimatter were simply allowed to annihilate into radiations thermally. Even so, the energy available for propulsion would be substantially higher than the ~1% of mc² yield of nuclear fusion, the next-best rival candidate. Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g. for 0.1g ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved. More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft.
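The fission and fusion exhaust-velocity figures quoted above can be roughly reproduced from the mass-energy fractions. Here is a minimal Python sketch using the non-relativistic approximation v ≈ c·sqrt(2ε) for a fraction ε of the fuel's rest mass converted into exhaust kinetic energy (a simplification for illustration, not a formula from the article):

import math

C = 299_792_458   # speed of light, m/s

# fraction of fuel rest mass released as energy (illustrative values within the quoted ranges)
fractions = {"fission (~0.1%)": 0.001, "fusion (~0.4%)": 0.004}

for name, eps in fractions.items():
    v = C * math.sqrt(2 * eps)   # idealized exhaust velocity, m/s
    print(name, round(v / C * 100, 1), "% of c")
    # prints roughly 4.5% of c for fission and 8.9% of c for fusion,
    # in line with the ~5% and 4-10% figures quoted in the text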
Rockets with an external energy source Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis has proposed for an interstellar probe, with energy supplied by an external laser from a base station powering an Ion thruster. A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem: In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be the same when the speed is 0.12c, preventing further acceleration. A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, whereas the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glen Research center also proposed a laser-powered, propulsion, sail ship that would host a diamond sail (of a few nanometers thick) powered with the use of solar energy. With this proposal, this interstellar ship would, theoretically, be able to reach 10 percent the speed of light. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium. |Mission||Laser Power||Vehicle Mass||Acceleration||Sail Diameter||Maximum Velocity (% of the speed of light)| |1. Flyby – Alpha Centauri, 40 years| |outbound stage||65 GW||1 t||0.036 g||3.6 km||11% @ 0.17 ly| |2. Rendezvous – Alpha Centauri, 41 years| |outbound stage||7,200 GW||785 t||0.005 g||100 km||21% @ 4.29 ly[dubious ]| |deceleration stage||26,000 GW||71 t||0.2 g||30 km||21% @ 4.29 ly| |3. 
Manned – Epsilon Eridani, 51 years (including 5 years exploring star system)| |outbound stage||75,000,000 GW||78,500 t||0.3 g||1000 km||50% @ 0.4 ly| |deceleration stage||21,500,000 GW||7,850 t||0.3 g||320 km||50% @ 10.4 ly| |return stage||710,000 GW||785 t||0.3 g||100 km||50% @ 10.4 ly| |deceleration stage||60,000 GW||785 t||0.3 g||100 km||50% @ 0.4 ly| Interstellar travel catalog to use photogravitational assists for a full stop The following table is based on work by Heller, Hippke and Kervella. |α Centauri A||101.25||4.36||1.52| |α Centauri B||147.58||4.36||0.50| - Successive assists at α Cen A and B could allow travel times to 75 yr to both stars. - Lightsail has a nominal mass-to-surface ratio (σnom) of 8.6×10−4 gram m−2 for a nominal graphene-class sail. - Area of the Lightsail, about 105 m2 = (316 m)2 - Velocity up to 37,300 km s−1 (12.5% c) Achieving start-stop interstellar trip times of less than a human lifetime require mass-ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively large linear accelerators could propel fuel to fission propelled space-vehicles, avoiding the limitations of the Rocket equation. Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative. It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter and it is not known if this could be produced in sufficient quantity. In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify a spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or hypothetical concept of negative mass. Artificial black hole A theoretical idea for enabling interstellar travel is by propelling a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and let the rest shoot out the back. Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. 
Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic string. The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes. Designs and studies The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship, based on the ideas of Robert Duncan-Enzmann. The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems. NASA has been researching interstellar travel since its formation, translating important foreign language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel. The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by. 100 Year Starship study The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel. The effort will also go by the moniker 100YSS. The 100 Year Starship study is the name of a one-year project to assess the attributes of and lay the groundwork for an organization that can carry forward the 100 Year Starship vision. Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible. - Project Orion, manned interstellar ship (1958–1968). - Project Daedalus, unmanned interstellar probe (1973–1978). - Starwisp, unmanned interstellar probe (1985). - Project Longshot, unmanned interstellar probe (1987–1988). - Starseed/launcher, fleet of unmanned interstellar probes (1996) - Project Valkyrie, manned interstellar ship (2009) - Project Icarus, unmanned interstellar probe (2009–2014). 
- Sun-diver, unmanned interstellar probe
- Breakthrough Starshot, fleet of unmanned interstellar probes, announced on April 12, 2016.

A few organisations dedicated to interstellar propulsion research and advocacy exist worldwide. These are still in their infancy, but are already supported by a membership of scientists, students and professionals.

- 100 Year Starship
- Icarus Interstellar
- Tau Zero Foundation (USA)
- Initiative for Interstellar Studies (UK)
- Fourth Millennium Foundation (Belgium)
- Space Development Cooperative (Canada)

The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. Astrophysicist Sten Odenwald stated that the basic problem is that, despite intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not appear to offer Earth-like planets in their stars' habitable zones. Given the multi-trillion-dollar expense of some of the proposed technologies, travelers would have to spend up to 200 years traveling at 20% of the speed of light to reach the best-known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list.

Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand will have fatal consequences. For example, a gram of matter moving at 90% of the speed of light carries a kinetic energy corresponding to a small nuclear bomb (around 30 kt of TNT).

Interstellar missions not for human benefit

Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for unmanned slow-cruising missions taking millennia to arrive. These probes would not be for human benefit in the sense that one cannot foresee whether anybody would still be around on Earth interested in the science data transmitted back. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow-cruising Genesis probes could be decelerated using a magnetic sail. Unmanned missions not for human benefit would hence be feasible.

Discovery of Earth-like planets

In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system orbiting an ultra-cool dwarf star 40 light-years away from our solar system. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water.
The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone. - Effect of spaceflight on the human body - Health threat from cosmic rays - Human spaceflight - Intergalactic travel - Interstellar communication - Interstellar travel in fiction - List of nearest terrestrial exoplanet candidates - Nuclear pulse propulsion - Uploaded astronaut - "Interstellar Travel". www.bis-space.com. Retrieved 2017-06-16. - Crawford, I. A. (2009). "The Astronomical, Astrobiological and Planetary Science Case for Interstellar Spaceflight". Journal of the British Interplanetary Society. 62: 415–421. arXiv: . Bibcode:2009JBIS...62..415C. - Conclusion of the 2016 Tennessee Valley Interstellar Workshop Space Solar Power Working Track run by Peter Garretson & Robert Kennedy. - JPL.NASA.GOV. "Where are the Voyagers – NASA Voyager". voyager.jpl.nasa.gov. Retrieved 2017-07-05. - "A Look at the Scaling". nasa.gov. NASA Glenn Research Center. - Millis, Marc G. (2011). "Energy, incessant obsolescence, and the first interstellar missions". arXiv: [physics.gen-ph]. - Zirnstein, E.J (2013). "Simulating the Compton-Getting Effect for Hydrogen Flux Measurements: Implications for IBEX-Hi and -Lo Observations". Astrophysical Journal. 778 (2): 112–127. Bibcode:2013ApJ...778..112Z. doi:10.1088/0004-637x/778/2/112. - Crawford, I. A. (2011). "Project Icarus: A review of local interstellar medium properties of relevance for space missions to the nearest stars". Acta Astronautica. 68 (7–8): 691–699. arXiv: . Bibcode:2011AcAau..68..691C. doi:10.1016/j.actaastro.2010.10.016. - Westover, Shayne (27 March 2012). Active Radiation Shielding Utilizing High Temperature Superconductors (PDF). NIAC Symposium. - Garrett, Henry (30 July 2012). "There and Back Again: A Layman's Guide to Ultra-Reliability for Interstellar Missions" (PDF). Archived from the original (PDF) on 8 May 2014. - Gibson, Dirk. "Terrestrial and Extraterrestrial Space Dangers: Outer Space Perils". - Forward, Robert L. (1996). "Ad Astra!". Journal of the British Interplanetary Society. 49 (1): 23–32. Bibcode:1996JBIS...49...23F. - Kennedy, Andrew (July 2006). "Interstellar Travel: The Wait Calculation and the Incentive Trap of Progress". Journal of the British Interplanetary Society. 59 (7): 239–246. Bibcode:2006JBIS...59..239K. - "Planet eps Eridani b". exoplanet.eu. Retrieved 2011-01-15. - Astronomers Have Discovered The Closest Potentially Habitable Planet. Yahoo News. December 18, 2015. - "Three Planets in Habitable Zone of Nearby Star". eso.org. - Croswell, Ken (3 December 2012). "ScienceShot: Older Vega Mature Enough to Nurture Life". sciencemag.org. Archived from the original on 4 December 2012. - Voyager. Louisiana State University: ERIC Clearing House. 1977. p. 12. Retrieved 2015-10-26. - "Project Dragonfly: The case for small, laser-propelled, distributed probes". Centauri Dreams. Retrieved 12 June 2015. - Nogrady, Bianca. "The myths and reality about interstellar travel". Retrieved 2017-06-16. - Daniel H. Wilson. Near-lightspeed nano spacecraft might be close. msnbc.msn.com. - Kaku, Michio. The Physics of the Impossible. Anchor Books. - Hein, A. M. "How Will Humans Fly to the Stars?". Retrieved 12 April 2013. - Hein, A. M.; et al. (2012). "World Ships: Architectures & Feasibility Revisited". 
Journal of the British Interplanetary Society. 65: 119–133. Bibcode:2012JBIS...65..119H. - Bond, A.; Martin, A.R. (1984). "World Ships – An Assessment of the Engineering Feasibility". Journal of the British Interplanetary Society. 37: 254–266. Bibcode:1984JBIS...37..254B. - Frisbee, R.H. (2009). Limits of Interstellar Flight Technology in Frontiers of Propulsion Science. Progress in Astronautics and Aeronautics. - Hein, Andreas M. "Project Hyperion: The Hollow Asteroid Starship – Dissemination of an Idea". Retrieved 12 April 2013. - "Various articles on hibernation". Journal of the British Interplanetary Society. 59: 81–144. 2006. - Crowl, A.; Hunt, J.; Hein, A.M. (2012). "Embryo Space Colonisation to Overcome the Interstellar Time Distance Bottleneck". Journal of the British Interplanetary Society. 65: 283–285. Bibcode:2012JBIS...65..283C. - "'Island-Hopping' to the Stars". Centauri Dreams. Retrieved 12 June 2015. - Crawford, I. A. (1990). "Interstellar Travel: A Review for Astronomers" (PDF). Quarterly Journal of the Royal Astronomical Society. 31: 377–400. Bibcode:1990QJRAS..31..377C. - Parkinson, Bradford W.; Spilker, James J. Jr.; Axelrad, Penina; Enge, Per (2014). 220.127.116.11Time Dilation. American Institute of Aeronautics and Astronautics. ISBN 978-1-56347-106-3. Retrieved 27 October 2015. - "Clock paradox III" (PDF). Taylor, Edwin F.; Wheeler, John Archibald (1966). "Chapter 1 Exercise 51". Spacetime Physics. W.H. Freeman, San Francisco. pp. 97–98. ISBN 0-7167-0336-X. - Crowell, Benjamin (2011), Light and Matter Section 4.3 - Yagasaki, Kazuyuki (2008). "Invariant Manifolds And Control Of Hyperbolic Trajectories On Infinite- Or Finite-Time Intervals". Dynamical Systems: an International Journal. 23 (3): 309–331. doi:10.1080/14689360802263571. Retrieved 27 October 2015. - Orth, C. D. (16 May 2003). "VISTA – A Vehicle for Interplanetary Space Transport Application Powered by Inertial Confinement Fusion" (PDF). Lawrence Livermore National Laboratory. - Clarke, Arthur C. (1951). The Exploration of Space. New York: Harper. - Dawn Of A New Era: The Revolutionary Ion Engine That Took Spacecraft To Ceres - Project Daedalus: The Propulsion System Part 1; Theoretical considerations and calculations. 2. REVIEW OF ADVANCED PROPULSION SYSTEMS, archived from the original on 2013-06-28 - General Dynamics Corp. (January 1964). "Nuclear Pulse Vehicle Study Condensed Summary Report (General Dynamics Corp.)" (PDF). U.S. Department of Commerce National Technical Information Service. - Freeman J. Dyson (October 1968). "Interstellar Transport". Physics Today. 21 (10): 41. Bibcode:1968PhT....21j..41D. doi:10.1063/1.3034534. - Cosmos by Carl Sagan - Lenard, Roger X.; Andrews, Dana G. (June 2007). "Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation" (PDF). Acta Astronautica. 61 (1–6): 450–458. Bibcode:2007AcAau..61..450L. doi:10.1016/j.actaastro.2007.01.052. - Friedwardt Winterberg (2010). The Release of Thermonuclear Energy by Inertial Confinement. World Scientific. ISBN 978-981-4295-91-8. - D.F. Spencer; L.D. Jaffe (1963). "Feasibility of Interstellar Travel". Astronautica Acta. 9: 49–58. - PDF C. R. Williams et al., 'Realizing "2001: A Space Odyssey": Piloted Spherical Torus Nuclear Fusion Propulsion', 2001, 52 pages, NASA Glenn Research Center - "Storing antimatter - CERN". home.web.cern.ch. - "ALPHA Stores Antimatter Atoms Over a Quarter of an Hour – and Still Counting - Berkeley Lab". 5 June 2011. - Winterberg, F. (21 August 2012). 
"Matter–antimatter gigaelectron volt gamma ray laser rocket propulsion". Acta Astronautica. 81 (1): 34–39. Bibcode:2012AcAau..81...34W. doi:10.1016/j.actaastro.2012.07.001. Retrieved 25 April 2015. - Landis, Geoffrey A. (29 August 1994). Laser-powered Interstellar Probe. Conference on Practical Robotic Interstellar Flight. NY University, New York, NY. Archived from the original on 2 October 2013. - A. Bolonkin (2005). Non Rocket Space Launch and Flight. Elsevier. ISBN 978-0-08-044731-5 - Forward, R.L. (1984). "Roundtrip Interstellar Travel Using Laser-Pushed Lightsails". J Spacecraft. 21 (2): 187–195. Bibcode:1984JSpRo..21..187F. doi:10.2514/3.8632. - "Alpha Centauri: Our First Target for Interstellar Probes" – via go.galegroup.com. - Andrews, Dana G.; Zubrin, Robert M. (1990). "Magnetic Sails and Interstellar Travel" (PDF). Journal of the British Interplanetary Society. 43: 265–272. Archived from the original (PDF) on 2014-10-12. Retrieved 2014-10-08. - Zubrin, Robert; Martin, Andrew (1999-08-11). "NIAC Study of the Magnetic Sail" (PDF). Retrieved 2014-10-08. - Landis, Geoffrey A. (2003). "The Ultimate Exploration: A Review of Propulsion Concepts for Interstellar Flight". In Yoji Kondo; Frederick Bruhweiler; John H. Moore, Charles Sheffield. Interstellar Travel and Multi-Generation Space Ships. Apogee Books. p. 52. ISBN 1-896522-99-8. - Heller, René; Hippke, Michael; Kervella, Pierre (2017). "Optimized trajectories to the nearest stars using lightweight high-velocity photon sails". The Astronomical Journal. 154 (3): 115. arXiv: . Bibcode:2017AJ....154..115H. doi:10.3847/1538-3881/aa813f. - Roger X. Lenard; Ronald J. Lipinski (2000). "Interstellar rendezvous missions employing fission propulsion systems". AIP Conference Proceedings. 504: 1544–1555. - Crawford, Ian A. (1995). "Some thoughts on the implications of faster-than-light interstellar space travel". Quarterly Journal of the Royal Astronomical Society. 36: 205–218. Bibcode:1995QJRAS..36..205C. - Feinberg, G. (1967). "Possibility of faster-than-light particles". Physical Review. 159 (5): 1089–1105. Bibcode:1967PhRv..159.1089F. doi:10.1103/physrev.159.1089. - Alcubierre, Miguel (1994). "The warp drive: hyper-fast travel within general relativity". Classical and Quantum Gravity. 11 (5): L73–L77. arXiv: . Bibcode:1994CQGra..11L..73A. doi:10.1088/0264-9381/11/5/001. Retrieved 2015-09-01. - "Are Black Hole Starships Possible?", Louis Crane, Shawn Westmoreland, 2009 - Chown, Marcus (25 November 2009). "Dark power: Grand designs for interstellar travel". New Scientist (2736). (subscription required) - A Black Hole Engine That Could Power Spaceships. Tim Barribeau, November 4, 2009. - "Ideas Based On What We'd Like To Achieve: Worm Hole transportation". NASA Glenn Research Center. - John G. Cramer; Robert L. Forward; Michael S. Morris; Matt Visser; Gregory Benford; Geoffrey A. Landis (15 March 1995). "Natural Wormholes as Gravitational Lenses". Physical Review D. 51 (3117): 3117–3120. arXiv: . Bibcode:1995PhRvD..51.3117C. doi:10.1103/PhysRevD.51.3117. - Visser, M. (1995). Lorentzian Wormholes: from Einstein to Hawking. AIP Press, Woodbury NY. ISBN 1-56396-394-9. - Gilster, Paul (April 1, 2007). "A Note on the Enzmann Starship". Centauri Dreams. - "Icarus Interstellar – Project Hyperion". Retrieved 13 April 2013. - http://www.grc.nasa.gov/WWW/bpp "Breakthrough Propulsion Physics" project at NASA Glenn Research Center, Nov 19, 2008 - http://www.nasa.gov/centers/glenn/technology/warp/warp.html Warp Drive, When? 
Breakthrough Technologies January 26, 2009 - "Archived copy". Archived from the original on 2009-03-27. Retrieved 2009-04-03. Malik, Tariq, "Sex and Society Aboard the First Starships." Science Tuesday, Space.com March 19, 2002. - "Dr. Harold "Sonny" White – Icarus Interstellar". icarusinterstellar.org. Archived from the original on 1 June 2015. Retrieved 12 June 2015. - "Icarus Interstellar – A nonprofit foundation dedicated to achieving interstellar flight by 2100". icarusinterstellar.org. Retrieved 12 June 2015. - Moskowitz, Clara (17 September 2012). "Warp Drive May Be More Feasible Than Thought, Scientists Say". space.com. - Forward, R. L. (May–June 1985). "Starwisp – An ultra-light interstellar probe". Journal of Spacecraft and Rockets. 22 (3): 345–350. Bibcode:1985JSpRo..22..345F. doi:10.2514/3.25754. - Benford, James; Benford, Gregory. "Near-Term Beamed Sail Propulsion Missions: Cosmos-1 and Sun-Diver" (PDF). Department of Physics, University of California, Irvine. Archived from the original (PDF) on 2014-10-24. - "Breakthrough Starshot". Breakthrough Initiatives. 12 April 2016. Retrieved 2016-04-12. - Starshot – Concept. - "Breakthrough Initiatives". breakthroughinitiatives.org. - "Map". 100yss.org. - Webpole Bt. "Initiative For Interstellar Studies". i4is.org. Retrieved 12 June 2015. - "Home". fourthmillenniumfoundation.org. Retrieved 12 June 2015. - "Space Habitat Cooperative". Space Habitat Cooperative. Retrieved 12 June 2015. - O’Neill, Ian (Aug 19, 2008). "Interstellar travel may remain in science fiction". Universe Today. - Odenwald, Sten (April 2, 2015). "Interstellar travel: Where should we go?". Huffington Post Blog. - Kulkarni, Neeraj; Lubin, Philip; Zhang, Qicheng (2017). "Relativistic Spacecraft Propelled by Directed Energy". The Astronomical Journal. 155 (4): 155. arXiv: . Bibcode:2018AJ....155..155K. doi:10.3847/1538-3881/aaafd2. - Gros, Claudius (5 September 2016). "Developing ecospheres on transiently habitable planets: the genesis project". Astrophysics and Space Science. 361 (10). arXiv: . Bibcode:2016Ap&SS.361..324G. doi:10.1007/s10509-016-2911-0. - How to Jumpstart Life Elsewhere in Our Galaxy, The Atlantic, 08-25-17. - Should we seed life through the cosmos using laser-driven ships?, New Scientist, 11-13-17. - "NASA Press Release Feb 22nd 2017". - Crawford, Ian A. (1990). "Interstellar Travel: A Review for Astronomers" (PDF). Quarterly Journal of the Royal Astronomical Society. 31: 377–400. Bibcode:1990QJRAS..31..377C. - Hein, A.M. (September 2012). "Evaluation of Technological-Social and Political Projections for the Next 100-300 Years and the Implications for an Interstellar Mission". Journal of the British Interplanetary Society. 33 (09/10): 330–340. Bibcode:2012JBIS...65..330H. - Long, Kelvin (2012). Deep Space Propulsion: A Roadmap to Interstellar Flight. Springer. ISBN 978-1-4614-0606-8. - Mallove, Eugene (1989). The Starflight Handbook. John Wiley & Sons, Inc. ISBN 0-471-61912-4. - Odenwald, Sten (2015). Interstellar Travel: An Astronomer's Guide. CreateSpace/Amazon.com. ISBN 978-1-5120-5627-3. - Woodward, James (2013). Making Starships and Stargates: The Science of Interstellar Transport and Absurdly Benign Wormholes. Springer. ISBN 978-1-4614-5622-3. - Zubrin, Robert (1999). Entering Space: Creating a Spacefaring Civilization. Tarcher / Putnam. ISBN 1-58542-036-0. 
- Leonard David – Reaching for interstellar flight (2003) – MSNBC (MSNBC Webpage) - NASA Breakthrough Propulsion Physics Program (NASA Webpage) - Bibliography of Interstellar Flight (source list) - DARPA seeks help for interstellar starship - How to build a starship – and why we should start thinking about it now (Article from The Conversation, 2016)
The frontispiece of Sir Henry Billingsley's first English version of Euclid's Elements, 1570.

Author: Euclid (and translators) | Language: Ancient Greek (and translations) | Subject: Euclidean geometry, elementary number theory | Date: circa 300 BC | Pages: 13 books (more in translation, with scholia)

Euclid's Elements (Ancient Greek: Στοιχεῖα Stoicheia) is a mathematical and geometric treatise consisting of 13 books attributed to the ancient Greek mathematician Euclid in Alexandria, Ptolemaic Egypt, circa 300 BC. It is a collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. The books cover Euclidean geometry and the ancient Greek version of elementary number theory. The work also includes an algebraic system that has become known as geometric algebra, which is powerful enough to solve many algebraic problems, including the problem of finding the square root of a number. The Elements is the second-oldest extant Greek mathematical treatise after Autolycus' On the Moving Sphere, and it is the oldest extant axiomatic deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science.

According to Proclus, the term "element" was used to describe a theorem that is all-pervading and helps furnish proofs of many other theorems. In Greek, the word for element is the same as the word for letter. This suggests that theorems in the Elements should be seen as standing in the same relation to geometry as letters to language. Later commentators give a slightly different meaning to the term element, emphasizing how the propositions have progressed in small steps and continued to build on previous propositions in a well-defined order.

Euclid's Elements has been referred to as the most successful and influential textbook ever written. First set in type in Venice in 1482, it is one of the very earliest mathematical works to be printed after the invention of the printing press, and it was estimated by Carl Benjamin Boyer to be second only to the Bible in the number of editions published, with the number reaching well over one thousand. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through other school textbooks, did it cease to be considered something all educated people had read.

Basis in earlier work

Scholars believe that the Elements is largely a collection of theorems proven by other mathematicians, supplemented by some original work. Proclus (412–485 AD), a Greek mathematician who lived around seven centuries after Euclid, wrote in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors".
Pythagoras (circa 570–495 BCE) was probably the source for most of books I and II, Hippocrates of Chios (circa 470–410 BCE, not the better known Hippocrates of Kos) for book III, and Eudoxus of Cnidus (circa 408–355 BC) for book V, while books IV, VI, XI, and XII probably came from other Pythagorean or Athenian mathematicians. The Elements may have been based on an earlier textbook by Hippocrates of Chios, who also may have originated the use of letters to refer to figures. Transmission of the text In the fourth century AD, Theon of Alexandria produced an edition of Euclid which was so widely used that it became the only surviving source until François Peyrard's 1808 discovery at the Vatican of a manuscript not derived from Theon's. This manuscript, the Heiberg manuscript, is from a Byzantine workshop around 900 and is the basis of modern editions. Papyrus Oxyrhynchus 29 is a tiny fragment of an even older manuscript, but only contains the statement of one proposition. Although known to, for instance, Cicero, no record exists of the text having been translated into Latin prior to Boethius in the fifth or sixth century. The Arabs received the Elements from the Byzantines around 760; this version was translated into Arabic under Harun al Rashid circa 800. The Byzantine scholar Arethas commissioned the copying of one of the extant Greek manuscripts of Euclid in the late ninth century. Although known in Byzantium, the Elements was lost to Western Europe until about 1120, when the English monk Adelard of Bath translated it into Latin from an Arabic translation. The first printed edition appeared in 1482 (based on Campanus of Novara's 1260 edition), and since then it has been translated into many languages and published in about a thousand different editions. Theon's Greek edition was recovered in 1533. In 1570, John Dee provided a widely respected "Mathematical Preface", along with copious notes and supplementary material, to the first English edition by Henry Billingsley. Copies of the Greek text still exist, some of which can be found in the Vatican Library and the Bodleian Library in Oxford. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been made about the contents of the original text (copies of which are no longer available). Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses are conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text. Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or further study. The Elements is still considered a masterpiece in the application of logic to mathematics. In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work. Mathematicians and philosophers, such as Thomas Hobbes, Baruch Spinoza, Alfred North Whitehead, and Bertrand Russell, have attempted to create their own foundational "Elements" for their respective disciplines, by adopting the axiomatized deductive structures that Euclid's work introduced. 
The austere beauty of Euclidean geometry has been seen by many in western culture as a glimpse of an otherworldly system of perfection and certainty. Abraham Lincoln kept a copy of Euclid in his saddlebag, and studied it late at night by lamplight; he related that he said to himself, "You never can make a lawyer if you do not understand what demonstrate means; and I left my situation in Springfield, went home to my father's house, and stayed there till I could give any proposition in the six books of Euclid at sight". Edna St. Vincent Millay wrote in her sonnet "Euclid alone has looked on Beauty bare", "O blinding hour, O holy, terrible day, When first the shaft into his vision shone Of light anatomized!". Einstein recalled a copy of the Elements and a magnetic compass as two gifts that had a great influence on him as a boy, referring to his copy of Euclid as the "holy little geometry book".

The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about 2,000 years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remain the cornerstone of mathematics.

Outline of Elements

Contents of the books

Books 1 through 4 deal with plane geometry:
- Book 1 contains Euclid's 10 axioms (5 named 'postulates'—including the parallel postulate—and 5 named 'common notions') and the basic propositions of geometry: the pons asinorum (proposition 5), the Pythagorean theorem (proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area).
- Book 2 is commonly called the "book of geometric algebra" because most of the propositions can be seen as geometric interpretations of algebraic identities, such as a(b + c + ...) = ab + ac + ... or (2a + b)² + b² = 2(a² + (a + b)²). It also contains a method of finding the square root of a given number.
- Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point, Thales' theorem.
- Book 4 constructs the incircle and circumcircle of a triangle, and constructs regular polygons with 4, 5, 6, and 15 sides.
- Book 5 is a treatise on proportions of magnitudes. Proposition 25 has as a special case the inequality of arithmetic and geometric means.
- Book 6 applies proportions to geometry: similar figures.
- Book 7 deals strictly with elementary number theory: divisibility, prime numbers, Euclid's algorithm for finding the greatest common divisor, least common multiple. Propositions 30 and 32 together are essentially equivalent to the fundamental theorem of arithmetic, stating that every positive integer can be written as a product of primes in an essentially unique way, though Euclid would have had trouble stating it in this modern form as he did not use the product of more than 3 numbers. (A short modern sketch of these number-theoretic results appears further below.)
- Book 8 deals with proportions in number theory and geometric sequences.
- Book 9 applies the results of the preceding two books and gives the infinitude of prime numbers (proposition 20), the sum of a geometric series (proposition 35), and the construction of even perfect numbers (proposition 36).
- Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration. Books 11 through to 13 deal with spatial geometry: - Book 11 generalizes the results of books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds. - Book 12 studies volumes of cones, pyramids, and cylinders in detail and shows, for example, that the volume of a cone is a third of the volume of the corresponding cylinder. It concludes by showing that the volume of a sphere is proportional to the cube of its radius (in modern language) by approximating its volume by a union of many pyramids. - Book 13 constructs the five regular Platonic solids inscribed in a sphere, calculates the ratio of their edges to the radius of the sphere, and proves that there are no further regular solids. Euclid's method and style of presentation Many of Euclid's propositions were constructive, demonstrating the existence of some figure by detailing the steps he used to construct the object using a compass and straightedge. His constructive approach appears even in his geometry's postulates, as the first and third postulates stating the existence of a line and circle are constructive. Instead of stating that lines and circles exist per his prior definitions, he states that it is possible to 'construct' a line and circle. It also appears that, for him to use a figure in one of his proofs, he needs to construct it in an earlier proposition. For example, he proves the Pythagorean theorem by first inscribing a square on the sides of a right triangle, but only after constructing a square on a given line one proposition earlier. As was common in ancient mathematical texts, when a proposition needed proof in several different cases, Euclid often proved only one of them (often the most difficult), leaving the others to the reader. Later editors such as Theon often interpolated their own proofs of these cases. Euclid's presentation was limited by the mathematical ideas and notations in common currency in his era, and this causes the treatment to seem awkward to the modern reader in some places. For example, there was no notion of an angle greater than two right angles, the number 1 was sometimes treated separately from other positive integers, and as multiplication was treated geometrically he did not use the product of more than 3 different numbers. The geometrical treatment of number theory may have been because the alternative would have been the extremely awkward Alexandrian system of numerals. The presentation of each result is given in a stylized form, which, although not invented by Euclid, is recognized as typically classical. It has six different parts: First is the 'enunciation', which states the result in general terms (i.e., the statement of the proposition). Then comes the 'setting-out', which gives the figure and denotes particular geometrical objects by letters. Next comes the 'definition' or 'specification', which restates the enunciation in terms of the particular figure. Then the 'construction' or 'machinery' follows. Here, the original figure is extended to forward the proof. Then, the 'proof' itself follows. Finally, the 'conclusion' connects the proof to the enunciation by stating the specific conclusions drawn in the proof, in the general terms of the enunciation. 
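The number-theoretic results summarized in the book outline above translate directly into modern code. The following short sketch (an illustration added here, not part of the original text) implements Euclid's greatest-common-divisor algorithm from Book 7 and checks the even-perfect-number construction of Book 9, Proposition 36: if 2^p − 1 is prime, then 2^(p−1)(2^p − 1) is perfect.

```python
# Euclid's algorithm (Elements, Book 7) and even perfect numbers (Book 9, Prop. 36).

def gcd(a: int, b: int) -> int:
    """Greatest common divisor by repeated remainders, as in Book 7."""
    while b:
        a, b = b, a % b
    return a

def is_prime(n: int) -> bool:
    """Trial division; sufficient for the small Mersenne exponents used below."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_perfect(n: int) -> bool:
    """A number is perfect when it equals the sum of its proper divisors."""
    return n == sum(d for d in range(1, n) if n % d == 0)

print(gcd(1071, 462))  # 21

# Book 9, Prop. 36: if 2**p - 1 is prime, then 2**(p - 1) * (2**p - 1) is perfect.
for p in (2, 3, 5, 7):
    mersenne = 2 ** p - 1
    if is_prime(mersenne):
        candidate = 2 ** (p - 1) * mersenne
        print(p, candidate, is_perfect(candidate))  # yields 6, 28, 496, 8128 -> True
```

The loop reproduces the first four even perfect numbers (6, 28, 496 and 8128), exactly the numbers Euclid's construction guarantees.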
No indication is given of the method of reasoning that led to the result, although the Data does provide instruction about how to approach the types of problems encountered in the first four books of the Elements. Some scholars have tried to find fault in Euclid's use of figures in his proofs, accusing him of writing proofs that depended on the specific figures drawn rather than the general underlying logic, especially concerning Proposition II of Book I. However, Euclid's original proof of this proposition is general, valid, and does not depend on the figure used as an example to illustrate one given configuration.

Euclid's list of axioms in the Elements was not exhaustive, but represented the principles that were the most important. His proofs often invoke axiomatic notions which were not originally presented in his list of axioms. Later editors have interpolated Euclid's implicit axiomatic assumptions in the list of formal axioms. For example, in the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points. Later, in the fourth construction, he used superposition (moving the triangles on top of each other) to prove that if two sides and their angles are equal, then they are congruent; during these considerations he uses some properties of superposition, but these properties are not described explicitly in the treatise. If superposition were considered a valid method of geometric proof, all of geometry would be full of such proofs. For example, propositions I.1 – I.3 can be proved trivially by using superposition. Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."

It was not uncommon in ancient times to attribute to celebrated authors works that were not written by them. It is by these means that the apocryphal books XIV and XV of the Elements were sometimes included in the collection. The spurious Book XIV was probably written by Hypsicles on the basis of a treatise by Apollonius. The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being that of the edge of the cube to the edge of the icosahedron.

The spurious Book XV was probably written, at least in part, by Isidore of Miletus. This book covers topics such as counting the number of edges and solid angles in the regular solids, and finding the measure of dihedral angles of faces that meet at an edge.
- 1460s, Regiomontanus (incomplete) - 1482, Erhard Ratdolt (Venice), first printed edition - 1533, editio princeps by Simon Grynäus - 1557, by Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (only propositions, no full proofs, includes original Greek and the Latin translation) - 1572, Commandinus Latin edition - 1574, Christoph Clavius - 1505, Bartolomeo Zamberti (Latin) - 1543, Niccolò Tartaglia (Italian) - 1557, Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (Greek to Latin) - 1558, Johann Scheubel (German) - 1562, Jacob Kündig (German) - 1562, Wilhelm Holtzmann (German) - 1564–1566, Pierre Forcadel de Béziers (French) - 1570, Henry Billingsley (English) - 1572, Commandinus (Latin) - 1575, Commandinus (Italian) - 1576, Rodrigo de Zamorano (Spanish) - 1594, Typographia Medicea (edition of the Arabic translation of Nasir al-Din al-Tusi) - 1604, Jean Errard de Bar-le-Duc (French) - 1606, Jan Pieterszoon Dou (Dutch) - 1607, Matteo Ricci, Xu Guangqi (Chinese) - 1613, Pietro Cataldi (Italian) - 1615, Denis Henrion (French) - 1617, Frans van Schooten (Dutch) - 1637, L. Carduchi (Spanish) - 1639, Pierre Hérigone (French) - 1651, Heinrich Hoffmann (German) - 1651, Thomas Rudd (English) - 1660, Isaac Barrow (English) - 1661, John Leeke and Geo. Serle (English) - 1663, Domenico Magni (Italian from Latin) - 1672, Claude François Milliet Dechales (French) - 1680, Vitale Giordano (Italian) - 1685, William Halifax (English) - 1689, Jacob Knesa (Spanish) - 1690, Vincenzo Viviani (Italian) - 1694, Ant. Ernst Burkh v. Pirckenstein (German) - 1695, C. J. Vooght (Dutch) - 1697, Samuel Reyher (German) - 1702, Hendrik Coets (Dutch) - 1705, Charles Scarborough (English) - 1708, John Keill (English) - 1714, Chr. Schessler (German) - 1714, W. Whiston (English) - 1720s Jagannatha Samrat (Sanskrit, based on the Arabic translation of Nasir al-Din al-Tusi) - 1731, Guido Grandi (abbreviation to Italian) - 1738, Ivan Satarov (Russian from French) - 1744, Mårten Strömer (Swedish) - 1749, Dechales (Italian) - 1745, Ernest Gottlieb Ziegenbalg (Danish) - 1752, Leonardo Ximenes (Italian) - 1756, Robert Simson (English) - 1763, Pubo Steenstra (Dutch) - 1768, Angelo Brunelli (Portuguese) - 1773, 1781, J. F. Lorenz (German) - 1780, Baruch Schick of Shklov (Hebrew) - 1781, 1788 James Williamson (English) - 1781, William Austin (English) - 1789, Pr. Suvoroff nad Yos. Nikitin (Russian from Greek) - 1795, John Playfair (English) - 1803, H.C. Linderup (Danish) - 1804, F. Peyrard (French) - 1807, Józef Czech (Polish based on Greek, Latin and English editions) - 1807, J. K. F. Hauff (German) - 1818, Vincenzo Flauti (Italian) - 1820, Benjamin of Lesbos (Modern Greek) - 1826, George Phillips (English) - 1828, Joh. Josh and Ign. Hoffmann (German) - 1828, Dionysius Lardner (English) - 1833, E. S. Unger (German) - 1833, Thomas Perronet Thompson (English) - 1836, H. Falk (Swedish) - 1844, 1845, 1859 P. R. Bråkenhjelm (Swedish) - 1850, F. A. A. Lundgren (Swedish) - 1850, H. A. Witt and M. E. Areskong (Swedish) - 1862, Isaac Todhunter (English) - 1865, Sámuel Brassai (Hungarian) - 1873, Masakuni Yamada (Japanese) - 1880, Vachtchenko-Zakhartchenko (Russian) - 1901, Max Simon (German) - 1907, František Servít (Czech) - 1908, Thomas Little Heath (English) - 1939, R. Catesby Taliaferro (English) Currently in print - Euclid's Elements – All thirteen books in one volume, Based on Heath's translation, Green Lion Press ISBN 1-888009-18-7. 
- The Elements: Books I–XIII – Complete and Unabridged, (2006) Translated by Sir Thomas Heath, Barnes & Noble ISBN 0-7607-6312-7. - The Thirteen Books of Euclid's Elements, translation and commentaries by Heath, Thomas L. (1956) in three volumes. Dover Publications. ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3) - Euclid's Elements Redux, contains books I–VI, based on John Casey's translation, Starrhorse ISBN 978-1312110786 - Oliver Byrne (mathematician) who published a color version of Elements in 1847. - Heath (1956) (vol. 1), p. 372 - Heath (1956) (vol. 1), p. 409 - Boyer (1991). "Euclid of Alexandria". p. 101. With the exception of the Sphere of Autolycus, surviving work by Euclid are the oldest Greek mathematical treatises extant; yet of what Euclid wrote more than half has been lost,Missing or empty - Heath (1956) (vol. 1), p. 114 - Encyclopedia of Ancient Greece (2006) by Nigel Guy Wilson, page 278. Published by Routledge Taylor and Francis Group. Quote:"Euclid's Elements subsequently became the basis of all mathematical education, not only in the Roman and Byzantine periods, but right down to the mid-20th century, and it could be argued that it is the most successful textbook ever written." - Boyer (1991). "Euclid of Alexandria". p. 100. As teachers at the school he called a band of leading scholars, among whom was the author of the most fabulously successful mathematics textbook ever written – the Elements (Stoichia) of Euclid.Missing or empty - Boyer (1991). "Euclid of Alexandria". p. 119. The Elements of Euclid not only was the earliest major Greek mathematical work to come down to us, but also the most influential textbook of all times. [...]The first printed versions of the Elements appeared at Venice in 1482, one of the very earliest of mathematical books to be set in type; it has been estimated that since then at least a thousand editions have been published. Perhaps no book other than the Bible can boast so many editions, and certainly no mathematical work has had an influence comparable with that of Euclid's Elements.Missing or empty - The Historical Roots of Elementary Mathematics by Lucas Nicolaas Hendrik Bunt, Phillip S. Jones, Jack D. Bedient (1988), page 142. Dover publications. Quote:"the Elements became known to Western Europe via the Arabs and the Moors. There, the Elements became the foundation of mathematical education. More than 1000 editions of the Elements are known. In all probability, it is, next to the Bible, the most widely spread book in the civilization of the Western world." - From the introduction by Amit Hagar to Euclid and His Modern Rivals by Lewis Carroll (2009, Barnes & Noble) pg. xxviii: Geometry emerged as an indispensable part of the standard education of the English gentleman in the eighteenth century; by the Victorian period it was also becoming an important part of the education of artisans, children at Board Schools, colonial subjects and, to a rather lesser degree, women. ... The standard textbook for this purpose was none other than Euclid's The Elements. - Russell, Bertrand. A History of Western Philosophy. p. 212. - W.W. Rouse Ball, A Short Account of the History of Mathematics, 4th ed., 1908, p. 54 - Ball, p. 38 - The Earliest Surviving Manuscript Closest to Euclid's Original Text (Circa 850); an image of one page - L.D. Reynolds and Nigel G. Wilson, Scribes and Scholars 2nd. ed. (Oxford, 1974) p. 
57 - One older work claims Adelard disguised himself as a Muslim student to obtain a copy in Muslim Córdoba (Rouse Ball, p. 165). However, more recent biographical work has turned up no clear documentation that Adelard ever went to Muslim-ruled Spain, although he spent time in Norman-ruled Sicily and Crusader-ruled Antioch, both of which had Arabic-speaking populations. Charles Burnett, Adelard of Bath: Conversations with his Nephew (Cambridge, 1999); Charles Burnett, Adelard of Bath (University of London, 1987). - Busard, H.L.L. (2005). "Introduction to the Text". Campanus of Novara and Euclid's Elements. I. Stuttgart: Franz Steiner Verlag. ISBN 978-3-515-08645-5. - Henry Ketcham, The Life of Abraham Lincoln, at Project Gutenberg, https://www.gutenberg.org/ebooks/6811 - Dudley Herschbach, "Einstein as a Student," Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA, page 3, web: HarvardChem-Einstein-PDF: about Max Talmud visited on Thursdays for six years. - Hartshorne 2000, p. 18. - Hartshorne 2000, pp. 18–20. - Ball, p. 55 - Ball, pp. 58, 127 - Heath (1963), p. 216 - Ball, p. 54 - Godfried Toussaint, "A new look at Euclid's second proposition," The Mathematical Intelligencer, Vol. 15, No. 3, 1993, pp. 12–23. - Heath (1956) (vol. 1), p. 62 - Heath (1956) (vol. 1), p. 242 - Heath (1956) (vol. 1), p. 249 - Ball (1960) p. 55. - Boyer (1991). "Euclid of Alexandria". pp. 118–119. In ancient times it was not uncommon to attribute to a celebrated author works that were not by him; thus, some versions of Euclid's Elements include a fourteenth and even a fifteenth book, both shown by later scholars to be apocryphal. The so-called Book XIV continues Euclid's comparison of the regular solids inscribed in a sphere, the chief results being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being that of the edge of the cube to the edge of the icosahedron, that is, . It is thought that this book may have been composed by Hypsicles on the basis of a treatise (now lost) by Apollonius comparing the dodecahedron and icosahedron. [...] The spurious Book XV, which is inferior, is thought to have been (at least in part) the work of Isidore of Miletus (fl. ca. A.D. 532), architect of the cathedral of Holy Wisdom (Hagia Sophia) at Constantinople. This book also deals with the regular solids, counting the number of edges and solid angles in the solids, and finding the measures of the dihedral angles of faces meeting at an edge.Missing or empty - Alexanderson & Greenwalt 2012, pg. 163 - K. V. Sarma (1997), Helaine Selin, ed., Encyclopaedia of the history of science, technology, and medicine in non-western cultures, Springer, pp. 460–461, ISBN 978-0-7923-4066-9 - JNUL Digitized Book Repository - available online, second edition 2007 commented by Petr Vopěnka - "Euclid's 'Elements' Redux". Euclid's 'Elements' Redux. starrhorse. Retrieved 13 September 2015. - Alexanderson, Gerald L.; Greenwalt, William S. (2012), "About the cover: Billingsley's Euclid in English", Bulletin (New Series) of the American Mathematical Society, 49 (1): 163–167 - Artmann, Benno: Euclid – The Creation of Mathematics. New York, Berlin, Heidelberg: Springer 1999, ISBN 0-387-98423-2 - Ball, W.W. Rouse (1960). A Short Account of the History of Mathematics (4th ed. [Reprint. Original publication: London: Macmillan & Co., 1908] ed.). New York: Dover Publications. pp. 50–62. ISBN 0-486-20630-0. 
- Hartshorne, Robin (2000). Geometry: Euclid and Beyond (2nd ed.). New York, NY: Springer. ISBN 9780387986500. - Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements (2nd ed. [Facsimile. Original publication: Cambridge University Press, 1925] ed.). New York: Dover Publications. - (3 vols.): ISBN 0-486-60088-2 (vol. 1), ISBN 0-486-60089-0 (vol. 2), ISBN 0-486-60090-4 (vol. 3). Heath's authoritative translation plus extensive historical research and detailed commentary throughout the text. - Heath, Thomas L. (1963). A Manual of Greek Mathematics. Dover Publications. ISBN 978-0-486-43231-1. - Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons, Inc. ISBN 0-471-54397-7. |Wikiquote has quotations related to: Euclid's Elements| |Wikisource has original text related to this article:| |Wikimedia Commons has media related to Elements of Euclid.| - Multilingual edition of Elementa in the Bibliotheca Polyglotta - Euclid (1997) [c. 300 BC]. David E. Joyce, ed. "Elements". Retrieved 2006-08-30. In HTML with Java-based interactive figures. - Euclid's Elements in English and Greek (PDF), utexas.edu - Richard Fitzpatrick a bilingual edition (typset in PDF format, with the original Greek and an English translation on facing pages; free in PDF form, available in print) ISBN 978-0-615-17984-1 - Heath's English translation (HTML, without the figures, public domain) (accessed February 4, 2010) - Oliver Byrne's 1847 edition (also hosted at archive.org)– an unusual version by Oliver Byrne (mathematician) who used color rather than labels such as ABC (scanned page images, public domain) - The First Six Books of the Elements by John Casey and Euclid scanned by Project Gutenberg. - Reading Euclid – a course in how to read Euclid in the original Greek, with English translations and commentaries (HTML with figures) - Sir Thomas More's manuscript - Latin translation by Aethelhard of Bath - Euclid Elements – The original Greek text Greek HTML - Clay Mathematics Institute Historical Archive – The thirteen books of Euclid's Elements copied by Stephen the Clerk for Arethas of Patras, in Constantinople in 888 AD - Kitāb Taḥrīr uṣūl li-Ūqlīdis Arabic translation of the thirteen books of Euclid's Elements by Nasīr al-Dīn al-Ṭūsī. Published by Medici Oriental Press(also, Typographia Medicea). Facsimile hosted by Islamic Heritage Project. - Euclid's Elements Redux, an open textbook based on the Elements - 1607 Chinese translations reprinted as part of Siku Quanshu, or "Complete Library of the Four Treasuries."
Brigitte Castille | November 21, 2020

Fifth Grade Math Curriculum: What Students Will Learn. Common Core Math Standards for 5th-grade students cover writing and interpreting numerical expressions; analyzing patterns and relationships; understanding the place-value system; performing operations with multi-digit whole numbers and decimals to the hundredths; and using equivalent fractions as a strategy to add and subtract fractions.

5th grade math worksheets – Printable PDF activities for math practice. This is a suitable resource page for fifth graders, teachers and parents. These math sheets can be printed as extra teaching material for teachers, extra math practice for kids, or as homework material parents can use. Free 5th Grade Math Word Problems Worksheets (PDF) for topics including estimating, rounding, fractions, and decimals. For all Grade 5 math teachers and parents. Enjoy!

This page hosts a vast collection of multiplication word problems for 3rd grade, 4th grade, and 5th grade kids, based on real-life scenarios, practical applications, interesting facts, and vibrant themes. Featured here are various word problems ranging from basic single-digit multiplication to two-digit and three-digit multiplication.

Math word problem worksheets for grade 2. These word problem worksheets place 2nd grade math concepts in a context that grade 2 students can relate to. We provide math word problems for addition, subtraction, multiplication, time, money and fractions. Grade 2 math worksheets help children understand concepts better and apply them. Number and place value, along with the basics of multiplication and division, are just some of the concepts solidified at this stage. In addition to sufficient practice, 2nd grade math worksheets help them develop a problem-solving mindset.

Make practicing math fun with these innovative and seasonal free 2nd grade math worksheets and math games to learn addition, subtraction, multiplication, measurement, graphs, shapes, telling time, adding money, fractions, and skip counting by 3s, 4s, 6s, 7s, 8s, 9s, 11s, 12s, and other second grade math skills.

Fraction word problems, 6th grade. The fraction word problems 6th grade students have to master are challenging because of the complexity of the problems and the complexity of doing operations with fractions. While it is fun to stump your child with these problems, the real goal is to get your child as comfortable as possible with these processes.

Math word problem worksheets for grade 5. These worksheets provide students with real-world word problems that students can solve with grade 5 math concepts. Our word problems worksheets cover addition, subtraction, multiplication, division, fractions, decimals, measurement (volume, mass and length), GCF / LCM, and variables and expressions.
Fraction Worksheets and Printables

Steps for Solving/Modeling Fraction Word Problems: 1) Read and annotate what you know. 2) Determine what the question is asking you. 3) Pick an appropriate strategy to solve. 4) Solve using the standard algorithms. 5) Recontextualize your answer in the context of the problem and include units.

Fraction Word Problems – Examples and step-by-step solutions of word problems using block models (tape diagrams): solve a problem involving fractions of fractions and fractions of remaining parts, and how to solve a four-step fraction word problem using tape diagrams, for grade 5, grade 6, and grade 7.

Addition Word Problems: Unlike Fractions. These worksheets have word problems with unlike fractions. Fraction word problems enable students to understand the use of fractions in real-life situations. Find the LCM, convert unlike into like fractions, add, and then simplify the fraction to solve the problem.

Improve your students' math skills and help them learn how to calculate fractions, percentages, and more with these word problems. The exercises are designed for students in the seventh grade, but anyone who wants to get better at math will find them useful. The sections below contain two word-problem worksheets for students, in sections 1 and 3.

Grade 6 Maths Word Problems With Answers. Grade 6 maths word problems with answers are presented. Some of these problems are challenging and need more time to solve. Also, detailed solutions and full explanations are included. Two numbers N and 16 have LCM = 48 and GCF = 8. Find N. If the area of a circle is 81π square feet, find its circumference. (Worked solutions to these two sample problems are sketched at the end of this section.)

Math worksheets for grade 2. Whatever the case, our second grade math worksheets are designed to teach, challenge, and boost the confidence of budding mathematicians. And thanks to second grade math worksheets that feature cute, colorful characters and eye-catching graphics, practicing this vital skill just got a lot more fun.

KidZone Grade 2: Alphabet recognition and printing practice; consonant recognition and printing practice; printing practice – tracer pages. Math: dynamic grade two math worksheets – use this section to generate an unlimited supply of different addition and subtraction worksheets for your kids; dynamic grade two word problems; math times tables. Free pre-made elementary math worksheets to print, complete online, and customize.

Free grade 2 math worksheets. Our grade 2 math worksheets emphasize numeracy as well as a conceptual understanding of math concepts. All worksheets are printable PDF documents.

5th grade math word problems worksheets (PDF). These word problems worksheets are appropriate for 4th Grade, 5th Grade, 6th Grade, and 7th Grade. U.S.
Money Change from a Purchase Multiplication Word Problems These Word Problems Worksheets will produce problems that ask students to use multiplication to calculate the monetary value of a purchase and then find how much change is given from. Word problems are emphasized for a deeper understanding of how math works, along with reinforcing basic math facts. The enrichment math pages will easily complement your existing math program and can be used every week to build the children’s math skills and problem-solving strategies. Free Multi-Step 4th Grade Math Word Problems PDF Are you looking for engaging multi-step 4th grade math word problems with answers to add to your upcoming lesson plans? The following collection of free 4th grade maths word problems worksheets cover topics including addition, subtraction, multiplic 160 WORD PROBLEMS 5th Grade CCSS Math Aligned in 2 complete versions. Student ready worksheets filled with 160 word problems covering all 5th grade CCSS word problem standards. Version One: ”Page of the Week” format with four problems per page for easy weekly practice. Easy format to copy and ma
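For reference, here is a brief worked solution to the two Grade 6 sample problems quoted above; the problems come from the text, while the solutions are added here purely for illustration. For the LCM/GCF problem, the product of two numbers equals the product of their LCM and GCF, so N × 16 = 48 × 8 = 384, giving N = 384 ÷ 16 = 24 (check: LCM(24, 16) = 48 and GCF(24, 16) = 8). For the circle problem, the area is πr² = 81π square feet, so r = 9 ft and the circumference is 2πr = 18π ≈ 56.5 ft.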
After World War II, communism spread for several reasons, driven by ideological, geopolitical, and socioeconomic factors. The defeat of Nazi Germany and imperial Japan marked a turning point in global politics, leading to a reshuffling of power dynamics and ideological struggles that facilitated the expansion of communism. One of the primary catalysts for the spread of communism was the rise of the Soviet Union as a superpower in the aftermath of the war. The Soviet Union’s victory over Nazi Germany and its role in liberating Eastern Europe from fascist occupation elevated its status on the world stage, positioning it as a champion of anti-fascism and liberation movements. The Soviet Union’s influence extended beyond its borders as it sought to export its socialist model and establish satellite states in Eastern Europe, known as the Eastern Bloc. Furthermore, the devastation caused by World War II and the subsequent process of decolonization created fertile ground for communist ideologies to take root. In war-ravaged Europe, widespread poverty, social upheaval, and disillusionment with capitalist systems fueled support for socialist and communist parties, particularly among working-class and marginalized populations. Communist parties capitalized on these grievances and mobilized support through promises of social justice, economic equality, and national liberation. In Asia, the defeat of colonial powers and the emergence of nationalist movements allowed communist leaders to spearhead anti-imperialist struggles and revolutionary movements. Leaders like Mao Zedong in China, Ho Chi Minh in Vietnam, and Kim Il-sung in Korea successfully galvanized nationalist sentiments and mobilized peasant populations against colonial and capitalist powers. The victory of the Chinese Communist Party in 1949 and the establishment of the People’s Republic of China further emboldened communist movements across Asia and beyond. Additionally, the Cold War rivalry between the United States and the Soviet Union fueled the spread of communism as both superpowers sought to expand their spheres of influence and promote their respective ideologies. The United States’ containment policy aimed to curb the spread of communism and preserve capitalist systems, leading to interventions, covert operations, and support for anti-communist regimes worldwide. Meanwhile, the Soviet Union provided military and ideological support to communist movements, further exacerbating global tensions and fueling proxy conflicts in various regions. In summary, the spread of communism after World War II was a complex and multifaceted phenomenon driven by ideological zeal, geopolitical rivalries, and socioeconomic factors. The aftermath of the war created fertile ground for communist ideologies to take root, leading to the establishment of communist regimes in Eastern Europe, Asia, Africa, and Latin America. However, the spread of communism also sparked intense opposition from Western powers, leading to a protracted ideological struggle and shaping global politics for decades. The Sino-Japanese War, spanning from 1937 to 1945, was a pivotal conflict that not only shaped the course of East Asian history but also became a significant battleground in the broader ideological struggle between communism and capitalism. Fueled by Japanese imperial expansionism and Chinese resistance to foreign aggression, the war escalated into a protracted conflict characterized by brutal warfare, widespread atrocities, and immense human suffering. 
The clash between the Communist forces led by Mao Zedong and the nationalist Kuomintang regime under Chiang Kai-shek added a layer of complexity to the conflict as both sides vied for control over China's future. Although the two sides initially cooperated against the common enemy of Japanese imperialism, their ideological differences led to intermittent clashes and power struggles throughout the war. The Communist forces capitalized on the opportunity to expand their influence in rural areas, implementing land reforms and garnering popular support among the peasantry. Meanwhile, the Nationalist government struggled to control its territory and suffered from internal corruption and inefficiency. Even after Japan's surrender ended World War II, the rivalry forged during the Sino-Japanese War laid the groundwork for the Chinese Civil War, culminating in the Communist victory in 1949 and the establishment of the People's Republic of China.

Mao Zedong, born on December 26, 1893, in Hunan Province, China, emerged as one of the most influential figures in modern Chinese history. Raised in a peasant family, Mao received a traditional Confucian education before becoming involved in revolutionary activities during his early adulthood. He joined the Chinese Communist Party (CCP) in the 1920s and quickly rose through its ranks due to his organizational skills and revolutionary fervor. Mao became known for his theories on peasant-based revolution and guerrilla warfare, which guided the CCP's strategy during the Chinese Civil War against the ruling Nationalist government. Despite setbacks and internal power struggles, Mao's leadership ultimately led to the CCP's victory in 1949 and the establishment of the People's Republic of China. As Chairman of the Communist Party and the country's paramount leader, Mao implemented sweeping social and economic reforms, including the Great Leap Forward and the Cultural Revolution, which had profound and often devastating effects on Chinese society. Mao's policies and leadership left an indelible mark on China's history, shaping its trajectory into the modern era.

The Communist Revolution in China, culminating in the establishment of the People's Republic of China (PRC) in 1949, stands as one of the most transformative events of the 20th century. Led by Mao Zedong and the Chinese Communist Party, the revolution marked the end of centuries of imperial rule and the beginning of a new era for China. The roots of the revolution can be traced back to the early 20th century, when China was plagued by internal turmoil, foreign imperialism, and socioeconomic inequality. The collapse of the Qing Dynasty in 1911 and the subsequent power struggles paved the way for the rise of competing political forces, including the CCP, which emerged as a leading voice for revolutionary change. The revolution gained momentum in the aftermath of World War II and during the Chinese Civil War, as the CCP, with strong support from rural peasants and marginalized groups, waged a protracted struggle against the ruling Nationalist government led by Chiang Kai-shek. The CCP's ability to mobilize mass support, implement land reforms, and effectively combat Japanese invaders during the Second Sino-Japanese War bolstered its legitimacy and appeal. Additionally, widespread corruption, inflation, and social unrest undermined the Nationalist government's credibility, further fueling support for the communist cause.
The turning point of the revolution came in 1949, when the CCP emerged victorious in the Chinese Civil War, culminating in the proclamation of the People's Republic of China on October 1. In proclaiming "The Chinese people have stood up," Mao Zedong announced the dawn of a new era of socialist transformation and national rejuvenation. The establishment of the PRC marked the consolidation of communist rule in China and the beginning of sweeping social, economic, and political reforms aimed at building a socialist society.

The Communist Revolution in China had profound and far-reaching consequences, reshaping the geopolitical landscape of Asia and the world. The PRC's emergence as a major global power and its commitment to socialism and self-reliance challenged the dominance of Western capitalist powers and inspired anti-colonial and anti-imperialist movements worldwide. However, the revolution also brought about immense human suffering, including political purges, mass campaigns, and economic upheavals, particularly during Mao's radical policies such as the Great Leap Forward and the Cultural Revolution. Nevertheless, the Communist Revolution in China remains a defining moment in modern Chinese history, symbolizing the triumph of the oppressed masses over imperialist exploitation and feudal oppression and the emergence of a new socialist China.

The Great Leap Forward, launched by Mao Zedong in 1958, was one of modern Chinese history's most ambitious yet disastrous socioeconomic campaigns. Designed to transform China from an agrarian society into an industrial powerhouse, the Great Leap Forward aimed to achieve rapid economic growth and social progress through collectivization, communal farming, and mass mobilization. The campaign promoted the formation of communes, large-scale infrastructure projects, and backyard steel furnaces to surpass the industrial output of Western powers within a short timeframe. However, the Great Leap Forward quickly descended into chaos and catastrophe, resulting in widespread famine, economic collapse, and human suffering. The campaign's emphasis on quantity over quality led to unrealistic production targets, exaggerated reports, and widespread mismanagement. Agricultural measures, such as the forced collectivization of farms and the mass extermination of sparrows as pests, disrupted traditional farming practices and resulted in catastrophic crop failures and famine. Millions of people died from starvation, malnutrition, and disease, making the Great Leap Forward one of the deadliest man-made disasters in history.

The failure of the Great Leap Forward had far-reaching consequences for China, shaking the legitimacy of the Chinese Communist Party and exposing the flaws of Mao's radical policies. The campaign's disastrous outcomes also contributed to a shift in Chinese leadership, leading to the rise of pragmatists like Deng Xiaoping, who favored gradual economic reforms and opening up to the outside world. Despite its monumental failures, the Great Leap Forward remains a controversial and painful chapter in Chinese history. It is a cautionary tale about the dangers of utopian ideologies, centralized planning, and the human cost of misguided policies.

Land and Resource Distribution

Movements to redistribute land and resources in Asia, Africa, and Latin America were deeply influenced by communism and socialism. These ideologies provided frameworks for challenging entrenched systems of inequality and exploitation.
Inspired by Marxist principles of class struggle and social transformation, these movements sought to mobilize the masses and promote revolutionary change through collective action and state intervention.

In Asia, communist and socialist parties played prominent roles in advocating for land redistribution and agrarian reforms as part of broader anti-colonial and nationalist struggles. Leaders such as Mao Zedong in China and Ho Chi Minh in Vietnam embraced Marxist-Leninist ideology to mobilize peasant movements and challenge feudal landownership systems. Similarly, in India, socialist thinkers like Ram Manohar Lohia and Jayaprakash Narayan advocated for land reforms to empower the rural poor and build a more equitable society.

In Africa, communist and socialist ideologies were instrumental in shaping nationalist movements and post-colonial governance structures. Leaders such as Kwame Nkrumah in Ghana and Julius Nyerere in Tanzania embraced socialist principles to promote economic self-reliance and social welfare programs, including land redistribution initiatives. In South Africa, the African National Congress (ANC) drew inspiration from socialist ideals in its struggle against apartheid, advocating for land reforms and economic justice for the black majority.

In Latin America, communist and socialist parties played key roles in organizing peasant movements and advocating for land redistribution as part of broader struggles against imperialism and economic exploitation. Leaders such as Fidel Castro in Cuba and Salvador Allende in Chile implemented agrarian reforms to challenge the dominance of wealthy landowners and promote rural development. In countries like Nicaragua and El Salvador, socialist guerrilla movements fought against oppressive regimes and sought to redistribute land to landless peasants.

Overall, communism and socialism provided ideological frameworks and organizational structures that galvanized movements to redistribute land and resources in Asia, Africa, and Latin America. While these movements faced significant challenges and often encountered resistance from entrenched interests, they contributed to significant social and economic transformations, empowering marginalized communities and challenging systems of inequality and exploitation.

The communist revolution in Vietnam was pivotal in the country's struggle for independence and liberation from colonial rule. Led by Ho Chi Minh and the Vietnamese Communist Party, the revolution was deeply rooted in nationalist sentiment, anti-imperialist fervor, and Marxist-Leninist ideology. Following decades of French colonial domination and exploitation, Ho Chi Minh and his comrades mobilized popular support for a revolutionary movement to overthrow colonial rule and establish an independent socialist state. The Viet Minh, or League for the Independence of Vietnam, emerged as the vanguard of the anti-colonial struggle, drawing support from various social classes, including peasants, workers, and intellectuals. Under Ho Chi Minh's leadership, the Viet Minh waged a protracted guerrilla war against French colonial forces, employing guerrilla tactics, mass mobilization, and political organizing to undermine colonial control and galvanize nationalist sentiment. The climax of the communist revolution came in 1954 with the decisive victory over French forces at the Battle of Dien Bien Phu, which marked the end of French colonial rule in Indochina.
The Geneva Accords of 1954 subsequently divided Vietnam into two separate states along the 17th parallel, with Ho Chi Minh's Democratic Republic of Vietnam (North Vietnam) governing the northern region, while the South remained under the control of the French-backed State of Vietnam. However, the division of Vietnam proved to be a temporary arrangement, as the communist revolutionaries in the North remained committed to reunifying the country under their rule. The ensuing Vietnam War, which pitted North Vietnam and the communist Viet Cong guerrillas against the U.S.-backed government of South Vietnam, became a focal point of Cold War tensions and a symbol of anti-imperialist resistance. Ultimately, the communist revolution in Vietnam achieved its objective with the reunification of the country under communist rule in 1975, following the fall of Saigon and the withdrawal of American forces. The establishment of the Socialist Republic of Vietnam marked the culmination of decades of struggle and sacrifice, reaffirming the triumph of Vietnamese nationalism and the enduring legacy of the communist revolution in shaping the country's destiny.

Ho Chi Minh, revered as the father of modern Vietnam, was a towering figure whose life and leadership shaped the course of Vietnamese history. Born Nguyen Sinh Cung in 1890 in what was then French Indochina, Ho Chi Minh emerged as a passionate nationalist and revolutionary determined to free Vietnam from colonial rule. Inspired by Marxist-Leninist ideology and the principles of nationalism, he dedicated his life to the fight for Vietnamese independence and social justice. He played a central role in establishing the Indochinese Communist Party in 1930, and he later founded the Viet Minh, a broad-based nationalist and communist coalition, to resist Japanese occupation during World War II and French colonial rule after the war. Ho Chi Minh's leadership during the First Indochina War against the French and the subsequent Vietnam War against the United States demonstrated his strategic acumen, resilience, and unwavering commitment to the cause of Vietnamese liberation. Despite facing overwhelming odds and immense hardships, including years of guerrilla warfare and diplomatic isolation, he never wavered in his pursuit of independence and reunification for Vietnam. His vision of a unified and independent Vietnam, free from foreign domination, galvanized the Vietnamese people and inspired generations of revolutionaries worldwide. After his death in 1969, Ho Chi Minh's legacy continued to loom large in Vietnam's national consciousness, symbolizing the enduring spirit of resistance and resilience in the face of adversity.

Mengistu Haile Mariam's rise to power unfolded in a nation shaped by centuries of tradition under the rule of the Solomonic Dynasty, which had governed Ethiopia since the 13th century. By the 20th century, however, imperial rule, including that of Emperor Haile Selassie, faced growing discontent among the populace over socioeconomic disparities, political repression, and a desire for modernization and equality. Mengistu, a military officer, emerged as a central figure in the Ethiopian Revolution of 1974, which sought to dismantle the monarchy and bring about social and economic reforms. Inspired by Marxist-Leninist ideologies, Mengistu envisioned a socialist Ethiopia free from exploitation and inequality.
The revolution led to the ousting of Emperor Haile Selassie and the rise of Mengistu's regime, the Derg. However, Mengistu's rule was marked by authoritarianism, brutality, and the infamous Red Terror, during which political opponents were ruthlessly persecuted and killed. Despite aspirations for social justice, Mengistu's regime brought widespread suffering, economic mismanagement, and centralized control. In 1991, Mengistu was ousted from power, marking the end of an era; he fled Ethiopia and sought asylum in Zimbabwe. Although Mengistu's legacy remains controversial, Ethiopia's struggle for independence and sovereignty, coupled with its pursuit of socialist ideals, continues to shape its national identity and historical narrative.

Land reform measures were initiated in Kerala as part of the state's broader development agenda following independence. The Communist-led governments in Kerala during the 1950s and 1960s played a key role in implementing these reforms. The legislation included the Kerala Land Reforms Act of 1963, which addressed the skewed distribution of landownership by imposing land ceilings. Land ceilings limited the amount of land an individual or family could own, with surplus land redistributed to landless or marginal farmers. Furthermore, tenancy reforms were introduced to protect tenants' rights, ensure fair rents, and provide security of tenure; these aimed to curb landlords' exploitation of tenants and promote stability in agricultural communities. Additionally, efforts were made to provide land to landless agricultural laborers, empowering them to become independent cultivators and stakeholders in the agrarian economy.

The implementation of land reforms in Kerala saw significant success compared to many other states in India. By the late 1970s, a substantial portion of surplus land had been redistributed, reducing the concentration of land ownership and diminishing the influence of traditional landlord classes. This improved agricultural productivity, social equity, and rural livelihoods. In contrast, the effectiveness of land reform efforts varied across other states in India. In states like West Bengal and Tamil Nadu, communist-led governments similarly prioritized land reform agendas and achieved notable success in redistributing land to the landless and implementing tenancy reforms. However, in states lacking political will or administrative capacity, such as Uttar Pradesh and Bihar, progress in land reforms was slower, and entrenched social and economic inequalities persisted. Despite the challenges and variations in implementation, land reform remains a crucial component of India's agricultural policy, and ongoing efforts to address landownership disparities and promote rural development remain a priority for policymakers at both the state and national levels.

The White Revolution, initiated by Mohammad Reza Shah Pahlavi in Iran during the 1960s, represented a pivotal moment in the country's history, marked by ambitious attempts at modernization and social reform. Inspired by a desire to consolidate power, promote economic development, and address deep-rooted social inequalities, the Shah implemented sweeping reforms that touched various aspects of Iranian society. The White Revolution encompassed land reform, women's rights, education, and healthcare, aiming to transform Iran into a modern, industrialized nation. While the Shah's vision of modernization primarily drove the White Revolution, it also bore elements of socialist influence.
For instance, the land reform program aimed to break the traditional landlords' power and redistribute land to tenant farmers and rural laborers, echoing socialist principles of economic equality and social justice. Additionally, the Shah's emphasis on social welfare programs, including initiatives to promote literacy, healthcare, and women's rights, reflected a commitment to improving the lives of ordinary Iranians and reducing socioeconomic disparities.

Oil revenue played a crucial role in financing the ambitious projects of the White Revolution. With Iran possessing vast oil reserves, the Shah leveraged oil revenues to fund infrastructure development, industrial projects, and social welfare programs. The influx of oil wealth enabled the Shah to implement land reform, invest in education and healthcare, and undertake large-scale infrastructure projects, such as the construction of highways, dams, and industrial complexes.

However, despite its ambitious goals, the White Revolution faced challenges and criticism. Many Iranians viewed the reforms as top-down and insufficient in addressing deep-rooted social and economic inequalities. The Shah's authoritarian rule, coupled with widespread corruption and repression, fueled opposition from various sectors of society, including religious leaders, intellectuals, and leftist groups. Ultimately, the White Revolution failed to fully address the underlying grievances of the Iranian people, contributing to the growing social unrest and opposition that would eventually culminate in the Iranian Revolution of 1979, the overthrow of the Shah's regime, and the establishment of an Islamic republic.
ICSE Foundation Mathematics for Class 7 by RS Aggarwal

Chapter 1: Sets (Download PDF)
Chapter 2: Venn Diagrams (Download PDF)
Chapter 3: Number System (Download PDF)
Chapter 4: Fractions (Download PDF)
Chapter 5: Decimals (Download PDF)
Chapter 6: Factors and Multiples (Download PDF)
Chapter 7: Powers and Roots (Download PDF)
Chapter 8: Percentage (Download PDF)
Chapter 9: Profit, Loss and Discount (Download PDF)
Chapter 10: Ratio and Proportion (Download PDF)
Chapter 11: Unitary Method (Download PDF)
Chapter 12: Time and Work (Download PDF)
Chapter 13: Time and Distance (Download PDF)
Chapter 14: Average (Download PDF)
Chapter 15: Simple Interest (Download PDF)
Chapter 16: Algebraic Expressions (Download PDF)
Chapter 17: Formula (Download PDF)
Chapter 18: Exponents (Download PDF)
Chapter 19: Special Products as Identities (Download PDF)
Chapter 20: Factorization (Download PDF)
Chapter 21: Relations and Mapping (Download PDF)
Chapter 22: Linear Equations (Download PDF)
Chapter 23: Linear Inequations (Download PDF)
Chapter 24: Graphs (Download PDF)
Chapter 25: Fundamental Geometrical Concepts (Download PDF)
Chapter 26: Lines and Angles (Download PDF)
Chapter 27: Basic Constructions (Download PDF)
Chapter 28: Triangles (Download PDF)
Chapter 29: Congruency of Triangles
Chapter 30: Construction of Triangles (Download PDF)
Chapter 31: Polygons (Download PDF)
Chapter 32: Quadrilaterals (Download PDF)
Chapter 33: Circles
Chapter 34: Symmetry, Reflection and Rotation (Download PDF)
Chapter 35: Perimeter and Area (Download PDF)
Chapter 36: Volume and Surface Area of Solids (Download PDF)
Chapter 37: Statistics (Download PDF)

ICSE Foundation Mathematics Syllabus
(Each teaching point below is followed by its teaching notes.)

Set Concepts: Review of work done in Class VI – idea of notation, equal sets, equivalent sets, the empty set, the universal set, cardinal property of a set, finite and infinite sets. Union and intersection of sets, disjoint sets, overlapping sets, complement set, Venn diagrams. Examples should be drawn from the number systems with which the pupil is familiar and from real-life situations; operations on sets should be confined to the universal set and one or two of its subsets, two disjoint sets, or two overlapping sets.

Numbers: Review of work done in Class VI. Natural numbers, whole numbers, the four fundamental operations, factors, repeated factors, exponents, prime factorisation, properties of exponents; H.C.F. or G.C.D.; multiples, even and odd numbers, L.C.M.; perfect-square natural numbers and their square roots. Integers: the four fundamental operations. Fractions: classification and comparison of fractions; the four fundamental operations with fractions; simplification; percentages; ratio. Decimals: the four fundamental operations; recurring decimals; approximation (rounding off). Powers and roots: elementary treatment, based on the multiplication tables, with drilling in the most frequently used powers and roots.

Arithmetic Problems: Unitary method; speed, time and distance (simple problems); ratio, sharing in a ratio; profit and loss; average (direct problems to be emphasized).

Fundamental Concepts: Review of work done in Class VI. Concepts of degree and coefficients; like and unlike terms; polynomials with rational coefficients. Addition, subtraction and multiplication of a polynomial by a monomial or binomial; division of a polynomial, in one variable only, by a monomial or binomial in one variable only. Using the rule Dividend = (Divisor × Quotient) + Remainder to check the result of a division (a short worked check of this rule follows the syllabus).
Formula: Translation from words to symbols (construction of a formula) and from symbols to words. Use of formulae in real-life situations, as in simple interest, mensuration, geometry, physics, etc. Changing the subject of a formula (simple cases only, e.g. not involving solution of quadratic equations or factorization other than the common factor). Substitution in a formula; substitution in an expression in which the variables are only up to power 2.

Exponents: Integral exponents only; proofs are not required. Special products as identities.

Factors: Factors of (a) polynomials with a common monomial, (b) the difference of two squares.

Simplification: Simplification, addition and subtraction of algebraic expressions with integral denominators.

Relations and Mapping: To be done through arrow diagrams leading to listing of the matching pairs. Classification of functions is not included.

Equations and Inequations: A mathematical sentence; an open mathematical sentence in one variable. Simple equations and graphical representation of the solutions; problems leading to simple equations; simple inequations in one variable and graphical representation of the solution.

Graphs: Terms – rectangular coordinates, ordered pairs, abscissa and ordinate; plotting. Representing a linear equation in two variables graphically.
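The division check named under Fundamental Concepts is easy to demonstrate in a few lines. A minimal sketch, assuming nothing beyond the rule itself (the dividend and divisor are arbitrary example values):

```python
# Check a division result using Dividend = (Divisor * Quotient) + Remainder.
# The dividend and divisor are arbitrary example values.
dividend, divisor = 47, 5
quotient, remainder = divmod(dividend, divisor)  # quotient = 9, remainder = 2

assert dividend == divisor * quotient + remainder
print(f"{dividend} = ({divisor} * {quotient}) + {remainder}")
```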
Hubbert peak theory

The Hubbert peak theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil.

Choosing a particular curve determines a point of maximum production based on discovery rates, production rates and cumulative production. Early in the curve (pre-peak), the production rate increases due to the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines because of resource depletion.

The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite; therefore the rate of discovery, which initially increases quickly, must reach a maximum and decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume.

Hubbert's peak

"Hubbert's peak" can refer to the peaking of production of a particular area, which has now been observed for many fields and regions. Hubbert's peak was thought to have been achieved in the contiguous 48 United States (that is, excluding Alaska and Hawaii) in the early 1970s. Oil production peaked at 10,200,000 barrels per day (1,620,000 m3/d) and then declined over the following decades. Recent advances in extraction technology, particularly those that led to the extraction of tight oil and oil from shale, have drastically changed the picture: in November 2017 the United States once again surpassed the 10-million-barrel mark for the first time since 1970.

Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a predicted event: the peak of the entire planet's oil production. After peak oil, according to the Hubbert peak theory, the rate of oil production on Earth would enter a terminal decline. On the basis of his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965–1970. His prediction of an inevitable terminal decline proved incorrect, although the 1970 peak was not surpassed again until 2017. Hubbert further predicted a worldwide peak at "about half a century" from publication and approximately 12 gigabarrels (Gb) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve, but this would only delay the peak for perhaps 10 years. The development of new technologies has since provided access to large quantities of unconventional resources, and the boost in production has largely undercut Hubbert's prediction.

Hubbert's theory

In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve, without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries.
Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline. The Hubbert curve satisfies these constraints. Furthermore, it is symmetrical, with the peak of production reached when half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak.

Given past oil discovery and production data, a Hubbert curve that attempts to approximate past discovery data may be constructed and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of:

Q(t) = Q_max / (1 + a e^{-bt})

where Q_max is the total resource available (the ultimate recovery of crude oil), Q(t) the cumulative production, and a and b are constants. The year of maximum annual production (peak) is:

t_max = (1/b) ln(a)

at which point the cumulative production reaches half of the total available resource:

Q(t_max) = Q_max / 2

(A short numeric sketch of these formulas follows this subsection.) The Hubbert equation assumes that oil production is symmetrical about the peak. Others have used similar but non-symmetrical equations, which may provide a better fit to empirical production data.

Use of multiple curves

The sum of multiple Hubbert curves, a technique not developed by Hubbert himself, may be used in order to model more complicated real-life scenarios. For example, when a new technology such as hydraulic fracturing opens formations that were not productive before, production no longer follows a single curve: a new curve must be added to the old one and the overall model reworked. Such technologies are limited in number, but each makes a big impact on production.

Hubbert, in his 1956 paper, presented two scenarios for US crude oil production:
- most likely estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 150 giga-barrels (Gb), and a peak in 1965. The size of the ultimate resource was taken from a synthesis of estimates by well-known oil geologists and the US Geological Survey, which Hubbert judged to be the most likely case.
- upper-bound estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 200 giga-barrels, and a peak in 1970.

Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970, although the actual peak was 17% higher than Hubbert's curve. Production declined, as Hubbert had predicted, and stayed within 10 percent of Hubbert's predicted value from 1974 through 1994; since then, actual production has been significantly greater than the Hubbert curve, as new technologies opened access to large quantities of unconventional resources.

Hubbert's 1956 production curves depended on geological estimates of ultimate recoverable oil resources, but he was dissatisfied by the uncertainty this introduced, given the various estimates ranging from 110 billion to 590 billion barrels for the US.
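A minimal numeric sketch of the formulas above. The constants a, b and Q_max are arbitrary illustrative values, not fitted to any real production data; in practice they would be estimated from observed discovery and production records:

```python
import math

# Logistic cumulative production Q(t) = Q_max / (1 + a * exp(-b * t));
# the Hubbert curve is its derivative dQ/dt. Constants are illustrative only.
Q_MAX = 200.0        # assumed ultimate recovery, e.g. in gigabarrels
A, B = 50.0, 0.07    # arbitrary shape constants

def cumulative(t: float) -> float:
    """Cumulative production Q(t)."""
    return Q_MAX / (1.0 + A * math.exp(-B * t))

def production_rate(t: float) -> float:
    """Hubbert curve dQ/dt, via the logistic identity dQ/dt = b*Q*(1 - Q/Q_max)."""
    q = cumulative(t)
    return B * q * (1.0 - q / Q_MAX)

t_peak = math.log(A) / B   # year of maximum production: t_max = (1/b) ln(a)
print(f"peak year offset: {t_peak:.1f}")
print(f"cumulative production at peak: {cumulative(t_peak):.1f} "
      f"(half of Q_max = {Q_MAX / 2:.1f})")
print(f"peak production rate: {production_rate(t_peak):.2f} per year")
```

Fitting a, b and Q_max to past production data (for example by least squares) is exactly how the date of peak production and the ultimate recovery are estimated from the curve.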
Starting in his 1962 publication, he made his calculations, including that of ultimate recovery, based only on mathematical analysis of production rates, proved reserves, and new discoveries, independent of any geological estimates of future discoveries. He concluded that the ultimate recoverable oil resource of the contiguous 48 states was 170 billion barrels, with a production peak in 1966 or 1967. He considered that, because his model incorporated past technical advances, any future advances would occur at the same rate and were therefore also accounted for. Hubbert continued to defend his calculation of 170 billion barrels in his publications of 1965 and 1967, although by 1967 he had moved the peak forward slightly, to 1968 or 1969.

A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful" (providing the best fit to the data), though many areas studied had a sharper "peak" than predicted. A 2007 study of oil depletion by the UK Energy Research Centre pointed out that there is no theoretical and no robust practical reason to assume that oil production will follow a logistic curve. Neither is there any reason to assume that the peak will occur when half the ultimate recoverable resource has been produced; in fact, empirical evidence appears to contradict this idea. An analysis of 55 post-peak countries found that the average peak occurred at 25 percent of the ultimate recovery.

Hubbert also predicted that natural gas production would follow a logistic curve similar to that of oil. An accompanying graph (not reproduced here) compared actual US gas production against the prediction he published in 1962.

Economics

Energy return on energy investment

The ratio of energy extracted to the energy expended in the process is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Should the EROEI drop to one, or equivalently the net energy gain fall to zero, oil production is no longer a net energy source. There is a difference between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard-to-transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca tar sands plants. Cheap natural gas has also led to ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods to measure EROEI are in debate.

The assumption of inevitably declining volumes of oil and gas produced per unit of effort is contrary to recent experience in the US. In the United States, as of 2017, there has been an ongoing decade-long increase in the productivity of oil and gas drilling in all the major tight oil and gas plays. The US Energy Information Administration reports, for instance, that in the Bakken Shale production area of North Dakota, the volume of oil produced per day of drilling rig time in January 2017 was 4 times the oil volume per day of drilling five years previous, in January 2012, and nearly 10 times the oil volume per day of ten years previous, in January 2007.
In the Marcellus gas region of the northeast, the volume of gas produced per day of drilling time in January 2017 was 3 times the gas volume per day of drilling five years previous, in January 2012, and 28 times the gas volume per day of drilling ten years previous, in January 2007.

Growth-based economic models

"Our principal constraints are cultural. During the last two centuries we have known nothing but exponential growth and in parallel we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of non-growth." — M. King Hubbert, Exponential Growth as a Transient Phenomenon in Human History

Some economists describe the problem as uneconomic growth or a false economy. On the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth". Brief oil interruptions in 1973 and 1979 markedly slowed, but did not stop, the growth of world GDP.

Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 2003 study Food, Land, Population and the U.S. Economy, placed the maximum U.S. population for a sustainable economy at 200 million (the actual population was approximately 290 million in 2003 and 329 million in 2019). To achieve a sustainable economy, the study says, world population will have to be reduced by two-thirds. Without population reduction, the study predicts an agricultural crisis beginning in 2020, becoming critical c. 2050. The peaking of global oil along with the decline in regional natural gas production may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.

Hubbert peaks

Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.

Although observers believe that peak coal is significantly further out than peak oil, Hubbert studied the specific example of anthracite in the US, a high-grade coal whose production peaked in the 1920s, and found that it closely matches a Hubbert curve. Hubbert put recoverable coal reserves worldwide at 2.5 × 10^12 metric tons, peaking around 2150 (depending on usage). More recent estimates suggest an earlier peak. Coal: Resources and Future Production (PDF, 630 KB), published on April 5, 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years. Reporting on this, Richard Heinberg also notes that the date of peak annual energetic extraction from coal is likely to come earlier than the date of peak in quantity of coal (tons per year) extracted, as the most energy-dense types of coal have been mined most extensively. A second study, The Future of Coal by B. Kavalov and S. D.
Peteves of the Institute for Energy (IFE), prepared for the European Commission Joint Research Centre, reaches similar conclusions and states that "coal might not be so abundant, widely available and reliable as an energy source in the future".

In a paper in 1956, after a review of US fissionable reserves, Hubbert notes of nuclear power:

"There is promise, however, provided mankind can solve its international problems and not destroy itself with nuclear weapons, and provided world population (which is now expanding at such a rate as to double in less than a century) can somehow be brought under control, that we may at last have found an energy supply adequate for our needs for at least the next few centuries of the 'foreseeable future.'"

As of 2015, the identified resources of uranium are sufficient to provide more than 135 years of supply at the present rate of consumption. Technologies such as the thorium fuel cycle, reprocessing and fast breeders can, in theory, extend the life of uranium reserves from hundreds to thousands of years. One skeptical assessment runs:

"... you would have to build 10,000 of the largest power plants that are feasible by engineering standards in order to replace the 10 terawatts of fossil fuel we're burning today ... that's a staggering amount and if you did that, the known reserves of uranium would last for 10 to 20 years at that burn rate. So, it's at best a bridging technology ... You can use the rest of the uranium to breed plutonium 239 then we'd have at least 100 times as much fuel to use. But that means you're making plutonium, which is an extremely dangerous thing to do in the dangerous world that we live in."

Almost all helium on Earth is a result of radioactive decay of uranium and thorium. Helium is extracted by fractional distillation from natural gas, which contains up to 7% helium. The world's largest helium-rich natural gas fields are found in the United States, especially in the Hugoton and nearby gas fields in Kansas, Oklahoma, and Texas. The extracted helium is stored underground in the National Helium Reserve near Amarillo, Texas, the self-proclaimed "Helium Capital of the World". Helium production is expected to decline along with natural gas production in these areas. Helium, the second-lightest chemical element, rises to the upper layers of Earth's atmosphere, from which it can escape Earth's gravitational attraction forever. Approximately 1,600 tons of helium are lost per year as a result of atmospheric escape mechanisms.

Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that the peak production for metals such as copper, tin, lead, zinc and others would occur within decades, and for iron within two centuries, as with coal. The price of copper rose 500% between 2003 and 2007, which some attributed to peak copper. Copper prices later fell, along with many other commodities and stock prices, as demand shrank from fear of a global recession.

Lithium availability is a concern for a fleet of cars using Li-ion batteries, but a paper published in 1996 estimated that world reserves are adequate for at least 50 years. A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.

In 2009, Aaron Regent, president of the Canadian gold giant Barrick Gold, said that global output had been falling by roughly one million ounces a year since the start of the decade.
The total global mine supply had dropped by 10% as ore quality eroded, implying that the roaring bull market of the preceding eight years might have further to run. "There is a strong case to be made that we are already at 'peak gold'," he told The Daily Telegraph at RBC's annual gold conference in London. "Production peaked around 2000 and it has been in decline ever since, and we forecast that decline to continue. It is increasingly difficult to find ore," he said. Ore grades have fallen from around 12 grams per tonne in 1950 to nearer 3 grams in the US, Canada, and Australia. South Africa's output has halved since peaking in 1970, and fell a further 14 percent in 2008 as companies were forced to dig ever deeper, at greater cost, to replace depleted reserves. World mined gold production has peaked four times since 1900: in 1912, 1940, 1971, and 2001, each peak being higher than the previous ones. The latest peak was in 2001, when production reached 2,600 metric tons, then declined for several years. Production started to increase again in 2009, spurred by high gold prices, and achieved record new highs each year in 2012, 2013, and 2014, when production reached 2,990 tonnes.

Phosphorus supplies are essential to farming, and depletion of reserves is estimated at somewhere from 60 to 130 years. According to a 2008 study, the total reserves of phosphorus are estimated to be approximately 3,200 MT, with a peak production of 28 MT/year in 2034. Individual countries' supplies vary widely; without a recycling initiative, America's supply is estimated to last around 30 years. Phosphorus supplies affect agricultural output, which in turn limits alternative fuels such as biodiesel and ethanol. Its increasing price and scarcity (the global price of rock phosphate rose 8-fold in the 2 years to mid 2008) could change global agricultural patterns. Lands perceived as marginal because of remoteness, but with very high phosphorus content, such as the Gran Chaco, may see more agricultural development, while other farming areas, where nutrients are a constraint, may drop below the line of profitability.

Hubbert's original analysis did not apply to renewable resources. However, over-exploitation often results in a Hubbert peak nonetheless. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced (a toy simulation of such an overharvest peak follows the list below). For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak-usage debates similar to oil. These debates usually center around agriculture and suburban water usage, but the generation of electricity from nuclear energy or coal, and the tar sands mining mentioned above, are also water-resource intensive. The term fossil water is sometimes used to describe aquifers whose water is not being recharged.

- Fisheries: At least one researcher has attempted to perform Hubbert linearization (Hubbert curve) on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. The Atlantic northwest cod fishery was a renewable resource, but the numbers of fish taken exceeded the fish's rate of recovery. The end of the cod fishery does match the exponential drop of the Hubbert bell curve. Another example is the cod of the North Sea.
- Air/oxygen: Half the world's oxygen is produced by phytoplankton. The numbers of plankton have dropped by 40% since the 1950s.
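As promised above, a toy simulation of the overharvest dynamic: a logistically regrowing stock harvested with exponentially growing effort. This is an invented illustration, not any published model, and every parameter is arbitrary; the point is only that the catch rises, peaks, and collapses even though the resource is renewable.

```python
# Toy model: renewable stock with logistic regrowth, harvested with
# exponentially growing effort. All parameters are invented.
capacity = 1000.0     # carrying capacity of the stock
growth = 0.05         # intrinsic regrowth rate per year
stock = 500.0         # starting stock
effort = 0.005        # initial fraction of the stock harvested per year
effort_growth = 1.10  # harvesting effort grows 10% per year

for year in range(120):
    catch = min(effort * stock, stock)                   # cannot take more than exists
    regrowth = growth * stock * (1.0 - stock / capacity) # logistic regrowth
    stock = max(stock + regrowth - catch, 0.0)
    effort *= effort_growth
    if year % 10 == 0:
        print(f"year {year:3d}: catch = {catch:8.2f}, stock = {stock:8.2f}")
```

Running this prints a catch that climbs for a few decades, peaks near the stock's maximum sustainable yield, and then crashes as the stock is exhausted, which is the peak-and-decline shape described above.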
Criticisms of peak oil

Economist Michael Lynch argues that the theory behind the Hubbert curve is too simplistic and relies on an overly Malthusian point of view. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed back the date.

Leonardo Maugeri, vice president of the Italian energy company Eni, argues that nearly all peak estimates do not take into account unconventional oil, even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling because of improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today because of new technology, and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology, because of the low oil prices during the last 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices.

Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas (ASPO) responds by claiming that neither Russia nor Iran is currently troubled by unrest, but Iraq is.

CERA has argued:

"Despite his valuable contribution, M. King Hubbert's methodology falls down because it does not consider likely resource growth, application of new technology, basic commercial factors, or the impact of geopolitics on production. His approach does not work in all cases (including on the United States itself) and cannot reliably model a global production outlook. Put more simply, the case for the imminent peak is flawed. As it is, production in 2005 in the Lower 48 in the United States was 66 percent higher than Hubbert projected."

CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an "undulating plateau" for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015. Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production.

Criticisms of peak element scenarios

Although M. King Hubbert himself made major distinctions between decline in petroleum production and depletion (or the relative lack of it) for elements such as fissionable uranium and thorium, some others have predicted imminent peaks such as peak uranium and peak phosphorus on the basis of published reserve figures compared to present and future production. According to some economists, though, the amount of proved reserves inventoried at a given time is "a poor indicator of the total future supply of a mineral resource." As illustrations, tin, copper, iron, lead, and zinc all had both production from 1950 to 2000 and reserves in 2000 that greatly exceeded world reserves in 1950, which would be impossible except that "proved reserves are like an inventory of cars to an auto dealer" at a given time, having little relationship to the actual total that will be affordable to extract in the future.
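The reserves-to-production (R/P) ratio that Maugeri cites, and that the reserve-inventory criticism warns against over-reading, is a one-line calculation. A minimal sketch with invented figures, chosen only to echo the roughly 40-year ratio mentioned above (they are not actual reserve data):

```python
# Static reserves-to-production (R/P) ratio: "years left at current output".
# Both figures are invented for illustration, not real reserve data.
proved_reserves = 1200.0    # e.g. billion barrels
annual_production = 30.0    # billion barrels per year

rp_years = proved_reserves / annual_production
print(f"R/P ratio: {rp_years:.0f} years at current production")
# As the text notes, proved reserves behave like a dealer's inventory:
# new discoveries and technology restock them over time, so a static
# R/P ratio says little about the total future supply.
```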
In the example of peak phosphorus, additional concentrations exist intermediate between the 71,000 Mt of identified reserves (USGS) and the approximately 30,000,000,000 Mt of other phosphorus in Earth's crust (the average rock being 0.1% phosphorus), so showing that a decline in human phosphorus production will occur soon would require far more than comparing the former figure to the 190 Mt/year of phosphorus extracted in mines (2011 figure).

Notes

- Jean Laherrere, "Forecasting production from discovery", ASPO Lisbon May 19–20, 2005.
- J.R. Wood. "Peak Oil: The Looming Energy Crisis". Michigan Technological University. Retrieved 2013-12-27.
- Domm, Patti (2018-01-31). "US oil production tops 10 million barrels a day for first time since 1970". CNBC. Retrieved 2018-04-30.
- Nuclear Energy and the Fossil Fuels, M.K. Hubbert, presented before the Spring Meeting of the Southern District, American Petroleum Institute, Plaza Hotel, San Antonio, Texas, March 7–9, 1956 (PDF). Archived from the original on 2008-05-27. Retrieved 2014-11-10.
- "1976 Hubbert Clip". YouTube. Retrieved 2013-11-03.
- Bartlett, A.A. (1999). "An Analysis of U.S. and World Oil Production Patterns Using Hubbert-Style Curves". Mathematical Geology.
- M. King Hubbert, 1962, "Energy Resources," National Academy of Sciences, Publication 1000-D, p. 57.
- Hubbert's Petroleum Production Model: An Evaluation and Implications for World Oil Production Forecasts, Alfred J. Cavallo, Natural Resources Research, Vol. 13, No. 4, December 2004.
- Malanichev, A.G. (2018). "Limits of Technological Efficiency of Shale Oil Production in the USA". Foresight and STI Governance. 12 #4: 90–101.
- Laherrère, J.H. (Feb 18, 2000). "The Hubbert curve: its strengths and weaknesses". dieoff.org. Archived from the original on October 9, 2018. Retrieved September 16, 2011.
- M. King Hubbert, 1962, "Energy Resources," National Academy of Sciences, Publication 1000-D, p. 60.
- M. King Hubbert, "National Academy of Sciences Report on Energy Resources: reply," AAPG Bulletin, Oct. 1965, Vol. 49 No. 10, pp. 1720–27.
- M. King Hubbert, "Degree of advancement of petroleum exploration in United States," AAPG Bulletin, Nov. 1967, Vol. 51 No. 11, pp. 2207–27.
- Brandt, A. R. (2007). "Testing Hubbert". Energy Policy. 35 (5): 3074–88. doi:10.1016/j.enpol.2006.11.004.
- Steve Sorrell and others, Global Oil Depletion, UK Energy Research Centre, ISBN 1-903144-03-5.
- M. King Hubbert, 1962, "Energy Resources," National Academy of Sciences, Publication 1000-D, pp. 81–83.
- US Energy Information Administration, Drilling productivity report, 15 May 2017 (see "Report data" spreadsheet).
- "Exponential Growth as a Transient Phenomenon in Human History". Hubbertpeak.com. Retrieved 2013-11-03.
- "Our Perpetual Growth Utopia". Dieoff.org. Archived from the original on 2019-04-28. Retrieved 2013-11-03.
- Cynic, Aaron (2003-10-02). "Eating Fossil Fuels". Energybulletin.net. Archived from the original on 2007-06-11. Retrieved 2013-11-03.
- "The Oil Drum: Europe | Agriculture Meets Peak Oil: Soil Association Conference". Europe.theoildrum.com. Retrieved 2013-11-03.
- White, Bill (December 17, 2005). "State's consultant says nation is primed for using Alaska gas". Anchorage Daily News. Archived from the original on February 21, 2009.
- Bentley, R.W. (2002). "Viewpoint - Global oil & gas depletion: an overview" (PDF). Energy Policy. 30 (3): 189–205. doi:10.1016/S0301-4215(01)00144-6.
- "Startseite" (PDF). Energy Watch Group. Archived from the original (PDF) on 2013-09-11. Retrieved 2013-11-03.
- Phillips, Ari (2007-05-21). "Peak coal: sooner than you think". Energybulletin.net. Archived from the original on 2008-05-22. Retrieved 2013-11-03.
- "Museletter". Richard Heinberg. 2009-12-01. Retrieved 2013-11-03.
- "Coal: Bleak outlook for the black stuff", by David Strahan, New Scientist, January 19, 2008, pp. 38–41.
- M. King Hubbert (June 1956). "Nuclear Energy And The Fossil Fuels" (PDF). Shell Development Company. Archived from the original (PDF) on 2008-05-27. Retrieved 2013-12-27.
- NEA, IAEA (2016). Uranium 2016 – Resources, Production and Demand (PDF). OECD Publishing. doi:10.1787/uranium-2016-en. ISBN 978-92-64-26844-9.
- Jones, Tony (23 November 2004). "Professor Goodstein discusses lowering oil reserves". Australian Broadcasting Corporation. Archived from the original on 2013-05-09. Retrieved 14 April 2013.
- Kockarts, G. (1973). "Helium in the Terrestrial Atmosphere". Space Science Reviews. 14 (6): 723ff. Bibcode:1973SSRv...14..723K. doi:10.1007/BF00224775.
- "Earth Loses 50,000 Tonnes of Mass Every Year". SciTech Daily.
- Daniel L. Edelstein (January 2008). "Copper" (PDF). U.S. Geological Survey, Mineral Commodity Summaries. Retrieved 2013-12-27.
- Andrew Leonard (2006-03-02). "Peak copper?". Salon. Archived from the original on 2008-03-07. Retrieved 2008-03-23.
- "Peak Copper Means Peak Silver". News.silverseek.com. Archived from the original on 2013-11-04. Retrieved 2013-11-03.
- "Commodities – Demand fears hit oil, metals prices". Uk.reuters.com. 2009-01-29. Retrieved 2013-11-03.
- "Impact of lithium abundance and cost on electric vehicle battery applications". Cat.inist.fr. Retrieved 2013-11-03.
- "Department for Transport". Dft.gov.uk. Retrieved 2013-11-03.
- "Barrick shuts hedge book as world gold supply runs out". The Telegraph. Retrieved 2013-11-03.
- Thomas Chaise, World gold production 2010, 13 May 2010.
- US Geological Survey, Gold, Mineral Commodity Summaries, Jan. 2016.
- Stuart White, Dana Cordell (2008). "Peak Phosphorus: the sequel to Peak Oil". Global Phosphorus Research Initiative (GPRI). Retrieved 2009-12-11.
- Stephen M. Jasinski (January 2006). "Phosphate Rock" (PDF). U.S. Geological Survey, Mineral Commodity Summaries. Retrieved 2013-12-27.
- Ecological Sanitation Research Programme (May 2008). "Closing the Loop on Phosphorus" (PDF). Stockholm Environment Institute. Archived from the original (PDF) on 2006-08-05. Retrieved 2013-12-27.
- Don Nicol. "A postcard from the central Chaco" (PDF). Archived from the original (PDF) on 2009-02-26. Retrieved 2009-01-23. ("alluvial sandy soils have phosphorus levels of up to 200–300 ppm")
- Meena Palaniappan and Peter H. Gleick (2008). "The World's Water 2008–2009, Ch 1" (PDF). Pacific Institute. Archived from the original (PDF) on 2009-03-20. Retrieved 2009-01-31.
- "How General is the Hubbert Curve?". Aspoitalia.net. Retrieved 2013-11-03.
- "Laherrere: Multi-Hubbert Modeling". Hubbertpeak.com. Retrieved 2013-11-03.
- "Plankton, base of ocean food web, in big decline". NBC News. 2010-07-28. Retrieved 2013-11-03.
- "Energyseer, Strategic Energy & Economic Research Inc., Seer". Energyseer.com. Retrieved 2013-11-03.
- Michael C. Lynch. "The New Pessimism about Petroleum Resources: Debunking the Hubbert Model (and Hubbert Modelers)" (PDF). Strategic Energy & Economic Research, Inc. Retrieved 2013-12-27.
- "Michael Lynch Hubbert Peak of Oil Production". Hubbertpeak.com. Retrieved 2013-11-03.
- Campbell, CJ (2005). Oil Crisis. Brentwood, Essex, England: Multi-Science Pub. Co. p. 90. ISBN 0-906522-39-0.
- Maugeri, L. (2004). "Oil: Never Cry Wolf—Why the Petroleum Age Is Far from over". Science. 304 (5674): 1114–15. doi:10.1126/science.1096427. PMID 15155935.
- "Oil, Oil Everywhere". Forbes. July 24, 2006.
- "The truth about global oil supply". Thefirstpost.co.uk. Archived from the original on 2007-09-26. Retrieved 2013-11-03.
- "ASPO – The Association for the Study of Peak Oil and Gas". Peakoil.net. Retrieved 2013-11-03.
- Valentine, Katie (2006-11-14). "CERA says peak oil theory is faulty". Energybulletin.net. Archived from the original on 2006-11-28. Retrieved 2013-11-03.
- Valentine, Katie (2006-08-10). "CERA's report is over-optimistic". Energybulletin.net. Archived from the original on 2012-02-12. Retrieved 2013-11-03.
- Valentine, Katie (2005-05-24). "Oil: Caveat empty". Energybulletin.net. Archived from the original on 2008-06-03. Retrieved 2013-11-03.
- Whipple, Tom (2006-03-08). "Nuclear Energy and the Fossil Fuels". Energybulletin.net. Archived from the original on 2008-08-11. Retrieved 2013-11-03.
- James D. Gwartney, Richard L. Stroup, Russell S. Sobel, David MacPherson. Economics: Private and Public Choice, 12th Edition. South-Western Cengage Learning, p. 730. Extract, accessed 5-20-2012.
- Stephen M. Jasinski (January 2012). "Phosphate Rock" (PDF). U.S. Geological Survey, Mineral Commodity Summaries. Retrieved 2013-12-27.
- American Geophysical Union, Fall Meeting 2007, abstract #V33A-1161. Mass and Composition of the Continental Crust.
- Greenwood, N. N.; Earnshaw, A. (1997). Chemistry of the Elements (2nd edn.). Oxford: Butterworth-Heinemann. ISBN 0-7506-3365-4.
- "Feature on United States oil production" (November 2002). ASPO Newsletter #23.
- Greene, D.L. & J.L. Hopson (2003). Running Out of and Into Oil: Analyzing Global Depletion and Transition Through 2050. ORNL/TM-2003/259, Oak Ridge National Laboratory, Oak Ridge, Tennessee, October.
- Economists Challenge Causal Link Between Oil Shocks And Recessions (August 30, 2004). Middle East Economic Survey, Vol. XLVII No. 35.
- Hubbert, M.K. (1982). Techniques of Prediction as Applied to Production of Oil and Gas, US Department of Commerce, NBS Special Publication 631, May 1982.
Algorithms still rule the software world. Complex code can be broken down into smaller, well-defined blocks using algorithms. One might wonder whether flowcharts can serve the same purpose of building a proper understanding of the code, but in practice algorithms are usually the better tool. Before answering why algorithms are still preferred over flowcharts, let us see what an algorithm and a flowchart mean in technical terms.
An algorithm is a step-by-step set of statements that describes how a program solves a problem. The steps are written in a simple, understandable language; plain English is the most commonly used. These are the five main properties of an algorithm:
- Input: An algorithm takes zero or more inputs.
- Output: An algorithm produces at least one output.
- Definiteness: Every step is stated precisely and unambiguously.
- Finiteness: The algorithm terminates after a finite number of steps.
- Effectiveness: Each step is basic enough that it could, in principle, be carried out with pen and paper.
In addition, every algorithm has a complexity: the amount of time and space it needs to execute. Complexity is not one of the defining properties, but it is the main factor used to compare algorithms.
The first step of an algorithm is written as Begin, indicating that the algorithm starts, and the last step is written as End, indicating that it finishes.
A flowchart is the pictorial or graphical representation of an algorithm. The steps are drawn as shapes:
- Start or Stop is represented by an ellipse.
- Input or Output is represented by a parallelogram.
- Processing statements are represented by rectangles.
- Decision-making statements are represented by a rhombus.
- The shapes are linked by arrows (->).
Flowcharts are often easier to read than written algorithms because they are pictorial, and one can write code directly from a clear flowchart. Yet algorithms are still preferred, and here are the reasons:
- An algorithm can be written even for complex code, whereas a flowchart for the same logic quickly becomes large and hard to follow.
- Conditional and looping statements are compact in an algorithm; in a flowchart they require decision symbols and loop-back arrows, which clutter the diagram as the logic grows.
- Tracing and debugging a written algorithm is straightforward, whereas walking through a sprawling flowchart to find an error is tedious.
- Flowcharts must follow strict symbol conventions, while algorithms have no rigid formatting rules and sit closer to the final code.
Consider a simple algorithm for checking whether the sum of two numbers is even:
Step 1: Begin
Step 2: Read a and b
Step 3: If (a % 2 == 0 AND b % 2 == 0) then
Step 4: Compute c = a + b and display "c is even"
Step 5: Else display "a and b are not both even"
Step 6: End
The algorithm is written in simple, understandable language, and one can easily write the code for it (see the sketch below). A flowchart for the same logic would need a decision symbol with two branches, which is more work to draw and to change later. Many organizations prefer algorithms over flowcharts for documenting code, because they describe the logic in plain language that maps directly onto the program.
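As a quick illustration, the step-by-step algorithm above translates almost line for line into code. A minimal sketch in Python (the function name is just illustrative):

```python
def check_even_sum(a, b):
    """Translate the step-by-step algorithm above into code.

    If both inputs are even, report that their sum is even;
    otherwise report that the inputs are not both even.
    """
    if a % 2 == 0 and b % 2 == 0:   # Step 3: are both inputs even?
        c = a + b                   # Step 4: compute the sum
        print(f"{a} + {b} = {c} is even")
    else:                           # Step 5: at least one input is odd
        print("The inputs are not both even")

check_even_sum(4, 6)   # 4 + 6 = 10 is even
check_even_sum(4, 7)   # The inputs are not both even
```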
Math 10 Chapter 7 - Part A
1. The Pyramids in Egypt and the Taj Mahal are beautiful examples of the application of ____________. a) Applied Geometry b) Simple Geometry c) General Geometry d) Practical Geometry
2. A circle passing through the three ___________ of a triangle is called the circumcircle.
3. A circle touching the three _________ of a triangle is called the in-circle.
4. The perpendicular distance between the in-centre and any side of a triangle is called _______. b) True length
5. The point of intersection of the bisectors of the interior angles of a triangle is called ________. b) True length
6. The ___________ of a side of a triangle not only cuts it into two equal halves but is also perpendicular to it. a) Circum Centre b) Right Bisector
7. The point of intersection of the right bisectors of the sides of a triangle is called __________. b) Concurrent Centre c) Circum Centre
8. _______________ escribed circles can be drawn for a triangle. c) Any number of
9. The bisectors of one interior angle and two opposite exterior angles are _________.
10. A tangent is ___________ which touches a circle only at a single point. a) Half circle b) A semi-circle c) An Arc d) A Line
11. __________ is always perpendicular to the radius of a circle. a) An Arc b) A point c) A Tangent
12. If the common tangent to two circles lies on the ___________ of the line joining their centres, it is called a direct common tangent. a) Opposite Side b) Same side c) Parallel Side
13. A median is a line segment joining one __________ of a triangle to the midpoint of the opposite side.
14. If the points of contact of the common tangents of two circles lie on the __________ of the line joining the centres of the circles, the tangents are called transverse common tangents. a) Same side b) Parallel Side c) Opposite Side
15. ____________ tangents can be drawn to a circle from a point outside the circle.
16. The radius of a circle is 4 cm. A point is taken at a distance of 5 cm from the centre of the circle and a tangent is drawn from it. Calculate the length of the tangent segment.
17. Calculate the radius of a circle if a tangent is drawn from a point at a distance of 10 cm from the centre of the circle and the length of the tangent segment is 8 cm (the point is outside the circle).
18. The medians of a triangle are ____________.
19. Calculate the distance from the centre of a point outside the circle from which tangents are drawn, if the radius is 5 cm and the length of the tangent segment is 8 cm.
20. The altitudes of a triangle are ____________.
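The three numerical questions above all use the fact that a tangent meets the radius at the point of contact at a right angle, so the distance d from the external point to the centre, the radius r, and the tangent length t satisfy t² + r² = d². A minimal sketch (the function and values below simply mirror those questions):

```python
import math

def tangent_length(d, r):
    """Length of the tangent segment drawn from an external point.

    d: distance from the external point to the centre of the circle
    r: radius of the circle
    The radius to the point of contact is perpendicular to the tangent,
    so the three lengths form a right triangle: t^2 + r^2 = d^2.
    """
    return math.sqrt(d**2 - r**2)

print(tangent_length(5, 4))      # 3.0   -> tangent segment of 3 cm
print(math.sqrt(10**2 - 8**2))   # 6.0   -> radius of 6 cm in question 17
print(math.sqrt(5**2 + 8**2))    # ~9.43 -> distance of the external point in question 19
```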
The concept of decentralization in information technologies is not a new one. The internet, probably the single most influential technological innovation of the last 100 years, started out as a decentralized phenomenon. The pioneers of those early days used protocols to connect their computers with other machines around the world and built applications like email services and the World Wide Web, hosting the content on their own computers. The Internet Before Decentralization The internet is a human construction that has its own languages, and these languages have their own rules and protocols allowing it to function properly. Previous to the development of these languages computers were isolated machines with no way to communicate with each other. By creating a structure of interconnections between computers and using these communication protocols, computers are able to interact with each other. This interconnected structure is called system architecture and it makes the internet possible. There are a number of different types of architecture but the two most prevalent are client-server and peer-to-peer networks. Of these two, the client-server model dominates the landscape and uses a language called Hypertext Transfer Protocol (HTTP) to communicate. Data is stored in centralized servers that are then accessed using location-based addresses utilizing HTTP. This centralized server model and HTTP are very effective for certain actions like manipulating text and image files and creating websites; when dealing with issues of speed, latency and throughput, centralization has proven to be a useful model. The client-server model is also great at loading websites and handling text and images, aspects that once comprised the majority of internet traffic. Because of these strengths HTTP has dominated the landscape. However, HTTP is not perfect. Specifically, it is not suited to handle the transfer of large data files, like audio and video, which is why the adoption of peer-to-peer networks gained popularity. There is also the issue of server security. Having a consolidated organization means that the risk of data breaches and hacks are huge: all of the data for a general population is stored on a handful of servers under a central control. If bad actors gain access to these servers they can glean, manipulate and delete huge swaths of information. Decentralization basically means that instead of all actions and operations passing through a single, central point of access, they are spread across a number of different nodes. Each of these nodes forms an independent part of the network and is involved in the storage of data and the protocols used to access and manipulate it. The first glimpses of decentralized protocols that the average person came into contact with were music sharing services like Napster and BitTorrent. These platforms used Peer to Peer (p2p) networks to transfer data from the network of nodes to a user’s computer. All users on the network host content as independent nodes, eliminating the need for central servers. BitTorrent uses p2p and builds upon it, creating a way to download large files with limited bandwidth by sourcing small bits of data across the entire network and downloading them simultaneously, solving the issue of download speed often associated with a client-server model. How Does Decentralized Cloud Storage Work? 
HTTP serves the client-server model of data access very well: data is stored on centralized servers, and location-based addresses are used to access it quickly and efficiently. But what happens when there are no centralized servers and no single address where the data is stored? With a different system architecture, this protocol is no longer suitable and a new language must be developed. One such language is IPFS, the InterPlanetary File System, an open-source project developed by Protocol Labs. IPFS is a collaborative project, with hundreds of developers around the world contributing to its development. With the attention it has gained, there are hopes that it can become a new standard for a decentralized internet. HTTP cannot function properly outside of a client-server model because it uses location-based addresses to retrieve data, and on a decentralized network there is no single address for a file. IPFS solves this by using content-based addressing: files are found not by IP address and server location but by the data they contain. While IPFS shares some traits with BitTorrent's decentralized p2p protocol, it also differs in some fundamental ways. BitTorrent is used strictly for p2p file sharing, while IPFS is intended to replace HTTP entirely. IPFS also practices deduplication, which eliminates redundancy on the network, frees up bandwidth and increases speed. IPFS uses hashing, the cryptographic method used by the blockchain, in which files are broken into blocks that are then given unique identifying codes. This overlap with blockchain technology makes IPFS well suited for integration. The decentralized model has its own issues with privacy and data security, which are addressed by each of the projects using this protocol. However, the hashing, encryption and decentralized architecture built into IPFS suggest that it will be more secure than the centralized models currently in use. But what incentive is there for users to offer their computers' storage to the network?
Decentralization Through Blockchain & IPFS
Seeing that the blockchain is a decentralized protocol by design, it is not surprising that people have found ways to integrate these technologies. There are a number of projects utilizing the blockchain for decentralized cloud storage, and here we will take a look at a few of the most exciting developments.
FileCoin: Utilizing Unused Storage Through Blockchain
This project comes from Protocol Labs, the same group that developed IPFS, and provides a blockchain-based answer to the problem of incentivizing node participation on the IPFS network. The project was built on the observation that there are huge amounts of unused storage space on the world's personal computers, and that a way to use this idle storage could have profound implications. Users are able to join the network and rent out the unused space on their hard drives, disks or data centers. Within the FileCoin ecosystem there are four roles. The first is clients: the people paying for their data to be stored across the network. Then there are storage miners, who rent out their space to the clients. Retrieval miners act as intermediaries, shuttling data from storage back to clients on request. Finally, there are full nodes that act as validators for the entire network.
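The FileCoin payment flow described next rests on the content-addressing idea above: a block's address is derived from its bytes, so anyone can check whether a node really returned (or still stores) the right block. A minimal sketch, using a bare SHA-256 digest as the address (real IPFS uses multihash-encoded CIDs, so the format below is a simplification):

```python
import hashlib

def content_address(block: bytes) -> str:
    """Derive an address from *what* the data is, not *where* it lives.

    Any node that returns bytes matching this digest has, by definition,
    returned the right block. (A bare SHA-256 hex digest is used here to
    keep the sketch short; IPFS addresses are more elaborate.)
    """
    return hashlib.sha256(block).hexdigest()

block = b"hello, decentralized world"
addr = content_address(block)
print("address:", addr)

# A validator (or a paying client) can check a storage node's response
# simply by re-hashing what the node sends back:
response_from_node = b"hello, decentralized world"
print("block verified:", content_address(response_from_node) == addr)
```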
It is only after the data is validated as correct and transferred successfully that a storage miner is paid for their storage space. This validation is done cryptographically, using the blockchain. Clients and storage miners are able to fine-tune their storage strategy to suit their needs as well as the needs of the network. With first-mover status, this is the project that could, with wide-scale implementation, create the ecosystem a decentralized internet needs to function.
Sia: Blockchain-Powered Cloud Network
This is another project that aims to replace the current giants of centralized cloud storage. It functions in a similar fashion to FileCoin, linking renters and storage providers on the network, but Sia differs from FileCoin in a few ways. Sia has placed a high priority on competitive pricing. Out of the gate, Sia is 70% cheaper than centralized cloud storage services like Amazon, Dropbox and Google. Sia has also engineered competitiveness into its model: hosts advertise their geography, speed, latency and price, allowing renters to choose the host that best suits their needs for each transaction. According to Sia, this will put downward pressure on prices while rewarding quality hosts. Sia also places an emphasis on security: low-cost data storage is worth nothing if it is not secure. One tool Sia uses to reduce the loss of data is Reed-Solomon redundancy. Each piece of data is stored on 30 different devices around the world, while only 10 of those devices need to be online at any given moment to access the data. Since it is extremely unlikely that enough of these machines will fail or be compromised at the same time, the odds of data being lost are astronomically low. Sia also applies high-level encryption at several different points for each piece of data. Every separate piece of data has its own passcode and is encrypted on each individual machine. Renters hold these passcodes and own all of their data; no third parties, not even the hosts, can access a renter's data. This encryption, along with smart contracts that ensure hosts and renters fulfill their end of the deal and a Merkle-tree proof-of-storage method built on 64-byte segments, makes for a very secure cloud. Sia is an interesting project and, with its focus on competition and security, is definitely worth watching.
Storj: A New Evolution Of Cloud Storage
Another interesting development utilizing the blockchain for decentralized cloud storage is Storj (pronounced "storage") and its Tardigrade project. Like Sia, this project uses sharding to secure the data stored on its network. Data files are split into a number of smaller pieces in standardized sizes of either 8 or 30 MB. This process not only increases security but also improves privacy and the overall functioning of the network. Data is encrypted by clients before it is transferred to storage space on the drives of users, called Farmers. Each individual shard gets a hash, with identifying data stored on a distributed hash table on the blockchain. The distribution of shards ensures that no single Farmer ever controls a complete file, which further improves data security. Storj also uses the Reed-Solomon algorithm to protect against data lost to node failure: the algorithm can recreate a lost file from as little as 50% of the remaining shards. Proof of integrity is maintained by hourly audits performed on files using Merkle trees.
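Both Sia and Storj lean on Merkle trees for their storage proofs: the renter keeps only a small root hash, and a host can later be challenged to show it still holds the underlying segments. The sketch below builds a Merkle root over fixed-size segments; it is a simplified illustration of the idea, not Sia's or Storj's actual audit protocol (segment sizes, padding and proof formats all differ in practice).

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(segments):
    """Fold a list of data segments into a single root hash.

    The renter stores only this root; any change to any segment on the
    host's side produces a different root and fails the audit.
    """
    level = [sha256(s) for s in segments]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

data = b"some file a renter wants stored on the decentralized network " * 4
segments = [data[i:i + 64] for i in range(0, len(data), 64)]  # 64-byte segments, as in the text
root = merkle_root(segments)
print("root kept by the renter:", root.hex())

# An honest host recomputes the same root from what it stored;
# a host that lost or altered a segment cannot.
print("audit passes:", merkle_root(segments) == root)
```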
Farmers reply to the Merkle tree query with an answer that can only be produced if all of the files are stored properly on their drives, and it is only then that they are paid. The project comes from an experienced team with a history in the crypto industry going back to 2014, and it offers the cheapest storage rental prices of any of these projects: prices start at $0.015 per GB per month. With a strong market presence and continued innovation, Storj is an interesting prospect in the race for the adoption of this decentralized tech. There is a strong push for a decentralized future. This technology allows for an egalitarian development of further innovations and helps bring power back to the people. Beyond any philosophical arguments about the evils of centralized data control, there are real-world examples of the way a decentralized cloud can benefit people. This is apparent in the way some nations have dealt with censorship and data manipulation. The consolidation of data gives governments an easy and nearly absolute way to control the information a population has access to. There have been many examples of state internet censorship around the world, with notable cases in China and Turkey. China has blocked many social media platforms and replaced them with its own, highly surveilled versions, while Turkey banned Wikipedia outright, claiming it was a threat to national security. These scenarios, along with the implications of hacking massive centralized servers, make a strong case in favor of decentralization. There are myriad reasons why a decentralized ecosystem is beneficial for nearly all parties involved (the exceptions being big tech firms that rake in cash for centralized server space, and authoritarian regimes). Everything from security and cost to ideology and philosophy is a valid argument for decentralizing data storage. The development of these new technologies brings us closer to taking the reins from monolithic corporations and building a system that gives users the freedom to grow and create in new and exciting ways. More great decentralized projects are available on KuCoin.
Surface Area and Volume
Two-dimensional shapes have an x-y plane to go home to at the end of the day, and what do the solids have? Nothing. Three-dimensional shapes have been homeless for such a long time, they've begun to sell their surface areas for shelter. Well, it's high time for solids to have a place of their own. We've set up a home so that 3D shapes don't have to sleep on park benches with newspapers for blankets anymore. Here, solids can find their place and finally feel welcome in the mathematical world. It's called the 3D coordinate system. An x-y coordinate system won't be enough to contain three-dimensional figures. If we try to squish a 3D shape into a 2D coordinate plane, it won't be comfortable for the shape and we might rip the plane (and we can't afford a new one). Instead, we can set up an x-y-z coordinate system to accommodate any and all 3D shapes. How can we envision the 3D coordinate system? Easy. First, we draw an x-y plane down on a sheet of paper and look down at it. That's where all the 2D shapes like triangles and circles and quadrilaterals live. If we look up, we can imagine another axis coming up and out of the page through the origin and perpendicular to the other axes. That's the z-axis. That's 3D space. That's what solids live in. And that's what the real world is: a 3D coordinate system. Doesn't it look pretty? It's newly renovated with hardwood floors and everything. Just like in a 2D graph, we mark the points of shapes with coordinates. This time, since there are three axes, we need three (preferably real) numbers to identify points in space. These numbers are a coordinate called an ordered triple and are written in the order (x, y, z). Point P, for instance, has the ordered triple (3, 1, 3). We'll need to calculate distances and stuff in this coordinate system too, so a formula would be useful. It's just like the 2D distance formula, but with a z coordinate added to it like an extra limb:
d = √((x₂ – x₁)² + (y₂ – y₁)² + (z₂ – z₁)²)
The midpoint formula, the Malcolm in the middle of formulas, can also be extended to the third dimension so that a point equidistant between two points in 3D space has the ordered triple
((x₁ + x₂)/2, (y₁ + y₂)/2, (z₁ + z₂)/2)
What's the distance and midpoint between points T (6, 2, 3) and U (1, 7, –4)?
d = √((6 – 1)² + (2 – 7)² + (3 – (–4))²) = √(25 + 25 + 49) = √99
d ≈ 9.95
We've found our distance. Now for the midpoint.
((6 + 1)/2, (2 + 7)/2, (3 + (–4))/2) = (3.5, 4.5, –0.5)
See? Piece of cake. We can do more with these coordinates than just calculate this, that, and the other thing (all of which you'll need to know). We can draw stuff, too. For example, let's say we want to draw a triangular prism with a base that has vertices of (0, 0, 0), (1, 2, 0), and (4, 0, 0) and a height of 5 units. We can start off drawing the base of the prism and then decide where to go from there (Hawaii, anyone?). That's the 2D shape. To make it 3D, we have to add the 5 units of height in. Since it's not specified, we can choose where we want to take the height (Hawaii, anyone?). Nice. That's our triangular prism in 3D coordinate space. It's found a home, so Hawaii is probably out of the question…or is it? If we want to move a shape in 3D space, all we have to do is change every point of that shape by the same amount. This is called translation (no, not into Latin). For instance, to move a rectangular prism up 13 units in the y-axis, we just have to add 13 to every y-coordinate in every ordered triple. The same goes with increasing or decreasing a shape's size. To find the coordinates of a solid that's similar to a given one, all we need to know are the coordinates and the scale factor. Multiply each value of each point by the scale factor, and we're set.
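For anyone who would rather let a computer grind through the arithmetic, here is a minimal sketch of the 3D distance and midpoint formulas; it reproduces the T and U example above:

```python
import math

def distance_3d(p, q):
    """Straight-line distance between two points given as (x, y, z) triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def midpoint_3d(p, q):
    """Point halfway between two (x, y, z) triples: average each coordinate."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

T = (6, 2, 3)
U = (1, 7, -4)

print(round(distance_3d(T, U), 2))  # 9.95 (that's sqrt(99))
print(midpoint_3d(T, U))            # (3.5, 4.5, -0.5)
```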
The box above has the following coordinates: A (0, 0, 0), B (2, 0, 0), C (2, 1, 0), D (0, 1, 0), E (0, 1, 3), F (0, 0, 3), G (2, 1, 3), and H (2, 0, 3). If we wanted to triple the size of the solid and move it over from the x-axis by 5 points, all we'd have to do is multiply each number in every point by 3 (to triple it) and add 5 to all the x-coordinates. A' (5, 0, 0), B' (11, 0, 0), C' (11, 3, 0), D' (5, 3, 0), E' (5, 3, 9), F' (5, 0, 9), G' (11, 3, 9), and H' (11, 0, 9). The figure would look like this. Three times as big, and moved over five units to the right. Mission accomplished. Our dear 3D solids finally have a home where they can move and grow in peace rather than in pieces.
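The same dilate-then-shift trick is easy to script. A minimal sketch that reproduces the A through H example above (multiply every coordinate by 3, then add 5 to each x-coordinate):

```python
def scale_and_shift(points, scale, dx):
    """Multiply every coordinate by `scale`, then add `dx` to each x-coordinate."""
    return {name: (x * scale + dx, y * scale, z * scale)
            for name, (x, y, z) in points.items()}

box = {"A": (0, 0, 0), "B": (2, 0, 0), "C": (2, 1, 0), "D": (0, 1, 0),
       "E": (0, 1, 3), "F": (0, 0, 3), "G": (2, 1, 3), "H": (2, 0, 3)}

for name, point in scale_and_shift(box, scale=3, dx=5).items():
    print(f"{name}' {point}")   # A' (5, 0, 0), B' (11, 0, 0), ... H' (11, 0, 9)
```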
This activity is related to a Teachable Moment from March 14, 2017. See "Celebrate Pi Day Like a NASA Rocket Scientist." OverviewIn the fourth installment of this illustrated problem set, students use the mathematical constant pi to solve real-world science and engineering problems. Students will use pi to calculate the angle of crater impacts on Mars, measure the size of the shadow that will fall on North America during the 2017 total solar eclipse, determine the orbital period of the Cassini spacecraft during its final weeks around Saturn, and find the habitable zone around TRAPPIST-1, a star that is home to seven Earth-size planets! Why March 14? Pi is what’s known as an irrational number, meaning its decimal representation never ends and it never repeats. It has been calculated to more than one trillion digits, but NASA scientists and engineers actually use far fewer digits in their calculations (see “How Many Decimals of Pi Do We Really Need?”). The approximation 3.14 is often precise enough, hence the celebration occurring on March 14, or 3/14 (when written in US month/day format). The first known celebration occurred in 1988, and in 2009, the US House of Representatives passed a resolution designating March 14 as Pi Day and encouraging teachers and students to celebrate the day with activities that teach students about pi. Why It’s Important While many of us celebrate by eating pi-themed pie and trying to memorize as many digits of pi as possible (the record is 70,030 digits), scientists and engineers at NASA take pi even further, using it in their day-to-day work exploring space! “Finding the volume of a sphere, area of a circle (and thus volume of a cylinder) are well known applications of pi,” said Charles Dandino, a JPL engineer who integrates mechanical engineering and electronics,“but those relationships also form the basis for how stiff a structure is, how it will vibrate, and understanding how a design might fail.” Rachel Weinberg works on the Orbiting Carbon Observatory 3, or OCO-3, instrument, which will investigate the distribution of carbon dioxide on Earth. She says pi came in handy during her studies at MIT and still does today for her work at JPL. “Just the other day during a meeting, the team went to the whiteboard and used pi to discuss the angles and dimensions of optical components on OCO-3,” she said. Pi allows us to calculate the size and area of two- and three-dimensional shapes, says Anita Sengupta, a JPL engineer, who has worked on a variety of planetary missions. “In my career, pi has allowed me to calculate the size of a shield needed to enter the atmosphere of Venus and the size of a parachute that could safely land the Curiosity rover on the surface of Mars. Most recently we used pi in our calculations of the expanding atom cloud we will create for an experiment called the Cold Atom Laboratory, which will fly aboard the International Space Station.” The Science Behind the Challenge The Pi Day Challenge gives students a chance to take part in recent discoveries and upcoming celestial events, all while using math and pi just like NASA scientists and engineers. “Students always want to know how math is used in the real world,” said Ota Lutz, a senior education specialist at JPL who helped create the Pi Day Challenge. 
“This problem set demonstrates the interconnectedness of science, math and engineering, providing teachers with excellent examples of cross-cutting concepts in action and students with the opportunity to solve real-world problems.” Here’s some of the science behind this year’s problem set. The craters that cover Mars can tell us a lot about the Red Planet. Studying ejecta – the material blasted out during an impact – can tell us even more. Information about ejecta patterns even came up during a recent workshop to discuss and select the final candidates for the Mars 2020 rover landing site. For the first problem in our Pi Day Challenge, students use pi and the area and perimeter of two craters to identify which was made by an impactor that struck Mars at a low angle. Researchers found that low-angle impactors create an unusual ejecta pattern around craters on Mars. As part of the research, scientists are currently working to identify and catalog these craters across Mars. The year 2017 brings a unique astronomical event to the United States for the first time in nearly 40 years! On August 21, 2017, a total solar eclipse will cross the continental United States. Starting in Oregon, the shadow of the moon will cross the country at more than 1,000 miles per hour, making its way to the Atlantic Ocean off the coast of South Carolina. Everyone inside the moon’s shadow will witness one of the most impressive sights nature has to offer. So how big is the shadow? In the second part of NASA’s Pi Day Challenge, students will use pi to calculate the area of the moon’s shadow on Earth during the total solar eclipse. This year also marks the final chapter in the exciting story of NASA’s Cassini mission at Saturn. Since 2004, Cassini has been orbiting the ringed giant, vastly improving our understanding of the second largest planet in the solar system. After more than 12 years around Saturn, Cassini’s fuel is running low, so mission operators have devised a grand finale that will take the spacecraft closer to Saturn than ever before – inside the gap between the planet and its rings – and finally into Saturn’s cloud tops, where it will burn up. To prevent the spacecraft from crashing into and possibly contaminating Saturn’s moons Enceladus and Titan, two locations with potentially habitable environments, students will use pi to safely navigate the spacecraft on its grand finale orbits and final dive into Saturn. Finally, students will investigate a relatively new and very exciting realm in astronomy, the search for habitable worlds. The discovery of exoplanets – worlds orbiting stars outside of our solar system – has changed our understanding of the universe. Until 1995, exoplanets hadn’t even been detected. Now, using the transit method – where planets are detected by measuring the light they block as they pass in front of a star – over 2300 exoplanets have been discovered. This has great implications in the search for life outside our solar system. Recently, astronomers discovered a record seven Earth-size planets orbiting a single star called TRAPPIST-1! Students will use pi to identify which of these planets orbit in the star’s habitable zone – the area where liquid water could exist. Download the free NASA "Pi in the Sky 4" poster and companion answer key: Join the Conversation - Join the conversation and share your Pi Day Challenge answers with @NASA/JPL_Edu on social media using the hashtag #NASAPiDayChallenge - Share how you're celebrating Pi Day 2017! Pi Day Challenges Facts and Figures
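One of the calculations hinted at above, the area of the moon's shadow, comes down to the circle-area formula A = πr². A minimal sketch with a purely hypothetical shadow width (the actual challenge problem supplies its own figures):

```python
import math

# Hypothetical width of the moon's umbral shadow on the ground, in miles.
# The real Pi Day Challenge problem provides its own numbers; this value
# is only a placeholder to show the pi-based calculation, A = pi * r^2.
shadow_diameter_miles = 70

radius = shadow_diameter_miles / 2
area = math.pi * radius ** 2
print(f"Approximate shadow area: {area:.0f} square miles")  # ~3848 sq mi
```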
[Worksheet image gallery: finding the volume of irregular solids and shapes (5th grade) by counting cubes, using formulas, and by water displacement.]
Correlational statistics vs tests of differences between groups Correlation/regression techniques reflect the strength of association between continuous variables Tests of group differences (t-tests, anova) indicate whether significant differences exist between group means Are The Differences We See Real? Major Assumptions Normally distributed variables Homogeneity of variance Robust to violation of assumptions A t -test or ANOVA is used to determine whether a sample of scores are from the same population as another sample of scores. (in other words these are inferential tools for examining differences in means) Why a t-test or ANOVA? t-tests An inferential statistical test used to determine whether two sets of scores come from the same population Is the difference between two sample means ‘real’ or due to chance? Use of t in t-tests Question: Is the t large enough that it is unlikely that the two samples have come from the same population? Decision: Is t larger than the critical value for t (see t tables – depends on critical and N) 68% 95% 99.7% Ye Good Ol’ Normal Distribution Use of t in t-tests t reflects the ratio of differences between groups to within groups variability Is the t large enough that it is unlikely that the two samples have come from the same population? Decision: Is t larger than the critical value for t (see t tables – depends on critical and N) One-tail vs. Two-tail Tests Two-tailed test rejects null hypothesis if obtained t-value is extreme is either direction One-tailed test rejects null hypothesis if obtained t-value is extreme is one direction (you choose – too high or too low) One-tailed tests are twice as powerful as two- tailed, but they are only focused on identifying differences in one direction. Compare one group (a sample) with a fixed, pre- existing value (e.g., population norms) E.g., Does a sample of university students who sleep on average 6.5 hours per day (SD = 1.3) differ significantly from the recommended 8 hours of sleep? Single sample t-test Compares mean scores on the same variable across different populations (groups) e.g., Do males and females differ in IQ? Do Americans vs. Non-Americans differ in their approval of George Bush? Independent groups t-test Assumptions (Independent samples t-test) IV is ordinal / categorical e.g., gender DV is interval / ratio e.g., self-esteem Homogeneity of Variance –If variances unequal (Levene’s test), adjustment made –Normality – t-tests robust to modest departures from normality: consider use of Mann-Whitney U test if severe skewness Independence of observations (one participant’s score is not dependent on any other participant’s score) Do males and females differ in memory recall? Paired samples t-test Same participants, with repeated measures Data is sampled within subjects, e.g., –Pre- vs. post- treatment ratings –Different factors e.g., Voter’s approval ratings of candidate X vs. Y Assumptions- paired samples t-test DV must be measured at interval or ratio level Population of difference scores must be normally distributed (robust to violation with larger samples) Independence of observations (one participant’s score is not dependent on any other participant’s score) Do females’ memory recall scores change over time? 
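In practice, tests like these are run in software rather than by hand. A minimal sketch with SciPy, using made-up recall scores purely for illustration, shows both the independent-samples and paired-samples versions described above:

```python
from scipy import stats

# Illustrative data only: memory recall scores for two independent groups ...
males   = [12, 15, 14, 10, 13, 11, 16, 14]
females = [14, 17, 15, 13, 16, 15, 18, 14]

t_ind, p_ind = stats.ttest_ind(males, females)   # independent groups t-test
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")

# ... and pre/post scores for the same participants (paired samples).
pre  = [12, 15, 14, 10, 13, 11, 16, 14]
post = [13, 16, 15, 12, 15, 11, 17, 15]

t_rel, p_rel = stats.ttest_rel(pre, post)        # paired samples t-test
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.3f}")
```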
Assumptions IV is ordinal / categorical e.g., gender DV is interval / ratio e.g., self-esteem Homogeneity of Variance – if variances unequal, adjustment made (Levene's Test) Normality – often violated, without consequence – look at histograms – look at skewness – look at kurtosis SPSS Output: Independent Samples t-test: Same Sex Relations SPSS Output: Independent Samples t-test: Opposite Sex Relations What is ANOVA? (Analysis of Variance) An extension of a t-test A way to test for differences between Ms of: (i) more than 2 groups, or (ii) more than 2 times or variables Main assumption: DV is metric, IV is categorical Introduction to ANOVA Single DV, with 1 or more IVs IVs are discrete Are there differences in the central tendency of groups? Inferential: Could the observed differences be due to chance? Follow-up tests: Which of the Ms differ? Effect Size: How large are the differences? F test ANOVA partitions the 'sums of squares' (variance from the mean) into: Explained variance (between groups) Unexplained variance (within groups) – or error variance F represents the ratio between explained and unexplained variance F indicates the likelihood that the observed mean differences between groups could be attributable to chance F is equivalent to a MLR test of the significance of R F is the ratio of between- : within-group variance Assumptions – One-way ANOVA DV must be: 1. Measured at interval or ratio level 2. Normally distributed in all groups of the IV (robust to violations of this assumption if Ns are large and approximately equal, e.g., >15 cases per group) 3. Have approximately equal variance across all groups of the IV (homogeneity of variance) 4. Independence of observations Example: One-way between groups ANOVA Does LOC differ across age groups? (___ year-olds, ___ year-olds, ___ year-olds) η² = SS_between / SS_total = .128 Eta-squared is expressed as a percentage: 12.8% of the total variance in control is explained by differences in Age Which age groups differ in their mean control scores? (Post hoc tests) Conclude: Gps 0 differs from 2; 1 differs from 2 ONE-WAY ANOVA Are there differences in Satisfaction levels between students who get different Grades? Assumptions – Repeated measures ANOVA 1. Sphericity – Variance of the population difference scores for any two conditions should be the same as the variance of the population difference scores for any other two conditions (Mauchly test of sphericity) Note: This assumption is commonly violated; however, the multivariate test (provided by default in SPSS output) does not require the assumption of sphericity and may be used as an alternative. When results are consistent, not of major concern. When results are discrepant, better to go with MANOVA 2. Normality Example: Repeated measures ANOVA Does LOC vary over a period of 12 months? LOC measures obtained over 3 intervals: baseline, 6 month follow-up, 12 month follow-up. Mean LOC scores (with 95% C.I.s) across 3 measurement occasions 1-way Repeated Measures ANOVA Do satisfaction levels vary between Education, Teaching, Social and Campus aspects of university life? Follow-up Tests Post hoc: Compares every possible combination Planned: Compares specific combinations Post hoc Control for Type I error rate Scheffe, Bonferroni, Tukey's HSD, or Student-Newman-Keuls Keeps experiment-wise error rate to a fixed limit Planned Need hypothesis before you start Specify contrast coefficients to weight the comparisons (e.g., 1st two vs.
last one) Tests each contrast at critical TWO-WAY ANOVA Are there differences in Satisfaction levels between Gender and Age? TWO-WAY ANOVA Are there differences in LOC between Gender and Age? Example: Two-way (factorial) ANOVA Main1: Do LOC scores differ by Age? Main2: Do LOC scores differ by Gender? Interaction: Is the relationship between Age and LOC moderated by Gender? (Does any relationship between Age and LOC vary as a function of Gender) Factorial designs test Main Effects and Interactions In this example we have two main effects (Age and Gender) And one interaction (Age x Gender) potentially explaining variance in the DV (LOC) Example: Two-way (factorial) ANOVA IVs Age recoded into 3 groups (3) Gender dichotomous (2) DV Locus of Control (LOC) Low scores = more internal High scores = more external Data Structure Plot of LOC by Age and Gender Age x gender interaction Age main effect Gender main effect Age x gender interaction Mixed Design ANOVA (SPANOVA) It is very common for factorial designs to have within-subject (repeated measures) on some (but not all) of their treatment factors. Mixed Design ANOVA (SPANOVA) Since such experiments have mixtures of between subjects and within-subject factors they are said to be of MIXED DESIGN Common practice to select two samples of subjects e.g., Males/Females Winners/Losers Mixed Design ANOVA (SPANOVA) Then perform some repeated measures on each group. Males and females are tested for recall of a written passage with three different line spacings Mixed Design ANOVA (SPANOVA) This experiment has two Factors B/W= Gender (male or Female) W/I = Spacing (Narrow, Medium, Wide) The Levels of Gender vary between subjects, whereas those of Spacing vary within-subjects Mixed Design ANOVA (SPANOVA) CONVENTION If A is Gender and B is Spacing the Reading experiment is of the type A X (B) signifying a mixed design with repeated measures on Factor B CONVENTION With three treatment factors, two mixed designs are possible These may be one or two repeated measures A X B X (C) or A X (B X C) ASSUMPTIONS Random Selection Normality Homogeneity of Variance Sphericity Homogeneity of Inter-Correlations SPHERICITY The variance of the population difference scores for any two conditions should be the same as the variance of the population difference scores for any other two conditions SPHERICITY Is tested by Mauchly’s Test of Sphericity If Mauchly’s W Statistic is p <.05 then assumption of sphericity is violated SPHERICITY The obtained F ratio must then be evaluated against new degrees of freedom calculated from the Greenhouse-Geisser, or Huynh- Feld, Epsilon values. HOMOGENEITY OF INTERCORRELATIONS The pattern of inter-correlations among the various levels of repeated measure factor(s) should be consistent from level to level of the Between-subject Factor(s) HOMOGENEITY OF INTERCORRELATIONS The assumption is tested using Box’s M statistic Homogeneity is present when the M statistic is NOT significant at p >.001. Mixed ANOVA or Split-Plot ANOVA Do Satisfaction levels vary between Gender for Education and Teaching? ANCOVA Does Education Satisfaction differ between people who are ‘Not coping’, ‘Just coping’ and ‘Coping well’? What is ANCOVA? Analysis of Covariance Extension of ANOVA, using ‘regression’ principles Assess effect of –one variable (IV) on –another variable (DV) –after controlling for a third variable (CV) Why use ANCOVA? 
Reduces variance associated with covariate (CV) from the DV error (unexplained variance) term Increases power of F-test May not be able to achieve experimental over a variable (e.g., randomisation), but can measure it and statistically control for its effect. Why use ANCOVA? Adjusts group means to what they would have been if all P’s had scored identically on the CV. The differences between P’s on the CV are removed, allowing focus on remaining variation in the DV due to the IV. Make sure hypothesis (hypotheses) is/are clear. Assumptions of ANCOVA As per ANOVA Normality Homogeneity of Variance (use Levene’s test) Assumptions of ANCOVA Independence of observations Independence of IV and CV. Multicollinearity - if more than one CV, they should not be highly correlated - eliminate highly correlated CVs. Reliability of CVs - not measured with error - only use reliable CVs. Assumptions of ANCOVA Check for linearity between CV & DV - check via scatterplot and correlation. Assumptions of ANCOVA Homogeneity of regression –Estimate regression of CV on DV –DV scores & means are adjusted to remove linear effects of CV –Assumes slopes of regression lines between CV & DV are equal for each level of IV, if not, don’t proceed with ANCOVA –Check via scatterplot, lines of best fit. Assumptions of ANCOVA ANCOVA Example Does Teaching Method affect Academic Achievement after controlling for motivation? IV = teaching method DV = academic achievement CV = motivation Experimental design - assume students randomly allocated to different teaching methods. ANCOVA example 1 Academic Achievement Teaching Method Motivation ANCOVA Example A one-way ANOVA shows a non-significant effect for teaching method (IV) on academic achievement (DV) An ANCOVA is used to adjust for differences in motivation F has gone from 1 to 5 and is significant because the error term (unexplained variance) was reduced by including motivation as a CV. ANCOVA Example ANCOVA & Hierarchical MLR ANCOVA is similar to hierarchical regression – assesses impact of IV on DV while controlling for 3 rd variable. ANCOVA more commonly used if IV is categorical. ANCOVA & Hierarchical MLR Does teaching method affect achievement after controlling for motivation? –IV = teaching method –DV = achievement –CV = motivation We could perform hierarchical MLR, with Motivation at step 1, and Teaching Method at step 2. ANCOVA & Hierarchical MLR 1 - Motivation is a sig. predictor of achievement. 2 - Teaching method is a sig, predictor of achievement after controlling for motivation. ANCOVA & Hierarchical MLR Does employment status affect well- being after controlling for age? –IV = Employment status –DV = Well-being –CV = Age Quasi-experimental design - P’s not randomly allocated to ‘employment status’. ANCOVA Example ANOVA - significant effect for employment status ANCOVA Example ANCOVA - employment status remains significant, after controlling for the effect of age. ANCOVA Example Summary of ANCOVA Use ANCOVA in survey research when you can’t randomly allocate participants to conditions e.g., quasi-experiment, or control for extraneous variables. ANCOVA allows us to statistically control for one or more covariates. Summary of ANCOVA We can use ANCOVA in survey research when can’t randomly allocate participants to conditions e.g., quasi-experiment, or control for extraneous variables. ANCOVA allows us to statistically control for one or more covariates. Summary of ANCOVA Decide which variable is IV, DV and CV. 
Check Assumptions: –normality –homogeneity of variance (Levene’s test) –Linearity between CV & DV (scatterplot) –homogeneity of regression (scatterplot – compares slopes of regression lines) Results – does IV effect DV after controlling for the effect of the CV? Multivariate Analysis of Variance MANOVA Generalisation to situation where there are several Dependent Variables. E.g., Researcher interested in different types of treatment on several types of anxiety. Test Anxiety Sport Anxiety Speaking Anxiety IV’s could be 3 different anxiety interventions: Systematic Desensitisation Autogenic Training Waiting List – Control MANOVA is used to ask whether the three anxiety measures vary overall as a function of the different treatments. ANOVAs test whether mean differences among groups on a single DV are likely to have occurred by chance. MANOVA tests whether mean differences among groups on a combination of DV’s are likely to have occurred by chance. MANOVA advantages over ANOVA 1.By measuring several DV’s instead of only one the researcher improves the chance of discovering what it is that changes as a result of different treatments and their interactions. e.g., Desensitisation may have an advantage over relaxation training or control, but only on test anxiety. The effect is missing if anxiety is not one of your DV’s. 2.When there are several DV’s there is protection against inflated Type 1 error due to multiple tests of likely correlated DV’s. 3.When responses to two or more DV’s are considered in combination, group differences become apparent. LIMITATIONS TO MANOVA As with ANOVA attribution of causality to IV’s is in no way assured by the test. The best choice is a set of DV’s uncorrelated with one another because they each measure a separate aspect of the influence of IV’s. When there is little correlation among DV’s univariate F is acceptable. Unequal cell sizes and missing data are problematical for MANOVA. Reduced Power can mean a non-significant Multivariate effect but one or more significant Univariate F’s! When cell sizes of greater than 30 assumptions of normality and equal variances are of little concern. Equal cell sizes preferred but not essential but ratios of smallest to largest of 1:1.5 may cause problems. MANOVA is sensitive to violations of univariate and multivariate normality. Test each group or level of the IV using the split file option. Multivariate outliers which affect normality can normally be identified using Mahalanobis distance in the Regression sub-menu. Linearity: Linear relationships among all pairs of DV’s must be assumed. Within cell scatterplots must be conducted to test this assumption. Homogeneity of Regression: It is assumed that the relationships between covariates and DV’s in one group is the same as other groups. Necessary if stepdown analyses required. Homogeneity of Variance: Covariance Matrices similar to assumption of homogeneity of variance for individual DV’s. Box’s M test is used for this assumption and should be non- significant p>.001. Multicollinearity and Singularity: When correlations among DV’s are high, problems of multicollinearity exist. WILKS’ LAMBDA Several Multi-variate Statistics are available to test significance of Main Effects and Interactions. 
Wilks' Lambda is one such statistic F is the ratio of between- : within-group variance Effect Size: Eta-squared (η²) Analogous to R² from regression η² = SS_between / SS_total = SS_B / SS_T = proportion of variance in Y explained by X = non-linear correlation coefficient Ranges between 0 and 1 Interpret as for r² or R²; a rule of thumb: .01 is small, .06 medium, .14 large Effect Size: Eta-squared (η²) The eta-squared column in SPSS F-table output is actually partial eta-squared (ηp²). η² is not provided by SPSS – calculate separately. R² at the bottom of SPSS F-tables is the linear effect as per MLR – however, if an IV has 3 or more non-interval levels, this won't equate with η². Results – Writing up ANOVA Establish clear hypotheses Test the assumptions, esp. LOM, normality, univariate and multivariate outliers, homogeneity of variance, N Present the descriptive statistics (text/table) Consider presenting a figure to illustrate the data Results – Writing up ANOVA F results (table/text) and direction of any effects Consider power, effect sizes and confidence intervals Conduct planned or post-hoc testing as appropriate State whether or not results support hypothesis (hypotheses)
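Eta-squared itself is simple arithmetic once the sums of squares are known. A minimal sketch (the SS values below are illustrative, not taken from the LOC example above):

```python
# Illustrative sums of squares only (not the LOC/Age output referred to above).
ss_between = 64.0   # explained: variability of group means around the grand mean
ss_within  = 436.0  # unexplained (error): variability of scores within each group
ss_total   = ss_between + ss_within

eta_squared = ss_between / ss_total
print(f"eta-squared = {eta_squared:.3f}")   # 0.128 -> 12.8% of variance explained
```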
Find out the latest info on creative apps to keep their imaginations firing and how to keep your kids connected in a healthy way. more Multiplication facts and learning games Many of us can recall memorising our times tables as children. However the teaching of multiplication in schools now has changed from the drilling of times tables. Your child’s ability to visualise the process that is occurring when they multiply two numbers together is important before times tables are learnt. Multiplication is taught from the first year of school which can surprise parents as many think of multiplication as a concept taught in primary school. In fact the strong foundations for multiplication are taught from kindergarten or Prep. Multiplication games for kids aged 5-6 If your child is aged 5-6 years then are focusing on modelling equal groups or rows of items. They are using the language ‘group’ to describe a collection of items. This ability to ‘see’ the groups is important. Try these activities to reinforce the concepts: - Decorating Cupcakes – children at this age often like to help bake. When making cupcakes decorate with smarties or lollies and ask them to give each cupcake four smarties “we have 5 cupcakes and they each have the same number of lollies. Here is 1 cupcake of 4, that’s 1 group; here is the second group of 4 etc.” - Cereal necklaces – have children thread Cheerios or fruit loops onto a piece of string in a pattern 2 green, 3 blue, 2 green. Take turns to roll a dice and ‘eat’ away that number of Cheerios from the necklace. The winner is the first to eat their necklace. Multiplication games for kids aged 7-8 Children aged 7-8 years are counting by ones, twos, fives and tens in rhythmic patterns. They learn that equal rows are called arrays. Your child will also be making the connection that 3 groups of 2 gives the same answer as 2 groups of 3 but that it looks different. - Ice cubes tray – when refilling the ice cube tray use this opportunity for your child to practise their knowledge of skip counting and arrays. Ask them to count the ice cubes you are making. If they count by one remind them to count by the number in each row – usually counting by 2. - Egg containers – Using egg containers which you have saved practise the concept of arrays – equal rows. Using counters or sultanas inside the egg containers you can come up with different multiplication sums. - Beads or Painting – Use craft as a way to motivate kids to skip count and see patterns. Collect leads from the garden, thread bead necklaces. Cut out pictures from magazines and stick in arrays. - Feeding Pets – if you have more than one animal this is perfect. If you want to give 2 treats to each dog/cat –then how many treats will they need to give? Multiplication games for kids aged 9-10 Children 9-10 years are now at the stage of learning their times tables and understanding a number sentence can be used to represent multiplication. They are counting by threes, fours, sixes, sevens, eights or nines and will learn their times tables to 10x10. This is also the age at which homework for times tables becomes a nightmare. - Times table grid – this is different to writing out times tables, which may be given as homework already. This reinforces the relationship between numbers and helps children who are feeling daunted with how many facts they need to memorise. - Singing or rhyme – learning times tables by singing or funny rhymes aids memorisation for auditory learners. 
- Monsters – using drawing, painting or collage, have children create monsters that match number sentences you write for them, e.g. 7x3 and 7x5: your child draws 7 monsters with three eyes each, and 7 monsters with 5 arms each. This is fun but educational.
- Story cards – tell children a funny story and have them write the number sentence that matches. For example: "I was looking for a park at the shopping centre. I drove past the 1st, 2nd, 3rd, 4th and 5th rows, which were all full. I could see 10 cars in each row. In the 6th row I parked and filled the last spot." The number sentence would be 6x10 = 60. This encourages listening and visualisation skills as well as relating maths to a real-life experience.

Multiplication games for kids aged 11-12

Children aged 11-12 are multiplying three- and four-digit numbers by one digit, as well as doing long multiplication, which is multiplying 3 digits by 2 digits. Accuracy with times tables is assumed at this age, as students should have a level of automaticity that lets them move on to problem-solving skills.
- Students will be given plenty of opportunities to write algorithms at school. If your child has difficulty with handwriting or neatness, you may choose to revise the setting out of algorithms at home. Many errors at this age come not from failing to understand the maths concept but from setting out the algorithm incorrectly and mixing up columns and place value.
- Tree diagrams – these are a great way of helping kids visualise the possibilities when applying their multiplication knowledge. For example, Daniel could order a sandwich or a wrap at school and could choose between ham, chicken, Vegemite, salad or cheese as a filling. How many different combinations could Daniel have? (2 choices of bread x 5 fillings = 10 combinations.)
- Revise number facts. Students who confidently rote-learnt their times tables can still face problems with multiplication at this stage and may need reminders at home of strategies for approaching multiplication problems, for example '14 x 6 is 10 sixes plus 4 sixes'.
- Word problems to algorithms – you do not need to sit with your child for lengthy periods, particularly if they are resistant. If they do not want help, simply say, "I will ask you one question, and we will leave it at that if you are successful in writing and solving the algorithm." Use real-life contexts. For example: "There are 28 kids in your class, and 4 classes per grade. It costs $47 to go on the school camp. How much money will be collected if only 65 kids pay?" (65 x $47 = $3,055.) Questions such as this will highlight which part of the process your child struggles with.
Teaching Women in the Zapatista Movement: Gender, Health, and Resistance

The Zapatista movement is an important topic to integrate into the classroom, be it the World History classroom, the Latin American History classroom, or even the US History classroom. The Zapatistas are a formal army comprised of women and men located in Chiapas, the southernmost region of Mexico. The Zapatista uprising began on January 1, 1994, largely in response to the unchanging economic conditions for the indigenous people in Chiapas and the surrounding communities. The Zapatistas demand basic human rights: work, land, housing, food, health care, education, independence, freedom, democracy, justice, and peace. They also demand the withdrawal of the Mexican Army from Zapatista territory and the demobilization, disarming, and investigation of the paramilitaries. Groups locally and globally support the Zapatista struggle, including Amnesty International and Americas Watch.

The Zapatista uprising has given women a unique leadership role. Prior to the revolution, indigenous Chiapan women found many of their actions dictated by a culture that saw women as inferior to men. When the Zapatista Army of National Liberation (EZLN) formed in the mid-1980s, women were allowed controversial executive positions within the movement that challenged traditional roles and stereotypes for women. Additionally, the Zapatista movement attempted to tackle some long-standing gender discrimination and problems for women within Mexican society. This paper will present suggestions for classroom exploration of women's role in the Zapatista movement. It will first present extensive background information on this issue and then a detailed and specific lesson plan.

The Geographical and Cultural Context of the Zapatista Movement

The Zapatista movement stems from the indigenous struggle fought by Emiliano Zapata (1879-1919), a Mexican revolutionary fighter during the early 1900s.1 During the early 20th century, Porfirio Diaz (1830-1915), the president of Mexico from 1876 to 1911, promised not to run again in the 1910 election. When he did, many people in Mexico were outraged and long pent-up tensions exploded; as a result, revolutionary fighters such as Francisco Madero (1873-1913), Francisco (also known as Pancho) Villa (1878-1923), Emiliano Zapata, and others banded together to overthrow the Diaz regime. They accomplished their goal – Diaz resigned in 1911 – and became national heroes. Zapata, an Indian farmer, tried to recover lands expropriated from the indigenous people by the Diaz regime. He witnessed villages disappearing into sugar fields, water being redirected to hacienda irrigation ditches, and orchards shriveling because so much water was granted to the dominant sugar oligarchies in Mexico. Because of this, Zapata articulated the famous Plan de Ayala (1911), which called for the return of lands taken away from the indigenous people at the hand of the Diaz regime (Weinberg 48). In addition to his revolutionary plan, Zapata is crucial to the Zapatista struggle today because he modeled the fundamentals of democracy by making decisions in big meetings in which all could participate by sharing their viewpoints. The collective grievances and frustrations that exploded in the 1900s are similar to those that led to the Zapatista uprising.
In 1968, a radical movement by the students at Mexico City's National Autonomous University (UNAM) denounced the construction of a Sports Palace. This sports arena was to be built near the university for games that only a few Mexicans would be financially able to attend. Incidents of police brutality against the protesting students began to fuel further protests against the arena's construction (Weinberg 60). Students were rallying against the corrupt nature of the Mexican regime and its undemocratic decisions, such as the arena's construction. The government took a very pro-business, anti-peasant, and anti-urban-worker stance during the 1960s. Individuals were banding together and served as models for future resistance groups. They demonstrated that people are able to collaborate and voice opinions on national affairs with which they disagree.

Because of the oil crisis in 1973, the U.S. shifted its reliance from Arab oil to Canadian and Latin American producers. As a result, in 1982, Mexico became the largest supplier of oil to the U.S. In addition, the Mexican economy prospered as an outgrowth of U.S. tourism, a $2.27 billion industry in 1987. This thriving enterprise helped win Mexico's inclusion in the General Agreement on Tariffs and Trade (GATT), which then led to the formation of a "US-Mexico Framework Agreement" to improve market access – the stepping stone toward the North American Free Trade Agreement (NAFTA) (Weinberg 63).

The boiling point for the Zapatistas came in 1992, when Article 27 of the Mexican constitution was amended to permit "privatization of the 'inalienable' ejidos" in order to clear the way for NAFTA. The importance of Article 27 was that it protected the communal lands owned by the indigenous people. Some would argue that NAFTA sold Mexican sovereignty and further eroded the indigenous people's autonomy in their communities. Tom Hayden's The Zapatista Reader includes an article by Andrew Kopkind, a writer for The Nation, in which Kopkind argues that NAFTA took away the indigenous people's lives, culture, and history (Hayden 20). NAFTA allowed for the importation of cheap corn and wheat from the United States to Mexico, which drove many Mexican farmers out of business (Oppenheimer 52). Opponents of NAFTA also contend that it took away indigenous people's autonomy by tying the Mexican economy to the United States (Weinberg 64).

To protest NAFTA, on January 1, 1994, the Zapatista movement declared war on the Mexican government and the trade agreement itself. The rebels distributed the First Declaration of the Lacandon Selva, a manifesto calling on Mexico and the United States to comply with the Geneva Convention and on the world to monitor the conflict. According to the EZLN, Article 39 of the Mexican Constitution legitimizes this resistance because it affirms that sovereignty lies with the people of Mexico and gives them the right to change their form of government (EZLN 50). Subcommander Marcos, formerly known as Rafael Sebastian Guillen (1955-present), a university professor of graphic design in Mexico, left his job to become a revolutionary fighter in the Zapatista movement (Oppenheimer 244). He said that NAFTA left Chiapas in a state of poverty (Weinberg 79):

NAFTA is a death sentence for the indigenous people. NAFTA sets up competition among farmers, but how can our campesinos--who are mostly illiterate--compete with U.S. and Canadian farmers? And look at this rocky land we have here.
How can we compete with the land in California or in Canada? So the people of Chiapas, as well as the people of Oaxaca, Veracruz, Quintana Roo, Guerrero, and Sonora, were the sacrificial lambs of NAFTA (Katzenberger 67).

Marcos points out that NAFTA creates an unequal system of competition for the indigenous people in Mexico. Given the high level of illiteracy in Mexico, he challenges NAFTA by asking how one can expect the indigenous people to compete with the United States and Canada on equal ground. As a result, the indigenous people live in impoverished conditions (Oppenheimer 20).

To summarize, indigenous resistance dating back to the early 1900s and the violation of indigenous rights protected by the Mexican Constitution prompted the resistance movement led by the Zapatistas. As mentioned earlier, the indigenous people did not benefit from the economic growth of Mexico as it became integrated into the economy of the United States. Simultaneously, the government's lack of democracy infuriated the indigenous people. Even though the revolutionary fighters in the Mexican Revolution fought for democracy, Mexico still lacked this type of government. For example, the Institutional Revolutionary Party (PRI) was criticized for its tradition of having the current president of Mexico handpick his successor (Oppenheimer 10-11). It took more than six decades for PRI's rule to end, when the people of Mexico voted the party out of office in the election of 2000 (xii). A violation of rights, poverty, and the lack of democracy fed the pent-up tensions behind the Zapatista uprising of the 1990s.

Demands of the Zapatistas for Maternal and Infant Health Care

Adequate access to health care for the indigenous people in Chiapas is one of the primary concerns of the Zapatista movement. On a daily basis, people need access to pharmacies, clinics, doctors, nurses, and medical services. However, because discrimination exists against the indigenous people in Chiapas, these services are difficult to access. No clinics or hospitals exist within a reasonable distance; where such services do exist, the trip to a clinic can take anywhere from many hours to days of driving or walking. As a result, the journey to a clinic or hospital is costly, uncomfortable, and in many instances simply not a possibility. Tanya Zakrison, a student of paramedicine and infectious disease, wrote in 1999 about the experience of the indigenous people prior to the Zapatista revolution:

Paying for a four-hour truck ride over the unpaved, potholed and sinewy path through the mountains to the nearest hospital in Altamirano is impossible for most, whose sole income is selling their corn or coffee harvests at artificially low prices in the local market, competing in vain with the output of local corporations (Zakrison 539).

The costs of illnesses and injuries create "lost days from harvest and, hence, food from the table" (539). This creates difficulties for health promoters because the indigenous people cannot afford the prices and need to work for economic survival. The treatment of the indigenous people in Chiapas is plainly discriminatory: their lack of access to health care providers and facilities far exceeds the national average. Subcomandante Marcos writes:

The health conditions of the people of Chiapas are a clear example of the capitalist imprint: One-and-a-half million people have no medical services at their disposal. There are 0.2 clinics for every 1,000 inhabitants, one-fifth of the national average.
There are 0.3 hospital beds for every 1,000 Chiapanecos, one-third the amount in the rest of Mexico. There is one operating room per 100,000 inhabitants, one-half the amount in the rest of Mexico. There are 0.5 doctors and 0.4 nurses per 1,000 people, one-half of the national average (Marcos 4).

The statistics Marcos provides compare the people of Chiapas to the rest of Mexico to demonstrate the unfair treatment of the indigenous people. He clearly describes the poor health conditions and the lack of access to clinics, hospital beds, doctors, and nurses. This lack of access makes the push for health care all the more necessary, because basic needs are not being met. Unfortunately, state police and soldiers interrogate health promoters suspected of being sympathizers of the Zapatista struggle. Compounding the abuse, the military maintains immigration checkpoints that create fear among community members, impeding both travel to health clinics and organizing for health rights (Ruiz 2). Along with health promoters, human rights observers are also harassed at these checkpoints (2). This makes it even harder for the indigenous people to attain adequate health care, because government officials are harassing the very people who want to provide the service (2). Discrimination and neglect leave the indigenous people with little availability of and access to health care.

Lack of health care in Chiapas results in death in many instances. Among the causes of these preventable deaths are complications of childbirth, malnutrition, environmental conditions, inadequate food consumption, and stress (Gil 3). Raul Ruiz, a medical doctor candidate in the class of 2001 at Harvard and a member of the Partners in Health Chiapas Project, describes witnessing the risks women endure when delivering babies:

Julio was wet from the pouring rain and frightened. He ran through the streets of Polho, a community in Chiapas sympathetic to the Zapatista rebels, to find Carlos, the health promoter. He explained to Carlos, in Tzotzil, that his young wife, Ana, had delivered their first child an hour ago and was still heavily bleeding at home. I ran with the student nurse to the clinic's poorly stocked pharmacy for the post-partum hemorrhage kit (1).

Ruiz goes on to describe the harrowing journey to reach the new mother. He describes the home of the woman (whom he calls Esperanza) as very destitute. After Ruiz and the nurse performed the exam and made sure that Ana was no longer bleeding, they left the house. The experience made Ruiz question what would have happened had he not been there. He says, "My stomach cringed as I asked myself; What if the nurse and I was (sic) not there? If she continued to bleed would Anna have died? On my way out I gave one last good look at the house to imprint it on my memory forever" (Ruiz 1). A death preventable in many parts of the world becomes a primary killer in indigenous communities that are not provided with basic health care. Ruiz's experience in Chiapas demonstrates the lack of medical attention that could prevent such deaths. Maternal mortality rates in childbirth are extremely high in Chiapas compared with the rest of Mexico; about 12,000 more deaths occur in Chiapas than in the State of Mexico (Gil 11). The lack of adequate health care is all the more alarming because the affected region occupies about one-sixth of the state. Malnutrition is another cause of death for the indigenous people in Chiapas.
A typical meal for the people in Chiapas is tortillas, beans, and coffee. Maria, a Tzeltal woman, explains that because she does not have enough food, she often does not produce enough breast milk, so she sometimes has to feed her three-month-old baby coffee (Katzenberger 36). This is one of many such stories told by the indigenous people living in Chiapas. Malnutrition causes many deaths as well as illnesses in this region. Children need the appropriate nutrients to survive and live healthy lives, and tortillas, beans, and coffee hardly constitute a healthy, balanced, nutritious diet. Given the hardships and poverty the indigenous people experience, it is no surprise that many are dying from a preventable death: starvation. Malnutrition is far more prevalent in Chiapas than in the rest of Mexico (Gil 5). Poor nutrition can also cause anemia, accidents, and other health problems because people lack the nutrients their bodies need (5). Malnutrition is another reason why so many indigenous people join the Zapatista movement.

Poor living conditions, environment, dress, diet, and unsanitary water and jobs, along with respiratory infections, gastrointestinal problems, paludismo (malaria), anemia, hypertension, and stress, represent the many health burdens borne by the indigenous people in Chiapas (Gil 4-5). These illnesses are not uncommon in many areas and are possible to prevent or remedy. In Chiapas, however, the needed medical attention is scarce to non-existent. One example is tuberculosis, for which a routine medical test is administered by many public school systems in the United States to teachers working in their districts. This is usually regarded as a minor procedure, and a person who tests positive then receives medical treatment for the illness. Many people living in the United States have the luxury of medical treatment that will easily rid them of this health concern. Reports show that the death rate from tuberculosis is thirty times higher in Chiapas than in the rest of Mexico (Gil 13).

The EZLN organizes sex education classes to teach women about their bodies and diseases. For example, women learn about hygiene and disease – especially women's diseases (e.g., urinary tract infections) – that men do not understand or that they misinterpret. The organization also teaches about contraceptives and supports their use. Moreover: "The companera not only has the right to terminate pregnancy," Marcos has stated, "but the organization also has the obligation to provide the means for her to do it with total safety" (Cleaver 18). The EZLN's efforts for women's health are one of the reasons that women have joined the Zapatista movement.

Women in the Zapatista Movement

In the period prior to the Zapatista uprising, economic conditions were incredibly troubling for the indigenous people in Chiapas. Though the area housed some of Mexico's most valuable and profitable agricultural resources, the indigenous people who lived among those resources and often harvested them received little or no recompense for the sale of these goods. In fact, most families in the Chiapas region lived in dire poverty. And for women, an already difficult situation was often made worse by gender-discriminatory cultural practices, beliefs, and behaviors. The culture in Chiapas dictated a subordinate and oppressive position in the family for women – who were often the victims of unpunished spousal abuse and rape – and a macho role for men.
Additionally, the Catholic Church, a large force in the population and a major influence in the culture, condemned the use of contraceptives and the availability of legal abortions, meaning women had no control over the number of children they had, even though the number they could sustain was limited. The dowry system still existed, which allowed fathers to decide the value of their daughter's hand in marriage. Women were expected to care for the children, cook, clean, do the laundry, and help the men in the fields. Men rode on horseback to the fields while women walked behind, carrying the child or children. Once at the field, both men and women picked corn or harvested coffee. On the way back to the village, the men again rode on horseback, while the women carried firewood on their heads and supervised the children walking ahead.2 The women of the Chiapas region suffered from the hardships of domesticity not only because of the scarcity of resources left to the indigenous peoples but also because of the sexual division of labor that existed in the indigenous culture.

In spite of these hardships and sexual divisions, women made up a significant portion of the EZLN when it came out of the Lacandón jungle on January 1, 1994, to take over the cities of San Cristobal, Las Margaritas, Altamirano, Rancho Nuevo, Chanal, and Ocosingo in the Chiapas region. Though Subcomandante Marcos emerged as the most vocal and recognizable figure of the insurgency, the presence of women like Comandante Ramona and Lieutenant Elena immediately demonstrated to the watching world that women were helping to lead this revolution. One of the remarkable things about the insurgency was the number and visibility of women within the movement, both in leadership roles and in the rank and file. The conditions for indigenous women in Chiapas at the time of the January rebellion were difficult, compounding the already terrible poverty and oppression faced by the indigenous population. In allowing women roles of authority in the movement, then, the EZLN began a negotiation with both local and national culture on an unprecedented level. In fact, many of the Insurgent Infantry Captains of the EZLN were women leading armed resistance within the state, and about 30 percent of the EZLN ranks were women.3

Building upon women's groups that were encouraged in the precursor insurgent movements of the 1980s, the women in the EZLN made themselves prominent figures in the contemporary Zapatista movement. According to Subcomandante Marcos, who has always been quick to point out that he is only a subcomandante while women like Ramona hold the title of Comandante, "it was only after the women had trained in the mountains, had become officers, that the men saw that they were capable of following and giving orders."4 At the same time as men were recognizing the capabilities of women, many women themselves realized that they deserved a life free of oppression from the government and oppression by men. This "struggle within the struggle" gave them the confidence and the drive to take on leadership positions.5 Fortunately, as Marcos' quote demonstrates, the women gained the support of male Zapatistas who also saw the horrible conditions for women and realized the contradiction in fighting against governmental oppression while oppressing a portion of their own people.
One of the male leaders of the insurgency, Major Eliseo, wrote, "We had to combat this [machismo] idea because a revolutionary idea can't be this way. We are looking for equality, for justice. Thus equality means that one person is worth the same as the other."6 This radical idea of a revolution fought for justice for all people, regardless of class, race, and sex, allowed female Zapatistas to come together to create "The Women's Revolutionary Law," which outlined demands for women's rights: among other provisions, women's rights to work and receive a just salary, to decide the number of children they bear, to participate in community affairs and hold positions of authority, and to live free of violence and rape. When "The Women's Revolutionary Law" was presented alongside the EZLN's "Declaration from the Lacandón Jungle," it had the support of the Zapatistas.

An interviewer, Yolanda Castro, advisor to the Regional Union of Craftswomen of Chiapas in San Cristobal, asked two Chiapan women, Natalia and Soledad, their views on the Ten Revolutionary Laws. Both Natalia and Soledad had left their community to work for the union of craftswomen. When asked how their community viewed them after they left, both replied that the community looked down on them. Natalia says, "They think women and girls shouldn't decide things for themselves, that daughters should obey. If girls go out alone, they're not worth anything, because people think bad things about them--that they're looking for men" (Katzenberger 113). Soledad's family holds similar beliefs. Soledad says that her "family and people in the community say that I don't respect my father, that I came here looking for men. The community is just waiting till I return with a baby. They think that I'll arrive anytime, pregnant; and then they'll laugh at me" (114). Natalia adds, "The young women don't have the right to leave and work in other places. The custom in the communities is to keep them shut in the house" (114). Dependency on and control by men, rooted in patriarchal custom, appear to define the situation in these communities, which ostracize women who choose to work or leave home by accusing them of promiscuity and of disrespect toward the menfolk.

Do Chiapan women desire the rights of the Revolutionary Laws? Yes. As Natalia says, "I think it's good. The women are opening up their eyes and beginning to realize that they have rights" (114). The interviewer asks the women what they would like to see the government accomplish. Soledad demands that the government "give us our rights. For example, since we work in artesania, they should pay us fair price for our work" (114). Soledad is invoking the Second Revolutionary Law, "the right to work and receive a just salary" (109). Like many others, she is requesting equal compensation for her work. These women are also saying that they want the government to establish laws that will demolish locally rooted customs of patriarchy.

Asked about the benefits of the Revolutionary Laws, Natalia says that there is now more respect between men and women because the indigenous people have been educated about their rights. Natalia also notes that women voted in the last election, whereas before, predominantly only men voted. The Revolutionary Laws, she comments, provided women with knowledge about their rights as human beings.

Female Zapatistas often fought an uphill battle, however, during the years following the January 1994 uprising. Because of the dictates of culture, women's participation in military or leadership forums was not always welcomed in indigenous regions.
Maribel, one of the insurgent leaders, remarked, "In the communities, the mothers wouldn't let young women participate together with young men in anything. Here in the EZLN we have the right to do this."8 But despite having the "right" to do this, the fact that many male and female members of the indigenous community rejected the idea of women joining this armed struggle demonstrates the resistance women faced from the outset. Anthropologist Lynn Stephen's 1994 interviews with male members of the EZLN further illustrate the continuing difficulty in accepting women's new role. The men Stephen interviewed claimed that they supported the rights and new roles of women in the EZLN and in Mexican society but that they still had difficulty following orders from women.9 As Subcomandante Marcos described it, although the men in the EZLN wanted women to have more rights, it was difficult for them to accept new cultural roles. Compounding this was the fact that the Mexican government and even some feminists also took issue with the demands and participation of women in the Zapatista movement.

Despite all the opposition women in the Zapatista movement faced in having their voices heard and their Revolutionary Laws implemented, Zapatista women continued their fight in the period following the January 1st uprising. For example, one of the most important issues for many of the female insurgents, outlined in the Women's Revolutionary Law, was the "right to decide the number of children they have and care for." This law was particularly significant because it not only gave wives a say in childbearing even if husbands wanted more children, but it also went against Catholic teachings, which forbade abortion and contraceptive use. Upon declaring the Women's Revolutionary Law, Chiapan women worked diligently toward their goals regarding childbearing. The EZLN organized sex education classes to teach women about their bodies and diseases. Women were taught "about contraceptives and [the movement] support[ed] their use." Additionally, Marcos stated, "The companera not only has the right to terminate pregnancy, but the organization also has the obligation to provide the means for her to do it with total safety."11 The EZLN went against the Catholic Church by educating women on contraception and even went so far as to demand that its army members stay childless. Though the Third Revolutionary Law was certainly not at the top of the list of things women were fighting for, and though its goal has not yet been fully achieved, its existence demonstrated that women were beginning a war on those laws that subordinated or harmed them.

Another women's law, Revolutionary Law Number Eight, demanded that "Women shall not be beaten or physically mistreated by their family members or by strangers. Rape and attempted rape will be severely punished." Unfortunately, however, with the military occupations of many communities in the Chiapas state, occurrences of rape actually increased after January 1994. Lynn Stephen argued that "In Chiapas rape and the threat of rape have been deployed as both a physical and symbolic violence to discourage women from ongoing participation in community and regional forms of organization."12 The military specifically targeted the areas it knew to be problematic and of concern to women as a way to scare them away from insurgency. But the Zapatista women continued their fight against this violence toward women.
Elena Poniatowska, author of a celebrated account of the 1968 student uprising in Mexico as well as a writer for various publications (including La Jornada), described the change in Mexico. In one of her articles, "Women's Battle for Respect: Inch By Inch," published in the Los Angeles Times in 1997, she observed that, historically, women in Mexico who were raped or abused were usually perceived as guilty. But with the work of skilled feminists and lawyers, as well as the existence of the Ten Revolutionary Laws, one woman, Claudia, was able to challenge machismo culture in Mexico.

Women have joined the Zapatista struggle for various reasons, but mainly because they would rather die fighting than live in fear of starvation. When women joined the movement, they contributed immensely to the army's success. They took up arms, worked with the sick, and trained other soldiers. Elena, a lieutenant in the hospital unit, cares for the sick. She gives vaccines, cares for the wounded, and fights with the Zapatistas. She also takes command of the army, giving orders and organizing defensive positions (Katzenberger 38). This demonstrates the respect both men and women give her: they abide by her command and see her as an individual rather than through an assigned sex role. Irma, a Chol woman (the Chol are an indigenous people of Mexico) who holds the position of Insurgent Infantry Captain, leads soldiers under her command. She attacked the municipal palace (the plaza at Ocosingo) until its defenders surrendered; after the attack, she undid her braid to let her hair flow freely, signifying that she was a free and new woman. Laura, a Tzotzil woman (another indigenous people of Mexico), is also an Insurgent Infantry Captain. Laura taught and gave orders to a unit composed entirely of men. Anna-Maria, likewise an Insurgent Infantry Captain, commanded the entire operation to take over San Cristóbal; unfortunately, the media gave the famous Marcos all the credit for this operation. These three women exemplify the respect men give to women in the Zapatista army: the men obey their commands and learn from their leadership.

Women who join the Zapatista struggle also leave a great deal behind: their children, husbands, brothers, fathers, partners, and friends. Commander Ramona described the lives of the women in her community to an interviewer in 1994. She discussed the hardships they encountered and how "they are the first to rise, long before dawn, and the last to collapse onto a mat at night" (Katzenberger 43). She says there is a great deal of suffering because of the high poverty levels of her people, so she decided to carry a rifle instead (43). Her goal is to free her people from the injustices they endure. Ramona is a leader of the Clandestine Revolutionary Indigenous Committee (CCRI), the EZLN's general command, elected democratically in assemblies to represent the Zapatista community. Ramona stands tall even though she is small in stature. In her speeches, she invites women to follow the example of the Zapatista women (43). Women like Ramona, who speak out in assemblies and actively participate in the army, embody the independence and respect women have earned within it. Ramona was a seamstress before she took up arms with the Zapatista army and serves as a role model for many.

Indigenous women demand that both the government and the Zapatistas provide them with equality.
"Major Susana" spoke at a revolutionary meeting in 1993 and declared: Though the EZLN attempted to change the cultural patriarchy that was heavily ingrained in indigenous society, since the initial uprising in 1994 little has changed for women outside of the intimate circle of EZLN leaders. However, the women's movement within the Zapatista struggle has exposed hundreds of years of gender inequality and has raised the voice of female dissent throughout the country. Though discussion about women's rights in Mexico decreased in the years after the 1994 uprising, the EZLN continues, nevertheless, to respond to outrageous examples of violence and injustice towards women. This demonstrates that women's rights do remain on the Zapatista agenda, although, control over both economic sustenance and culture remain the focus.13 Many observers of this situation believe that the immediate product of the Zapatista movement, both in regard to laws that affect all indigenous people and laws that specifically target women, is a planting of the seeds of change. As can be seen in their fight for sex education for women, the growing availability of contraceptives, and Claudia's story of challenging machismo culture; the insurgents were able slowly to make changes for women of the Chiapas region and begin the transformation of the social consciousness of the entire country. Above and beyond the changes that women made for all indigenous members of Mexican society, changes implemented by women like Hospital Lieutenant Elena, Insurgent Infantry Captain Irma, and Comandante Ramona; Zapatista women also began to forge a new social code—one that would restructure everyday life and the behavior of the society. Without women, the Zapatista movement would lack much of its strength and creativity; without the women's movement, Mexico might never have been challenged to think differently about gender. Campbell also writes that women's participation in the movement helps women to "change customs and practices harmful to women" (3). This revolution provides women with a voice to fight for equality. Because their demands (work, land, housing, food, health care, education, independence, freedom, democracy, justice, peace, withdrawal of the Mexican Army from Zapatista territory, demobilization, disarmament, and the investigation of the paramilitaries) are lengthy and challenging to the Mexican government; it is unlikely that they will be met any time soon. The Zapatistas also state that they will not end their fighting until every condition is met. If this is the case, this resistance may take decades which makes it likely that feminist ideas will continue to spread throughout the movement. Women's Participation in Revolutions and Resistance Movements Teaching about women in the Zapatista movement not only gives insight into Latin American history, but it also allows for discussion of the potential for improved gender equality through women's participation in revolutionary and resistance movements more broadly in world history. The leaders of the Zapatistas face the same dilemma that Cherrie Moraga speaks of because they must unite women and men to fight for basic human rights from the government. Women must join men to fight for these basic rights of work, land, housing, food, and health care so they can live substantial lives. 
Women have a better chance at gaining equality by joining the Zapatista movement because it is easier to build equality within a movement than to fight the government and patriarchal customs simultaneously. Women's demands for equality in Mexico are supported by some of the revolutionary fighters but ignored by others. However, if women want their demands – the right to a fair salary, the right to an education, and the right to decide how many children they will bear, to name a few – to be met, then participating in the Zapatista struggle seems inevitable (Katzenberger 109-110). Integrating women and men into the movement is more effective because it attracts greater numbers of people to fight for basic human rights.

Some argue that women are fighting not only against the Mexican government but also against the machismo culture. Andrea Mandel-Campbell, a writer for the Financial Times in London, says that the revolution is "a long battle for indigenous women who are seeking equality within their communities" (2). She offers an example of one of the battles women must fight: respect for the body and the self. She states, "In many communities, physical violence is considered justified, as is incest and the stealing or selling of young daughters into matrimony" (2). Women in these communities are sexualized by men, who exercise power over them by committing acts of violence and sexual atrocities. Mandel-Campbell also quotes one indigenous man's view of the women's movement within the Zapatista movement: "I don't agree that women should have all the same rights... I want to live like my ancestors--with the support of many women" (2). The man she interviewed expressed his desire to keep women in their traditional subservient role in both revolutionary and domestic efforts. With so deeply embedded a culture of keeping women subservient to men, one can see that women must fight for equality both within their communities and within the government.

Mandel-Campbell also writes about the inequalities women face under the law: "Women are denied land or inheritance rights and are often barred from voting in community assemblies or holding posts" (2). Because of this lack of power, women in the Zapatista movement have articulated their demands in the Ten Revolutionary Laws, which ask the Mexican government for the right to participate in elections, the right to hold leadership positions, and the denunciation of the mistreatment of women in situations of abuse or rape (Katzenberger 109-110). Women demand these laws because they do not want to return to their stereotypical, historically assigned gender role of domestic work once the revolution is over.

The Mexican Revolution and the Cuban Revolution serve as cautionary models for the indigenous women in Chiapas: they do not want to repeat the history of women who were used to fight for the cause but were expected to return to a subservient role once the fighting was over. A professor in a Women's Studies graduate class at San Diego State University asked me what makes the Zapatista Revolution different from the Cuban or Mexican Revolutions. She pointed out that the women in the Cuban and Mexican revolutions were used by the men and, once the fighting was over, were expected to return to their domestic roles of cooking, cleaning, and childbearing.
This professor posed a provocative and challenging question that I could not answer at the time, but I have since examined it more closely, especially through comparison of the Zapatistas with the Nicaraguan and Mexican Revolutions. In the Nicaraguan, Mexican, and Zapatista revolutions, women demanded equality within their movements. These women joined their struggles because they felt they could change women's lives through equality of effort and a feminist vision of the future.

Sheila Rowbotham is a social historian who also addresses women's fight for equality. She points out that many women fight against capitalism and against their own oppression within the movement. She states that women must be in charge of liberating themselves, because men would not benefit from women's liberation the way women would. In reference to women's emancipation, she cites Bebel, a Marxist revolutionary:

Arguing that women were doubly exploited, he saw them fighting against capitalism and against their own oppression. Bebel follows the radical tradition in which he put the oppressed firmly in charge of their own liberation. Women's interests could no more be included with the interests of men than the workers could be included in the interests of the employers. From Mary Wollstonecraft to Flora Tristan, revolutionary women had looked to men to free women. But Bebel believed there was little likelihood of men as a group taking up the cause of women's emancipation. Why should they try to end women's dependence in the family and society, when this dependence benefited them? (81)

Nicholson's anthology includes Shulamith Firestone's discussion of the importance of women's roles in revolutions. Firestone, one of the founders of radical feminism, states that women need to end female oppression (Nicholson 20). Firestone argues that women have been oppressed since the beginning of time and that, because of this, women often feel despair and give up. Women, she says, need to strengthen the resistance in order to modify goals and gender relations:

Why should a woman give up her precious seat in the cattle car for a bloody struggle she could not hope to win? But, for the first time in some countries, the preconditions for feminist revolutions exist--indeed, the situation is beginning to demand such a revolution. (Nicholson 19)

Women in the Zapatista struggle recognize men's desire to keep them subservient, so they demand that the government establish laws that guarantee women's rights and equality with men (the Ten Revolutionary Laws). Women in this revolution want to maintain their independence and establish their equality with men both during and after the revolution; incorporating their demands into law will cement that equality. Michel Foucault, the well-known French philosopher, also addresses women's sexualization and lack of power in the public realm.
He discusses how society objectifies women's bodies and devalues them. Women's being, he writes, was "thoroughly saturated with sexuality; whereby it was integrated into the sphere of medical practices, by reason of a pathology intrinsic to it; whereby, finally, it was placed in organic communication with the social body (whose regulated fecundity it was supposed to ensure), the family space (of which it had to be a substantial and functional element), and the life of children (which it produced and had to guarantee, by virtue of a biologico-moral responsibility lasting through the entire period of the children's education): the Mother, with her negative image of 'nervous woman,' constituted the most visible form of this hysterization" (104).

Many revolutions entice women to join with the expectation that they will return to the domestic sphere after the revolution is won. In Nicaragua and Mexico, women sought equal rights during their revolutions and made progress with their demands. These women challenged the myths that women were only good for cooking, cleaning, child rearing, and sex. Because of their efforts, certain laws, political organizations, and feminist demands were attained for women in Mexico and Nicaragua. At the end of these revolutions women were not entirely equal to men, but they had changed the status of women in society for the better, providing future generations of women a foundation from which to demand further equality.

In the 1970s, most Nicaraguans lived in extreme poverty, with only a few who were well off. During this period there was high unemployment (22 percent), soaring illiteracy, people dying of curable diseases, and high mortality rates. The Somoza family was extremely wealthy, controlling about forty percent of the country's earnings. Nicaragua was considered an "underdeveloped" country because it was dependent on the Western European capitalist economies: it exported cotton and coffee, and countries like the United States benefited. Because of this economic situation, most women were negatively affected. Many husbands left their families due to the terrible unemployment rates and poverty levels, and as a result Nicaraguan women became the sole supporters of their families. Most women turned to occupations involving domestic tasks, or to prostitution. Economic conditions and political repression motivated women to join the movement; brutality by the National Guard and Somoza's private army also pushed women to become active participants and revolutionaries in the overthrow of the Somoza dictatorship (xiv).

The internationally acclaimed feminist writer, photographer, and activist Margaret Randall narrates the Nicaraguan struggle and the Nicaraguan people's attempt to rebuild their country during the Somoza regime in her book Sandino's Daughters: Testimonies of Nicaraguan Women in Struggle (1995). Randall's book provides testimonies of women's experiences; through these experiences, women analyze the political development of Nicaragua during the 1970s (ix). As in the Zapatista movement, women fought on the front lines, here in the Sandinista National Liberation Front (FSLN). They "participated in support tasks, worked undercover in government offices, and were involved in every facet of the anti-Somoza opposition movement" (xii). Women also formed one of the most influential organizations in the overthrow of the Somoza dictatorship, the Association of Nicaraguan Women Confronting the Nation's Problems (AMPRONAC).
Like the Zapatistas, "women made up 30 per cent of the Sandinist army and held important leadership positions, commanding everything from small units to full battalions" (xii). Dora Maria, a member of the FSLN, said: This is the case with women. Women participated in our Revolution, not in the kitchens but as combatants. On the political leadership. This gives us a very different experience. Of course they played other roles during the war and acquired tremendous moral authority, so that any man--even in intimate relationships--had to respect them. A man would be hard put to lift a hand to hit or mistreat a woman combatant. (Randall 56) In regards to women's equality in Nicaragua, each woman Margaret Randall interviewed was confident about the future and that there was "no going back" for them (xvi). "Many young women, from the countryside and the cities, decided that the logical way to participate in the struggle was to join the people's troops" (Randall 130). As was the case for women in the Zapatista movement, "The Revolution had to become [sic] before my family" (211). Women's involvement in the movement increased their potential for equality, but it could not wholly overcome deeply embedded gendered beliefs that women were abandoning their true womanly calling of needing to care for their families. Foucault argues that women are seen only as sexual beings and producers of children. That is made worse during war. This next testimony demonstrates the chilling reality of his observation. Randall retells Amada Pineda's experience because it represents the torture experienced by many Nicaraguan peasant women: That night, several of them came to where they were holding me. They raped me. I struggled and they began to beat me, and that's when they did all those terrible things to me. My legs were black and blue, my thighs, my arms. I had bruises all over me. That's the way they treated all the peasant women they picked up; they raped them and tortured them and committed atrocities. It was just three days, but those three days were like three years to me--three years of being raped by those animals. They came round whenever they wanted, all the time. It's horrible--it's nothing like going to bed with your husband. It's not the same at all. Just before they captured me, there was a young woman who'd only been married a month. That woman couldn't even stand up when they were through with her. They grabbed one leg and then the other'I've never seen anyone bleed like that. When they let her go she had to steady herself against the walls so she wouldn't fall down. She had to hold on to the branches of the trees till she got to her house' (Randall 80). This chilling testimony demonstrates the issues of power and sexuality that Foucault addresses. Rape in these instances represents the ultimate power over women. In the struggle to overthrow the Somoza dictatorship, women were also fighting for equality within their revolutionary group. Some men did not like the idea that women were taking part by carrying arms and doing other "masculine" tasks. These men believed that women should be in the home performing domestic duties. Randall interviewed Monica Baltodano, a guerrilla commander of the Nicaraguan revolution, who said: Baltodano shows us that the potential for gender equality can exist amidst revolution. 
In Mexico, when Porfirio Diaz ran for reelection in 1910 after promising not to do so, pent-up tensions among the Mexican people escalated until they led to the revolution of 1910. Women were needed at the front to fight, help the wounded, and spy, among other tasks. Thus women learned that they could take on male-dominated tasks and do them successfully. Jaquette's book (1994) includes an essay by Ramos Escandon, Professor of History at Occidental College, "Women's Movements, Feminism, and Mexican Politics," which discusses women's role in the Mexican revolution to overthrow the Diaz regime. The roles women took on demonstrated their ability to carry out tasks they had not thought themselves capable of before. Overall, this revolutionary period did not tremendously impact women's lives; however, as Escandon notes, women's legal status improved in some areas. Women's roles in the Mexican Revolution transformed their agency and their ability as a group to promote change for themselves in Mexico.

Women addressed issues of equality as well. Escandon points this out in discussing the impact of the Mexican revolution for women:

The First Feminist Congress in Mexico was held in Merida in January 1916 to consider issues ranging from the function of schools, the importance of secular education, the need for sex education, and the political participation of women. The participants, mostly middle-class women, were divided on the latter issue. The feminists argued that women were the moral and intellectual equals of men and should participate as full citizens. (Jaquette 200)

The Mexican Revolution began the thought process toward women's equality and collective organization in Mexico. These two examples show that women's participation in revolutionary activity has the potential to create significant changes in personal and political relations between women and men. Studying all three revolutions allows students to see this potential.

Biographical Note: Devon Hansen-Atchison received her Ph.D. in History from Boston University. She currently teaches U.S. History and Women's History at Grossmont College. Laura Patricia Ryan received her MA in Women's Studies from San Diego State University. Her Master's project was titled "A Curricular Teaching Module for the Community College Level, Women, Resistance, and Revolution: The Zapatistas, A Case Study." She was Project Manager for World History For Us All, a web-based model curriculum for World History for middle school through high school students. She currently teaches World History at Southwestern College.

Works Cited

Cleaver, Harry. Introduction. In Editorial Collective (eds.), Zapatistas! Documents of the Mexican Revolution. New York: Autonomedia, 1994. 11-24.
Cleaver, Harry. "The Chiapas Uprising and the Future of Class Struggle in the New World Order." Riff-Raff [Italian journal published in Padova, Italy], February 14, 1994. Retrieved April 20, 2002: http://www.eco.utexas.edu/Homepages/Faculty/Cleaver/chiapasuprising.htm.
Editorial Collective. Zapatistas! Documents of the Mexican Revolution. New York: Autonomedia, 1994.
EZLN. "The Revolt (December 31-January 1)." In Editorial Collective (eds.), Zapatistas! Documents of the Mexican Revolution. New York: Autonomedia, 1994.
Fisher, Bernice. No Angel in the Classroom: Teaching through Feminist Discourse. Boulder: Rowman & Littlefield, 2001.
Foucault, Michel. (1997).
Excerpts from Foucault's History of Sexuality, Vol. I. In S. Cayleff (Ed.), Sexuality and the Body Politic(s): Women's Studies 701—A Reader [San Diego State University]. San Diego, CA: KB Books.
Foundation for Critical Thinking. Critical Thinking Workshop Handbook. Sonoma State University, 1996.
Gil, Jose, Jose Rivera, and Olivia Lopez. "Chiapas: la emergencia sanitaria permanente" ["Chiapas: the permanent health emergency"]. Retrieved May 7, 2002: http://www.EZLN.org/revistachiapas/ch2blanco.html.
Hayden, Tom, ed. The Zapatista Reader. New York: Thunder's Mouth Press/Nation Books, 2002.
Jaquette, Jane. The Women's Movement in Latin America: Participation and Democracy. San Francisco: Westview Press, 1994.
Katzenberger, Elaine, ed. First World, Ha Ha Ha! The Zapatista Challenge. San Francisco: City Lights Books, 1995.
Kesselman, Amy, Lily McNair, and Nancy Schniedewind. "What Is Women's Studies?" Women, Images and Realities: A Multicultural Anthology. Mountain View, CA: Mayfield, 1995.
Mandel-Campbell, Andrea. "Indigenous Women Find Road Long: Mexico's Zapatistas Are to March for Their Rights." Financial Times, February 21, 2001. Retrieved May 6, 2003, from ProQuest (Research Library Periodicals): http://www.proquest.umi.com/pqdweb (ProQuest search on "women and Zapatistas").
Marcos, Subcommander. "Chiapas: The Southeast in Two Winds, a Storm and a Prophecy." Zapatistas, 1994. Retrieved April 20, 2002: http://Zapatistas.net/two-winds.html.
Nicholson, Linda, ed. The Second Wave: A Reader in Feminist Theory. New York: Routledge, 1997.
Oppenheimer, Andres. Bordering on Chaos. New York: Little, Brown and Company, 1996.
Randall, Margaret. Sandino's Daughters: Testimonies of Nicaraguan Women in Struggle. New Jersey: Rutgers University Press, 1995.
Rowbotham, Sheila. Women, Resistance & Revolution: A History of Women in Revolution in the Modern World. New York: Vintage Books, 1974.
Ruiz, Raul. "Medicinal Herbs in Times of Low Intensity War: The Case of Chiapas, Mexico." Partners in Health, year unknown. Retrieved April 19, 2002: http://www.pih.org/library/essays/medicinalherbs.html.
Rutenberg, Taly. "Learning Women's Studies." Women, Images and Realities: A Multicultural Anthology. Mountain View, CA: Mayfield, 1995.
Weinberg, Bill. Homage to Chiapas: The New Indigenous Struggles in Mexico. New York: Verso, 2000.
Zakrison, Tanya. "Chiapas: A State of Health in a State of Siege." JAMC 160.4 (1999): 539.
Zimmerman, Bonnie. "Beyond Dualisms: Some Thoughts About Women's Studies for the Future." Unpublished paper presented at The Future of Women's Studies Conference, University of Arizona, October 2000. Retrieved November 3, 2001: http://w3.arizona.edu/~ws/future/zimmerman-paper.html.

This project includes a vocabulary assignment, an introduction of terminology, a film, a reading assignment, a Website activity, small group discussion, large group discussion, a role-playing/debate activity, a peer editing workshop, and a project assignment.

Vocabulary Assignment
The vocabulary assignment clarifies words with which students might not be familiar. It also enriches and enhances their critical thinking skills.

Terminology Activity
The terminology assignment clarifies important terms and documents to help students understand the Zapatista unit.

Film Activity
The film introduces students to the indigenous struggle for autonomy. It also provides audio and visual aids to show all perspectives of the indigenous struggle in Chiapas.
Because students apply note-taking skills while watching the film, the activity reinforces active learning. Students must recall information presented in the film and synthesize all of the perspectives on the Zapatista struggle. Assigned Articles: The assigned articles provide students with a brief overview of the Zapatista uprising, the North American Free Trade Agreement (NAFTA), the Zapatista Army, and images of the Zapatistas and Subcommander Marcos. Students interpret the assigned readings by pointing out the purpose, points of view, key questions, most important information, and main conclusion. Website Activity: The Website activity provides students with a background of Mexican history and its effects and influences on the current Zapatista struggle, as well as women's important and critical role in that struggle. This assignment incorporates Internet assignments into the curriculum and demonstrates the availability of information provided on the Internet. It also encourages students to evaluate Internet content critically using the logic assignment and to compare information on this website to other indigenous struggles in the world. In-Class Activity: The in-class activity requires students to assemble the three articles assigned for homework into one logic sheet. Students must assess and analyze the information from the three assigned articles by pointing out the main purpose, points of view, key questions, most important information, and main conclusion. This activity builds communication skills because students discuss the material in small groups. It also reinforces teamwork, because each small group must arrive at a collective answer, and it demonstrates students' ability to analyze the material, since each group must answer the instructor's questions on the logic sheets. After the students work in small groups, the instructor will write the logic questions on the board: the main purpose, main point of view, alternative point of view, key questions, key information, and main conclusion. Students will participate in the large group discussion and answer the questions asked by the instructor. The five groups in class will then be asked to write their answers on the board in response to one of the logic questions the instructor chooses. Role-Playing/Debate Activity: The role-playing/debate activity has students assess each perspective on the situation occurring in Chiapas. It provides students with stimulating interaction with their peers. The activity is an unrehearsed theatrical representation of all the different perspectives involved in the Zapatista uprising. The instructor facilitates the activity and asks each group to answer the question posed according to its assigned role. Students must listen to the instructor's coaching, gather into assigned groups, and answer the questions appropriately for their assigned role. The questions students must answer are in the role-playing/debate section of this project. Lesson plan cover readings: Taibo II, Paco Ignacio. "Zapatistas! The Phoenix Rises." In The Zapatista Reader, ed. Tom Hayden. New York: Thunder's Mouth Press/Nation Books, 2002. Pages 21-30. Kopkind, Andrew. "Opening Shots." In The Zapatista Reader, ed. Tom Hayden. New York: Thunder's Mouth Press/Nation Books, 2002. Pages 19-21.
Vocabulary Assignment (total length of time: 45 minutes). Instructor activity: review students' definitions of the assigned vocabulary list. Handout: Women in Resistance: Zapatistas, Vocabulary List. Please define these words and cite your source for each. Terminology Activity: clarify important terms and documents to help students understand the Zapatista unit. Instructor activity: summarize the main points of the seven terms along with the class discussion on these terms. Film Activity (total length of time: 115 minutes). Purpose and introduction of the assignment, with the logic questions on the board: 1. The main purpose of this film is: Why was it made? 2. The main point(s) of view presented is (are): also identify all alternative points of view (A. landowners; B. indigenous women and men; C. filmmakers; D. film crew). 3. The key question(s) is (are): 4. The most important information (facts, events, ideas, etc.) is (are): this information will let you answer the key question(s). 5. The main conclusion(s) is (are): logical inferences based on the most important information. Summary of the film: students will be selected randomly from a stack of index cards (each student's name will be written on an index card) to answer the logic questions out loud, from the film's main purpose through its main conclusions. Students will save their notes and turn them in with their portfolio project. Article Homework Assignment (total length of time: 30 minutes). Students will read the following articles for homework: 1. "Zapatistas! The Phoenix Rises" by Paco Ignacio Taibo II (pages 21-30); 2. "Opening Shots" by Andrew Kopkind (pages 19-21). Citations: Taibo II, Paco Ignacio. "Zapatistas! The Phoenix Rises." In The Zapatista Reader, ed. Tom Hayden. New York: Thunder's Mouth Press/Nation Books, 2002. Kopkind, Andrew. "Opening Shots." In The Zapatista Reader, ed. Tom Hayden. New York: Thunder's Mouth Press/Nation Books, 2002. Overview of information found in these articles: a brief overview of the Zapatista uprising, NAFTA, the Zapatista Army, and images of the Zapatistas and Subcommander Marcos. Instructor activity: hand out the logic sheet questions: 1. The main purpose of this (article, story, essay, Website, etc.) is: Why was it written? 2. The main point(s) of view presented is (are): also identify an alternative point of view. 3. The key question(s) is (are): 4. The most important information (facts, events, ideas, etc.) is (are): this information will let you answer the key question(s). 5. The main conclusion(s) is (are): logical inferences based on the most important information. In-class assignment: the instructor will review students' assessments of the assigned readings. Women in Resistance: Zapatistas, Website Activity (total length of time: 45 minutes). 1.
Goal of the Assignment. i. The following questions will be assigned on a handout (please see attached handout): 1. The main purpose of this (article, story, essay, Website, etc.) is: 2. The main point(s) of view presented is (are): 3. The key question(s) is (are): 4. The most important information (facts, events, ideas, etc.) is (are): 5. The main conclusion(s) is (are): ii. Students will be chosen randomly from a stack of index cards (each student's name will be written on an index card). Two students will be selected to answer one of the five questions assigned for homework. iii. After students complete the questions on the board, they will review their answers with other class members. The instructor can comment on the answers. 2. Learning Objectives. a. Incorporate Internet assignments into the curriculum. b. Show students the availability of information provided on the Internet. c. Encourage students to think critically with the five-question logic assignment. i. Students are required to assess the information provided by the instructor. They will synthesize the material by pointing out the purpose, the main point of view, key questions, the most important information, and the main conclusion. d. Encourage students to compare information on this website to other situations occurring in the world. 3. The Website. a. Address: http://www.zapatistas.net b. Name: Worldwide Zapatista Network. c. Purpose of the organization: to provide information on the Zapatista struggle occurring in Mexico. d. Students will read the following: i. "Comandante Esther Speaks!" ii. "Meet the Zapatista Delegates" iii. Marcos's 1992 essay. 4. Overview of Information Found on This Site: the site includes testimonies from delegates and Subcomandante Marcos, the San Andres Accords, a biography of Emiliano Zapata, videos, and interviews. 5. Hot Links: all links are connected to the Zapatista website, which provides a thorough guide to other resources and research on the Zapatista movement. Some of these sites are written in Spanish but can be translated for free at: http://www.freetranslation.com/web.htm Women in Resistance: Zapatistas, Small Group to Large Group Discussion (total length of time: 20 minutes). Goal of the Assignment. Small group activity (length of time: 10 minutes). Instructor activity: 1. The main purpose of these three readings (article, story, essay, Website, etc.) is: Large group activity (length of activity: 10 minutes). Instructor activity: also identify an alternative point of view for each of the readings: 1. "Comandante Esther Speaks!" 2. "Meet the Zapatista Delegates" 3. Marcos's 1992 essay. Worksheet title: THE LOGIC OF _______________________________________________________ Women in Resistance: Zapatistas, Role-Playing/Debate Activity (total length of time: 90 minutes). Goal of the Assignment. Small group activity (length of time: 20-30 minutes). Instructor activity: assign the roles: 1. A woman in the Zapatista Army; 2. A man in the Zapatista Army; 3.
A Mexican government official. Large group activity (length of time: 60 minutes). Instructor activity: questions for the role-playing/debate activity. Be prepared to answer the following questions: 1. Do you think the indigenous people in Mexico are being exploited? 2. Do you think the members of the Zapatista struggle are violent? 3. What is a possible solution to end the poverty rates in Mexico? 4. Do you think women are treated equally to men in the Zapatista movement? 5. What do you think of the Ten Revolutionary Laws? 6. Do you think the people in Mexico have prospered with the North American Free Trade Agreement (NAFTA)? 7. Why do you think people have been drawn to the Zapatista movement? Students will write a journal reflection (about 300-500 words) on one of these questions, of their choice, explaining why or why not. 2 Matilde Perez U. and Laura Castellanos, "DO NOT LEAVE US ALONE! Interview with Comandante Ramona," 7 March 1994, http://www.eco.utexas.edu/Homepages/Faculty/Cleaver/bookalone.html. 3 "Twelve Women in the Twelfth Year," March 1996, http://flag.blackened.net/revolt/mexico/ezln/1996/marcos_12_women_march.html (May 8, 2001); Stephen, "Zapatista," 91. 4 Ann Louise Bardach, "Mexico's Poet Rebel," Vanity Fair, July 1994, 130. 5 "Women: The Struggle Within the Struggle," July 1997, http://vivaldi.nexus.it/commerce/tmcrew/chiapas/dalia.htm (May 8, 2001). 6 Lynn Stephen, Between NAFTA and Zapata: Histories, Nation Views, and Indigenous Identities in Southern Mexico, ms., 1999, forthcoming, University of California Press, 233. 7 "EZLN-Women's Revolutionary Law," n.d., http://flag.blackened.net/revolt/mexico/ezln/womlaw.html (May 8, 2001). 8 Stephen, Between NAFTA, 225-226. 9 Stephen, "Zapatista," 91. 10 "Marcos to the Insurgentas," March 2000, http://flag.blackened.net/revolt/mexico/ezln/2000/marcos_insurgentas_march.html (May 8, 2001). 11 Tom Hayden, ed., The Zapatista Reader (New York: Thunder's Mouth Press/Nation Books, 2002), 18. 12 Stephen, Between NAFTA, 237. 13 Rosalva Aida Hernandez Castillo, "Between Hope and Adversity: The Struggle of Organized Women in Chiapas Since the Zapatista Uprising," Journal of Latin American Anthropology, vol. 3, no. 1 (1997): 115. 14 Cherrie Moraga and Hisaye Yamamoto, Kitchen Table/Women of Color, 2nd ed., 1989.
Document Name: Constitution of the United States
Jurisdiction: All States and Territories
Date Created: September 17, 1787
Date Presented: September 28, 1787
Date Ratified: June 21, 1788
Date Effective: March 4, 1789
Courts: Supreme, Circuits, Districts
Number of Entrenchments: 2, 1 still active
Date of First Legislature: March 4, 1789
Date of First Executive: April 30, 1789
Date of First Court: February 2, 1790
Date Last Amended: May 7, 1992
Location of Document: National Archives Building
Commissioned by: Congress of the Confederation
Signers: 39 of the 55 delegates
Supersedes: Articles of Confederation
The Constitution of the United States is the supreme law of the United States of America. The Constitution, originally comprising seven articles, delineates the national frame of government. Its first three articles embody the doctrine of the separation of powers, whereby the federal government is divided into three branches: the legislative, consisting of the bicameral Congress (Article One); the executive, consisting of the president (Article Two); and the judicial, consisting of the Supreme Court and other federal courts (Article Three). Articles Four, Five and Six embody concepts of federalism, describing the rights and responsibilities of state governments, the states in relationship to the federal government, and the shared process of constitutional amendment. Article Seven establishes the procedure subsequently used by the thirteen States to ratify it. It is regarded as the oldest written and codified national constitution in force. Since the Constitution came into force in 1789, it has been amended 27 times, including one amendment that repealed a previous one, in order to meet the needs of a nation that has profoundly changed since the eighteenth century. In general, the first ten amendments, known collectively as the Bill of Rights, offer specific protections of individual liberty and justice and place restrictions on the powers of government. The majority of the seventeen later amendments expand individual civil rights protections. Others address issues related to federal authority or modify government processes and procedures. Amendments to the United States Constitution, unlike ones made to many constitutions worldwide, are appended to the document. All four pages of the original U.S. Constitution are written on parchment. According to the United States Senate: "The Constitution's first three words—We the People—affirm that the government of the United States exists to serve its citizens. For over two centuries the Constitution has remained in force because its framers wisely separated and balanced governmental powers to safeguard the interests of majority rule and minority rights, of liberty and equality, and of the federal and state governments." The first permanent constitution of its kind, adopted by the people's representatives for an expansive nation, it is interpreted, supplemented, and implemented by a large body of constitutional law, and has influenced the constitutions of other nations. See also: History of the United States Constitution. From September 5, 1774, to March 1, 1781, the Continental Congress functioned as the provisional government of the United States. Delegates to the First (1774) and then the Second (1775–1781) Continental Congress were chosen largely through the action of committees of correspondence in various colonies rather than through the colonial or later state legislatures.
In no formal sense was it a gathering representative of existing colonial governments; it represented the dissatisfied elements of the people, such persons as were sufficiently interested to act, despite the strenuous opposition of the loyalists and the obstruction or disfavor of colonial governors. The process of selecting the delegates for the First and Second Continental Congresses underscores the revolutionary role of the people of the colonies in establishing a central governing body. Endowed by the people collectively, the Continental Congress alone possessed those attributes of external sovereignty which entitled it to be called a state in the international sense, while the separate states, exercising a limited or internal sovereignty, may rightly be considered a creation of the Continental Congress, which preceded them and brought them into being. See main article: Articles of Confederation. The Articles of Confederation and Perpetual Union was the first constitution of the United States. It was drafted by the Second Continental Congress from mid-1776 through late 1777, and ratification by all 13 states was completed by early 1781. The Articles of Confederation gave little power to the central government. The Confederation Congress could make decisions, but lacked enforcement powers. Implementation of most decisions, including modifications to the Articles, required unanimous approval of all thirteen state legislatures. Although, in a way, the Congressional powers in Article 9 made the "league of states as cohesive and strong as any similar sort of republican confederation in history", the chief problem was, in the words of George Washington, "no money". The Continental Congress could print money, but it was worthless. Congress could borrow money, but could not pay it back. No state paid all of its U.S. taxes; some paid nothing. A few paid an amount equal to the interest on the national debt owed to their citizens, but no more. No interest was paid on debt owed to foreign governments. By 1786, the United States would default on outstanding debts as their dates came due. Internationally, the United States had little ability to defend its sovereignty. Most of the troops in the 625-man United States Army were deployed facing – but not threatening – British forts on American soil. They had not been paid; some were deserting and others were threatening mutiny. Spain closed New Orleans to American commerce; U.S. officials protested, but to no effect. Barbary pirates began seizing American ships of commerce; the Treasury had no funds to pay their ransom. If any military crisis required action, the Congress had no credit or taxing power to finance a response. Domestically, the Articles of Confederation was failing to bring unity to the diverse sentiments and interests of the various states. Although the Treaty of Paris (1783) was signed between Great Britain and the U.S., and named each of the American states, various states proceeded blithely to violate it. New York and South Carolina repeatedly prosecuted Loyalists for wartime activity and redistributed their lands. Individual state legislatures independently laid embargoes, negotiated directly with foreign authorities, raised armies, and made war, all violating the letter and the spirit of the Articles.
In September 1786, during an interstate convention to discuss and develop a consensus about reversing the protectionist trade barriers that each state had erected, James Madison angrily questioned whether the Articles of Confederation was a binding compact or even a viable government. Connecticut paid nothing and "positively refused" to pay U.S. assessments for two years. A rumor had it that a "seditious party" of New York legislators had opened a conversation with the Viceroy of Canada. To the south, the British were said to be openly funding Creek Indian raids on Georgia, and the state was under martial law. Additionally, during Shays' Rebellion (August 1786 – June 1787) in Massachusetts, Congress could provide no money to support an endangered constituent state. General Benjamin Lincoln was obliged to raise funds from Boston merchants to pay for a volunteer army. Congress was paralyzed. It could do nothing significant without nine states, and some legislation required all thirteen. When a state produced only one member in attendance, its vote was not counted. If a state's delegation were evenly divided, its vote could not be counted towards the nine-count requirement. The Congress of the Confederation had "virtually ceased trying to govern". The vision of a "respectable nation" among nations seemed to be fading in the eyes of revolutionaries such as George Washington, Benjamin Franklin, and Rufus King. Their dream of a republic, a nation without hereditary rulers, with power derived from the people in frequent elections, was in doubt. On February 21, 1787, the Confederation Congress called a convention of state delegates at Philadelphia to propose a plan of government. Unlike earlier attempts, the convention was not meant for new laws or piecemeal alterations, but for the "sole and express purpose of revising the Articles of Confederation". The convention was not limited to commerce; rather, it was intended to "render the federal constitution adequate to the exigencies of government and the preservation of the Union." The proposal might take effect when approved by Congress and the states. See main article: Constitutional Convention (United States). On the appointed day, May 14, 1787, only the Virginia and Pennsylvania delegations were present, and so the convention's opening meeting was postponed for lack of a quorum. A quorum of seven states met and deliberations began on May 25. Eventually twelve states were represented; 74 delegates were named, 55 attended and 39 signed. The delegates were generally convinced that an effective central government with a wide range of enforceable powers must replace the weaker Congress established by the Articles of Confederation. Two plans for structuring the federal government arose at the convention's outset: the Virginia Plan and the New Jersey Plan. On May 31, the Convention devolved into a "Committee of the Whole" to consider the Virginia Plan. On June 13, the Virginia resolutions in amended form were reported out of committee. The New Jersey Plan was put forward in response to the Virginia Plan. A "Committee of Eleven" (one delegate from each state represented) met from July 2 to 16 to work out a compromise on the issue of representation in the federal legislature. All agreed to a republican form of government grounded in representing the people in the states. For the legislature, two issues were to be decided: how the votes were to be allocated among the states in the Congress, and how the representatives should be elected.
In its report, now known as the Connecticut Compromise (or "Great Compromise"), the committee proposed proportional representation for seats in the House of Representatives based on population (with the people voting for representatives), and equal representation for each State in the Senate (with each state's legislators generally choosing their respective senators), and that all money bills would originate in the House. The Great Compromise ended the stalemate between "patriots" and "nationalists", leading to numerous other compromises in a spirit of accommodation. There were sectional interests to be balanced by the Three-Fifths Compromise; reconciliation on Presidential term, powers, and method of selection; and jurisdiction of the federal judiciary. On July 24, a "Committee of Detail" – John Rutledge (South Carolina), Edmund Randolph (Virginia), Nathaniel Gorham (Massachusetts), Oliver Ellsworth (Connecticut), and James Wilson (Pennsylvania) – was elected to draft a detailed constitution reflective of the Resolutions passed by the convention up to that point. The Convention recessed from July 26 to August 6 to await the report of this "Committee of Detail". Overall, the report of the committee conformed to the resolutions adopted by the Convention, adding some elements. A constitution of twenty-three articles (plus preamble) was presented. From August 6 to September 10, the report of the committee of detail was discussed, section by section and clause by clause. Details were attended to, and further compromises were effected. Toward the close of these discussions, on September 8, a "Committee of Style and Arrangement" – Alexander Hamilton (New York), William Samuel Johnson (Connecticut), Rufus King (Massachusetts), James Madison (Virginia), and Gouverneur Morris (Pennsylvania) – was appointed to distill a final draft constitution from the twenty-three approved articles. The final draft, presented to the convention on September 12, contained seven articles, a preamble and a closing endorsement, of which Morris was the primary author. The committee also presented a proposed letter to accompany the constitution when delivered to Congress. The final document, engrossed by Jacob Shallus, was taken up on Monday, September 17, at the Convention's final session. Several of the delegates were disappointed in the result, a makeshift series of unfortunate compromises. Some delegates left before the ceremony, and three others refused to sign. Benjamin Franklin, one of the thirty-nine signers, summed up the prevailing mood in an address to the Convention: "There are several parts of this Constitution which I do not at present approve, but I am not sure I shall never approve them." He would accept the Constitution, "because I expect no better and because I am not sure that it is not the best". The advocates of the Constitution were anxious to obtain unanimous support of all twelve states represented in the Convention. Their accepted formula for the closing endorsement was "Done in Convention, by the unanimous consent of the States present." At the end of the convention, the proposal was agreed to by eleven state delegations and the lone remaining delegate from New York, Alexander Hamilton. Transmitted to the Congress of the Confederation, then sitting in New York City, it was within the power of Congress to expedite or block ratification of the proposed Constitution. The new frame of government that the Philadelphia Convention presented was technically only a revision of the Articles of Confederation.
After several days of debate, Congress voted to transmit the document to the thirteen states for ratification according to the process outlined in its Article VII. Each state legislature was to call elections for a "Federal Convention" to ratify the new Constitution, rather than consider ratification itself; a departure from the constitutional practice of the time, designed to expand the franchise in order to more clearly embrace "the people". The frame of government itself was to go into force among the States so acting upon the approval of nine (i.e. two-thirds of the 13) states; also a departure from constitutional practice, as the Articles of Confederation could only be amended by unanimous vote of all the states. Three members of the Convention – Madison, Gorham, and King – were also Members of Congress. They proceeded at once to New York, where Congress was in session, to placate the expected opposition. Aware of their vanishing authority, Congress, on September 28, after some debate, resolved unanimously to submit the Constitution to the States for action, "in conformity to the resolves of the Convention", but with no recommendation either for or against its adoption. Two parties soon developed: the Anti-Federalists, in opposition to the Constitution, and the Federalists, in support of it; and the Constitution was debated, criticized, and expounded upon clause by clause. Hamilton, Madison, and Jay, under the name of Publius, wrote a series of commentaries, now known as The Federalist Papers, in support of ratification in the state of New York, at that time a hotbed of anti-Federalism. These commentaries on the Constitution, written during the struggle for ratification, have been frequently cited by the Supreme Court as an authoritative contemporary interpretation of the meaning of its provisions. The dispute over additional powers for the central government was closely contested, and in some states ratification was effected only after a bitter struggle in the state convention itself. On June 21, 1788, the Constitution had been ratified by the minimum of nine states required under Article VII. Towards the end of July, and with eleven states then having ratified, the process of organizing the new government began. The Continental Congress, which still functioned at irregular intervals, passed a resolution on September 13, 1788, to put the new Constitution into operation with the eleven states that had then ratified it. The federal government began operations under the new form of government on March 4, 1789. However, the initial meeting of each chamber of Congress had to be adjourned due to lack of a quorum. George Washington was inaugurated as the nation's first president weeks later, on April 30. The final two states, North Carolina and Rhode Island, subsequently ratified the Constitution, on November 21, 1789, and May 29, 1790, respectively. The influence of both Edward Coke and William Blackstone was evident at the Convention. In his Institutes of the Lawes of England, Edward Coke interpreted Magna Carta protections and rights to apply not just to nobles, but to all British subjects. In writing the Virginia Charter of 1606, he enabled the King in Parliament to give those to be born in the colonies all rights and liberties as though they were born in England. William Blackstone's Commentaries on the Laws of England were the most influential books on law in the new republic.
British political philosopher John Locke, writing in the wake of the Glorious Revolution (1688), was a major influence, expanding on the contract theory of government advanced by Thomas Hobbes. Locke advanced the principle of consent of the governed in his Two Treatises of Government. Government's duty under a social contract among the sovereign people was to serve the people by protecting their rights. These basic rights were life, liberty and property. Montesquieu's influence on the framers is evident in Madison's Federalist No. 47 and Hamilton's Federalist No. 78. Jefferson, Adams, and Mason were known to read Montesquieu. Supreme Court Justices, the ultimate interpreters of the Constitution, have cited Montesquieu throughout the Court's history. (See, e.g., Green v. Biddle, 21 U.S. 1, 36 (1823); United States v. Wood, 39 U.S. 430, 438 (1840); Myers v. United States, 272 U.S. 52, 116 (1926); Nixon v. Administrator of General Services, 433 U.S. 425, 442 (1977); Bank Markazi v. Peterson, 136 S. Ct. 1310, 1330 (2016).) Montesquieu emphasized the need for balanced forces pushing against each other to prevent tyranny (reflecting the influence of Polybius's 2nd century BC treatise on the checks and balances of the Roman Republic). In The Spirit of the Laws, Montesquieu argues that state powers should be separated, in the service of the people's liberty, into the legislative, the executive, and the judicial. The constitution was a federal one, and was influenced by the study of other federations, both ancient and extant. The United States Bill of Rights consists of 10 amendments added to the Constitution in 1791, as supporters of the Constitution had promised critics during the debates of 1788. The English Bill of Rights (1689) was an inspiration for the American Bill of Rights. Both require jury trials, contain a right to keep and bear arms, prohibit excessive bail and forbid "cruel and unusual punishments". Many liberties protected by state constitutions and the Virginia Declaration of Rights were incorporated into the Bill of Rights. Neither the Convention which drafted the Constitution, nor the Congress which sent it to the thirteen states for ratification in the autumn of 1787, gave it a lead caption. To fill this void, the document was most often titled "A frame of Government" when it was printed for the convenience of ratifying conventions and the information of the public. This Frame of Government consisted of a preamble, seven articles and a signed closing endorsement. The preamble to the Constitution serves as an introductory statement of the document's fundamental purposes and guiding principles. It neither assigns powers to the federal government, nor does it place specific limitations on government action. Rather, it sets out the origin, scope and purpose of the Constitution. Its origin and authority lie in "We, the people of the United States". This echoes the Declaration of Independence: "one people" dissolved their connection with another and assumed, among the powers of the earth, the standing of a sovereign nation-state. The scope of the Constitution is twofold. First, "to form a more perfect Union" than had previously existed in the "perpetual Union" of the Articles of Confederation. Second, to "secure the blessings of liberty", which were to be enjoyed not only by the first generation but by all who came after, "our posterity". Article One describes the Congress, the legislative branch of the federal government.
Section 1 reads, "All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives." The article establishes the manner of election and the qualifications of members of each body. Representatives must be at least 25 years old, have been a citizen of the United States for seven years, and live in the state they represent. Senators must be at least 30 years old, have been a citizen for nine years, and live in the state they represent. Article I, Section 8 enumerates the powers delegated to the legislature. Financially, Congress has the power to tax, borrow, pay debt and provide for the common defense and the general welfare; to regulate commerce and bankruptcies; and to coin money. To regulate internal affairs, it has the power to regulate and govern military forces and militias, suppress insurrections and repel invasions. It is to provide for naturalization, standards of weights and measures, post offices and roads, and patents; and to directly govern the federal district and cessions of land by the states for forts and arsenals. Internationally, Congress has the power to define and punish piracies and offenses against the Law of Nations, to declare war and to make rules of war. The final Necessary and Proper Clause, also known as the Elastic Clause, expressly confers incidental powers upon Congress without the Articles' requirement for express delegation for each and every power. Article I, Section 9 lists eight specific limits on congressional power. The Supreme Court has sometimes broadly interpreted the Commerce Clause and the Necessary and Proper Clause in Article One to allow Congress to enact legislation that is neither expressly allowed by the enumerated powers nor expressly denied in the limitations on Congress. In McCulloch v. Maryland (1819), the Supreme Court read the Necessary and Proper Clause to permit the federal government to take action that would "enable [it] to perform the high duties assigned to it [by the Constitution] in the manner most beneficial to the people", even if that action is not itself within the enumerated powers. Chief Justice Marshall clarified: "Let the end be legitimate, let it be within the scope of the Constitution, and all means which are appropriate, which are plainly adapted to that end, which are not prohibited, but consist with the letter and spirit of the Constitution, are Constitutional." Article Two describes the office, qualifications, and duties of the President of the United States and the Vice President. The President is head of the executive branch of the federal government, as well as the nation's head of state and head of government. Article Two is modified by the Twelfth Amendment, which tacitly acknowledges political parties, and by the Twenty-fifth Amendment, relating to office succession. The president is to receive only one compensation from the federal government. The inaugural oath is specified to preserve, protect and defend the Constitution. The president is the Commander in Chief of the United States Armed Forces, and of the state militias when they are mobilized. He or she makes treaties with the advice and consent of two-thirds of the senators present. To administer the federal government, the president commissions all the offices of the federal government as Congress directs; he or she may require the opinions of its principal officers and make "recess appointments" for vacancies that may happen during the recess of the Senate.
The president is to see that the laws are faithfully executed, though he or she may grant reprieves and pardons, except in cases of Congressional impeachment of himself or other federal officers. The president reports to Congress on the State of the Union and, by the Recommendation Clause, recommends "necessary and expedient" national measures. The president may convene and adjourn Congress under special circumstances. Section 4 provides for removal of the president and other federal officers. The president is removed on impeachment for, and conviction of, treason, bribery, or other high crimes and misdemeanors. Article Three describes the court system (the judicial branch), including the Supreme Court. It requires that there be one court called the Supreme Court. The article describes the kinds of cases the court takes as original jurisdiction. Congress can create lower courts and an appeals process, and it enacts law defining crimes and providing for punishment. Article Three also protects the right to trial by jury in all criminal cases, and defines the crime of treason. Section 1 vests the judicial power of the United States in federal courts, and with it, the authority to interpret and apply the law to a particular case. Also included is the power to punish, sentence, and direct future action to resolve conflicts. The Constitution outlines the U.S. judicial system. In the Judiciary Act of 1789, Congress began to fill in details. Currently, Title 28 of the U.S. Code describes judicial powers and administration. From the First Congress, the Supreme Court justices rode circuit to sit as panels to hear appeals from the district courts. In 1891, Congress enacted a new system: district courts would have original jurisdiction, and intermediate appellate courts (circuit courts) with exclusive jurisdiction heard regional appeals before consideration by the Supreme Court. The Supreme Court holds discretionary jurisdiction, meaning that it does not have to hear every case that is brought to it. To enforce judicial decisions, the Constitution grants federal courts both criminal contempt and civil contempt powers. The court's summary punishment for contempt immediately overrides all other punishments applicable to the subject party. Other implied powers include injunctive relief and the habeas corpus remedy. The Court may imprison for contumacy, bad-faith litigation, and failure to obey a writ of mandamus. Judicial power includes that granted by Acts of Congress for rules of law and punishment. Judicial power also extends to areas not covered by statute. Generally, federal courts cannot interrupt state court proceedings. Clause 1 of Section 2 authorizes the federal courts to hear actual cases and controversies only. Their judicial power does not extend to cases which are hypothetical, or which are proscribed due to standing, mootness, or ripeness issues. Generally, a case or controversy requires the presence of adverse parties who have some interest genuinely at stake in the case. Clause 2 of Section 2 provides that the Supreme Court has original jurisdiction in cases involving ambassadors, ministers and consuls, in all cases respecting foreign nation-states, and also in those controversies which are subject to federal judicial power because at least one state is a party. Cases arising under the laws of the United States and its treaties come under the jurisdiction of federal courts. Cases under international maritime law and conflicting land grants of different states come under federal courts.
Cases between U.S. citizens in different states, and cases between U.S. citizens and foreign states and their citizens, come under federal jurisdiction. Trials are held in the state where the crime was committed. No part of the Constitution expressly authorizes judicial review, but the Framers did contemplate the idea. The Constitution is the supreme law of the land. Precedent has since established that the courts can exercise judicial review over the actions of Congress or the executive branch. Two conflicting federal laws are under "pendent" jurisdiction if one presents a strict constitutional issue. Federal court jurisdiction is rare when a state legislature enacts something as under federal jurisdiction. To establish a federal system of national law, considerable effort goes into developing a spirit of comity between the federal government and the states. By the doctrine of res judicata, federal courts give "full faith and credit" to state courts. The Supreme Court will decide constitutional issues of state law only on a case-by-case basis, and only by strict constitutional necessity, independent of state legislators' motives, their policy outcomes, or national wisdom. Section 3 bars Congress from changing or modifying federal law on treason by simple majority statute. This section also defines treason as an overt act of making war or materially helping those at war with the United States. Accusations must be corroborated by at least two witnesses. Congress is a political body, and the political disagreements routinely encountered there should never be considered treason. This allows for nonviolent resistance to the government, because opposition is not a life or death proposition. However, Congress does provide for other, lesser subversive crimes, such as conspiracy. Article Four outlines the relations among the states and between each state and the federal government. In addition, it provides for such matters as admitting new states and border changes between the states. For instance, it requires states to give "full faith and credit" to the public acts, records, and court proceedings of the other states. Congress is permitted to regulate the manner in which proof of such acts may be admitted. The "privileges and immunities" clause prohibits state governments from discriminating against citizens of other states in favor of resident citizens. For instance, in criminal sentencing, a state may not increase a penalty on the grounds that the convicted person is a non-resident. It also establishes extradition between the states, as well as laying down a legal basis for freedom of movement and travel amongst the states. Today, this provision is sometimes taken for granted, but in the days of the Articles of Confederation, crossing state lines was often arduous and costly. The Territorial Clause gives Congress the power to make rules for disposing of federal property and governing non-state territories of the United States. Finally, the fourth section of Article Four requires the United States to guarantee to each state a republican form of government, and to protect the states from invasion and violence. Article Five outlines the process for amending the Constitution. Eight state constitutions in effect in 1787 included an amendment mechanism. Amendment-making power rested with the legislature in three of the states; in the other five it was given to specially elected conventions.
The Articles of Confederation provided that amendments were to be proposed by Congress and ratified by the unanimous vote of all thirteen state legislatures. This proved to be a major flaw in the Articles, as it created an insurmountable obstacle to constitutional reform. The amendment process crafted during the Philadelphia Constitutional Convention was, according to The Federalist No. 43, designed to establish a balance between pliancy and rigidity. There are two steps in the amendment process: proposals to amend the Constitution must be properly adopted and then ratified before they change the Constitution. First, there are two procedures for adopting the language of a proposed amendment: either by (a) Congress, by two-thirds majority in both the Senate and the House of Representatives, or (b) a national convention (which shall take place whenever two-thirds of the state legislatures collectively call for one). Second, there are two procedures for ratifying the proposed amendment, which requires the approval of three-fourths of the states (presently 38 of 50): (a) consent of the state legislatures, or (b) consent of state ratifying conventions. The ratification method is chosen by Congress for each amendment. State ratifying conventions were used only once, for the Twenty-first Amendment. Presently, the Archivist of the United States is charged with responsibility for administering the ratification process under the provisions of Title 1 of the U.S. Code. The Archivist submits the proposed amendment to the states for their consideration by sending a letter of notification to each Governor. Each Governor then formally submits the amendment to their state's legislature. When a state ratifies a proposed amendment, it sends the Archivist an original or certified copy of the state's action. Ratification documents are examined by the Office of the Federal Register for facial legal sufficiency and an authenticating signature. Article Five ends by shielding certain clauses in the new frame of government from being amended. Article One, Section 9, Clause 1 prevents Congress from passing any law that would restrict the importation of slaves into the United States prior to 1808, and the fourth clause of that same section reiterates the constitutional rule that direct taxes must be apportioned according to state populations. These clauses were explicitly shielded from constitutional amendment prior to 1808. On January 1, 1808, the first day it was permitted to do so, Congress approved legislation prohibiting the importation of slaves into the country. On February 3, 1913, with ratification of the Sixteenth Amendment, Congress gained the authority to levy an income tax without apportioning it among the states or basing it on the United States Census. The third textually entrenched provision is Article One, Section 3, Clause 1, which provides for equal representation of the states in the Senate. The shield protecting this clause from the amendment process is less absolute – "no state, without its consent, shall be deprived of its equal Suffrage in the Senate" – but permanent. Article Six establishes the Constitution, and all federal laws and treaties of the United States made according to it, to be the supreme law of the land, and that "the judges in every state shall be bound thereby, any thing in the laws or constitutions of any state notwithstanding."
It validates national debt created under the Articles of Confederation and requires that all federal and state legislators, officers, and judges take oaths or affirmations to support the Constitution. This means that the states' constitutions and laws should not conflict with the laws of the federal constitution and that, in case of a conflict, state judges are legally bound to honor the federal laws and constitution over those of any state. Article Six also states "no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States." Article Seven describes the process for establishing the proposed new frame of government. Anticipating that the influence of many state politicians would be Antifederalist, delegates to the Philadelphia Convention provided for ratification of the Constitution by popularly elected ratifying conventions in each state. The convention method also made it possible that judges, ministers and others ineligible to serve in state legislatures could be elected to a convention. Suspecting that Rhode Island, at least, might not ratify, delegates decided that the Constitution would go into effect as soon as nine states (two-thirds rounded up) had ratified. Once ratified by this minimum number of states, it was anticipated that the proposed Constitution would become "this Constitution" among the nine or more states that had ratified it. It would not cover the four or fewer states that might not have ratified. The signing of the United States Constitution occurred on September 17, 1787, when 39 delegates to the Constitutional Convention endorsed the constitution created during the convention. In addition to signatures, this closing endorsement, the Constitution's eschatocol, included a brief declaration that the delegates' work had been successfully completed and that those whose signatures appear on it subscribe to the final document. Included are a statement pronouncing the document's adoption by the states present, a formulaic dating of its adoption, and the signatures of those endorsing it. Additionally, the convention's secretary, William Jackson, signed the document to authenticate the validity of the delegate signatures. He also made a few secretarial notes. The language of the concluding endorsement, conceived by Gouverneur Morris and presented to the convention by Benjamin Franklin, was made intentionally ambiguous in hopes of winning over the votes of dissenting delegates. Advocates for the new frame of government, realizing the impending difficulty of obtaining the consent of the states needed to make it operational, were anxious to obtain the unanimous support of the delegations from each state. It was feared that many of the delegates would refuse to give their individual assent to the Constitution. Therefore, in order that the action of the Convention would appear to be unanimous, the formula "Done in convention by the unanimous consent of the states present ..." was devised. The document is dated "the Seventeenth Day of September in the Year of our Lord" 1787, and "of the Independence of the United States of America the Twelfth." This two-fold epoch dating serves to place the Constitution in the context of the religious traditions of Western civilization and, at the same time, links it to the regime principles proclaimed in the Declaration of Independence. This dual reference can also be found in the Articles of Confederation and the Northwest Ordinance. The closing endorsement serves an authentication function only.
It neither assigns powers to the federal government nor does it provide specific limitations on government action. It does, however, provide essential documentation of the Constitution's validity, a statement of "This is what was agreed to." It records who signed the Constitution, and when and where. The procedure for amending the Constitution is outlined in Article Five (see above). The process is overseen by the Archivist of the United States. Between 1949 and 1985 it was overseen by the Administrator of General Services, and before that by the Secretary of State. Under Article Five, a proposal for an amendment must be adopted either by Congress or by a national convention, but to date all amendments have gone through Congress. The proposal must receive two-thirds of the votes of both houses to proceed. It is passed as a joint resolution, but is not presented to the President, who plays no part in the process. Instead, it is passed to the Office of the Federal Register, which copies it in slip law format and submits it to the States. Congress decides whether the proposal is to be ratified in the state legislatures or by state ratifying conventions. To date all amendments have been ratified by the state legislatures except one, the Twenty-first Amendment. A proposed amendment becomes an operative part of the Constitution as soon as it is ratified by three-fourths of the States (currently 38 of the 50 States). There is no further step. The text requires no additional action by Congress or anyone else after ratification by the required number of states. Thus, when the Office of the Federal Register verifies that it has received the required number of authenticated ratification documents, it drafts a formal proclamation for the Archivist to certify that the amendment is valid and has become part of the nation's frame of government. This certification is published in the Federal Register and United States Statutes at Large and serves as official notice to Congress and to the nation that the ratification process has been successfully completed. The Constitution has twenty-seven amendments. Structurally, the Constitution's original text and all prior amendments remain untouched. The precedent for this practice was set in 1789, when Congress considered and proposed the first several Constitutional amendments. Among these, Amendments 1–10 are collectively known as the Bill of Rights, and Amendments 13–15 are known as the Reconstruction Amendments. Excluding the Twenty-seventh Amendment, which was pending before the states for more than two centuries, the longest pending amendment that was successfully ratified was the Twenty-second Amendment, which took nearly four years. The Twenty-sixth Amendment was ratified in the shortest time, roughly one hundred days. The average ratification time for the first twenty-six amendments was 1 year, 252 days; for all twenty-seven, 9 years, 48 days. The First Amendment (1791) prohibits Congress from obstructing the exercise of certain individual freedoms: freedom of religion, freedom of speech, freedom of the press, freedom of assembly, and the right to petition. Its Free Exercise Clause guarantees a person's right to hold whatever religious beliefs he or she wants, and to freely exercise that belief, and its Establishment Clause prevents the federal government from creating an official national church or favoring one set of religious beliefs over another. The amendment guarantees an individual's right to express and to be exposed to a wide range of opinions and views.
It was intended to ensure a free exchange of ideas, even unpopular ones. It also guarantees an individual's right to physically gather or associate with others in groups for economic, political or religious purposes. Additionally, it guarantees an individual's right to petition the government for a redress of grievances. The Second Amendment (1791) protects the right of individuals to keep and bear arms. Although the Supreme Court has ruled that this right applies to individuals, not merely to collective militias, it has also held that the government may regulate or place some limits on the manufacture, ownership and sale of firearms or other weapons. Requested by several states during the Constitutional ratification debates, the amendment reflected the lingering resentment over the widespread efforts of the British to confiscate the colonists' firearms at the outbreak of the Revolutionary War. Patrick Henry had asked rhetorically whether we shall be stronger "when we are totally disarmed, and when a British Guard shall be stationed in every house?" The Third Amendment (1791) prohibits the federal government from forcing individuals to provide lodging to soldiers in their homes during peacetime without their consent. Requested by several states during the Constitutional ratification debates, the amendment reflected the lingering resentment over the Quartering Acts passed by the British Parliament during the Revolutionary War, which had allowed British soldiers to take over private homes for their own use. The Fourth Amendment (1791) protects people against unreasonable searches and seizures of either self or property by government officials. A search can mean everything from a frisking by a police officer to a demand for a blood test to a search of an individual's home or car. A seizure occurs when the government takes control of an individual or something in his or her possession. Items that are seized often are used as evidence when the individual is charged with a crime. The amendment also imposes certain limitations on police investigating a crime and prevents the use of illegally obtained evidence at trial. The Fifth Amendment (1791) establishes the requirement that a trial for a major crime may commence only after an indictment has been handed down by a grand jury; protects individuals from double jeopardy, being tried and put in danger of being punished more than once for the same criminal act; prohibits punishment without due process of law, thus protecting individuals from being imprisoned without fair procedures; and provides that an accused person may not be compelled to reveal to the police, prosecutor, judge, or jury any information that might incriminate or be used against him or her in a court of law. Additionally, the Fifth Amendment prohibits the government from taking private property for public use without "just compensation", the basis of eminent domain in the United States. The Sixth Amendment (1791) provides several protections and rights to an individual accused of a crime. The accused has the right to a fair and speedy trial by a local and impartial jury. Likewise, a person has the right to a public trial. This right protects defendants from secret proceedings that might encourage abuse of the justice system, and serves to keep the public informed.
This amendment also guarantees a right to legal counsel if accused of a crime, guarantees that the accused may require witnesses to attend the trial and testify in the presence of the accused, and guarantees the accused a right to know the charges against them. In 1966, the Supreme Court ruled that, together with the Fifth Amendment, this amendment requires what has become known as the Miranda warning.

The Seventh Amendment (1791) extends the right to a jury trial to federal civil cases, and inhibits courts from overturning a jury's findings of fact. Although the Seventh Amendment itself says that it is limited to "suits at common law", meaning cases that triggered the right to a jury under English law, the amendment has been found to apply in lawsuits that are similar to the old common law cases. For example, the right to a jury trial applies to cases brought under federal statutes that prohibit race or gender discrimination in housing or employment. Importantly, this amendment guarantees the right to a jury trial only in federal court, not in state court.

The Eighth Amendment (1791) protects people from having bail or fines set at an amount so high that it would be impossible for all but the richest defendants to pay, and also protects people from being subjected to cruel and unusual punishment. Although this phrase originally was intended to outlaw certain gruesome methods of punishment, it has been broadened over the years to protect against punishments that are grossly disproportionate to or too harsh for the particular crime. This provision has also been used to challenge prison conditions such as extremely unsanitary cells, overcrowding, insufficient medical care, and deliberate failure by officials to protect inmates from one another.

The Ninth Amendment (1791) declares that individuals have other fundamental rights in addition to those stated in the Constitution. During the constitutional ratification debates, Anti-Federalists argued that a Bill of Rights should be added. The Federalists opposed it on the grounds that a list would necessarily be incomplete but would be taken as explicit and exhaustive, thus enlarging the power of the federal government by implication. The Anti-Federalists persisted, and several state ratification conventions refused to ratify the Constitution without a more specific list of protections, so the First Congress added what became the Ninth Amendment as a compromise. Because the rights protected by the Ninth Amendment are not specified, they are referred to as "unenumerated". The Supreme Court has found that unenumerated rights include such important rights as the right to travel, the right to vote, the right to privacy, and the right to make important decisions about one's health care or body.

The Tenth Amendment (1791) was included in the Bill of Rights to further define the balance of power between the federal government and the states. The amendment states that the federal government has only those powers specifically granted by the Constitution. These powers include the power to declare war, to collect taxes, to regulate interstate business activities, and others that are listed in the articles or in subsequent constitutional amendments. Any power not listed is, says the Tenth Amendment, left to the states or the people.
While there is no specific list of what these "reserved powers" may be, the Supreme Court has ruled that laws affecting family relations, commerce within a state's own borders, and local law enforcement activities are among those specifically reserved to the states or the people.

The Eleventh Amendment (1795) specifically prohibits federal courts from hearing cases in which a state is sued by an individual from another state or another country, thus extending to the states sovereign immunity protection from certain types of legal liability. Article Three, Section 2, Clause 1 has been affected by this amendment, which also overturned the Supreme Court's decision in Chisholm v. Georgia.

The Sixteenth Amendment (1913) removed existing constitutional constraints that limited the power of Congress to lay and collect taxes on income. Specifically, the apportionment constraints delineated in Article 1, Section 9, Clause 4 have been removed by this amendment, which also overturned an 1895 Supreme Court decision, Pollock v. Farmers' Loan & Trust Co., which had declared an unapportioned federal income tax on rents, dividends, and interest unconstitutional. This amendment has become the basis for all subsequent federal income tax legislation and has greatly expanded the scope of federal taxing and spending in the years since.

The Eighteenth Amendment (1919) prohibited the making, transporting, and selling of alcoholic beverages nationwide. It also authorized Congress to enact legislation enforcing this prohibition. Adopted at the urging of a national temperance movement, proponents believed that the use of alcohol was reckless and destructive and that prohibition would reduce crime and corruption, solve social problems, decrease the need for welfare and prisons, and improve the health of all Americans. During prohibition, it is estimated that alcohol consumption and alcohol-related deaths declined dramatically. But prohibition had other, more negative consequences. The amendment drove the lucrative alcohol business underground, giving rise to a large and pervasive black market. In addition, prohibition encouraged disrespect for the law and strengthened organized crime. Prohibition came to an end in 1933, when this amendment was repealed.

The Twenty-first Amendment (1933) repealed the Eighteenth Amendment and returned the regulation of alcohol to the states. Each state sets its own rules for the sale and importation of alcohol, including the drinking age. Because a federal law provides federal funds to states that prohibit the sale of alcohol to persons under the age of twenty-one, all fifty states have set their minimum drinking age at twenty-one. Rules about how alcohol is sold vary greatly from state to state.

The Thirteenth Amendment (1865) abolished slavery and involuntary servitude, except as punishment for a crime, and authorized Congress to enforce abolition. Though millions of slaves had been declared free by the 1863 Emancipation Proclamation, their post-Civil War status was unclear, as was the status of the millions of others still enslaved. Congress intended the Thirteenth Amendment to be a proclamation of freedom for all slaves throughout the nation and to take the question of emancipation away from politics. This amendment rendered inoperative or moot several of the original parts of the Constitution.

The Fourteenth Amendment (1868) granted United States citizenship to former slaves and to all persons "subject to U.S. jurisdiction".
It also contained three new limits on state power: a state shall not violate a citizen's privileges or immunities; shall not deprive any person of life, liberty, or property without due process of law; and must guarantee all persons equal protection of the laws. These limitations dramatically expanded the protections of the Constitution. This amendment, according to the Supreme Court's doctrine of incorporation, makes most provisions of the Bill of Rights applicable to state and local governments as well. It superseded the mode of apportionment of representatives delineated in Article 1, Section 2, Clause 3, and also overturned the Supreme Court's decision in Dred Scott v. Sandford.

The Fifteenth Amendment (1870) prohibits the use of race, color, or previous condition of servitude in determining which citizens may vote. The last of the three post-Civil War Reconstruction Amendments, it sought to abolish one of the key vestiges of slavery and to advance the civil rights and liberties of former slaves.

The Nineteenth Amendment (1920) prohibits the government from denying women the right to vote on the same terms as men. Prior to the amendment's adoption, only a few states permitted women to vote and to hold office.

The Twenty-third Amendment (1961) extends the right to vote in presidential elections to citizens residing in the District of Columbia by granting the District electors in the Electoral College, as if it were a state. When first established as the nation's capital in 1800, the District of Columbia's five thousand residents had neither a local government nor the right to vote in federal elections. By 1960 the population of the District had grown to over 760,000 people.

The Twenty-fourth Amendment (1964) prohibits a poll tax for voting. Although passage of the Thirteenth, Fourteenth, and Fifteenth Amendments helped remove many of the discriminatory laws left over from slavery, they did not eliminate all forms of discrimination. Along with literacy tests and durational residency requirements, poll taxes were used to keep low-income (primarily African American) citizens from participating in elections. The Supreme Court has since struck down these discriminatory measures, opening democratic participation to all.

The Twenty-sixth Amendment (1971) prohibits the government from denying the right of United States citizens, eighteen years of age or older, to vote on account of age. The push to lower the voting age was driven in large part by the broader student activism movement protesting the Vietnam War. It gained strength following the Supreme Court's decision in Oregon v. Mitchell.

The Twelfth Amendment (1804) modifies the way the Electoral College chooses the President and Vice President. It stipulates that each elector must cast a distinct vote for President and Vice President, instead of two votes for President. It also provides that electors may not vote for presidential and vice-presidential candidates who both come from the elector's own state. Article II, Section 1, Clause 3 is superseded by this amendment, which also extends the presidential eligibility requirements to the Vice President.

The Seventeenth Amendment (1913) modifies the way senators are elected. It stipulates that senators are to be elected by direct popular vote. The amendment supersedes Article 1, Section 3, Clauses 1 and 2, under which the two senators from each state were elected by the state legislature. It also allows state legislatures to permit their governors to make temporary appointments until a special election can be held.
The Twentieth Amendment (1933) changes the date on which a new President, Vice President, and Congress take office, thus shortening the time between Election Day and the beginning of presidential, vice presidential, and congressional terms. Originally, the Constitution provided that the annual meeting of Congress was to be on the first Monday in December unless otherwise provided by law. This meant that, when a new Congress was elected in November, it did not come into office until the following March, with a "lame duck" Congress convening in the interim. By moving the beginning of the President's new term from March 4 to January 20 (and in the case of Congress, to January 3), proponents hoped to put an end to lame duck sessions, while allowing for a speedier transition for the new administration and legislators.

The Twenty-second Amendment (1951) limits an elected president to two terms in office, a total of eight years. However, under some circumstances it is possible for an individual to serve more than eight years. Although nothing in the original frame of government limited how many presidential terms one could serve, the nation's first president, George Washington, declined to run for a third term, suggesting that two terms of four years were enough for any president. This precedent remained an unwritten rule of the presidency until it was broken by Franklin D. Roosevelt, who was elected to a third term as president in 1940 and to a fourth in 1944.

The Twenty-fifth Amendment (1967) clarifies what happens upon the death, removal, or resignation of the President or Vice President, and how the Presidency is temporarily filled if the President becomes disabled and cannot fulfill the responsibilities of the office. It supersedes the ambiguous succession rule established in Article II, Section 1, Clause 6. A concrete plan of succession has been needed on multiple occasions since 1789. However, for nearly 20% of U.S. history, there has been no vice president in office who could assume the presidency.

The Twenty-seventh Amendment (1992) prevents members of Congress from granting themselves pay raises during the current session. Rather, any raises that are adopted must take effect during the next session of Congress. Its proponents believed that federal legislators would be more likely to be cautious about increasing congressional pay if they had no personal stake in the vote. Article One, Section 6, Clause 1 has been affected by this amendment, which remained pending for over two centuries because it contained no time limit for ratification.

Collectively, members of the House and Senate typically propose around 200 amendments during each two-year term of Congress. Most, however, never get out of the congressional committees in which they were proposed, and only a fraction of those that do receive enough support to win congressional approval to actually go through the constitutional ratification process. Six amendments approved by Congress and proposed to the states for consideration have not been ratified by the required number of states to become part of the Constitution. Four of these are technically still pending, as Congress did not set a time limit for their ratification (see also Coleman v. Miller). The other two are no longer pending, as both had a time limit attached, and in both cases the time period set for their ratification expired.

The way the Constitution is understood is influenced by court decisions, especially those of the Supreme Court. These decisions are referred to as precedents.
Judicial review is the power of the Court to examine federal legislation, actions of the federal executive, and actions of all branches of state government, to decide their constitutionality, and to strike them down if found unconstitutional. Judicial review includes the power of the Court to explain the meaning of the Constitution as it applies to particular cases. Over the years, Court decisions on issues ranging from governmental regulation of radio and television to the rights of the accused in criminal cases have changed the way many constitutional clauses are interpreted, without amendment to the actual text of the Constitution.

Legislation passed to implement the Constitution, or to adapt those implementations to changing conditions, broadens and, in subtle ways, changes the meanings given to the words of the Constitution. Up to a point, the rules and regulations of the many federal executive agencies have a similar effect. If an action of Congress or the agencies is challenged, however, it is the court system that ultimately decides whether these actions are permissible under the Constitution. The Supreme Court has indicated that once the Constitution has been extended to an area (by Congress or the courts), its coverage is irrevocable: to hold that the political branches may switch the Constitution on or off at will would lead to a regime in which they, not the Court, say "what the law is".

Courts established by the Constitution can regulate government under the Constitution, the supreme law of the land. First, they have jurisdiction over actions by officers of the government and over state law. Second, federal courts may rule on whether coordinate branches of the national government conform to the Constitution. Until the twentieth century, the United States may have been the only nation in the world to entrust constitutional interpretation of fundamental law to its high court; other nations generally depended on their national legislatures.

[Portraits: John Jay, Chief Justice 1789–1795, of New York, co-author of The Federalist Papers; John Marshall, Chief Justice 1801–1835, Fauquier County delegate to the Virginia Ratification Convention]

The basic theory of American judicial review is summarized by constitutional legal scholars and historians as follows: the written Constitution is fundamental law. It can be changed only by an extraordinary process of national proposal followed by state ratification. The powers of all departments are limited to enumerated grants found in the Constitution. Courts are expected (a) to enforce provisions of the Constitution as the supreme law of the land, and (b) to refuse to enforce anything in conflict with it.

As to judicial review and the Congress, the first proposals in the Constitutional Convention, by Madison (Virginia) and Wilson (Pennsylvania), called for a supreme court veto over national legislation. In this they resembled the system in New York, where the Constitution of 1777 called for a "Council of Revision" made up of the Governor and Justices of the state supreme court. The Council would review and, in a way, veto any passed legislation violating the spirit of the Constitution before it went into effect. The nationalists' proposal was defeated three times in the Convention and replaced by a presidential veto with congressional override. Judicial review instead relies on the jurisdictional authority in Article III and on the Supremacy Clause. The justification for judicial review is found explicitly in the open ratification debates held in the states and reported in their newspapers.
John Marshall in Virginia, James Wilson in Pennsylvania, and Oliver Ellsworth of Connecticut all argued for Supreme Court judicial review of acts of state legislatures. In Federalist No. 78, Alexander Hamilton advocated the doctrine of a written document held as a superior enactment of the people: "A limited constitution can be preserved in practice no other way" than through courts which can declare void any legislation contrary to the Constitution. The preservation of the people's authority over legislatures rests "particularly with judges".

The Supreme Court was initially made up of jurists who had been intimately connected with the framing of the Constitution and the establishment of its government as law. John Jay (New York), a co-author of The Federalist Papers, served as Chief Justice for the first six years. John Rutledge (South Carolina), Washington's recess appointment, served as the second Chief Justice in 1795, and Oliver Ellsworth (Connecticut), a delegate to the Constitutional Convention, was the third, serving a term of four years. John Marshall (Virginia), the fourth Chief Justice, had served in the Virginia Ratification Convention in 1788. His 34 years of service on the Court would encompass some of the most important rulings that helped establish the nation the Constitution had begun. In the first years of the Supreme Court, members of the Constitutional Convention who served included James Wilson (Pennsylvania) for ten years, John Blair Jr. (Virginia) for five, and John Rutledge (South Carolina) for one year as an Associate Justice before his 1795 term as Chief Justice.

When John Marshall followed Oliver Ellsworth as Chief Justice of the Supreme Court in 1801, the federal judiciary had been established by the Judiciary Act, but there were few cases and little prestige. "The fate of judicial review was in the hands of the Supreme Court itself." Review of state legislation and of appeals from state supreme courts was understood, but early in the Court's life its jurisdiction over state legislation was limited. The Marshall Court's landmark Barron v. Baltimore held that the Bill of Rights restricted only the federal government, and not the states.

In the landmark Marbury v. Madison case, the Supreme Court asserted its authority of judicial review over Acts of Congress. Its findings were that Marbury and the others had a right to their commissions as judges in the District of Columbia. Marshall, writing the opinion for the majority, announced a conflict he had found between Section 13 of the Judiciary Act of 1789 and Article III. In this case, both the Constitution and the statutory law applied to the particulars at the same time. "The very essence of judicial duty", according to Marshall, was to determine which of the two conflicting rules should govern. The Constitution extends the powers of the judiciary to cases arising "under the Constitution". Further, justices take a constitutional oath to uphold it as the "supreme law of the land". Therefore, since the United States government as created by the Constitution is a limited government, the federal courts were required to choose the Constitution over congressional law if there was deemed to be a conflict. "This argument has been ratified by time and by practice..."

The Supreme Court did not declare another Act of Congress unconstitutional until the controversial Dred Scott decision in 1857, handed down after the voided Missouri Compromise statute had already been repealed.
In the eighty years from the Civil War to World War II, the Court voided congressional statutes in 77 cases, on average almost one a year. Something of a crisis arose when, in 1935 and 1936, the Supreme Court handed down twelve decisions voiding Acts of Congress relating to the New Deal. President Franklin D. Roosevelt responded with his abortive "court packing plan". Other proposals have suggested requiring a super-majority of the Court to overturn congressional legislation, or a constitutional amendment requiring the Justices to retire at a specified age. To date, the Supreme Court's power of judicial review has persisted.

The power of judicial review could not have been preserved long in a democracy unless it had been "wielded with a reasonable measure of judicial restraint, and with some attention, as Mr. Dooley said, to the election returns." Indeed, the Supreme Court has developed a system of doctrine and practice that self-limits its power of judicial review. The Court controls almost all of its business by choosing what cases to consider through writs of certiorari; in this way, it can avoid opinions on embarrassing or difficult cases.

The Supreme Court also limits itself by defining for itself what is a "justiciable question." First, the Court is fairly consistent in refusing to make any "advisory opinions" in advance of actual cases. Second, "friendly suits" between parties with the same legal interest are not considered. Third, the Court requires a "personal interest", not one generally held, and a legally protected right that is immediately threatened by government action. Cases are not taken up if the litigant has no standing to sue; simply having the money to sue and being injured by government action are not enough.

These three procedural ways of dismissing cases have led critics to charge that the Supreme Court delays decisions by unduly insisting on technicalities in its "standards of litigability". They say cases are left unconsidered which are in the public interest, present genuine controversy, and result from good faith action. "The Supreme Court is not only a court of law but a court of justice."

The Supreme Court balances several pressures to maintain its roles in national government. It seeks to be a co-equal branch of government, but its decrees must be enforceable. The Court seeks to minimize situations where it asserts itself superior to either President or Congress, but federal officers must be held accountable. The Supreme Court assumes the power to declare Acts of Congress unconstitutional, but it self-limits its passing on constitutional questions. But the Court's guidance on basic problems of life and governance in a democracy is most effective when American political life reinforces its rulings.

Justice Brandeis summarized four general guidelines that the Supreme Court uses to avoid constitutional decisions relating to Congress: The Court will not anticipate a question of constitutional law, nor decide open questions, unless a case decision requires it. If it does decide, a rule of constitutional law is formulated only as the precise facts in the case require. The Court will choose statutory or general-law grounds for its decision if it can do so without reaching constitutional questions. If it cannot, the Court will choose a construction of an Act of Congress that allows it to stand, even if its constitutionality is seriously in doubt.

Likewise with the Executive Department, Edward Corwin observed that the Court does sometimes rebuff presidential pretensions, but it more often tries to rationalize them.
Against Congress, an Act is merely "disallowed", but in the executive case, exercising judicial review produces "some change in the external world" beyond the ordinary judicial sphere. The "political question" doctrine especially applies to questions which present a difficult enforcement issue. Chief Justice Charles Evans Hughes addressed the Court's limitation when the political process allowed future policy change, but a judicial ruling would "attribute finality". Political questions lack "satisfactory criteria for a judicial determination".

John Marshall recognized that the president holds "important political powers" which, as Executive privilege, allow great discretion. This doctrine was applied in Court rulings on President Grant's duty to enforce the law during Reconstruction. It extends to the sphere of foreign affairs. Justice Robert Jackson explained that foreign affairs are inherently political, "wholly confided by our Constitution to the political departments of the government ... [and] not subject to judicial intrusion or inquiry."

Critics of the Court object in two principal ways to self-restraint in judicial review, deferring as it does as a matter of doctrine to Acts of Congress and presidential actions.

See main article: History of the Supreme Court of the United States.

Supreme Courts under the leadership of subsequent Chief Justices have also used judicial review to interpret the Constitution among individuals, states, and federal branches. Notable contributions were made by the Chase Court, the Taft Court, the Warren Court, and the Rehnquist Court.

Salmon P. Chase was a Lincoln appointee, serving as Chief Justice from 1864 to 1873. His career encompassed service as a U.S. Senator and Governor of Ohio. He coined the slogan "Free Soil, Free Labor, Free Men." One of Lincoln's "team of rivals", he was appointed Secretary of the Treasury during the Civil War, issuing "greenbacks". To appease radical Republicans, Lincoln appointed him to replace Chief Justice Roger B. Taney of Dred Scott case fame. In one of his first official acts, Chase admitted John Rock, the first African American to practice before the Supreme Court. The Chase Court is famous for Texas v. White, which asserted a permanent Union of indestructible states. Veazie Bank v. Fenno upheld the Civil War tax on state banknotes. Hepburn v. Griswold found parts of the Legal Tender Acts unconstitutional, though it was reversed under a later Supreme Court majority.

[Portraits: Salmon P. Chase (Union, Reconstruction); William Howard Taft (commerce, incorporation); Earl Warren (due process, civil rights); William Rehnquist (federalism)]

William Howard Taft was a Harding appointee, serving as Chief Justice from 1921 to 1930. A Progressive Republican from Ohio, he was a one-term President. As Chief Justice, he advocated the Judiciary Act of 1925 that brought the federal district courts under the administrative jurisdiction of the Supreme Court. Taft successfully sought the expansion of Court jurisdiction over non-states such as the District of Columbia and the Territories of Alaska and Hawaii. In 1925, the Taft Court issued a ruling overturning a Marshall Court ruling on the Bill of Rights: in Gitlow v. New York, the Court established the doctrine of "incorporation", which applied the Bill of Rights to the states. Important cases included Board of Trade of City of Chicago v. Olsen, which upheld congressional regulation of commerce.
Olmstead v. United States allowed the admission of evidence obtained by wiretapping without a warrant, holding that the constitutional proscription against unreasonable searches did not extend to wiretaps. Wisconsin v. Illinois ruled that the equitable power of the United States can impose positive action on a state to prevent its inaction from damaging another state.

Earl Warren was an Eisenhower nominee, Chief Justice from 1953 to 1969. Warren's Republican career in the law extended from county prosecutor to California state attorney general to three consecutive terms as Governor. His programs stressed progressive efficiency, expanding state education, re-integrating returning veterans, and infrastructure and highway construction. In 1954, the Warren Court overturned a landmark Fuller Court ruling on the Fourteenth Amendment that had interpreted racial segregation as permissible in government and commerce providing "separate but equal" services. Warren built a coalition of Justices after 1962 that developed the idea of natural rights as guaranteed in the Constitution. Brown v. Board of Education banned segregation in public schools. Baker v. Carr and Reynolds v. Sims established Court-ordered "one man, one vote". Bill of Rights amendments were incorporated against the states. Due process was expanded in Gideon v. Wainwright and Miranda v. Arizona. Privacy rights were addressed in Griswold v. Connecticut, and First Amendment questions in Engel v. Vitale, which concerned officially sponsored school prayer.

William Rehnquist was a Reagan appointee, serving as Chief Justice from 1986 to 2005. While he would concur in overturning a state supreme court's decision, as in Bush v. Gore, he built a coalition of Justices after 1994 that developed the idea of federalism as provided for in the Tenth Amendment. In the hands of the Supreme Court, the Constitution and its amendments were to restrain Congress, as in City of Boerne v. Flores. Nevertheless, the Rehnquist Court was noted in the contemporary "culture wars" for overturning state laws relating to privacy that prohibited late-term abortions in Stenberg v. Carhart and sodomy in Lawrence v. Texas, and for rulings protecting free speech in Texas v. Johnson and affirmative action in Grutter v. Bollinger.

See main article: American civil religion.

There is a viewpoint that some Americans have come to see the documents of the Constitution, along with the Declaration of Independence and the Bill of Rights, as being a cornerstone of a type of civil religion. This is suggested by the prominent display of the Constitution, along with the Declaration of Independence and the Bill of Rights, in massive, bronze-framed, bulletproof, moisture-controlled, vacuum-sealed glass containers in a rotunda by day and in multi-ton bomb-proof vaults by night at the National Archives Building. The idea of displaying the documents struck one academic critic, looking from the point of view of 1776 or 1789 America, as "idolatrous, and also curiously at odds with the values of the Revolution". By 1816, Jefferson wrote that "[s]ome men look at constitutions with sanctimonious reverence and deem them like the Ark of the Covenant, too sacred to be touched". But he saw imperfections and imagined that there could potentially be others, believing as he did that "institutions must advance also". Some commentators depict the multi-ethnic, multi-sectarian United States as held together by a political orthodoxy, in contrast with a nation state of people having more "natural" ties.

See main article: United States Constitution and worldwide influence.
[Portraits: José Rizal; Sun Yat-sen]

The United States Constitution has been a notable model for governance around the world. Its international influence is found in similarities of phrasing and borrowed passages in other constitutions, as well as in the principles of the rule of law, separation of powers, and recognition of individual rights. The American experience of fundamental law with amendments and judicial review has motivated constitutionalists at times when they were considering the possibilities for their nation's future. It informed Abraham Lincoln during the American Civil War, his contemporary and ally Benito Juárez of Mexico, and the second generation of 19th-century constitutional nationalists, José Rizal of the Philippines and Sun Yat-sen of China. Since the latter half of the 20th century, the influence of the United States Constitution may be waning as other countries have revised their constitutions with new influences.

The United States Constitution has faced various criticisms since its inception in 1787. The Constitution did not originally define who was eligible to vote, allowing each state to determine who was eligible. In the early history of the U.S., most states allowed only white male adult property owners to vote. Until the Reconstruction Amendments were adopted between 1865 and 1870, the five years immediately following the Civil War, the Constitution did not abolish slavery, nor give citizenship and voting rights to former slaves. These amendments did not include a specific prohibition on discrimination on the basis of sex; it took another amendment – the Nineteenth, ratified in 1920 – for the Constitution to prohibit any United States citizen from being denied the right to vote on the basis of sex.
Learning theories describe the conditions and processes through which learning occurs, providing teachers with models to develop instruction sessions that lead to better learning. These theories explain the processes that people engage in as they make sense of information, and how they integrate that information into their mental models so that it becomes new knowledge. Learning theories also examine what motivates people to learn, and what circumstances enable or hinder learning.

Sometimes people are skeptical of having to learn theory, believing those theories will not be relevant in the real world, but learning theories are widely applicable. The models and processes that they describe tend to apply across different populations and settings, and provide us with guidelines to develop exercises, assignments, and lesson plans that align with how our students learn best. Learning theories can also be engaging. People who enjoy teaching often find the theories interesting and will be excited when they start to see connections between the theory and the learning they see happening in their own classrooms.

General Learning Theories

With a basic understanding of learning theories, we can create lessons that enhance the learning process. This understanding helps us explain our instructional choices, or the “why” behind what and how we teach. As certain learning theories resonate with us and we consciously construct lessons based on those theories, we begin to develop a personal philosophy of teaching that will guide our instructional design going forward. This chapter provides a bridge from theory to practice by providing specific examples of how the theories can be applied in the library classroom. These theories provide a foundation to guide the instructional design and reflective practices presented in the rest of this textbook.

As you read, you might consider keeping track of the key points of each theory and thinking about how these theories could be applied to your practice. Figure 3.1 provides you with an example of a graphic organizer, one of the instructional materials that will be discussed in Chapter 11, that you could use to take notes as you read this chapter. In addition to the examples in practice that are provided in this chapter, you might add some of your own.

Figure 3.1: Graphic Organizer for Major Learning Theories

Behaviorism is based largely on the work of John B. Watson and B. F. Skinner. Behaviorists were concerned with establishing psychology as a science and focused their studies on behaviors that could be empirically observed, such as actions that could be measured and tested, rather than on internal states such as emotions (McLeod, 2015). According to behaviorists, learning is dependent on a person’s interactions with their external environment. As people experience consequences from their interactions with the environment, they modify their behaviors in reaction to those consequences. For instance, if a person hurts their hand when touching a hot stove, they will learn not to touch the stove again, and if they are praised for studying for a test, they will be likely to study in the future.

According to behavioral theorists, we can change people’s behavior by manipulating the environment in order to encourage certain behaviors and discourage others, a process called conditioning (Popp, 1996). Perhaps the most famous example of conditioning is Pavlov’s dog.
In his classic experiment, Pavlov demonstrated that a dog could be conditioned to associate the sound of a bell with food, so that eventually the dog would salivate whenever it heard the bell, regardless of whether it received food. Watson adapted stimulus conditioning to humans (Jensen, 2018). He gave an 11-month-old baby a rat, and the baby seemed to enjoy playing with it. Over time, Watson caused a loud, unpleasant sound each time he brought out the rat. Eventually, the baby associated the rat with the noise and cried when he saw the rat. Although Watson’s experiment is now considered ethically questionable, it did establish that people’s behavior could be modified through control of environmental stimuli.

Skinner (1938) examined how conditioning could shape behavior in longer-term and more complex ways by introducing the concept of reinforcement. According to Skinner, when people receive positive reinforcement, such as praise and rewards for certain behaviors, those behaviors are strengthened, while unwanted behaviors can be deterred through punishment. By carefully controlling the environment and establishing a system of reinforcements, teachers, parents, and others can encourage and develop desired behaviors (Jensen, 2018).

A simple example of behaviorism in the classroom is a point system in which students are awarded points for good behavior and lose points for unwanted behavior. Eventually, accumulated points might be traded in for rewards like small gifts or homework passes. This approach assumes that motivation is external, in that students will engage in certain behaviors in order to gain the rewards.

Because it emphasizes the external environment, behaviorism largely ignores or discounts the role of internal influences such as prior knowledge and emotion (Popp, 1996). To an extent, behaviorists view learners as blank slates and emphasize the role of the teacher in the classroom. In this teacher-centered approach, instructors hold the knowledge, decide what will be learned, and establish the rewards for learning. Since their experience and prior knowledge are not considered relevant, learners are passive participants simply expected to absorb the knowledge transmitted by the teacher.

While the idea of learners as blank slates has fallen out of favor, many of the conditioning aspects of behaviorism remain popular. As almost any student can attest, behavioral methods of reinforcement, such as the point system described above, are still common, especially in the younger grades. Recent trends toward gaming in the classroom, where certain behaviors are rewarded with points and leveling up, are based in a behaviorist approach to learning. See Activity 3.1 for a brief activity on behaviorism.

Activity 3.1: Reflecting on Behaviorism

Think of some of your own learning experiences, whether they were in a traditional classroom, through professional development training, or related to personal interests, such as dance or photography lessons. Try to identify a few examples of behaviorism from those experiences and reflect on the following questions:

- How did your instructors use behavioral practice in their classrooms?
- Did you find those practices motivating? Why or why not?
- If you can think of examples of behaviorism from several different learning experiences, were they more appropriate in some situations than others? How so?
- Have you ever used, or can you imagine using, behaviorism in your own teaching practice? How so?
Humanism recognizes the basic dignity and worth of each individual and believes people should be able to exercise some control over their environment. Although humanism as an educational philosophy has its roots in the Italian Renaissance, the more modern theorists associated with this approach include John Dewey, Carl Rogers, Maria Montessori, Paulo Freire, and Abraham Maslow. Humanist learning theory is a whole-person approach to education that centers on the individual learners and their needs, and that considers affective as well as cognitive aspects of learning. At its essence, “humanism in education traditionally has referred to a broad, diffuse outlook emphasizing human freedom, dignity, autonomy, and individualism” (Lucas, 1996). Within this broader context, humanism is also characterized by the following tenets (Madsen & Wilson, 2012; Sharp, 2012):

- Students are whole people, and learning must attend to their emotional as well as their cognitive state.
- Teachers should be empathetic.
- Learners are self-directed and internally motivated.
- The outcome of learning is self-actualization.

Humanism centers the individual person as the subject and recognizes learners as whole beings with emotional and affective states that accompany their cognitive development. Recognizing the role of students’ emotions means understanding how those emotions impact learning. Student anxiety, say around a test or a research paper, can interfere with the cognitive processes necessary to be successful. Empathetic teachers recognize and try to understand students’ emotional states, taking steps to alleviate negative emotions that might detract from learning by creating a supportive learning environment.

In a library context, Mellon (1986) identified the phenomenon of library anxiety, or the negative emotions that some people experience when doing research or interacting with library tools and services. This anxiety can distract learners and make it difficult to engage in the processes necessary to search for, evaluate, and synthesize the information they need to complete their task. Similarly, in her Information Search Process, Kuhlthau (1990) describes the affective states as well as the cognitive processes students engage in when doing research, acknowledging that their emotions fluctuate among anxiety, optimism, and, ultimately, satisfaction or disappointment. A humanist approach to education recognizes these affective states and seeks to limit their negative impact. For instance, we can acknowledge that feelings of anxiety are common so learners recognize that they are not alone. We can also explain how the skills students learn are relevant to their lives in and outside of the classroom.

Because humanists see people as autonomous beings, they believe that learning should be self-directed, meaning students should have some choice in what and how they learn. Humanistic education is often connected with student-centered pedagogical approaches such as differentiated curricula, self-paced learning, and discovery learning (Lucas, 1996). Self-directed learning can take many forms, but it generally means that the instructor acts as a guide, and learners are given the freedom to take responsibility for their own learning. Teachers will provide the materials and opportunities for learning, but students will engage with the learning on their own terms.
In a library classroom, we can give students choices about the topics they will research or offer learners different types of activities to practice skills and demonstrate what they have learned.

Humanists also believe that learning is part of a process of self-actualization. They maintain that learning should be internally motivated and driven by students’ interests and goals, rather than externally motivated and focused on a material end goal such as achievement on tests or employment (Sharp, 2012). The expectation is that when students are allowed to follow their interests and be creative, and when learning takes place within a supportive environment, students will engage in learning for its own sake.

This emphasis on self-actualization is largely based on Maslow’s (1943) hierarchy of needs. Maslow identified five levels of needs: basic physiological needs such as food, water, and shelter; safety and security needs; belongingness and love needs, including friends and intimate relationships; esteem needs, including feelings of accomplishment; and self-actualization, when people achieve their full potential. Importantly, these needs are hierarchical, meaning a person cannot achieve the higher needs such as esteem and self-actualization until more basic needs such as food and safety are met. The role of the humanist teacher is to facilitate the student’s self-actualization by helping to ensure needs such as safety and esteem are met through empathetic teaching and a supportive classroom.

In his book Pedagogy of the Oppressed, Freire (2000) brings together many of the student-centered elements of humanistic education, with a strong emphasis on the social justice aspects of learning and teaching. In contrast to behaviorist approaches, Freire emphasizes the importance of students’ life experience to their learning. He criticizes what he describes as the “banking model” of education, in which students are viewed as passive and empty vessels into which teachers simply deposit bits of knowledge that students are expected to regurgitate on exams or papers without any meaningful interaction. Freire insists that learning must be relevant to the student’s life and that the student should be an active participant in order for learning to be meaningful. Freire also emphasized the emancipatory role of education, arguing that its purpose was for learners to gain agency to challenge oppressive systems and improve their lives, and stressed praxis, in which learners put abstract and theoretical knowledge into practice in the real world.

While a student-centered approach and choice can be introduced in any classroom, observers note that in an age of curriculum frameworks and standardized tests, where teachers are often constrained by the material, the ability to provide students with choice and allow for exploration is limited (Sharp, 2012; Zucca-Scott, 2010). Librarians often face similar constraints. School librarians must meet state and district curriculum standards, and academic librarians generally depend on faculty invitations to conduct instruction and need to adapt their sessions to fit the content, time frame, and learning objectives of the faculty member. Nevertheless, we can always find ways to integrate some self-direction. For instance, rather than using planned examples to demonstrate searches, we might have students suggest topics to search.
If we plan hands-on practice activities, we could allow learners to explore their own interests as they engage in the activity, rather than limiting them to preselected topics.

Cognitivism, or cognitive psychology, was pioneered in the mid-twentieth century by scientists including George Miller, Ulric Neisser, and Noam Chomsky. Whereas behaviorists focus on the external environment and observable behavior, cognitive psychologists are interested in mental processes (Codington-Lacerte, 2018). They assert that behavior and learning entail more than just response to environmental stimuli and require rational thought and active participation in the learning process (Clark, 2018). To cognitivists, learning can be described as “acquiring knowledge and skills and having them readily available from memory so you can make sense of future problems and opportunities” (Brown et al., 2014, p. 2).

Cognitivists view the brain as an information processor somewhat like a computer that functions on algorithms that it develops in order to process information and make decisions. According to cognitive psychology, people acquire and store knowledge, referred to as schema, in their long-term memory. In addition to storing knowledge, people organize their knowledge into categories, and create connections across categories or schema that help them retrieve relevant pieces of information when needed (Clark, 2018). When individuals encounter new information, they process it against their existing knowledge or schema in order to make new connections. Cognitivists are interested in the specific functions that allow the brain to store, recall, and use information, as well as in mental processes such as pattern recognition and categorization, and the circumstances that influence people’s attention (Codington-Lacerte, 2018).

Because cognitivists view memory and recall as the key to learning, they are interested in the processes and conditions that enhance memory and recall. According to cognitive psychology research, traditional methods of study, including rereading texts and drilling practice, or the repetition of terms and concepts, are not effective for committing information to memory (Brown et al., 2014). Rather, cognitivists assert that activities that require learners to recall information from memory, sometimes referred to as “retrieval practice,” lead to better memory and ultimately better learning. For example, they suggest that language learners use flash cards to practice vocabulary words, rather than writing the words out over and over or reading and rereading a list of words, because the flash cards force the learner to recall information from memory.

While testing has fallen out of favor with many educators and education theorists, cognitivists find tests can be beneficial as both a retrieval practice and a diagnostic tool. They view tests not only as a way to measure what has been learned but as a way to practice retrieval of important concepts, and as a way to identify gaps or weaknesses in knowledge so that learners know where to concentrate their efforts (Brown et al., 2014). Cognitivists encourage “spaced practice,” or recalling previously learned information at regular intervals, and “interleaving,” or learning related concepts together to establish connections among them. Their research has found that retrieval is more effective when the brain is forced to recall information after some time has passed, and when the recall involves two or more related subjects or concepts.
Finally, cognitivists also promote problem-based learning, maintaining that “trying to solve a problem before being taught the solution leads to better learning, even when errors are made in the attempt” (Brown et al., 2014, p. 4).

These processes that enhance memory and recall, and thus learning, have some implications for instructors in creating an optimal environment for learning. Gagné (1985) proposed nine conditions for learning, referred to as the external conditions of learning, or the nine events of instruction:

- Gain attention. Engage students’ attention by tying learning to relevant events in their lives and asking stimulating questions.
- Inform the learner of the objective. Begin by sharing the learning goals with the students, thus setting expectations and providing a map of the learning.
- Stimulate recall of prior learning. Encourage students to remember previously learned relevant skills and knowledge before introducing new information.
- Present the stimulus. Share new information. This step depends on the content of the lesson. For instance, a lesson on Boolean operators might begin with a Venn diagram and examples of the uses of and, or, and not.
- Provide learner guidance. Facilitate learning by demonstration and explanation.
- Elicit performance. Allow time for students to practice skills and demonstrate their abilities. Ideally, students would be given low-stakes opportunities for practice, so they feel comfortable if they do not succeed immediately.
- Provide feedback. Offer students input on what they are doing well and where they can improve.
- Assess performance. Employ measures such as assignments, activities, and projects to gauge whether learning has occurred.
- Enhance retention and transfer. Give students opportunities to practice skills in new contexts, which improves retention and helps students see how the skills are applied to different areas.

Cognitivism remains a popular approach to learning. However, one criticism of cognitive psychology is that, unlike humanism, it does not account for the role of emotions in learning (Codington-Lacerte, 2018). Further, some critics believe that cognitivism overemphasizes memorization and recall of facts to the detriment of higher-order skills such as creativity and problem solving. Cognitivists counter that the ability to recall facts and concepts is essential to higher-order thinking, and therefore the two are not mutually exclusive but actually interdependent (Brown et al., 2014). Finally, cognitivism is considered teacher-centered, rather than learner-centered, since it emphasizes the role of the instructor in organizing learning activities and establishing the conditions of learning (Clark, 2018). Activity 3.2 is a brief exercise on cognitivism.

Activity 3.2: Reflecting on Cognitivism

Cognitive scientists recommend retrieval practice, including spaced practice and interleaving, over drilling. Questions for Reflection and Discussion:

- What kind of study practices do you tend to use? Do your practices vary depending on the content or material you are studying? How so?
- Can you think of ways to integrate retrieval practices into your work for this class?
- Spaced practice involves returning to previously learned concepts at later times, but information professionals often teach one-shot sessions. Can you think of ways to integrate spaced practice into a one-shot session?

Constructivism posits that individuals create knowledge and meaning through their interactions with the world.
Like cognitivism, and as opposed to behaviorism, constructivism acknowledges the role of prior knowledge in learning, believing that individuals interpret what they experience within the framework of what they already know (Kretchmar, 2019a). Social constructs, such as commonly held beliefs and shared expectations around behavior and values, provide a framework for knowledge, but people “do not just receive this knowledge as if they were empty vessels waiting to be filled. Individuals and groups interact with each other, contributing to the common trove of information and beliefs, reaching consensus with others on what they consider is the true nature of identity, knowledge, and reality” (Mercadal, 2018).

Cognitivism and constructivism overlap in a number of ways. Both approaches build on the theories of Jean Piaget, who is sometimes referred to as a cognitive constructivist. However, while cognitivism is considered teacher-centered, constructivism centers the learner by recognizing their role in engaging with content and constructing meaning. Constructivist teachers act as guides or coaches, facilitating learning by developing supportive activities and environments, and building on what students already know (Kretchmar, 2019b).

Piaget discusses the concepts of assimilation, accommodation, and disequilibrium to describe how people create knowledge. In his early work as a biologist, Piaget noticed how organisms would adapt to their environment in order to survive. Through such adaptation, the organism achieved equilibrium. Extending these observations to cognitive science, he posited that human beings also seek equilibrium (Kretchmar, 2019a). When they encounter new situations, or new information, human beings must find a way to deal with the new information. Similar to the processes described in the section on cognitivism, people will examine their existing knowledge, or schema, to see if the new information fits into what they already know. If it does, they are able to assimilate the information relatively easily. However, if the new information does not fit into what people already know, they experience disequilibrium or cognitive conflict, and must adapt by accommodating the new information. For example, once children learn what a dog is, they might call any four-legged creature they see a dog. This is assimilation, as the children are fitting new information into their existing knowledge. However, as children learn the differences between, say, a dog and a cat, they can adjust their schema to accommodate this new knowledge (Heick, 2019).

Disequilibrium and accommodation can be uncomfortable. People might be confused or anxious when they encounter information that does not fit their existing schema, and they might struggle to accommodate that new information, but disequilibrium is crucial to learning (Kretchmar, 2019a). During assimilation, people might be adding new bits of information to their knowledge store, but they are not changing their understanding of the world. During accommodation, as people change their schema, construct new knowledge, and draw new connections among existing areas of knowledge, actual learning occurs, and accommodation requires disequilibrium.

Acknowledging the role of disequilibrium is important for both instructors and students. People naturally want to avoid discomfort, but that can also mean avoiding real learning.
As instructors, we can facilitate accommodation by acknowledging that the process might be challenging, and by creating conditions that allow students to feel safe exploring new information. We can reassure learners that feelings of discomfort or anxiety are normal and provide them with low-stakes opportunities to engage with new information. Social constructivism builds on the traditions of constructivism and cognitivism; whereas those theories focus on how individuals process information and construct meaning, social constructivists also consider how people’s interactions with others impact their understanding of the world. Social constructivists recognize that different people can have different reactions and develop different understandings from the same events and circumstances, and are interested in how factors such as identity, family, community, and culture help shape those understandings (Mercadal, 2018). While cognitivists and constructivists view other people as mostly incidental to an individual’s learning, social constructivists see community as central. Social constructivism can be defined as “the belief that the meanings attached to experience are socially assembled, depending on the culture in which the child is reared and on the child’s caretakers” (Schaffer, 2006). Like constructivism, social constructivism centers on the learners’ experiences and engagement, and sees the role of the instructor as a facilitator or guide. Two of the major theorists associated with social constructivism are Pierre Bourdieu and Lev Vygotsky. Vygotsky built on the work of Piaget and believed that knowledge is constructed, but felt that prior theories overemphasized the role of the individual in that construction of knowledge. Instead, he “was most interested in the role of other people in the development and learning processes of children,” including how children learn in cooperation with adults and older or more experienced peers who can guide them with more complex concepts (Kretchmar, 2019b). Vygotsky was also interested in how language and learning are related. He postulated that the ways in which people communicate their thoughts and understandings, even when talking themselves through a concept or problem, are a crucial element of learning (Kretchmar, 2019b). For Vygotsky, interaction and dialogue among students, teachers, and peers are key to how learners develop an understanding of the world and of the socially constructed meanings of their communities. Bourdieu examined the way in which social structures influence people’s values, knowledge, and beliefs, and how these structures often become so ingrained as to be invisible. People within a society become so enculturated into the systems and beliefs of that society that they often accept them as “normal” and do not see them as imposed structures (Roth, 2018). As a result, individuals might not question or challenge those structures, even when they are unfair or oppressive. In addition to examining how community and culture help shape knowledge, Bourdieu was interested in how issues of class impact learning. He observed that over time, schools developed to reflect the cultures of wealthier families, which enabled their children to succeed because they inherently understood the culture of the classroom and the system of education.
We continue to see such issues today, and as discussed more in Chapter 5 and Chapter 6, part of our critical practice is to ensure that our classrooms and instructional strategies are inclusive of and responsive to all students. Activity 3.3 explores how we can use theory to guide our practice. Activity 3.3: Using Learning Theory to Plan Lessons While learning theories can be interesting on their own, our goal as instructors is to apply them to classroom practice. Imagine that you are a high school librarian working with a class that has just been assigned a research paper. Your goal for this session is for students to brainstorm keywords and synonyms for their topics, and to learn how to string those words together using the Boolean operators and, or, and not. You want to be sure the students understand the function of the Boolean operators and can remember how to use them for future searches. Choose one of the learning theories outlined in this chapter and design a brief lesson to teach Boolean operators from the perspective of that theory. Concentrate less on what you would teach and more on how you would teach it in keeping with the chosen theory: - How would you introduce the topic? - What sort of learning activities would you use? - What would you be doing during the lesson? What would you expect students to do? - How might any of your answers to these questions change if you were to use a different theory as your guide? The learning theories outlined above discuss various cognitive processes involved in learning, as well as some of the motivators and conditions that facilitate learning. While these theories attempt to describe how people learn, it is important to note that individuals are not born ready to engage in all of these processes at once, nor do they necessarily all engage in the same processes at the same time. Rather, more complex processes develop over time as people experience the world and as their brain matures. In addition to studying how people learn, some theorists have also proposed theories or frameworks to describe developmental stages, or the various points in human development when different cognitive processes are enabled, and different kinds of learning can occur. Piaget outlined four hierarchical stages of cognitive development: sensorimotor, preoperational, concrete operational, and formal operational (Clouse, 2019), illustrated in Table 3.1. In the sensorimotor stage, from birth to about two years, infants react to their environment with inherent reflexes such as sucking, swallowing, and crying. By about age two, they begin problem solving using trial and error. The preoperational stage, also sometimes called the intuitive intelligence stage, lasts from about ages two to seven. During this time, children develop language and mental imagery. They are able to use their imagination, but they view the world only from their own perspective and have trouble understanding other perspectives. Their understanding of the world during this stage is tied to their perceptions. Children are in the concrete operational stage from about ages seven to 12, during which time they begin to think more logically about the world, can understand that objects are not always as they appear, and begin to understand other people’s perspectives. The final stage, the formal operational stage, begins around age 12.
At this point, individuals can think abstractly and engage in ideas that move beyond the concrete world around them, and they can use deductive reasoning and think through consequences (Clark, 2018; Clouse, 2019).

Table 3.1: Piaget’s Four Stages of Cognitive Development

| Stage | Age Range | Behaviors and Abilities |
| --- | --- | --- |
| Sensorimotor | Birth to 18-24 months | Reacts to the environment with inherent reflexes; begins trial-and-error problem solving |
| Preoperational | 18-24 months to 7 years | Develops language and mental imagery; uses imagination but views the world only from own perspective |
| Concrete operational | 7 to 12 years | Thinks more logically; understands that objects are not always as they appear; begins to understand other perspectives |
| Formal operational | 12 years and up | Thinks abstractly; uses deductive reasoning; thinks through consequences |

Perry’s (1970) Scheme of Intellectual and Ethical Development offers another useful framework for understanding the developmental stages of learning. Perry proposed four stages of learning. In the first stage, dualism, children generally believe that all problems can be solved, and that there are right and wrong answers to each question. At this stage, children generally look to instructors to provide them with correct answers. The second stage is multiplicity, where learners realize that there are conflicting views and controversies on topics. Learners in the multiplicity stage often have trouble assessing the authority and credibility of arguments. They tend to believe that all perspectives are equally valid and rely on their own experiences to form opinions and decide what information to trust. In the next stage, referred to as relativism, learners begin to understand that there are different lenses for understanding and evaluating information. They learn that different disciplines have their own methods of research and analysis, and they can begin to apply these perspectives as they evaluate sources and evidence. At this point, learners can understand that not all answers or perspectives are equal, but that some answers or arguments might be more valid than others. In the final stage, commitment, students integrate selected information into their knowledge base. You might notice connections between Perry and the cognitivists and constructivists described above in the way they each describe people making sense of information by comparing new information to existing knowledge. However, Perry organizes the processes into developmental stages that outline a progression of learning. Understanding the stages laid out by Piaget and Perry, we can develop lessons that are appropriate to learners at each stage. For example, in presenting a lesson on climate change to preoperational students using Piaget’s framework, an instructor could gather pictures of different animal habitats, or take children on a nature walk to observe the surrounding environment. Instructors could ask these children to describe what they see and reflect on their personal experiences with weather, while older children could be asked to imagine how the changes are impacting other people and organisms, anticipate consequences of the impact of climate change, and perhaps use problem solving to propose steps to improve their environment. Considering Perry’s Scheme, instructors might guide students from multiplicity to relativism by explaining scientific methods for measuring climate, and challenging learners to evaluate and compare different sources of information to determine which presents the strongest evidence. Piaget and Perry offer developmental models that outline stages broadly aligned with a person’s age. Both models assume a relatively linear chronological development, with children and young adults passing through different stages at roughly the same time.
Vygotsky, on the other hand, describes a model that focuses more on the content being mastered rather than the age of the student. According to Vygotsky’s theory, known as the Zone of Proximal Development (ZPD), as learners acquire new knowledge or develop new skills, they pass through three stages, often illustrated as concentric circles, as in Figure 3.2. The center circle, or first zone, represents tasks that the learner can do on their own. The second zone, or the Zone of Proximal Development, represents an area of knowledge or set of tasks that the learner can accomplish with assistance. The tasks and knowledge in this zone require students to stretch their abilities somewhat beyond their current skill level but are not so challenging as to be completely frustrating. The outermost circle, or third zone, represents tasks that the learner cannot yet do. Vygotsky posits that by working within the ZPD, learners can continue to grow their skills and abilities and increase their knowledge (Flair, 2019). Figure 3.2: The Zone of Proximal Development Whereas Piaget’s and Perry’s theories suggest that learners pass through the same stages at roughly the same time, Vygotsky maintains that the ZPD, or the zone of learning that will appropriately challenge the learner, is different for each student, depending on their background knowledge, experience, and ability (Flair, 2019). The same individual can experience different ZPDs in different subject areas; they might be advanced in math and able to take on material above their grade level but might find languages more challenging. As with social constructivism, interaction with others is central to the ZPD. According to Vygotsky, learning takes place when students interact with others who are more knowledgeable, including peers and instructors, who can provide guidance in the ZPD (Schaffer, 2006). Math can provide a good example of working within the ZPD. Once students are comfortable with addition, they can probably learn subtraction with some help from a teacher or other peers but are probably not ready to learn long division. Our challenge as instructors is to identify the ZPD for each student so that we are neither boring learners with material that is too easy nor overwhelming them with material that is too hard. Chapter 7 discusses methods for assessing learners’ background knowledge to help determine the appropriate level of learning. Most of the educational theories and frameworks outlined in this chapter were developed with a focus on children and young adults. While many of the principles can apply to an adult audience, they do not necessarily account for the specific issues, challenges, and motivations of adult learners. Yet, many information professionals will work mostly or even exclusively with adults. Academic librarians and archivists largely work with students who are at least 17 years old and, as the numbers of nontraditional students continue to increase, will find themselves increasingly working with older learners. Likewise, information professionals in corporations and medical and legal settings work almost exclusively with adults. Public librarians see a range of patrons, and many public libraries are increasing educational programming for their adult patrons. This section presents the educational concept of andragogy, which addresses teaching and learning for adults. Knowles proposed andragogy as “the art and science of helping adults learn” (1988, p. 43).
Andragogy is based on a set of assumptions about the ways in which adult learners’ experience, motivations, and needs differ from those of younger students, and suggests that traditional classroom approaches developed with younger students in mind will not necessarily be successful with adult learners. Perhaps one of the biggest differences between child and adult learners, according to Knowles (1988), is that adults are interested in the immediate applicability of what they are learning and are often motivated by their social roles as employees, parents, and so on. As Knowles notes, in traditional classrooms, children are usually taught discrete subjects like math, reading, and history, and their learning is focused on building up knowledge for the future. Young students might not use geometry in their everyday lives, but it forms a foundation for more complex math and for future job or life tasks like measuring materials for home repairs. Adults, on the other hand, are already immersed in the social roles for which younger students are only preparing, and they want to see how their learning applies to those roles. Thus, Knowles suggests that adults will be interested in a competency-based, rather than a subject-based, approach to learning. Further, as autonomous individuals, adults are likely to be more self-directed in their learning. That is, they will want to, and should be encouraged to, take an active part in the design and planning of lessons, providing input on content and goals. Finally, Knowles also argues that adults’ wider experience and larger store of knowledge should be a resource for learning. Knowles (1988, p. 45) organized his approach around four assumptions of adult learners: - Their self-concept moves from one of being a dependent personality toward a self-directed human being. - They accumulate a growing reservoir of experience that becomes an increasingly rich resource for learning. - Their readiness to learn becomes oriented increasingly to the developmental tasks of their social roles. - Their time perspective changes from one of postponed application of knowledge to immediacy of application, and, accordingly, their orientation toward learning shifts from one of subject-centeredness to one of performance-centeredness. Later, he elaborated with two additional assumptions, summed up by Merriam et al. (2007): - The most potent motivations are internal rather than external. - Adults need to know why they need to learn something. Certain understandings follow from Knowles’ assumptions that we can use to guide our practice with adult learners. To begin with, we should recognize and respect adults’ tendency to be self-motivated and self-directed learners. After all, in most states, school attendance is compulsory up to a certain age, and relatively strict curriculum standards are set by each state, meaning that children have little choice about attending school in some form or about what content they learn. At least in theory, adults have a choice about whether to attend college or engage in other kinds of learning opportunities such as workshops and professional development and continuing education courses. Presumably, adults are motivated to pursue these opportunities for a specific reason, whether out of personal curiosity, to advance in their careers, or to gain a new skill. 
These adult learners will likely have opinions and ideas about what they want to learn and perhaps even how they want to engage with the content, so Knowles suggests we provide adult learners with choices and opportunities for input to help shape the curriculum. Adult learners also have a larger store of knowledge and experience than their younger counterparts. From a cognitivist or constructivist point of view, adults have a larger schema against which to compare new information and make new connections. As instructors, we should recognize this store of knowledge and find ways to integrate it into the classroom, by providing ample opportunity for reflection and using guiding questions to encourage learners to draw on that knowledge. We can approach adult learners as peers or co-learners, acting more as coaches or facilitators in the learning process than as the more directive teacher associated with a traditional school classroom. This focus on learner-centered approaches and a democratic environment overlaps with humanistic and constructivist approaches to teaching. Points three, four, and six in Knowles’ list of assumptions underscore the importance of relevance and transparency for adult learners. Knowles suggests that adults have different priorities in learning, perhaps in part because they are learning by choice and are in a better position to direct their own learning. Adult learners also tend to have more demands on their time than younger students; they may have families and jobs that impact the time they have to devote to their studies. Thus, adult learners want to see the applicability of what they are learning and might be resistant to work or information that seems incidental. We should be transparent with our adult students, both about what they will learn and how that learning is important and relevant. Sharing learning goals is an important step toward transparency, as it can help set expectations so that students understand the purpose of the lesson and activities. To illustrate relevance, we can provide concrete examples of how the learning can be applied in practice. One could argue that all students, not just adults, deserve transparency and to see the relevance of lesson goals and learning. Knowles’ point is that adults are more likely to expect, and perhaps appreciate, such transparency. While some controversy exists over whether andragogy really constitutes a theory per se or is more a set of guiding principles or best practices, the assumptions provide helpful guidance to instructors not just in how they organize content but also in how they frame the lesson and its purposes. Based on these assumptions, we can take certain steps to set an appropriate environment for adult education (Bartle, 2019): - Set a cooperative learning climate. - Create mechanisms for input. - Arrange for a diagnosis of learner needs and interests. - Enable the formulation of learning objectives based on the diagnosed needs and interests. - Design sequential activities for achieving the objectives. - Execute the design by selecting methods, materials, and resources. - Evaluate the quality of the learning experience while rediagnosing needs for further learning. As noted above, andragogy overlaps with other theories such as humanism and constructivism, and some of the principles of andragogy, like transparency, would benefit all learners. 
Still, this framework is useful in reminding instructors that adult learners likely have different priorities and motivations, and thus some differences in classroom approach might be warranted. In addition to how people learn, we should also know something about why people learn. What motivates a student to put the time and effort into learning a skill or topic, and what can we do to cultivate that motivation? Svinicki (2004) offers an intriguing model that amalgamates some of the prevailing theories of motivation in learning. She suggests that motivation is a factor of the perceived value of the learning, along with students’ belief in their own self-efficacy, or their belief in their ability to achieve the goal. As Svinicki explains, “motivation involves a constant balancing of these two factors of value and expectations for success” (2004, p. 146). Most of the learning theories outlined above address motivation implicitly or explicitly. For instance, behaviorists talk in terms of reinforcement, or external motivators, as students strive to avoid negative consequences and achieve the rewards of good work. Humanists, on the other hand, focus on the internal motivation of self-actualization. As instructors, we can create environments to increase our learners’ motivation or their perception of the value of the goal and their self-efficacy: - Emphasize the relevance of the material. As outlined in the section on andragogy, learners are motivated when they see the benefits of learning and understand why the material is important. Instructors should explain how the effort individuals put into learning can help them achieve personal goals, such as getting a good grade on a paper or finding a job. - Make the material appropriately challenging. Reminiscent of the Zone of Proximal Development, material that is too easy will be boring for learners, while material that is too challenging will be overwhelming and frustrating. - Give learners a sense of choice and control. Choice allows learners to have a stake in the class, while control helps them determine the level of risk they will take and thus increase their confidence. We can foster choice and control by allowing learners options in the types of activities and assignments they engage in, or in the topics they research. - Set learners up for success. Clear expectations for the class or the assignment help learners understand what a successful performance or project looks like. By providing meaningful feedback, we can guide learners toward success. - Guide self-assessment. When learners accurately assess their current level of knowledge and skill, they can make reasonable predictions of the likelihood of their success with the current material. Activity 3.4 offers an opportunity to reflect on motivation in learning. Activity 3.4: What Motivates You? Think back on learning experiences such as courses or workshops where you felt more or less motivated as a learner. These experiences could be related to academics, hobbies, sports, or other interests. Questions for Reflection and Discussion: - In the experiences in which you felt motivated, what steps did the instructor take that helped you feel motivated? - In the experiences where you felt less motivated, what could the instructor have done differently? - In each case, what role did self-efficacy, or your confidence in your own abilities, play? 
Dweck’s (2016) mindset theory has gained much attention in the field of education over the last few decades and has some implications for student motivation. Although this theory is somewhat different in its conceptualizations than those described in the rest of this chapter, it is included here both because of its popularity and because it provides interesting insight into how instructors can coach learners to understand and build on their potential. Dweck’s theory is less about how people learn and more about how their attitude toward learning and their self-concept can impact their ability and willingness to learn. According to Dweck, people tend to approach learning with a fixed mindset or a growth mindset. Those with more of a fixed mindset tend to believe that ability is innate; either people are born with a certain talent and ability, or they are not. If individuals are not born with natural ability in a certain area, they would waste time working on that area because they will never truly be successful. People with more of a growth mindset, on the other hand, tend to believe that ability is the outcome of hard work and effort. These people see value in working at areas in which they are not immediately successful because they believe they can improve. Even when they are good at something, they are willing to continue to work at it because they believe they can continue to get better (Dweck, 2016). These mindsets can have a profound impact on how a person approaches learning (Dweck, 2016). People with a fixed mindset will view low grades or poor test performance as a sign of their lack of natural ability and are likely to become discouraged. They might try to avoid that subject altogether or resign themselves to failure because they do not believe that practice or study will help them improve. Instead, they will tend to stick to subjects in which they already perform well. People with a growth mindset take an opposite view. They tend to view low grades or poor performance as a diagnostic tool that helps them see where they need to concentrate their efforts in order to get better. They are willing to put in extra effort because they believe that their hard work will lead to improved performance. They are also willing to take risks because they understand that failure is just part of the process of learning. We can see connections between Dweck’s theory and Piaget’s argument that the discomfort of disequilibrium is necessary to learning. Understandably, people with a growth mindset are usually more successful learners because they believe in their own ability to learn and grow. Luckily, Dweck maintains that these mindsets themselves are not necessarily immutable. That is, a person with a fixed mindset can be coached to adopt a growth mindset. Learners can begin by recognizing when they are engaging in fixed mindset thinking, for instance when getting anxious about mistakes or telling themselves that they are “no good” at something. Once learners understand that this thinking is counterproductive, they can change their thinking to adopt a more encouraging voice. Importantly, Dweck notes that encouraging a growth mindset in the classroom does not mean lowering standards for learning. She maintains that instructors should have high standards but also create a supportive and nurturing atmosphere. To begin with, instructors themselves must believe that learning and growth are possible, and not give up on students who are struggling. 
Instructors can model this belief for students by replacing fixed mindset feedback with growth mindset feedback. For example, Dweck suggests that if learners are struggling, instructors can respond by telling them they have not succeeded yet. The word “yet” implies that they will achieve the necessary learning; they just need to keep working at it. In that way, instructors can reframe mistakes and struggles as opportunities to learn rather than as failures. Instructors should encourage and appreciate effort as well as learning. In other words, rather than focusing only on a student’s achievement, instructors can praise the effort and hard work that led to that achievement. At the same time, Dweck (2015) notes that a growth mindset is not just about effort. In addition to putting in the work, learners must also be willing to try different strategies and be open to feedback on their performance. The goal is to help students view challenges as part of the learning process and to work with them rather than to fear or avoid them. Learning theories are meant to help instructors understand the processes and circumstances that enable learning and, by extension, offer guidance in developing activities and environments that best support learning. But what to make of the fact that there are so many different theories and that some contradict each other? The truth is that the human brain and its cognitive processes are incredibly complex and not yet fully understood. Learning theorists do their best to describe how people learn based on careful observation and experimentation, but no learning theory is perfect. Indeed, each theory has its critics, and the various theories go in and out of favor over time. Even so, the theories provide us with an empirically based understanding of how learning occurs. Further, these theories are not mutually exclusive. We do not have to strictly adhere to one theory but can combine elements across theories in ways that resonate with our teaching styles and reflect our best understanding of our students. For instance, a teacher might draw on elements of cognitivism to enhance students’ retention and recall but also develop group activities that promote social constructivism through peer-to-peer communication. Especially with younger children, instructors might draw on behaviorism by using rewards and positive reinforcement to motivate student engagement with the content, but also integrate humanism by empathizing with students and using constructive feedback to encourage a growth mindset. We can use our understanding of developmental stages to create lessons and activities that provide an appropriate level of challenge to help students grow in their understanding. Ultimately, we should view learning theories as guidelines, not rules, and draw on them in ways that reflect our own values and understandings. Keeping this idea of learning across theories in mind, we can sum up the key takeaways from this chapter: - Learning is the change in knowledge, behavior, or understanding that occurs when people make connections between new information and their existing knowledge. Various theories attempt to describe the factors that enable the learning process. - Learning does not happen in the same way or at the same time for all students. Understanding developmental stages can help instructors align instruction with student readiness. Adult learners may have needs and constraints that differ from younger learners.
- The learning process is influenced by internal factors such as the student’s level of motivation and feelings of self-efficacy, and external factors such as the classroom environment and the adults and peers with whom the learner interacts. - Instructors can take steps to foster better learning, including: - Creating a democratic, empathetic, and supportive learning environment - Assisting students in becoming self-directed learners and enhancing their motivation by offering a sense of control and choice in their learning - Acknowledging that learning can be challenging, and helping students develop the mindset and self-efficacy that will support their persistence - Offering regular and meaningful feedback

Brown, P. C., Roediger, H. L. III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Belknap Press. Brown, Roediger, and McDaniel present an engaging and accessible overview of current research in cognitive psychology. In addition to the science, the authors offer clear examples of how recommended recall and retrieval practices can be integrated into teaching.

Cooke, N. A. (2010). Becoming an andragogical librarian: Using library instruction as a tool to combat library anxiety and empower adult learners. New Review of Academic Librarianship, 16(2), 208-227. https://doi.org/10.1080/13614533.2010.507388 This article offers a thorough overview of andragogy and the characteristics and motivators of adult learners, along with library-specific advice for teaching adult students.

Curtis, J. A. (2019). Teaching adult learners: A guide for public librarians. Libraries Unlimited. Curtis provides a clear introduction to andragogy to contextualize instruction in public libraries. She also addresses issues of culture and generational differences in teaching adults. Covering many aspects of instruction, including developing learning objects and teaching online, this book is valuable as one of the few to focus exclusively on issues of teaching and learning in public libraries.

Dweck, C. S. (2016). Mindset: The new psychology of success (Updated ed.). Penguin Random House. In this book, Dweck defines fixed and growth mindsets and how they can influence people’s feelings of motivation and self-efficacy in learning. She also offers guidance on how to facilitate the development of a growth mindset for better learning.

Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed.). Bloomsbury. In this foundational work, Freire presents the concept of the banking model of education. This book provides a social justice foundation for a humanistic approach to education.

Merriam, S. B., & Bierema, L. L. (2014). Adult learning: Linking theory and practice. Jossey-Bass. The authors provide a clear, concise, and engaging overview of both traditional and current theories of adult learning. The book includes activities and concrete examples for implementing the theories in the classroom.

Roy, L., & Novotny, E. (2000). How do we learn? Contributions of learning theory to reference services and library instruction. Reference Librarian, 33(69/70), 129-139. https://doi.org/10.1300/J120v33n69_13 The authors provide an overview of some of the major learning theories, followed by specific ideas and advice for applying the theory to reference and library instruction.

Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Anker Publishing. This book takes a student-centered approach to describing learning theory.
Chapter 7 provides an excellent overview of motivation and self-efficacy, including implications for practice.

Bartle, S. M. (2019). Andragogy. In Salem press encyclopedia. EBSCO.
Brown, P. C., Roediger, H. L. III, & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Belknap Press.
Clark, K. R. (2018). Learning theories: Cognitivism. Radiologic Technology, 90(2), 176-179.
Clouse, B. (2019). Jean Piaget. In Salem press biographical encyclopedia. EBSCO.
Codington-Lacerte, C. (2018). Cognitivism. In Salem press encyclopedia. EBSCO.
Dweck, C. S. (2015, September 22). Carol Dweck revisits the “growth mindset.” Education Week, 35(5), 20-24. https://www.edweek.org/ew/articles/2015/09/23/carol-dweck-revisits-the-growth-mindset.html
Dweck, C. S. (2016). Mindset: The new psychology of success (Updated ed.). Penguin Random House.
Flair, I. (2019). Zone of proximal development (ZPD). In Salem press encyclopedia. EBSCO.
Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed.). Bloomsbury.
Gagné, R. M. (1985). The conditions of learning and theory of instruction. Wadsworth Publishing.
Heick, T. (2019, October 28). The assimilation vs accommodation of knowledge. TeachThought. https://teachthought.com/learning/assimilation-vs-accommodation-of-knowledge/
Jensen, R. (2018). Behaviorism. In Salem press encyclopedia of health. EBSCO.
Knowles, M. S. (1988). The modern practice of adult education: From pedagogy to andragogy (Revised and updated ed.). Cambridge, The Adult Education Company.
Kretchmar, J. (2019a). Constructivism. In Salem press encyclopedia. EBSCO.
Kretchmar, J. (2019b). Gagné’s conditions of learning. In Salem press encyclopedia. EBSCO.
Kuhlthau, C. C. (1990). The information search process: From theory to practice. Journal of Education for Library and Information Science, 31(1), 72-75. https://doi.org/10.2307/40323730
Lucas, C. J. (1996). Humanism. In J. J. Chambliss (Ed.), Philosophy of education: An encyclopedia. Routledge.
Madsen, S. R., & Wilson, I. K. (2012). Humanistic theory of learning: Maslow. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Springer.
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370-396.
McLeod, S. A. (2015). Cognitive approach in psychology. Simply Psychology. http://www.simplypsychology.org/cognitive.html
Mellon, C. A. (1986). Library anxiety: A grounded theory and its development. College & Research Libraries, 47(2), 160-165. https://doi.org/10.5860/crl.76.3.276
Mercadal, T. (2018). Social constructivism. In Salem press encyclopedia. EBSCO.
Merriam, S. B., Caffarella, R. S., & Baumgartner, L. M. (2007). Learning in adulthood: A comprehensive guide (3rd ed.). Wiley.
Perry, W. G., Jr. (1970). Forms of intellectual and ethical development in the college years: A scheme. Holt.
Popp, J. A. (1996). Learning, theories of. In J. J. Chambliss (Ed.), Philosophy of education: An encyclopedia. Routledge.
Roth, A. L. (2018). Pierre Bourdieu. In Salem press biographical encyclopedia. EBSCO.
Schaffer, H. R. (2006). Key concepts in developmental psychology. Sage UK.
Sharp, A. (2012). Humanistic approaches to learning. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Springer.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Appleton-Century.
Svinicki, M. D. (2004). Learning and motivation in the postsecondary classroom. Anker Publishing.
Zucca-Scott, L. (2010). Know thyself: The importance of humanism in education. International Education, 40(1), 32-38.
"I Can’t Remember or Understand What I Just Read!" Here’s a teaching strategy to improve recall and overall comprehension of a text: Begin by asking the students to visualize (imagine, picture) what they read in their heads as they read, stopping periodically to first, model what you are visualizing (“I’m picturing her face looking angry. It’s red and her fists are clenched…”). Ask the students what they are picturing. Stop after each part of the story (beginning, middle, end). At each stopping point, ask the students to verbalize the part (beg., mid., end) as they visualize it. Then, have them draw what they visualized. After reading the whole text, ask students to use their drawings to retell the story’s beginning, middle, and end. Lastly, write a sentence or two next to each picture in order to produce a complete retelling. (Tip: For added support, give students sentence frames… i.e. In the beginning, ___.) The text can be read round robin style of reading with a small group or as a mini-lesson with the whole class. Scaffold the approach by gradually releasing responsibility (i.e. allow the student to identify the beginning, middle and end, instead of explicitly stating it and determining it for students). You can also use this approach with non-fiction. Vary the strategy by stopping after reading each section, and model what you visualized (i.e. “I pictured the frog in my head changing from an egg to a…”). Ask the students what they pictured. Then, have them draw what they visualized. After reading the whole text, ask students to use their drawings to retell the main idea and supporting details. Lastly, write a sentence or two next to each picture in order to produce a complete retelling. (Tip: For added support, give students sentence frames… i.e. Frogs change from ___.) Check out my "See & Say" Reading Comprehension Strategy: AND compatible graphic organizer for retellings... Tip: I have my students complete retelling sheets after each book we read ...BUT... since paper is a hot commodity along with a teacher's time which can be saved from copying, I place my retelling sheets inside these pockets so I can have them for the ENTIRE school year... yes, you read that correctly! Some of our students are reading over two years below grade level! This is not only shocking, but can also seem like a daunting task! So how do we ever get these students to grade level? We have to create comprehensive guided reading lesson plans. First, assess your students. Determine their lagging skills. Students usually fall into two categories for what is holding them back in reading. The first group are those that struggle with decoding (sounding words out). The second are the students who are unable to summarize, retell, or answer questions about a text. This is the comprehension group. If your students are struggling with decoding, use a phonics skill lesson plan for short targeted practice. If your students are struggling with comprehension, use a comprehension skill and strategy lesson alongside your guided reading lesson. If students are lagging in both areas, use a daily or weekly integrated plan to target both areas simultaneously. Students who are typically reading significantly below grade level require an integrated approach to reading, targeting both areas: phonics and comprehension. 
Comprehensive guided reading lesson plans should incorporate:
- extensive teacher-student interaction,
- multisensory learning methods,
- all components of reading: decoding (single-word accuracy/automaticity), comprehension, vocabulary, sight words, and fluency, plus encoding,
- special emphasis upon mastery of foundational reading skills, and...
- independent application of comprehension strategies to help ALL students access the general education curriculum!

Check out some comprehensive guided reading lesson plans!

Mentors serve as good examples of skills for our students. Teachers are mentors. Parents are mentors. Books are mentors. No, you did not read that incorrectly! For centuries, we have been reading aloud to kids. These books serve as mentors for all types of skills. Mentor texts entered educational lingo as a way to refer to the books that we read aloud to students as models for good writing. “Today, we are learning to write non-fiction pieces. First, we will begin by looking at the way good non-fiction writers write by reading one of Gail Gibbons’ science texts! Later, we will practice writing as non-fiction writers. We will share and discuss our trials as we draft!”

A few years ago, mentor texts were reinvigorated as a way to teach students reading skills too. “Today, we will be learning about summarizing. We will begin by reading the text Where The Wild Things Are aloud. We will then summarize the story using a graphic organizer. We will do this as a whole group, and then, you will practice the skill using your independent reading books. After that, we will gather together as a group and summarize (no pun intended) what we learned while practicing our skill.”

This sounds like an ideal lesson, right?! If I were looking to get observed, this may be the lesson plan I’d use, right?! Hmmm... but what about Tommy? There is no way he will sit for that long and only have 2 possible movement breaks! And what about Janey? She hates when I read aloud because she can’t sit still and always asks to use the bathroom during read-alouds. And now that I think about it, there are always 3 of them that ask for a bathroom break whenever I read. Plus, these days, I can only seem to hold their attention for less than five minutes!

Sound like every teacher in the world? There are always those classes that cause you to let out an audible sigh at the end of every day as you flop your tired body and mind into your chair, only to become quickly overwhelmed by the stacks of to-do’s on your desk! Today’s learners require a circus act to hold their attention. They have grown up with technology at their fingertips, in a world that moves faster than any superhero they have ever known! Visual mentor texts are a great tool for these learners! They provide a concise context for targeting literacy skills, which means they hold our students’ ATTENTION!

Visual Mentor Texts in READING… You can teach all reading skills from inferencing to theme using Pixar short films. For example, the Pixar short film For the Birds (2000) is a great visual mentor text for teaching theme. A large, dopey bird wants to join in with a group of smaller birds. When he sits on their wire, the smaller birds become angry and peck at the larger bird’s feet. He drops, causing the wire to slingshot. The large bird falls to the ground intact, while the smaller birds land minus some feathers! What is the message (Trick: THE MEssage) or theme? Want to teach the skill of inferencing?
One Man Band (2005) is a Pixar short film that can be used to teach inferencing AND it has the most adorable little girl!

Visual Mentor Texts in WRITING… Commercials can be another form of visual text. Watch “Unsung Hero” (Official HD) TVC Thai Life Insurance 2014. The commercial profiles a seemingly poor man who fills his life with good deeds, changing the lives of others and making himself rich with happiness! Have students tell the unsung hero’s story! AND use WORDLESS visual mentor texts as writing prompts! Or use visual mentor texts that lack a conclusion and have students write one!

Visual Mentor Texts in SCIENCE… Use these animations in science! Watch a short and ask: how many simple machines did you notice? What would be impossible in real life? Watch a portion of the movie Cloudy With a Chance of Meatballs (2009) to prompt a discussion about scientists or hypotheses!

Visual Mentor Texts in HISTORY… Use visual mentor texts in history class. Relate the stories to concepts and people of our past to help make connections. The Pixar short film La Luna (2011) tells the story of a young boy who reaches for the moon. He is unsure of whose lead to follow - his father’s or his grandfather’s. The film demonstrates the theme of finding one’s own path and can be related to many great historical leaders (MLK, Amelia Earhart…) and movements (the Underground Railroad, colonization).

Visual Mentor Texts in SEL… Social Emotional Learning has become a core curriculum for today’s classrooms. As a result, SEL needs to be explicitly taught in isolation AND infused across the curriculum. Many of the Pixar short film examples I have shared have an SEL component. For the Birds prompts a discussion around bullying, differences, following the crowd, and the list goes on. One Man Band can spark a discussion around competition. The perseverance of the main character in Cloudy With a Chance of Meatballs demonstrates grit! La Luna is a great example of learning from the past.

Warning! This does NOT mean I want you to throw out your picture books! One of my favorite moments is watching a middle schooler melt into a pile of sweet innocence as a teacher reads aloud one of their childhood favorites! However, there are those times when you need a stronger strategy! Because unlike technology, teachers DO have superpowers! By Miss Rae

There’s at least one student each year who reverses his/her b’s and d’s or just writes uppercase B’s and D’s (well, because that was an easier strategy to learn). Should teachers be concerned? Are reversals a sign of dyslexia? Reversing letters is common until around age 7. Here are some tricks to reverse reversals…
1. Have students make fists with their palms facing towards them. Stick the thumbs up, and you have a b and a d (this is a great, discreet trick for older students too).
2. Draw a bed with the letters b and d - bd - and add a stick figure lying down whose head rests on the b.
3. Draw a bat and ball to create a b and a drum with a drumstick for a d.
4. Make an uppercase B and then erase its top loop to leave a b.
5. Practice visual tracking with activities like the one in the image.

Practice, practice, practice! But if there is no progress after all of that practice, then a teacher should be concerned. If dyslexia is the reason for the letter reversals, teachers may also note that students struggle with letter and number sequencing. And a word of caution...
there is no evidence to suggest letter reversals are more common among dyslexic children compared to same-aged peers learning how to read and write; rather, most children simply grow out of letter reversals, whereas students with dyslexia may be slower to do so. AND don’t forget to rule out a visual processing disorder. By Miss Rae

The overarching goal of 21st century education is to equip today’s students with the ability to analyze, evaluate, and create, all of which are the highest levels of Bloom’s Taxonomy. Our states’ standardized testing assesses our students’ capabilities on Bloom’s high-ranking skills of analysis, evaluation, and creation through text-based constructed responses to open-ended questions. For example, a student may be asked to explain the relationship between two characters in a text. Directions for this response will include citing evidence from the text to support the student’s answer. First, a student needs to read and comprehend the text. These are the lower levels of Bloom’s Taxonomy. Next, the student must analyze the text in relationship to the question and make an evaluation to answer the question. Finally, the student must create a written response that supports his/her claim.

In order to begin building our learners toward mastery of high-level learning objectives, we must support our students with appropriate and supportive instruction and environments. Think scaffolded supports! Learners do not just enter our schoolroom doors equipped with these learning superpowers. Instead, we must teach our students to mastery. One strategy I keep in my toolkit is teaching students how to explain their reasoning, and here is one way I do that!

First, I prep! I put quotes from our texts on chart paper. To incorporate some movement for my kinesthetic learners, I hang the quotes around the classroom. Students are partnered or grouped. They are then given 7 minutes at each quote. They must use this time to...
- read the quote,
- discuss its meaning,
- narrow the meaning down to one sentence,
- write the meaning down, and finally…
- support their answer with textual evidence.

This activity allows my students to master the learning process with the support of their fellow learners, wrestle and engage with the curriculum, learn to work in a cooperative learning group, and own and guide their own learning, AND I get to use my doorbell for transition times! What are some ways that you teach students how to analyze and explain their learning? ~By Miss Rae

Remember being a little kid with visions of your future classroom dancing in your head? Your nameplate sits atop your desk facing your happy little students, while the sun, beaming through the massive windows, brilliantly lights the shiny red apple sitting in the center of your large desk calendar, as the centerpiece to your brightly colored classroom! And then, you grow up! So maybe your dream hasn’t changed. You still want your classroom to be Pinterest-Perfect Ready, but you also have bills to pay (oh, and something called a life outside of work!). Get your Pinterest-Perfect Classroom for Pennies with these tips!

ONE: Painted Jars Add some color to your classroom with some painted Mason Jars. Re-use sauce jars or buy some Mason Jars. Buy some craft paint, pour some paint in the jar, cap the jar, and shake! Use them as desk decor for scissors, pens, pencils, and more OR as vases!

TWO: Posters Posters are a quick way to decorate! Use them to motivate your students or share positive messages!
Make your own: Download free posters from the Internet or Etsy. Mount the posters onto scrapbook paper or construction paper for some added color OR frame them with Dollar Store frames!

THREE: Get Your Craft On! Make your own welcome sign or decorate with scrapbook flowers! And many of the chain craft stores offer teacher discounts when you show your teacher ID too!

FOUR: Fake Flowers A cheap and easy way to add color ALL year long is to bring in some fake flowers! And as an added bonus, they don’t require any care! Use the painted jars as vases!

FIVE: Decorate Like It’s a Holiday All Year Long Get window clings during every holiday at your local Dollar Store, and voilà! You are decorated all year!

Light up your room! Use a doorbell to get your students’ attention, but spice it up by changing up the tunes! Classroom Transformations have taken over social media! And some of them are amazing! But at the end of the day, you remember how someone makes you feel! By: Miss Rae

There are a few tools that EVERY Special Education teacher should have on hand… a candy bar for tough days, a few dollars in your drawer for all of those EXTRA donations, some mints to pop before a meeting, a trusted colleague who’s on hand for venting without judgment, and the list goes on… But there are only TWO Must-Have Special Education Teacher Tools for Teaching Reading!

ONE: A Drawer Full of Tools! Every guided reading table needs to have a drawer full of tools on hand! Fill your drawer with the best supports for your students’ needs. Special Education teaching tools should allow Special Education students to easily access the general education curriculum! Here are the TOOLS my DRAWER is stocked with:
- Guided reading strips
- Creepy witch fingers or any gimmicky tracking tool to strengthen visual tracking for fluency and decoding!
- Strategy cards (decoding and comprehension) to reinforce and support learned strategies!
- Comprehension discussion cards or sticks to increase oral discussion and promote text comprehension!
- Sight word flashcards to learn and practice decoding and word reading for fluency!
- Paper graphic organizers
- Pencils, notebooks or paper, and BIG erasers to support varying instructional activities from practice to assessment!
- Shaving cream to support encoding (spelling) through a multisensory activity! Because what is more engaging than writing your sight words in shaving cream?!

I also keep cookie sheets with magnet letters to practice encoding (spelling) on top of my drawer, along with any assistive tech devices!

TWO: A Guided Reading Binder A Guided Reading Tool Binder allows a teacher to easily plan for varied multi-sensory activities without copying, reinventing the wheel, or spending more time creating or buying learning games. A Guided Reading Tool Binder can also keep differentiated tools on hand for each learner’s needs. A Guided Reading Tool Binder should include...
- Lesson plan formats for specialized programs
- Scope and sequence charts for specialized programs to support a teacher’s own learning - and remembering!
- Checklists of reading behaviors to notice, teach, and support at each reading level for instructional planning and progress monitoring!

Make 5-6 copies (enough for each student in a small group to have his/her own) and place the following in top-loading sheet protectors. Students can write on these with dry erase markers and erase for re-use:
- Sheets to practice encoding using word chains for spelling (at - cat - scat) or to increase vocabulary
(write the word at; add one letter to at to spell a word for an animal that purrs; add a letter to cat to create a word that means to run away)
- Sheets to sort dictated words by spelling patterns to support phonics skills!
- Elkonin sound boxes to build phonological awareness!
- Word sort mats to increase phonics skills!
- Word detective charts where students hunt for and locate specific words or word patterns in texts to reinforce learned phonics skills! Words are recorded along with the page number of their location.
- Graphic organizers that can be used for all texts to promote comprehension!

Pull out any of your drawer or binder tools as instructional supports within your lesson plan, to increase engagement, AND as time fillers - if you are ever lucky enough to get through everything you planned AND have extra time! (Disclaimer: If you are one of the lucky few who is a traveling teacher, carry your tools in a supply caddy or a bag that can be easily organized.) ~By Miss Rae

The most important component of special education - next to the students - is the data! Data is a special educator's lifeline. We employ data for eligibility determinations. We use it to monitor progress toward a student's IEP goals. We use it to set goals for students, determine extended year programming, report at meetings, and qualify our statements in meetings and on special education documents. We need the data to justify the TEAM's decision about a student's plan. We know the importance of data. The hard part is tracking it! Here's how I do it!

I review my students’ IEP goals and objectives. During this process, I pair each objective with an assessment. For example, if a student has a sight word reading goal using the Fry Word List, I pull out the Fry Word List. When I’m finished pairing assessments, I set a schedule for each probe. I typically begin the year with a full battery of assessments to obtain a baseline for a student’s goals and objectives. Some objectives are then tested weekly. For example, I will complete a weekly running record on a student’s reading. Other objectives I may assess monthly. This may be a student’s writing objective regarding a narrative piece of writing. As a result, I will plan to have a completed narrative writing piece once per month. I put this schedule into my Google Calendar and check this step off of my To Do List!

I organize my students’ goals and objectives along with the assessments I have chosen for each on tracking forms. All forms contain a student’s name, goal(s), and objectives. The forms, then, vary by the assessment schedule. For example, some goals and/or objectives may need a spot for weekly tracking while others may need a monthly one. When a student is assessed, I record the score (AKA the data) directly onto the form along with the date. This keeps my data all on one form that I can pull out on the spot when it is needed. So, if a parent states “Ben says he completes all of his work, but you lose it,” you can pull out your trusty form with evidence that Ben has completed 30 percent of his assignments in the last month. Or when it’s time to write Special Education progress reports, you don’t have to dread it. The data is at your fingertips. The tool I use for this is my IEP Data Collection Progress Monitoring Forms and Cards. You can grab my IEP Data Collection Progress Monitoring Forms and Cards from Miss Rae’s Room Teachers Pay Teachers Store HERE! If you need to track behavioral data, check out my BEHAVIOR Data Tracking Forms & Points Sheets!
I break out the three-hole punch and get wild! I keep all of my tracking forms in a binder (because I grew up in the 80s, okay?!). When my caseload is on the small side, it makes my life easier to organize my binder sections by student. In this way, when I need my data for a particular student, I can quickly find it, and I don't have to flip from section to section when I am writing reports. However, as caseloads sometimes grow over the years, it has become more efficient to have the sections organized by assessments. So when my Google Calendar alerts me that I need to test math fact fluency, I can quickly flip to the section containing the math fact fluency assessments and tracking forms for that probe. I also keep reference sheets in my binder for easy access. For example, I always keep a reference page that correlates reading levels from Fountas and Pinnell to Reading A-Z to lexile levels.

Some data needs to be tracked more frequently. For example, lagging skills in executive functioning, behavior, attention, and social emotional capacities often need to be tracked within a 30 minute time period or during one subject area. The binder can become too cumbersome to employ for frequent data tracking. Oftentimes, I clip my forms to clipboards for easy access. The forms I use can be copied onto cardstock and cut smaller to be placed on a key ring for easy access as well. If I have access to an iPad or tablet, I use Google Forms. You can make a simple form that enables you to just hit a button each time the data needs to be recorded. Google Forms will save the data and, when needed, compile it into one spreadsheet for analysis.

And there you have it! Your data is tracked! Now, you can continue on with just being a teaching rockstar! ;) ~By Miss Rae

Check out The BEST Special Education Teacher Binder with FREE updates for life to get ALL things Special Education including DATA TRACKING form options! What are some other ways to track data?

The bell rings, and in walk the students. Chris and Jordy are already arguing, completely ignoring your instructions. "I didn't do my homework. Did my mom email you? She said she doesn't like the homework and I don't have to do it," Sam asserts upon entrance. Kiana is tugging on your sleeve, "Miss, I need to talk to you NOW!" She gives you the eyes! "Phone is ringing," Eric yells as he walks into the room. A chair crashes to the floor from an angry Kim, who slams her fists onto the desk and then walks out before class has begun. You get to the phone just in time for it to stop ringing. So you try... "Boys and girls?" "Okay, let's focus." "I'll wait..." That last phrase drives me crazy! First, you are the adult. You shouldn't have to wait! Second, what kid doesn't want you to wait while they finish having fun? This is a terrible strategy! The scenario above is going to happen! We all have these moments, and days full of these moments! And it's okay to have one of these "days", but they do not need to be the definition of your classroom! Classroom routines and procedures should be established and practiced at the beginning of the school year! Expectations around procedures should also be established at the beginning of the school year AND reviewed periodically throughout the year! As long as we are consistent, our classrooms should be well-oiled machines, right?! While I do like to think of myself as the QUEEN of my classroom, I just honestly don't have that kind of power. Things are going to happen that are beyond our control.
That's teaching! But we can't just WAIT to get students' attention! I'm sure they would love for us to wait! Once we have things back under control, we have to regain our students' focus immediately. So, how do we do that? Here are some tips and tricks that I use!

#1 Ding, Dong! My doorbell has to be my Number One because it is foolproof! It is my best trick for gaining my students' attention, AND it is also engaging, versatile, and CHEAP! (What more can a teacher ask for?!) I purchased the doorbell online! I keep the buzzer in my pocket or just lying on my desk, and I press it when I want the kids' attention. It totally works! I also had some fun with it when I first got it! Much like a game of 'Where's Waldo?', my students searched for weeks to answer "Where is that noise coming from?!" A few of them found the doorbell's hiding spot (a plug behind a bookcase), but they continued to let the other students play detective! My doorbell also doubles as a signal to transition! For example, when we are rotating through editing stations or centers, I press my buzzer. The first time the doorbell rings, I alert them to an upcoming transition: "One more minute." The second time the students hear it, they move to their next location.

#2 Wordly Whispers Improve your students' vocabulary AND keep them on their toes! Teach your students a word. Practice saying the word. Define the word. Use the word in sentences. Then, use the word as your attention-getting signal. My students have a set of vocabulary words that they dissect, manipulate, and employ each week. Daily, I choose one of these words to be my Word of the Day. When students hear me say the Word of the Day, they know to freeze in place, turn their voices off, and put their eyes on me. First, we use the Word of the Day in a sentence. Typically, I will model the word's usage in a sentence the first time we use a word. Students then generate sentences each time thereafter that I say the word. After using the Word of the Day in a sentence, I will proceed with the instructions or transition, depending on the reason that I had asked for the students' attention. Students will be listening and waiting to hear the word! As a result, they will tune into the sound of your voice each time you address them.

#3 Let Me Get a Chant! Chants are fun and can help improve oral language! Plus, a chant gets the entire class involved, and a little team building can go a long way! Call-and-response strategies are utilized everywhere from the military to churches - and, for our purposes, classrooms! Students respond verbally in unison with a set saying that answers the teacher's 'call'. Here are a few examples!
Teacher: Zip, zip, zap! Students: We're all that!
Teacher: Everyone in the house Students: is quiet as a mouse.
Teacher: 1, 2, 3, eyes on me. Students: 1, 2, eyes on you.
Teacher: 1, 2 Students: eyes on you. Teacher: 3, 4 Students: talk no more.
Teacher: L - I - S Students: T - E - N
Whichever chant you choose, I would suggest having it as a written visual for your students as well. The visual will serve as a reminder of the expectation AND act as a literacy resource within your room! If your kiddos do not have oral language, use claps! Make pattern calls and pattern responses! These are a few tips for getting your students' attention, but there are millions of attention-getting tricks! ~By Miss Rae What are some other ways to get our kiddos' attention?

Hi! I'm Miss Rae!
I'm a Special Education Coordinator with a passion for creating research-based resources for DiVeRSe learners.
Moving a nanosatellite around in space takes only a tiny amount of thrust. Engineers from Michigan Technological University and the University of Maryland teamed up, put a nanoscale rocket under a microscope, and watched what happened.

To Infinity and Beyond with Nanosatellites

When a satellite is placed into orbit by a rocket, its journey has only just begun. Released into space on its own, the satellite needs an on-board thruster so it can navigate to its desired location and then remain there despite the many things that do their best to kick it off course. "Space isn't the empty vacuum of nothingness many of us assume," says Kurt Terhune, a mechanical engineering graduate student and the lead author on a new study published in Nanotechnology this week. "Space actually has a small amount of atmosphere that causes drag, solar winds that push satellites off course and space debris that presents a constant hazard." This is especially important in the new era of space exploration. Dozens of companies plan to launch thousands of tiny satellites—some as small as shoe boxes—within the next five years. Each of these nanosatellites will need its own tiny thruster.

One solution comes in the form of an electrospray thruster that Terhune studies along with his advisor, L. Brad King, the Ron and Elaine Starr Professor of Space Systems Engineering. The propellants for these thrusters are called "ionic liquids," which are room-temperature liquid salts. "Much like the sodium chloride table salt many of us enjoy on French fries, ionic liquids are composed of roughly equal numbers of positively and negatively charged ions," Terhune says, explaining that electric fields, supplied by spacecraft batteries, can exert forces on these ions and eject them into space at great velocity. The emitted ion beam can provide the gentle thrust that the nanosatellite needs. Many of these tiny electrospray thrusters packed together could propel a spacecraft over great distances, maybe even to the nearest exoplanet. Electrospray thrusters are currently being tested on the European Space Agency's LISA Pathfinder, which hopes to poise objects in space so precisely that they would only be disturbed by gravitational waves.

"The challenge is obtaining images of a material in the presence of such a strong electric field, which is why we turned to John Cumings at the University of Maryland," King says, explaining that Cumings is known for his work with challenging materials. To make things harder, the tip of the droplet can move around by a few microns while the thruster is operating. A few microns is a small distance, but compared with the features that the team needed to observe, this made the experiment like trying to find a needle in a haystack. "Finding the actual nano-scale tip of the droplet with an electron microscope is like trying to look through a soda straw to find a penny somewhere on the floor of a room," King says. "And if that penny moves, like the tip of the molten salt droplet does—then it's off camera, and you have to start searching all over again." At the Advanced Imaging and Microscopy Lab at the University of Maryland, Cumings put the tiny thruster in a transmission electron microscope (TEM)—an advanced scope that can see things down to billionths of a meter. They watched as the droplet elongated and sharpened to a point, and then started emitting ions. Then, tree-like defects began to appear.
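For a rough sense of the scale of that "gentle thrust," here is a back-of-envelope sketch (not from the study; the beam current, voltage, and ion mass below are illustrative assumptions, with the mass loosely patterned on a common ionic-liquid cation):

```python
import math

# Idealized electrostatic-thruster estimate: an ion beam of current I,
# accelerated through potential V, carries thrust F = I * sqrt(2*V*m/q).
E_CHARGE = 1.602e-19   # elementary charge, C
AMU = 1.661e-27        # atomic mass unit, kg

def ion_beam_thrust_newtons(current_a, voltage_v, ion_mass_amu, charge_units=1):
    m = ion_mass_amu * AMU
    q = charge_units * E_CHARGE
    return current_a * math.sqrt(2 * voltage_v * m / q)

# Hypothetical numbers: a 1 microamp beam of ~111 amu cations at 1 kV
print(f"{ion_beam_thrust_newtons(1e-6, 1000.0, 111.0):.1e} N")  # ~4.8e-08 N
```

Even a microamp-scale beam delivers only tens of nanonewtons, which is why many emitters would need to be packed together to maneuver even a shoebox-sized satellite.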
Back in Orbit

The researchers say that figuring out why these branched structures grow could help prevent them from forming. The problem occurs as the microscope's high-energy electron beam exposes the fluid to radiation, breaking some of the bonds between atoms in the ions. This damages the molten salt's molecular structure, so it gels and piles up. "We were able to watch the dendritic structures accumulate in real time," Terhune says. "The specific mechanism still needs to be investigated, but this could have importance for spacecraft in high-radiation environments." He adds that the microscope's electron beam is more powerful than anything found in natural settings, but the gelling could affect the lifetime of electrospray engines in deep space and in geosynchronous orbits, where most of the planet's satellites circle. And you don't have to be a rocket scientist to know figuring out the physics to improve that lifetime is a good idea.
In March 2014, at Harvard, physicists announced they'd made one of the most significant scientific discoveries of the 21st century. Using BICEP2, a telescope located near the South Pole, they'd found evidence of gravitational waves — variations in the strength of gravity throughout space that would serve as crucial evidence for part of the Big Bang theory. Physicists everywhere rejoiced, and many predicted that the work would lead to a Nobel Prize. Three months later, a lot has changed. Many scientists have questioned the discovery — and last week, when the data was finally published in a peer-reviewed journal, the physicists behind it subtly qualified their claim, saying it was impossible to rule out the chance of an error. It's still uncertain whether the discovery is right or wrong. Future data from other telescopes — including data from an orbiting space telescope to be released in October — will hopefully help clarify the issue. So what exactly was the discovery — and how has it been thrown into doubt so quickly? Here's a basic primer.

What the physicists originally found

The Big Bang theory is a model that describes the beginning of the universe. Essentially, it states that about 13.82 billion years ago, all matter that we see around us was contained in a single, extremely dense point. Suddenly, it began expanding faster than the speed of light, eventually growing into the universe we know today. But according to the theory, this expansion didn't happen at a uniform rate: a tiny fraction of a second after the Big Bang began, the universe expanded at an exponentially fast rate — growing from smaller than an atom to roughly the size of a golf ball in a particularly pivotal instant — then slowed gradually. This part of the theory is called cosmic inflation. Inflation, calculations predict, should have led to something called gravitational waves. Basically, the sheer strength of gravity would have varied slightly from one part of the early microscopic universe to another — and as the universe expanded, these variations would have been stretched out, producing fluctuations in gravity on a much larger scale. These gravitational waves are often referred to as ripples in the fabric of spacetime. Until recently, we had physical evidence for the Big Bang theory in general, but not cosmic inflation: the main reason we had to believe in it was a series of theoretical calculations, originally made in the late 1970s. But scientists have long been searching for physical evidence, and in March, a four-person team — made up of John Kovac, Clement Pryke, Jamie Bock and Chao-Lin Kuo — announced they'd found it, in the form of indirect evidence for gravitational waves.

How they made this discovery

Using the BICEP2 telescope in Antarctica (where cold, dry air limits interference from the Earth's atmosphere), the scientists looked at a faint form of light called the Cosmic Microwave Background, which was emitted shortly after the Big Bang and is still permeating through the universe. They found a distinct twisting pattern in the light (formally called B-mode polarization). This type of polarization could be caused by the light crossing through gravitational waves as it travelled through the early universe billions of years ago — so the discovery, if accurate, would have confirmed the theory of cosmic inflation.
Why the discovery may have been wrong

The physicists publicly announced their discovery and released their data on March 17, noting that they'd spent years analyzing it to eliminate the chance of an error. Normally, scientific discoveries are publicly announced after they've been peer-reviewed — a process in which other scientists in the field critically analyze the work, looking for any weaknesses. But in this case, the discovery was announced prior to peer review, causing this sort of normal skeptical analysis to occur in the public eye. And, over the past couple of months, several independent scientists have suggested the detection of this form of polarization may have been an error. Initially, the researchers responded confidently, but their stance has now subtly changed. The peer-reviewed version of their work, published in the journal Physical Review Letters last week, notes that they can't "exclude the possibility of dust emission bright enough to explain the entire excess signal." In other words, dust may be to blame. The basic problem is that in looking for the polarization, the researchers are analyzing an extremely faint form of light in a tiny slice of the sky, so any kind of interference can throw off the results. In this case, the light may have been polarized by dust scattered throughout our galaxy just before it reached us. As Joel Achenbach of the Washington Post put it, "rather than seeing the aftershock of the birth of the universe, the scientists may have seen only some schmutz in the foreground, as if they needed to clean their eyeglasses." The physicists tried to take this dust into account to eliminate its effects from their analysis, but they did so with an unpublished map of the Milky Way's dust, and may have misinterpreted what exactly it showed. That could have misled them into thinking less dust was present than there actually is — so the polarization they found, rather than being caused by gravitational waves, may have simply been due to extra dust in the galaxy.

What happens next

A number of different instruments are currently collecting data that might resolve this dispute. It could happen as soon as October, when data from the European Space Agency's orbiting Planck satellite will be released. This satellite is analyzing the faint cosmic microwave background, and among other things, it should give us a more complete map of how much dust is scattered throughout the sky. Its resolution might not be fine enough to say exactly how much dust is in the tiny slice of sky the BICEP2 telescope looked at, but if it indicates there is much more dust throughout the sky than the researchers estimated, it'd make their conclusion look less likely. If the Planck data doesn't rule out the BICEP2 findings, subsequent data from several other telescopes (including the adjacent South Pole Telescope and the POLARBEAR experiment in Chile) could confirm or refute the gravitational wave discovery over the next few years. But one thing to note is that even if this finding does turn out to be an error, caused by galactic dust, it doesn't rule out the cosmic inflation theory — any more than digging into the ground and not finding any dinosaur fossils would mean that dinosaurs never existed. It's entirely possible, for instance, that cosmic inflation occurred but the gravitational waves it generated are too small for us to detect.
But the vast majority of theoretical physicists believe in cosmic inflation, based on calculations — the main thing that's uncertain right now is whether we have physical evidence of the theory yet or not.
- Joel Achenbach's rundown of the BICEP2 debate
- New Scientist's detailed coverage of the potential error
- My coverage of the original discovery
Unlike many other sensory systems, the visual system – components from the eye to neural circuits – develops largely after birth, especially in the first few years of life. At birth, visual structures are fully present yet immature in their potentials. From the first moment of life, there are a few innate components of an infant's visual system. Newborns can detect changes in brightness, distinguish between stationary and kinetic objects, and follow kinetic objects in their visual fields. However, many of these areas are very poorly developed. With physical improvements such as increased distances between the cornea and retina, increased pupil dimensions, and strengthened cones and rods, an infant's visual ability improves drastically. The neural pathways and physical changes that underlie these improvements in vision remain a strong focus of research. Because of an infant's inability to verbally express their visual field, growing research in this field relies heavily on non-verbal cues, including an infant's perceived ability to detect patterns and visual changes. The major components of the visual system can be broken up into visual acuity, depth perception, color sensitivity, and light sensitivity. By providing a better understanding of the visual system, future medical treatments for infant and pediatric ophthalmology can be established. By additionally creating a timeline of visual perception development in "normal" newborns and infants, research can shed some light on abnormalities that often arise and interfere with ideal sensory growth and change.

Visual acuity, the sharpness of the eye to fine detail, is a major component of a human's visual system. It requires not only the muscles of the eye – the muscles of the orbit and the ciliary muscles – to be able to focus on a particular object through contraction and relaxation, but also other parts of the retina, such as the fovea, to resolve a clear image. The muscles that initiate movement start to strengthen from birth to 2 months, at which point infants have control of their eyes. However, images still appear unclear at two months due to other components of the visual system – the fovea, the retina, and the brain circuitry – that are still in their developmental stages. This means that even though an infant is able to focus a clear image on the retina, the fovea and other visual parts of the brain are too immature to transmit a clear image. Visual acuity in newborns is also very limited compared to adults – being 12 to 25 times worse than that of a normal adult. It is important to note that the distance from the cornea at the front of the infant's eye to the retina at the back of the eye is 16–17 mm at birth, 20 to 21 mm at one year, and 23–25 mm in adolescence and adulthood. This results in smaller retinal images for infants. The vision of infants under one month of age ranges from 20/800 to 20/200. By two months, visual acuity improves to 20/150. By four months, acuity improves by roughly a factor of two – calculated to be 20/60 vision. As the infant grows, the acuity reaches the healthy adult standard of 20/20 at six months. One major method used to measure visual acuity during infancy is testing an infant's sensitivity to visual details such as a set of black striped lines in a pictorial image. Studies have shown that most one-week-old infants can discriminate a gray field from a fine black striped field at a distance of one foot away.
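As a hedged sketch of how such a grating test converts into an acuity estimate (the conversion is not spelled out in the article; it assumes the standard convention that 20/20 vision corresponds to about 30 cycles of a grating per degree of visual angle):

```python
import math

def grating_acuity(stripe_width_m, distance_m):
    """Estimate acuity from the finest stripes an infant can tell from gray.
    One grating cycle = one black stripe + one white stripe."""
    cycle_deg = math.degrees(2 * math.atan(stripe_width_m / distance_m))
    cycles_per_degree = 1.0 / cycle_deg
    snellen_denominator = 20 * 30 / cycles_per_degree  # 20/20 ~ 30 cyc/deg
    return cycles_per_degree, snellen_denominator

# Hypothetical example: 3 mm stripes viewed from one foot (~0.3 m)
cpd, denom = grating_acuity(0.003, 0.3)
print(f"{cpd:.2f} cycles/degree -> about 20/{denom:.0f}")
```

With these assumed numbers the estimate comes out near 20/690, squarely within the newborn range quoted above.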
In such preferential-looking studies, most infants will look longer at patterned visual stimuli than at plain, patternless stimuli. Gradually, infants develop the ability to distinguish stripes that are closer together. Therefore, by measuring the width of the stripes and their distance from an infant's eye, visual acuity can be estimated, with detection of finer stripes indicating better acuity. When examining infants' preferred visual stimuli, it was found that one-month-old infants often gazed mostly at prominent, sharp features of an object – whether a strongly defined curve or an edge. Beginning at two months old, infants begin to direct their saccades to the interior of the object, while still focusing on strong features. Additionally, infants starting from one month of age have been found to prefer visual stimuli that are in motion rather than stationary.

Newborns are exceptionally capable of face discrimination and recognition shortly after birth. Therefore it is not surprising that infants develop strong facial recognition of their mother. Studies have shown that newborns have a preference for their mothers' faces two weeks after birth. At this stage, infants will focus their visual attention on pictures of their own mother for a longer period than on pictures of complete strangers. Studies have shown that infants even as early as four days old look longer at their mothers' faces than at those of strangers only when the mother is not wearing a head scarf. This may suggest that the hairline and outer perimeter of the face play an integral part in the newborn's face recognition. According to Maurer and Salapatek, a one-month-old baby scans the outer contour of the face, with strong focus on the eyes, while a two-month-old scans more broadly and focuses on the features of the face, including the eyes and mouth. When comparing facial recognition across species, it was found that six-month-old infants were better at distinguishing facial information of both humans and monkeys than older infants and adults. Researchers found that both nine-month-olds and adults could discriminate between pictures of human faces; however, neither group had the same capability when it came to pictures of monkeys. On the other hand, six-month-old infants were able to discriminate facial features on both human faces and monkey faces. This suggests that there is a narrowing in face processing, as a result of neural network changes in early cognition. Another explanation is that infants likely have no experience with monkey faces and relatively little experience with human faces. This may result in a more broadly tuned face recognition system and, in turn, an advantage in recognizing facial identity in general (i.e., regardless of species). In contrast, healthy adults, who interact with people on a frequent basis, have fine-tuned their sensitivity to facial information of humans – which has led to cortical specialization.

To perceive depth, infants as well as adults rely on several signals such as distances and kinetics. For instance, the fact that objects closer to the observer fill more space in our visual field than farther objects provides some cues for depth perception in infants. Evidence has shown that newborns' eyes do not work in the same fashion as those of older children or adults – mainly due to poor coordination of the eyes. Newborns' eyes move in the same direction only about half of the time. Strength of eye muscle control is positively correlated with the achievement of depth perception.
Human eyes are formed in such a way that each eye views a stimulus at a slightly different angle, thereby producing two images that are processed in the brain. These images provide the essential visual information regarding 3D features of the external world. Therefore, an infant's ability to control his eye movements and converge on one object is critical for developing depth perception. Infants who are cross-eyed – a congenital condition called convergent strabismus – fail to develop proper depth perception if their condition is not corrected surgically.

One of the important discoveries in infant depth perception is thanks to researchers Eleanor J. Gibson and R.D. Walk. Gibson and Walk developed an apparatus called the visual cliff that could be used to investigate visual depth perception in infants. In short, infants were placed on a centerboard with, on one side, an illusory steep drop ("deep side") and, on the other, a platform level with the centerboard ("shallow side"). In reality, both sides, covered in glass, were safe for infants to trek. From their experiment, Gibson and Walk found that a majority of infants ranging from 6–14 months old would not cross from the shallow side to the deep side due to their innate fear of heights. From this experiment, Gibson and Walk concluded that by six months an infant has developed a sense of depth. However, this experiment was limited to infants who could independently crawl or walk. To overcome the limitations of testing non-locomotive infants, Campos and his colleagues devised an experiment that depended on the heart rate reactions of infants when placed in environments that reflected different depth scenarios. When Campos and his colleagues placed six-week-old infants on the "deep end" of the visual cliff, the infants' heart rates decreased and a sense of fascination was seen in the infants. However, when seven-month-old infants were lowered onto the same "deep end" illusion, their heart rates accelerated rapidly and they started to whimper. Campos and his colleagues concluded that infants develop a sense of visual depth prior to beginning locomotion. Therefore, it could be concluded that sometime around the start of crawling, at 4–5 months, depth perception begins to strongly present itself.

From an infant's standpoint, depth perception can be inferred using three means: binocular, static, and kinetic cues. As mentioned previously, humans are binocular and each eye views the external world from a different angle – providing essential information about depth. The convergence of each eye on a particular object, together with stereopsis – the use of retinal disparity between the two eyes' images – provides depth information for infants older than ten weeks. With binocular vision development, infants between four and five months also develop a sense of size and shape constancy of objects, regardless of an object's location and orientation in space. From static cues based upon monocular vision, infants older than five months of age have the ability to infer depth from the pictorial position of objects. In other words, edges of closer objects overlap objects in the distance. Lastly, kinetic cues are another factor in depth perception for humans, especially young infants. Infants ranging from three to five months are able to react defensively when an object approaches them as if to hit them – implying that these infants have depth perception.
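To make the binocular cue concrete, here is a small illustrative sketch (ours, not the source's; the 4 cm interocular separation is an assumed, roughly infant-scale value) of how the vergence angle changes with object distance:

```python
import math

def vergence_deg(distance_m, interocular_m=0.04):
    """Angle between the two eyes' lines of sight when fixating a point:
    2 * atan(s / (2*d)) for eye separation s and object distance d."""
    return math.degrees(2 * math.atan(interocular_m / (2 * distance_m)))

near, far = vergence_deg(0.3), vergence_deg(1.0)
print(f"30 cm: {near:.2f} deg; 1 m: {far:.2f} deg; difference: {near - far:.2f} deg")
```

The nearer object demands a noticeably larger vergence angle; that difference in viewing geometry between the eyes is the disparity signal that stereopsis exploits.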
Color sensitivity improves steadily over the first year of life for humans due to strengthening of the cones of the eyes. Like adults, infants achieve chromatic discrimination using three photoreceptor types: long-, mid- and short-wavelength cones. The signals from these cones recombine in precortical visual processing to form a luminance channel and two chromatic channels that help an infant to see color and brightness. The particular pathway used for color discrimination is the parvocellular pathway. There is a general debate among researchers with regard to the exact age at which infants can detect different colors/chromatic stimuli, due to important color factors such as brightness/luminance, saturation, and hue. Regardless of the exact timeline for when infants start to see particular colors, it is understood among researchers that infants' color sensitivity improves with age.

It is generally accepted across all current research that infants prefer high-contrast and bold colors at the earlier stages of infancy, rather than saturated colors. One study found that newborn infants looked longer at checkered patterns of white and colored stimuli (including red, green, yellow) than they did at a uniform white color. However, infants failed to discriminate blue from white checkered patterns. Another study – recording the fixation time of infants on blue, green, yellow, red, and gray at two different luminance levels – found that infants and adults differed in their color preferences. Newborns and one-month-olds did not show any preference among the colored stimuli. Three-month-old infants were found to prefer the longer-wavelength (red and yellow) stimuli to the short-wavelength (blue and green) stimuli, while adults showed the opposite preference. However, both adults and infants preferred colored stimuli over non-colored stimuli. According to this study, it was suggested that infants have a general preference for colored stimuli over non-colored stimuli at birth; however, infants are not able to distinguish between the different colored stimuli until after three months of age.

Research into the development of color vision using infant monkeys indicates that color experience is critical for normal vision development. Infant monkeys were placed in a room with monochromatic lighting, limiting their access to the normal spectrum of colors, for a one-month period. When tested after a one-year period, these monkeys' ability to distinguish colors was poorer than that of normal monkeys exposed to the full spectrum of colors. Although this result directly pertains to infant monkeys and not humans, it strongly suggests that visual experience with color is critical for proper, healthy vision development in humans as well.

The threshold for light sensitivity is much higher in infants compared to adults. From birth, the pupils of an infant remain constricted to limit the amount of entering light. In regard to pupil dimensions, a newborn's pupils grow from approximately 2.2 mm to an adult diameter of 3.3 mm. A one-month-old infant can detect light only when it is approximately 50 times brighter than an adult's threshold. By two months, the threshold decreases measurably, to about ten times that of an adult. The increase in sensitivity is the result of lengthening of the photoreceptors and further development of the retina. Therefore, postnatal maturation of the retinal structures has led to strong light adaptations for infants.
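A quick numeric sketch of the figures above (the pupil-area comparison is our own illustration, not the source's):

```python
import math

def pupil_area_mm2(diameter_mm):
    """Light admitted scales with pupil area, i.e. with diameter squared."""
    return math.pi * (diameter_mm / 2) ** 2

# Diameters from the text: newborn ~2.2 mm, adult ~3.3 mm
ratio = pupil_area_mm2(3.3) / pupil_area_mm2(2.2)
print(f"adult pupil gathers ~{ratio:.2f}x the light of a newborn's")  # ~2.25x

# Detection thresholds relative to an adult's, from the text
for age, times_adult in {"1 month": 50, "2 months": 10}.items():
    print(f"{age}: needs ~{times_adult}x an adult's minimum light level")
```

Note that the pupil-area ratio (~2.25x) is far smaller than the 50-fold threshold gap, consistent with the text's point that photoreceptor and retinal maturation, rather than pupil size alone, drives the gain in sensitivity.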
Vision abnormalities in infants

Vision problems in infants are both common and easily treatable if addressed early by an ophthalmologist. Critical warning signs:
- Excessive tearing
- Red or encrusted eyelids
- White pupils
- Extreme sensitivity to bright light
- Constant eye turning

References
- Hirano, S.; Yamamoto, Y.; Takayama, H.; Sugata, Y.; Matsuo, K. (1979). "Ultrasonic observation of eyes in premature babies. Part 6: Growth curves of ocular axial length and its components (author's transl)". Nippon Ganka Gakkai zasshi 83 (9): 1679–1693. PMID 525595.
- Banks, M. S.; Salapatek, P. (1978). "Acuity and contrast sensitivity in 1-, 2-, and 3-month-old human infants". Investigative Ophthalmology & Visual Science 17 (4): 361–365. PMID 640783.
- Dobson, V.; Teller, D. Y. (1978). "Visual acuity in human infants: A review and comparison of behavioral and electrophysiological studies". Vision Research 18 (11): 1469–1483. doi:10.1016/0042-6989(78)90001-9. PMID 364823.
- Courage, M. L.; Adams, R. J. (1990). "Visual acuity assessment from birth to three years using the acuity card procedure: Cross-sectional and longitudinal samples". Optometry and Vision Science 67 (9): 713–718. doi:10.1097/00006324-199009000-00011. PMID 2234832.
- Sokol, S. (1978). "Measurement of infant visual acuity from pattern reversal evoked potentials". Vision Research 18 (1): 33–39. doi:10.1016/0042-6989(78)90074-3. PMID 664274.
- Maurer, D. & Maurer, C. (1988). The World of the Newborn. New York: Basic Books. ISBN 0465092306.
- Snow, C. W. (1998). Infant Development (2nd edition). Upper Saddle River, NJ: Prentice-Hall.
- Bronson, G. W. (1991). "Infant Differences in Rate of Visual Encoding". Child Development 62 (1): 44–54. doi:10.1111/j.1467-8624.1991.tb01513.x. PMID 2022137.
- Bronson, G. W. (1990). "Changes in infants' visual scanning across the 2- to 14-week age period". Journal of Experimental Child Psychology 49 (1): 101–125. doi:10.1016/0022-0965(90)90051-9. PMID 2303772.
- Maurer, D.; Salapatek, P. (1976). "Developmental changes in the scanning of faces by young infants". Child Development 47 (2): 523–527. doi:10.2307/1128813. PMID 1269319.
- Braddick, O. J.; Atkinson, J. (2009). "Infants' Sensitivity to Motion and Temporal Change". Optometry and Vision Science 86 (6): 577–582. doi:10.1097/OPX.0b013e3181a76e84. PMID 19417703.
- Field, T. M.; Cohen, D.; Garcia, R.; Greenberg, R. (1984). "Mother-stranger face discrimination by the newborn". Infant Behavior and Development 7: 19. doi:10.1016/S0163-6383(84)80019-3.
- Frank, M. C.; Vul, E.; Johnson, S. P. (2009). "Development of infants' attention to faces during the first year". Cognition 110 (2): 160–170. doi:10.1016/j.cognition.2008.11.010. PMC 2663531. PMID 19114280.
- Bushnell, I. W. R. (2001). "Mother's face recognition in newborn infants: Learning and memory". Infant and Child Development 10: 67–74. doi:10.1002/icd.248.
- Pascalis, O.; De Schonen, S.; Morton, J.; Deruelle, C.; Fabre-Grenet, M. (1995). "Mother's face recognition by neonates: A replication and an extension". Infant Behavior and Development 18: 79. doi:10.1016/0163-6383(95)90009-8.
- Pascalis, O.; De Haan, M.; Nelson, C. A. (2002). "Is Face Processing Species-Specific During the First Year of Life?". Science 296 (5571): 1321–1323. doi:10.1126/science.1070223. PMID 12016317.
- Kellman, P. J.; Banks, M. S. (1998). "Infant visual perception". In Kuhn, D.; Siegler, R. S. (eds.), Handbook of Child Psychology, Volume 2: Cognition, Perception, and Language (1st edn).
Wiley: New York; 103–146.
- Gibson, E. J.; Walk, R. D. (1960). "Visual Cliff". Scientific American.
- Campos, J. J.; Hiatt, S.; Ramsay, D.; Henderson, C.; Svejda, M. (1978). "The emergence of fear on the visual cliff". In The Origins of Affect. New York: Plenum.
- Bornstein, M. & Lamb, M. (1992). Developmental Psychology (3rd ed.). Lawrence Erlbaum Associates, NJ.
- Kavšek, M.; Granrud, C. E.; Yonas, A. (2009). "Infants' responsiveness to pictorial depth cues in preferential-reaching studies: A meta-analysis". Infant Behavior and Development 32 (3): 245–253. doi:10.1016/j.infbeh.2009.02.001. PMID 19328557.
- Fox, R.; Aslin, R.; Shea, S.; Dumais, S. (1980). "Stereopsis in human infants". Science 207 (4428): 323–324. doi:10.1126/science.7350666. PMID 7350666.
- Thomasson, M. A.; Teller, D. Y. (2000). "Infant color vision: Sharp chromatic edges are not required for chromatic discrimination in 4-month-olds". Vision Research 40 (9): 1051–1057. doi:10.1016/S0042-6989(00)00022-5. PMID 10738064.
- Teller, D. Y.; Peeples, D. R.; Sekel, M. (1978). "Discrimination of chromatic from white light by two-month-old human infants". Vision Research 18 (1): 41–48. doi:10.1016/0042-6989(78)90075-5. PMID 307296.
- Adams, R. J.; Maurer, D.; Cashin, H. A. (1990). "The influence of stimulus size on newborns' discrimination of chromatic from achromatic stimuli". Vision Research 30 (12): 2023–2030. doi:10.1016/0042-6989(90)90018-G. PMID 2288103.
- Adams, R. J. (1987). "An evaluation of color preference in early infancy". Infant Behavior and Development 10 (2): 143–150. doi:10.1016/0163-6383(87)90029-4.
- Sugita, Y. (2004). "Experience in Early Infancy is Indispensable for Color Perception". Current Biology 14 (14): 1267–1271. doi:10.1016/j.cub.2004.07.020. PMID 15268857.
- Brown, A. M. (1986). "Scotopic sensitivity of the two-month-old human infant". Vision Research 26 (5): 707–710. doi:10.1016/0042-6989(86)90084-2. PMID 3750850.
Math Assignment Help With Decimal Numbers, Power Of 10, Rounding

2.1 Introduction: A number that contains a decimal point (.) is called a decimal number. But before understanding a decimal number, you should know what place value is. While writing a number, the position or place of each digit is of great importance. For example, in the number 786: starting from the extreme right, the first digit is in the units place. Moving towards the left, the second digit is in the tens place. And the digit at the extreme left is in the hundreds place. So the number is read as seven hundred eighty-six. Thus, if you notice, while moving towards the left, each position becomes 10 times greater than the previous one. Similarly, while moving towards the right, each position becomes 10 times smaller. And when we keep moving towards the right, beyond the units place, the digit past the units is 10 times smaller, that is, 1/10 of the units place. But before writing the digit past the units, a decimal point should be written; it indicates where the units position is, and the digit is written after the point. So when we write a number such as 786.5, it is called a decimal number. So, as you move right from the decimal point, each place value is divided by 10.

2.1.1 How to Read and Write Decimal Numbers

Email Based Assignment Help in Decimal numbers, power of 10, rounding: To schedule a Decimal numbers, power of 10, rounding tutoring session or to submit a Decimal numbers, power of 10, rounding assignment, click here.
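As an added illustration (a sketch, not part of the original tutorial), the place-value rule (each step left multiplies the place by 10; each step right past the decimal point divides it by 10) can be written out programmatically:

```python
def place_values(number_str):
    """Decompose a decimal numeral into digit x power-of-10 place values."""
    whole, _, frac = number_str.partition(".")
    parts = []
    # Left of the point: units (10^0), tens (10^1), hundreds (10^2), ...
    for i, digit in enumerate(reversed(whole)):
        parts.append(f"{digit} x 10^{i}")
    # Right of the point: tenths (10^-1), hundredths (10^-2), ...
    for i, digit in enumerate(frac, start=1):
        parts.append(f"{digit} x 10^-{i}")
    return parts

print(place_values("786.45"))
# ['6 x 10^0', '8 x 10^1', '7 x 10^2', '4 x 10^-1', '5 x 10^-2']
```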
9 Crustal Deformation and Earthquakes At the end of this chapter, students should be able to:
- Differentiate between stress and strain
- Differentiate between brittle, ductile, and elastic deformation
- Identify the three major types of stress and their associated plate tectonics boundary
- Name different fold types
- Name different faults
- Understand the elastic rebound theory
- Describe how seismic waves are measured
- Understand earthquake magnitude and how it is quantified
- Identify areas of increased seismic hazard
- Determine the location of an epicenter
- Describe notable historical earthquakes
- Explain how humans can induce seismicity

Tectonic processes produce horizontal forces in the crust that cause pushing, pulling, and shearing stresses that deform the rock. Stresses created by tectonics, gravity, and igneous pluton emplacement cause deformation in rock. The type of deformation, which can be folds, fractures, and/or faults, depends on the setting, timing, and rock material.

9.1 Stress and Strain Stress is the force exerted per unit area, and strain is the material's response to that force. Strain is deformation caused by stress. Strain in rocks can be represented as a change in rock volume and/or rock shape, as well as fracturing of the rock. There are three types of stress: tensional, compressional, and shear. Tensional stress involves pulling something apart in opposite directions, stretching and thinning the material. Compressional stress involves things coming together and pushing on each other, thickening the material. Shear stress involves transverse movement of a material past itself, like scissors.

| Type of Stress | Associated Plate Boundary Type | Resulting Strain | Associated Fault and Offset Types |
|---|---|---|---|
| Tensional | Divergent | Stretching and thinning | Normal |
| Compressional | Convergent | Shortening and thickening | Reverse |
| Shear | Transform | Tearing and lateral offset | Strike-slip |

When rocks are stressed, the resulting strain can be elastic, ductile, or brittle. This change is generally called deformation. Elastic deformation is strain that is reversible after a stress is released. For example, when you compress a spring, it elastically returns to its original shape after you release it. Ductile deformation occurs when enough stress is applied to a material that the changes in its shape are permanent, and the material is no longer able to revert to its original shape. For example, if you stretch a spring too far, it can be permanently bent out of shape. Note that concepts related to ductile deformation apply at the visible (macro) scale, and deformation is more complex at a microscopic scale. Research on plastic deformation, which touches on the atomic scale, is generally beyond the scope of introductory texts. The yield point is the amount of strain at which elastic deformation is surpassed and permanent deformation becomes measurable: on a stress-strain diagram, it is where the curve transitions from elastic deformation to ductile deformation. Brittle deformation occurs when the material passes another critical point of no return; when sufficient stress to pass that point occurs, the material fails and fractures. Important factors that influence whether or how a rock will undergo elastic, ductile, or brittle deformation are the intensity of the applied stress, time, temperature, confining pressure, pore pressure, strain rate, and rock strength. Pore pressure is the pressure exerted by fluids inside the open spaces (pores) of a rock or sediment. Strain rate is how quickly a material is deformed.
Rock strength is a measure of how readily a rock yields to stress. Shale has low strength and granite has high strength. Removing heat (decreasing temperature) makes a material more rigid; likewise, heating materials makes them more ductile. Heating glass makes it capable of bending and stretching. In terms of strain rate, it is easier to bend a piece of wood slowly without breaking it.

| Change in Condition | Resulting Behavior |
|---|---|
| Increase Temperature | More Plastic |
| Increase Strain Rate | More Brittle |
| Increase Rock Strength | More Brittle |

9.3 Field Geology and Geological Maps Topographic maps are two-dimensional (2D) representations of a three-dimensional (3D) land surface. Similarly, geologic maps are 2D representations of 3D geologic structures at the earth's surface. Geologists use geologic maps to represent where geologic formations, faults, folds, and inclined rock units are. Geologic formations are recognizable, mappable rock units. In a geologic map, each formation drawn on the map is identified by a color and an abbreviated label. For examples of geologic maps, check out the UGS geologic map viewer. Formation labels on geologic maps are formed in a specific way. The first capital letter(s) of the label represents the geologic time period of the formation, while the following lowercase letters represent the formation name and/or an abbreviated rock type description. Where more than one capital letter begins the symbol, it indicates multiple time periods for the formation.

9.3.1 Cross sections Cross sections are subsurface interpretations made from surface and subsurface measurements. Maps display geology in the horizontal plane, while cross sections show subsurface geology in the vertical plane. For more information on cross sections, check out the AAPG wiki on cross sections.

9.3.2 Strike and Dip Geologists use a special symbol called strike and dip to represent beds that are inclined. Strike and dip symbols look like the capital letter "T" on a map, with a wide top of the T. The short trunk of the "T" represents the dip direction of the inclined rock bed. Oftentimes, the dip symbol will have a number next to it that represents the dip angle. Dip is the angle at which a bed plunges into the Earth from the horizontal. One way to visualize strike is to think about a pitched roof on a rectangular house. The strike of the roof would be indicated by the horizontal line at the top of the roof or the eave that extends in a compass direction (NSEW). The strike is the angle between that horizontal line and true north or true south, e.g. N 43° E, meaning the horizontal line points toward the NE at an angle of 43° from true north (a short sketch for converting such bearings to azimuths follows at the end of this section). The dip of the roof would represent how steep the roof is with respect to horizontal. The direction of dip would be the same direction that a ball would roll off of the roof if released from rest. A horizontal rock bed has a dip of 0°, and a vertical bed has a dip of 90°. Strike and dip considered together are called rock attitude.

Geologic folds are layers of rock that are curved or bent by ductile deformation. Terms involved with folds include the axis, which is the line along which the bending occurred, and the limbs, which are the dipping beds that make up the sides of the folds. Folds are most commonly formed by compressional forces at depth, where hotter temperatures and higher confining pressures allow ductile deformation to occur. Folds are described by the orientation of their axes, axial planes, and limbs.
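As referenced in the strike-and-dip discussion above, here is a minimal sketch (ours, not the chapter's) for converting quadrant bearings such as "N 43 E" into azimuths measured clockwise from true north:

```python
import re

def quadrant_to_azimuth(bearing):
    """Convert a quadrant bearing like 'N 43 E' to an azimuth in degrees
    clockwise from true north (N 43 E -> 43.0, S 30 W -> 210.0)."""
    m = re.fullmatch(r"\s*([NS])\s*(\d+(?:\.\d+)?)\s*([EW])\s*", bearing.upper())
    if not m:
        raise ValueError(f"unrecognized bearing: {bearing!r}")
    ns, angle, ew = m.group(1), float(m.group(2)), m.group(3)
    if ns == "N":
        return angle if ew == "E" else 360.0 - angle
    return 180.0 - angle if ew == "E" else 180.0 + angle

print(quadrant_to_azimuth("N 43 E"))  # 43.0
print(quadrant_to_azimuth("S 30 W"))  # 210.0
```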
A fold is made up of two or more sets of dipping beds, generally dipping in opposite directions, that come together along a line, called the axis. Each set of dipping beds is known as a fold limb. The plane that splits the fold into two halves is known as the axial plane. Symmetrical folds have mirrored limbs across their axial planes. The limbs of a symmetrical fold are inclined at the same (but opposite) angle, indicating equal compression on both sides of the fold. Asymmetrical folds have dipping, non-vertical axial planes, with limbs that dip into the ground at different angles. Recumbent folds are very tight folds with limbs compressed near the axial planes; their axial planes are generally horizontal. Overturned folds are folds in which both limbs dip in the same direction. The fold axis is where the axial plane intersects the strata involved in the fold. A horizontal fold has a horizontal fold axis. When the axis of the fold plunges into the ground, the fold is called a plunging fold.

Anticlines are arch-like ("A"-shaped) folds with downward-curving limbs, in which the beds dip away from the central axis of the fold. They are convex-upward in shape. In anticlines, the oldest rock strata are in the center of the fold, along the axis, and the younger beds are on the outside. An antiform has the same shape as an anticline, but in antiforms the relative ages of the beds in the fold cannot be determined. Oil geologists take an interest in anticlines because they can form oil traps, where oil migrates up along the limbs of the fold and accumulates in the high point along the axis of the fold.

Synclines are trough-like ("U"-shaped), upward-curving folds in which the beds dip in towards the central axis of the fold. They are concave-upward in shape. In synclines, the older rock is on the outside of the fold and the youngest rock is on the inside, along the axis. A synform has the shape of a syncline but, like an antiform, does not distinguish between the ages of the units.

Monoclines are step-like folds, in which flat rocks are upwarped or downwarped, then continue flat. Monoclines are relatively common on the Colorado Plateau, where they form "reefs," which are ridges that act as topographic barriers and should not be confused with ocean reefs. Capitol Reef is an example of a monocline in Utah. Monoclines can be caused by bending of shallower sedimentary strata as faults grow below them. These faults are commonly called "blind faults" because they end before reaching the surface, and they can be either normal or reverse faults.

A dome is a symmetrical to semi-symmetrical upwarping of rock beds. Domes have a shape like an inverted bowl, similar to domes on buildings, like the Capitol Building. Domes in Utah include the San Rafael Swell, Harrisburg Junction Dome, and the Henry Mountains. Some domes are formed from compressional forces, while other domes are formed from underlying igneous intrusions, by salt diapirs, or even by impacts, like Upheaval Dome in Canyonlands National Park.

A basin is the inverse of a dome: rock beds folded into a bowl-shaped depression. The Uinta Basin is an example of a basin in Utah. Technically, geologists refer to rocks folded into a bowl shape as structural basins. Sometimes structural basins can also be sedimentary basins, in which large quantities of sediment accumulate over time. Sedimentary basins can form as a result of folding, but are much more commonly produced in mountain building, between mountain blocks or via faulting.
Regardless of the cause, as the basin sinks (a process called subsidence), it can accumulate even more sediment, as the weight of the sediment causes more subsidence in a positive-feedback loop. There are active sedimentary basins all over the world. An example of a rapidly subsiding basin in Utah is the Oquirrh Basin of Pennsylvanian-Permian age, in which over 30,000 feet of fossiliferous sandstones, shales, and limestones accumulated. These strata can be seen in the Wasatch Mountains along the east side of Utah Valley, especially on Mt. Timpanogos and in Provo Canyon.

Faults are the places in the crust where brittle deformation occurs as two blocks of rock move relative to one another. There are three major fault types: normal, reverse, and strike-slip. Normal and reverse faults display vertical, also known as dip-slip, motion. Dip-slip motion consists of relative up-and-down movement along a dipping fault between two blocks, the hanging wall and the footwall. In a dip-slip system, the footwall is below the fault plane and the hanging wall is above the fault plane. A good way to remember this is to imagine a mine tunnel through a fault; the hanging wall would be where a miner would hang a lantern and the footwall would be at the miner's feet. Faults are more prevalent near and related to plate boundaries, but can occur in plate interiors as well. Faults can show evidence of movement along the fault plane. Slickensides are polished, often grooved surfaces along the fault plane created by friction during the movement. A joint or fracture is a plane of breakage in a rock that does not show movement or offset. Joints can result from many processes, such as cooling, depressurizing, or folding. Joint systems may be regional, affecting many square miles.

Normal faults move by a vertical motion in which the hanging wall moves downward relative to the footwall along the dip of the fault. Normal faults are created by tensional forces in the crust. Normal faults and tensional forces commonly occur at divergent plate boundaries and where the crust is being stretched by tensional stresses. Utah examples of normal faults are the Wasatch Fault, the Hurricane Fault, and other faults bounding valleys in the Basin and Range.

Grabens, horsts, and half-grabens are all blocks of crust or rock that are bounded by normal faults. Grabens drop down relative to adjacent blocks and create valleys. Horsts go up relative to adjacent down-dropped blocks, and become areas of high topography. Together, horsts and grabens create a symmetrical pattern of valleys, flanked by normal faults on both sides, alternating with mountains. Half-grabens are a one-sided version of a horst and graben, where blocks are tilted by a normal fault on one side, creating an asymmetrical valley-mountain arrangement. The mountains and valleys of the Basin and Range Province of western Utah and Nevada consist of a series of full and half-grabens from the Salt Lake Valley to the Sierra Nevada Mountains. When the dip of a normal fault decreases with depth (i.e. the fault becomes more horizontal as it goes deeper), the fault is called a listric fault. Extreme versions of listric faulting occur when large amounts of extension occur along very low-angle normal faults, known as detachment faults. The normal faults of the Basin and Range appear to become detachment faults at depth.

Reverse faults are faults in which the hanging wall moves up relative to the footwall. Reverse faults are caused by compressional forces.
A thrust fault is a reverse fault where the fault plane has a low dip angle (generally less than 45 degrees). Thrust faults bring older rocks on top of younger rocks and can cause repetition of rock units in the stratigraphic record. Convergent plate boundaries with subduction zones create a special type of "reverse" fault called a megathrust fault. Megathrust faults cause the largest-magnitude earthquakes and commonly cause tsunamis.

Strike-slip faults have side-to-side motion. In pure strike-slip motion, crustal blocks on either side of the fault do not move up or down relative to each other. Strike-slip motion can be left-lateral (sinistral) or right-lateral (dextral). In left-lateral or sinistral strike-slip motion, the opposite block moves left relative to the block that the observer is standing on. In right-lateral or dextral strike-slip motion, the opposite block moves right relative to the observer's block. Strike-slip faults are most commonly associated with transform boundaries, and are prevalent in fracture zones adjacent to mid-ocean ridges. Bends in strike-slip faults can create areas where the sliding blocks create compression or tension. Tensional stresses will create transtensional features with normal faults and basins (like California's Salton Sea), and compressional stresses will create transpressional features with reverse faults and small-scale mountain building (like California's San Gabriel Mountains). The faults that splay off of transpression or transtension features are known as flower structures. An example of a right-lateral strike-slip fault is the San Andreas Fault, which denotes a transform boundary between the North American and Pacific plates. An example of a left-lateral strike-slip fault is the Dead Sea fault in Jordan and Israel.

9.6 Earthquake Essentials People feel approximately 1 million earthquakes a year. Few are noticed very far from the source, and even fewer are major earthquakes. Earthquakes are usually felt only when they are greater than magnitude 2.5. The USGS Earthquakes Hazards Program has a realtime map showing the most recent earthquakes. Most earthquakes occur along active plate boundaries. Intraplate earthquakes (those not along plate boundaries) are still poorly understood.

Earthquake energy is known as seismic energy, and it travels through the earth in the form of seismic waves. To understand some of the basics of earthquakes and how they are measured, consider some of the basic properties of waves. Waves describe a motion that repeats itself in a medium (rock or unconsolidated sediments, in our case). The magnitude (height) of the motion is the amplitude of a wave. Wavelength is the distance between two successive peaks of the wave. The number of repetitions of the motion over time (cycles per time) is the frequency. The inverse of frequency, which is the amount of time for a wave to travel one wavelength, is the period. When multiple waves combine, they can interfere with each other. When the waves are in sync with each other, they will have constructive interference, in which the influence of one wave adds to and magnifies the other. If the waves are out of sync with each other, they will have destructive interference. If two waves have the same amplitude and frequency and they are ½ wavelength out of sync, the destructive interference between them can eliminate each wave. This process of constructive and destructive interference is illustrated below.
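In place of the original figure, here is a minimal numeric sketch of the two cases (an added illustration, not part of the chapter):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 9)
wave = np.sin(x)

# In sync: constructive interference, amplitudes add (peak ~2.0)
print(np.max(wave + wave))

# Shifted by half a wavelength: destructive interference, waves cancel (~0.0)
shifted = np.sin(x + np.pi)
print(np.max(np.abs(wave + shifted)))
```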
9.6.2 How Earthquakes Happen The release of seismic energy is explained by the elastic rebound theory. When rock is strained to the point that it undergoes brittle deformation, built-up elastic energy is released during displacement, which in turn radiates away as seismic waves. When the brittle deformation occurs, it creates an offset between the fault blocks at a starting point called the focus. This offset propagates along the surface of rupture, which is known as the fault plane. The fault blocks of persistent faults like the Wasatch Fault of Utah are locked together by friction. Over hundreds to thousands of years, stress builds up along the fault. Eventually, stress along the fault overcomes the frictional resistance, and slip initiates as the rocks break. The deformed rocks “snap back” toward their original position in a process called elastic rebound. Bending of the rocks near the fault may reflect this build-up of stress, and in earthquake-prone areas like California, strain gauges that measure this bending are set up in an attempt to better understand and perhaps predict earthquakes. In some locations where the fault is not locked, seismic stress causes continuous movement along the fault called fault creep, in which displacement occurs gradually. Fault creep occurs along some parts of the San Andreas Fault. Release of seismic energy occurs in a series of steps. After a seismic energy release, energy begins to build again during a period of inactivity along the fault. The accumulated elastic strain may produce small earthquakes on or near the main fault. These are called foreshocks and can occur hours or days before a large earthquake, but they may not occur at all. The main release of energy occurs during the major earthquake, known as the mainshock. Aftershocks may then occur to adjust the strain that built up from movement along the fault; they generally decrease in size and frequency over time. 9.6.3 Focus and Epicenter The focus (also known as the hypocenter) of an earthquake is the point of initial breaking or rupturing where displacement of rocks begins. The focus is always at some depth below the ground surface in the crust, not at the surface. From the focus, the displacement propagates up, down, and laterally along the fault plane. The displacement produces shock waves called seismic waves. Generally speaking, the larger the displacement and the farther it propagates, the greater the amount of shaking produced. More shaking is usually the result of more seismic energy released. The epicenter is the location on the Earth’s surface vertically above the focus. This is the location that most news reports give because it is the center of the area where people are affected. The focus is the point along the fault plane from which the seismic waves spread outward. 9.6.4 Seismic Waves Seismic waves are an expression of the energy released after an earthquake. Seismic waves occur as body waves and surface waves. When seismic energy is released, the first waves to propagate out are body waves, which pass through the body of the planet. Body waves include primary waves (P waves) and secondary waves (S waves). Primary waves are the fastest seismic waves. They move through rock via compression, very much like sound waves move through air. Particles of rock move forward and back during passage of the P waves. Primary waves can travel through both fluids and solids. Secondary waves travel slower than primary waves and follow them, propagating as shear waves. Particles of rock move from side to side during passage of S waves.
Because of this, secondary waves cannot travel through fluids, including liquids, plasma, or gas. When an earthquake occurs at a location in the earth, the body waves radiate outward, passing through the earth and into the rock of the mantle as a sub-spherical wave front. A point on this spreading wave front travels along a specific path which reaches a seismograph located at one of thousands of seismic stations scattered over the earth. That specific travel path is a line called a seismic ray. Since the density (and seismic velocity) of the mantle increases with depth, a process called refraction causes earthquake rays to curve away from the vertical and bend back toward the surface, passing through bodies of rock along the way. Surface waves are produced when P and S body waves strike the surface of the earth and travel along the Earth’s surface, radiating outward from the epicenter. Surface waves travel more slowly than body waves. They have complex horizontal and vertical ground movement that creates a rolling motion. Because they propagate at the surface and have complex motions, surface waves are responsible for most of the damage. Two types of surface waves are Love waves and Rayleigh waves. Love waves produce horizontal ground shaking and, ironically from their name, are the most destructive. Rayleigh waves produce an elliptical motion of points on the surface, with longitudinal dilation and compression, like ocean waves. However, with Rayleigh waves rock particles move in a direction opposite to that of water particles in ocean waves. Earth is like a bell, and an earthquake is a way to ring it. Like other waves, seismic waves bend and bounce when passing from one material to another, such as moving from a dense rock to a rock with even higher density. When a wave bends as it moves into a different substance, it is known as refraction, and when waves bounce back, it is known as reflection. Because S waves cannot move through liquid, they are blocked by the liquid outer core, creating a shadow zone on the opposite side of the planet to the earthquake source. 9.7 Measuring Earthquakes Seismographs are instruments used to measure seismic waves. They measure vibration of the ground using pendulums or springs. The principle of the seismograph involves mounting a recording device solidly to the earth and suspending a pen or writing instrument above it on a spring or pendulum. As the ground shakes, the suspended pen records the shaking on the recording device. The graph resulting from measurements of a seismograph is a seismogram. Seismographs of the early 20th century were essentially springs or pendulums with pens on them that wrote on a rotating drum of paper. Digital ones now use magnets and wire coils to measure ground motion. Typical seismograph arrays measure vibrations in three directions: north-south (x), east-west (y), and up-down (z). To determine the distance of the seismograph from the epicenter, seismologists use the difference between the times when the first P waves and S waves arrive. After an earthquake, P waves will appear first on the seismogram, followed by S waves, and finally surface waves, which have the largest amplitude on the seismogram. Surface waves do lose energy quickly, so they are not measured at great distances from the focus. Seismographs across the globe record arrivals of waves from each earthquake at many station sites. The distance to the epicenter can be determined by comparing arrival times of the P and S waves.
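As a rough illustration of that calculation, the sketch below converts an S-minus-P arrival-time difference into an approximate epicentral distance. The wave speeds used (roughly 6 km/s for P waves and 3.5 km/s for S waves in the shallow crust) are typical textbook values assumed for this illustration; they are not values given in this chapter, and real earthquake location uses travel-time tables rather than constant speeds.

# Minimal sketch (illustrative values): estimating distance to an earthquake
# from the delay between P-wave and S-wave arrivals at a single station.
V_P = 6.0   # assumed average P-wave speed in the crust, km/s
V_S = 3.5   # assumed average S-wave speed in the crust, km/s

def distance_from_sp_delay(sp_delay_seconds):
    # Over a distance d, the P wave takes d/V_P seconds and the S wave takes
    # d/V_S seconds, so the measured delay is d/V_S - d/V_P, which rearranges
    # to d = delay / (1/V_S - 1/V_P).
    return sp_delay_seconds / (1.0 / V_S - 1.0 / V_P)

# Example: S waves arrive 24 seconds after the P waves.
print(round(distance_from_sp_delay(24.0), 1), "km")  # roughly 200 km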
Electronic communication among seismic stations and the computers connected to them means that earthquake locations and news reports about them are generated quickly in the modern world. 9.7.2 Locating Earthquake Epicenters with Triangulation Each seismograph gives the distance from that station to the earthquake epicenter. Three or more seismograph stations are needed to locate the epicenter of an earthquake through triangulation. Using the arrival-time difference from the first P wave to the first S wave, one can determine the distance from the epicenter, but not the direction. The distance from the epicenter to each station can be plotted as a circle, the distance being equal to the circle’s radius. The place where the circles intersect marks the epicenter. This method also works in three dimensions with spheres and multi-axis seismographs to locate not only the epicenter but also the depth of the focus of the earthquake. 9.7.3 Seismograph Network The International Registry of Seismograph Stations lists more than 20,000 seismographs on the planet. Seismologists can use and compare data from sets of multiple seismometers dispersed over a wide area, which is a seismograph network. By collaborating, scientists can map the properties of the inside of the earth, detect detonation of large explosive devices, and predict tsunamis. The Global Seismograph Network, a set of worldwide linked seismographs that distribute real-time data electronically, consists of more than 150 stations that meet specific design and precision standards. The Global Seismograph Network helps the Comprehensive Nuclear-Test-Ban Treaty Organization monitor for nuclear tests. The USArray is a network of hundreds of permanent and transportable seismographs in the United States. The USArray is being used to map the subsurface through passive collection of seismic waves created by earthquakes (see below). (Video: Nepal Earthquake M7.9 Ground Motion Visualization.) 9.7.4 Seismic Tomography Much as a CT (computed tomography) scan uses X-rays passing through a body at different angles to image its interior, seismic tomography uses rays from seismic waves created by the thousands of earthquakes that occur each year, passing at all angles through masses of rock within the earth, to generate images of internal structures. Based on the assumption that the earth consists of homogeneous layers, geologists have developed a model of the expected properties of earth materials at every depth within the earth called the PREM (Preliminary Reference Earth Model). Included in these expected properties is the transmission velocity of seismic waves, which depends on the density and elasticity of the rock. In the mantle, density differences in rock bodies result primarily from differences in temperature. Slightly cooler rocks have a higher density and therefore transmit earthquake waves slightly faster than the velocity predicted by PREM. Slightly warmer rocks transmit earthquake waves slightly slower than predicted by PREM. These small differences from PREM are called seismic anomalies and can be measured for bodies of rock within the earth from the arrival times of seismic rays passing through them at stations of the seismic network. Such seismically defined bodies of rock can thus be imaged via seismic tomography by the network of seismic stations distributed over the earth. Seismograph networks provide data for creating tomographic images and maps of the distribution of rock density beneath the crust.
For example, seismologists have mapped the Farallon Plate, a tectonic plate that subducted beneath North America over the last several tens of millions of years, and the Yellowstone magma chamber, which is a product of the Yellowstone hot spot under the North American continent. Peculiarities of the subduction of the Farallon Plate are thought to be responsible for many features of western North America, including the Rocky Mountains (see Chapter 8 for more information on the Farallon Plate). 9.7.5 Determining Earthquake Magnitude Magnitude is a measure of the size (energy release) of an earthquake. The Richter scale is the best-known magnitude scale devised for earthquakes; it was the first such scale, developed by Charles Richter at Caltech, and was the magnitude scale used historically by early seismologists. The Richter scale magnitude is determined from measurements on a seismogram. Magnitudes on the Richter scale are based on measurements of the maximum amplitude of the needle trace measured on the seismogram and the arrival-time difference of S and P waves, which gives the distance to the earthquake. The Richter scale is a logarithmic scale, based on powers of 10. The amplitude of the seismic wave recorded on the seismogram is 10 times greater for each increase of 1 unit on the Richter scale. That means a magnitude 6 earthquake shakes the ground 10 times more than a magnitude 5. However, the actual energy released increases by a factor of about 32 for each 1-unit magnitude increase. That means the energy released by a magnitude 6 earthquake is about 32 times greater than that of a magnitude 5. The Richter scale was developed for distances appropriate for earthquakes in Southern California and for the seismograph machines in use there. Its applications to larger distances and very large earthquakes are limited. Therefore, most agencies no longer use the methods of Richter to determine magnitude, but instead generate a quantity called the moment magnitude, which is more accurate for large earthquakes measured by the seismic arrays across the earth. As numbers, moment magnitudes are comparable to the magnitudes of the Richter scale. The media still often report magnitudes as Richter magnitude even though the actual calculation is of moment magnitude. 9.7.6 Moment Magnitude Scale The moment magnitude scale depicts the absolute size of earthquakes, comparing information from multiple locations and using a measurement of the actual energy released, calculated from the cross-sectional area of the rupture, the amount of slip, and the rigidity of the rocks. Because of the unique geologic setting of each earthquake and because rupture area is often hard to measure, estimates of moment magnitude can take days to months to calculate. Like Richter magnitude, the moment magnitude scale is logarithmic. Both scales are used in tandem because the estimates of magnitude may change after a quake. The Richter scale is used as a quick determination immediately following the quake (and thus is usually reported in news accounts), and the moment magnitude is calculated days to months later. The values of the two scales are approximately equal except for very large earthquakes. 9.7.7 Modified Mercalli Intensity Scale The Modified Mercalli Intensity Scale is a qualitative scale (I-XII) of the intensity of ground shaking based on damage to structures and people’s perceptions. This scale can vary depending on the location and population density (urban vs. rural). It was also used for historic earthquakes, which occurred before quantitative measurements of magnitude could be made.
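Before turning to intensity maps, here is a brief numerical illustration of the logarithmic scaling described in Sections 9.7.5 and 9.7.6. The factor of 10 in amplitude and the factor of roughly 32 in energy per magnitude unit come from the text above; the formula used below (energy scaling as 10 raised to 1.5 times the magnitude difference) is the standard relationship behind that factor of 32, stated here as an assumption rather than something derived in this chapter.

# Minimal sketch: comparing recorded amplitude and released energy for two magnitudes.
def amplitude_ratio(m_large, m_small):
    # Each whole magnitude unit corresponds to about 10 times the recorded amplitude.
    return 10 ** (m_large - m_small)

def energy_ratio(m_large, m_small):
    # Energy scales roughly as 10^(1.5 * magnitude difference), about 32x per unit.
    return 10 ** (1.5 * (m_large - m_small))

print(amplitude_ratio(6, 5))       # 10.0  -> ten times the ground motion
print(round(energy_ratio(6, 5)))   # 32    -> about 32 times the energy
print(amplitude_ratio(7, 5))       # 100.0
print(round(energy_ratio(7, 5)))   # 1000  -> roughly a thousand times the energy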
The Modified Mercalli Intensity maps show where the damage is most severe based on questionnaires sent to residents, newspaper articles, and reports from assessment teams. Recently, the USGS has used the internet to help gather data more quickly.
|I||Not felt||Not felt except by a very few under especially favorable conditions.|
|II||Weak||Felt only by a few persons at rest, especially on upper floors of buildings.|
|III||Weak||Felt quite noticeably by persons indoors, especially on upper floors of buildings. Many people do not recognize it as an earthquake. Standing motor cars may rock slightly. Vibrations similar to the passing of a truck. Duration estimated.|
|IV||Light||Felt indoors by many, outdoors by few during the day. At night, some awakened. Dishes, windows, doors disturbed; walls make cracking sound. Sensation like heavy truck striking building. Standing motor cars rocked noticeably.|
|V||Moderate||Felt by nearly everyone; many awakened. Some dishes, windows broken. Unstable objects overturned. Pendulum clocks may stop.|
|VI||Strong||Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight.|
|VII||Very strong||Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken.|
|VIII||Severe||Damage slight in specially designed structures; considerable damage in ordinary substantial buildings with partial collapse. Damage great in poorly built structures. Fall of chimneys, factory stacks, columns, monuments, walls. Heavy furniture overturned.|
|IX||Violent||Damage considerable in specially designed structures; well-designed frame structures thrown out of plumb. Damage great in substantial buildings, with partial collapse. Buildings shifted off foundations.|
|X||Extreme||Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations. Rails bent.|
Shake maps (written ShakeMaps by the USGS) use high-quality seismograph data from seismic networks to show areas of intense shaking. They are the result of rapid, computer-interpolated seismograph data. They are useful in the crucial minutes after an earthquake, as they can show emergency personnel where the greatest damage likely occurred and locate areas of possibly damaged gas lines and other utilities. 9.8 Earthquake Risk 9.8.1 What determines shaking? In general, the larger the magnitude, the stronger the shaking and the longer the shaking will last. But other factors also influence the level of shaking, as described in the following paragraphs. Table and descriptions from https://earthquake.usgs.gov/learn/topics/mag_vs_int.php
|Magnitude||Modified Mercalli Intensity||Shaking/Damage Description|
|1.0 – 3.0||I||Only felt by a very few.|
|3.0 – 3.9||II – III||Noticeable indoors, especially on upper floors.|
|4.0 – 4.9||IV – V||Most to all feel it. Dishes, doors, cars shake and possibly break.|
|5.0 – 5.9||VI – VII||Everyone feels it. Some items knocked over or broken. Building damage possible.|
|6.0 – 6.9||VII – IX||Frightening amounts of shaking. Significant damage, especially to poorly constructed buildings.|
|≥ 7.0||≥ VIII||Significant destruction of buildings. Potential for objects to be thrown in the air from shaking.|
Location and Direction Closer earthquakes will inherently cause more shaking than those farther away. The location in relation to the epicenter and the direction of rupture will influence how much shaking is felt.
The direction that the rupture propagates along the fault influences the shaking. The path of greatest rupture can intensify shaking in an effect known as directivity. Local Geologic Conditions The nature of the ground materials affects the properties of the seismic waves. Different materials respond differently to an earthquake. Think of shaking jello versus shaking a meatloaf; the jello will jiggle much more in response to the same amount of shaking. The response of ground materials to shaking depends on their degree of consolidation; lithified sedimentary rocks and crystalline rocks shake less than unconsolidated sediments and landfill. This is because seismic waves move faster through consolidated bedrock, slower through unconsolidated sediment, and slowest through unconsolidated materials with high water content. Since the energy is carried by both velocity and amplitude, when a seismic wave slows down, its amplitude increases, which in turn increases seismic shaking. Energy is transferred to the vertical motion of the surface waves. Depth of Focus The focus is the place within the Earth where the earthquake starts. The depth of earthquakes influences the amount of shaking. Deeper earthquakes cause less shaking at the surface because they lose much of their energy before reaching the surface. Recall that most of the destruction is caused by surface waves, which are generated as the body waves reach the surface. 9.8.2 What determines destruction? Building material choices can influence the amount of damage caused by earthquake shaking. The flexibility of building materials relates to their resistance to damage by earthquake waves. Buildings made of unreinforced masonry (URM) are the most heavily damaged by ground shaking. Wood-frame buildings, held together with nails that can bend and flex as waves pass, are more likely to survive earthquakes. Steel also has the ability to deform elastically before brittle failure. The Salt Lake City campaign “Fix the Bricks” has good information on URMs and earthquake safety. Shaking Intensity and Duration Stronger shaking and longer durations of shaking cause more destruction than weaker, shorter shaking. Resonance occurs when the frequency of the seismic energy matches a building’s natural frequency of shaking, which is determined by the properties of the building; resonance intensifies the amplitude of shaking. This famously happened in the 1985 Mexico City Earthquake, where buildings between 6 and 15 stories tall were especially vulnerable to earthquake damage. Skyscrapers designed with earthquake resilience have dampers and base-isolation features to reduce resonance. 9.8.3 Earthquake Recurrence Geologists dig earthquake trenches across some faults to measure ground deformation and estimate the frequency of occurrence of past earthquakes. Trenches are effective for faults with relatively long recurrence intervals (hundreds to tens of thousands of years); the recurrence interval is the period of time between significant earthquakes. In areas with more frequent earthquakes and more measured earthquake data, trenches are less necessary. A long hiatus in earthquake activity could indicate the buildup of stress on a specific segment of a fault with strain held in place by friction, which would indicate a higher probability of an earthquake along that segment. This hiatus of seismic activity along a length of a fault (i.e., a fault that is locked and not having any earthquakes) is known as a seismic gap. 9.8.4 Distribution of Earthquake Hazard Subduction zones are where the largest, deepest earthquakes occur. These are known as megathrust earthquakes.
Example areas include the Sumatran Islands, the Aleutian Islands, and the west coast of South America. The Cascadia Subduction Zone off the coast of Washington and Oregon is another example. Collision Zone Earthquakes Continental collisions create broad areas of earthquakes. They can have some deep, large earthquakes from ‘left-over’ subduction and/or deep-crustal processes. Areas where this is occurring include the Himalayan Mountains and the Alps. Transform Fault Boundaries Transform fault boundaries create moderate and large earthquakes, usually having a maximum magnitude of about 8. The San Andreas fault in California is an example of a transform fault boundary. Other examples are the Alpine Fault in New Zealand and the Anatolian Faults in Turkey. Rifts and Mid-Ocean Ridges Continental rifts and mid-ocean ridges are characteristic of divergent plate boundaries. These areas generally produce moderate earthquakes. Examples of such areas include the East African Rift and Iceland. The United States’ Basin and Range is another area undergoing tensional forces that experiences earthquakes. Intraplate earthquakes are earthquakes not near tectonic plate boundaries. Intraplate earthquakes generally occur in areas of weakened crust or concentrated tectonic stress. The New Madrid Seismic Zone is an area in Missouri, Illinois, Tennessee, Arkansas, and Indiana that is thought to represent the failed Reelfoot Rift zone. The failed rift zone created an area of crustal weakness, which is relatively responsive to tectonic stresses related to plate movement and interaction. The infrequent earthquakes could be related to reactivated areas of weakness with a low rate of strain. 9.8.5 Secondary Hazards Caused by Earthquakes Liquefaction occurs when saturated, unconsolidated sediments (usually silt or sand) are liquefied by shaking. Shaking causes loss of cohesion between grains of sediment, reducing the effective stress resistance of the sediment. The sediment flows very much like the quicksand presented in movies. Liquefaction creates sand volcanoes, which form when liquefied sand is squirted up through an overlying (usually finer-grained) layer, creating cone-shaped sand features at the surface. It may also cause buildings to settle or tilt. Many of the more recent devastating natural disasters have been caused by earthquake-induced tsunamis. Tsunamis form when the sea floor is offset by earthquakes in the ocean subsurface. This offset can be caused by fault movement or underwater landslides and actually lifts a volume of ocean water, generating the tsunami wave. Tsunami waves travel fast with low amplitude in deep ocean water, but are significantly amplified as the water shallows when they approach the shore (a rough calculation of this effect is sketched at the end of this subsection). When a tsunami is about to strike land, the water in front of the wave along the shore will recede significantly, tragically causing curious people to wander out. This receding water is the drawback of the trough in front of the tsunami wave, which then crashes onshore as a wall of water that can be upwards of a hundred feet high. The behavior of tsunamis as ocean waves is covered in the section on shorelines in Chapter 12. Warning systems have been established to help mitigate the loss of life caused by tsunamis. Shaking can trigger landslides (see the landslide section for more information). One example is the 1992 magnitude 5.9 earthquake in St. George, Utah. This earthquake caused the Springdale landslide, whose scarp offset and destroyed several structures in the Balanced Rock Hills subdivision.
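Returning briefly to tsunamis: the slowing and amplification near shore mentioned above can be roughly illustrated with the shallow-water wave approximation, in which wave speed depends only on water depth (speed is about the square root of g times depth). This relationship is general wave physics assumed for the illustration, not a formula given in this chapter.

# Minimal sketch (shallow-water approximation, assumed for illustration):
# a tsunami's speed depends on water depth, c = sqrt(g * depth).
# As the water shallows near shore the wave slows dramatically, and the
# energy that was spread through a fast, low wave piles up into a tall one.
import math

g = 9.8  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    # Approximate tsunami speed (m/s) for a given water depth (m).
    return math.sqrt(g * depth_m)

for depth in (4000, 200, 10):  # open ocean, continental shelf, near shore
    c = tsunami_speed(depth)
    print(f"depth {depth:>5} m: about {c:5.0f} m/s ({c * 3.6:4.0f} km/h)")
# depth  4000 m: about   198 m/s ( 713 km/h)
# depth   200 m: about    44 m/s ( 159 km/h)
# depth    10 m: about    10 m/s (  36 km/h)  -- slows and steepens near shore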
Seiches are waves on lakes generated by earthquakes, which cause sloshing of water back and forth and, sometimes, even changes in the elevation of the lake. A seiche in Hebgen Lake during the 1959 earthquake caused major destruction to structures and roads around the lake. Land Elevation Changes Significant subsidence and upheaval of the land can occur in relation to the slippage that causes earthquakes. Land elevation changes are the result of the relaxation of stress and subsequent movement along the fault plane. The 1964 Alaska earthquake is a good example of this. Where the fault cuts the surface, elevation of one side causes a fault scarp that may be a few feet to 20 or 30 feet in height. The Wasatch Mountains represent an accumulation of fault scarps of a couple dozen feet at a time over a few million years. 9.8.6 Human Activity That Can Create Seismic Energy We have used seismographs to determine the size of nuclear weapons tested by other countries, most recently in North Korea. The earthquake magnitude-energy calculator can inform readers about the amount of energy that earthquakes of different magnitudes can produce. 9.9 Case Studies 9.9.1 Basin and Range Earthquakes Basin and Range earthquakes are caused primarily by normal faults created by tensional forces pulling the area apart. The Wasatch Fault defines the eastern extent of the Basin and Range and has been studied as an earthquake hazard for more than 100 years. The Basin and Range extends from the Wasatch Fault to the Sierra Nevada. 9.9.2 North American Earthquakes
- 1811-1812 New Madrid Earthquakes – Historical accounts of the New Madrid seismic zone date as far back as 1699. A sequence of large earthquakes (moment magnitude >7) occurred from December 1811 to February 1812 in the New Madrid, Missouri area. The earthquakes damaged houses in St. Louis, affected the stream course of the Mississippi River, and leveled the town of New Madrid. These earthquakes were the result of seismic activity in the New Madrid seismic zone, an area of intraplate seismic activity. The intraplate activity is thought to be derived from a failed Reelfoot rift zone (an aulacogen), creating crustal weakness in the region. The New Madrid seismic zone continues to produce earthquakes.
- 1886 Charleston – The 1886 earthquake of Charleston, South Carolina was a moment magnitude 7.0, with a Mercalli intensity of X, killing at least 60 people. This was an intraplate earthquake, likely associated with ancient faults created during the breakup of Pangea. The earthquake caused significant liquefaction. Scientists estimate that destructive earthquakes may recur in this area with an interval of approximately 1500 to 1800 years.
- 1906 Great San Francisco Earthquake and Fire – On April 18, 1906, a large earthquake occurred along the San Andreas fault near San Francisco. The earthquake had an estimated moment magnitude of 7.8 and a Modified Mercalli Intensity of XI. Geologist G.K. Gilbert was present to take measurements and photographs after the earthquake. There were multiple aftershocks, followed by fires that devastated the city. About 80% of the city was destroyed.
- 1964 Alaska – A magnitude 9.2 earthquake created by the megathrust fault along the Aleutian subduction zone. Large areas of land dropped down while other areas uplifted. The earthquake caused significant mass wasting (see the landslides section). The 1964 Alaska earthquake was one of the most powerful earthquakes ever recorded.
- 1989 Loma Prieta – The Loma Prieta earthquake was a moment magnitude 6.9 earthquake created by movement along the San Andreas Fault. It caused 63 deaths and buckled portions of the freeway and part of the San Francisco–Oakland Bay Bridge.
9.9.3 Global Earthquakes
- 1556 Shaanxi – On January 23, 1556, an earthquake of approximate magnitude 8 hit central China, killing approximately 830,000 people. This system is thought to have a recurrence interval of 1000 years. Much of the death toll was attributed to cave dwellings (yaodongs) dug into loess (windblown sediment in this area of China) that collapsed due to the shaking. This earthquake is considered the deadliest earthquake in history.
- 1755 Lisbon – On November 1, 1755, an earthquake with an estimated magnitude of 8-9 struck Lisbon, Portugal, killing between 10,000 and 17,400 people. The earthquake was followed by a tsunami.
- 1960 Valdivia, Chile – The 1960 Valdivia earthquake was the most powerful earthquake ever measured, with a moment magnitude between 9.4 and 9.6. The earthquake, occurring on May 22, is estimated to have lasted 10 minutes. It triggered a tsunami that destroyed houses in Japan and Hawaii, and caused a volcanic eruption of vents along the Cordón Caulle volcano.
- 1976 Tangshan earthquake – Just before 4 am on July 28, 1976, a magnitude 7.8 earthquake struck Tangshan, Hebei, China. This earthquake killed more than 240,000 people. The high death toll is attributed to the earthquake occurring early in the morning and to building techniques that were not appropriate for earthquake-prone regions.
- 2004 Indonesia – On December 26, 2004, a moment magnitude 9.0-9.3 earthquake occurred off the coast of Sumatra, Indonesia. The earthquake was created by slippage of the Sunda Megathrust, where the Australia plate is subducted below the Sunda plate in the Indian Ocean (“Long-Term Perspectives on Giant Earthquakes and Tsunamis at Subduction Zones,” 2007). The earthquake resulted in a massive tsunami that is estimated to have killed over 200,000 people along the coastlines of the Indian Ocean, creating waves as tall as 24 meters when they reached the shore.
- 2010 Haiti – The magnitude 7 2010 Haiti earthquake occurred on January 12, 2010. It had many significant aftershocks of magnitude 4.5 or higher. It killed more than 92,000 people. The death toll was increased by destroyed and damaged infrastructure, which contributed to a cholera outbreak, among other issues.
- 2011 Tōhoku, Japan – On March 11, 2011, Japan experienced a magnitude 9.0 earthquake. Because most of the buildings in Japan were designed to tolerate earthquakes, the earthquake caused far less damage than the tsunami it created. The tsunami caused tens of billions of dollars in damage and more than 15,000 deaths. The tsunami resulted in the meltdown and destruction of the Fukushima nuclear power plant.
9.9.4 Induced Seismicity Injection of waste fluids into the ground, commonly a byproduct of an extraction process for natural gas known as fracking, can increase the outward pressure that liquid in the pores of a rock exerts, known as pore pressure. The increase in pore pressure decreases the frictional forces that keep rocks from sliding past each other, essentially lubricating fault planes. This effect is causing earthquakes to occur near injection sites, in a human-induced activity known as induced seismicity.
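The friction argument above can be made slightly more concrete with a simple Coulomb-style slip criterion, in which a fault slips once the shear stress exceeds a friction coefficient times the effective normal stress (normal stress minus pore pressure). The stresses and the friction coefficient of 0.6 below are arbitrary illustration values, not data from any study cited here.

# Minimal sketch (illustrative numbers): how raising pore pressure can push a
# fault toward slipping. Coulomb-style criterion: slip occurs when
#   shear_stress > friction * (normal_stress - pore_pressure)
def fault_slips(shear_stress, normal_stress, pore_pressure, friction=0.6):
    # Return True if the fault fails under these stresses (all in MPa).
    effective_normal_stress = normal_stress - pore_pressure
    frictional_resistance = friction * effective_normal_stress
    return shear_stress > frictional_resistance

shear = 30.0    # MPa, tectonic shear stress on the fault (assumed)
normal = 60.0   # MPa, stress clamping the fault shut (assumed)

print(fault_slips(shear, normal, pore_pressure=5.0))   # False: fault holds
print(fault_slips(shear, normal, pore_pressure=15.0))  # True: higher pore pressure
                                                       # unclamps the fault and it slips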
The significant increase in drilling activity in the central United States has created a need to dispose of large amounts of waste drilling fluid, resulting in a measurable increase in the cumulative number of earthquakes experienced in the region. Stress can come in the form of tension, shear, and compression, which generate normal, strike-slip, and reverse faults, respectively. Seismic energy is released when faults slip, and that energy can be measured and used to map the locations of earthquakes, the distribution of shaking, and the internal structures of our planet. When rock deformation is ductile instead of brittle, rocks can fold instead of faulting.
In classical physics and special relativity, an inertial frame of reference is a frame of reference that is not undergoing acceleration. In an inertial frame of reference, a physical object with zero net force acting on it moves with a constant velocity (which might be zero)—or, equivalently, it is a frame of reference in which Newton's first law of motion holds. An inertial frame of reference can be defined in analytical terms as a frame of reference that describes time and space homogeneously, isotropically, and in a time-independent manner. Conceptually, the physics of a system in an inertial frame have no causes external to the system. An inertial frame of reference may also be called an inertial reference frame, inertial frame, Galilean reference frame, or inertial space. All inertial frames are in a state of constant, rectilinear motion with respect to one another; an accelerometer moving with any of them would detect zero acceleration. Measurements in one inertial frame can be converted to measurements in another by a simple transformation (the Galilean transformation in Newtonian physics and the Lorentz transformation in special relativity). In general relativity, in any region small enough for the curvature of spacetime and tidal forces to be negligible, one can find a set of inertial frames that approximately describe that region. In a non-inertial reference frame in classical physics and special relativity, the physics of a system vary depending on the acceleration of that frame with respect to an inertial frame, and the usual physical forces must be supplemented by fictitious forces. In contrast, systems in general relativity don't have external causes, because of the principle of geodesic motion. In classical physics, for example, a ball dropped towards the ground does not go exactly straight down because the Earth is rotating, which means the frame of reference of an observer on Earth is not inertial. The physics must account for the Coriolis effect—in this case thought of as a force—to predict the horizontal motion. Another example of such a fictitious force associated with rotating reference frames is the centrifugal effect, or centrifugal force. The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. If the coordinates are chosen badly, the laws of motion may be more complex than necessary. For example, suppose a free body that has no external forces acting on it is at rest at some instant. In many coordinate systems, it would begin to move at the next instant, even though there are no forces on it. However, a frame of reference can always be chosen in which it remains stationary. Similarly, if space is not described uniformly or time independently, a coordinate system could describe the simple flight of a free body in space as a complicated zig-zag in its coordinate system. Indeed, an intuitive summary of inertial frames can be given: in an inertial reference frame, the laws of mechanics take their simplest form. In such a frame, Newton's second law takes the form F = ma, with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces.
In contrast, Newton's second law in a rotating frame of reference, rotating at angular rate Ω about an axis, takes the form F′ = ma_B, which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics): F′ = F − 2mΩ × v_B − mΩ × (Ω × x_B) − m(dΩ/dt) × x_B, where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, the symbol × denotes the vector cross product, vector x_B locates the body and vector v_B is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer). The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket). As we now know, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. The Andromeda Galaxy is on a collision course with the Milky Way at a speed of 117 km/s.
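To make the extra terms in the rotating-frame equation above concrete, the sketch below evaluates the Coriolis and centrifugal terms for a body observed from a uniformly rotating frame; the Euler term vanishes because Ω is held constant. The rotation rate, mass, position, and velocity are arbitrary illustration values, and the code is only an added numerical illustration of the formula quoted above.

# Minimal sketch (illustrative values): the fictitious-force terms seen in a
# frame rotating with constant angular velocity Omega about the z-axis:
#   F_coriolis    = -2 m (Omega x v_B)
#   F_centrifugal = -m Omega x (Omega x x_B)
# (The Euler term -m dOmega/dt x x_B is zero here because Omega is constant.)
import numpy as np

m = 2.0                                  # kg, mass of the body (assumed)
omega = np.array([0.0, 0.0, 0.5])        # rad/s, rotation about z (assumed)
x_B = np.array([3.0, 0.0, 0.0])          # m, position in the rotating frame
v_B = np.array([0.0, 1.0, 0.0])          # m/s, velocity in the rotating frame

coriolis = -2.0 * m * np.cross(omega, v_B)
centrifugal = -m * np.cross(omega, np.cross(omega, x_B))

print("Coriolis force   :", coriolis)     # [2.  0.  0.] N
print("Centrifugal force:", centrifugal)  # [1.5 0.  0.] N, directed away from the axis

# Both terms vanish if the frame does not rotate (Omega = 0), which is one way
# of recognizing that the non-rotating frame is the inertial one.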
The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based upon the simplicity of the laws of physics in the frame. In particular, the absence of fictitious forces is their identifying property. In practice, although not a requirement, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces very little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. To illustrate further, consider the question: "Does our Universe rotate?" To answer, we might attempt to explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive, that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If we attribute its apparent rate of rotation entirely to rotation in an inertial frame, a different "flatness" is predicted than if we suppose part of this rotation actually is due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every 60×10^12 years (10^-13 rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation: Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K.— Albert Einstein: The foundation of the general theory of relativity, Section A, §1 This simplicity manifests in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity; see Nagel and also Blagojević.
The laws of Newtonian mechanics do not always hold in their simplest form... If, for instance, an observer is placed on a disc rotating relative to the earth, he/she will sense a 'force' pushing him/her toward the periphery of the disc, which is not caused by any interaction with other bodies. Here, the acceleration is not the consequence of the usual force, but of the so-called inertial force. Newton's laws hold in their simplest form only in a family of reference frames, called inertial frames. This fact represents the essence of the Galilean principle of relativity: The laws of mechanics have the same form in all inertial frames.— Milutin Blagojević: Gravitation and Gauge Symmetries, p. 4 In practical terms, the equivalence of inertial reference frames means that scientists within a box moving uniformly cannot determine their absolute velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, which can be viewed as a limiting case of special relativity in which the speed of light is infinite, inertial frames of reference are related by the Galilean group of symmetries. Newton posited an absolute space considered well approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some scientists (called "relativists" by Mach), even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced. Indeed, the expression inertial frame of reference (German: Inertialsystem) was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" by a more operational definition. As translated by Iro, Lange proposed the following definition: A reference frame in which a mass point thrown from the same point in three different (non co-planar) directions follows rectilinear paths each time it is thrown, is called an inertial frame. A discussion of Lange's proposal can be found in Mach. The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojević:
- The existence of absolute space contradicts the internal logic of classical mechanics since, according to the Galilean principle of relativity, none of the inertial frames can be singled out.
- Absolute space does not explain inertial forces since they are related to acceleration with respect to any one of the inertial frames.
- Absolute space acts on physical objects by inducing their resistance to acceleration but it cannot be acted upon.— Milutin Blagojević: Gravitation and Gauge Symmetries, p. 5
The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary: The original question, "relative to what frame of reference do the laws of motion hold?" is revealed to be wrongly posed.
For the laws of motion essentially determine a class of reference frames, and (in principle) a procedure for constructing them.— Robert DiSalle, Space and Time: Inertial Frames Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of inertial frame to include all physical laws, not simply Newton's first law. Newton viewed the first law as valid in any reference frame that is in uniform motion relative to the fixed stars; that is, neither rotating nor accelerating relative to the stars. Today the notion of "absolute space" is abandoned, and an inertial frame in the field of classical mechanics is defined as: An inertial frame of reference is one in which the motion of a particle not subject to forces is in a straight line at constant speed. Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries. If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then we have to be able to determine when zero net force is applied. The problem was summarized by Einstein: The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.— Albert Einstein: The Meaning of Relativity, p. 58 There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so we have only to be sure that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is that we might miss something, or account inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when we shift reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames, and have complicated rules of transformation in general cases. On the basis of the universality of physical law and the requirement for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces. The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.— Isaac Newton: Principia, Corollary V, p.
88 in the Andrew Motte translation. This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below. Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. Some theories may even postulate the existence of a privileged frame which provides absolute space and absolute time. The Galilean transformation transforms coordinates from one inertial reference frame, S, to another, S′, by simple addition or subtraction of coordinates: r′ = r − r0 − vt, t′ = t − t0, where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation and length contraction, and the relativity of simultaneity, which have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. There is no experiment observers can perform to distinguish whether an acceleration arises because of a gravitational force or because their reference frame is accelerating.— Douglas C. Giancoli, Physics for Scientists and Engineers with Modern Physics, p. 155. This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10^11. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "noninertial" effects by replacing special relativity's "flat" Minkowski space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.
However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the solar system. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 metres. The car in front is travelling at 22 metres per second and the car behind is travelling at 30 metres per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x1(t) is the position in meters of car one after time t in seconds and x2(t) is the position of car two after time t: x1(t) = d + v1·t = 200 + 22t and x2(t) = v2·t = 30t. Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x1(t) = x2(t). Therefore, we set 200 + 22t = 30t and solve for t, that is: 8t = 200, so t = 25 seconds. Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of v2 − v1 = 8 m/s. In order to catch up to the first car, it will take a time of d/(v2 − v1) = 200/8 s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s. It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time.
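The conversions described in the last two paragraphs can be checked with a short sketch: the catch-up time computed in the roadside frame S and again in the frame S′ moving with the first car (a Galilean change of frame), plus the five-minute clock correction. The numbers are those from the text; the code itself is only an added illustration.

# Minimal sketch: the two-car example worked in two frames of reference,
# plus the trivial clock-offset conversion. Values are those from the text.
d = 200.0    # m, initial separation of the cars
v1 = 22.0    # m/s, speed of the car in front
v2 = 30.0    # m/s, speed of the car behind

# Frame S (standing at the roadside): positions are x1(t) = d + v1*t and
# x2(t) = v2*t, and we solve x1(t) = x2(t) for t.
t_catch_up_S = d / (v2 - v1)
print(t_catch_up_S, "s")        # 25.0 s

# Frame S' (riding in the first car, a Galilean change of frame): car one is
# at rest and car two approaches at the relative speed v2 - v1, starting d behind.
t_catch_up_S_prime = d / (v2 - v1)
print(t_catch_up_S_prime, "s")  # 25.0 s -- the same answer, as it must be

# Converting a time measurement between "coordinate systems": a watch running
# five minutes fast is corrected by subtracting the known offset.
watch_reading_min = 15 * 60 + 5       # watch says 15:05, in minutes past midnight
offset_min = 5                        # known offset of this watch
print(divmod(watch_reading_min - offset_min, 60))  # (15, 0) -> 15:00, the true time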
The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving towards the right. However, for the person facing west, the car was moving toward the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y-axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth's rotation and gravity). Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a′ = a − A in the negative y-direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a′ = a + A in the negative y-direction—a larger value than Alfred's measurement. Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another frame. The speed of light is considered to be the only true constant between moving frames of reference. It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate. You synchronize them so that they both display exactly the same time. The two clocks are now separated and one clock is on a fast moving train, traveling at constant velocity towards the other. 
Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, the flow of time in that frame does not necessarily carry over to another frame. The speed of light is considered to be the only true constant between moving frames of reference.

It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate. You synchronize them so that they both display exactly the same time. The two clocks are now separated and one clock is on a fast-moving train, traveling at constant velocity towards the other. According to Newton, these two clocks will still tick at the same rate and will both show the same time. Newton says that the rate of time as measured in one frame of reference should be the same as the rate of time in another. That is, there exists a "universal" time, and all other times in all other frames of reference will run at the same rate as this universal time, irrespective of their position and velocity. This concept of time and simultaneity was later superseded by Einstein in his special theory of relativity (1905), where he developed transformations between inertial frames of reference based upon the universal nature of physical laws and their economy of expression (the Lorentz transformations).

The definition of inertial reference frame can also be extended beyond three-dimensional Euclidean space. Newton assumed a Euclidean space, but general relativity uses a more general geometry. As an example of why this is important, consider the geometry of an ellipsoid. In this geometry, a "free" particle is defined as one at rest or traveling at constant speed on a geodesic path. Two free particles may begin at the same point on the surface, traveling with the same constant speed in different directions. After a length of time, the two particles collide at the opposite side of the ellipsoid. Both "free" particles traveled with a constant speed, satisfying the definition that no forces were acting. No acceleration occurred, and so Newton's first law held true. This means that the particles were in inertial frames of reference. Since no forces were acting, it was the geometry of the situation which caused the two particles to meet each other again. In a similar way, it is now common to describe that we exist in a four-dimensional geometry known as spacetime. In this picture, the curvature of this 4D space is responsible for the way in which two bodies with mass are drawn together even if no forces are acting. This curvature of spacetime replaces the force known as gravity in Newtonian mechanics and special relativity.

Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′. The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. From the geometry of the situation, we get

r = R + r′

Taking the first and second derivatives of this with respect to time, we obtain

v = V + v′ and a = A + a′

where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system, and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as

F = m a = m (A + a′)

When there is accelerated motion due to a force being exerted, there is a manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia.
However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).

A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). If the frame rotates with angular velocity ω, this arrangement leads to the equation (see Fictitious force for a derivation):

a = a′ + 2 ω × v′ + ω × (ω × r′) + (dω/dt) × r′ + A

or, to solve for the acceleration in the accelerated frame,

a′ = a − 2 ω × v′ − ω × (ω × r′) − (dω/dt) × r′ − A

Multiplying through by the mass m gives

m a′ = F − m [2 ω × v′ + ω × (ω × r′) + (dω/dt) × r′ + A]

The effect of this being in the noninertial frame is to require the observer to introduce a fictitious force into his calculations…— Sidney Borowitz and Lawrence A. Bornstein, A Contemporary View of Elementary Physics, p. 138

The presence of fictitious forces indicates that the physical laws are not the simplest laws available, so, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame:

The equations of motion in a non-inertial system differ from the equations in an inertial system by additional terms called inertial forces. This allows us to detect experimentally the non-inertial nature of a system.— V. I. Arnol'd, Mathematical Methods of Classical Mechanics, Second Edition, p. 129

Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. How, then, are "fictitious" forces to be separated from "real" forces? It is hard to apply the Newtonian definition of an inertial frame without this separation. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle and is subject to a centripetal force (which is made up of the Coriolis force and the centrifugal force). How can we decide that the rotating frame is a non-inertial frame? There are two approaches to this resolution. One approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force): we will find there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame.
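The claim that the centrifugal and Coriolis terms together supply exactly the centripetal force for an object that merely appears to rotate can be checked numerically. Below is a minimal sketch with assumed values: ω about the z-axis, and an object fixed in the inertial frame at position r′ in the rotating frame.

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # rad/s, frame rotation about z (assumed)
r_p = np.array([3.0, 0.0, 0.0])     # m, position in the rotating frame
v_p = -np.cross(omega, r_p)         # apparent velocity of a point fixed
                                    # in the inertial frame
m = 1.0                             # kg

f_centrifugal = -m * np.cross(omega, np.cross(omega, r_p))
f_coriolis = -2.0 * m * np.cross(omega, v_p)

# Together they equal the centripetal force -m*omega^2*r' required by the
# apparent circular motion, so no real force (e.g. string tension) is needed.
f_centripetal = -m * np.dot(omega, omega) * r_p
assert np.allclose(f_centrifugal + f_coriolis, f_centripetal)
print(f_centrifugal + f_coriolis)   # -> [-12.  0.  0.] N, pointing inward
```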
Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish.

If bodies, any how moved among themselves, are urged in the direction of parallel lines by equal accelerative forces, they will continue to move among themselves, after the same manner as if they had been urged by no such forces.— Isaac Newton, Principia, Corollary VI, p. 89, in the Andrew Motte translation

This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, an inertial frame is a relative concept. With this in mind, we can define inertial frames collectively as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a baseline, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration and the elevator itself accelerates at the same rate.

Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external signal source.

A gyrocompass, employed for navigation of seagoing vessels, finds true (geographic) north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the wheel is suspended causes it to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour.
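Dead reckoning by double integration, as performed by an inertial navigation system, can be sketched in a few lines. This is a simplified illustration with assumed accelerometer readings; a real system would also fuse gyroscope data to keep track of the frame's orientation.

```python
dt = 0.1                          # sample interval in seconds (assumed)
accel = [2.0] * 50 + [0.0] * 50   # m/s^2: 5 s of thrust, then 5 s coasting

v = x = 0.0
for a in accel:
    v += a * dt   # first integration: acceleration -> velocity
    x += v * dt   # second integration: velocity -> position

print(f"v = {v:.1f} m/s, x = {x:.1f} m")   # -> v = 10.0 m/s, x = 75.5 m
```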
About This Chapter

Fractions & Percents - Chapter Summary

How much do you know about fractions and percents? Find out by reviewing the engaging lessons in this chapter. Refresh your knowledge of how to perform mathematical operations on fractions, percents and mixed numbers, how to work with percent notation, and more. Completing this chapter can ensure you're able to:

- Raise, reduce, compare and order fractions
- Find least common denominators
- Add and subtract like and unlike fractions and mixed numbers
- Multiply and divide fractions and mixed numbers
- Convert from percent notation to decimal notation and fraction notation
- Change between decimals and percents, and decimals and fractions
- Solve word problems that use percents

Each lesson features a self-assessment quiz you can use to gauge your comprehension of fractions and percents. If you answer any quiz questions incorrectly, click the link next to the answer to return to the related portion of the video lesson and get a quick review. A chapter exam is available to reinforce lesson concepts. If you have questions about any topics covered in this chapter, feel free to submit them to our experts.

1. How to Raise and Reduce Fractions

When working with fractions, you sometimes need to change your fraction to make it easier to work with. Watch this video lesson to learn how you can raise and reduce fractions to make your problem solving easier.

2. How to Find Least Common Denominators

Finding a least common denominator is an important tool when working with fractions. To find least common denominators, we will use multiples of our numbers.

3. Comparing and Ordering Fractions

Comparing and ordering fractions is a way to examine fractions that have different denominators. In order to compare these fractions, you must find a common denominator and make equivalent fractions.

4. Changing Between Improper Fraction and Mixed Number Form

Improper fractions and mixed numbers are two different forms of fractions, each with its own uses in a problem. In this lesson, you will learn how to convert between these two forms.

5. How to Add and Subtract Like Fractions and Mixed Numbers

When adding and subtracting fractions, you must be sure to have a common denominator. Once you have a common denominator, you just add or subtract your numerators and then your whole numbers. Learn how in this lesson.

6. How to Add and Subtract Unlike Fractions and Mixed Numbers

Simple fraction arithmetic gets a little more complicated when our denominators don't match. In this lesson, we'll learn how to add and subtract unlike fractions. Then, we'll do the same with mixed numbers.

7. Multiplying Fractions and Mixed Numbers

Multiplying fractions is much more straightforward than adding, subtracting or dividing fractions. In this lesson, learn how it works. We'll also learn how to multiply mixed numbers.

8. Dividing Fractions and Mixed Numbers

Dividing fractions and mixed numbers? It sounds daunting, but it's not as tricky as it sounds. In this lesson, we'll learn how to divide fractions and mixed numbers.

9. Converting from Percent Notation to Decimal Notation

Knowing how to convert a percent to a decimal can be a useful skill in many different situations. This lesson will show you how to perform that conversion and places where this skill will be helpful.

10. Converting Percent Notation to Fraction Notation

The process of converting percents to fractions is quite easy if you know where to start.
This lesson will show you how to perform the conversion and give some real-world examples.

11. Changing Between Decimals and Percents

Watch as we take a rollercoaster ride, changing from a decimal to a percent and back again. Master this technique so that on tests and quizzes you can convert between the two as fast as a rollercoaster going downhill.

12. Changing Between Decimals and Fractions

In this video lesson, you will learn about the usefulness of knowing how to change quickly between decimals and fractions. You will also learn how changing from a decimal to a fraction differs from changing from a fraction to a decimal.

13. How to Solve Word Problems That Use Percents

100 percent of percent problems can be solved if we follow the correct steps. In this lesson, we'll practice solving a variety of different percent problems.
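Several of the conversions covered above can be sketched with Python's standard fractions module; this is an illustrative aside, not part of the lessons themselves.

```python
from fractions import Fraction

print(Fraction(6, 8))                   # reducing: -> 3/4
print(Fraction(1, 6) + Fraction(1, 4))  # unlike fractions, LCD 12: -> 5/12

pct = 35
print(pct / 100)                        # percent -> decimal: 0.35
print(Fraction(pct, 100))               # percent -> fraction: 7/20
print(Fraction("0.125"))                # decimal -> fraction: 1/8
```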
NASA has been trying for several years to drive home the importance of acknowledging global warming. The visualization it recently released shows just how impactful the gases escaping our cities' industrial plants really are. The simulation, created on a supercomputer in Maryland, represents greenhouse gases, specifically CO2 emissions, from May 2005 to June 2007. The impact of greenhouse gases is striking even without computer-enhanced illustration, but this simulation offered roughly 64 times the resolution of any previous model, which makes it significantly more valuable to science and lends far more credibility to research built on it.

The simulation revealed two things that tend to go under-discussed within the science community when it comes to global warming and greenhouse gas emissions. First, the greenhouse gases emitted by humans come almost exclusively from the Northern Hemisphere; there is a serious discrepancy between the northern and southern halves of the Earth in how much is discharged into the atmosphere. As the simulation reveals, once greenhouse gases are emitted, weather patterns carry them around the globe. The United States, Europe, Asia, and eventually the Arctic regions all see plumes of greenhouse gases cluster over them and then roll onward with the winds in the upper atmosphere.

The second finding from this model is that massive amounts of CO2 are absorbed by nature: forests and other green vegetation take care of part of the problem, temporarily, while they are in bloom. The model shows that through late spring and early summer, some of the gases fade away in places that experience high quantities of growth during that period.

Burning fossil fuels is the largest factor contributing to global warming and the emission of these greenhouse gases. It has been noted that roughly 36 billion metric tons of extra carbon dioxide are sent into the atmosphere each year as a result of human burning of fossil fuels. NASA created this simulation, and is increasing resources for research into global warming and CO2 emissions, to better understand what is happening in the upper atmosphere, what the numbers are, and how we can improve.
The most important construction on a circle or arc is to locate its center. With the center in place it is a simple matter to draw a radius; a radius, in turn, makes it possible to draw a tangent to the circle. Both of the constructions below establish the circle center by using the perpendicular bisectors of chords.

13) Locate the center point of a given circle. Draw any chord on the circle and then construct its perpendicular bisector. Extend this line to form a diameter of the circle. On this diameter construct the perpendicular bisector to draw a second diameter. Since both diameters, by definition, pass through the circle center, their intersection marks that center. The principle at work here is that the perpendicular bisector of a chord always lies along a diameter, and a diameter by definition passes through the circle center.

14) Locate the center point of a given arc. Unlike fully closed circles, arcs may not possess a diameter, and thus their centers cannot be determined by bisecting a diameter. An arc, however, always has chords on hand to determine its center. Draw any two chords on the arc and then construct their perpendicular bisectors. Both bisectors pass through the arc center, so their intersection designates that point.
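The same construction can be carried out numerically: each perpendicular-bisector condition says the center is equidistant from a chord's two endpoints, and two chords give two linear equations. A short sketch follows; the three sample points are assumptions, chosen to lie on a circle of center (2, 1) and radius 1.

```python
def center_from_points(a, b, c):
    """Circle or arc center from three points, via two chord bisectors."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    # Equidistance from chords AB and BC gives two linear equations in the
    # center P: 2(b-a).P = |b|^2-|a|^2 and 2(c-b).P = |c|^2-|b|^2.
    d = 2 * ((bx - ax) * (cy - by) - (by - ay) * (cx - bx))
    if d == 0:
        raise ValueError("points are collinear; no circle through them")
    e1 = bx**2 + by**2 - ax**2 - ay**2
    e2 = cx**2 + cy**2 - bx**2 - by**2
    px = (e1 * (cy - by) - e2 * (by - ay)) / d
    py = (e2 * (bx - ax) - e1 * (cx - bx)) / d
    return px, py

print(center_from_points((3, 1), (2, 2), (1, 1)))   # -> (2.0, 1.0)
```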
All throughout a calculus course we will be finding roots of functions. A root of a function is nothing more than a number for which the function is zero. Roots that cannot be found by factoring can be found with the quadratic formula; one purpose of the worked example was to remind you of it. Also note that, for the sake of practice, we broke up the compact form for the two roots of the quadratic. You will need to be able to do this, so make sure that you can.

One of the more important ideas about functions is that of the domain and range of a function. In simplest terms, the domain of a function is the set of all values that can be plugged into the function and have the function exist and produce a real number. The range of a function is simply the set of all possible values that the function can take. Some functions can take on any value, in which case the range is all real numbers; for a quadratic, if we know the vertex we can then get the range.

When finding the domain, we have to worry about division by zero and square roots of negative numbers. Recall that the points where a denominator is zero, or where the quantity under a root is zero, are the only places where an expression may change sign. This means that all we need to do is break a number line up into the regions that these points define and test the sign of the expression at a single point in each of the regions; the domain then excludes any region where a quantity under a square root is negative, as well as any point where a denominator is zero. Sometimes such conditions can be solved by inspection rather than by the full sign-testing method.

Composition of functions works by plugging one function into another, and order is important here: interchanging the order will more often than not result in a different answer, so f(g(x)) and g(f(x)) are generally different. We will take a look at that relationship in the next section.
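The two recurring ideas here, roots via the quadratic formula and the order-dependence of composition, can be illustrated with a short sketch. The specific functions below are stand-in assumptions, meant only to mirror the style of the worked examples.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    s = math.sqrt(disc)
    return ((-b + s) / (2 * a), (-b - s) / (2 * a))

print(quadratic_roots(1, -3, 2))       # -> (2.0, 1.0)

# Composition order matters: f(g(x)) is generally not g(f(x)).
f = lambda x: x + 1
g = lambda x: 2 * x
print(f(g(3)), g(f(3)))                # -> 7 8
```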
Grade Seven Math Curriculum

At the beginning of intermediate school and on the brink of their teenage years, Grade 7 students begin to discover their true personalities. This is also the year in which academic competition starts to rise and directions towards future careers begin to be mapped, so strengthening mathematical concepts in this grade becomes imperative. Moving on from the world of basic decimals and fractions, Grade 7 further explores the number system and proportional relationships and introduces the concepts of transformations and congruence.

Table of Contents

Overall and Specific Expectations

The overall expectations are divided into five main categories, each of which contains several subcategories. Every subcategory lists specific expectations for Grade 7 in more detail.

A. Number Sense

Students will demonstrate an understanding of numbers and make connections to the way numbers are used in everyday life. They will

- represent and compare whole numbers up to and including one billion, including in expanded form using powers of ten, and describe various ways they are used in everyday life.
- identify and represent perfect squares, and determine their square roots, in various contexts.
- read, represent, compare, and order rational numbers, including positive and negative fractions and decimal numbers to thousandths, in various contexts.

2. Fractions, Decimals, and Percents
- use equivalent fractions to simplify fractions, when appropriate, in various contexts.
- generate fractions and decimal numbers between any two quantities.
- round decimal numbers to the nearest tenth, hundredth, or whole number, as applicable, in various contexts.
- convert between fractions, decimal numbers, and percents, in various contexts.

B. Operations

Students will use knowledge of numbers and operations to solve mathematical problems encountered in everyday life. They will

1. Properties and Relationships
- use the properties and order of operations, and the relationships between operations, to solve problems involving whole numbers, decimal numbers, fractions, ratios, rates, and percents, including those requiring multiple steps or multiple operations.

2. Math Facts
- understand and recall commonly used percents, fractions, and decimal equivalents.

3. Mental Math
- use mental math strategies to increase and decrease a whole number by 1%, 5%, 10%, 25%, 50%, and 100%, and explain the strategies used.

4. Addition and Subtraction
- use objects, diagrams, and equations to represent, describe, and solve situations involving addition and subtraction of integers.
- add and subtract fractions, including by creating equivalent fractions, in various contexts.

5. Multiplication and Division
- determine the greatest common factor for a variety of whole numbers up to 144 and the lowest common multiple for two and three whole numbers.
- evaluate and express repeated multiplication of whole numbers using exponential notation, in various contexts.
- multiply and divide fractions by fractions, using tools in various contexts.
- multiply and divide decimal numbers by decimal numbers, in various contexts.
- identify proportional and non-proportional situations and apply proportional reasoning to solve problems.

A. Patterns and Relationships

Students will identify, describe, extend, create, and make predictions about a variety of patterns, including those found in real-life contexts.
They will

- identify and compare a variety of repeating, growing, and shrinking patterns, including patterns found in real-life contexts, and compare linear growing patterns on the basis of their constant rates and initial values.
- create and translate repeating, growing, and shrinking patterns involving whole numbers and decimal numbers using various representations, including algebraic expressions and equations for linear growing patterns.
- determine pattern rules and use them to extend patterns, make and justify predictions, and identify missing elements in repeating, growing, and shrinking patterns involving whole numbers and decimal numbers, and use algebraic representations of the pattern rules to solve for unknown values in linear growing patterns.
- create and describe patterns to illustrate relationships among integers.

B. Equations and Inequalities

Students will demonstrate an understanding of variables, expressions, equalities, and inequalities, and apply this understanding in various contexts. They will

1. Variables and Expressions
- add and subtract monomials with a degree of 1 that involve whole numbers, using tools.
- evaluate algebraic expressions that involve whole numbers and decimal numbers.

2. Equalities and Inequalities
- solve equations that involve multiple terms, whole numbers, and decimal numbers in various contexts, and verify solutions.
- solve inequalities that involve multiple terms and whole numbers, and verify and graph the solutions.

C. Coding

Students will solve problems and create computational representations of mathematical situations using coding concepts and skills. They will

1. Coding Skills
- read and alter existing code, including code that involves events influenced by a defined count and/or sub-program and other control structures, and describe how changes to the code affect the outcomes and the efficiency of the code.

D. Mathematical Modelling

Students will apply the process of mathematical modelling to represent, analyse, make predictions, and provide insight into real-life situations.

A. Data Literacy

Students will manage, analyse, and use data to make convincing arguments and informed decisions, in various contexts drawn from real life. They will

1. Data Collection and Organization
- explain why percentages are used to represent the distribution of a variable for a population or sample in large sets of data, and provide examples.
- collect qualitative data and discrete and continuous quantitative data to answer questions of interest, and organize the sets of data as appropriate, including using percentages.

2. Data Visualization
- select from among a variety of graphs, including circle graphs, the type of graph best suited to represent various sets of data; display the data in the graphs with proper sources, titles, and labels, and appropriate scales; and justify their choice of graphs.
- create an infographic about a data set, representing the data in appropriate ways, including in tables and circle graphs, and incorporating any other relevant information that helps to tell a story about the data.

3. Data Analysis
- determine the impact of adding or removing data from a data set on a measure of central tendency, and describe how these changes alter the shape and distribution of the data.
- analyse different sets of data presented in various ways, including in circle graphs and in misleading graphs, by asking and answering questions about the data, challenging preconceived notions, and drawing conclusions, then make convincing arguments and informed decisions.

B. Probability

Students will describe the likelihood that events will happen and use that information to make predictions. They will

- describe the difference between independent and dependent events, and explain how their probabilities differ, providing examples.
- determine and compare the theoretical and experimental probabilities of two independent events happening and of two dependent events happening.

A. Geometric and Spatial Reasoning

Students will describe and represent shape, location, and movement by applying geometric properties and spatial relationships to navigate the world around them. They will

1. Geometric Reasoning
- describe and classify cylinders, pyramids, and prisms according to their geometric properties, including plane and rotational symmetry.
- draw top, front, and side views, as well as perspective views, of objects and physical spaces, using appropriate scales.

2. Location and Movement
- perform dilations and describe the similarity between the image and the original shape.
- describe and perform translations, reflections, and rotations on a Cartesian plane, and predict the results of these transformations.

B. Measurement

Students will compare, estimate, and determine measurements in various contexts. They will

1. The Metric System
- describe the differences and similarities between volume and capacity, and apply the relationship between millilitres (mL) and cubic centimetres (cm3) to solve problems.
- solve problems involving perimeter, area, and volume that require converting from one metric unit of measurement to another.

2. Circles
- use the relationships between the radius, diameter, and circumference of a circle to explain the formula for finding the circumference and to solve related problems.
- construct circles when given the radius, diameter, or circumference.
- show the relationships between the radius, diameter, and area of a circle, and use these relationships to explain the formula for measuring the area of a circle and to solve related problems.

3. Volume and Surface Area
- represent cylinders as nets and determine their surface area by adding the areas of their parts.
- show that the volume of a prism or cylinder can be determined by multiplying the area of its base by its height, and apply this relationship to find the area of the base, volume, and height of prisms and cylinders when given two of the three measurements.

A. Money and Finances

Students will demonstrate an understanding of the value of Canadian currency. They will

1. Money Concepts
- identify and compare exchange rates, and convert foreign currencies to Canadian dollars and vice versa.

2. Financial Management
- identify and describe various reliable sources of information that can help with planning for and reaching a financial goal.
- create, track, and adjust sample budgets designed to meet longer-term financial goals for various scenarios.
- identify various societal and personal factors that may influence financial decision making, and describe the effects that each might have.

3. Consumer and Civic Awareness
- explain how interest rates can impact savings, investments, and the cost of borrowing to pay for goods and services over time.
- compare interest rates and fees for different accounts and loans offered by various financial institutions, and determine the best option for different scenarios.

List of Skills

More than 340 math skills are considered in the math curriculum for Grade 7, many of which are common to Grade 6. Please use the detailed list of skills in the old LG for Grade 7.

Objective evaluation is believed to be one of the most essential parts of teaching mathematics. In Genius Math, we use different tools and methods to evaluate the mathematical knowledge of students and their progress. Our evaluation process consists of three stages: before teaching sessions, during teaching sessions, and after teaching sessions.

- Initial Assessment Test

Before starting our teaching sessions, we administer an assessment test to obtain insights into the strengths and weaknesses of students and their previous math knowledge. This key information helps us come up with a special plan for every single student.

- Standard Problems

During teaching sessions, we use a combination of different resources providing standard problems, designed by renowned mathematicians all over the world, to improve the problem-solving skills of students. Among those resources are Math Kangaroo Contests, the CEMC (University of Waterloo), the AMC (American Mathematics Competitions), and even the IMO (International Mathematical Olympiad); the latter might be considered for those who want to tackle more challenging problems or prepare for math olympiads. We use these problems to design homework, quizzes, and tests for our students based on their grades, needs, and goals. Such problems can be used to reveal the depth of students' mathematical understanding.

- Final Assessment Test

When teaching sessions are over, students are asked to take another assessment test aiming to show their real progress in mathematics.

Most Common Challenging Topics

The following are among the most common challenges students face in Grade 7:

- Fractions and decimals
- Mixed operations
- Variables, equations, and inequalities
- Data analysis
- Mixed transformations and their impacts on geometric shapes
- Area and volume

What We Can Offer

Students have different goals and expectations according to their background, knowledge, or experience. This data, along with the results of the assessment session, helps us design a unique plan for each student. We offer students different kinds of help at Genius Math:

- To review and practice their class notes and handouts
- To be helped with their homework, quizzes, and tests
- To improve their math skills in general
- To level up (e.g., moving from B- to B+)
- To get A+
- To learn topics beyond the curriculum
- To prepare for math competitions
You've seen it on TV. A police forensic computer scientist stares at a screen full of thousands of ones and zeros and states, "Oh, this guy is good!" This leaves you wondering: why all the ones and zeros? Is this how computers really work?

Why Do Computers Use Binary?

Computers use binary because it is the simplest counting method available; it is how a computer encodes everything from memory to HD video streaming. Binary allows a computer to process millions of inputs very quickly. With binary, there are only two options, on or off. Computers communicate by stringing a series of ons and offs into complex groups which tell the computer what it is supposed to do. To achieve this, computer systems use a series of switches and electrical signals. Think of a standard light switch: there are two options, on and off. Computers rely on this same concept and use transistors as electrical switches. A transistor is switched on or off by an electrical signal, and the computer reads the on/off signals to create the desired output or complete the programmed function.

What Is Binary?

Binary is a system of counting based on a base-2 numeral system that allows only two digits, 0 and 1, where 0 represents off and 1 represents on. Counting in binary works the same way as counting in base-10, or the decimal system, which is what we use. For example, with base-10 there are 10 possible options for a placeholder, 0 through 9. If you want to express the number 9, you simply write 9, because a single placeholder can hold any digit up to 9. Each placeholder increases in multiples of 10. If you want to express the number 10, you must add a digit, 1, in front of the first digit, since each digit can only be 0 through 9. This leaves you with 10.

To understand binary using your understanding of the decimal system, think of the number 10 in decimal by saying yes or no to each placeholder. If there is a 1 in the tens placeholder, that means that yes, there is one 10. If there is a 0 in the ones placeholder, that means there are no ones, leaving you with 10. One ten plus zero ones equals 10.

Binary works the same way, but with only 0 or 1 as the option for each placeholder, and each placeholder is a power of 2. For example, to write the number 1 in binary, you simply write 1, since the placeholder can be either 0 or 1. To write the number 2, you have to add another placeholder digit, leaving you with 1 0 in binary. (Note: when reading binary, read everything as ones and zeros, meaning 1 0 would be read "one zero".) Once again, think of it in terms of yes and no. Yes, there is one 2 (the first digit in binary 1 0), and no, there are no ones (the second digit in binary 1 0). This leaves you with one two and no ones, which equals 2.

Since each digit can only be a zero or a one, the place values increase by powers of 2; for an eight-digit binary number they are 128, 64, 32, 16, 8, 4, 2, and 1. The binary number 10010110 would be yes to 128, no to 64, no to 32, yes to 16, no to 8, yes to 4, yes to 2, and no to 1. Add these together and you get 128 + 16 + 4 + 2 = 150. Each digit, either 0 or 1, is called a bit, and a series of eight bits strung together is a byte.
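The place-value bookkeeping above is exactly what a short loop does; Python's built-ins int(s, 2) and format(n, "b") perform the same conversions natively.

```python
def binary_to_decimal(bits: str) -> int:
    total = 0
    for bit in bits:              # each step doubles the running place value
        total = total * 2 + int(bit)
    return total

print(binary_to_decimal("10010110"))   # -> 150, as worked out above
print(int("10010110", 2))              # -> 150 (built-in equivalent)
print(format(150, "08b"))              # -> '10010110'
```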
Why Do Computers Use Binary Instead Of Base-10?

Computers use binary instead of base-10 for simplicity. A computer does not understand human logic, which is needed to grasp the concept of a base-10 or decimal system. Computers use transistors, small electrical switches, to communicate, and a switch is easy for a computer to understand: the electrical current can either trigger the switch to the on position or leave the switch in the off position. With binary there are only these two options, on and off, making it easy for the computer to work with.

A base-10 system is much more complicated, with each placeholder having 10 possible values. For a computer to understand such a system, the electrical current going into the transistor would have to be slightly different for each of the 10 possible values. These minute changes in electrical current would have to be extremely precise and would be very difficult to make work reliably in all environments. There is another system of counting which can be used for computing, the ternary system, which allows three possible values for each placeholder. This method never gained traction in the computing world, however, due to the increased difficulty of controlling voltage into the transistors and the fact that all binary systems would have to be completely reworked to use it. Binary remains the basis of computing due to its simplicity and speed.

Do Smartphones Use Binary?

Yes, smartphones use binary to communicate. With older cell phone technology, the audio signal was sent via radio wave through the receiving tower to the specified device; the signal was transmitted as it was received, as a radio-frequency signal. With smartphones, rather than sending the original radio-frequency signal to the receiving tower, the device converts the signal into a system of 1s and 0s, or binary, to increase the amount of data that can be sent and the speed at which it is sent. By converting to binary, a smartphone can send not only audio but files, pictures, and videos as well.

How Is Text Represented In Binary?

Text is represented in binary by mapping a specific character to a specific binary number using a specific encoding method. That definition sounds vague and complicated, but stay with me and we will break it down. The first thing to understand is that, while you are most likely reading this article in the English language, a computer that only communicates through binary code has no idea what the English language is. The computer only knows that if you press the "a" button on your keyboard, it has been programmed to equate that specific button with a specific binary number. It stores this binary number and, when called upon to regurgitate the character represented by the button pressed, knows that that specific binary number equals the letter "a". This mapping of specific binary numbers to specific characters holds true for all characters available in the given language, including symbols such as !, +, or even a blank space entered by pressing the space bar.

But wait! If the computer does not understand the English language, how does it know to print the English character "a"? This is where the encoding method comes into play. For the computer to map a specific binary number to a specific character, it has to be told which character set to use for the mapping. For example, the computer can be told to use the ASCII character set as its encoding method. This lets the computer know that it will map a specific character, in this case "a", to the corresponding binary number value as defined by the ASCII map.
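In Python the mapping just described is visible directly: ord gives the number an encoding assigns to a character, chr goes the other way, and encoding a string yields the raw bytes that are stored or transmitted.

```python
ch = "a"
code = ord(ch)                     # character -> number (97 in ASCII)
print(code, format(code, "08b"))   # -> 97 01100001
print(chr(code))                   # number -> character: 'a'

# Encoding a whole string gives the bytes a computer actually stores:
print("abc".encode("ascii"))       # -> b'abc' (the bytes 97, 98, 99)
```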
Different regions and languages have their own unique character maps for this binary conversion, such as KS X 1001:1992 for South Korea or KPS 9566 for North Korea.

Is There A Universal Binary Code For Text?

Yes, there is a universal binary code for text, and it is called Unicode. Before the massive international reach of the internet, each region and language had its own set of encoding rules for text. This lack of cohesion was not an issue initially, because most computing was done within the scope of one's own language. Computers in the United States had no problem speaking and sharing files with other computers in the United States. However, the internet opened dialogue between countries and continents, leading to the need for a global text standard.

Remember that a computer only communicates in 1s and 0s, so if a file from Korea was encoded using the Korean standard and then sent to the United States to be read, the computer in the United States would take the 1s and 0s sent by the Korean computer and try to map them using the English-language character map. Obviously, this would result in pure gibberish, since the binary values would be matched to entirely the wrong characters. One way to combat this confusion would be to switch the computer to the appropriate character map before decoding the file. This process can be highly complicated, however, especially when dealing with communications across multiple languages. Unicode is an international standard for text representation through binary which recognizes 143,859 characters across 154 modern and historic scripts, along with symbols, emojis, and more. Unicode was developed to bridge the gap between character encoding maps and create a universal character map that can be used by all countries and regions to communicate effectively with one another.

How Are Images Represented In Binary?

Images are represented in binary by assigning a binary value to each pixel. Multitudes of pixels are placed next to one another in a grid pattern to represent an image. Just as with converting text to binary, other information must also be given to the computer so that it knows how many pixels to place together and what color value range will be used in the image. This information is recorded as the image's metadata.

The creation of a black and white image is quite simple, with each pixel assigned a binary value of either 0 for white or 1 for black. To create color, more bits must be added to each pixel's binary number. For example, a two-bit binary color scheme could use 00 for white, 01 for blue, 10 for green, and 11 for red. The more bits you add to the binary number, the more colors you can achieve. Today, the RGB or red-green-blue color scale is most commonly used for color representation. This is a 24-bit number, with the first 8 bits representing how much red is in the color, the second 8 bits representing how much green, and the final 8 bits representing how much blue. By mixing these values, 16,777,216 possible colors can be represented.
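The 24-bit RGB layout can be shown with a little bit-shifting; this sketch packs and unpacks one pixel, and the particular color value is an arbitrary example.

```python
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b    # red in the high 8 bits, blue low

def unpack_rgb(pixel):
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

orange = pack_rgb(255, 165, 0)
print(format(orange, "024b"))   # -> 111111111010010100000000
print(unpack_rgb(orange))       # -> (255, 165, 0)
```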
How Is Audio Represented In Binary?

Sound is created by waves. Each wave has two distinct properties: amplitude and frequency. The amplitude measures how loud or soft the sound is, while the frequency indicates the pitch. For a computer to understand and replay a sound wave, the wave must first be converted into voltage, usually using a recording device such as a microphone. Once the sound waves are converted into voltage changes, binary samples can be taken through the use of an analog-to-digital converter, or ADC. An ADC turns voltage into binary numbers by taking samples of the wave at regular intervals; each sample of the voltage is recorded in binary.

To replay the recorded audio, the computer uses a digital-to-analog converter, or DAC. The DAC turns the saved binary sample data back into the correct voltage for each sample, which passes through an amplifier and triggers vibrations in the listening device, such as speakers or headphones. The changing voltages produce different frequencies and amplitudes, and when all the samples are played one after another, the result is a continuous stream of sound.

The quality of an audio recording depends on the sample rate and the bit depth. The sample rate is how many samples are taken per second and is measured in hertz (Hz). The higher the sample rate, the more precise the audio recording. For example, imagine you are told to listen to and record a person telling a story, but you are only allowed to write down every tenth word. When you play back your record of the story, much of it is lost. As you increase the number of words you are allowed to record, the playback becomes clearer and clearer until all the words are captured, resulting in a seamless story. The same principle holds for the sample rate: the more samples per second the computer records, the higher the quality of the playback. A standard sampling rate for audio files is 44.1 kHz, or 44,100 samples per second.

The number of bits used for each sample of the sound wave is referred to as the bit depth. The higher the bit depth, the higher the quality of the audio. For example, if the bit depth of an audio file is 2 bits, there are only four possible values that can be recorded for each sample, which would obviously not give a very precise sound value. The more bits that are added, the more precise the sampling becomes. Think of audio sampling the way you think of color sampling: if you are shown a photo with numerous shades of color and told to recreate it using only white, red, green, and blue, the result may resemble the original but will not come close to reproducing it. Likewise, if a computer is told it can only use 2 bits to represent each sound sample, it will pick the closest representation it can but will lose a lot in the process. A standard bit depth is 16 bits, which results in 65,536 possible values for each sample.
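Here is a sketch of sampling and quantizing one second of a 440 Hz tone at the standard rate and depth mentioned above; the scaling of the wave into the integer range is one common convention, not the only one.

```python
import math

sample_rate = 44100    # samples per second
bits = 16
levels = 2 ** bits     # 65,536 possible values per sample

samples = []
for n in range(sample_rate):                        # one second of audio
    t = n / sample_rate
    amplitude = math.sin(2 * math.pi * 440 * t)     # wave value in [-1, 1]
    quantized = round((amplitude + 1) / 2 * (levels - 1))
    samples.append(quantized)                       # what an ADC would store

print(len(samples), min(samples), max(samples))
# -> 44100 samples spanning nearly the full 0..65535 range
```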
Gross national income (GNI) is a measure of the income produced by a nation. It is closely related to gross domestic product (GDP) and includes many of the same figures. Economic organizations regularly produce rankings of nations by GNI, with various adjustments to contextualize the numbers they present, and this information is often available to the public. This, among other measures, can provide important information about economic health.

To determine a nation's GNI, economists start by looking at the value of goods and services produced within the country, as they would when assessing GDP. They also examine income from other sources, such as interest and dividends produced through overseas investments. All of this information can be added together to determine how much wealth the nation generated in a given time period, typically a year. Decreases may be indicative of declining economic activity or rising government debt.

One issue with GNI numbers is that looking at them in a standalone context may not provide very much valuable information. For this reason, some charts adjust for purchasing power parity (PPP). They look at the amount paid for common goods and services in different nations and use this to normalize the units, so that accurate comparisons between countries can be made. If a hamburger, for example, costs $3.75 United States Dollars (USD) in one country but $7 USD in another, the figures need to be adjusted for purchasing power parity to reflect that a dollar goes further in one nation than in the other.

Calculations of GNI account for outflows from a nation more effectively than GDP. Measurements that just look at the value of goods and services produced can miss some important factors. If a nation has a lot of foreign investment, for example, that investment might produce income in the country, but much of that money departs to return to the parent companies. GNI measurements account for this, while GDP does not. This explains why rankings of nations by GDP and by GNI often look different.

Archival data can be found in financial publications and other records. When looking at old information, it is important to pay attention to whether it was adjusted at the time, as this might skew the numbers. People comparing changes in GNI need to consider factors like inflation that might change the meaning of the information. Adjustments like changes for PPP can also create problems unless people are aware those modifications to the numbers were made.
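The hamburger comparison amounts to a one-line price-ratio adjustment. This sketch is illustrative; the income figure is an assumption, and real PPP factors are computed from broad baskets of goods, not a single item.

```python
price_home, price_abroad = 3.75, 7.00    # USD price of the same good
ppp_factor = price_home / price_abroad   # how far a dollar abroad goes

income_abroad_nominal = 50_000           # USD, unadjusted (assumed)
income_abroad_ppp = income_abroad_nominal * ppp_factor

print(f"PPP-adjusted: ${income_abroad_ppp:,.0f}")   # -> $26,786
```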
Motion and Force

A. Motion
1. Motion is a change in position
2. Reference points are necessary

B. Speed
1. The rate of change in position
2. Types of speed: a. Instantaneous b. Constant c. Average
3. Speed = Distance / Time
4. Displacement vs. distance: a. Distance – how far something moves b. Displacement – the distance and direction of an object's change in position from its starting point

C. Velocity
1. Speed in a defined direction
2. Velocity can change even if speed is constant, as long as direction changes

D. Acceleration
1. The rate of change of velocity
2. a = (vf – vi) / t, i.e., a = Δv / t
3. Units are m/s²
4. +a = speeding up; –a = slowing down

Example: constant speed & acceleration
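The two formulas in these notes translate directly into code. A minimal sketch with assumed numbers:

```python
def average_speed(distance_m, time_s):
    return distance_m / time_s             # speed = distance / time

def acceleration(v_initial, v_final, time_s):
    return (v_final - v_initial) / time_s  # a = (vf - vi) / t

print(average_speed(100.0, 8.0))      # -> 12.5 m/s
print(acceleration(0.0, 27.0, 9.0))   # -> 3.0 m/s^2 (positive: speeding up)
```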
Example Question #1: How To Find The Equation Of A Circle

The center of a circle is … and its radius is … . Which of the following could be the equation of the circle?

The general equation of a circle is (x − h)² + (y − k)² = r², where the center of the circle is (h, k) and the radius is r. Thus, we plug the values given into the above equation to get the answer.

Example Question #2: How To Find The Equation Of A Circle

Which one of these equations accurately describes a circle with a center of … and a radius of … ?

The standard formula for a circle is (x − h)² + (y − k)² = r², with (h, k) the center of the circle and r the radius. Plug in our given information. This describes what we are looking for. This equation is not one of the answer choices, however, so subtract … from both sides.
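As an illustration of the general formula above, here is a small Python sketch that builds and checks a circle equation. The particular center and radius are placeholders, since the specific values from the original questions are not preserved in the text.

```python
# Builds the equation (x - h)^2 + (y - k)^2 = r^2 for an example center and radius.

def circle_equation(h, k, r):
    return f"(x - {h})^2 + (y - {k})^2 = {r**2}"

def on_circle(x, y, h, k, r):
    """Check whether the point (x, y) satisfies the circle equation."""
    return (x - h) ** 2 + (y - k) ** 2 == r ** 2

print(circle_equation(2, 3, 5))     # (x - 2)^2 + (y - 3)^2 = 25
print(on_circle(5, 7, 2, 3, 5))     # True: 3^2 + 4^2 = 25, so (5, 7) lies on it
```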
Answer Key. The first question asks the student to calculate 5 × 3 using repeated addition. The student wrote 5 + 5 + 5 = 15 and was marked wrong, with the teacher writing in the "correct" solution of 3 + 3 + 3 + 3 + 3 = 15. The second question prompts the student to calculate 4 × 6 using an array.

In order to solve a question with multiple operations (add/subtract, multiply/divide), there is an order to follow, often referred to as BODMAS or BEDMAS. BEDMAS is an acronym that stands for: B – brackets; E – exponents; DM – multiply or divide, left to right; AS – add or subtract, left to right. It helps students remember what order to do the work in when a problem mixes brackets, exponents, multiplication, division, addition and subtraction.

Addition word problem: A guardrail needs to be exactly 19.77 m long. A contractor has 3 pieces measuring 2.21 m, 9.14 m and 3.21 m. Does he have enough to complete the guardrail?
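A minimal Python check of the guardrail problem above; the decision rule, simply comparing the summed lengths with the required length, is the obvious reading of the problem and is stated here as an assumption.

```python
pieces = [2.21, 9.14, 3.21]   # lengths of the contractor's pieces, in metres
needed = 19.77                # required guardrail length, in metres

total = sum(pieces)
print(total)                  # 14.56
print(total >= needed)        # False -> he does not have enough material
```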
A math problem can often look super simple before you sit down to actually do it and find you have no clue how to solve it. Then there are the problems that make you feel like a math whiz when you solve them in two seconds flat, only to find your answer is way off. That's why math problems go viral.

Grade 7 & 8 Math Circles, October 8/9, 2013 – Algebra Introduction: When evaluating mathematical expressions it is important to remember that there is only one right answer. Because of this, there is a strict set of rules that must be followed by all mathematicians, so that we can all agree on and understand why an answer is correct.

Now, the question is whether there is a definite rule which tells what is right. A strict reading of the PEMDAS acronym puts multiplication before division, so that x/3x = x/(3x) = 1/3, and most people follow that reading because they have been taught so. There is also the BEDMAS convention.

Learning goals: agree that the order in which number operations are carried out does make a difference to the result, and that using brackets helps us to understand the order of operations; understand and explain the rules for order of operations, including explaining the acronym BEDMAS; apply the order of operations to solving problems.

Practice problems:
1. You owe $225 on your credit card. You make a $55 payment and then purchase $87 worth of clothes at Dillards. What is the integer that represents the balance owed on the credit card?
2. A teacher has 15 gold prize ribbons, 22 blue ribbons and 53 red ribbons. There are 12 boys and 18 girls in the class. If the teacher divides the ribbons evenly amongst the class, how many ribbons will each student get?
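A minimal Python sketch of the credit-card problem above, treating the balance owed as a negative integer so that payments add and purchases subtract; the sign convention is an assumption made for illustration.

```python
# Balance owed is modeled as a negative integer; payments add, purchases subtract.

balance = -225        # you owe $225
balance += 55         # $55 payment
balance -= 87         # $87 purchase
print(balance)        # -257 -> the integer representing the balance owed
```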
Integer Order of Operations Worksheet. All work must be shown for credit. 1. 6 − 15 ÷ 3   2. −10 …   3. 34 − 7 …

To assist with the BIDMAS work, the following link goes to a page with lots of worded problems. Your challenge will be to solve each problem by creating a single number sentence.
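As a quick illustration of the worksheet's first item, here is a small Python sketch. Python's operator precedence matches the BIDMAS/BEDMAS convention (division before subtraction, then left to right), so it can serve as a reference evaluator.

```python
# Division is applied before subtraction, exactly as BIDMAS/BEDMAS prescribes.
print(6 - 15 / 3)      # 1.0  (15 / 3 is evaluated first, then 6 - 5)

# Brackets force the other reading and give a different answer:
print((6 - 15) / 3)    # -3.0
```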
An ellipse is the set of all points P of the x,y-plane for which the sum of the distances to two given points F1 and F2 is equal to 2a (with 2a greater than the distance between F1 and F2). Ellipses belong to the class of conic sections (see the exhibit “Conic Sections”). The points F1 and F2 are called foci. The midpoint of their connecting line (of length 2e, where e is the linear eccentricity) is called the center of the ellipse. The distance from this center to the two main vertices is a, and to the two minor vertices it is b, with b² + e² = a² (according to the Pythagorean theorem; see also the exhibits “Pythagoras” and “Proof without words: Pythagoras to lay”), i.e., e² = a² − b². The connecting line between a focal point F1 or F2 (focus) and a point of the ellipse is called a leading ray or focal ray. The names focal point and focal ray result from the property that the angle between the two focal rays at a point of the ellipse is bisected by the normal (the straight line perpendicular to the tangent) at that point. Thus, the angle of incidence formed by one focal ray with the tangent is equal to the angle of reflection formed by the tangent with the other focal ray. Consequently, a light beam originating from one focal point, e.g. F1, is reflected at the elliptical tangent in such a way that it hits the other focal point. Thus, for an elliptical mirror, all light rays emanating from one focal point meet at the other focal point. If the eccentricity e = 0, then a = b holds. The ellipse becomes a circle with radius a. A simple way to draw an ellipse exactly is the so-called gardener’s construction. It directly uses the ellipse definition: To create an ellipse-shaped flowerbed, you drive two stakes into the focal points and attach to them the ends of a string with length 2a. Now stretch the string and run a marking tool along it. Since this method requires additional tools besides a compass and a ruler — namely a string — it is not a construction of classical geometry. In MATHEMATICS ADVENTURE LAND, this construction can be understood by means of a simple experiment. And now … the mathematics of it: In the following, the ellipse equation is derived from the “gardener’s construction” described above. For a point P = (x, y) of the ellipse — corresponding to figure 1 above — the two focal distances satisfy r1 + r2 = 2a, i.e., if we set F1 = (−e, 0) and F2 = (e, 0), we get the equation √((x + e)² + y²) + √((x − e)² + y²) = 2a. Squaring this equation gives (x + e)² + y² = 4a² − 4a·√((x − e)² + y²) + (x − e)² + y², i.e., a·√((x − e)² + y²) = a² − ex. Squaring again yields a²·((x − e)² + y²) = a⁴ − 2a²ex + e²x², and by simplification — i.e., suitable “truncation” — we get: (a² − e²)x² + a²y² = a²(a² − e²). Because of b² = a² − e² (see above), the normal form (also “midpoint form”) of the ellipse equation is then x²/a² + y²/b² = 1. The so-called first Kepler’s law (“ellipse theorem”, “planet theorem”) states that the orbit of a satellite is an ellipse. One of its foci lies at the center of gravity of the system. This law follows from Newton’s law of gravitation, provided that the mass of the central body is much greater than that of the satellites and the interactions of the satellites with one another can be neglected. Consequently, according to Kepler’s first law, all planets move in an elliptical orbit around the sun, with the sun at one of the two foci. The same applies to the orbits of recurring (periodic) comets, planetary moons or double stars.
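A small Python sketch can make the gardener’s construction and the normal form tangible: it checks numerically that points satisfying x²/a² + y²/b² = 1 have focal distances summing to 2a. The particular values of a and b are arbitrary illustration choices.

```python
import math

# Semi-axes chosen for illustration; e is the linear eccentricity, e^2 = a^2 - b^2.
a, b = 5.0, 3.0
e = math.sqrt(a**2 - b**2)
f1, f2 = (-e, 0.0), (e, 0.0)   # foci on the x-axis, centered at the origin

def focal_sum(x, y):
    """Sum of distances from (x, y) to the two foci -- the 'string length'."""
    return math.dist((x, y), f1) + math.dist((x, y), f2)

# Sample points on the ellipse via the parametrization (a*cos t, b*sin t).
for t in (0.0, 0.7, 1.9, 3.1):
    x, y = a * math.cos(t), b * math.sin(t)
    print(round(focal_sum(x, y), 10))   # prints 10.0 (= 2a) each time
```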
Entering fractions in Microsoft Excel isn’t exactly intuitive, but it is easy. Learn how to enter and display fractions as either fractions or as decimal values.

Fractions are enough to make you cry if you’re not a math whiz, but Microsoft Excel handles them very well. In fourth-grade math, you learned fraction basics: Fractions are a numerical representation of a part of a whole, such as ½, ¾, and so on. Then, you learned how to represent that same fractional value as a decimal value, and that’s how Excel evaluates fractions internally, as decimal values. In this article, I’ll show you how to enter fractions in a way that Excel can interpret correctly. Then, I’ll show you how to format them in ways that you can interpret.

SEE: Software Installation Policy (TechRepublic)

I’m using Microsoft 365 on a Windows 10 64-bit system, but you can use an earlier version. You don’t need a demonstration file. I’ll be using the fraction 4/5 throughout the article, but feel free to experiment with other fractions.

About math fractions

There are three types of fractions, and Excel interprets them all:

- Proper fractions: The numerator is always less than the denominator; this fraction is always less than the whole, or mathematically speaking 1, and is represented by a decimal value. For instance, 4/5 is .8.
- Improper fractions: The numerator is always larger than the denominator; this fraction is always more than the whole, or 1, and is represented by a whole number and a decimal value. For instance, 9/5 is 1.8.
- Mixed fractions: The fraction includes a whole number. For instance, 1 4/5 is 1.8.

Excel handles fractions and displays them in fraction form. Internally, however, Excel stores the fraction as a decimal value or a whole number and a decimal value. For example, Excel stores 4/5 as .8. Our minds can quickly visualize 4/5, but we don’t usually interpret .8 as 4/5.

How to enter fractions in Excel

Entering a fraction in Excel isn’t intuitive to most of us. If you enter 4/5, Excel will interpret the entry as a date and display it as a date, 4/5 or April 5. If you don’t realize it, that “date” will not evaluate as you expect. In order for Excel to recognize a fraction as a fraction, you must also enter the fraction’s whole number, which in most cases will be 0, followed by a space. Let’s try that now with the fraction 4/5:

- Choose any cell.
- Enter 0 4/5. Remember, there’s a space character between 0 and 4.
- Press Enter to see the results shown in Figure A.

The formula bar displays the fraction as .8, even though the cell displays it as 4/5, which looks a lot like a date again, doesn’t it? In addition, the 0 isn’t to be seen anywhere at all. Excel needs that digit to interpret the entry as a fraction. However, if you were to enter 1 4/5, Excel retains the 1 digit, displays the fraction as 1 4/5 and stores the value 1.8. Fractions don’t look so hot, but numerically, Excel evaluates them correctly.

How to format fractions in Excel

You might think this section will help you display fractions traditionally, by displaying the numerator over the denominator and using a smaller font, but that’s not going to happen in Excel. Instead, Excel’s fraction formats will help you round fractions. There are several to choose from, and we’ll apply each to a properly entered fraction, 4/5. Enter 0 4/5 into any cell and then copy it for a total of nine 4/5 fractions. Then, select the first one and do the following to apply a fraction format:

- Click the Number dialog launcher (on the Home tab).
- Click the Number tab if necessary.
- From the Category list, choose Fraction.
- In the Type list, choose the first option: Up to One Digit (1/4) (Figure B).
- Click OK.

Repeat this process for each fraction, always choosing the next format. Figure C shows the results. The first three fractions accommodate 1, 2 and 3 digits in both the numerator and the denominator—so these first three formats are strictly about spacing. Notice that as the digit number goes up, the resulting fraction moves a bit to the left. The halves format rounds 4/5 up to 1, the nearest half. The next five formats display the fraction with different denominators, rounding as closely as possible. As you can see in B6, 4/5 doesn’t convert exactly to fourths, so it displays the next best thing, 3/4, which is .75. That’s close to .8, but not quite .8. Internally, Excel is evaluating .8 in all these format options. The only two that represent .8 exactly are tenths and hundredths (8/10 and 80/100). As shown, not all fractions display what you entered. Here are a few things to keep in mind when entering and displaying fractions:

- If Excel can’t display the exact fraction, Excel will round the entry to the nearest result.
- Excel reduces fractions to the lowest denominator when entered. For example, if you enter 0 5/10, Excel will display 1/2.

If you don’t want Excel to reduce to the lowest denominator, you can create a custom format as follows:

- Open the Format Cells dialog as you did earlier by using the Number dialog launcher.
- Choose Custom in the Category list.
- Enter ??/10 into the Type control (Figure D). You could enter ?/10, but ??/10 accommodates a two-digit numerator.
- Click OK to see the result in Figure E.

Be sure to match the denominator component of the format to the entered denominator. For instance, if you entered 0 5/100, you’d use the format ???/100. You could enter ?/100, but ???/100 would accommodate a 3-digit numerator. If you want to enter values as fractions but display the underlying decimal value, format the entries using one of the Number formats. In this way, you can quickly enter fractions without converting them to whole and decimal values in your head. It’s up to you whether you need to format fractions differently than Excel’s first display. The way you’re using those fractions will help you make that decision.
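For readers automating workbooks, here is a minimal, illustrative Python sketch using the third-party openpyxl library (not mentioned in the article) to store the decimal value and apply a custom ??/10-style fraction format like the one described above; the file name is a placeholder.

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

# Store the underlying decimal value; the number format only controls the display.
ws["A1"] = 0.8
ws["A1"].number_format = "# ??/10"   # shows 8/10 rather than the reduced 4/5

ws["A2"] = 1.8
ws["A2"].number_format = "# ?/?"     # a basic one-digit fraction format

wb.save("fractions_demo.xlsx")       # placeholder file name
```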